[ [ "Extending Deep Learning Models for Limit Order Books to Quantile\n Regression" ], [ "Abstract We showcase how Quantile Regression (QR) can be applied to forecast financial returns using Limit Order Books (LOBs), the canonical data source of high-frequency financial time-series.", "We develop a deep learning architecture that simultaneously models the return quantiles for both buy and sell positions.", "We test our model over millions of LOB updates across multiple instruments on the London Stock Exchange.", "Our results suggest that the proposed network not only delivers excellent performance but also provides improved prediction robustness by combining quantile estimates." ], [ "Introduction", "Traditional time-series modelling is often dominated by Markov-like models with stochastic driving terms, such as the vector autoregressive model (VAR) [14].", "These models make strong parametric assumptions about the functional form of the predictive model (in particular the AR family) and also require the target time-series to be stationary.", "However, financial prices (and volumes) rarely conform to these assumptions, and even returns, the first-order differences of prices, are rarely stationary [3].", "Deep learning has gained popularity in financial modelling since such models are not constrained by the above assumptions (see [10], [11] for some examples).", "Modern deep learning architectures also allow one to tailor the loss function, as is demonstrated in [5] where the Sharpe ratio is directly maximised as a cost function.", "Deep networks often require a large number of observations to calibrate weights, and this property fits nicely with financial applications that utilise high-frequency microstructure data.", "Nowadays, billions of market data points are generated every day and most of them are recorded in Limit Order Books (LOBs) [7], [1].", "A LOB is a record of all unmatched orders of a given instrument in a market, comprising levels at different prices containing
resting limit orders to sell and buy, also called ask and bid orders.", "A bid (ask) order is an order to buy (sell) an asset at or below (above) a specified price.", "We can consider LOBs the most granular financial data, as a LOB represents the demand and supply of a given instrument at any moment in time.", "In our previous works [12], [13], we demonstrate that deep learning models can deliver improved predictive performance in comparison with standard methods when modelling LOB data.", "One of the important contributions of deep learning is the ability to automate the process of feature extraction.", "In the work of [9] as well as our own, it is demonstrated that such models can extract representative features that are related to the demand and supply in the order book.", "Given billions of market quotes from LOBs, this approach has proved to be more effective than algorithms that rely on hand-crafted features.", "Features derived from a human-centric understanding of a process (such as “moving-average crossover”) do not guarantee the best performance with respect to the target function, as is also seen in [5].", "Further, in complex non-stationary environments such as finance, it is far from trivial to select informative features by hand, even after years of work in the industry.", "In this work, we utilise deep neural networks and Quantile Regression (QR) [4] to model the returns from LOBs.", "QR is particularly useful for financial time-series as it models the conditional quantiles of a response.", "Returns are in general heterogeneous, highly peaked and fat-tailed compared to a normal distribution [3].", "A point estimate alone is not enough to describe the full distribution of returns, and we can obtain considerably more information by estimating multiple quantiles.", "We regard Quantile Regression as providing valuable extra information regarding risk exposure in a non-stationary environment.", "Unlike most of the literature, where the mid-price is used to represent
financial time-series, we define returns by directly using first-level prices from LOBs.", "Modelling the mid-price is appropriate if we are using daily data, but it is improper for intraday strategies as we ignore spreads, the differences between the best ask and bid prices. The upper plot of Figure REF illustrates the effects of spreads as transaction costs on returns.", "The return, $r_{mid,t}$ , from using the mid-price is about three times higher than the actual return, $r_{long,t}$ , that we can obtain (we have to buy from the ask side and sell from the bid side when aggressively entering or exiting positions).", "Further, intraday strategies often involve both long and short positions to increase profitability.", "However, returns from these two positions are not symmetric at a given time stamp.", "We observe this at the bottom of Figure REF .", "The return, $r_{long,t}$ , from a long position is -0.23, but this does not imply a profit of 0.23 from taking a short position ($r_{short,t} = 0.01$ ).", "Indeed, the two return series are statistically different under Kolmogorov-Smirnov and Wilcoxon signed-rank tests.", "This discrepancy comes from changing spreads and indicates that separate models are required to estimate returns for different positions.", "Figure: Top: returns $r$ obtained from using the mid-price or first-level prices from LOBs.", "Bottom: returns obtained from long and short positions at a given time stamp.", "Our contributions: We propose a network architecture that can simultaneously estimate multiple return quantiles from both buy and sell positions by training with different Quantile Loss functions.", "Our model consists of a block of convolutional layers and multiple LSTM branches to estimate different quantiles.", "The convolutional block, as a feature extraction mechanism, processes raw limit order book data and LSTM layers are used to capture time dependencies among the resulting feature maps.", "We show that this method delivers better predictive performance
than other popular machine learning algorithms.", "Also, better performance can be achieved by combining estimates from different quantiles." ], [ "Data Description and Returns", "Our dataset consists of one year of full-resolution LOB data for five of the most liquid stocks listed on the London Stock Exchange (LSE), namely, Lloyds Bank, Barclays, Tesco, BT and Vodafone.", "The data spans all trading days from the 3rd of January 2017 to the 24th of December 2017 and only normal trading periods (between 08:30:00 and 16:30:00) are included.", "We take price and volume for 10 levels on both ask and bid sides of a LOB, so there are 40 features at each timestamp.", "Overall, our dataset has more than 134 million observations and there are, on average, 150,000 events per day per stock.", "The first 6 months are used as training data, the next 3 months as validation data and the last 3 months as test data.", "In the context of high-frequency data, 3 months of test data correspond to millions of observations and therefore provide sufficient scope for testing model performance and robustness.", "We prepare our input data using the procedure outlined in our previous work [13].", "We define returns by taking spreads into account.", "A return, $r_i(t)$ , at time $t$ , either long or short, can be decomposed as: $ r_i(t) = \\frac{\\Delta p_i(t)}{p_{mid}(t)}, \\quad \\Delta p_i(t) = z_i(t)\\Delta m(t) - \\frac{s(t)+s(t+k)}{2}$ where $\\begin{split}z_i(t) &= \\begin{cases}1, & \\text{if } i=\\mathrm {long} \\\\-1, & \\text{if } i=\\mathrm {short}\\end{cases} \\\\\\Delta m(t) &= p_{mid}(t+k) - p_{mid}(t) \\\\p_{mid}(t) &= \\frac{p_{ask}^{(1)}(t) + p_{bid}^{(1)}(t)}{2}\\end{split}$ and $k$ is the prediction horizon, $s(t)$ and $s(t+k)$ are spreads at time $t$ and $t+k$ .", "We denote the first-level ask and bid prices as $p_{ask}^{(1)}(t)$ and $p_{bid}^{(1)}(t)$ .", "A schematic description of Equation (REF ) is given in Figure REF – we
cross half the spread now, follow the mid-price and cross the other half later.", "As we already observe the current spread $s(t)$ , there is no need to model it and we remove it from (REF ), so the return $r_i^{\\prime }(t)$ of interest is defined as: $ r_i^{\\prime }(t)= \\frac{\\Delta p_i^{\\prime }(t) }{p_{mid}(t)}, \\quad \\Delta p_i^{\\prime }(t) = \\Delta p_i(t) + s(t).$ Note that Equation (REF ) can also be written as: $ r_i^{\\prime }(t) = z_i(t) r_{mid}(t) - r_{spread}(t)/2$ where $r_{mid}(t) = \\Delta m(t)/p_{mid}(t)$ and $r_{spread}(t) = (s(t+k)-s(t))/p_{mid}(t)$ .", "Instead of modelling ($r^{\\prime }_{long}(t), r^{\\prime }_{short}(t)$ ), we can also estimate quantiles for the mid-price change and spread change ($ r_{mid}(t) , r_{spread}(t) $ ).", "Figure: A schematic description of Equation ().", "$p_a^{(1)}(t)$ and $p_b^{(1)}(t)$ represent the best ask and bid prices at time $t$ .", "Left: a return from a long position; Right: a return from a short position."
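The return definitions above can be made concrete in code. The following is a minimal sketch (not from the paper; the function and variable names are our own) that computes $r_i^{\prime}(t) = z_i(t)\,r_{mid}(t) - r_{spread}(t)/2$ from first-level ask and bid prices:

```python
import numpy as np

def lob_returns(ask1, bid1, k, position="long"):
    """Horizon-k spread-aware returns from first-level LOB prices,
    r'_i(t) = z_i(t) * r_mid(t) - r_spread(t) / 2.

    ask1, bid1 : sequences of best ask / best bid prices
    k          : prediction horizon in ticks
    position   : "long" (z = +1) or "short" (z = -1)
    """
    ask1, bid1 = np.asarray(ask1, float), np.asarray(bid1, float)
    p_mid = (ask1 + bid1) / 2.0          # p_mid(t)
    s = ask1 - bid1                      # spread s(t)
    z = 1.0 if position == "long" else -1.0
    dm = p_mid[k:] - p_mid[:-k]          # Delta m(t) = p_mid(t+k) - p_mid(t)
    r_mid = dm / p_mid[:-k]
    r_spread = (s[k:] - s[:-k]) / p_mid[:-k]
    return z * r_mid - r_spread / 2.0
```

Note that the long and short returns sum to $-r_{spread}(t)$, so whenever the spread changes over the horizon the two series are asymmetric, as discussed above.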
], [ "Quantile Regression", "In Quantile Regression (QR), we predict the conditional quantile of the target distribution, i.e.", "$\\mathbb {P}(r\\le r^{(\\tau )} | x) = \\tau $ , where $\\tau $ is the quantile of interest and $x$ is the input.", "We can obtain the estimates by minimizing the Quantile Loss (QL) with respect to $r^{(\\tau )}$ , leading to the $\\tau $ th quantile: $L_{\\tau }(r, \\hat{r}^{\\tau }) = \\sum _{t: r(t) < \\hat{r}^{\\tau }(t)} (\\tau - 1)|r(t) - \\hat{r}^{\\tau }(t)| + \\sum _{t: r(t) > \\hat{r}^{\\tau }(t)} \\tau |r(t) - \\hat{r}^{\\tau }(t)|$ where $r(t)$ is the observation at time $t$ and $\\hat{r}^{\\tau }(t)$ is the predicted value.", "The commonly used mean absolute error is equivalent, up to a constant factor, to QL with $\\tau = 0.5$ .", "Note that each quantile has its own QL function and, in order to obtain multiple quantiles, we would normally need separate models to estimate each of them.", "Our network removes this constraint by training with multiple QLs to model all quantiles of interest simultaneously.", "QR has a well-known problem, quantile crossing, where estimated quantile curves cross each other, leading to an invalid distribution for the response, e.g.", "the predicted 90th percentile of the response is smaller than the 80th percentile, which is impossible.", "We follow the work of [2] and rearrange the originally estimated non-monotone curve into a monotone rearranged curve."
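For concreteness, the quantile (pinball) loss and a simple monotone rearrangement can be sketched as follows. This is our own illustration, not the paper's code; the rearrangement here is the per-timestamp sorting of the quantile predictions, assuming the columns are ordered by increasing $\tau$, which is one way to realise the rearrangement idea of [2]:

```python
import numpy as np

def quantile_loss(r, r_hat, tau):
    """Pinball loss L_tau(r, r_hat), summed over time steps.
    With tau = 0.5 it equals half the sum of absolute errors."""
    e = np.asarray(r, float) - np.asarray(r_hat, float)
    return float(np.sum(np.where(e > 0, tau * e, (tau - 1) * e)))

def rearrange(q_est):
    """Monotone rearrangement: sort each row of a (T, n_quantiles)
    prediction matrix so that quantile curves cannot cross."""
    return np.sort(np.asarray(q_est, float), axis=1)
```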
], [ "Network Architecture", "We propose a neural network architecture that simultaneously models different quantiles of returns from both long and short positions.", "This model is an extension of our previous work [13], to which we refer the reader for a detailed discussion of how the initial layers are constructed.", "We denote this new architecture as DeepLOB-QR.", "As each quantile has its own loss function, if we are interested in 3 quantiles for returns from each position, we would need to estimate 6 separate models.", "This is computationally demanding for LOB data, as we have millions of observations in a single day.", "Further, each of the models is essentially estimating the “same” underlying quantity (the long or short return), just different quantiles of it.", "We would thus expect that there are common features that can contribute to the estimation of all models, and it is wasteful to estimate them separately.", "DeepLOB-QR is designed to address these constraints by using a common convolutional block and branching out several LSTM layers to model the different quantiles of interest.", "The “main input” takes raw LOB data to extract features that capture the relationship between demand and supply of an instrument.", "The two auxiliary inputs take past returns from long and short positions.", "We isolate LOB data and past returns here because one important property of convolutional layers is parameter sharing, and price and volume series have different dynamics compared to returns.", "Overall, this is a multi-input and multi-output setup trained using different loss functions (6 QLs in our case).", "The last parallel LSTM layers (LSTM$@32$ ) are only trained using their corresponding losses, while each of the two LSTM$@64$ layers is trained using 3 losses and the convolutional block is trained using all losses.", "Figure: Model architecture for DeepLOB-QR.", "Here $1 \\times 2@16$ represents a convolutional layer with 16 filters of size $(1\\times 2)$
.", "QL: $\\tau $ represents the quantile loss at quantile $\\tau $ ." ], [ "Forecast Combination", "The works of [8], [6] suggest that combinations of individual quantile estimates can form a much more robust point estimate and help reduce prediction uncertainty.", "After obtaining quantile estimates $\\hat{r}^{(\\tau )}(t)$ , $\\tau \\in \\mathcal {S}$ where $\\mathcal {S}$ denotes the set of considered quantiles, we can combine them as: $\\hat{r}(t) = \\sum _{\\tau \\in \\mathcal {S}} \\pi ^{(\\tau )} \\hat{r}^{(\\tau )}(t), \\quad \\sum _{\\tau \\in \\mathcal {S}} \\pi ^{(\\tau )}=1$ where the weights $\\pi ^{(\\tau )}$ represent the probability assigned to the prediction of quantile $\\tau $ .", "This estimator, as a linear combination of order statistics, forms a point estimate of the central location of a distribution based on a small set of quantile estimates.", "We can reflect our beliefs about how each quantile estimate affects the central location by adjusting the corresponding weights.", "The simplest method of estimating the weights is to use a fixed weighting scheme.", "We can also form a constrained optimization problem [6] to find an optimal combination of quantile estimates: $\\mathbf {\\pi } = \\arg \\underset{\\mathbf {\\pi }}{\\min } \\ E_t [r(t-1) - \\sum _{\\tau \\in \\mathcal {S}} \\pi ^{(\\tau )} \\hat{r}^{(\\tau )}(t-1) ]^2,$ where $\\mathbf {\\pi } = [\\pi ^{(\\tau )}]_{\\tau \\in \\mathcal {S}}$ and $\\sum _{\\tau \\in \\mathcal {S}} \\pi ^{(\\tau )}=1$ .", "This procedure hence treats quantile combination as constrained regression, where the weights are non-negative and sum to unity."
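A sketch of the combination step (again our own illustration, not the paper's code): the least-squares problem subject to the sum-to-one constraint has a closed form via a Lagrange multiplier. For simplicity the non-negativity constraint is not enforced here; a full solution would use a quadratic-programming solver.

```python
import numpy as np

def combination_weights(r, Q):
    """Weights pi minimizing ||r - Q @ pi||^2 subject to sum(pi) = 1.

    Q : (T, n_quantiles) matrix whose columns are the quantile
        estimates \\hat{r}^{(tau)}(t); r : (T,) realized returns.
    Closed form: pi = A^{-1}(Q^T r + lam * 1), with lam fixing 1^T pi = 1.
    """
    Q, r = np.asarray(Q, float), np.asarray(r, float)
    A = Q.T @ Q
    b = Q.T @ r
    Ainv = np.linalg.inv(A)
    ones = np.ones(Q.shape[1])
    lam = (1.0 - ones @ Ainv @ b) / (ones @ Ainv @ ones)
    return Ainv @ (b + lam * ones)
```

If the realized returns happen to be an exact convex combination of the quantile estimates, the weights are recovered exactly.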
], [ "Experiments and Results", "As in Figure REF , we predict three quantiles (0.25, 0.5 and 0.75) for returns from long positions and the same three quantiles for short positions, totalling 6 QLs.", "Our prediction horizon, $k$ , is 100 steps into the future.", "In order to compare with other methods, we assess how different models estimate the central location of a response.", "We denote estimates of the 0.5 quantile from our model as DeepLOB-QR and estimates obtained from the combination scheme (Equation (REF )) as DeepLOB-QR(C).", "We compare to four other models: an autoregressive model (AR), a generalised linear model (GLR), a support vector regression (SVR) and a neural network with multiple fully connected layers (MLP).", "Table REF shows the results of our model compared to other methods.", "Performance is measured by mean absolute error (MAE), mean squared error (MSE), median absolute error (MEAE) and $R^2$ score.", "All errors, except the $R^2$ score, are divided by the errors from a repetitive model (a repetitive model uses current observations as future predictions, i.e. a zero-intelligence model); we therefore see how much better a model is compared to a zero-intelligence method.", "DeepLOB-QR achieves the smallest errors and the highest $R^2$ for modelling returns from both long and short positions.", "Also, our results suggest that better prediction results can be obtained from forecast combinations (DeepLOB-QR(C)).", "However, as the prediction horizon ($k$ ) increases, the problem becomes more difficult and we obtain worse performance, as shown in Table REF .", "To visualise how quantile estimates form a prediction interval, we plot them against the true observations in Figure REF .", "Table: Experiment results for the LSE dataset.", "Table: Experiment results for different prediction horizons $k$ .", "Figure: Top: Real returns from long positions are in red and the 0.5 quantile estimate is in black.", "The upper boundary of the grey shading represents the 0.75 quantile
estimate and the lower boundary is for the 0.25 quantile estimate; Bottom: Real returns and quantile estimates for short positions." ], [ "Conclusion", "In the context of time-series from high-frequency limit order book (LOB) data, we show that quantile regression (QR) can provide us with prediction uncertainty by forming a prediction interval, and that better predictive performance can be obtained by combining multiple quantile estimates.", "A promising direction for future work is to design trading strategies based on the derived signals.", "We can combine QR with Reinforcement Learning and use the uncertainty information to tackle the exploration-exploitation problem." ] ]
1906.04404
[ [ "A quantum walk with both a continuous-time and a continuous-spacetime\n limit" ], [ "Abstract Nowadays, quantum simulation schemes come in two flavours.", "Either they are continuous-time discrete-space models (a.k.a. Hamiltonian-based), pertaining to non-relativistic quantum mechanics.", "Or they are discrete-spacetime models (a.k.a. Quantum Walks or Quantum Cellular Automata-based) enjoying a relativistic continuous-spacetime limit.", "We provide a first example of a quantum simulation scheme that unifies both approaches.", "The proposed scheme supports both a continuous-time discrete-space limit, leading to lattice fermions, and a continuous-spacetime limit, leading to the Dirac equation.", "The transition between the two can be thought of as a general relativistic change of coordinates, pushed to an extreme.", "As an emergent by-product of this procedure, we obtain a Hamiltonian for lattice fermions in curved spacetime with synchronous coordinates." ], [ "Introduction", "Confronted with the inefficiency of classical computers for simulating quantum particles, Feynman realized that one ought to use quantum computers instead [23].", "What better than a quantum system to simulate another quantum system?", "An important obstacle, however, is to understand to what extent quantum systems can be represented by discrete quantum models.", "Indeed, whenever we simulate a physical system on a classical computer, we first need a discrete model of the physical system “in terms of bits”: a simulation algorithm.", "Similarly, quantum simulation requires a discrete quantum model of the quantum system “in terms of qubits”: a quantum simulation algorithm.", "In recent years, several such quantum simulation schemes have been devised [28], [26], [30], [29].", "Some of these were experimentally implemented [27], including for interacting quantum particles [34], [43].", "Most often these discrete models are Hamiltonian-based, meaning that they are discrete-space continuous-time
($\\Delta _x\\ \\textrm {finite},\\ \\Delta _t\\longrightarrow 0$ ).", "Their point of departure is always a discrete-space continuous-time reformulation of the target physical phenomenon (e.g.", "the Kogut-Susskind Hamiltonian [31] formulation of quantum electrodynamics).", "Next, they either look for a quantum system in nature that mimics this Hamiltonian, or they perform a staggered trotterization of it in order to obtain unitaries [39], [3].", "But even when the Hamiltonian is trotterized, time steps need to remain orders of magnitude smaller than space steps ($\\Delta _t\\ll \\Delta _x$ ).", "Thus, in either case, by having first discretized space alone and not time, Hamiltonian-based schemes take things back to the non-relativistic quantum mechanical setting: Lorentz-covariance is broken; the bounded speed of light can only be approximately recovered (via Lieb-Robinson bounds, with issues such as those pointed out in [21], [36]).", "This also creates more subtle problems such as fermion doubling, where spurious particles are created due to the periodic nature of the momentum space on a lattice.", "From a relativistic point of view it would be more natural to discretize space and time right from the start, simultaneously and with the same scale ($\\Delta _x=\\Delta _t\\ \\textrm {finite}$ ).", "The resulting quantum simulation scheme would then take the form of a network of local quantum gates, homogeneously repeated across space and time—a Quantum Cellular Automaton (QCA) [38], [9], [7].", "Feynman himself introduced QCA right along with the idea of quantum simulation [24].", "He had pursued similar ideas earlier on in the one-particle sector, with his attractively simple `checkerboard model' of the electron in discrete $(1+1)$ –spacetime [22].", "Later, the one-particle sector of QCA became known as Quantum Walks (QW), and was found to provide quantum simulation schemes for non-interacting fundamental particles [40], [12], [35], [20] in $(3+1)$ –spacetime [8],
[33], be it curved [18], [5], [4], [32] or not, or in the presence of an electromagnetic field [19], [16] or, more generally, a Yang-Mills interaction [2].", "Some of these were implemented [25], [37].", "The sense in which QW are Lorentz-covariant was made explicit [10], [13], [15], [17].", "The bounded speed of light is very natural in circuit-based quantum simulation, as it is directly enforced by the wiring between the local quantum gates.", "Yet, in spite of their many successes, discrete-spacetime models have so far fallen short of being able to account for realistic interacting QFT.", "The quantum simulation results over the multi-particle sector of QW (namely, QCA) are either abstract (e.g.", "universality [6]) or phenomenological (e.g.", "molecular binding [1], [14]).", "An exception is [11], where contact is made with $(1+1)$ –QED in two ways: by mimicking its construction and by informally recovering its main phenomenology.", "All this points to a core difficulty: there is no clear sense in which discrete-spacetime models of interacting particles have a continuum limit ($\\Delta _x=\\Delta _t\\longrightarrow 0$ ); in fact, it is not even clear that interacting QFT themselves have such a continuum limit.", "In many ways, the classical Lagrangian that serves as the departure point of a QFT is but a partial prescription for a numerical scheme (e.g.", "a regularized Feynman path integral), whose convergence is often challenging (renormalization).", "Continuous spacetime does not seem to be the friendly place where QCA and QFT should meet.", "Clearly, Hamiltonian-based and QCA-based simulation schemes both have their advantages.", "It would be nice to have the best of both worlds: a discrete-spacetime model ($\\Delta _x=\\Delta _t$ finite) that would be plastic enough to support both a non-relativistic continuous-time discrete-space limit ($\\Delta _x$ finite, $\\Delta _t\\longrightarrow 0$ ), in order to establish contact with the discrete-space
continuous-time formulation of the QFT, and a fully relativistic continuous-spacetime limit ($\\Delta _x=\\Delta _t\\longrightarrow 0$ ).", "For a proof-of-concept we should aim for the free Dirac QFT first, and build a QW that converges both to its continuous-time discrete-space formulation (namely “lattice fermions”, i.e.", "the free part of the Kogut-Susskind Hamiltonian) and to its continuous-spacetime formulation (the Dirac equation).", "This is exactly what we achieve in this paper.", "For our construction, we needed `plasticity', in the sense of a tunable speed of propagation.", "Indeed, intuitively, during the process where the continuous-time discrete-space limit is taken, whenever $\\Delta _t$ gets halved relative to $\\Delta _x$ , so is the particle's speed—because it gets half the time to propagate.", "This in turn is analogous to a change of coordinates, relabelling event $(t,x)$ into $(2t,x)$ in General Relativity.", "In order to keep physical distances the same, a synchronous metric $g=\\textrm {diag}(1,-g_{xx})$ then becomes $g^{\\prime }=\\textrm {diag}(1,-4 g_{xx})$ under such a change of coordinates.", "The original curved Dirac QW [18] is precisely able to handle any synchronous metric in the massless case; this was the starting point of our construction.", "Numerous trial-and-error modifications were needed, however, in order to control the relative scalings of $\\Delta _t$ and $\\Delta _x$ and in order to handle the mass elegantly.", "No wonder, therefore, that our result handles the case of curved $(1+1)$ –spacetime `for free'.", "Our QW yields an original curved lattice-fermion Hamiltonian, which, to our knowledge, has not appeared in the literature [41], [42], in the continuous-time discrete-space limit, and the standard curved Dirac equation in the continuous-spacetime limit.", "Roadmap.", "Section presents the QW.", "Section shows the different limits it supports (see Fig REF ).", "Section deals with synchronous curved $(1+1)$ –spacetime.", "Section promotes the
one-particle sector QW to the many–non-interacting–particles sector of a QCA.", "Section summarizes the results, and concludes." ], [ "The model", "We consider a QW over the $(1+1)$ –spacetime grid, which we refer to as the `Plastic QW'.", "Its coin or spin degree of freedom lies in $\\mathcal {H}_2$ , for which we may choose some orthonormal basis $\\lbrace | v^- \\rangle , | v^+ \\rangle \\rbrace $ .", "The overall state of the walker lies in the composite Hilbert space $\\mathcal {H}_2\\otimes \\mathcal {H}_\\mathbb {Z}$ and may thus be written $\\Psi =\\sum _l \\psi ^+(l) | v^+ \\rangle \\otimes | l \\rangle + \\psi ^-(l) | v^- \\rangle \\otimes | l \\rangle $ , where the scalar field $\\psi ^+$ (resp.", "$\\psi ^-$ ) gives, at every position $l\\in \\mathbb {Z}$ , the amplitude of the particle being there and about to move left (resp.", "right).", "We use $(j,l) \\in \\mathbb {N} \\times \\mathbb {Z}$ to label instants and points in space respectively, and let: $\\Psi _{j+2}=W \\Psi _j$ where $W = \\Lambda ^{-\\kappa } {S}({{C}_{-\\zeta }}\\otimes \\text{Id}_\\mathbb {Z}) {S}({C}_{\\zeta }\\otimes \\text{Id}_\\mathbb {Z} )\\Lambda ^\\kappa $ with ${S}$ a state-dependent shift operator such that $({S}\\Psi )_{j,l} =\\begin{pmatrix}\\psi ^+_{j,l+1}\\\\\\psi ^-_{j,l-1}\\end{pmatrix},$ ${C}_\\zeta $ an element of $U(2)$ that depends on angles $\\theta $ and $\\zeta $ , ${C}_\\zeta = \\begin{pmatrix} -\\cos \\theta & e^{-i \\zeta }\\sin \\theta \\\\e^{i \\zeta } \\sin \\theta & \\cos \\theta \\end{pmatrix}$ and $\\Lambda $ another element of $U(2)$ that depends on $f^\\pm =\\sqrt{1-c}\\pm \\sqrt{c+1}$ , $\\Lambda = \\frac{1}{2}\\left(\\begin{array}{cc}- f^- & f^+ \\\\f^+ & f^- \\\\\\end{array}\\right).$ Later $c\\in [0,1]$ will be interpreted as a speed of propagation or a hopping rate.", "To investigate the continuum limit, we first introduce a time discretization step $\\Delta _t$ and a space discretization step $\\Delta _x$ .", "We then introduce, for any
parameter $a$ appearing in Eq.", "(REF ), a field $\\tilde{a}$ over the spacetime positions $\\mathbb {R}^+ \\times \\mathbb {R}$ , such that $a_{j,l}=\\tilde{a}(t_j,x_l)$ , with $t_j=j \\Delta _t$ , and $x_l = l \\Delta _x$ .", "Moreover the translation operator $S$ will now proceed by $\\Delta _x$ steps, so that $(S\\widetilde{\\Psi })(x_l)=\\left(\\widetilde{\\psi }^+(x_l+\\Delta _x),\\widetilde{\\psi }^-(x_l-\\Delta _x)\\right)^\\intercal $ i.e.", "$S\\widetilde{\\Psi }=\\exp ( i \\sigma _z \\Delta _x p )\\widetilde{\\Psi }$ , with $p=- i \\partial _x$ .", "Eq.", "(REF ) then reads: $\\widetilde{\\Psi }(t_j+2\\Delta _t) = \\widetilde{W}^{\\theta }_{\\zeta } \\widetilde{\\Psi }(t_j).$ Let us drop the tildes to lighten the notation.", "We suppose that all functions are $C^2$ ; for a more detailed analysis of the regularity conditions which make these kinds of schemes convergent, the reader is referred to [8].", "Now, the continuum limit is the pair of differential equations obtained from Eq.", "(REF ) by letting both $\\Delta _t$ and $\\Delta _x$ go to zero.", "But how do we choose to let them go to zero, and what happens to the parameters then?", "First, let us parametrize the time and space steps with a common $\\varepsilon $ : $\\Delta _t = \\varepsilon \\hspace{28.45274pt} \\Delta _x = \\varepsilon ^{1-\\alpha }$ where $\\alpha \\in [0,1]$ will allow us to have $\\Delta _t$ and $\\Delta _x$ tend to zero differently.", "Second, let us parametrize the positive real number $\\kappa $ , and the angles $\\theta $ , $\\zeta $ , by the same $\\varepsilon $ : $\\kappa &= \\varepsilon ^\\alpha ,\\\\\\theta &= \\arccos (c \\kappa ),\\\\\\zeta & = m \\frac{(-1)^{\\kappa }\\varepsilon }{\\sin (\\theta )}.$ Later $m\\ge 0$ will be interpreted as a mass.", "Summarizing, $\\varepsilon $ will be taken to zero, triggering the continuum limit; $\\alpha $ will remain fixed throughout the limit, governing which type of continuum limit is being taken; $m, c$ will remain
fixed throughout the limit, stating which mass and speed are being simulated; $\\Delta _t, \\Delta _x, \\kappa , \\theta , \\zeta $ will vary throughout the limit, entirely determined by the above four.", "Notice that when we take the $\\varepsilon \\longrightarrow 0$ limit, $W$ remains unitary.", "For instance, focussing on the top-left entry of $C_\\zeta $ we need $\\cos \\theta =c \\varepsilon ^\\alpha \\le 1$ , which requires that $\\alpha \\ge 0$ as already imposed.", "Altogether, these jets define a family of QWs indexed by $\\varepsilon $ , whose embedding in spacetime, and defining angles, depend on $\\varepsilon $ .", "The continuum limit of Eq.", "(REF ) can then be investigated by Taylor expanding $\\Psi (t,x)$ and $W^{\\varepsilon ,\\alpha }$ for different values of $\\alpha $ ." ], [ "Continuum limit and scalings", "Substituting for $\\theta $ as in (REF ), the coin operator reads $C^{\\varepsilon ,\\alpha }_\\zeta = \\big (-\\varepsilon ^\\alpha c \\sigma _z + (\\sigma _x \\cos \\zeta + \\sigma _y \\sin \\zeta ) \\sqrt{1- c^2 \\varepsilon ^{2\\alpha }} \\big ).$" ], [ "Case (i): $\\mathbf {\\alpha = 1.}$", "In this case $\\Delta _x = O(1)$ does not scale with $\\varepsilon $ and so the translation operator proceeds by fixed steps $\\Delta _x$ .", "To leading order, the operator $W^{\\varepsilon ,1}$ depends linearly on $\\varepsilon $ .", "The coin operator $C_\\zeta $ reads: $C^{\\varepsilon ,1}_\\zeta \\simeq - \\varepsilon c \\sigma _z + \\sigma _x \\cos (m \\varepsilon )+ \\sigma _y\\sin (m \\varepsilon ) + O(\\varepsilon ^2).$ It is straightforward, after some simple algebra, to derive the evolution operator at zeroth order in $\\varepsilon $ : $W^{0,1} = \\text{Id}.$ Then the evolution operator reads: $W^{\\varepsilon ,1} \\simeq \\text{Id} - 2 \\varepsilon ( i c \\sigma _x\\sin (i \\Delta _x\\partial _x)e^{\\Delta _x \\sigma _z \\partial _x } - i m \\sigma _z) + O(\\varepsilon ^2)$ Replacing this into Eq.", "(REF ), and expanding $\\Psi $
around $(x,t)$ we obtain: $\\Psi + 2\\varepsilon \\partial _t \\Psi = ( \\text{Id} - 2 \\varepsilon i c \\sigma _x\\sin (i \\Delta _x\\partial _x)e^{\\Delta _x \\sigma _z \\partial _x } +2 i \\varepsilon m \\sigma _z)\\Psi + O(\\varepsilon ^2) $ In the formal limit when $\\varepsilon \\longrightarrow 0$ this coincides with the Hamiltonian equation: $i \\partial _t \\Psi = H_L \\Psi $ where $H_L = c \\sigma _x\\sin (i \\Delta _x\\partial _x)e^{\\Delta _x \\sigma _z \\partial _x } - m \\sigma _z.$ This $H_L$ is precisely the Dirac Hamiltonian in the vacuum for a lattice fermion on a one-dimensional grid (up to some minor re-encoding depending on conventions, see Sec.", "V for more details), i.e.", "the continuous-time discrete-space counterpart of the Dirac equation.", "(Notice that, in the massless case, $H_L$ commutes with $\\sigma _x$ , thus preserving the chiral symmetry w.r.t.", "the components of the spinor $\\Psi = (\\psi ^+,\\psi ^-)^\\intercal $ ; this is no longer true once we introduce the mass, which notoriously breaks chirality.)", "Indeed, the standard Dirac equation in continuous spacetime can be recovered at the level of (REF ) by setting $\\Delta _x=\\epsilon $ a posteriori, and computing the leading order of the expansion around $\\epsilon =0$ , which is $i \\partial _t \\Psi =( c \\sigma _x \\partial _x - m\\sigma _z )\\Psi + O(\\epsilon ^2)$ , i.e.", "in the formal limit when $\\epsilon \\longrightarrow 0$ , $i \\partial _t \\Psi = H_D \\Psi , \\quad H_D = c \\sigma _x \\partial _x - m\\sigma _z.$ Can we get to $H_D$ directly?"
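The claim above that $W$ remains unitary, and that a single step tends to the identity as $\varepsilon \to 0$, can be checked numerically in momentum space, where $S$ acts as $\exp(i\sigma_z \Delta_x p)$. The sketch below (our own illustration, not part of the paper) takes $\alpha = 0$, so that $\kappa = 1$ and all parameters are real; the values of $c$, $m$, $p$ and $\varepsilon$ are arbitrary.

```python
import numpy as np

def plastic_qw_step(p, eps, c=0.6, m=1.0):
    """One step W^{eps,alpha}(p) of the 'Plastic QW' in momentum space,
    for alpha = 0 (so kappa = 1 and Delta_x = eps)."""
    kappa = 1.0                       # kappa = eps**alpha with alpha = 0
    dx = eps                          # Delta_x = eps**(1 - alpha)
    theta = np.arccos(c * kappa)
    zeta = m * (-1.0)**kappa * eps / np.sin(theta)

    def C(z):                         # the coin C_zeta
        return np.array([[-np.cos(theta), np.exp(-1j*z) * np.sin(theta)],
                         [np.exp(1j*z) * np.sin(theta), np.cos(theta)]])

    # S acts as exp(i sigma_z Delta_x p) on a momentum eigenstate
    S = np.diag([np.exp(1j*dx*p), np.exp(-1j*dx*p)])
    fm = np.sqrt(1 - c) - np.sqrt(1 + c)      # f^-
    fp = np.sqrt(1 - c) + np.sqrt(1 + c)      # f^+
    Lam = 0.5 * np.array([[-fm, fp], [fp, fm]], complex)
    # W = Lambda^{-1} S C_{-zeta} S C_{zeta} Lambda  (kappa = 1)
    return np.linalg.inv(Lam) @ S @ C(-zeta) @ S @ C(zeta) @ Lam
```

Since every factor is unitary, so is the product, and for small $\varepsilon$ the resulting $2\times 2$ matrix stays close to the identity, in line with the leading-order expansions above.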
], [ "Case (ii): $0<\\alpha <1$ .", "In this case the leading order of the translation operator is ${S}\\simeq (\\text{Id} + \\varepsilon ^{1-\\alpha } \\sigma _z \\partial _x),$ whereas that of the coin operator is: $C^{\\varepsilon ,\\alpha }_\\zeta = \\big (-\\varepsilon ^\\alpha c \\sigma _z + (\\sigma _x \\cos \\zeta + \\sigma _y \\sin \\zeta ) \\sqrt{1- c^2 \\varepsilon ^{2\\alpha }} \\big ).$ The leading order of the Taylor expansion of the evolution operator reads: $W^{\\varepsilon ,\\alpha } \\simeq \\text{Id} + 2 \\varepsilon (-i c \\sigma _x \\partial _x + i m \\sigma _z)+ O(\\varepsilon ^{1+\\alpha })$ which directly recovers the standard massive Dirac equation in continuous time: $i \\partial _t \\Psi = H_D \\Psi .$ Notice that this result arises from the fact that the leading orders are given by terms of the kind $c \\varepsilon ^{\\alpha } \\varepsilon ^{1-\\alpha }\\partial _x$ , which no longer depend on $\\alpha $ , for a final result of order $O(\\varepsilon )$ .", "Thus asking that $0 <\\alpha < 1$ , and thereby enforcing that $\\Delta _t\\longrightarrow 0$ faster than $\\Delta _x\\longrightarrow 0$ , yields the same result as letting $\\Delta _t\\longrightarrow 0$ , and then $\\Delta _x\\longrightarrow 0$ , successively.", "Now, what if we let both of them go to zero at the same rate?" 
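The two-substep walk $W = S\,C_{-\zeta}\,S\,C_{\zeta}$ underlying these jets is also easy to prototype in real space. The sketch below is an illustration under stated assumptions: the $\Lambda^{\kappa}$ conjugation is dropped (since $\kappa = \varepsilon^{\alpha} \to 0$ sends $\Lambda^{\kappa} \to \mathrm{Id}$), the boundary is taken periodic, and the parameters $c = 0.8$, $m = 0.5$ are arbitrary. It checks that one step preserves the norm exactly, i.e. that the step remains unitary at finite $\varepsilon$:

```python
import numpy as np

def coin(theta, zeta):
    # The 2x2 coin C_zeta, applied identically at every lattice site
    return np.array([[-np.cos(theta), np.exp(-1j * zeta) * np.sin(theta)],
                     [np.exp(1j * zeta) * np.sin(theta), np.cos(theta)]])

def shift(psi):
    # S = exp(sigma_z Dx d_x): upper component sampled one site to the right,
    # lower component one site to the left (periodic boundary conditions)
    return np.stack([np.roll(psi[0], -1), np.roll(psi[1], +1)])

def walk_step(psi, eps, alpha, c=0.8, m=0.5):
    # One update W = S C_{-zeta} S C_zeta; the Lambda^kappa conjugation is
    # dropped here (an assumption of this sketch, not the full operator).
    kappa = eps ** alpha
    theta = np.arccos(c * kappa)
    zeta = m * eps / np.sin(theta)
    for s in (+1.0, -1.0):           # apply C_zeta first, then C_{-zeta}
        psi = np.einsum('ab,bx->ax', coin(theta, s * zeta), psi)
        psi = shift(psi)
    return psi

rng = np.random.default_rng(0)
psi0 = rng.normal(size=(2, 64)) + 1j * rng.normal(size=(2, 64))
psi0 /= np.linalg.norm(psi0)
psi1 = walk_step(psi0, eps=1e-3, alpha=0.5)
assert np.isclose(np.linalg.norm(psi1), 1.0)   # the step is exactly unitary
```

Because the coin is unitary and the shift is a permutation of lattice sites, the norm is conserved to machine precision, mirroring the remark that $W$ remains unitary as $\varepsilon \longrightarrow 0$.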
], [ "Case (iii): $\\alpha =0$ .", "In this case the leading order of the translation operator is ${S} \\simeq (\\text{Id} + \\varepsilon \\sigma _z \\partial _x) + O(\\varepsilon ^2),$ and the quantum coin becomes: $C^{\\varepsilon ,0}_\\zeta = -\\sigma _z c + (\\sigma _x \\cos \\zeta +\\sigma _y \\sin \\zeta ) \\sqrt{1- c^2} .$ This special case is, in a sense, the opposite of Case (i), where the coin operator scaled with $\\varepsilon $ and the shift operator was independent of it.", "The leading order in $\\varepsilon $ of the evolution operator gives: $W^{\\varepsilon ,0} \\simeq \\text{Id} + 2 \\varepsilon \\Lambda \\left( -i c \\sigma _x \\partial _x + i m \\sigma _z \\right)\\Lambda ^{-1} + O(\\varepsilon ^{2}).$ Again the formal limit yields $i \\partial _t \\Psi = H_D \\Psi .$ Summarizing the results so far: Theorem 3.1 Fix $m\\ge 0$ , $c\\in [0,1]$ .", "For different values of $\\alpha \\in [0,1]$ , consider the family of QWs, parametrized by $\\varepsilon \\ge 0$ : $\\Psi (t_j+2\\Delta _t)=W^{\\varepsilon ,\\alpha }\\Psi (t_j)$ where $W^{\\varepsilon ,\\alpha } = \\Lambda ^{-\\kappa } {S}({{C}_{-\\zeta }}\\otimes \\text{Id}_\\mathbb {Z}) {S}({C}_{\\zeta }\\otimes \\text{Id}_\\mathbb {Z} )\\Lambda ^\\kappa $ with $S=\\exp (\\sigma _z\\Delta _x \\partial _x),$ ${C}_{\\zeta }= \\begin{pmatrix} - \\cos \\theta & e^{-i \\zeta }\\sin \\theta \\\\e^{i \\zeta } \\sin \\theta & \\cos \\theta \\end{pmatrix},$ $\\Lambda = \\frac{1}{2}\\left(\\begin{array}{cc}- f^- & f^+ \\\\f^+ & f^- \\\\\\end{array}\\right), f^\\pm =\\sqrt{1-c}\\pm \\sqrt{c+1},$ $\\kappa &= \\varepsilon ^\\alpha ,\\\\\\theta &= \\arccos (c \\kappa ),\\\\\\zeta & = m \\frac{(-1)^{\\kappa }\\varepsilon }{\\sin (\\theta )}.$ For any $0\\le \\alpha \\le 1$ , the $\\varepsilon $ –parametrized family admits a continuous-time limit as $\\varepsilon \\longrightarrow 0$ .", "For $\\alpha =1$ , this continuous-time limit is discrete in space.", "For $0\\le \\alpha <1$ this continuous-time limit is also 
continuous in space.", "$ i \\partial _t \\Psi = c \\sigma _x\\sin (i \\Delta _x\\partial _x)e^{\\Delta _x \\sigma _z \\partial _x }\\Psi - m \\sigma _z\\Psi , \\hspace{28.45274pt} \\text{for} \\hspace{5.69046pt} \\alpha = 1.$ $ i \\partial _t \\Psi = c \\sigma _x \\partial _x\\Psi - m\\sigma _z \\Psi , \\hspace{48.36958pt} \\text{for} \\hspace{5.69046pt} 0\\le \\alpha < 1$ In both cases, the continuum limit is the differential equation corresponding to a massive Dirac fermion with mass $m$ and propagating at speed $c$ ." ], [ "Introducing an inhomogeneous hopping rate $c$", "The aim of this section is to generalize Th.", "REF to an inhomogeneous speed of propagation or hopping rate $c(t,x)$ .", "In the continuous spacetime limit this corresponds to introducing a non-vanishing spacetime curvature.", "We will see that the spacetime-dependence of $c$ leads to supplementary terms in the expansion of $W^{\\varepsilon ,\\alpha }_{\\zeta }$ , proportional to $\\Psi \\partial _a C^{\\varepsilon ,\\alpha }$ with $a \\in \\lbrace x, t\\rbrace $ .", "Let us look at each of the above cases.", "Keeping the same scaling laws and dynamical equations as in Th.", "REF , we just need to generalise (REF ) as follows: $\\theta (t,x) &= \\arccos (c(t,x) \\kappa )\\\\\\zeta (t,x) & = m \\frac{(-1)^{\\kappa }\\varepsilon }{\\sin (\\theta (t,x))}$ Again we assume that $c(t,x)$ is in $C^2$ .", "As in the previous section, we distinguish several $\\varepsilon $ –parametrized families of QWs, for different values of $\\alpha $ .", "$\\alpha = 1$ .", "The translation operator again does not depend on $\\varepsilon $ .", "We have that $W^{\\varepsilon ,1}_{\\zeta } = \\text{Id} - 2 \\varepsilon (\\lbrace e^{\\Delta _x \\sigma _z \\partial _x} \\sigma _x, e^{\\Delta _x \\sigma _z \\partial _x} c(t,x)\\sigma _z\\rbrace + i m \\sigma _z) + O(\\varepsilon ^2)$ where {,} are the usual Poisson brackets.", "Substituting this once again into Eq.", "(REF ), expanding $\\Psi $ around $(x,t)$ and taking 
the formal limit for $\\varepsilon \\longrightarrow 0$ , we recover the following Hamiltonian equation: $i \\partial _t \\Psi &= H_L(x,t) \\Psi \\\\H_L &= -i \\lbrace e^{\\Delta _x \\sigma _z \\partial _x} \\sigma _x, e^{\\Delta _x \\sigma _z \\partial _x} c(t,x)\\sigma _z\\rbrace - m\\sigma _z .$ Quite surprisingly, by setting $\\Delta _x=\\varepsilon $ a posteriori, we recover the curved massive Dirac equation in $(1+1)$ –dimensional spacetime: $i \\partial _t \\Psi = c\\, \\sigma _x\\partial _x \\Psi + \\frac{\\sigma _x}{2}\\Psi \\partial _x c - m \\sigma _z \\Psi $ This suggests that Case (i') may be a simple way to simulate the Dirac equation in a curved spacetime by the implementation of a continuous-time discrete-space QW, which to the best of our knowledge is an original result.", "Again we can get to the same PDE directly by setting $0\\le \\alpha <1$ .", "Indeed, Case (ii') and Case (iii') are analogous to the homogeneous case, except for a supplementary term in the PDE represented by the spatial derivative of the coin $W^{\\varepsilon ,\\alpha }_{m} \\simeq W_m^{\\varepsilon ,\\alpha } + \\frac{1}{2} \\partial _x W_{0} ^{\\varepsilon ,\\alpha }$ .", "It is tedious but straightforward to verify that Th.", "REF generalises as follows: Theorem 4.1 Fix $m\\ge 0$ , $c\\in [0,1]$ .", "For different values of $\\alpha \\in [0,1]$ , consider the family of QWs, parametrized by $\\varepsilon \\ge 0$ : $\\Psi (t_j+2\\Delta _t)=W^{\\varepsilon ,\\alpha }\\Psi (t_j)$ where $W^{\\varepsilon ,\\alpha } = \\Lambda ^{-\\kappa } {S}({{C}_{-\\zeta }}\\otimes \\text{Id}_\\mathbb {Z}) {S}({C}_{\\zeta }\\otimes \\text{Id}_\\mathbb {Z} )\\Lambda ^\\kappa $ with $S=\\exp (\\sigma _z\\Delta _x \\partial _x),$ ${C}_{\\zeta }= \\begin{pmatrix} - \\cos \\theta & e^{-i \\zeta }\\sin \\theta \\\\e^{i \\zeta } \\sin \\theta & \\cos \\theta \\end{pmatrix},$ $\\Lambda = \\frac{1}{2}\\left(\\begin{array}{cc}- f^- & f^+ \\\\f^+ & f^- \\\\\\end{array}\\right), f^\\pm =\\sqrt{1-c}\\pm 
\\sqrt{c+1},$ $\\kappa &= \\varepsilon ^\\alpha ,\\\\\\theta (t,x) &= \\arccos (c(t,x) \\kappa ),\\\\\\zeta (t,x) & = m \\frac{(-1)^{\\kappa }\\varepsilon }{\\sin (\\theta (t,x))}.$ For any $0\\le \\alpha \\le 1$ , the $\\varepsilon $ –parametrized family admits a continuous-time limit as $\\varepsilon \\longrightarrow 0$ .", "For $\\alpha =1$ , this continuous-time limit is discrete in space.", "For $0\\le \\alpha <1$ this continuous-time limit is also continuous in space.", "$ i \\partial _t \\Psi = -i \\lbrace e^{\\Delta _x \\sigma _z \\partial _x} \\sigma _x, e^{\\Delta _x \\sigma _z \\partial _x} c(t,x)\\sigma _z\\rbrace \\Psi - m\\sigma _z \\Psi , \\hspace{15.6491pt} \\text{for} \\hspace{5.69046pt} \\alpha = 1$ $ i \\partial _t \\Psi = c(t,x) \\sigma _x \\partial _x \\Psi + \\sigma _x \\Psi \\frac{1}{2}\\partial _x c(t,x) - m \\sigma _z \\Psi , \\hspace{14.22636pt} \\text{for} \\hspace{5.69046pt} 0\\le \\alpha < 1$ In both cases, the continuum limit is the differential equation for a massive Dirac fermion propagating in an arbitrary curved spacetime, with synchronous coordinates—i.e.", "coordinates in which the metric tensor has coefficients $g^{00}=1$ , $g^{01}=g^{10}=0$ and $g^{11}=-\\frac{1}{c(x,t)^2}$ ." 
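A discrete-space counterpart of the $\alpha = 1$ case of Theorem 4.1 can be probed with a small matrix experiment. The sketch below builds a lattice Hamiltonian with the inhomogeneous hopping rate sampled at lattice midpoints; this midpoint-sampled form is an assumption of the sketch (chosen so that the matrix is manifestly Hermitian), not a formula copied from the theorem. It checks Hermiticity (so the generated dynamics is unitary) and that the massless case retains chiral symmetry, i.e. commutes with $\sigma_x$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def curved_lattice_H(c_half, m, N):
    # (H psi)_l = (i/2) sx (c_{l-1/2} psi_{l-1} - c_{l+1/2} psi_{l+1}) - m sz psi_l
    # on N sites with periodic boundary; c_half[l] holds the midpoint value c_{l+1/2}.
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for l in range(N):
        lm, lp = (l - 1) % N, (l + 1) % N
        H[2*l:2*l+2, 2*lm:2*lm+2] += 0.5j * c_half[lm] * sx   # hop from the left
        H[2*l:2*l+2, 2*lp:2*lp+2] -= 0.5j * c_half[l] * sx    # hop from the right
        H[2*l:2*l+2, 2*l:2*l+2] -= m * sz                     # local mass term
    return H

N = 16
x = np.arange(N)
c_half = 0.5 + 0.4 * np.sin(2 * np.pi * (x + 0.5) / N)  # inhomogeneous rate in (0, 1]
H = curved_lattice_H(c_half, m=0.7, N=N)
assert np.allclose(H, H.conj().T)            # Hermitian: generates unitary dynamics
H0 = curved_lattice_H(c_half, m=0.0, N=N)
X = np.kron(np.eye(N), sx)
assert np.allclose(H0 @ X, X @ H0)           # massless case keeps chiral symmetry
```

As expected, the mass term is the only part that breaks the chiral symmetry: the hopping blocks are scalar multiples of $\sigma_x$ even when the rate varies from site to site.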
], [ "Many-particle model", "Let us now extend our formalism to the multi-particle sector.", "We will construct a `Plastic' Quantum Cellular Automaton in $(1+1)$ –dimensions, aiming to model Dirac fermions without interaction.", "It could be argued that the Plastic QCA will be nothing but many Plastic QWs woven together, which is true of course.", "Still, this implies a not so obvious shift in mathematical formalism, and constitutes a mandatory prior step in order to later approach the modelling of interacting QFTs.", "Figure: The Plastic QCA.", "A layer of $U$ is applied, alternated with a swap, then a layer of $U^*$ , and a swap again.", "For this Plastic QCA we adopt the conventions depicted in Fig.", "REF .", "Each red (resp.", "black) wire carries a qubit, which codes for the presence or absence of a left-moving (resp.", "right-moving) electron.", "The cells are located right at the crossings (or right afterwards).", "Therefore their Hilbert space is ${\\cal H}={\\cal H}_L\\otimes {\\cal H}_R$ with ${\\cal H}_L={\\cal H}_R=\\mathbb {C}^2$ , i.e.", "they are made out of two subcells, accounting for the left-moving and right-moving modes respectively.", "The dynamics acts in four steps.", "The even steps act according to a partition $\\mathcal {P}$ , the odd steps act according to the same partition, but half-shifted, i.e.", "$\\mathcal {Q}$ .", "By `partition', we mean a way of grouping subcells, to then act upon them with the synchronous application of a local unitary, across space.", "Partition $\\mathcal {P}$ groups the right subcell of the left cells, with the left subcell of the right cells, in order to act upon them with a local unitary gate.", "In the first step, this local unitary gate is $U$ , which can therefore be thought of as being located at positions $(t_j+\\frac{\\Delta _t}{2},x_l+\\frac{\\Delta _x}{2})$ for $j\\in 2\\mathbb {N}$ and $l\\in \\mathbb {Z}$ .", "In the third step, this local unitary gate is $U^*$ , which can therefore be thought of as being 
located at positions $(t_j+\\frac{3\\Delta _t}{2},x_l+\\frac{\\Delta _x}{2})$ for $j\\in 2\\mathbb {N}+1$ and $l\\in \\mathbb {Z}$ .", "Partition $\\mathcal {Q}$ simply regroups the left and right subcell of a given cell, i.e.", "the local unitary acts per cell.", "In both the second and fourth steps the local unitary is $V$ , which can therefore be thought of as being located at positions $(t_j,x_l)$ for $j\\in \\mathbb {N}$ and $l\\in \\mathbb {Z}$ .", "The local unitary gate $V$ is the simplest: $V=\\begin{pmatrix}1 & 0 & 0 & 0\\\\0 & 0& 1& 0\\\\0 & 1& 0& 0\\\\0 & 0 & 0& -1\\end{pmatrix}$ $= |00\\rangle \\langle 00|+|01\\rangle \\langle 10|\\\\+|10\\rangle \\langle 01|-|11\\rangle \\langle 11|,$ i.e.", "it is a mere swap, just represented by the wire crossings in Fig.", "REF .", "Its only oddity is the $(-1)$ phase when two excitations permute: we refer to [11] for a justification.", "The local unitary gate $U$ , on the other hand, acts non-trivially in the one-particle sector, in a way which is related to the coin operator: $U=\\begin{pmatrix}1 & 0 & 0 & 0\\\\0 & e^{-i \\zeta }\\sin \\theta & -\\cos \\theta & 0\\\\0 & \\cos \\theta & e^{i \\zeta }\\sin \\theta & 0\\\\0 & 0 & 0 & -1\\end{pmatrix}$ $= |00\\rangle \\langle 00|+ e^{-i \\zeta }\\sin \\theta |01\\rangle \\langle 01| -\\cos \\theta |01\\rangle \\langle 10|\\\\+ \\cos \\theta |10\\rangle \\langle 01|+e^{i \\zeta }\\sin \\theta |10\\rangle \\langle 10|- |11\\rangle \\langle 11|.$ Altogether, the global evolution is: $\\Psi (t_j+2\\Delta _t)= G \\Psi (t_j)$ $G= \\bigotimes _{\\cal Q} V \\bigotimes _{{\\cal P}} U^* \\bigotimes _{\\cal Q} V \\bigotimes _{{\\cal P}} U$ We wish to argue that in the discrete-space continuous-time limit, this Plastic QCA behaves like many-particle non-interacting lattice fermions.", "We will do so by focussing on the one-particle sector behaviour of this QCA, and then second quantizing it by standard methods.", "Here is the one-particle sector: $\\Psi (t_j) =\\sum _{l\\in 
\\mathbb {Z}} &\\psi ^+(x_l,t_j) \\ldots | 00 \\rangle ^{l-1}| 10 \\rangle ^l| 00 \\rangle ^{l+1}\\ldots +\\\\&\\psi ^-(x_l,t_j) \\ldots | 00 \\rangle ^{l-1}| 01 \\rangle ^l| 00 \\rangle ^{l+1}\\ldots $ After a tedious but straightforward calculation, we can extract the recurrence relation for the amplitudes $\\psi _l(t_j) = \\lbrace \\psi ^+(x_l,t_j),\\psi ^-(x_l,t_j)\\rbrace $ at time $t_j+2\\Delta _t$ ; Taylor expand them around $\\varepsilon = 0$ using the scalings (REF ) for $\\alpha =1$ ; and take the formal limit for $\\varepsilon \\longrightarrow 0$ .", "This yields, with a spacetime-dependent $U$ , the discretized one-particle Hamiltonian: $H_{\\mathcal {G}d} \\psi _l(t) =\\frac{i}{2 } \\sigma _x \\left(c_{l-\\frac{\\Delta _x}{2}}\\psi _{l-\\Delta _x}-c_{l+\\frac{\\Delta _x}{2}}\\psi _{l+\\Delta _x}\\right) - m \\sigma _z \\psi _l.$ This is our proposed curved-spacetime lattice fermion Hamiltonian.", "One can check that in the continuous-space limit, this Hamiltonian goes to the standard, continuous spacetime Hamiltonian of the curved $(1+1)$ –spacetime Dirac equation in synchronous coordinates.", "Moreover, the proposed Hamiltonian (REF ) coincides, in the case of a homogeneous and static metric, with the usual one-particle lattice fermion Hamiltonian: $H_{d} \\psi _l(t) =\\frac{i c}{2 } \\sigma _x \\left(\\psi _{l-\\Delta _x}-\\psi _{l+\\Delta _x}\\right) - m \\sigma _z \\psi _l.$ This latter Hamiltonian, when massless, commutes with $\\sigma _x$ (preserving the chiral symmetry).", "Any other massless representation, e.g.", "that one commuting with $\\sigma _y$ , can be recovered by choosing a different phase in the operator $U$ .", "In order to recover a massless Hamiltonian commuting with $\\sigma _y$ we can choose the operator $U^{\\prime }=\\begin{pmatrix}1 & 0 & 0 & 0\\\\0 & e^{-i \\zeta }\\sin \\theta & -\\cos \\theta & 0\\\\0 & \\cos \\theta & -e^{i \\zeta }\\sin \\theta & 0\\\\0 & 0 & 0 & -1\\end{pmatrix}$ .", "The one-particle lattice fermion 
Hamiltonian of (REF ) is quite comparable to that of (REF ), but has the advantage of being rather well-known and admitting a standard second-quantization [31].", "Moreover, the two can be related in the discrete spacetime picture in the sense that the one-particle sector of $G$ coincides with $W$ up to an initial encoding.", "Indeed, $G$ acts in the one-particle sector as follows: $W^{\\prime } =( {S}^-{{C}_{-\\zeta }} {S}^+) ( {S}^-{{C}_{\\zeta }} {S}^+ )$ where ${S}^\\pm $ are partial shifts in space: $({S}^+\\Psi )_{j,l} =\\begin{pmatrix}\\psi ^+_{j,l+1}\\\\\\psi ^-_{j,l}\\end{pmatrix}\\quad ({S}^-\\Psi )_{j,l} =\\begin{pmatrix}\\psi ^+_{j,l}\\\\\\psi ^-_{j,l-1}\\end{pmatrix}$ The operator is thus equivalent to $W^\\theta _\\zeta $ up to an encoding $E = {S^+}$ and setting $\\Lambda = I_2$ : ${W^{\\prime }}^{\\theta }_{\\zeta } = E^\\dagger {W}^{\\theta }_{\\zeta } E.$ From (REF ) we can derive the continuous-time Kogut-Susskind Hamiltonian of the free Dirac QFT (a self-adjoint operator in the Fock space) using the more abstract language of modern quantum field theory.", "We can use the discretized single-particle Hamiltonian as a self-adjoint operator in the Fock space $\\mathcal {F}$ according to ${H} = \\sum _l {\\Psi }^*(H_df_l)\\Psi (f_l) = \\\\\\frac{1}{2} \\sum _l\\big [ \\Psi ^{2\\dagger }_{l+1} \\Psi ^{1}_{l} + \\Psi ^{2\\dagger }_{l} \\Psi ^{1}_{l+1} + h.c.\\big ] + m \\sum _l \\big [ \\Psi ^{1\\dagger }_{l} \\Psi ^{1}_{l} - \\Psi ^{2\\dagger }_{l} \\Psi ^{2}_{l} \\big ] $ where the second quantized discretized Dirac field operator $\\Psi ^\\alpha _l = \\Psi ^\\alpha _l(f_l) = \\int dx \\Psi ^\\alpha (x) f^*_l(x) \\hspace{14.22636pt} \\alpha \\in \\lbrace +, -\\rbrace $ satisfies the above anti-commutation relations and the orthonormal set of basis functions $f_l$ spans the discretized Hilbert space $\\mathbb {Z}$ .", "Notice that the above equation takes the form of the Fermi-Hubbard equation where the first term implements the hopping between the 
neighboring sites and the second term is a local term accounting for the mass term.", "Altogether, this provides strong evidence that the Plastic QCA $G$ has as its discrete-space continuous-time limit the free Dirac Kogut-Susskind Hamiltonian (REF ).", "However, it would be preferable to prove this directly in the many-particle sector, without restricting to the one-particle sector and then second-quantizing.", "We leave this question open." ], [ "Conclusion", "Summary of results.", "We introduced a Quantum Walk (QW) over the $(1+1)$ –spacetime grid, given by (REF ).", "The QW has parameters $m$ , $c$ , $\\varepsilon $ , $\\alpha $ .", "The first two are parameters of the physics that we simulate: $m$ is the mass of the Dirac fermion, $c$ is the speed of light.", "The second two control the scaling of the discretization: $\\varepsilon \\longrightarrow 0$ whenever we take a continuum limit, whereas $0\\le \\alpha \\le 1$ will remain fixed but determine which limit is to be taken, by specifying the relative scalings of $\\Delta _t=\\varepsilon $ and $\\Delta _x=\\varepsilon ^{1-\\alpha }$ .", "When $\\alpha =0$ the continuous-spacetime limit ($\\Delta _x=\\Delta _t\\longrightarrow 0$ ) yields the Dirac equation.", "The same is true of all intermediate cases $0\\le \\alpha <1$ .", "But when $\\alpha =1$ , the continuous-time discrete-space limit ($\\Delta _x \\textrm {finite},\\Delta _t\\longrightarrow 0$ ) gets triggered, and yields the lattice fermion Hamiltonian.", "The result is encapsulated in Th.", "REF , and generalized to a spacetime-dependent $c(x,t)$ , in Th.", "REF , recovering the curved-spacetime lattice fermion and Dirac equations in synchronous coordinates.", "Finally, the QW is made into a QCA by considering many non-interacting particles; in the limits this yields free Dirac QFT as expected, i.e.", "lattice fermions for many non-interacting particles.", "Perspectives.", "The QCA may be viewed as a discrete-spacetime version of free Dirac QFT.", "In the same way that free 
Dirac QFT admits the lattice fermions as its asymmetric space-discretization, this QCA admits the lattice fermions as its asymmetric continuous-time limit.", "It is unitary and strictly respects the speed of light.", "Our long-term aim is to add interactions to this QCA, thereby obtaining a discrete-spacetime version of some interacting QFT in the style of [11]—except that it would support a continuous-time limit towards the non-relativistic, discrete-space continuous-time reformulations of the interacting QFT, for due validation.", "Discrete-space continuous-time may well be the friendly place where QCA and QFT should meet, after all.", "An unexpected by-product of this work is the provision of a space-discretization of the curved $(1+1)$ –spacetime Dirac equation in synchronous coordinates, i.e.", "the provision of an original curved lattice fermion Hamiltonian.", "This opens the route for elaborating a curved Kogut-Susskind Hamiltonian, and eventually suggesting Hamiltonian-based quantum simulators of interacting particles over a curved background.", "An interesting remark raised by one of the anonymous referees is the following.", "On the one hand, the non-relativistic, naive lattice fermion Hamiltonians are known [31] to suffer from the fermion-doubling problem, i.e.", "a spurious degree of freedom.", "On the other hand, the Dirac QW does not suffer from this problem [3].", "An intriguing question is whether the Plastic QW hereby presented, which borrows from both worlds, suffers this problem or not.", "We leave this as an open question.", "Finally, we wonder whether the Schrödinger limit ($c \\longrightarrow \\infty $ ) could be fitted into the picture." ], [ "Competing interests", "The authors declare that there are no competing interests." ], [ "Author Contribution", "GDM and PA contributed equally to the main results of the manuscript." ], [ "Data Availability", "No data sets were generated or analysed during the current study." 
], [ "Acknowledgements", "The authors would like to thank Pablo Arnault and Cédric Bény for motivating discussions." ] ]
1906.04483
[ [ "The incompressible limit of compressible finitely extensible nonlinear\n bead-spring chain models for dilute polymeric fluids" ], [ "Abstract We explore the behaviour of global-in-time weak solutions to a class of bead-spring chain models, with finitely extensible nonlinear elastic (FENE) spring potentials, for dilute polymeric fluids.", "In the models under consideration the solvent is assumed to be a compressible, isentropic, viscous, isothermal Newtonian fluid, confined to a bounded open domain in $\\mathbb{R}^3$, and the velocity field is assumed to satisfy a complete slip boundary condition.", "We show that as the Mach number tends to zero the system is driven to its incompressible counterpart." ], [ "Introduction", "The purpose of this paper is to explore the singular limit of finite-energy global-in-time weak solutions, as the Mach number tends to zero, to a class of kinetic models of dilute polymeric fluids with noninteracting polymer chains, where the solvent is a compressible, isentropic, viscous isothermal Newtonian fluid, and the polymer molecules suspended in the solvent are idealised as linear bead-spring chains with finitely extensible nonlinear elastic (FENE) spring potentials.", "The existence of finite-energy global weak solutions to this class of models, which are coupled compressible Navier–Stokes–Fokker–Planck systems, has been shown in [3] in the case of a nonslip boundary condition for the velocity of the solvent.", "With minor modifications, the existence proof presented in [3] extends to the case of a complete slip (also referred to as Navier slip) boundary condition and, for the sake of simplicity of the exposition, it is the latter complete slip boundary condition that we shall assume hereafter.", "By performing a rigorous passage to the limit we show that, as the Mach number tends to zero, the compressible Navier–Stokes–Fokker–Planck system is driven to its incompressible limit, which is the coupled FENE-type bead-spring chain model 
arising from the kinetic theory of dilute solutions of polymeric liquids with noninteracting polymer chains.", "The limiting model involves the unsteady incompressible Navier–Stokes equations, with an elastic extra-stress tensor appearing on the right-hand side of the momentum equation.", "The elastic extra-stress tensor stems from the random movement of the polymer chains and is defined by the Kramers expression through the associated probability density function that satisfies a Fokker–Planck-type parabolic equation.", "The existence of global-in-time weak solutions to the limiting incompressible model was shown in [1] in the case of a nonslip boundary condition for the velocity of the solvent.", "Again, the existence proof presented in [1] extends to the case of a complete slip boundary condition.", "Our main result is therefore that the incompressible Navier–Stokes–Fokker–Planck system considered in [1], but with a complete slip boundary condition, is the singular limit, as the Mach number tends to zero, of the compressible model considered in [3], with a complete slip boundary condition.", "Our proofs rely on combining the theoretical machinery developed in the monograph of Feireisl and Novotný [7], devoted to the rigorous analysis of various asymptotic limits of the compressible Navier–Stokes–Fourier system, with the analytical techniques involved in the existence proofs in [1], [3], and [9] for coupled incompressible and compressible Navier–Stokes–Fokker–Planck systems.", "For fully macroscopic models of compressible non-Newtonian fluids there are relatively few results in the literature concerning the incompressible limit.", "In [12], Lei and Zhou studied the existence of global classical solutions of the incompressible Oldroyd-B system on the two-dimensional torus.", "The Oldroyd-B model arises in the description of viscoelastic fluid flow and consists of a coupling of the Navier–Stokes equations with a system of first-order partial differential equations 
for the elastic extra stress tensor.", "Using well-posedness and stability results for the compressible case they pass to the incompressible limit.", "This strategy is based on dispersive energy estimates for the compressible Oldroyd-B model.", "The outcome of their analysis is a global existence/uniqueness result for solutions to the two-dimensional incompressible Oldroyd-B model for small data and a justification of the incompressible Oldroyd-B model as a limit of the slightly compressible Oldroyd-B model.", "In [11], Lei studied the incompressible limit problem for the compressible Oldroyd-B model on a torus, and showed that compressible flows with well-prepared initial data converge to incompressible flows as the Mach number tends to zero.", "The case of bounded domains $\\Omega \\subset \\mathbb {R}^3$ was explored by Guillopé, Salloum and Talhouk [10], where they proved the local and global existence of strong solutions to the weakly compressible Oldroyd-B model, and, with the aid of conformal coordinates, they also studied the incompressible limit problem with well-prepared initial data.", "Subsequently Fang and Zi [6] proved, using the techniques of Danchin [4] in scale-invariant Besov spaces on $\\mathbb {R}^d$ , that solutions to the compressible Oldroyd-B model with ill-prepared initial data (in which case strong time-oscillations of solutions need to be considered) converge to those of the corresponding incompressible Oldroyd-B model as the Mach number tends to zero.", "More recently, Ren and Ou [13] showed the local existence of strong solutions to an Oldroyd-B model for incompressible viscoelastic fluids in a bounded domain $\\Omega \\subset \\mathbb {R}^d$ , $d=2,3$ , via the incompressible limit: the main idea of the paper was to derive uniform estimates with respect to the Mach number for a linearised compressible Oldroyd-B system.", "To the best of our knowledge no rigorous asymptotic results are available in the literature for the incompressible 
limit of global weak solutions (with large initial data) to kinetic models of dilute compressible polymeric fluids involving the (compressible) Navier–Stokes–Fokker–Planck system, and it is the study of this question that is our objective here.", "The paper is structured as follows: in the next two subsections of this introductory section we perform a nondimensionalisation of the compressible Navier–Stokes–Fokker–Planck system and we state our main result; Section is then devoted to the proof of our main result.", "We end the paper, in Section , with concluding remarks concerning possible extensions." ], [ "Dimensionless form of the Navier–Stokes–Fokker–Planck system", "We consider the following system of nonlinear partial differential equations consisting of the equations of continuity and balance of linear momentum, having the form of the compressible Navier–Stokes equations in which the elastic extra-stress tensor (the polymeric part of the Cauchy stress tensor) appears as a source term in the balance of linear momentum equation.", "In particular, for a given $T \\in {\\mathbb {R}}_{>0}$ and a bounded open domain $\\Omega \\subset {\\mathbb {R}}^3$ with $\\partial \\Omega \\in C^{2,\\alpha }$ , $\\alpha \\in (0,1)$ , we denote by $\\rho : \\Omega \\times [0,T] \\rightarrow {\\mathbb {R}}$ the density of the solvent, by $u: \\overline{\\Omega } \\times [0,T] \\rightarrow {\\mathbb {R}}^3$ the velocity of the solvent, which satisfies the conservation of linear momentum equation for a viscous compressible fluid, and $\\psi :\\overline{\\Omega }\\times \\overline{D} \\times [0,T] \\rightarrow {\\mathbb {R}}_{\\ge 0}$ is the probability density function associated with the random movement of the polymer chains and satisfying a Fokker–Planck equation to be stated below.", "The configuration space domain $D:=D_1 \\times \\cdots \\times D_K \\subset \\mathbb {R}^{3K}$ is the Cartesian product of $K$ balanced convex open sets $D_i \\subset \\mathbb {R}^d$ , $i=1,\\dots 
,K$ , with $q_i \\in D_i$ denoting the orientation vector of the $i$ -th spring in the bead-spring chain representing an idealisation of a polymer chain suspended in the solvent.", "The coupled Navier–Stokes–Fokker–Planck system under consideration has the following form: $\\begin{split}\\partial _t \\rho + {{\\mathrm {div}}_x}\\,(\\rho u) & = 0 \\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t (\\rho u) + {{\\mathrm {div}}_x}\\,(\\rho u\\otimes u) + \\frac{1}{{\\rm Ma}^2} \\nabla _x p(\\rho )& = \\frac{1}{\\rm Re}\\, {{\\mathrm {div}}_x}\\,\\mathsf {S}(u) + \\frac{1}{{\\rm Fr}^2}\\, \\rho f + \\frac{1}{\\rm Re}\\,{{\\mathrm {div}}_x}\\,\\mathsf {\\tau }_1 - \\frac{\\tilde{\\xi }}{{ \\rm Ma}^2} \\nabla _x \\varrho ^2\\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t \\psi + {{\\mathrm {div}}_x}\\,(u\\psi )+ \\sum _{i=1}^K {\\rm div}_{q_i} ((\\nabla _x u) q_i \\psi ) & = \\delta \\Delta _x \\psi + \\frac{1}{4 {\\mathrm {De}}} \\sum _{i=1}^{K} \\sum _{j=1}^{K} A_{ij}\\, {\\rm div}_{q_i}\\left( M \\nabla _{q_j} \\left(\\frac{\\psi }{M}\\right)\\right) \\quad \\mbox{ in } \\Omega \\times D \\times (0,T],\\\\\\end{split}$ with the initial conditions $\\begin{split}\\rho (\\cdot ,0) & = \\rho _0(\\cdot ) \\ge 0 \\quad \\mbox{ in } \\Omega , \\\\(\\rho u)(\\cdot ,0) & = (\\rho _0 u_0 )(\\cdot ) \\quad \\mbox{ in } \\Omega , \\\\\\psi (\\cdot ,\\cdot ,0) & = \\psi _0(\\cdot ,\\cdot ) \\ge 0\\quad \\mbox{ on } \\Omega \\times D.\\end{split}$ In the above system $\\mathrm {Ma}$ is the Mach number, $\\mathrm {Re}$ is the Reynolds number, $\\mathrm {Fr}$ is the Froude number, $\\mathrm {De}$ is the Deborah number, $\\tilde{\\xi }$ and $\\delta $ are positive real numbers, $A_{ij}$ , $i,j=1,\\dots ,K$ , in the Fokker–Planck equation (REF )$_3$ , are the entries of $\\mathsf {A} = [A_{ij}]_{i,j=1}^K, \\quad \\mbox{a symmetric positive definite matrix, with smallest eigenvalue }a_0\\in {\\mathbb {R}}_{>0},$ called the Rouse matrix, or connectivity matrix 
(e.g.", "$\\mathsf {A} = {\\rm tridiag}[-1,2,-1]$ in the case of a topologically linear bead-spring chain).", "The function $p=p(\\rho )$ is the pressure, $\\mathsf {S}$ is the Newtonian part of the viscous stress tensor, defined by $\\mathsf {S}(u) := \\mu ^S \\bigg ( \\mathsf {D}u- \\frac{1}{3} ({{\\mathrm {div}}_x}\\,u)\\, \\mathsf {I}\\bigg ) + \\mu ^B ({{\\mathrm {div}}_x}\\,u)\\, \\mathsf {I} ,$ where $\\mathsf {I}$ is the $3 \\times 3$ identity tensor, $\\mathsf {D}v:= \\frac{1}{2} (\\nabla _x v + (\\nabla _x v)^{\\rm T})$ is the rate of strain tensor, $\\mu ^S$ and $\\mu ^B$ are positive constants referred to as the shear viscosity and the bulk viscosity, respectively, and the function $M=M(q)$ is the Maxwellian, to be defined below, associated with the model.", "In addition, the pressure $p$ is assumed to be related to the density $\\rho $ of the solvent by the isentropic equation of state: $p(\\rho ) := c_p \\rho ^\\gamma ,$ where $c_p \\in {\\mathbb {R}}_{>0}$ and the constant $\\gamma > 3/2$ .", "The above system is supplemented with boundary conditions, which will be stated below.", "In a bead-spring chain model consisting of $K+1$ beads, linearly coupled with $K$ elastic springs to represent a polymer chain, the extra-stress tensor $\\mathsf {\\tau }$ is given by a version of the Kramers expression depending on the probability density function $\\psi $ of the random conformation vector $q = (q^{\\rm T}_1, \\dots , q^{\\rm T}_K)^{\\rm T} \\in D:= D_1 \\times \\cdots \\times D_K \\subset {\\mathbb {R}}^{3K}$ of the chain, where $q_i$ represents the 3-component conformation/orientation vector of the $i$ -th spring in the chain.", "Typically $D_i$ is the whole of ${\\mathbb {R}}^3$ or a bounded open 3-dimensional ball centred at the origin $0 \\in {\\mathbb {R}}^3$ , for each $i = 1, \\dots , K.$ If $K =1$ , then the model is referred to as the dumbbell model.", "Here we shall concentrate on 
finitely extensible nonlinear elastic (FENE) bead-spring chain models, with $D := B(\\mathbf {0}, b^{\\frac{1}{2}}_1) \\times \\cdots \\times B(\\mathbf {0}, b^{\\frac{1}{2}}_K), \\mbox{ where } b_i > 0,\\ i=1, \\dots , K,\\ K \\ge 1,$ and $B(\\mathbf {0}, b^{\\frac{1}{2}}_i)$ is a bounded open ball in ${\\mathbb {R}}^3$ of radius $b^{\\frac{1}{2}}_i$ , centred at $\\mathbf {0} \\in {\\mathbb {R}}^3$ .", "On the right-hand side of (REF )$_2$ , the 3-component vector function $f$ is the nondimensional density of body forces and the elastic extra-stress tensor is of the form: $\\mathsf {\\tau } (\\psi )(x,t) := \\frac{1}{\\rm Re}\\mathsf {\\tau }_1(\\psi ) (x,t)- \\frac{1}{{\\rm Ma}^2} \\bigg ( \\int _{D\\times D} \\gamma (q,q^{\\prime }) \\psi (x,q,t) \\psi (x,q^{\\prime },t) {\\rm \\, d} q {\\rm \\, d} q^{\\prime } \\bigg )\\, \\mathsf {I}= \\frac{1}{\\rm Re} \\mathsf {\\tau }_1(x,t) - \\frac{\\tilde{\\xi }}{{\\rm Ma}^2} \\varrho ^2(x,t)\\, \\mathsf {I},$ where $\\gamma : D \\times D \\rightarrow {\\mathbb {R}}_{\\ge 0}$ is a smooth, time-independent, $x$ -independent and $\\psi $ -independent interaction kernel, which we take here for the sake of simplicity to be $\\gamma (q, q^{\\prime }) = \\tilde{\\xi }, \\quad \\mbox{ where } \\tilde{\\xi }\\in {\\mathbb {R}}_{\\ge 0}, $ and the polymer number density is defined by $\\varrho (x,t) := \\int _D \\psi (x,q,t) {\\rm \\,d}q, \\quad (x,t) \\in \\Omega \\times [0,T].$ Moreover, $\\mathsf {\\tau }_1(\\psi )$ appearing in (REF ) is the Kramers expression: $\\mathsf {\\tau }_1(\\psi ) := \\frac{1-\\beta }{\\mathrm {De}}\\left(\\sum _{i=1}^K \\mathsf {C}_i ( \\psi ) - (K+1) \\int _D \\psi {\\rm \\,d}q \\,\\mathsf {I}\\right),$ where $(1-\\beta ) := \\eta _p / (\\eta _s + \\eta _p)$ , with $\\eta _p$ signifying the polymeric viscosity and $\\eta _s$ the viscosity of the solvent.", "In the above $\\mathsf {C}_i ( \\psi ) (x,t) := \\int _{D} \\psi (x,q,t)\\, U^{\\prime }_i\\bigg (\\frac{1}{2} | q_i|^2\\bigg ) q_i q^{\\rm 
T}_i {\\rm \\,d}q, \\quad i=1,\\dots ,K,$ where $U_i$ is the $i$ -th spring potential, which we shall now fix and we shall also define the associated Maxwellian.", "Let ${\\mathcal {O}}_i \\subset [0,\\infty )$ denote the image of $D_i$ ($0 \\in {\\mathcal {O}}_i$ ) under the mapping $q_i \\in D_i \\mapsto \\frac{1}{2} |q_i|^2$ and let us consider the spring potential $U_i \\in C^1( {\\mathcal {O}}_i; {\\mathbb {R}}_{\\ge 0} ),$ $i= 1,\\dots ,K$ .", "In the case of the FENE spring potential, $\\mathcal {O}_i = [0,b_i)$ , $i=1,\\dots , K$ .", "We shall suppose that $U_i(0) = 0$ and that $U_i$ is unbounded on ${\\mathcal {O}}_i $ for $i= 1,\\dots ,K$ (i.e., $\\lim _{s \\rightarrow b_i{-}} U_i(s) = +\\infty $ ).", "The elastic spring-force $F_i : D_i \\subseteq {\\mathbb {R}}^3 \\rightarrow {\\mathbb {R}}^3$ of the $i$ -th spring in the chain is then defined by $F_i(q_i) := U^{\\prime }_i\\bigg ( \\frac{1}{2}|q_i|^2 \\bigg )q_i, \\quad i=1,\\dots ,K,$ and the partial Maxwellian $M_i$ , associated with the spring potential $U_i$ , is defined by $M_i(q_i) := \\frac{1}{Z_i} \\mathrm {e}^{-U_i(\\frac{1}{2} | q_i |^2)}, \\quad Z_i := \\int _{D_i} {\\mathrm {e}}^{-U_i(\\frac{1}{2} | q_i |^2)}{\\rm d} q_i, \\quad i=1,\\dots ,K.$ The total Maxwellian $M_i$ in the model is then $M(q) := \\prod _{i=1}^K M_i (q_i) \\quad \\mbox{ for all } q= (q_1^{\\rm T}, \\dots , q_K^{\\rm T})^{\\rm T} \\in D = D_1 \\times \\cdots \\times D_K.$ A straightforward calculation yields that, for $i=1,\\dots ,K$ , $M(q) \\nabla _{q_i} [M(q)]^{-1} = - [ M(q)]^{-1} \\nabla _{q_i} M(q)= U_i^{\\prime } \\bigg (\\frac{1}{2} |q_i|^2\\bigg ) q_i,$ and $\\int _D M(q) {\\rm \\,d}q = 1.$ We shall suppose that for $i=1,\\dots ,K$ there exist constants $c_{ij} >0,$ $j=1, \\dots , 4$ , and $\\theta _i > 1$ such that the spring potential $U_i$ and the associated Maxwellian $M_i$ satisfy $c_{i1} [{\\rm dist} (q_i, \\partial D_i)]^{\\theta _i} \\le M_i (q_i)\\le c_{i2} [{\\rm dist} (q_i, \\partial D_i)]^{\\theta 
_i} \\quad \\mbox{ for all } q_i \\in D_i,$ $c_{i3} \\le {\\rm dist} (q_i, \\partial D_i)\\, U^{\\prime }_{i}\\bigg (\\frac{1}{2} |q_i|^2\\bigg ) \\le c_{i4} \\quad \\mbox{ for } q_i \\in D_i \\mbox{ with } i=1,\\dots , K.$ It then follows from the above that $\\int _{D_i} \\left[ 1+ \\bigg [U_i\\bigg (\\frac{1}{2} |q_i|^2\\bigg )\\bigg ]^2 + \\bigg [U^{\\prime }_i\\bigg (\\frac{1}{2} |q_i|^2\\bigg )\\bigg ]^2 \\right]M_i (q_i) {\\rm \\,d}q_i < \\infty , \\quad i=1, \\dots , K.$ In the system (REF ) the Reynolds number Re and the Deborah number De are defined, respectively, by ${\\rm Re} := \\frac{\\rho _0 L_0 U_0}{\\eta _s + \\eta _p},$ ${\\rm De} := \\frac{\\lambda U_0}{L_0} = \\frac{\\zeta _0 U_0}{4 H L_0},$ with $\\mathrm {De}$ characterising the elastic relaxation properties, $\\lambda >0$ being the characteristic relaxation time, $U_0$ the characteristic speed of the fluid, $L_0$ the characteristic macroscopic lengthscale (e.g.", "the diameter of the flow domain $\\Omega $ ), $H>0$ the spring constant, $\\zeta _0>0$ the characteristic drag coefficient, and $\\rho _0$ is the characteristic density; the Mach number and the Froude number are defined, respectively, by ${\\rm Ma} := \\frac{U_0}{ \\sqrt{ p_0 / \\rho _0} },$ ${\\rm Fr} := \\frac{U_0}{ \\sqrt{ L_0 / f_0} },$ where $p_0$ is the characteristic pressure, $f_0$ signifies the characteristic density of body forces; and $\\delta := \\frac{(l_0 / L_0)^2}{(4 (K+1) \\zeta _0 U_0)/ (4H L_0)} = \\frac{(l_0 / L_0)^2}{4 (K+1) {\\mathrm {De}}}$ is the polymeric diffusion coefficient, where $l_0$ is the characteristic microscopic lengthscale (e.g.", "the length of a typical polymer chain suspended in the solvent).", "We supplement the system of partial differential equations under consideration with a complete slip boundary condition for the velocity field of the solvent, that is, $u\\cdot n |_{\\partial \\Omega } = 0, \\quad \\quad [\\mathsf {S}(u) + \\mathsf {\\tau }_1]n \\times n |_{\\partial \\Omega } = 0.$ 
Moreover, we impose the following boundary and initial conditions on solutions of the Fokker–Planck equation: $\left[\frac{1}{4 {\mathrm {De}}} \sum _{j=1}^K A_{ij} M \nabla _{q_j} \left( \frac{\psi }{M} \right) - (\nabla _x u) q_i \psi \right]\cdot \frac{ q_i }{| q_i |} = 0 \quad \mbox{ on } \Omega \times \partial \overline{D}_i \times (0, T], \mbox{ for } i=1,\dots ,K,$ $\delta \nabla _x \psi \cdot n = 0 \quad \mbox{ on } \partial \Omega \times D \times (0,T],$ $\psi (\cdot ,\cdot ,0) = \psi _0 (\cdot , \cdot ) \ge 0 \quad \mbox{ on } \Omega \times D,$ where $\partial \overline{D}_i := D_1 \times \cdots \times D_{i-1} \times \partial D_i \times D_{i+1} \times \cdots \times D_K,$ $q_i$ is normal to $\partial D_i$ , as $D_i$ is a bounded ball centred at the origin, and $n$ is the unit outward normal vector to $\partial \Omega .$ Remark 1.1 The Navier–Stokes–Fokker–Planck system stated above is in dimensionless form.", "It is arrived at by introducing new variables of the form $X^* = \frac{X}{X_0}$ into the dimensionful version of the system, where $X_0>0$ is the characteristic value of $X$ ; for example, $L_0>0$ is a reference length, $T_0>0$ is a reference time, $U_0>0$ is a reference velocity, $\rho _0>0$ is a reference density of the solvent, proceeding similarly with the characteristic values of the other physical entities entering into the equations: $p_0>0$ , $\mu ^S_0>0$ , $\mu ^B_0>0$ , $f_0>0$ , $\eta _p>0$ .", "In the above scaling the polymeric pressure term $p_p = \xi \varrho ^2$ is treated as a part of a fluid pressure, and it is therefore rescaled by the reference value $p_0$ .", "In this paper we concentrate on the following choices of the various dimensionless numbers: ${\rm Ma} = \varepsilon \ll 1$ , i.e., the characteristic velocity is dominated by the speed of sound; ${\mathrm {De}} = 1$ and ${\rm Re} =1$ ; one can choose ${\rm Fr} = \sqrt{\varepsilon }$ (the case of low
stratification), but for the sake of simplicity we shall not consider the influence of external forces in the momentum equation in this work, and will therefore set $f = 0$ ; For the dimensionless coefficient $\\tilde{\\xi }$ we shall assume here that $\\tilde{\\xi }= \\bar{\\xi }\\, {\\rm Ma}^2 = \\bar{\\xi }\\, \\varepsilon ^2$ , with some $\\bar{\\xi }> 0$ .", "The system (REF ) therefore takes the following form: for a given $T \\in {\\mathbb {R}}_{>0}$ and a bounded open domain $\\Omega \\subset {\\mathbb {R}}^3$ , with $\\partial \\Omega \\in C^{2,\\alpha }$ , $\\alpha \\in (0,1)$ , we consider the following system of partial differential equations: $\\begin{split}\\partial _t \\rho + {{\\mathrm {div}}_x}\\,(\\rho u) & = 0 \\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t (\\rho u) + {{\\mathrm {div}}_x}\\,(\\rho u\\otimes u) + \\frac{1}{\\varepsilon ^2} \\nabla _x p(\\rho )& = {{\\mathrm {div}}_x}\\,\\mathsf {S}(u) + {{\\mathrm {div}}_x}\\,\\mathsf {\\tau }_1 - {\\bar{\\xi }} \\nabla _x \\varrho ^2\\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t \\psi + {{\\mathrm {div}}_x}\\,(u\\psi )+ \\sum _{i=1}^K {\\rm div}_{q_i} ((\\nabla _x u) q_i \\psi ) & = \\delta \\Delta _x \\psi + \\frac{1}{4} \\sum _{i=1}^{K} \\sum _{j=1}^{K} A_{ij} {\\rm div}_{q_i}\\left( M \\nabla _{q_j} \\bigg (\\frac{\\psi }{M}\\bigg )\\right) \\quad \\mbox{ in } \\Omega \\times D \\times (0,T],\\\\ \\mbox{ with } \\varrho (x,t) & := \\int _D \\psi (x,q,t) {\\rm \\,d}q, \\quad (x,t) \\in \\Omega \\times [0,T]\\end{split}\\qquad \\mathrm {(NSFP_\\varepsilon )}$ for each $\\varepsilon \\in (0,1),$ supplemented with the boundary conditions (REF )–(REF ), the initial conditions (REF ), and the initial data appearing in (REF ) assumed to satisfy the following properties: $\\Vert u_{0,\\varepsilon } \\Vert _{L^2(\\Omega )} \\le c,$ $\\rho _{0,\\varepsilon } = \\bar{\\rho }+ \\varepsilon r_{\\varepsilon ,0}\\quad \\mbox{ where } \\quad \\Vert r_{\\varepsilon ,0} \\Vert 
_{L^\\infty (\\Omega )} \\le c~\\quad \\mbox{ and } \\quad \\bar{\\rho }> 0, \\quad \\rho _{0,\\varepsilon } \\ge 0\\quad \\mbox{ a.e. }", "x\\in \\Omega , \\quad \\int _{\\Omega } r_{\\varepsilon ,0} \\,\\mathrm {d}x = 0,$ $\\psi _{0,\\varepsilon } \\ge 0 \\mbox{ a.e.", "on }\\Omega \\times D \\quad \\mbox{ and } \\quad \\Vert {\\mathcal {F}}(\\widehat{\\psi }_{0,\\varepsilon })\\Vert _{L^1_{M}(\\Omega \\times D)} \\le c, \\quad \\bigg \\Vert \\int _D \\psi _{0,\\varepsilon } (\\cdot ,q) {\\rm \\,d}q \\,\\bigg \\Vert _{L^{\\infty }_{\\ge 0} (\\Omega ) } \\le c ,$ where $\\overline{\\varrho }$ is a static constant density satisfying the equilibrium state (static state) equation $\\nabla _x p(\\bar{\\rho }) = 0 \\quad \\mbox{ in } \\Omega .$ In the above we have used the following notation: $\\widehat{\\psi }:= \\psi /M\\qquad \\mbox{and}\\qquad \\mathcal {F}(s) := s (\\log (s) - 1) \\quad \\mbox{for $s > 0$},\\quad \\mbox{with $\\mathcal {F}(0):=0.$}$ Clearly, $\\mathcal {F}^{\\prime }(s) = \\log (s)$ and $\\mathcal {F}^{\\prime \\prime }(s) = \\frac{1}{s}$ for $s>0$ .", "Remark 1.2 (Remark 1.3 in [3]) Defining the polymer number density by (REF ), formally integrating the Fokker–Planck equation in (REF ) over $D$ , and using the boundary condition (REF ), we infer that $\\partial _t \\varrho + {{\\mathrm {div}}_x}\\,(\\varrho u) = \\delta \\Delta _x \\varrho \\quad \\mbox{ on } \\Omega \\times (0,T] ,$ with $\\varrho (x,0) = \\varrho _0$ in $\\Omega $ and $\\frac{\\partial \\varrho }{\\partial n} |_{\\partial \\Omega } = 0$ .", "Hence by the boundary and initial conditions (REF ), (REF ) we get that $\\delta \\nabla _x \\varrho \\cdot n = 0 \\quad \\mbox{ on } \\partial \\Omega \\times (0,T] \\quad \\mbox{ and } \\quad \\varrho (x,0) = \\int _D \\psi _0(x,q)\\, {\\rm d} q \\quad \\mbox{ for } x \\in \\Omega .$ If ${{\\mathrm {div}}_x}\\,u= 0$ and $\\varrho (\\cdot , 0)$ is constant, then $\\varrho (x,t)$ is constant (and equal to $\\varrho (x, 0)\\equiv \\mathrm 
{Const.", "}>0$ ) for all $(x,t) \\in \\Omega \\times (0,T]$ , and thus $ \\varrho (x,t) = \\int _D \\psi (x,q,t)\\, {\\rm d} q = \\int _D \\psi _0(x,q)\\, {\\rm d} q= \\varrho (x,0)\\in {\\mathbb {R}}_{>0} \\quad \\mbox{ for all } (x,t) \\in \\Omega \\times (0,T].", "$ In other words, the polymer number density is constant.", "Our aim is to show by rigorous analysis that, as the Mach number converges to zero, the primitive system (REF ) converges to the following incompressible Navier–Stokes–Fokker–Planck system: $\\begin{split}{{\\mathrm {div}}_x}\\,U& = 0 \\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t (\\bar{\\rho }U) + {{\\mathrm {div}}_x}\\,(\\bar{\\rho }U\\otimes U) + \\nabla _x \\Pi & = \\mu _S {{\\mathrm {div}}_x}\\,\\mathsf {D}_x U+ {{\\mathrm {div}}_x}\\,\\mathsf {\\tau }_1 (\\Psi )\\quad \\mbox{ in } \\Omega \\times (0,T], \\\\\\partial _t \\Psi + {{\\mathrm {div}}_x}\\,(U\\Psi )+ \\sum _{i=1}^K {\\rm div}_{q_i} ((\\nabla _x U) q_i \\Psi ) & = \\delta \\Delta _x \\Psi + \\frac{1}{4} \\sum _{i=1}^{K} \\sum _{j=1}^{K} A_{ij} {\\rm div}_{q_i}\\left( M \\nabla _{q_j} \\left(\\frac{\\Psi }{M}\\right)\\right) \\quad \\mbox{ in } \\Omega \\times (0,T],\\end{split}$ where $\\Pi $ is a new pressure function, $U$ is the limiting divergence-free velocity field, the density of the solvent $\\bar{\\rho }$ is constant, $\\mathsf {\\tau }_1 := (1-\\beta )\\left(\\sum _{i=1}^K \\mathsf {C}_i ( \\Psi ) - (K+1) \\int _D \\Psi {\\rm \\,d}q\\, \\mathsf {I}\\right)$ is the Kramers expression for the polymeric extra stress tensor in the incompressible limit, with, again, $(1-\\beta ) := \\eta _p / (\\eta _s + \\eta _p)$ , and $\\mathsf {C}_i ( \\Psi ) (x,t) := \\int _{D} \\Psi (x,q,t)\\, U^{\\prime }_i\\bigg (\\frac{1}{2} | q_i|^2\\bigg ) q_i q^T_i {\\rm \\,d}q, \\quad i=1,\\dots ,K.$ Remark 1.3 One can formally arrive at the system (REF ) by inserting in the primitive system the expansions: $\\begin{split}\\rho & = \\bar{\\rho }+ \\varepsilon \\rho ^{(1)} + 
\\varepsilon ^2 \\rho ^{(2)} + \\cdots , \\\\u &= U + \\varepsilon u^{(1)} + \\varepsilon ^2u^{(2)} + \\cdots ,\\\\\\psi & = \\Psi + \\varepsilon \\psi ^{(1)} + \\varepsilon ^2 \\psi ^{(2)} + \\cdots ,\\end{split}$ and neglecting all terms of order $\\varepsilon $ and higher.", "Our goal here is to deduce (REF ) from the primitive system (REF ) through a rigorous passage to the limit, as $\\varepsilon \\rightarrow 0$ (i.e, as $\\mathrm {Ma} \\rightarrow 0$ )." ], [ "Main result", "For $r \\in [1,\\infty )$ , let $L^r_M(\\Omega \\times D)$ denote the Maxwellian-weighted Lebesgue space equipped with the norm $\\Vert \\varphi \\Vert _{L^r_M} := \\left( \\int _{\\Omega \\times D} M |\\varphi |^r {\\rm \\,d}q\\,{\\rm d}x\\right)^{\\frac{1}{r}},$ and let $H^1_M(\\Omega \\times D)$ denote the Maxwellian-weighted Sobolev space with the norm $\\Vert \\varphi \\Vert _{H^1_M} := \\left( \\int _{\\Omega \\times D} M \\left[ |\\varphi |^2 + | \\nabla _x \\varphi |^2 + | \\nabla _q \\varphi |^2 \\right]{\\rm \\,d}q\\,{\\rm d}x\\right)^{\\frac{1}{2}}.$ We also define $Z_r := \\lbrace \\phi \\in L^r_M(\\Omega \\times D)\\,:\\, \\phi \\ge 0\\, \\mbox{ a.e.", "on }\\Omega \\times D \\rbrace .$ Definition 1.1 We call a triple $(\\rho _\\varepsilon , u_\\varepsilon , \\widehat{\\psi }_\\varepsilon )$ a weak solution to the system (REF ) provided that: 1) The functions $\\rho _\\varepsilon $ , $u_\\varepsilon $ , $\\widehat{\\psi }_\\varepsilon $ satisfy the following regularity properties: $\\rho _\\varepsilon \\in C_{w} ([0,T]; L^\\gamma _{\\ge 0} (\\Omega )) \\cap H^1 (0,T; W^{1,6}(\\Omega )^{\\prime }) \\cap L^2(0,T; H^1(\\Omega )^{\\prime }),$ $u_\\varepsilon \\in L^2 (0,T ; H^1 (\\Omega )^3) \\quad \\mbox{ and } \\quad \\widehat{\\psi }_\\varepsilon \\in L^{v}(0,T;Z_1) \\cap H^1(0,T; M^{-1} (H^s(\\Omega \\times D))^{\\prime }),$ where $v \\in [1,\\infty )$ and $s > 1+ \\frac{3}{2} (K +1) $ , with finite relative entropy and Fisher information, ${\\mathcal {F}}(\\widehat{\\psi 
}_\\varepsilon ) \\in L^\\infty (0,T; L^1_M(\\Omega \\times D)) \\quad \\mbox{ and } \\sqrt{\\widehat{\\psi }_\\varepsilon } \\in L^2(0,T; H^1_M(\\Omega \\times D)),$ $\\mathsf {\\tau }_1 (M \\widehat{\\psi }_\\varepsilon ) \\in L^r(\\Omega \\times [0,T))^{3 \\times 3}, \\quad \\mbox{ where } r \\in \\bigg [ 1, \\frac{20}{13} \\bigg );$ 2) In addition, $\\varrho _\\varepsilon := \\int _D M\\widehat{\\psi }_\\varepsilon {\\rm \\,d}q \\in L^\\infty (0,T;L^2(\\Omega )) \\cap L^2(0,T; H^1(\\Omega )),$ and $\\varrho _\\varepsilon \\in L^{\\frac{5\\zeta }{3(\\zeta -1)}}(0,T; L^\\zeta (\\Omega )) \\quad \\quad \\mbox{ for any }\\zeta \\in (1,6);$ 3) Moreover the following relations are satisfied: $\\int _0^T \\left\\langle \\partial _t \\rho _\\varepsilon , \\eta \\right\\rangle _{W^{1,6}(\\Omega )} \\, {\\rm d}t- \\int _0^T \\int _\\Omega \\rho _\\varepsilon u_\\varepsilon \\cdot \\nabla _x \\eta \\,{\\rm d}x\\, {\\rm d}t= 0 \\quad \\mbox{ for all } \\eta \\in L^2(0,T; W^{1,s}(\\Omega )) ,$ for some $s\\in \\left( \\left(\\frac{6\\gamma }{6+\\gamma }\\right)^{\\prime },\\infty \\right]$ , where $\\left(\\frac{6\\gamma }{6+\\gamma }\\right)^{\\prime }$ is the Hölder conjugate of $\\frac{6\\gamma }{6+\\gamma }$ , and with $\\rho _\\varepsilon ( \\cdot , 0) = \\rho _{0,\\varepsilon }$ , $\\begin{split}& \\int _0^T \\left\\langle \\partial _t(\\rho _\\varepsilon u_\\varepsilon ), w \\right\\rangle _{W^{1,r}_0(\\Omega )} \\, {\\rm d}t+ \\int _0^T \\int _\\Omega \\left[ \\mathsf {S}(u_\\varepsilon ) - \\rho _\\varepsilon u_\\varepsilon \\otimes u_\\varepsilon - \\frac{1}{\\varepsilon }c_p \\rho _\\varepsilon ^\\gamma \\, \\mathsf {I} \\right] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\\\ &= - \\int _0^T \\int _\\Omega \\left( \\mathsf {\\tau }_1(M \\widehat{\\psi }_\\varepsilon ) - \\bar{\\xi }\\varrho _\\varepsilon ^2\\, \\mathsf {I} \\right): \\nabla _x w\\,{\\rm d}x\\, {\\rm d}t\\quad \\mbox{ for all } w \\in L^{\\frac{\\gamma + \\vartheta }{\\vartheta }}(0,T; 
W^{1,r}(\\Omega )^3), \\mbox{ where } w\\cdot n|_{\\partial \\Omega }=0,\\end{split}$ with $(\\rho u)(\\cdot ,0) = (\\rho _0u_0)(\\cdot ),$ $\\vartheta (\\gamma ):= \\frac{2\\gamma - 3}{3}$ for $\\frac{3}{2}< \\gamma \\le 4$ and $\\vartheta (\\gamma ):= \\frac{5}{12}\\gamma $ for $4\\le \\gamma $ ; $r:= \\max \\left\\lbrace 4, \\frac{6\\gamma }{2\\gamma - 3} \\right\\rbrace $ , and $\\begin{split}& \\int _0^T \\left\\langle M\\partial _t \\widehat{\\psi }_\\varepsilon , \\varphi \\right\\rangle _{H^s (\\Omega \\times D)} \\, {\\rm d}t+ \\frac{1}{4} \\sum _{i,j=1}^K A_{ij} \\int _{\\Omega \\times D} M \\nabla _{q_j} {\\widehat{\\psi }_\\varepsilon }\\cdot \\nabla _{q_i} {\\varphi } {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\ &\\quad + \\int _0^T \\int _{\\Omega \\times D} M [ \\delta \\nabla _x \\widehat{\\psi }_\\varepsilon - u_\\varepsilon \\widehat{\\psi }_\\varepsilon ] \\cdot \\nabla _x \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t- \\int _0^T \\int _{\\Omega \\times D } M \\sum _{i=1}^K [( \\nabla _x u_\\varepsilon ) q_i ] \\widehat{\\psi }_\\varepsilon \\cdot \\nabla _{q_i} \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\ & = 0 \\quad \\mbox{ for all } \\varphi \\in L^2(0,T ; H^s(\\Omega \\times D)),\\end{split}$ with $\\widehat{\\psi }_\\varepsilon (\\cdot , 0) = \\widehat{\\psi }_{0,\\varepsilon }(\\cdot )$ and $s > 1 + \\frac{3}{2}(K+1).$ Let $H$ denote the standard Helmholtz projection onto the space of solenoidal (divergence-free) functions; that is, $v = {H}[v] + {H}^{\\bot }[v], \\quad H^{\\bot }[v] := \\nabla _x \\Phi ,$ where $\\Phi \\in H^1(\\Omega )/{\\mathbb {R}}$ is the unique solution of the problem $\\int _\\Omega \\nabla _x \\Phi \\cdot \\nabla _x \\phi \\,{\\rm d}x= \\int _\\Omega v \\cdot \\nabla _x \\phi \\,{\\rm d}x\\quad \\mbox{ for all }\\phi \\in H^1(\\Omega )/{\\mathbb {R}}.$ Definition 1.2 (Weak solution to the limiting system) We shall say that the couple $(U, \\Psi )$ is a weak solution to the problem (REF ), provided that the 
following properties hold: 1) $U \in L^\infty (0,T;L^2 (\Omega )^3) \cap L^2(0,T;H^1(\Omega )^3)$ , $U \cdot n |_{\partial \Omega } = 0$ for a.e.", "$t \in (0,T)$ , ${{\mathrm {div}}_x}\,U = 0$ for a.e.", "$(x,t) \in \Omega \times (0,T)$ and $\widehat{\Psi }\in L^{v}(0,T;Z_1) \cap H^1(0,T; M^{-1} (H^s(\Omega \times D))^{\prime }),$ where $v \in [1,\infty )$ and $s > 1+ \frac{3}{2} (K +1) $ , with finite relative entropy and Fisher information, i.e., ${\mathcal {F}}(\widehat{\Psi }) \in L^\infty (0,T; L^1_M(\Omega \times D)) \quad \mbox{ and } \quad \sqrt{\widehat{\Psi }} \in L^2(0,T; H^1_M(\Omega \times D)),$ $\mathsf {\tau }_1 (M \widehat{\Psi }) \in L^r(\Omega \times [0,T))^{3 \times 3}, \quad \mbox{ where } r \in \bigg [ 1, \frac{20}{13} \bigg ).$ 2) Moreover the following relations are satisfied: $\begin{split}& - \int _0^T \int _\Omega \bar{\rho }U\cdot \partial _t w \,{\rm d}x\, {\rm d}t+ \int _0^T \int _\Omega \left[ \mathsf {S}(U) - \bar{\rho }U\otimes U\right] : \nabla _x w \,{\rm d}x\, {\rm d}t\\ &\qquad = \int _\Omega \bar{\rho }U_0 \cdot w(0,\cdot ) \,{\rm d}x- \int _0^T \int _\Omega \left( \mathsf {\tau }_1(M \widehat{\Psi }) \right): \nabla _x w\,{\rm d}x\, {\rm d}t\\ &\hspace{-28.45274pt}\mbox{ for all } w \in L^{\frac{\gamma + \vartheta }{\vartheta }}(0,T; W^{1,r}(\Omega )^3),\mbox{ such that } {{\mathrm {div}}_x}\,w = 0,\ w \cdot n|_{\partial \Omega } = 0,\end{split}$ where $U_0 \in L^2(\Omega \times (0,T))^3$ , $\vartheta (\gamma )= \frac{2\gamma - 3}{3}$ for $\frac{3}{2}< \gamma \le 4$ and $\vartheta (\gamma )= \frac{5}{12}\gamma $ for $4\le \gamma $ ; $r= \max \left\lbrace 4, \frac{6\gamma }{2\gamma - 3} \right\rbrace $ , and $\begin{split}& \int _0^T \left\langle M\partial _t \widehat{\Psi }, \varphi \right\rangle _{H^s (\Omega \times D)} \, {\rm d}t+ \frac{1}{4} \sum _{i,j=1}^K
A_{ij} \\int _{\\Omega \\times D} M \\nabla _{q_j} {\\widehat{\\Psi }}\\cdot \\nabla _{q_i} {\\varphi } {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\ &\\quad + \\int _0^T \\int _{\\Omega \\times D} M [ \\delta \\nabla _x \\widehat{\\Psi }- U\\widehat{\\Psi }] \\cdot \\nabla _x \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t- \\int _0^T \\int _{\\Omega \\times D } M \\sum _{i=1}^K [( \\nabla _x U) q_i ] \\widehat{\\Psi }\\cdot \\nabla _{q_i} \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t= 0\\\\ &\\mbox{ for all } \\varphi \\in L^2(0,T ; H^s(\\Omega \\times D)),\\end{split}$ with $\\widehat{\\Psi }(\\cdot , 0) = \\widehat{\\Psi }_0(\\cdot )$ and $s > 1 + \\frac{3}{2}(K+1).$ Here $\\widehat{\\Psi }_0 \\in L^1_M(\\Omega \\times D;\\mathbb {R}_{\\ge 0})$ and $\\mathcal {F}(\\widehat{\\Psi }_0) \\in L^1_M(\\Omega \\times D;\\mathbb {R}_{\\ge 0})$ , and $\\int _D M \\widehat{\\Psi }_0 \\in L^\\infty (\\Omega ;\\mathbb {R}_{\\ge 0})$ .", "Our main result is then the following theorem.", "Theorem 1.1 (Main theorem) Suppose that $\\Omega \\subset {\\mathbb {R}}^3$ is a bounded open domain with $\\partial \\Omega \\in C^{2, \\alpha }$ , with some $\\alpha \\in (0,1)$ .", "Let $p$ satisfy (REF ) with $\\gamma > \\frac{3}{2}$ , let the stress tensor $\\mathsf {S}$ satisfy (REF ), and let the Rouse matrix satisfy (REF ).", "Let $(\\rho _\\varepsilon ,\\, u_\\varepsilon ,\\, \\psi _\\varepsilon )$ be a weak solution triple to the system (REF ) emanating from the initial data satisfying (REF )–(REF ) and subject to the boundary conditions (REF )–(REF ).", "Then, by passing to suitable subsequences if necessary, we have that $\\rho _\\varepsilon \\rightarrow \\bar{\\rho }\\mbox{ strongly in }(L^2+L^q)(\\Omega \\times (0,T)) \\quad \\mbox{ with }q=\\min \\lbrace 2,\\gamma \\rbrace ,$ $u_{0,\\varepsilon } \\rightharpoonup U_0 \\mbox{ weakly in } L^2(\\Omega \\times (0,T))^3,$ $u_\\varepsilon \\rightharpoonup U\\mbox{ weakly in } L^2(0,T; H^1(\\Omega )^3),$ $\\widehat{\\psi }_\\varepsilon 
\\rightarrow \\widehat{\\Psi }\\mbox{ strongly in } \\mbox{ in } L^v (0,T; L^1_M(\\Omega \\times D)),$ $\\varrho _\\varepsilon \\stackrel{\\ast }{\\rightharpoonup }\\varrho \\quad \\mbox{ weakly-* in }L^\\infty (0,T; L^2(\\Omega )),\\quad \\varrho _\\varepsilon \\rightharpoonup \\varrho \\quad \\mbox{ weakly in }L^2(0,T; H^1(\\Omega )).$ Moreover, the couple $(U,\\Psi )$ is a weak solution to (REF ) according to Definition REF with initial datum $U(0,\\cdot ) = H[U_0]$ , where $U_0$ and $\\Psi _0$ are weak limits of (REF ) and (REF ), and $\\widehat{\\Psi }_0 :=\\Psi _0/M$ ." ], [ "Proof of the main result", "Before embarking on the proof, a remark is in order concerning our choice of boundary condition for the velocity field.", "Remark 2.1 As was noted in the Introduction, we have confined ourselves in this paper to the case of a complete slip boundary condition, which models an acoustically hard boundary.", "Then, if $\\Omega \\subset {\\mathbb {R}}^3$ is a bounded domain, as is being assumed here, the gradient part of the velocity field associated to acoustic waves may exhibit fast oscillations in time, and the convergence of the velocity field $u_\\varepsilon $ in the limit of $\\varepsilon \\rightarrow 0$ is genuinely weak with respect to the temporal variable (see [7]).", "In the case of a Dirichlet (no-slip) boundary condition for the velocity field, a boundary layer may appear because of the presence of viscosity and the specific geometrical properties of the boundary.", "The decay of the acoustic waves and consequently of the velocity field can then be deduced as in [5].", "Our starting point is the following result concerning the existence of weak solutions to the primitive system.", "Its proof for the case of a homogeneous Dirichlet boundary condition for the velocity field can be found in [3], and the arguments contained therein can be easily adapted to the case of complete slip boundary condition by replacing the function space of divergence-free 
three-component vector functions contained in $H^1_0(\Omega )$ throughout the proof in [3] by the function space of divergence-free three-component vector functions contained in $H^1(\Omega )$ with vanishing normal trace on $\partial \Omega $ .", "Proposition 2.1 (Existence of solutions to the primitive system) For each fixed $\varepsilon >0$ there exists a triple $(\rho _\varepsilon , u_\varepsilon , \widehat{\psi }_\varepsilon )$ , which is a global weak solution to problem (REF ) in the sense of Definition REF .", "Furthermore, the solution triple $(\rho _\varepsilon , u_\varepsilon , \widehat{\psi }_\varepsilon )$ satisfies, for a.e.", "$t^{\prime } \in (0,T)$ , the energy inequality $\begin{split}& \frac{1}{2} \int _\Omega \rho _\varepsilon (t^{\prime }) |u_\varepsilon (t^{\prime }) |^2 \,{\rm d}x+ \frac{1}{\varepsilon ^2} \int _\Omega \frac{1}{\gamma -1}p(\rho _\varepsilon (t^{\prime })) \,{\rm d}x+ k \int _{\Omega \times D} M {\mathcal {F}}(\widehat{\psi }_\varepsilon (t^{\prime })) {\rm \,d}q \,{\rm d}x+ \bar{\xi }\Vert \varrho _\varepsilon (t^{\prime }) \Vert ^2_{L^2(\Omega )}\\ &\quad + \mu ^S c_0 \int _0^{t^{\prime }} \Vert u_\varepsilon \Vert ^2_{H^1(\Omega )} \, {\rm d}t\\ &\quad + k \int _0^{t^{\prime }} \int _{\Omega \times D} M \Big [\frac{a_0}{2 \lambda } \big | \nabla _q \sqrt{\widehat{\psi }_\varepsilon } \big |^2+ 2 \delta \big | \nabla _x \sqrt{\widehat{\psi }_\varepsilon }\big |^2 \Big ] {\rm \,d}q\,{\rm d}x\, {\rm d}t+2 \bar{\xi }\delta \int _0^{t^{\prime }} \Vert \nabla _x \varrho _\varepsilon \Vert ^2_{L^2(\Omega )} \, {\rm d}t\\ & \le \exp (t^{\prime }) \Big [\frac{1}{2} \int _{\Omega } \rho _{0,\varepsilon }|u_{0,\varepsilon }|^2 \,{\rm d}x+ \int _\Omega P(\rho _{0,\varepsilon }) \,{\rm d}x+ k \int _{\Omega \times D} M {\mathcal {F}}(\widehat{\psi }_{0,\varepsilon })
{\\rm \\,d}q \\,{\\rm d}x+\\bar{\\xi }\\int _{\\Omega } \\bigg (\\int _D M \\widehat{\\psi }_{0,\\varepsilon } {\\rm \\,d}q\\bigg )^2 \\,{\\rm d}x\\Big ].\\end{split}$" ], [ "Energy equality", "In order to obtain uniform bounds on $(\\rho _\\varepsilon , u_\\varepsilon , \\widehat{\\psi }_\\varepsilon )$ with respect to $\\varepsilon $ , we shall prove a formal energy equality.", "Its derivation is in fact the same as in the proof of the existence of weak solutions in Proposition REF .", "For the moment we shall keep all characteristic numbers in their general form and will permit a nonzero density of body forces $f$ so as to state the formal energy equality in its most general form.", "By taking the $L^2(\\Omega )$ inner product of the continuity equation first with $\\frac{1}{2} |u|^2$ and then with $P^{\\prime }(\\rho ) - P^{\\prime }(\\bar{\\rho })\\rho $ , where $P(\\rho ) = \\frac{p(\\rho )}{\\gamma - 1}$ and where $\\bar{\\rho }$ is determined by (REF ), and taking the $L^2(\\Omega )^3$ inner product of the momentum equation with $u$ (by noting the boundary conditions and performing partial integration) we deduce that $\\begin{split}& \\frac{{\\rm d}}{{\\rm d} t} \\int _\\Omega \\bigg (\\frac{1}{2} \\rho |u|^2 + \\frac{1}{{\\rm Ma}^2} (P(\\rho ) - P^{\\prime }(\\bar{\\rho })(\\rho - \\bar{\\rho }) - P(\\bar{\\rho }) )\\bigg )\\,{\\rm d}x+ \\frac{1}{{\\rm Re}} \\int _\\Omega \\mu ^S | \\mathsf {D} u- \\frac{1}{3} ({{\\mathrm {div}}_x}\\,u) \\mathsf {I} |^2 \\,{\\rm d}x\\\\ & + \\frac{1}{{\\rm Re}} \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u|^2 \\,{\\rm d}x+ \\frac{1}{{\\rm Re}} \\int _\\Omega \\mathsf {\\tau }_1 : \\mathsf {D} u\\,{\\rm d}x- \\frac{\\tilde{\\xi }}{{\\rm Ma}^2} \\int _\\Omega \\varrho ^2 {{\\mathrm {div}}_x}\\,u\\,{\\rm d}x= \\frac{1}{{\\rm Fr}^2} \\int _\\Omega \\rho f \\cdot u\\,{\\rm d}x.\\end{split}$ Now let us concentrate on the Fokker–Planck equation.", "Multiplying the Fokker–Planck equation in (REF ) by $\\mathcal {F}^{\\prime 
}(\\widehat{\\psi })$ , integrating over $\\Omega \\times D$ and noticing that $\\nabla _{q_i} M = - M q_i U^{\\prime }_i \\left( \\frac{ | q_i |^2 }{2} \\right)$ , we deduce that $\\begin{split}& \\frac{\\rm d}{\\, {\\rm d}t} \\int _{\\Omega \\times D} M\\mathcal {F}(\\widehat{\\psi }) {\\rm \\,d}q \\,{\\rm d}x+ (K + 1) \\int _{\\Omega \\times D} M({{\\mathrm {div}}_x}\\,u) \\widehat{\\psi }{\\rm \\,d}q\\,{\\rm d}x\\\\& \\quad - \\sum _{i=1}^K \\int _{\\Omega \\times D} (q^T_i q_i) M U^{\\prime }_i \\left( \\frac{ | q_i |^2 }{2} \\right) \\widehat{\\psi }: (\\nabla _x u) {\\rm \\,d}q\\,{\\rm d}x\\\\& \\quad + 4 \\delta \\int _{\\Omega \\times D} M |\\nabla _x \\sqrt{\\widehat{\\psi }} |^2 {\\rm \\,d}q\\,{\\rm d}x+ \\frac{1}{{\\mathrm {De}}} \\sum _{i,j=1}^K A_{ij} \\int _{\\Omega \\times D} M \\nabla _{q_i} \\sqrt{\\widehat{\\psi }}\\cdot \\nabla _{q_j} \\sqrt{\\widehat{\\psi }} {\\rm \\,d}q\\,{\\rm d}x= 0.\\end{split}$ Multiplying (REF ) by $\\frac{1-\\beta }{{\\rm Wi\\, Re}}$ and adding to (REF ) we deduce that $\\begin{split}& \\frac{{\\rm d}}{{\\rm d} t} \\int _\\Omega \\bigg (\\frac{1}{2} \\rho |u|^2 + \\frac{1}{{\\rm Ma}^2} (P(\\rho ) - P^{\\prime }(\\bar{\\rho })(\\rho - \\bar{\\rho }) - P(\\bar{\\rho }) )\\bigg )\\,{\\rm d}x+ \\frac{1}{{\\rm Re}} \\int _\\Omega \\mu ^S | \\mathsf {D} u- \\frac{1}{3} ({{\\mathrm {div}}_x}\\,u) \\mathsf {I} |^2 \\,{\\rm d}x\\\\ &\\quad + \\frac{1}{{\\rm Re}} \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u|^2 \\,{\\rm d}x+ \\frac{1}{{\\rm Re}} \\frac{1-\\beta }{\\mathrm {De}} \\frac{\\rm d}{\\, {\\rm d}t} \\int _{\\Omega \\times D} M \\mathcal {F}(\\widehat{\\psi }) {\\rm \\,d}q \\,{\\rm d}x+ \\frac{ 4 \\delta }{\\rm Re} \\frac{1-\\beta }{\\mathrm {De}} \\delta \\int _{\\Omega \\times D} M |\\nabla _x \\sqrt{\\widehat{\\psi }} |^2 {\\rm \\,d}q\\,{\\rm d}x\\\\&\\quad + \\frac{1}{{\\rm Wi\\,Re}} \\frac{1-\\beta }{\\mathrm {De}} \\int _{\\Omega \\times D} \\sum _{i,j=1}^K A_{ij} M \\nabla _{q_i} \\sqrt{\\widehat{\\psi }}\\cdot 
\\nabla _{q_j} \\sqrt{\\widehat{\\psi }} {\\rm \\,d}q\\,{\\rm d}x- \\frac{\\tilde{\\xi }}{{\\rm Ma}^2} \\int _\\Omega \\varrho ^2 {{\\mathrm {div}}_x}\\,u\\,{\\rm d}x= \\frac{1}{{\\rm Fr}^2} \\int _\\Omega \\rho f \\cdot u\\,{\\rm d}x.\\end{split}$ It remains to deal with the last term on the left-hand side of (REF ).", "To this end we integrate the Fokker–Planck equation over $D$ and we get (REF ).", "After multiplying (REF ) by $\\varrho $ and integrating over $\\Omega $ we have $\\frac{{\\rm d}}{{\\rm d} t} \\int _\\Omega \\varrho ^2 \\,{\\rm d}x+ 2 \\delta \\int _\\Omega |\\nabla _x \\varrho |^2 \\,{\\rm d}x= -\\int _\\Omega \\varrho ^2 ({{\\mathrm {div}}_x}\\,u) \\,{\\rm d}x.$ Multiplying (REF ) by $\\frac{\\tilde{\\xi }}{{\\rm Ma}^2}$ and substituting the resulting expression into (REF ) gives $\\begin{split}& \\frac{\\rm d}{\\, {\\rm d}t} \\left[ \\int _\\Omega \\left(\\frac{1}{2} \\rho |u|^2 + \\frac{1}{{\\rm Ma}^2} (P(\\rho ) - P^{\\prime }(\\bar{\\rho })(\\rho - \\bar{\\rho }) - P(\\bar{\\rho }) )\\right)\\,{\\rm d}x+ \\frac{1-\\beta }{{\\rm Re}\\,{\\mathrm {De}}} \\int _{\\Omega \\times D} M \\mathcal {F}(\\widehat{\\psi }){\\rm \\,d}q\\,{\\rm d}x+ \\frac{\\tilde{\\xi }}{{\\rm Ma}^2} \\int _\\Omega \\varrho ^2 \\,{\\rm d}x\\right]\\\\ &\\quad + \\Big \\lbrace \\frac{1}{{\\rm Re}} \\int _\\Omega \\mu ^S | \\mathsf {D}(u) - \\frac{1}{3} ( {{\\mathrm {div}}_x}\\,u) |^2 \\,{\\rm d}x+ \\frac{1}{{\\rm Re} } \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u|^2 \\,{\\rm d}x+ \\frac{4\\delta (1-\\beta )}{{\\rm Re}\\, {\\mathrm {De}}} \\int _{\\Omega \\times D} M | \\nabla _x \\sqrt{\\widehat{\\psi }}|^2 {\\rm \\,d}q\\,{\\rm d}x\\\\ &\\quad + \\frac{1-\\beta }{{\\rm Re}\\, {\\mathrm {De}}^2}\\int _{\\Omega \\times D} M \\sum _{i,j = 1}^K A_{ij} \\nabla _{q_i} \\sqrt{\\widehat{\\psi }} \\cdot \\nabla _{q_j} \\sqrt{\\widehat{\\psi }} {\\rm \\,d}q\\,{\\rm d}x+ \\frac{2 \\delta \\tilde{\\xi }}{{\\rm Ma}^2} \\int _\\Omega | \\nabla _x \\varrho |^2 \\,{\\rm d}x\\Big 
\\rbrace \\\\ &= \\frac{1}{{\\rm Fr}^2} \\int _\\Omega \\rho f \\cdot u\\,{\\rm d}x.\\end{split}$ In accordance with our assumptions on the characteristic numbers we shall take $f = 0$ , ${\\rm Ma} = \\varepsilon $ , ${\\mathrm {De}} = 1$ , ${\\rm Re} = 1$ , and $\\tilde{\\xi }= \\bar{\\xi }\\varepsilon ^2$ in the above equality; thereby, $\\begin{split}& \\frac{{\\rm d}}{{\\rm d} t} \\left[ \\int _\\Omega \\left(\\frac{1}{2} \\rho _\\varepsilon |u_\\varepsilon |^2 + \\frac{1}{{\\varepsilon }^2} (P(\\rho _\\varepsilon ) - P^{\\prime }(\\bar{\\rho })(\\rho _\\varepsilon - \\bar{\\rho }) - P(\\bar{\\rho }) )\\right)\\,{\\rm d}x+ (1-\\beta ) \\int _{\\Omega \\times D} M \\mathcal {F}(\\widehat{\\psi }_\\varepsilon ){\\rm \\,d}q\\,{\\rm d}x+ {\\bar{\\xi }} \\int _\\Omega \\varrho ^2_\\varepsilon \\,{\\rm d}x\\right]\\\\ &\\quad + \\Big \\lbrace \\int _\\Omega \\mu ^S | \\mathsf {D}(u_\\varepsilon ) - \\frac{1}{3} ( {{\\mathrm {div}}_x}\\,u_\\varepsilon ) \\mathsf {I} |^2 \\,{\\rm d}x+ \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u_\\varepsilon |^2 \\,{\\rm d}x+ 4\\delta (1-\\beta ) \\int _{\\Omega \\times D} M | \\nabla _x \\sqrt{\\widehat{\\psi }_\\varepsilon }|^2 {\\rm \\,d}q\\,{\\rm d}x\\\\ & \\quad + (1-\\beta )\\int _{\\Omega \\times D} M \\sum _{i,j = 1}^K A_{ij} \\nabla _{q_i} \\sqrt{\\widehat{\\psi }_\\varepsilon } \\cdot \\nabla _{q_j} \\sqrt{\\widehat{\\psi }_\\varepsilon } {\\rm \\,d}q\\,{\\rm d}x+ 2 \\delta \\bar{\\xi }\\int _\\Omega | \\nabla _x \\varrho _\\varepsilon |^2 \\,{\\rm d}x\\Big \\rbrace \\\\ &= 0 \\, .\\end{split}$ Consequently, using also (REF ), we obtain $\\begin{split}&\\left[ \\int _\\Omega \\left(\\frac{1}{2} \\rho _\\varepsilon |u_\\varepsilon |^2 + \\frac{1}{{\\varepsilon }^2} ({P}(\\rho _\\varepsilon )- {P}^{\\prime }(\\bar{\\rho })(\\rho _\\varepsilon - \\bar{\\rho }) - {P}(\\bar{\\rho }) )\\right)\\,{\\rm d}x+ (1-\\beta ) \\int _{\\Omega \\times D} M \\mathcal {F}(\\widehat{\\psi }_\\varepsilon ){\\rm \\,d}q\\,{\\rm d}x+ {\\bar{\\xi }} \\int 
_\\Omega \\varrho ^2_\\varepsilon \\,{\\rm d}x\\right] (t)\\\\ &\\quad + \\int _0^t \\Big \\lbrace \\int _\\Omega \\mu ^S | \\mathsf {D}(u_\\varepsilon ) - \\frac{1}{3} ( {{\\mathrm {div}}_x}\\,u_\\varepsilon ) \\mathsf {I} |^2 \\,{\\rm d}x+ \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u_\\varepsilon |^2 \\,{\\rm d}x+ 4\\delta (1-\\beta ) \\int _{\\Omega \\times D} M | \\nabla _x \\sqrt{\\widehat{\\psi }_\\varepsilon }|^2 {\\rm \\,d}q\\,{\\rm d}x\\\\ &\\quad + (1-\\beta ) a_0\\int _{\\Omega \\times D} M| \\nabla _q \\sqrt{\\widehat{\\psi }_\\varepsilon }|^2 {\\rm \\,d}q\\,{\\rm d}x+ 2 \\delta \\bar{\\xi }\\int _\\Omega | \\nabla _x \\varrho _\\varepsilon |^2 \\,{\\rm d}x\\Big \\rbrace \\, {\\rm d}t\\\\ &\\le \\left[ \\int _\\Omega \\Big (\\frac{1}{2} \\rho _{0,\\varepsilon } |u_{0,\\varepsilon }|^2 + \\frac{1}{{\\varepsilon }^2} ({P}(\\rho _{0,\\varepsilon }) - {P}^{\\prime }(\\bar{\\rho })(\\rho _{0,\\varepsilon } - \\bar{\\rho }) - {P}(\\bar{\\rho }) ) \\Big ) \\,{\\rm d}x\\right.\\\\ & \\quad \\left.+ (1-\\beta ) \\int _{\\Omega \\times D} M \\mathcal {F}(\\widehat{\\psi }_{0,\\varepsilon }){\\rm \\,d}q\\,{\\rm d}x+ {\\bar{\\xi }} \\int _\\Omega \\varrho ^2_{0,\\varepsilon } \\,{\\rm d}x\\right], \\qquad \\mbox{for a.e. $t \\in (0,T)$}.\\end{split}$ Remark 2.2 We note that ${P}^{\\prime \\prime }(\\rho ) = p^{\\prime }(\\rho ) / \\rho > 0$ for all $\\rho \\in (0,\\infty )$ .", "Thus ${P}$ is a strictly convex function of $\\rho $ on $[0,\\infty )$ , and ${P}(\\rho ) - {P}^{\\prime }(\\bar{\\rho })(\\rho - \\bar{\\rho }) - {P}(\\bar{\\rho }) \\approx c (\\rho - \\bar{\\rho })^2$ provided that $\\rho $ is close to $\\bar{\\rho }$ ."
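, "Indeed, by Taylor's theorem, for each $\\rho $ there exists a $\\sigma $ between $\\rho $ and $\\bar{\\rho }$ such that ${P}(\\rho ) - {P}^{\\prime }(\\bar{\\rho })(\\rho - \\bar{\\rho }) - {P}(\\bar{\\rho }) = \\frac{1}{2} {P}^{\\prime \\prime }(\\sigma )\\, (\\rho - \\bar{\\rho })^2 ,$ so, for $\\rho \\in [\\bar{\\rho }/2, 2\\bar{\\rho }]$ say, the asserted quadratic behaviour holds with constants determined by the minimum and maximum of ${P}^{\\prime \\prime }$ over this interval, both of which are positive and finite by the continuity and positivity of ${P}^{\\prime \\prime }$ on $(0,\\infty )$ ."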
], [ "Uniform estimates", "In order to deduce uniform bounds from (REF ) we introduce, as in [7], the decomposition into essential and residual parts of a measurable function $h$ as $h = [h]_{{\\rm {ess}}} + [h]_{{\\rm {res}}},\\quad \\ [ h]_{{\\rm {ess}}} := \\chi (\\rho _\\varepsilon )h,\\quad \\ [h]_{{\\rm {res}}} := (1-\\chi (\\rho _\\varepsilon ))h,$ with $\\chi \\in C^\\infty _c ( (0,\\infty ) )$ , $0\\le \\chi \\le 1, \\ \\chi =1$ on the set $ {\\cal {O}}_{{\\rm {ess}}}$ , where ${\\cal {O}}_{{\\rm {ess}}} := [\\bar{\\rho }/2 , 2 \\bar{\\rho }] ,\\quad {\\cal {O}}_{{\\rm {res}}} := (0,\\infty )\\setminus {\\cal {O}}_{{\\rm {ess}}}.$ The following estimates result from the energy estimate, with a constant $c$ independent of $\\varepsilon $ .", "According to (REF )–(REF ) the expression on the right-hand side of (REF ) is bounded for $\\varepsilon \\rightarrow 0$ .", "Thus we infer that $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega \\rho _\\varepsilon | u_\\varepsilon |^2 \\,{\\rm d}x\\le c ,$ $\\int _0^T \\Big ( \\int _\\Omega \\mu ^S | \\mathsf {D}(u_\\varepsilon ) - \\frac{1}{3} ( {{\\mathrm {div}}_x}\\,u_\\varepsilon ) \\mathsf {I} |^2 \\,{\\rm d}x+ \\int _\\Omega \\mu ^B | {{\\mathrm {div}}_x}\\,u_\\varepsilon |^2 \\,{\\rm d}x\\Big )\\, {\\rm d}t\\le c ,$ $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega \\left[ (\\tilde{P}(\\rho _\\varepsilon ) - \\tilde{P}^{\\prime }(\\bar{\\rho })(\\rho _\\varepsilon - \\bar{\\rho }) - \\tilde{P}(\\bar{\\rho }) ) \\right]_{\\rm ess} \\,{\\rm d}x\\le c\\,\\varepsilon ^2,$ $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega \\left[ (\\tilde{P}(\\rho _\\varepsilon ) - \\tilde{P}^{\\prime }(\\bar{\\rho })(\\rho _\\varepsilon - \\bar{\\rho }) - \\tilde{P}(\\bar{\\rho }) ) \\right]_{\\rm res} \\,{\\rm d}x\\le c\\,\\varepsilon ^2,$ $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega | \\varrho _\\varepsilon |^2 \\,{\\rm d}x+ \\int _0^T \\int _\\Omega | \\nabla _x \\varrho 
_\\varepsilon |^2 \\,{\\rm d}x\\, {\\rm d}t\\le c,$ $\\Vert {\\mathcal {F}}(\\widehat{\\psi }_\\varepsilon )\\Vert _{L^\\infty (0,T; L^1_M(\\Omega \\times D))} \\le c,$ $\\Vert \\nabla _x \\sqrt{\\widehat{\\psi }_\\varepsilon } \\Vert _{L^2(0,T; L_M^2(\\Omega \\times D))}+ \\Vert \\nabla _q \\sqrt{\\widehat{\\psi }_\\varepsilon } \\Vert _{L^2(0,T; L_M^2(\\Omega \\times D))} \\le c.$ By (REF ), together with (REF ), we have that $\\Vert \\sqrt{\\widehat{\\psi }_\\varepsilon } \\Vert _{L^2(0,T; H^1_M(\\Omega \\times D))} \\le c .$ The bound (REF ) and Sobolev embedding yield $\\Vert \\varrho _\\varepsilon \\Vert _{L^2(0,T; L^6(\\Omega ))} \\le c.$ Hence, by an interpolation argument, we deduce that $\\Vert \\varrho _\\varepsilon \\Vert _{L^{\\frac{10}{3}}((0,T) \\times \\Omega )}+ \\Vert \\varrho _\\varepsilon \\Vert _{L^4(0,T; L^3(\\Omega ))} \\le c .$ As $\\tilde{P}$ is strictly convex, the relation (REF ) yields $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega \\left| \\left[ \\frac{\\rho _\\varepsilon - \\bar{\\rho }}{\\varepsilon } \\right]_{\\rm ess}\\right|^2 \\,{\\rm d}x\\le c,$ and, by virtue of (REF ) and (REF ), $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\int _\\Omega [1+ \\rho _\\varepsilon ]^\\gamma _{\\rm res} \\,{\\rm d}x\\le c\\, \\varepsilon ^2.$ Next, as a direct consequence of (REF ), we have that $\\operatornamewithlimits{ess\\,sup}_{t \\in (0,T)} \\Vert [\\rho _\\varepsilon u_\\varepsilon ]_{\\rm ess} \\Vert _{L^2(\\Omega )} \\le c,$ and by Korn's inequality ([7]) and (REF ), (REF ) gives $\\Vert u_\\varepsilon \\Vert _{L^2(0,T; H^{1}(\\Omega ))} \\le c.$ Moreover, by (REF ), (REF ), (REF ), and (REF ) we have that $\\begin{split}& \\Vert \\rho _\\varepsilon \\Vert _{L^\\infty (0,T; L^\\gamma (\\Omega ))}+ \\Vert \\rho _\\varepsilon u_\\varepsilon \\Vert _{L^\\infty (0,T; L^{ \\frac{2\\gamma }{\\gamma + 1}}(\\Omega ))}+ \\Vert \\rho _\\varepsilon u_\\varepsilon \\Vert _{L^2(0,T; L^{ \\frac{6\\gamma }{\\gamma + 
6}}(\\Omega ))}+ \\Vert \\rho _\\varepsilon |u_\\varepsilon |^2 \\Vert _{L^2(0,T; L^{ \\frac{6\\gamma }{4\\gamma + 3}}(\\Omega ))}\\le c.\\end{split}$ Furthermore, by using the continuity equation we additionally infer that $\\Vert \\partial _t \\rho _\\varepsilon \\Vert _{L^2(0,T; W^{1,s^{\\prime }} (\\Omega )^{\\prime })} \\le c,\\quad \\mbox{ where $s$ is as in (\\ref {T61_1})}.$ Next, we shall establish the necessary bounds on the extra stress tensor $\\mathsf {\\tau }_1$ .", "We deduce from (REF ) and (REF ), noting that $M |_{\\partial D} = 0$ , that $\\mathsf {C}_i (M\\varphi ) = -\\int _{D} (\\nabla _{q_i} M) q_i^T \\varphi {\\rm \\,d}q= \\int _{D} M (\\nabla _{q_i} \\varphi ) q_i^T {\\rm \\,d}q + \\left( \\int _D M \\varphi {\\rm \\,d}q \\right) \\mathsf {I} .$ As $\\nabla _{{q_i}} \\varphi = \\nabla _{q_i} (\\sqrt{\\varphi }\\,)^2 = 2 \\sqrt{\\varphi }\\, \\nabla _{q_i} \\sqrt{\\varphi }$ for any sufficiently smooth nonnegative function $\\varphi $ , we have that $\\begin{split}\\Vert \\mathsf {C}_i (M\\varphi ) \\Vert _{L^r(\\Omega )}& \\le c \\left[\\int _\\Omega \\left( \\int _D M \\varphi {\\rm \\,d}q \\right)^{\\frac{r}{2}}\\left( \\int _D M | \\nabla _{q_i} \\sqrt{\\varphi }|^2 {\\rm \\,d}q \\right)^{\\frac{r}{2}} \\,{\\rm d}x+ \\int _\\Omega \\left( \\int _D M \\varphi {\\rm \\,d}q \\right)^r\\right]^{\\frac{1}{r}}\\\\&\\le c \\left[\\Vert \\nabla _{q_i} \\sqrt{\\varphi } \\Vert _{L^2_M(\\Omega \\times D)}\\left\\Vert \\int _D M \\varphi {\\rm \\,d}q \\right\\Vert ^{\\frac{1}{2}}_{L^{\\frac{r}{2-r}}(\\Omega )}+ \\left\\Vert \\int _D M \\varphi {\\rm \\,d}q \\right\\Vert _{L^r(\\Omega )}\\right],\\end{split}$ for $r\\in [1,2)$ .", "Then, for $s\\in [1,2]$ we get that, for any such function $\\varphi $ , $\\Vert \\mathsf {C}_i(M \\varphi ) \\Vert _{L^s(0,T; L^r(\\Omega ))}\\!\\le \\!c\\left[\\Vert \\nabla _{q_i} \\sqrt{\\varphi } \\Vert _{L^2(0,T;L^2_M(\\Omega \\times D))}\\left\\Vert \\int _D M \\varphi {\\rm \\,d}q \\right\\Vert 
^{\\frac{1}{2}}_{L^v(0,T;L^{\\frac{r}{2-r}}(\\Omega ))}+ \\left\\Vert \\int _D M \\varphi {\\rm \\,d}q \\right\\Vert _{L^s(0,T;L^r(\\Omega ))}\\right],$ where $v= \\frac{s}{2-s}$ if $s\\in [1,2)$ and $v = \\infty $ if $s=2$ .", "We deduce from (REF ), (REF ) and (REF ) that, for $i=1, \\dots , K,$ $\\Vert \\mathsf {C}_i (M \\widehat{\\psi }_\\varepsilon ) \\Vert _{L^s(0,T;L^r(\\Omega ))} \\le c, \\quad \\mbox{ as } \\quad \\Vert \\varrho _\\varepsilon \\Vert _{L^v(0,T;L^{\\frac{r}{2-r}}(\\Omega ))} \\le c,$ where $r\\in [1,2),$ $s\\in [1,2]$ , and $v= \\frac{s}{2-s}$ if $s \\in [1,2)$ , and $v = \\infty $ if $s=2$ .", "Next, by (REF ) and interpolation, we have $\\Vert \\varrho _\\varepsilon \\Vert _{L^{\\frac{2}{\\nu }} (0,T;L^v(\\Omega ))}\\le c \\Vert \\varrho _\\varepsilon \\Vert ^{1-\\nu }_{L^\\infty (0,T;L^2(\\Omega ))} \\Vert \\varrho _\\varepsilon \\Vert ^\\nu _{L^2(0,T;H^1(\\Omega ))} \\le c,$ where $\\nu = \\frac{3(v-2)}{2v}$ , and $v\\in (2,6]$ .", "By (REF ), (REF ), (REF ), (REF ), (REF ), (REF ), (REF ) we deduce that $\\Vert \\mathsf {C}_i (M \\widehat{\\psi }_\\varepsilon ) \\Vert _{L^2(0,T;L^{\\frac{4}{3}} (\\Omega ))}+ \\Vert \\mathsf {C}_i (M \\widehat{\\psi }_\\varepsilon ) \\Vert _{L^{\\frac{20}{13}} (\\Omega \\times (0,T) )} \\le c,$ $\\Vert \\mathsf {\\tau }_1 (M \\widehat{\\psi }_\\varepsilon ) \\Vert _{L^2(0,T;L^{\\frac{4}{3}} (\\Omega ))}+ \\Vert \\mathsf {\\tau }_1 (M \\widehat{\\psi }_\\varepsilon ) \\Vert _{L^{\\frac{20}{13}} (\\Omega \\times (0,T) )}+ \\Vert \\tau _1(M\\widehat{\\psi }_\\varepsilon ) \\Vert _{L^{\\frac{4}{3}}(0,T; L^{\\frac{12}{7}}(\\Omega ))}\\le c ,$ where $c$ is independent of $\\varepsilon $ .", "Now let us show that $\\Vert M \\partial _t \\widehat{\\psi }_\\varepsilon \\Vert _{L^2(0,T; H^s(\\Omega \\times D)^{\\prime })} \\le c,$ with $s > 1+ \\frac{3}{2}(K+1) $ .", "Testing the Fokker–Planck equation (REF )$_3$ with $\\varphi \\in L^2 (0,T; W^{1,\\infty }(\\Omega \\times D))$ and since $\\nabla _x \\psi = 2 \\sqrt{\\psi }\\, \\nabla 
_x \\sqrt{\\psi }$ for sufficiently smooth $\\psi $ , we infer that $\\begin{split}\\left|\\int _0^T \\int _{\\Omega \\times D} M \\partial _t \\widehat{\\psi }_\\varepsilon \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\right|& \\le 2 \\delta \\left|\\int _0^T\\int _{\\Omega \\times D} M \\sqrt{\\widehat{\\psi }_\\varepsilon } \\nabla _x \\sqrt{\\widehat{\\psi }_\\varepsilon }\\cdot \\nabla _x \\varphi {\\rm \\,d}q \\,{\\rm d}x\\, {\\rm d}t\\right|\\\\ & \\quad +\\frac{1}{2} \\left|\\sum _{i,j =1}^K A_{ij} \\int _0^T \\int _{\\Omega \\times D}M \\sqrt{\\widehat{\\psi }_\\varepsilon } \\nabla _{q_j} \\sqrt{\\widehat{\\psi }_\\varepsilon }\\cdot \\nabla _{q_i} \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\right|\\\\ & \\quad + \\left|\\int _0^T \\int _{\\Omega \\times D} M u_\\varepsilon \\widehat{\\psi }_\\varepsilon \\cdot \\nabla _x \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\right|\\\\ & \\quad +\\left|\\int _0^T \\int _{\\Omega \\times D}M \\sum _{i=1}^K \\left[ (\\nabla _x u_\\varepsilon ) q_i \\right] \\widehat{\\psi }_\\varepsilon \\cdot \\nabla _{q_i} \\varphi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\right|\\\\ &\\le c \\Vert \\varrho _\\varepsilon \\Vert _{L^\\infty (0,T;L^2(\\Omega ))}\\Big [\\Vert \\nabla _x \\sqrt{{\\widehat{\\psi }}_\\varepsilon } \\Vert _{L^2(0,T; L^2_M(\\Omega \\times D))}+ \\Vert \\nabla _q \\sqrt{{\\widehat{\\psi }}_\\varepsilon } \\Vert _{L^2(0,T; L^2_M(\\Omega \\times D))}\\\\ & \\quad + \\Vert \\nabla _x u_\\varepsilon \\Vert _{L^2(0,T;L^2(\\Omega ))} + \\Vert u_\\varepsilon \\Vert _{L^2(0,T;L^2(\\Omega ))}\\Big ] \\Vert \\varphi \\Vert _{L^2(0,T; W^{1,\\infty }(\\Omega \\times D))}\\\\ & \\le c.\\end{split}$ The last inequality holds by (REF ), (REF ), (REF ), and since $H^s (\\Omega \\times D) \\subset W^{1,\\infty } (\\Omega \\times D)$ thanks to our assumption that $s>1 + \\frac{3}{2}(K+1)$ .", "Therefore (REF ) holds.", "Finally, we note that by choosing $\\varphi = 1 $ in (REF ) and by the choice of the initial datum 
$\\widehat{\\psi }_{0,\\varepsilon }$ (cf. (REF )), we have that $\\int _\\Omega \\varrho _\\varepsilon \\,{\\rm d}x= \\int _{\\Omega \\times D} M \\widehat{\\psi }_{\\varepsilon } \\,{\\rm d}q\\,{\\rm d}x= \\int _{\\Omega \\times D} M \\widehat{\\psi }_{0,\\varepsilon } \\,{\\rm d}q\\,{\\rm d}x\\le c,$ where again $c$ is independent of $\\varepsilon $ ."], [ "Convergence as $\\varepsilon \\rightarrow 0$", "Obviously $\\rho _\\varepsilon - \\bar{\\rho }= [\\rho _\\varepsilon - \\bar{\\rho }]_{\\rm ess} + [\\rho _\\varepsilon - \\bar{\\rho }]_{\\rm res}$ ; hence, thanks to (REF ), (REF ), we have that $\\rho _\\varepsilon \\rightarrow \\bar{\\rho }\\quad \\mbox{ strongly in } L^\\infty (0,T;(L^\\gamma + L^2)(\\Omega )) .$ Next, by (REF ) we have $u_\\varepsilon \\rightharpoonup U\\quad \\mbox{ weakly in } L^2(0,T;H^{1}(\\Omega )^3)$ (by extracting a subsequence if necessary).", "Using (REF ), (REF ) and Sobolev embedding we get $\\rho _\\varepsilon u_\\varepsilon \\stackrel{\\ast }{\\rightharpoonup }\\bar{\\rho }\\, U\\quad \\mbox{ weakly-* in } L^\\infty (0,T;L^{\\frac{2\\gamma }{1+ \\gamma }}(\\Omega )^3),$ $\\rho _\\varepsilon u_\\varepsilon \\rightharpoonup \\bar{\\rho }\\, U\\quad \\mbox{ weakly in } L^2(0,T;L^{\\frac{6\\gamma }{6+ \\gamma }}(\\Omega )^3) .$ Moreover, from (REF ) and (REF ) it follows that $[\\rho _\\varepsilon u_\\varepsilon ]_{\\rm res } \\rightarrow 0 \\quad \\mbox{ in } L^\\infty (0,T; L^s (\\Omega )^3)\\quad \\mbox{ as } \\varepsilon \\rightarrow 0 \\mbox{ for } 1\\le s \\le 2\\gamma / (\\gamma + 1).$ Then, employing (REF ) and (REF ), we have from (REF ) that $\\int _0^T\\int _\\Omega U\\cdot \\nabla _x \\eta \\,{\\rm d}x\\, \\mathrm {d}t = 0 \\quad \\mbox{ for all } \\eta \\in C_c^\\infty (\\overline{\\Omega }\\times (0,T)).$ Since the limiting velocity $U\\in L^2(0,T; H^1(\\Omega )^3)$ , we obtain that ${{\\mathrm {div}}_x}\\,U= 0 \\quad \\mbox{ for a.e. 
} (x,t) \\in \\Omega \\times (0,T) \\mbox{ and } U\\cdot n|_{\\partial \\Omega } = 0 \\mbox{ in the sense of traces}.$ Thus in the limit of $\\varepsilon \\rightarrow 0$ the continuity equation is reduced to the divergence-free condition for the limiting velocity field $U$ .", "This justifies the use of solenoidal test functions for the momentum equation.", "As a direct consequence of (REF ) and (REF ) we have $\\mathsf {S}( u_\\varepsilon ) \\rightharpoonup \\mu ^S \\mathsf {D} U\\quad \\mbox{ weakly in } L^2(\\Omega \\times (0,T))^{3 \\times 3}.$ Thanks to (REF ) and (REF ) we deduce that $\\rho _\\varepsilon u_\\varepsilon \\otimes u_\\varepsilon \\rightharpoonup \\overline{\\bar{\\rho }U\\otimes U}\\quad \\mbox{ weakly in } L^2(0,T; L^{\\frac{6\\gamma }{4\\gamma + 3}} (\\Omega )^{3\\times 3}).$ In the above ${\\overline{\\bar{\\rho }U\\otimes U}}$ denotes a weak limit of $\\lbrace \\rho _\\varepsilon u_\\varepsilon \\otimes u_\\varepsilon \\rbrace _{\\varepsilon >0}$ in $L^2(0,T;L^q(\\Omega )^{3\\times 3})$ for a certain $q>1$ .", "Although we cannot, in general, expect that ${\\overline{\\bar{\\rho }U\\otimes U}} = \\bar{\\rho }U\\otimes U,$ we will show in the next section that $\\int _0^T \\int _\\Omega {\\overline{\\bar{\\rho }U\\otimes U}} : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t=\\int _0^T \\int _\\Omega {[{\\bar{\\rho }U\\otimes U}]} : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t$ for any $w \\in C^\\infty _c((0,T) \\times \\overline{\\Omega })^3$ , ${{\\mathrm {div}}_x}\\,w = 0$ , $w\\cdot n|_{\\partial \\Omega } = 0$ .", "The relation (REF ) may be interpreted as an expression of the fact that the difference ${{\\mathrm {div}}_x}\\,(\\overline{\\bar{\\rho }U\\otimes U} - {\\bar{\\rho }U\\otimes U})$ is proportional to a gradient and that it can therefore be incorporated into the limiting pressure.", "Next we turn our attention to the passage to the limit $\\varepsilon \\rightarrow 0$ in the terms related to the probability density function $\\widehat{\\psi 
}_\\varepsilon $ .", "In particular, we will show the strong convergence of the sequence $\\lbrace \\widehat{\\psi }_\\varepsilon \\rbrace _{\\varepsilon >0}$ using Dubinskiǐ's compactness theorem (cf., for example, [2]), stated in the next lemma.", "Lemma 2.2 Let $X_1$ be a seminormed set, and let $X_2$ and $X_3$ be Banach spaces such that $X_1 \\subset \\subset X_2 \\subset X_3$ .", "Then, for $p_1,\\,p_2 \\in [1,\\infty ) $ the following embedding is compact: $\\left\\lbrace f\\in L^{p_1}(0,T;X_1) \\ : \\partial _t f \\in L^{p_2}(0,T;X_3) \\right\\rbrace \\subset \\subset L^{p_1} (0,T; X_2).$ We shall employ Lemma REF with $X_2 = L^1_M(\\Omega \\times D)$ , $X_3 = M^{-1}H^s(\\Omega \\times D)^{\\prime }$ , $X_1 = \\left\\lbrace \\phi \\in Z_1 \\ : \\ \\int _{\\Omega \\times D} M \\left[ | \\nabla _q \\sqrt{\\phi }|^2 + | \\nabla _x \\sqrt{\\phi } |^2\\right] {\\rm \\,d}q\\,{\\rm d}x< \\infty \\right\\rbrace ,$ $p_1 = v$ , and $p_2 = 2$ .", "We note that $X_1 \\subset \\subset X_2$ (for the details, see [1]).", "Moreover, $X_2 \\subset X_3$ .", "Hence, thanks to (REF ) and (REF ), we have $\\widehat{\\psi }_\\varepsilon \\rightarrow \\widehat{\\Psi }\\quad \\mbox{ strongly in } L^v (0,T; L^1_M(\\Omega \\times D)).$ We also infer that $M^{\\frac{1}{2}} \\nabla _x \\sqrt{\\widehat{\\psi }_\\varepsilon } \\rightharpoonup M^{\\frac{1}{2}} \\nabla _x \\sqrt{\\widehat{\\Psi }}\\quad \\mbox{ weakly in }L^2(0,T; L^2(\\Omega \\times D )^3),$ $M^{\\frac{1}{2}} \\nabla _q \\sqrt{\\widehat{\\psi }_\\varepsilon } \\rightharpoonup M^{\\frac{1}{2}} \\nabla _q \\sqrt{\\widehat{\\Psi }}\\quad \\mbox{ weakly in }L^2(0,T; L^2(\\Omega \\times D )^{3K}),$ $M \\frac{\\partial \\widehat{\\psi }_\\varepsilon }{\\partial t} \\rightharpoonup M \\frac{\\partial \\widehat{\\Psi }}{\\partial t}\\quad \\mbox{ weakly in }L^2(0,T; H^s(\\Omega \\times D )^{\\prime }).$ By (REF ), (REF ), and (REF ) we obtain that $\\mathsf {\\tau } (M \\widehat{\\psi }_\\varepsilon ) \\rightarrow \\mathsf {\\tau } (M 
\\widehat{\\Psi })\\quad \\mbox{ strongly in }L^r(\\Omega \\times (0,T))^{3 \\times 3},\\quad \\mbox{ where }r \\in \\bigg [ 1, \\frac{20}{13} \\bigg ).$ We observe that, by (REF ), as $\\varepsilon \\rightarrow 0$ we have that $\\varrho _\\varepsilon \\stackrel{\\ast }{\\rightharpoonup }\\varrho \\quad \\mbox{ weakly-* in }L^\\infty (0,T; L^2(\\Omega )),\\quad \\varrho _\\varepsilon \\rightharpoonup \\varrho \\quad \\mbox{ weakly in }L^2(0,T; H^1(\\Omega )).$ Next, we note that $\\varrho = \\int _D M\\widehat{\\Psi }{\\rm \\,d}q \\in L^\\infty (0,T;L^2(\\Omega )) \\cap L^2(0,T; H^1(\\Omega )).$ Indeed, (REF ) and (REF ), together with (REF ), provide (REF ).", "By (REF ), (REF ), (REF ) and (REF ), we may pass to the limit $\\varepsilon \\rightarrow 0$ in (REF ) with test functions $w \\in C^1_c([0,T);C^\\infty (\\overline{\\Omega })^3)$ , $w\\cdot n|_{\\partial \\Omega }=0$ and ${{\\mathrm {div}}_x}\\,w=0$ , and obtain that $\\begin{split}& \\int _0^T \\int _\\Omega \\bar{\\rho }U\\cdot \\partial _t w \\,{\\rm d}x\\, {\\rm d}t+ \\int _0^T \\int _\\Omega \\left[ \\overline{\\bar{\\rho }U\\otimes U}- \\mu ^S \\mathsf {D} U\\right] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\\\ &= \\int _0^T \\int _\\Omega \\left( \\mathsf {\\tau }_1(M \\widehat{\\Psi }) \\right): \\nabla _x w\\,{\\rm d}x\\, {\\rm d}t- \\int _\\Omega \\bar{\\rho }U_0 \\cdot w(0) \\,{\\rm d}x,\\end{split}$ where we have assumed that $u_{0,\\varepsilon } \\rightharpoonup U_0$ in $L^2(\\Omega )^3$ .", "We note that, since in the above formulation (REF ) we have restricted ourselves to divergence-free test functions, the limiting term $\\nabla _x \\varrho ^2$ does not appear because the gradient of a scalar function can be absorbed into the limiting pressure $\\Pi $ .", "Next, let us pass to the limit in (REF ) and finally obtain (REF ).", "Initially, we take a test function $\\phi \\in C([0,T]; C^\\infty (\\overline{\\Omega \\times D}))$ .", "The first term of (REF ) converges to the first term of (REF ) 
thanks to (REF ).", "For the second term of (REF ) we observe that $\\begin{split}\\int _0^T \\int _{\\Omega \\times D} M \\nabla _{q_i} \\widehat{\\psi }_\\varepsilon \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t= &~ 2 \\int _0^T \\int _{\\Omega \\times D} M \\left( \\sqrt{\\widehat{\\psi }_\\varepsilon } - \\sqrt{\\widehat{\\Psi }} \\right)\\nabla _{q_i} \\sqrt{\\widehat{\\psi }_\\varepsilon } \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\&+ 2 \\int _0^T \\int _{\\Omega \\times D} M \\sqrt{\\widehat{\\Psi }}\\,\\nabla _{q_i} \\sqrt{\\widehat{\\psi }_\\varepsilon } \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t= : I_1 + I_2.\\end{split}$ Hence, by noting that $| \\sqrt{c_1} - \\sqrt{c_2} | \\le \\sqrt{|c_1 - c_2|}$ for all $c_1,\\,c_2 \\in {\\mathbb {R}}_{\\ge 0}$ and by (REF ), we have that $\\begin{split}|I_1| & \\le C \\left\\Vert \\sqrt{\\widehat{\\psi }_\\varepsilon } - \\sqrt{\\widehat{\\Psi }} \\right\\Vert _{L^2(0,T;L^2_M(\\Omega \\times D ))}\\Vert \\nabla _{q_j} \\phi \\Vert _{L^\\infty (0,T;L^\\infty (\\Omega \\times D))}\\\\& \\le C \\left\\Vert {\\widehat{\\psi }_\\varepsilon } - {\\widehat{\\Psi }} \\right\\Vert ^{\\frac{1}{2}}_{L^1(0,T;L^1_M(\\Omega \\times D ))}\\Vert \\nabla _{q_j} \\phi \\Vert _{L^\\infty (0,T;L^\\infty (\\Omega \\times D))}.\\end{split}$ Thus, (REF ) implies that $I_1$ converges to zero as $\\varepsilon \\rightarrow 0$ .", "Similarly, because $M^{\\frac{1}{2}} \\sqrt{\\widehat{\\Psi }}\\, \\nabla _{q_j} \\phi \\in L^2(0,T;L^2(\\Omega \\times D)^3)$ for $j=1,\\dots ,K$ , it follows from (REF ) that, as $\\varepsilon \\rightarrow 0$ , $I_2 \\rightarrow 2 \\int _0^T \\int _{\\Omega \\times D} M \\sqrt{\\widehat{\\Psi }}\\,\\nabla _{q_i} \\sqrt{\\widehat{\\Psi }} \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t= \\int _0^T \\int _{\\Omega \\times D} M\\nabla _{q_i} {\\widehat{\\Psi }} \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t.$ Therefore the 
second term in (REF ) converges to the second term in (REF ).", "For the remaining terms in (REF ) we note that $\\begin{split}\\int _0^T \\int _{\\Omega \\times D}[M (\\nabla _x u_\\varepsilon ) q_i ] \\widehat{\\psi }_\\varepsilon \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t= & \\int _0^T \\int _{\\Omega \\times D}[M (\\nabla _x u_\\varepsilon ) q_i ] \\left( \\widehat{\\psi }_\\varepsilon - \\widehat{\\Psi }\\right) \\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\ & + \\int _0^T \\int _{\\Omega \\times D}[M (\\nabla _x u_\\varepsilon ) q_i ] \\widehat{\\Psi }\\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\\\= : & ~I_3 + I_4.\\end{split}$ Next, by (REF ), using Sobolev embedding and (REF ), and recalling (REF ), we obtain that $\\begin{split}|I_3| & \\le c\\left\\Vert \\int _{ D}M | \\widehat{\\psi }_\\varepsilon - \\widehat{\\Psi } | {\\rm \\,d}q \\right\\Vert _{L^2(0,T;L^2(\\Omega ))}\\Vert \\nabla _{q_j} \\phi \\Vert _{L^\\infty (0,T;L^\\infty (\\Omega \\times D))} \\\\& \\le c\\left\\Vert \\widehat{\\psi }_\\varepsilon - \\widehat{\\Psi } \\right\\Vert ^{\\frac{2}{5}}_{L^2(0,T;L^1_M(\\Omega \\times D))}\\Vert \\varrho _\\varepsilon + \\varrho \\Vert ^{\\frac{3}{5}}_{L^2(0,T;L^6(\\Omega ))}\\Vert \\nabla _{q_j} \\phi \\Vert _{L^\\infty (0,T;L^\\infty (\\Omega \\times D))}, \\end{split}$ and therefore, by invoking (REF ) and (REF )$_2$ , we deduce that $I_3$ converges to zero as $\\varepsilon \\rightarrow 0$ .", "Similarly, by noting that (REF ) implies that $\\int _D M \\widehat{\\Psi }q_i \\otimes \\nabla _{q_i} \\phi {\\rm \\,d}q \\in L^2(\\Omega \\times (0,T))^{3 \\times 3}$ , $i=1,\\dots ,K$ , we get by (REF ) that $I_4 \\rightarrow \\int _0^T \\int _{\\Omega \\times D}[M (\\nabla _x U) q_i ] \\widehat{\\Psi }\\cdot \\nabla _{q_j} \\phi {\\rm \\,d}q\\,{\\rm d}x\\, {\\rm d}t\\quad \\mbox{ as }\\varepsilon \\rightarrow 0.$ Therefore the last term of (REF ) converges to the last term in (REF ).", 
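"For completeness, we recall the elementary inequality invoked in the bound on $I_1$ above: for $c_1 \\ge c_2 \\ge 0$ one has $( \\sqrt{c_1} - \\sqrt{c_2} )^2 \\le ( \\sqrt{c_1} - \\sqrt{c_2} )( \\sqrt{c_1} + \\sqrt{c_2} ) = c_1 - c_2 ,$ whence $| \\sqrt{c_1} - \\sqrt{c_2} | \\le \\sqrt{| c_1 - c_2 |}$ for all $c_1,\\,c_2 \\in {\\mathbb {R}}_{\\ge 0}$ .", 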
"Analogously, the third term of (REF ) converges to the third term in (REF ).", "In this way we obtain (REF ) for smooth test functions $\\phi \\in C([0,T]; C^\\infty (\\overline{\\Omega \\times D}))$ .", "In order to extend the class of test functions for the limiting equation we use the density of the function space $C([0,T]; C^\\infty (\\overline{\\Omega \\times D}))$ in $L^2(0,T; H^s({\\Omega \\times D}))$ , the embedding $H^s(\\Omega \\times D) \\subset W^{1,\\infty } (\\Omega \\times D)$ , (REF ), (REF ), (REF ), (REF ), (REF ), and (REF )."], [ "Convergence of the convective part of the momentum equation", "In this section we concentrate on proving (REF ).", "The proof is based on a Helmholtz decomposition of the momentum $\\rho _\\varepsilon u_\\varepsilon = H [\\rho _\\varepsilon u_\\varepsilon ] + H^\\perp [\\rho _\\varepsilon u_\\varepsilon ]$ , on the compactness of the solenoidal part of the decomposition, and on the analysis of the acoustic equation governing the time evolution of its gradient component.", "Motivated by the discussion in Section 5.4.2 of [7], we begin by decomposing the convective term as follows: $\\rho _\\varepsilon u_\\varepsilon \\otimes u_\\varepsilon =H[\\rho _\\varepsilon u_\\varepsilon ] \\otimes u_\\varepsilon +H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H[ u_\\varepsilon ]+H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp [ u_\\varepsilon ] .$ Let us emphasise that the solenoidal part of the momentum $H[\\rho _\\varepsilon u_\\varepsilon ]$ does not exhibit oscillations in time and, in particular, it converges a.e. on the set $\\Omega \\times (0,T)$ .", "In order to see this, we choose a test function in the momentum equation (REF ) of the form $H[w]$ , where $w \\in C^\\infty _c(\\overline{\\Omega }\\times [0,T))^3$ , $w \\cdot n |_{\\partial \\Omega } = 0$ .", "Thanks to the uniform estimates obtained in Section REF , by noting that the singular term is 
irrelevant as ${{\\mathrm {div}}_x}\\,H[w] = 0$ , we deduce that the family $\\lbrace \\int _\\Omega H[\\rho _\\varepsilon u_\\varepsilon ] \\cdot w \\,{\\rm d}x\\rbrace _{\\varepsilon >0} $ forms a bounded and equicontinuous sequence in $C([0,T])$ .", "Therefore by the Arzelà–Ascoli theorem we have that $\\int _\\Omega H[\\rho _\\varepsilon u_\\varepsilon ] \\cdot w \\,{\\rm d}x\\rightarrow \\int _\\Omega H[\\bar{\\rho }U] \\cdot w \\,{\\rm d}x\\quad \\mbox{ in } C([0,T])$ for any $w$ as above.", "By a density argument and noting (REF ), we infer that $H[\\rho _\\varepsilon u_\\varepsilon ] \\rightarrow H[\\bar{\\rho }U] \\quad \\mbox{ in }C([0,T] ; L^{\\frac{2\\gamma }{\\gamma +1}}_{weak}(\\Omega )^3) .$ Next, we have that $\\bar{\\rho }H[u_\\varepsilon ] \\cdot u_\\varepsilon = H[(\\bar{\\rho }- \\rho _\\varepsilon )u_\\varepsilon ] \\cdot u_\\varepsilon + H[\\rho _\\varepsilon u_\\varepsilon ] \\cdot u_\\varepsilon .$ The first term on the right-hand side of this equality converges to zero weakly in $L^1(\\Omega \\times (0,T))$ by (REF ), (REF ) and compact embedding.", "The second term, by (REF ), compact embedding, (REF ), and since $\\gamma > \\frac{3}{2}$ , converges weakly in $L^1(\\Omega \\times (0,T))$ to $\\bar{\\rho }|U|^2$ .", "Hence, as $U= H[U]$ , we deduce that $H[u_\\varepsilon ] \\rightarrow U\\quad \\mbox{ strongly in }L^2(0,T; L^2(\\Omega )^3).$ This, together with (REF ), implies that $H[\\rho _\\varepsilon u_\\varepsilon ] \\otimes u_\\varepsilon \\rightharpoonup \\bar{\\rho }U\\otimes U\\quad \\mbox{ weakly in } L^2(0,T;L^{\\frac{6\\gamma }{4\\gamma +3}}(\\Omega )^{3 \\times 3}) .$ Combining (REF ) and (REF ) we get $H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H [u_\\varepsilon ] \\rightarrow 0 \\quad \\mbox{ weakly in } L^2(0,T;L^{\\frac{6\\gamma }{4\\gamma +3}}(\\Omega )^{3 \\times 3}) .$ Summarising (REF ), (REF ), and (REF ), we see that in order to prove (REF ) all that is left to be shown is that $\\int _0^T \\int _\\Omega 
H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp [u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\rightarrow 0 \\quad \\mbox{ for any }w \\in C^\\infty _c(\\overline{\\Omega } \\times [0,T))^3, \\ {{\\mathrm {div}}_x}\\,w = 0,\\ w \\cdot n |_{\\partial \\Omega } = 0.$ As our a priori estimates do not provide any bound on the time-derivative of the gradient part of the velocity/momentum, in order to prove (REF ) we follow the strategy from [7], which is based on the observation that possible oscillations in time mutually cancel in the acoustic waves, described by means of $H^\\perp [\\rho _\\varepsilon u_\\varepsilon ]$ , which are governed by the acoustic equation associated with the system (REF )."], [ "Acoustic equation", "The analysis of the following acoustic equation will allow us to control the temporal oscillations of the gradient part of the momentum.", "To this end, let us set $V_\\varepsilon := \\rho _\\varepsilon u_\\varepsilon , \\quad \\quad r_\\varepsilon := \\frac{\\rho _\\varepsilon -\\bar{\\rho }}{\\varepsilon },$ and $\\mathsf {L} := \\mathsf {S}(u_\\varepsilon ) - \\rho _\\varepsilon u_\\varepsilon \\otimes u_\\varepsilon - \\frac{1}{\\varepsilon ^2} \\left(p(\\rho _\\varepsilon ) - p^{\\prime }(\\bar{\\rho }) (\\rho _\\varepsilon - \\bar{\\rho }) - p(\\bar{\\rho })\\right)\\mathsf {I} + \\mathsf {\\tau }(\\psi _\\varepsilon ).$ We can then rewrite the continuity equation and the momentum equation with the extra stress tensor on its right-hand side in the form of Lighthill's acoustic analogy (cf. eqs. (5.137) and (5.140) in [7]): $\\begin{split}\\varepsilon \\partial _t r_\\varepsilon + {{\\mathrm {div}}_x}\\,V_\\varepsilon & = 0 \\quad \\quad \\mbox{ in } (0,T) \\times \\Omega ,\\\\\\varepsilon \\partial _t V_\\varepsilon + p^{\\prime }(\\bar{\\rho }) \\nabla _x r_\\varepsilon & = \\varepsilon {{\\mathrm {div}}_x}\\,\\mathsf {L} \\quad \\quad \\mbox{ in } (0,T) \\times \\Omega ,\\end{split}$ supplemented with the boundary 
condition $V_\\varepsilon \\cdot n |_{\\partial \\Omega } = 0 .$ The equations (REF ), together with boundary condition (REF ), are understood in a weak sense; specifically, the integral identity $\\int _0^T \\int _\\Omega \\left(\\varepsilon r_\\varepsilon \\partial _t \\varphi + V_\\varepsilon \\cdot \\nabla _x \\varphi \\right) \\,{\\rm d}x\\, \\mathrm {d}t = 0$ holds for any $\\varphi \\in C_c^\\infty ( \\overline{\\Omega }\\times (0,T) )$ and $\\int _0^T \\int _\\Omega \\left(\\varepsilon V_\\varepsilon \\cdot \\partial _t \\varphi + p^{\\prime }(\\bar{\\rho }) r_\\varepsilon {{\\mathrm {div}}_x}\\,\\varphi \\right) \\,{\\rm d}x\\, \\mathrm {d}t = \\varepsilon \\int _0^T \\int _\\Omega \\mathsf {L}_\\varepsilon : \\nabla _x \\varphi \\,{\\rm d}x\\, \\mathrm {d}t$ holds for any $\\varphi \\in C_c^\\infty ( \\overline{\\Omega }\\times (0,T) )^3$ , $\\varphi \\cdot n |_{\\partial \\Omega } = 0$ .", "By the uniform estimates (REF ), (REF ) we obtain that $\\Vert r_\\varepsilon \\Vert _{L^\\infty (0,T; (L^2 + L^q)(\\Omega ))} \\le c \\quad \\mbox{ for } q= \\min \\lbrace \\gamma ,2\\rbrace ,$ and by (REF ), (REF ) we deduce that $\\Vert V_\\varepsilon \\Vert _{L^\\infty (0,T; (L^2 + L^s)(\\Omega ))} \\le c \\quad \\mbox{ with } s\\in [1,2\\gamma / (\\gamma +1)] .$ Moreover, by combining (REF ), (REF ), (REF ), (REF ), (REF ), (REF ) we deduce that $ \\mathsf {L} = \\mathsf {L}_1 + \\mathsf {L}_2 + \\mathsf {L}_3 ,$ where $\\Vert \\mathsf {L}_1 \\Vert _{L^\\infty (0,T; L^1(\\Omega ))} \\le c ,\\quad \\Vert \\mathsf {L}_2 \\Vert _{L^2(0,T; L^2(\\Omega ))} \\le c, \\quad \\mbox{ and } \\quad \\quad \\Vert \\mathsf {L}_3\\Vert _{L^2(0,T; L^{\\frac{4}{3}}(\\Omega ))} \\le c.$ Now we can follow the strategy developed in [7].", "We reduce the problem to a finite number of modes, which are represented by eigenfunctions of the wave operator in (REF ), (REF ).", "It then transpires that the nonvanishing oscillatory part of the convective term can be represented as the 
gradient of a scalar function, and it can be therefore incorporated into the limiting pressure $\\Pi $ , whereby it is irrelevant in the incompressible limit of $\\varepsilon \\rightarrow 0$ .", "The details of the spectral analysis of the wave operator appearing in (REF ), (REF ) are contained in the next section." ], [ "Spectral analysis of the wave operator", "Next, as in Section 5.4.5 of [7], we now focus on the following eigenvalue problem associated with the operator appearing on the left-hand side of (REF ), (REF ): ${{\\mathrm {div}}_x}\\,\\phi = - \\lambda \\zeta , \\quad p^{\\prime }(\\bar{\\rho }) \\nabla _x \\zeta = \\lambda \\phi \\ \\mbox{ in } \\Omega , \\quad \\phi \\cdot n|_{\\partial \\Omega } = 0,$ and reformulate it as the following homogeneous Neumann problem: $- \\Delta _x \\zeta = \\Lambda \\zeta \\ \\mbox{ in } \\ \\Omega ,\\quad \\nabla _x \\zeta \\cdot n|_{\\partial \\Omega }=0, \\quad -\\Lambda = \\frac{\\lambda ^2}{p^{\\prime }(\\bar{\\rho })},$ for which we have a countable system of eigenvalues $0=\\Lambda _0 < \\Lambda _1 \\le \\Lambda _2\\le \\cdots $ , with the associated system of real eigenfunctions $\\lbrace \\zeta _n \\rbrace _{n=0}^\\infty $ being a basis of the Hilbert space $L^2(\\Omega )$ .", "The complex eigenfunctions $\\lbrace \\phi _{\\pm n} \\rbrace _{n=0}^\\infty $ are determined through (REF ) as $\\phi _{\\pm n} := \\pm \\sqrt{\\frac{p^{\\prime }(\\bar{\\rho })}{\\Lambda _n}} \\nabla _x \\zeta _n, \\quad n=1,2,\\dots .$ Moreover, we can decompose the Hilbert space $L^2(\\Omega )^3= L^2_{\\mathrm {div}}(\\Omega )^3 \\oplus H_{\\perp } (\\Omega )$ , where $H_\\perp (\\Omega )$ is the closure in the $L^2(\\Omega )^3$ norm of the space spanned by $\\lbrace ( {-i}/{\\sqrt{p^{\\prime }(\\bar{\\rho })}}) \\phi _n \\rbrace _{n=1}^\\infty $ , and where $L^2_{\\mathrm {div}}(\\Omega )^3$ denotes the subspace of divergence-free 3-component vector functions contained in $L^2(\\Omega )^3$ , with vanishing normal trace on 
$\\partial \\Omega $ ; that is, the closure in the $L^2(\\Omega )^3$ -norm of the set of all $\\varphi \\in C_c^\\infty (\\Omega )^3$ such that ${{\\mathrm {div}}_x}\\,\\varphi = 0$ in $\\Omega $ .", "In order to reduce the problem to a finite number of modes, we introduce the corresponding orthogonal projection $P_N : L^2(\\Omega )^3\\rightarrow {\\rm span}\\,\\bigg \\lbrace \\frac{-i}{\\sqrt{p^{\\prime }(\\bar{\\rho })}} \\phi _n \\bigg \\rbrace _{n \\le N}, \\quad N = 1,2,\\dots $ and we set $H^\\perp _N [\\varphi ] := P_N H^\\perp [\\varphi ]= H^\\perp P_N[\\varphi ] $ (since $P_N$ commutes with $H^\\perp $ ).", "For any $\\varphi \\in L^2(\\Omega )^3$ we consider the Fourier coefficients $a_n [\\varphi ] := \\frac{-i}{\\sqrt{p^{\\prime }(\\bar{\\rho })}}\\int _\\Omega \\varphi \\cdot \\phi _n \\,{\\rm d}x$ and a scale of Hilbert spaces $H_{\\alpha ,\\perp }(\\Omega ) \\subset H_{\\perp }(\\Omega )$ , $\\alpha \\in [0,1]$ , with the norm $\\Vert \\cdot \\Vert _{L^2_{\\alpha ,\\perp }}$ , defined by $\\Vert \\varphi \\Vert ^2_{L^2_{\\alpha ,\\perp }} := \\sum _{n=1}^\\infty \\Lambda ^\\alpha _n\\, | a_n [\\varphi ]|^2,$ where $\\lbrace \\Lambda _n\\rbrace _{n=1}^\\infty \\subset \\mathbb {R}_{>0}$ is the family of nonzero eigenvalues associated with (REF ).", "We note that $\\begin{split}\\Vert H^\\perp [\\varphi ] - H^\\perp _N[\\varphi ] \\Vert ^2_{L^2_{\\alpha _1,\\perp }}& = \\sum _{n=N+1}^\\infty \\Lambda _n^{\\alpha _1} | a_n[\\varphi ]|^2\\le \\Lambda _N^{\\alpha _1 - \\alpha _2} \\sum _{n=N+1}^\\infty \\Lambda _n^{\\alpha _2} | a_n[\\varphi ]|^2\\\\ &= \\Lambda _N^{\\alpha _1- \\alpha _2}\\Vert H^\\perp [\\varphi ] - H^\\perp _N [\\varphi ]\\Vert ^2_{L^2_{\\alpha _2,\\perp }}\\quad \\mbox{ for } \\alpha _2 > \\alpha _1\\,\\, \\mbox{and any integer $N \\ge 0$}.\\end{split}$ As $H_{0,\\perp } = H_{\\perp }$ and $H_{1,\\perp } \\subset L^6(\\Omega )^3$ , we deduce by an interpolation argument that there exists an $\\bar{\\alpha }\\in (0,1)$ such that 
$H_{\\alpha ,\\perp }(\\Omega ) \\subset L^{s^{\\prime }}(\\Omega )^3 \\quad \\mbox{ where }\\, s = \\frac{2\\gamma }{\\gamma +1}\\; \\mbox{ and }\\; s^{\\prime }=\\frac{2\\gamma }{\\gamma -1}\\;\\mbox{ whenever } \\;\\alpha \\ge \\bar{\\alpha }.$ Let us return to (REF ) and rewrite it as follows: $\\begin{split}\\int _0^T \\int _\\Omega &H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp [u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\\\ & =\\int _0^T \\int _\\Omega H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp _N [u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t+ \\int _0^T \\int _\\Omega H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes ( H^\\perp [u_\\varepsilon ] - H^\\perp _N [u_\\varepsilon ] ): \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t.\\end{split}$ By (REF ), (REF ) and (REF ) we deduce that $\\left|\\int _0^T \\int _\\Omega H^\\perp [\\rho _\\varepsilon u_\\varepsilon ] \\otimes ( H^\\perp [u_\\varepsilon ] - H^\\perp _N [u_\\varepsilon ] ): \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\right| \\le c \\Lambda _N^{-\\frac{1}{2}(1-\\bar{\\alpha })},$ uniformly as $\\varepsilon \\rightarrow 0$ , for any fixed $w\\in C^\\infty _c(\\overline{\\Omega } \\times [0,T))^3$ , such that ${{\\mathrm {div}}_x}\\,w = 0$ and $w \\cdot n |_{\\partial \\Omega } = 0$ .", "We note that $\\Lambda _N \\rightarrow \\infty $ as $N \\rightarrow \\infty $ .", "By a duality argument and (REF ) we have also that $\\Vert H^\\perp [\\varphi ] - H^\\perp _N [\\varphi ] \\Vert ^2_{[H^1(\\Omega )]^*}\\le c \\Lambda _N^{\\bar{\\alpha }-1}\\Vert \\varphi \\Vert ^2_{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )}.$ Indeed, this bound is an immediate consequence of the following: $\\int _\\Omega (H^\\perp [\\varphi ] - H^\\perp _N [\\varphi ])\\cdot \\psi \\,{\\rm d}x&= \\int _\\Omega \\varphi \\cdot (H^\\perp [\\psi ] - H^\\perp _N [\\psi ])\\,{\\rm d}x\\\\&\\le \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Vert 
H^\\perp [\\psi ] - H^\\perp _N [\\psi ]\\Vert _{L^{\\frac{2\\gamma }{\\gamma -1}}(\\Omega )}\\\\& \\le \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Vert H^\\perp [\\psi ] - H^\\perp _N[\\psi ] \\Vert _{L^2_{\\bar{\\alpha },\\perp }}\\\\& \\le \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Lambda _N^{\\frac{1}{2}(\\bar{\\alpha }- 1)}\\Vert H^\\perp [\\psi ] - H^\\perp _N [\\psi ]\\Vert _{L^2_{1,\\perp }}\\\\& \\le c \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Lambda _N^{\\frac{1}{2}(\\bar{\\alpha }- 1)}(\\Vert H^\\perp [\\psi ]\\Vert _{L^2_{1,\\perp }} + \\Vert H^\\perp _N [\\psi ]\\Vert _{L^2_{1,\\perp }})\\\\&\\le c \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Lambda _N^{\\frac{1}{2}(\\bar{\\alpha }- 1)}(\\Vert \\psi \\Vert _{L^2_{1,\\perp }} + \\Vert \\psi \\Vert _{L^2_{1,\\perp }})\\\\& \\le c \\Vert \\varphi \\Vert _{L^{\\frac{2\\gamma }{\\gamma +1}}(\\Omega )} \\Lambda _N^{\\frac{1}{2}(\\bar{\\alpha }- 1)}\\Vert \\psi \\Vert _{H^1(\\Omega )},$ where the penultimate inequality in this chain of inequalities follows by noting that $a_n[H^\\perp [\\psi ]]=a_n[\\psi ]$ for all $n \\ge 1$ , that $a_n[H^\\perp _N [\\psi ]] = a_n[\\psi ]$ for $1 \\le n \\le N$ and $a_n[H^\\perp _N [\\psi ]] = 0$ for $n > N$ , whereby $\\Vert H^\\perp [\\psi ]\\Vert _{L^2_{1,\\perp }} = \\Vert \\psi \\Vert _{L^2_{1,\\perp }}$ and $\\Vert H^\\perp _N [\\psi ]\\Vert _{L^2_{1,\\perp }} \\le \\Vert \\psi \\Vert _{L^2_{1,\\perp }}$ .", "With these arguments in hand, the proof of (REF ) reduces to showing that $\\int _0^T \\int _\\Omega H^\\perp _N[\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp _N [u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\rightarrow 0\\mbox{ for any }w \\in C^\\infty _c(\\overline{\\Omega } \\times [0,T))^3, \\ {{\\mathrm {div}}_x}\\,w = 0,\\ w \\cdot n |_{\\partial \\Omega } = 0$ as $\\varepsilon \\rightarrow 0$ , or, rather, thanks to (REF 
), it suffices to show that $\\int _0^T \\int _\\Omega H^\\perp _N[\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp _N [\\rho _\\varepsilon u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t\\rightarrow 0 \\mbox{ for any }w \\in C^\\infty _c(\\overline{\\Omega } \\times [0,T))^3, \\ {{\\mathrm {div}}_x}\\,w = 0,\\ w \\cdot n |_{\\partial \\Omega } = 0$ as $\\varepsilon \\rightarrow 0$ ; this is done in the next subsection." ], [ "Weak limit in the convective term", "In order to prove (REF ) let us choose a test function for (REF ) of the following `separated' form $\\varphi (x,t) = \\kappa (t)\\, \\zeta _n(x) \\quad \\mbox{ with } \\kappa \\in C^\\infty _c (0,T),$ where $\\zeta _n$ and $\\Lambda _n$ solve the eigenvalue problem (REF ), and for (REF ) as a test function we take $\\varphi (x,t) = \\kappa (t)\\, \\frac{1}{\\sqrt{\\Lambda _n}} \\nabla _x \\zeta _n \\quad \\mbox{ with } \\kappa \\in C^\\infty _c(0,T) .$ Consequently, we obtain the following system of ordinary differential equations: $\\begin{split}\\varepsilon \\partial _t( b_n[r_\\varepsilon ]) - \\sqrt{\\Lambda _n}\\, a_n[V_\\varepsilon ] & = 0 \\quad \\mbox{ in } (0,T),\\\\\\varepsilon \\partial _t( a_n[V_\\varepsilon ]) + p^{\\prime }(\\bar{\\rho }) \\sqrt{\\Lambda _n}\\, b_n [r_\\varepsilon ] & = \\varepsilon L_{\\varepsilon ,n} \\quad \\mbox{ in } (0,T),\\end{split}$ for $n= 1,2, \\dots $ , where $a_n[V_\\varepsilon ]$ are the Fourier coefficients of $V_\\varepsilon $ , defined by (REF ), $b_n[r_\\varepsilon ] := \\int _{\\Omega } r_\\varepsilon \\zeta _n \\,{\\rm d}x,$ and $\\Vert L_{\\varepsilon ,n} \\Vert _{L^1(0,T)} \\le c\\quad \\mbox{ for any fixed }n=1,2,3,\\dots .$ Then, in terms of the Helmholtz projection, (REF ) reads as follows: $\\begin{split}\\varepsilon \\partial _t ([r_\\varepsilon ]_N) + {{\\mathrm {div}}_x}\\,(H^\\perp _N[\\rho _\\varepsilon u_\\varepsilon ]) & = 0 \\quad \\mbox{ in } \\Omega \\times (0,T),\\\\\\varepsilon \\partial _t (H^\\perp _N[\\rho _\\varepsilon 
u_\\varepsilon ]) + p^{\\prime }(\\bar{\\rho }) \\nabla _x [r_\\varepsilon ]_N & = \\varepsilon L_{\\varepsilon ,N} \\quad \\mbox{ in } (0,T) \\times \\Omega ,\\end{split}$ where we set $[r_\\varepsilon ]_N := \\sum _{n=1}^N b_n [r_\\varepsilon ] \\zeta _n$ , and thanks to (REF ) we get that $\\Vert L_{\\varepsilon ,N}\\Vert _{L^1(\\Omega \\times (0,T))} \\le c$ for any fixed $N\\ge 1$ .", "Notice that (REF ) is satisfied in a strong sense, since $[r_\\varepsilon ]_N$ and $H^\\perp _N[\\rho _\\varepsilon u_\\varepsilon ]$ are regular enough.", "By introducing the potential $\\Phi _{\\varepsilon ,N}$ via $\\nabla _x \\Phi _{\\varepsilon ,N} = H^\\perp _N[\\rho _\\varepsilon u_\\varepsilon ], \\quad \\int _\\Omega \\Phi _{\\varepsilon ,N} \\,{\\rm d}x= 0,$ we can reformulate (REF ) as $\\int _0^T \\int _\\Omega H^\\perp _N [\\rho _\\varepsilon u_\\varepsilon ] \\otimes H^\\perp _N [\\rho _\\varepsilon u_\\varepsilon ] : \\nabla _x w \\,{\\rm d}x\\, {\\rm d}t= - \\int _0^T \\int _\\Omega \\Delta _x \\Phi _{\\varepsilon ,N} \\nabla _x \\Phi _{\\varepsilon ,N} \\cdot w \\,{\\rm d}x\\, {\\rm d}t,$ where $w \\in C^\\infty _c(\\overline{\\Omega } \\times (0,T))^3$ , ${{\\mathrm {div}}_x}\\,w = 0$ , $w \\cdot n |_{\\partial \\Omega }= 0.$ Then by (REF ) we obtain that $\\int _0^T \\int _\\Omega \\Delta _x \\Phi _{\\varepsilon ,N} \\nabla _x \\Phi _{\\varepsilon ,N} \\cdot w \\,{\\rm d}x\\, {\\rm d}t= \\varepsilon \\int _0^T \\int _\\Omega [r_\\varepsilon ]_N \\nabla _x \\Phi _{\\varepsilon ,N} \\cdot \\partial _t w \\,{\\rm d}x\\, {\\rm d}t+ \\varepsilon \\int _{0}^T \\int _\\Omega [r_\\varepsilon ]_N L_{\\varepsilon ,N} \\cdot w \\,{\\rm d}x\\, \\mathrm {d}t .$ Therefore, thanks to (REF ), the right-hand side of (REF ) converges to zero as $\\varepsilon \\rightarrow 0$ for any fixed $w$ as above.", "This step completes the proof of (REF ) and consequently we may replace $\\overline{\\bar{\\rho }U\\otimes U}$ by ${\\bar{\\rho }U\\otimes U}$ in (REF ), and deduce that (REF ) is 
satisfied.", "The formulation (REF ) with the incompressibility constraint (REF ) is supplemented by the boundary conditions $U\\cdot n|_{\\partial \\Omega } = 0, \\quad [\\mathsf {D}U]n \\times n|_{\\partial \\Omega } = 0.$ Moreover, using (REF ) we can show that $U\\in C([0,T]; L^2_{weak}(\\Omega ))$ , and thus we also deduce that $\\int _\\Omega U(0,\\cdot ) \\cdot w \\,{\\rm d}x= \\int _\\Omega U_0 \\cdot w \\,{\\rm d}x\\quad \\mbox{ for all } w \\in C^\\infty _c(\\overline{\\Omega }), \\ {{\\mathrm {div}}_x}\\,w = 0; \\ w \\cdot n |_{\\partial \\Omega }= 0;$ in other words, $U(0,\\cdot ) = H[U_0]$ .", "The formulation (REF ) is satisfied for all functions $w$ that are smooth, divergence-free, and $w \\cdot {n}|_{\\partial \\Omega } = 0$ .", "However, this family of test functions can be extended to the one mentioned in (REF ) by a density argument and the estimates obtained in Section REF ." ], [ "Concluding remarks", "We studied the behaviour of global-in-time weak solutions to a class of bead-spring chain models with finitely extensible nonlinear elastic (FENE) spring potentials for dilute polymeric fluids, and we proved that as the Mach number tends to zero the system is driven to its incompressible counterpart.", "Our analysis was performed under the assumption that the velocity of the solvent in which the polymer molecules are immersed satisfies a complete slip boundary condition.", "The corresponding passage to the limit in the case of a no-slip boundary condition for the velocity field is, as in the case of the compressible Navier–Stokes system (i.e. in the absence of coupling to the Fokker–Planck equation), more technical (cf. Ch.7 of [7], particularly the first paragraph of subsection 7.1.2) and will be considered elsewhere.", "The existence of global weak solutions to the coupled incompressible Navier–Stokes–Fokker–Planck system was proved in [1] on arbitrary bounded open Lipschitz domains $\\Omega \\subset \\mathbb {R}^d$ , $d \\in \\lbrace 
2,3\\rbrace $ , while the existence proof in [3] for the corresponding compressible Navier–Stokes–Fokker–Planck system was restricted to bounded domains $\\Omega \\subset \\mathbb {R}^d$ , $d \\in \\lbrace 2,3\\rbrace $ , with $\\partial \\Omega \\in C^{2,\\alpha }$ , $\\alpha \\in (0,1)$ .", "Using the ideas in [8], the $C^{2,\\alpha }$ regularity of $\\partial \\Omega $ assumed in [3] can be relaxed to $\\Omega $ being a bounded open Lipschitz domain.", "For simplicity, and for the sake of consistency with the assumptions on $\\Omega $ in [7], we have, however, restricted ourselves here to domains of class $C^{2,\\alpha }$ ." ], [ "Acknowledgments", "AWK was partially supported by a Newton Fellowship of the Royal Society and by the grant Iuventus Plus 0871/IP3/2016/74 of the Ministry of Science and Higher Education of the Republic of Poland.", "Aneta Wróblewska-Kamińska Institute of Mathematics Polish Academy of Sciences ul. Śniadeckich 8 00-656 Warsaw, Poland Endre Süli Mathematical Institute University of Oxford Woodstock Road Oxford OX2 6GG, UK" ] ]
1906.04534
[ [ "Spring-Electrical Models For Link Prediction" ], [ "Abstract We propose a link prediction algorithm that is based on spring-electrical models.", "The idea to study these models came from the fact that spring-electrical models have been successfully used for network visualization.", "A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other.", "Therefore, we assumed that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them.", "We evaluate the proposed method against several popular baselines and demonstrate its flexibility by applying it to undirected, directed and bipartite networks." ], [ "Introduction", "Link prediction is usually understood as the problem of predicting missing edges in partially observed networks or predicting edges which will appear in the near future of evolving networks [21].", "A prediction is based on the currently observed edges and takes into account the topological structure of the network.", "Also, there may be some side information or meta-data such as node and edge attributes.", "The importance of the link prediction problem follows naturally from a variety of its practical applications.", "For example, popular online social networks such as Facebook and LinkedIn suggest a list of people you may know.", "Many e-commerce websites have personalized recommendations which can be interpreted as predictions of links in bipartite graphs [27].", "Link prediction can also help in the reconstruction of some partially studied biological networks by allowing researchers to focus on the most probable connections [19].", "More formally, in order to evaluate the performance of the proposed link prediction method we consider the following problem formulation.", "The input is a partially observed graph and our aim is to predict the status (existence or 
non-existence) of edges for unobserved pairs of nodes.", "This definition is sometimes called the structural link prediction problem [21].", "Another possible definition suggests predicting future edges based on the past edges, but it is limited to time-evolving networks which have several snapshots.", "An extensive survey of link prediction methods can be found in [17], [19], [6].", "Here we describe some of the most popular approaches that are usually used as baselines for evaluation [21], [28].", "The simplest framework of link prediction methods is the similarity-based algorithm, in which a score is assigned to each pair of nodes based on topological properties of the graph [19].", "This score should measure the similarity (also called proximity) of any two chosen nodes.", "For example, one such score is the number of common neighbours that two nodes share, since nodes that have many common neighbours usually tend to be connected with each other and to belong to one cluster.", "Other popular scores are the Shortest Distance, Preferential Attachment [3], Jaccard [29] and Adamic-Adar score [2].", "Another important class of link prediction methods are latent feature models [24], [20], [21], [7], [1].", "The basic idea is to assign each node a vector of latent features in such a way that connected nodes will have similar latent features.", "Many approaches from this class are based on the matrix factorization technique, which gained popularity through its successful application to the Netflix Prize problem [14].", "The basic idea is to factor the adjacency matrix of a network into the product of two matrices.", "The rows and columns of these matrices can be interpreted as latent features of the nodes.", "Latent features can also be the result of a graph embedding [9].", "In particular, there are recent attempts to apply neural networks for this purpose [26], [10].", "In this paper, we propose to use spring-electrical models to address the link prediction problem.", "These 
models have been successfully used for network visualization [8], [31], [12].", "A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other [25].", "Therefore, we assumed that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them.", "Thus, our idea is to use the described distance as a prediction score.", "We evaluate the proposed method against several popular baselines and demonstrate its flexibility by applying it to undirected, directed and bipartite networks.", "The rest of the paper is organized as follows.", "First, we formalize the considered problem and present standard metrics for performance evaluation in Section  and review related approaches which we used as baselines in Section .", "Next, we discuss spring-electrical models and introduce our method for link prediction in Section .", "We start a comparison of methods with a case study discussed in Section .", "Section  presents experiments with undirected networks.", "We modify the basic model to apply it to bipartite and directed networks in Section , followed by the conclusion in Section ." 
], [ "Problem Statement", "We focus on the structural definition of the link prediction problem [21].", "The network with some missing edges is given and the aim is to predict these missing edges.", "This definition allows us to work with networks having only a single time snapshot.", "We also assume that there is no side information such as node or edge attributes; thus, we focus on link prediction methods based solely on the currently observed link structure.", "More formally, suppose that we have a network ${G = \\langle V, E \\rangle }$ without multiple edges and loops, where $V$ is the set of nodes and $E$ is the set of edges.", "We assume that $G$ is a connected graph, otherwise we replace it by its largest connected component.", "The set of all pairs of nodes from $V$ is denoted by $U$ .", "Given the network $G$ , we do not actually know which edges are missing.", "Thus, we hide a random subset of edges $E_{pos} \\subset E$ , while keeping the network connected.", "The remaining edges are denoted by $E_{train}$ .", "Also, we randomly sample unconnected pairs of nodes $E_{neg}$ from $U \\backslash E$ .", "In this way, we form ${E_{test} = E_{pos} \\cup E_{neg}}$ such that ${|E_{neg}| = |E_{pos}|}$ and ${E_{test} \\cap E_{train} = \\emptyset }$ .", "We train models on the network ${G^{\\prime } = \\langle V, E_{train} \\rangle }$ and try to find missing edges $E_{pos}$ in $E_{test}$ .", "We assume that each algorithm provides a list of scores for all pairs of nodes $(u,v) \\in E_{test}$ .", "The $score(u,v)$ characterizes the similarity of nodes $u$ and $v$ .", "The higher the $score(u,v)$ , the higher the probability that these nodes are connected.", "To measure the quality of algorithms we use the area under the receiver operating characteristic curve (AUC) [11].", "From a probabilistic point of view, AUC is the probability that a randomly selected pair of nodes from $E_{pos}$ has a higher score than a randomly selected pair of nodes from $E_{neg}$ : $\\sum _{(u_p, v_p) 
\\in E_{pos}}\\sum _{(u_n, v_n) \\in E_{neg}}\\frac{ I \\left[{score(u_p,v_p) > score(u_n,v_n)}\\right] }{ | E_{pos} | \\cdot | E_{neg} |},$ where $I[\\cdot ]$ denotes an indicator function.", "We repeat the evaluation several times in order to compute the mean AUC as well as the standard deviation of AUC." ], [ "Related work", "In this section we describe some popular approaches to the link prediction problem.", "The mentioned methods will be used as baselines during our experiments." ], [ "Local Similarity Indices", "Local similarity-based methods calculate $score(u,v)$ by analyzing direct neighbours of $u$ and $v$ based on different assumptions about link formation behavior.", "We use $\\delta (u)$ to denote the set of neighbours of $u$ .", "The assumption of the Common Neighbours index is that a pair of nodes has a higher probability of being connected if they share many common neighbours $CN(u, v) := |\\delta (u) \\cap \\delta (v)|.$ The Adamic-Adar index is a weighted version of the Common Neighbours index $AA(u, v) := \\sum _{z \\in \\delta (u) \\cap \\delta (v)} \\frac{1}{|\\delta (z)|}.$ The weight of a common neighbour is inversely proportional to its degree.", "The Preferential Attachment index is motivated by the Barabási–Albert model [3], which assumes that the ability of a node to obtain new connections correlates with its current degree, $PA(u, v) := |\\delta (u)| \\cdot |\\delta (v)|.$ Our choice of these three local similarity indices is based on the comparison of methods conducted in [17], [21]."
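These local indices, together with the pairwise AUC criterion defined earlier, can be sketched in a few lines of Python. This is a minimal illustration, not the paper's evaluation code; the toy graph is made up, and `adj` maps each node to its neighbour set (so `adj[u]` plays the role of $\delta(u)$):

```python
# Minimal sketches of the local similarity indices and the pairwise AUC.
# `adj` maps each node to the set of its neighbours; adj[u] = delta(u).

def common_neighbours(adj, u, v):
    # CN(u, v) = |delta(u) & delta(v)|
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    # AA(u, v) = sum over common neighbours z of 1 / |delta(z)|
    return sum(1.0 / len(adj[z]) for z in adj[u] & adj[v])

def preferential_attachment(adj, u, v):
    # PA(u, v) = |delta(u)| * |delta(v)|
    return len(adj[u]) * len(adj[v])

def auc(scores_pos, scores_neg):
    # Fraction of (positive, negative) pairs in which the positive pair
    # outscores the negative one (ties count as failures, matching the
    # strict inequality in the indicator-function definition).
    wins = sum(sp > sn for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy graph: path 0-1-2-3 plus the chord 1-3.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
print(common_neighbours(adj, 0, 2))  # node 1 is the only common neighbour
print(adamic_adar(adj, 0, 2))        # 1 / |delta(1)| = 1/3
```

In practice one would compute such scores for every pair in $E_{test}$ and feed the two score lists into the `auc` function.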
], [ "Matrix Factorization", "The matrix factorization approach is extensively used for the link prediction problem [24], [20], [21], [7], [1].", "The adjacency matrix of the network is approximately factorized into the product of two matrices with smaller ranks.", "Rows and columns of these matrices can be interpreted as latent features of the nodes, and the predicted score for a pair of nodes is the dot-product of the corresponding latent vectors.", "A truncated singular value decomposition (Truncated SVD) of a matrix $A \\in R^{m \\times n}$ is a factorization of the form $A_r = U_r \\Sigma _r V^T_r$ , where $U_r \\in R^{m \\times r}$ has orthonormal columns, $\\Sigma _r = diag(\\sigma _1, ... , \\sigma _r) \\in R^{r \\times r}$ is a diagonal matrix with $\\sigma _i \\ge 0$ and $V_r \\in R^{n \\times r}$ also has orthonormal columns [13].", "In fact, it solves the following optimization problem: $\\begin{aligned}\\underset{A_r: rank(A_r) \\le r}{\\text{min}} \\Vert A - U_r \\Sigma _r V^T_r \\Vert _F = \\sqrt{\\sigma _{r+1}^2 + ... + \\sigma _{n}^2},\\end{aligned}\\\\$ where $\\sigma _1, ... 
, \\sigma _n$ are the singular values of the matrix $A$ .", "To cope with sparse matrices we use the scipy.sparse.linalg.svds (https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.sparse.linalg.svds.html) implementation of Truncated SVD, based on the implicitly restarted Arnoldi method [30].", "Another popular approach for training latent features is non-negative matrix factorization (NMF).", "NMF with $r$ components is a group of algorithms where a matrix $A \\in R^{n \\times m}$ is factorized into two matrices $W_r \\in R^{n \\times r}$ and $H_r \\in R^{m \\times r}$ with the property that all three matrices have non-negative elements [18]: $\\begin{aligned}\\underset{W_r, H_r: W_r \\ge 0, H_r \\ge 0}{\\text{min}} \\Vert A - W_r H_r^T \\Vert _F.\\end{aligned}\\\\$ These conditions are consistent with the non-negativity of the adjacency matrix in our problem.", "We take as a baseline the alternating non-negative least squares method with a coordinate descent optimization approach [5] from sklearn.decomposition.NMF (http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html)."
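As an illustration, the SVD-based scoring scheme can be sketched as follows. This is a minimal sketch using the scipy routine mentioned above, not the paper's experiment code; the toy graph and the rank $r = 2$ are chosen only for the example:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Toy undirected graph: a 4-cycle 0-1-2-3-0.
n = 4
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = A[v, u] = 1.0

# Rank-r truncated SVD of the adjacency matrix: A ~ U_r Sigma_r V_r^T.
r = 2
U, s, Vt = svds(csr_matrix(A), k=r)

def svd_score(u, v):
    # Dot product of the latent vectors, i.e. the (u, v) entry of A_r.
    return float((U[u] * s) @ Vt[:, v])

# Adjacent nodes get a higher score than opposite (unconnected) ones.
assert svd_score(0, 1) > svd_score(0, 2)
```

For a 4-cycle the nonzero singular values are $2, 2$, so the rank-2 reconstruction already recovers the adjacency matrix exactly.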
], [ "Neural Embedding", "Several attempts to apply neural networks for graph embedding, such as DeepWalk [26] and node2vec [10], were motivated by word2vec, a widely used algorithm for extracting vector representations of words [23].", "The general idea of adopting word2vec for graph embedding is to treat nodes as “words” and generate “sentences” using random walks.", "The objective is to maximize the likelihood of observed node co-occurrences in random walks.", "The probability that nodes $u$ and $v$ with latent vectors $x_u$ and $x_v$ co-occur in a random walk is estimated using a softmax: $P(u | v) = \\frac{\\exp (x_u \\cdot x_v)}{\\sum _{w \\in V} \\exp (x_w \\cdot x_v)}.$ In practice, a direct computation of the softmax is infeasible; thus, approximations such as “negative sampling” or the “hierarchical softmax” are used [22].", "In this paper we consider node2vec, which has shown good performance in link prediction [10].", "This method generates 2nd order random walks $c_1, \\ldots , c_n$ with a transition probability defined by the following relation: $P\\left(c_i = x | c_{i-1} = v, c_{i-2} = t\\right) \\propto {\\left\\lbrace \\begin{array}{ll}0,~~\\text{if}~~(v,x) \\notin E\\\\\\frac{1}{p},~~\\text{else if}~~d_{tx} = 0\\\\1,~~\\text{else if}~~d_{tx} = 1\\\\\\frac{1}{q},~~\\text{else if}~~d_{tx} = 2\\end{array}\\right.}$ where $d_{tx}$ is the graph distance between nodes $t$ and $x$ .", "The parameters $p$ and $q$ allow one to interpolate between walks that are more akin to breadth-first or depth-first search.", "The generated random walks are given as input to word2vec.", "Finally, a vector $x_u$ is assigned to each node $u$ .", "In order to estimate $score(u, v)$ we compute the dot-product of the corresponding latent vectors: $node2vec(u, v) := x_u \\cdot x_v.$ We have used a reference implementation of node2vec (https://github.com/aditya-grover/node2vec) with default parameters unless stated otherwise."
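The biased transition rule above can be sketched directly. This is a minimal re-implementation of the walk bias for illustration, not the reference node2vec code; the toy graph and the values of $p$ and $q$ are made up:

```python
# Normalized node2vec transition probabilities for a 2nd-order walk that is
# currently at node v, having arrived from node t. For a neighbour x of v:
# d_tx = 0 means x == t, d_tx = 1 means x is also a neighbour of t,
# and d_tx = 2 otherwise.

def transition_probs(adj, t, v, p, q):
    weights = {}
    for x in adj[v]:
        if x == t:            # d_tx = 0: step back to the previous node
            weights[x] = 1.0 / p
        elif x in adj[t]:     # d_tx = 1: stay close to t
            weights[x] = 1.0
        else:                 # d_tx = 2: move away from t
            weights[x] = 1.0 / q
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
probs = transition_probs(adj, t=0, v=2, p=2.0, q=4.0)
# Candidates: back to 0 (weight 1/p), to 1 (weight 1), to 3 (weight 1/q).
```

With $p > 1$ and $q > 1$ the walk is discouraged from both backtracking and straying far from the previous node, which is the interpolation between breadth-first and depth-first behaviour described above.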
], [ "Spring-Electrical Models for Link Prediction", "Currently, the main application of spring-electrical models in graph analysis is graph visualization.", "The basic idea is to represent a graph as a mechanical system of like charges connected by springs [8].", "In this system, repulsive forces act between each pair of nodes, while attractive forces act between adjacent nodes.", "In an equilibrium state of the system, the edges tend to have uniform length (because of the spring forces), and nodes that are not connected tend to be drawn further apart (because of the electrical repulsion).", "In practice, edge attraction and vertex repulsion forces may be defined using functions that are not precisely based on Hooke's and Coulomb's laws.", "For instance, in [8], the pioneering work of Fruchterman and Reingold, repulsive forces are inversely proportional to the distance and attractive forces are proportional to the square of the distance.", "In [31] and [12] spring-electrical models were further studied and the repulsive force gained new parameters $C$ , $K$ and $p$ , which we will discuss later.", "In our research, we will also use their modification with the following forces: $\\begin{split}f_r(u, v) &= -C K^{1+p}/||x_u - x_v||^p, \\,\\,\\,\\,\\,\\, p > 0, u \\ne v; u, v \\in V ,\\\\f_a(u, v) &= ||x_u - x_v||^2/K , \\,\\,\\,\\,\\,\\, (u, v) \\in E; u, v \\in V.\\end{split}$ Here we denote by $\\Vert x_u-x_v\\Vert $ the Euclidean distance between the coordinate vectors of the nodes $u$ and $v$ in a layout, and by $f_r(u, v)$ and $f_a(u, v)$ we denote the values of the repulsive and attractive forces, respectively.", "Figure REF illustrates the forces acting on one of the nodes in a simple star graph.", "Figure: Spring-electrical model", "Figure: Triangulation of 3d-sphere (3d latent features)", "Figure: Triangulation of 3d-sphere (AUC scores)", "By exploiting the fact that force is the negative gradient of energy, the force model can be transformed into an energy model, such that force equilibria correspond to 
(local) energy minima [25].", "An optimal layout is achieved in the equilibrium state of the system with the minimum value of the system energy.", "Thus, finding an optimal layout amounts to the following optimization problem: $\\min _{\\lbrace x_w, w \\in V \\rbrace } \\left( \\sum _{(u,v) \\in E} \\frac{||x_u - x_v||^3}{3K} + \\sum _{{u, v \\in V\\\\ u \\ne v}} \\frac{\\frac{1}{p-1} C K^{1+p} }{||x_u - x_v||^{p-1} } \\right).$ Many algorithms have been proposed to find the optimal layout.", "All of them meet two main challenges: (i) the computational complexity of the repulsive forces; (ii) slow convergence and trapping in local minima.", "Note that local minima of the energy might lead to layouts with bad visualization characteristics.", "We will use the Scalable Force Directed Placement (SFDP) algorithm described in [12], which is able to overcome both challenges.", "The computational complexity challenge in SFDP is solved by Barnes-Hut optimization [4].", "As a result, the total complexity of calculating all repulsive forces reduces to $O(|V|\\log |V|)$ , compared to a straightforward method with complexity $O(|V|^2)$ .", "The second challenge is solved by a multilevel approach in combination with an adaptive cooling scheme [31], [12].", "The idea is to iteratively coarsen the network until the network size falls below some threshold.", "Once the initial layout for the coarsest network is found, it is successively refined and extended to all levels, starting with the coarsest network and ending with the original.", "Let us now discuss how the model parameters influence the equilibrium states of the system.", "According to Theorem 1 in [12], the parameters $K$ and $C$ do not change the possible equilibria, only scale them; however, they influence the speed of convergence to an equilibrium.", "As follows from equation (), the parameter $p$ controls the strength of the repulsive forces.", "For small values of $p$ , nodes on the periphery of a layout are 
affected strongly by repulsive forces of central nodes.", "This leads to the so-called “peripheral effect”: edges at the periphery are longer than edges at the center [12].", "On the other hand, although larger values of $p$ reduce the “peripheral effect”, too weak repulsive forces might lead to cluster collapse [25].", "We study the influence of the repulsive force exponent $p$ on the performance of our method in Section .", "A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other [25].", "Therefore, we assumed that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them.", "Thus, to address the link prediction problem, we first find a network layout using SFDP and then use the distance between nodes as a prediction score: $\\text{SFDP}(u, v) = \\Vert x_u - x_v \\Vert .$ Our method can be interpreted as a latent feature approach to link prediction."
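Once a layout has been computed (by an SFDP implementation such as graphviz's or graph_tool's), scoring a pair of nodes is trivial. A minimal sketch with a hard-coded layout follows; note that we negate the distance here, which is an assumption on our part, so that a larger score corresponds to a more probable link, matching the AUC convention used earlier:

```python
import math

# Link prediction score from a precomputed SFDP layout: the negated
# Euclidean distance between node positions (closer nodes = higher score).
# The layout is hard-coded here; in practice it would come from an SFDP
# implementation such as graphviz or graph_tool.

layout = {
    "a": (0.0, 0.0),
    "b": (1.0, 0.0),
    "c": (5.0, 3.0),
}

def sfdp_score(layout, u, v):
    return -math.dist(layout[u], layout[v])

# Nodes placed close together receive a higher score.
assert sfdp_score(layout, "a", "b") > sfdp_score(layout, "a", "c")
```

The node names and coordinates above are illustrative only; any mapping from nodes to layout coordinates works.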
], [ "Case study", "Before discussing experiments with real-world networks, we consider a graph obtained by a triangulation of a three-dimensional sphere.", "This case study reveals an important difference between SFDP and the other baselines which use a latent feature space.", "First, we have trained three-dimensional latent vectors and visualized them (see Figure REF ).", "SFDP arranges latent vectors on the surface of a sphere, as one might expect.", "SVD's latent vectors form three mutually perpendicular rays, and node2vec places latent vectors on the surface of a cone.", "The reason for such different behavior is that SFDP uses the Euclidean distance in its loss function (see equation (REF )), while node2vec and SVD rely on the dot-product.", "One can see that the dot-product based methods fail to express the fact that all nodes in the considered graph are structurally equivalent.", "The difference between the dot-product and the Euclidean distance is in the way they deal with latent vectors corresponding to completely unrelated nodes.", "The dot-product tends to make such vectors perpendicular, while the Euclidean distance just places them “far away”.", "The number of available mutually perpendicular directions is determined by the dimensionality of the latent feature space.", "As a result, dimensionality becomes restrictive for dot-product based methods.", "In order to further support this observation, we have evaluated the AUC score of the discussed methods depending on the dimensionality of the latent feature space.", "The results are presented in Figure REF .", "SFDP has good quality starting from very low dimensions.", "Node2vec achieves a reasonable quality in the ten-dimensional latent feature space, while SVD and NMF need about 100 dimensions.", "Our experiments with real-world networks, described below, confirm that SFDP might have a competitive quality of link prediction even in very low dimensions.", "This advantage might lead to some practical applications, as 
many problems related to vector embeddings are much easier in low dimensions, e.g., searching for nearest neighbors." ], [ "Experiments with Undirected Networks", "First, we have chosen several undirected networks in which geographical closeness correlates with the probability of a connection.", "Thus, the ability to infer a distance feature can be tested on them.", "“PowerGrid” [15] is an undirected and unweighted network representing the US electric power grid.", "There are $4,941$ nodes and $6,594$ supply lines in the system.", "It is a sparse network with average degree $2.7$ .", "“Euroroad” [15] is a road network located mostly in Europe.", "Nodes represent cities, and an edge between two nodes denotes that they are connected by a road.", "This network consists of $1,174$ vertices (cities) and $1,417$ edges (roads).", "“Airport” [15] has information about $28,236$ flights between $1,574$ US airports in 2010.", "The airport network has hubs, i.e.", "the several busiest airports.", "Thus, connections occur not only because of geographical closeness, but also based on the airport sizes.", "We have also chosen several undirected networks of other types.", "“Facebook” [15] is a Facebook friendship network consisting of $817,035$ friendships and $63,731$ users.", "This network is a subset of the full Facebook friendship graph.", "“Reactome” [15] has information about $147,547$ interactions between $6,327$ proteins.", "“Ca-HepTh” [16] is a collaboration network from the arXiv High Energy Physics - Theory section from January 1993 to April 2003.", "The network has $9,877$ authors and $25,998$ collaborations.", "All the datasets and our source code are available in our GitHub repository: https://github.com/KashinYana/link-prediction.", "In our experiments we have used two implementations of SFDP, from the graphviz (http://www.graphviz.org) and graph_tool (https://graph-tool.skewed.de/) libraries, with the default parameters unless stated otherwise.", "Table: Comparison with latent 
features models", "Following the discussion in Section , we first analyzed the behavior of latent feature methods in low dimensions.", "On the sparse datasets “PowerGrid” and “Euroroad” we hide $10\\%$ of edges in order to keep the train network connected.", "On the other datasets $30\\%$ of edges were included in the test set.", "We repeat this process several times and report mean AUC scores as well as 95% confidence intervals.", "The results are presented in Figure REF .", "As expected, the dot-product based methods have a clear growing trend on most of the networks, with lower performance in low dimensions.", "In contrast, SFDP has good quality starting from two dimensions, usually with a slight increase at dimensionality three and a slowly decreasing trend after this point.", "We have also studied higher dimensions (up to 500 dimensions).", "In Table REF , for each method and dataset, one can find the optimal dimension with the corresponding AUC score; standard deviation values smaller than $0.0005$ are shown as zero.", "Surprisingly, SFDP demonstrates competitive quality in comparison even with high-dimensional dot-product based methods.", "This observation suggests that real networks might have a lower inherent dimensionality than one might expect.", "Figure: Influence of the repulsive force exponent", "As the influence of dimensionality on the performance of SFDP is not very significant, we further focused on the two-dimensional SFDP.", "As for the other parameters, according to Section , the parameters $C$ and $K$ do not change SFDP performance regarding link prediction since they only scale the optimal layout.", "Thus, we tried to vary the parameter $p$ .", "Based on Figure REF , we have decided to continue using the default value of the repulsive force exponent $p$ , which equals 2.", "Table: Comparison with local similarity indices", "Finally, we have compared SFDP with local similarity indices.", "The results can be found in Table REF .", "SFDP has shown the 
largest advantage on the geographical networks “PowerGrid” and “Euroroad”.", "This observation supports our hypothesis that SFDP can infer geographical distance.", "The result of SFDP on the “Airport” network is not as good; we link this fact to the presence of distinct hubs, and we will return to this network in Section REF .", "On the other datasets the spring-electrical approach has shown superior or competitive quality." ], [ "Model Modifications", "The basic spring-electrical model can be adapted to different network types.", "In this section we present possible modifications for bipartite and directed networks." ], [ "Bipartite Networks", "A bipartite network is a network whose nodes can be divided into two disjoint sets such that edges connect nodes only from different sets.", "It is interesting to study this special case because link prediction in bipartite networks is close to a collaborative filtering problem.", "We use the following bipartite datasets in our experiments: the “Movielens” [15] dataset contains information about how users rated movies on the website http://movielens.umn.edu/.", "The version of the dataset which we used has $9,746$ users, $6,040$ movies and 1 million ratings.", "Since we are not interested in the rating scores, we assign the same weight to all edges.", "“Frwiki” [15] is a network of $201,727$ edit events done by $30,997$ users in $2,884$ articles of the French Wikipedia.", "“Condmat” [15] is an authorship network from the arXiv condensed matter section (cond-mat) from 1995 to 1999.", "It contains $58,595$ edges which connect $16,726$ publications and $38,741$ authors.", "Let us consider the “Movielens” dataset.", "This network has two types of nodes: users and movies.", "When applying the SFDP model to this network, we expect movies of the same topic to be placed nearby.", "Similarly, we expect users who may rate the same movie to be located close to each other.", "In the basic spring-electrical models the repulsive forces are assigned between all 
nodes.", "This works well for visualization purposes, but it hinders the formation of clusters of users and movies.", "Therefore, we removed the repulsive forces between nodes of the same type.", "Consider a bipartite network $G= \\langle V, E \\rangle $ , whose nodes are partitioned into two subsets $V = L \\sqcup R$ such that $E \\subset L \\times R$ .", "In our modification of the SFDP model for bipartite networks, denoted Bi-SFDP, the following forces are assigned between nodes.", "$f_{r}(u, v) &= -C K^{(1+p)} / ||x_u - x_v||^p, \\,\\,\\,\\,\\,\\, p > 0, u \\in L, v \\in R ,\\\\f_a(u, v) &= ||x_u - x_v||^2 / K , \\,\\,\\,\\,\\,\\, (u, v) \\in E; u \\in L, v \\in R.$ To carry out experiments with the Bi-SFDP model we have written a patch for the graph_tool library.", "Figure REF demonstrates how this modification affects the optimal layout.", "We consider a small user–movie network of ten users and three movies.", "Note that in Figure REF  (b) some of the yellow nodes were collapsed.", "The reason is that if we remove the repulsive forces between nodes of the same type, users which link to the same movies will have the same positions.", "Thus, the Bi-SFDP model can assign close positions to users with the same interests.", "During our preliminary experiments with bipartite networks we found that PA demonstrates unexpectedly high results.", "The reason is that the preferential attachment mechanism is very strong in bipartite networks.", "In order to focus on subtler effects governing link formation, we have changed the way we sample the set of negative pairs of nodes $E_{neg}$ .", "Half of the pairs of nodes in $E_{neg}$ were sampled with probability proportional to the product of their degrees; such pairs of nodes are counterexamples for the preferential attachment mechanism.", "The results of the experiments are summarized in Table REF .", "Baselines CN and AA are not included in the table, because their scores are always equal to zero on bipartite networks.", "Figure: The visualization of a 
bipartite network by SFDP and Bi-SFDP", "The Bi-SFDP modification has shown an increase in quality compared with the basic SFDP model on the “Movielens” and “Frwiki” datasets.", "It means that our assumption works for these networks.", "In contrast, on “Condmat” the standard SFDP and node2vec outperform all other baselines.", "In general, the results for bipartite graphs are very dataset-dependent." ], [ "Directed Networks", "Spring-electrical models are not suitable for predicting links in directed networks because of the symmetry of the forces and the distance function.", "Therefore, we first propose to transform the original directed network.", "Given a directed network $G= \\langle V, E \\rangle $ , we obtain an undirected bipartite network $G^{\\prime }= \\langle V^{\\prime }, E^{\\prime } \\rangle $ , $V^{\\prime } = L \\sqcup R$ , $E^{\\prime } \\subset L \\times R$ by the following process.", "Each node $u \\in V$ corresponds to two nodes $u_{out} \\in L$ and $u_{in} \\in R$ .", "One of the nodes is responsible for outgoing connections, the other one for incoming connections.", "Thus, for each directed edge $(u, v) \\in E$ an edge $(u_{out}, v_{in})$ is added to $E^{\\prime }$ .", "Figure REF illustrates the described transformation.", "As a result, $G^{\\prime }$ contains information about all the directed edges of $G$ .", "Then the Bi-SFDP model can be applied to find a layout of the network $G^{\\prime }$ .", "Finally, prediction scores for pairs of nodes from the network $G$ can be easily inherited from the layout of the network $G^{\\prime }$ .", "We have called this approach Di-SFDP and have tested it on the following datasets.", "“Twitter” [15] is a user-user network, where directed edges represent the fact that one user follows the other user.", "The network contains $23,370$ users and $33,101$ follows.", "“Google+” [15] is also a user-user network.", "Directed links indicate that one user has the other user in his circles.", "There are $23,628$ users and $39,242$ 
friendships in the network.", "“Cit-HepTh” [15] has information about $352,807$ citations among $27,770$ publications in the arXiv High Energy Physics Theory (hep-th) section.", "We call all pairs of nodes $(u, v)$ such that $(v, u) \\in E$ but $(u, v)\\notin E$ difficult pairs.", "They cannot be correctly scored by the basic SFDP model.", "It is especially interesting to validate models on such pairs of nodes.", "Therefore, in our experiments half of the pairs of nodes in $E_\\text{neg}$ are difficult pairs.", "The experiment results are shown in Table REF .", "The baselines PA, CC and AA can also be calculated on $G^{\\prime }$ , but their quality is close to that of a random predictor.", "One can see that Di-SFDP outperforms the other baselines on two of the datasets and has competitive quality on the last one.", "Note that out-of-the-box node2vec cannot correctly score difficult pairs of nodes, as it infers only one latent vector for each node, while the other methods have two latent vectors, one responsible for outgoing connections and another one for incoming connections.", "Di-SFDP has also helped us to improve quality on the “Airport” dataset.", "Although “Airport” is an undirected network, due to the presence of hubs it has a natural orientation of edges.", "Thus, our idea was to first orient edges from the nodes of low degrees to the nodes of high degrees and then apply Di-SFDP.", "This trick allowed us to improve the mean AUC from 0.938 to 0.972.", "Table: AUC scores for directed datasets, $|E_{pos}|/|E| = 0.3$ ", "Figure: Di-SFDP graph transformation" ], [ "Conclusion", "In this paper we proposed to use spring-electrical models to address the link prediction problem.", "We first applied the basic SFDP model to link prediction in undirected networks and then adapted it to bipartite and directed networks by introducing two novel methods, Bi-SFDP and Di-SFDP.", "The considered models demonstrate superior or competitive performance of our approach over several popular 
baselines.", "A distinctive feature of the proposed method in comparison with other latent feature models is its good performance even in very low dimensions.", "This advantage might lead to some practical applications, as many problems related to vector embeddings are much easier in low dimensions, e.g., searching for nearest neighbors.", "On the other hand, this observation suggests that real networks might have a lower inherent dimensionality than one might expect.", "We consider this work a good motivation for a new set of research directions.", "Future research can be focused on choosing an optimal distance measure for latent feature models and on a deeper analysis of the inherent dimensionality of networks." ] ]
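The layout-then-distance scoring pipeline of this paper can be sketched in plain Python. This is a didactic sketch under stated assumptions: a naive $O(|V|^2)$ force loop with a capped update step stands in for SFDP's Barnes-Hut summation and multilevel coarsening, and the parameter values and the toy two-cluster test graph are illustrative choices, not the paper's experimental setup.

```python
import math
import random
from itertools import combinations

def spring_electrical_layout(nodes, edges, K=1.0, C=0.2, p=2.0,
                             iters=400, max_step=0.05, seed=0):
    """Toy 2-D spring-electrical layout in the spirit of SFDP:
    attractive force ||x_u - x_v||^2 / K along edges, repulsive force
    C * K^(1+p) / ||x_u - x_v||^p between every pair of nodes.
    O(|V|^2) per iteration -- no Barnes-Hut, no multilevel coarsening."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for v in nodes}
    edge_set = {frozenset(e) for e in edges}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u, v in combinations(nodes, 2):
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = C * K ** (1 + p) / d ** p       # repulsion (all pairs)
            if frozenset((u, v)) in edge_set:
                f -= d * d / K                  # attraction (edges only)
            fx, fy = f * dx / d, f * dy / d
            disp[u][0] += fx; disp[u][1] += fy
            disp[v][0] -= fx; disp[v][1] -= fy
        for v in nodes:
            # cap the per-iteration movement: a crude cooling scheme
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            s = min(max_step, d) / d
            pos[v][0] += s * dx
            pos[v][1] += s * dy
    return pos

def sfdp_score(pos, u, v):
    """Prediction score SFDP(u, v) = ||x_u - x_v||: smaller distance
    means a higher predicted probability of a link."""
    return math.hypot(pos[u][0] - pos[v][0], pos[u][1] - pos[v][1])
```

Ranking all non-edges by ascending `sfdp_score` then gives an AUC-style evaluation of the kind reported in the tables above.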
1906.04548
[ [ "Comparative Penning-Trap Tests of Lorentz and CPT Symmetry" ], [ "Abstract The theoretical and experimental prospects for Lorentz- and CPT-violating quantum electrodynamics in Penning traps are reviewed in this work.", "With the recently reported results for the measurements of magnetic moments for both protons and antiprotons, improvements by factors of up to 3000 in the constraints on various coefficients for Lorentz and CPT violation are obtained." ], [ "Introduction", "Among the most fundamental symmetries of relativity and particle physics are the Lorentz and CPT invariances.", "However, in recent years it has been suggested that tiny violations of Lorentz and CPT symmetry are possible in a unified theory of gravity and quantum physics such as strings.", "[1] The comprehensive and relativistic description of such violations is given by the Standard-Model Extension (SME), [2] a general framework constructed from General Relativity and the Standard Model by adding to the action all possible Lorentz-violating terms.", "Each such term is formed as the contraction of a Lorentz-violating operator with a corresponding coefficient controlling the size of Lorentz violation.", "High-precision experiments across a broad range of subfields of physics, including Penning traps, provide striking constraints on the coefficients for Lorentz violation.", "[3] In the context of the minimal SME, with operators of mass dimension restricted to $d\\le 4$ , several studies of observables for Lorentz violation in Penning traps have been conducted.", "[4] The relevant theory of Lorentz-violating electrodynamics with nonminimal operators of mass dimensions up to six was also developed.", "[5] More recently, this treatment was generalized to include operators of arbitrary mass dimension using gauge field theory.", "[6] In this work, we further the study of experimental observables for Lorentz violation by comparing different Penning-trap experiments and extract new constraints on 
various coefficients for Lorentz violation using the recently published results of the magnetic moments from the BASE collaboration.", "[8], [7]" ], [ "Theory", "For a charged Dirac fermion $\\psi $ with mass $m$ confined in a Penning trap, the magnetic moment and the related $g$ -factor of the particle can be obtained by measuring two frequencies, the Larmor frequency $\\nu _L=\\omega _L/2\\pi $ and the cyclotron frequency $\\nu _c=\\omega _c/2\\pi $ , and determining their ratio $g/2=\\nu _L/\\nu _c=\\omega _L/\\omega _c$ .", "In an ideal Penning-trap experiment with the magnetic field $B=B \\hat{x}_3$ lying along the positive $x^3$ axis of the apparatus frame, the leading-order contributions from Lorentz violation to $\\omega _c$ for both fermions and antifermions vanish, while the corrections to $\\omega _L$ are given by $\\delta \\omega _L^{w}=2 {\\widetilde{b}}_{w}^{3} - 2 {\\widetilde{b}}_{F,w}^{33} B ,\\quad \\delta \\omega _L^{\\overline{w}}=- 2 {\\widetilde{b}}_{w}^{*3} + 2 {\\widetilde{b}}_{F,w}^{*33} B ,$ where $w=e^-$ , $p$ for electrons and protons and $\\overline{w}=e^+$ , $\\overline{p}$ for positrons and antiprotons.", "The tilde quantities are defined as combinations of different coefficients for Lorentz violation.", "[5] The expressions for the shifts in the anomaly frequencies (REF ) are valid in the apparatus frame where the direction of the magnetic field is chosen to be in the positive $\\hat{x}_3$ direction.", "Expressing the results in terms of the constant coefficients in the Sun-centered frame requires a transformation matrix between the two frames, [9] which reveals the dependence of the anomaly frequencies on the sidereal time and the geometric conditions of the experiment."
], [ "Applications", "For a confined proton or antiproton in a Penning trap, the relevant experiment-independent observables for studies of the magnetic moments are the 18 coefficients for Lorentz violation ${\\widetilde{b}}_{p}^{J}$ , ${\\widetilde{b}}_{p}^{*J}$ , ${\\widetilde{b}}_{F,p}^{(JK)}$ , and ${\\widetilde{b}}_{F,p}^{*(JK)}$ , where $J, K=X, Y, Z$ in the Sun-centered frame.", "A comparison of the magnetic moments of protons and antiprotons in different Penning-trap experiments offers an excellent opportunity to constrain several of the 18 coefficients for Lorentz violation listed above.", "An existing analysis involving the comparison of the magnetic moments of protons from the BASE collaboration at Mainz and of antiprotons from the ATRAP experiment at CERN is given in our previous work.", "[5] Recently, the sensitivities of the magnetic moments for both protons and antiprotons have been improved by several orders of magnitude by the BASE collaboration at Mainz and CERN.", "[8], [7] Here we conduct a similar comparison analysis using the recently published results to extract improved constraints on coefficients for Lorentz violation.", "For the magnetic moment of the proton measured at Mainz, the laboratory colatitude is $\\chi \\simeq 40.0^\\circ $ and the applied magnetic field $B\\simeq 1.9$ T points to local south, which corresponds to the $\\hat{x}$ direction in the standard laboratory frame.", "For the antiproton magnetic moment measurement at CERN, the laboratory colatitude is $\\chi ^* \\simeq 43.8^\\circ $ and the magnetic field $B^*\\simeq 1.95$ T points $\\theta =60.0^\\circ $ east of local north.", "The experimental data for both experiments were taken over an extended time period, so we can plausibly average the sidereal variations to zero, leaving only the constant parts.", "Together with the numerical values of other quantities reported by both BASE measurements, we obtain the bounds for ${\\widetilde{b}}_{p}^{Z}$ , 
${\\widetilde{b}}_{p}^{*Z}$ , ${\\widetilde{b}}_{F,p}^{(XX)}$ , ${\\widetilde{b}}_{F,p}^{*(XX)}$ , ${\\widetilde{b}}_{F,p}^{(YY)}$ , ${\\widetilde{b}}_{F,p}^{*(YY)}$ , ${\\widetilde{b}}_{F,p}^{(ZZ)}$ , and ${\\widetilde{b}}_{F,p}^{*(ZZ)}$ , with improvement factors of up to 3000 compared to the previous results.", "[5] Note that among the 18 independent observables in Penning-trap experiments, a large number remain unexplored to date.", "Performing a study of sidereal variations could in principle provide sensitivities to other components of the tilde coefficients.", "Such an analysis could become possible once the quantum logic readout currently under development at the BASE collaboration becomes available.", "[10]" ], [ "Acknowledgments", "This work was supported in part by the Department of Energy and by the Indiana University Center for Spacetime Symmetries." ] ]
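The frequency relations used in this paper, $g/2=\nu_L/\nu_c$ and the Larmor-frequency shifts of equation (REF ), are simple enough to encode directly. The sketch below merely transcribes those formulas; any coefficient values fed into it are hypothetical placeholders for illustration, not measured bounds.

```python
def g_factor(nu_L, nu_c):
    """g/2 = nu_L / nu_c, so g = 2 * nu_L / nu_c."""
    return 2.0 * nu_L / nu_c

def larmor_shift_fermion(b3, bF33, B):
    """delta omega_L^w = 2*btilde^3_w - 2*btilde_F^33_w * B
    (fermion, magnetic field B along the +x3 apparatus axis)."""
    return 2.0 * b3 - 2.0 * bF33 * B

def larmor_shift_antifermion(b3_star, bF33_star, B):
    """delta omega_L^wbar = -2*btilde^*3_w + 2*btilde_F^*33_w * B
    (antifermion in the same field)."""
    return -2.0 * b3_star + 2.0 * bF33_star * B
```

As a consistency check on the quoted formulas: if the starred coefficients equal the unstarred ones, the antifermion shift is exactly the negative of the fermion shift, which is why comparing proton and antiproton measurements isolates these coefficient combinations.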
1906.04364
[ [ "Extremely High energy peaked BL Lac nature of the TeV blazar Mrk 501" ], [ "Abstract Extremely High energy peaked BL Lac (EHBL) objects are a special class of blazars with peculiar observational properties at X-ray and $\\gamma$--ray energies.", "The observations of these sources indicate hard X-ray and $\\gamma$--ray spectra and an absence of rapid flux variations in the multi-wavelength light curves.", "These observational features challenge the leptonic models for blazars due to the unusually hard particle spectrum in the emission region of the blazar jet and provide a strong motivation for exploring alternative scenarios to interpret the broad-band emission from blazars.", "At present, only a few TeV blazars have been observed as EHBL objects in the extragalactic Universe.", "Due to their hard $\\gamma$--ray spectra and long term variability, the observations of EHBL type blazars at different redshifts help in probing the cosmic magnetic field and the extragalactic background light in the Universe.", "Such objects also provide astrophysical sites to explore particle acceleration mechanisms like magnetic reconnection and second order Fermi acceleration.", "Therefore, it has become important to identify more objects as EHBLs using the observations available in the literature.", "Recent studies on the blazar Mrk 501 indicate that this source may exhibit EHBL behaviour.", "In this paper, we use long term observations of Mrk 501 to explore its nature.", "Two sets of data, related to low and high/flaring activity states of Mrk 501, are presented and compared with the observed features of a few well known EHBL type blazars." 
], [ "Introduction", "Blazars are recognized as a special class of radio-loud active galactic nuclei having jets oriented along the line of sight of the observer at the Earth.", "Blazar jets are characterized as relativistic plasma outflows ejected from the supermassive black hole at the center of host galaxies with an elliptical morphology [1], [2].", "The origin and launching of the relativistic jets have not been fully understood; however, the Blandford & Znajek process suggests that the jet is powered by the rotational energy of the black hole at the center [3].", "Results from the multi-wavelength observations using various space and ground-based telescopes over the last three decades indicate that blazars are the dominant extragalactic sources of electromagnetic radiation from microwave/radio to very high energy (VHE; E $>$ 50 GeV) $\\gamma $ –rays.", "These objects are observed to exhibit high luminosity (up to 10$^{48}$ -10$^{49}$  erg s$^{-1}$ ) in the $\\gamma $ –ray energy band.", "The observed multi-wavelength emission from blazars is characterized as a non-thermal continuum, strongly polarized and highly variable at timescales from minutes to years [4], [5], [6].", "The dominant non-thermal emission from blazars is observed to originate from the relativistic jets.", "The small viewing angle of the jet makes blazar emission strongly beamed and Doppler boosted, which dominates the observed broad-band spectral energy distribution (SED) of these sources.", "In the simplest scenario, the emission region is assumed to be relativistically moving along the jet axis with a bulk Lorentz factor equal to the Doppler factor ($\\delta $ ) in the small viewing angle ($\\le 10^{\\circ }$ ) approximation.", "Therefore, the observed bolometric luminosity of blazars is Doppler boosted by a factor of $\\delta ^4$ .", "The observed frequency of photons is amplified by $\\delta $ while the measured timescales are reduced by $1/\\delta $ .", "These simple arguments 
are supported by the measurements of extremely high bolometric luminosities varying at very short timescales.", "The SED of blazars is characterized by a two-hump structure peaking at low and high energies.", "The origin of the first hump peaking at lower energies ranging from IR/optical to X-rays is attributed to the synchrotron radiation from the relativistic electrons in the jet magnetic field.", "The observations of high degree of polarization at radio and optical wavebands from blazars also support the hypothesis that synchrotron is the dominant process at low energies.", "Additionally, this also supports the idea of highly ordered magnetic field in the jet.", "The physical process involved in the second hump peaking at energies from high energy (HE; E $>$ 100 MeV) to VHE $\\gamma $ –rays is not completely known and has become a subject of intense research in the field of blazar astrophysics today.", "However, two alternative scenarios namely Leptonic and Hadronic are considered for modelling the $\\gamma $ –ray emission from blazars.", "In the Leptonic scenario, the emission of $\\gamma $ –ray photons is attributed to the inverse Compton (IC) scattering of the low energy target photons by the relativistic leptons (electrons and positrons) in the blazar jet.", "Two different physical processes namely synchrotron self Compton (SSC) and external Compton (EC) are used to explain the $\\gamma $ –ray emission from blazars within the Leptonic scenario depending on the dominance of target photon field.", "If the low energy synchrotron photons scatter mainly off the relativistic electrons (which produce the synchrotron radiation) and gain energy to become $\\gamma $ –ray photons, the model is referred to as SSC [7], [8], [9].", "In the case of an EC model, the target photon fields originating from regions (accretion disk, broad-line region, dusty torus, illuminated molecular clouds, and cosmic microwave background radiation) external to the jet dominate over the 
synchrotron photons for IC scattering [10], [11], [12].", "On the other hand, in the Hadronic scenario, the $\\gamma $ –ray emission is explained by considering the presence of relativistic protons in the blazar jet.", "Processes like proton synchrotron and photo-hadronic interactions, followed by subsequent pion decay and secondary particle cascades, are invoked to describe the observed $\\gamma $ –ray emission from the blazars [13], [14], [15].", "Alternatively, a hybrid scenario involving lepto-hadronic processes for $\\gamma $ –ray emission has also been proposed in the literature [16], [17], [18].", "In the hybrid scenario, the hadronic processes associated with relativistic protons are assumed to significantly contribute to the emission of observed $\\gamma $ –rays along with the IC process by the relativistic electrons in the blazar jet [19].", "The information related to the exact content of protons in the jet is very important for understanding the jet dynamics and launching from the center of the blazar host galaxy.", "The acceleration processes for electrons and protons up to relativistic energies in the blazar jet are not exactly known.", "Single zone leptonic models with a given population of relativistic electrons and positrons in the steady state have been successfully used to explain the non-thermal simultaneous or quasi-simultaneous multi-wavelength emission from almost all the observed blazars [16], [20], [21].", "However, the detection of short term variability at minute timescales challenges the one zone leptonic scenario because it requires very high values of the bulk Lorentz factor for the emission region [22], [23], [24].", "Time dependent leptonic models with a single emission zone are also being used to explain the variability of blazars in the literature [25], [26], [27].", "Several complex leptonic models with multi-zone emission in the jet have been proposed to understand the short timescale $\\gamma $ –ray variability observed from 
many blazars [28], [29], [30].", "The detection of statistically significant neutrino events during high $\\gamma $ –ray activity of the same source would provide a unique tool to discriminate between the leptonic and hadronic components in the $\\gamma $ –ray emission from blazars.", "However, the recent detection of TeV neutrino events with low statistical significance (less than 5$\\sigma $ ) from the blazar TXS 0506+056 [31] poses a serious challenge to the dominance of hadronic processes in blazar jets for $\\gamma $ –ray emission and reinforces the lepto-hadronic scenario.", "The detection of a special class of blazars known as Extremely High energy peaked BL Lac objects (EHBLs), having an exceptionally hard TeV spectrum with steady emission over a long timescale, challenges the leptonic models for blazar emission.", "These observations indicate that finding the exact physical process for $\\gamma $ –ray emission from blazar jets remains an exciting open problem in high energy astrophysics.", "In this paper, we use observed properties/parameters of the well-studied blazar Mrk 501 available from long term observations to explore its nature during different activity states of the source.", "In Section 2, we briefly describe the different types of blazars based on their observational properties.", "Some of the important characteristics of EHBL blazars are discussed in Section 3.", "In Section 4, we use all the observations available on Mrk 501 in the literature to study the behaviour of the X-ray and $\\gamma $ –ray emission from the source and discuss its possible association with the well known EHBLs observed to date.", "Finally, we summarize the findings of our study in Section 5.", "The $\\Lambda $ CDM cosmological model with parameters $\\Omega _m=0.27$ , $\\Omega _{\\Lambda }=0.73$ and $H_0=70$  km s$^{-1}$  Mpc$^{-1}$ has been adopted in the paper." 
], [ "Blazars Types", "Although blazars in general have similar emission properties, they are classified into different types based on their observed characteristics.", "The widely accepted types of blazars are briefly described below:" ], [ "Optical types", "On the basis of the observed characteristic optical spectral lines and their rest frame equivalent width (EW), blazars have been classified into two types: BL Lacertae (BL Lac) objects and Flat Spectrum Radio Quasars (FSRQs) [1], [2], [32].", "BL Lac objects exhibit no or weak spectral lines (EW $<$ 5 Å) against the bright featureless optical continuum whereas FSRQs have prominent broad emission lines (EW $>$ 5 Å).", "The observations of strong optical spectral lines from the FSRQs indicate the presence of broad-line emission regions illuminated by the bright accretion disk in this type of blazars.", "The accretion disk in FSRQs is radiatively efficient with near-Eddington accretion, whereas BL Lacs have less radiatively efficient accretion disks with sub-Eddington accretion.", "Due to the presence of a bright accretion disk, FSRQs are observed to be more luminous than the BL Lac class, and the disk provides intense external target photon fields for the IC scattering that produces $\\gamma $ –ray photons in the jet emission region of FSRQs in the Leptonic scenario.", "Also, EC models are observed to be more relevant for FSRQs than for BL Lac type blazars, where SSC processes are dominant for $\\gamma $ –ray emission under the framework of the Leptonic scenario.", "This observational classification of blazars based on the optical emission is very simple and provides limited physical distinction for a large number of sources observed in different energy bands."
], [ "SED types", "In blazars, the non-thermal jet emission is found to dominate at all energies from radio to $\\gamma $ –rays and the broad-band SED is very important for understanding the emission process.", "The physical process for the origin of the low energy hump in the blazar SED is ascribed to the Doppler boosted synchrotron radiation produced by the relativistic leptons in the jet magnetic field.", "Therefore, the energy corresponding to the peak position of the low energy hump (E$_{syn}^p$ ) in the observed SED is used as an important parameter for the classification of blazars.", "From the observations, it is found that the rest frame peak energy of the low energy hump follows a distribution in the waveband from IR/optical to X-rays for BL Lac type of blazars.", "Therefore, on the basis of rest frame peak energy, BL Lac objects have been classified into three classes, namely [33]: Low energy peaked BL Lac (LBL: E$_{syn}^p \\le $ 0.5 eV), Intermediate energy peaked BL Lac (IBL: 0.5 eV $\\le E_{syn}^p \\le $ 5 eV) and High energy peaked BL Lac (HBL: E$_{syn}^p \\ge $ 5 eV).", "The low energy hump in the SED of FSRQs is observed to peak at E$_{syn}^p$ $\\le $ 0.5 eV.", "Hence, FSRQs can be associated with the LBL class of objects under the classification scheme based on the blazar SED.", "An extension of the general classification of blazars on the basis of the position of the rest frame synchrotron peak frequency ($\\nu _{syn}^p$ ) in the SED has also been proposed by [9].", "According to this classification, all blazars are classified as: Low Synchrotron Peaked (LSP: $\\nu _{syn}^p \\le 10^{14}$ Hz), Intermediate Synchrotron Peaked (ISP: 10$^{14} \\le \\nu _{syn}^p \\le 10^{15}$ Hz) and High Synchrotron Peaked (HSP: $\\nu _{syn}^p \\ge 10^{15}$ Hz).", "Most of the blazars detected so far at TeV energies belong to the HBL or HSP class.", "A small fraction of HBLs is observed to exhibit a SED with the position of the synchrotron peak at hard X-ray energies 
(E$_{syn}^p \\ge $ 5 keV) or higher (E$_{syn}^p \\sim $ MeV-GeV).", "Such blazars are referred to as EHBLs and constitute an interesting class of sources to be explored in the extragalactic Universe at X-ray and $\\gamma $ –ray energies [34]." ], [ "Blazar Sequence", "The observed values of the synchrotron peak frequency ($\\nu _{syn}^p$ ) vary from $\\sim $ 10$^{12}$ Hz to over 10$^{18}$ Hz for all types of blazars [35].", "A strong anti-correlation observed between the bolometric luminosity and the rest frame synchrotron peak frequency forms a sequence referred to as the Blazar Sequence [36].", "According to the observed Blazar Sequence, FSRQs (having the lowest value of $\\nu _{syn}^p$ ) have the highest luminosity whereas HBLs (having the highest value of $\\nu _{syn}^p$ ) have the lowest luminosity.", "This indicates that FSRQs and BL Lacs are high and low-power blazars respectively.", "The Compton dominance (the ratio of the peak luminosity of the high energy component to that of the synchrotron or low energy hump in the SED) for all types of blazars decreases with increasing synchrotron peak frequency.", "This means the Compton dominance is maximum for FSRQs and minimum for HBLs.", "Also, the high energy component of the SED peaks at MeV-GeV energies for FSRQs and at TeV energies for HBLs.", "For EHBL type of blazars, the IC peak is observed at higher energies than for the classical HBLs because of their harder $\\gamma $ –ray spectra.", "Therefore, it is very interesting to study the Compton dominance for EHBL type of blazars which will help to understand the strong radiative cooling of the relativistic leptons due to IC scattering in blazars.", "The existence of EHBL type of blazars with a hard power law spectrum and E$_{syn}^p$ up to 100 keV was first predicted by [34] using X-ray observations of blazars by BeppoSAX.", "These sources are suspected to be hidden in the HBL class of blazars.", "Some of the key theoretical predictions about EHBL type of 
blazars are as follows: They are possible candidates for invoking hadronic models with a hard particle spectrum and a strong magnetic field (more than 100 G) to explain the observed high energy $\\gamma $ –ray emission and long timescale variability [17].", "With the lowest intrinsic luminosity and highest synchrotron peak energy, these objects follow the blazar sequence as FSRQ$\\rightarrow $ LBL$\\rightarrow $ IBL$\\rightarrow $ HBL$\\rightarrow $ EHBL (in the order of decreasing luminosity and increasing synchrotron peak energy).", "The peak energy in the synchrotron hump indicates the maximum energy of the particles in the emission region.", "This indicates that EHBL jets are among the most extreme and efficient astrophysical accelerators in the Universe.", "However, the very hard spectrum of the injected population of relativistic particles with index less than 1.5 challenges simple acceleration models such as diffusive shock acceleration in the blazar jets.", "Since their prediction, the study of EHBL type of blazars has become an intense area of research in multi-wavelength astrophysics.", "Some of the observable properties of EHBLs are summarized below: They are difficult to identify in radio observations because of their weak or faint radio emission and, unlike any other class of blazars, are mostly observed at X-ray and TeV $\\gamma $ –ray energies.", "The low and high energy peaks in the broad-band SED are shifted to hard X-ray and VHE $\\gamma $ –ray energies respectively.", "Their IR and optical emissions are dominated by the thermal emission from the host galaxy, while non-thermal jet emission dominates at UV energies.", "They exhibit exceptionally hard intrinsic X-ray and TeV $\\gamma $ –ray spectra (spectral index $\\le $ 2) and lack variability at short timescales (variability timescale of several days is observed) despite the predictions from leptonic models.", "They pose a strong challenge to the standard one-zone leptonic SSC models for high energy emission because they require large values of the Doppler factor ($\\delta 
>$ 50) and very high values of the minimum Lorentz factor ($\\gamma _{min} >$ 150) for electrons.", "From the above discussion it is clear that the observed emission properties of EHBLs are very different from the classical BL Lac type of blazars and HBLs in particular.", "Therefore, it would not be appropriate to use the source parameters derived from the study of BL Lac objects as a probe for EHBLs.", "However, these objects with very peculiar observational properties constitute an interesting class of blazars for exploring the broad-band emission processes.", "The very small population of EHBLs detected so far provides a strong motivation for the identification of new sources belonging to this subclass of blazars.", "A brief review of the observational properties of the VHE $\\gamma $ –ray emission from the prominent EHBL type of blazars detected to date is given in the Appendix." ], [ "EHBL nature of Mrk 501", "Mrk 501 was first presented as a spheroidal galaxy with a faint corona and a star-like nucleus in the 5th list of galaxies with UV continuum in 1972 from the sky survey during 1969–70 at Byurakan Observatory [37].", "In 1975, this object was identified as the elliptical galaxy B2 1652+39 (Mrk 501) at redshift $z$ = 0.0337 from spectrographic, photometric, polarimetric and radio observations [38].", "The dominant contribution of non-thermal emission to the luminosity of the nuclear region was also detected from B2 1652+39, with optical properties similar to those of elliptical galaxies and BL Lac objects.", "In 1981, this source was reported in a catalog of radio sources with flux density above 1 Jy at 5 GHz [39].", "The redshift of Mrk 501 was again measured to be $z$ = 0.034 from the stellar absorption/emission lines of the host galaxy using direct imaging and spectroscopic observations of selected BL Lac objects from the 1 Jy radio catalog performed during 1985–1990 [40].", "The first VHE $\\gamma $ –ray emission from this source above 300 GeV was discovered by 
the Whipple telescope in 1995 [41] soon after the discovery of TeV $\\gamma $ –ray photons from the first extragalactic source Mrk 421 in 1991 [42].", "Since the discovery of TeV $\\gamma $ –ray emission from Mrk 501, several intensive broad-band studies have been performed on this blazar, but its exact nature has not yet been completely understood.", "In the following, we use all the observational results on the blazar Mrk 501 which are available in the literature today and compare them with the properties of well known EHBLs (Appendix) to characterize its nature.", "Among the various characteristics of the EHBL type of blazars as known so far, an object can be characterized as an EHBL if it exhibits important observational features like: (i) hard intrinsic VHE spectrum with a spectral index $\\Gamma _{int} \\le $ 2 and peak energy in the second hump of the SED above 1 TeV, (ii) hard Fermi-LAT spectrum with a photon spectral index $\\Gamma _{LAT} \\le $ 2 in the MeV-GeV energy band, (iii) hard X-ray spectrum with a photon spectral index $\\Gamma _X \\le $ 2 and synchrotron peak energy above E$_{syn}^p \\sim $ 10 keV, and (iv) no short term variability in the TeV $\\gamma $ –ray emission." 
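The four indicators (i)-(iv) above can be collected into a simple checklist. A hedged sketch (the function name and the boolean treatment of criterion (iv) are ours; criterion (i) is reduced here to the index condition alone, dropping the IC-peak-above-1-TeV part):

```python
def satisfies_ehbl_criteria(gamma_int, gamma_lat, gamma_x,
                            e_syn_peak_kev, short_term_variable):
    """Checklist for the four EHBL indicators listed in the text:
    (i)   intrinsic VHE photon index Gamma_int <= 2,
    (ii)  Fermi-LAT photon index Gamma_LAT <= 2,
    (iii) X-ray photon index Gamma_X <= 2 with synchrotron peak
          energy above ~10 keV,
    (iv)  no short term variability in the TeV emission."""
    return (gamma_int <= 2.0 and gamma_lat <= 2.0 and gamma_x <= 2.0
            and e_syn_peak_kev >= 10.0 and not short_term_variable)

# A hard, steady source passes; a minute-scale TeV flare fails (iv):
print(satisfies_ehbl_criteria(1.8, 1.5, 1.7, 15.0, False))  # True
print(satisfies_ehbl_criteria(1.8, 1.7, 1.7, 100.0, True))  # False
```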
], [ "Intrinsic VHE or TeV $\\gamma $ –ray spectrum", "After its identification as a TeV $\\gamma $ –ray source in 1996, Mrk 501 has remained a prominent target for VHE observations by ground-based $\\gamma $ –ray telescopes to date.", "Assuming that the physical process for TeV $\\gamma $ –ray production is similar for a given class of blazars like EHBLs, the differences in the observed TeV spectra of sources at different redshifts can be attributed to the partial absorption of VHE photons due to the interaction with low energy extragalactic background light (EBL) photons.", "The EBL is the repository of photons in the wavelength range from far-IR to optical/UV produced by the stars and galaxies throughout the history of the Universe.", "The VHE $\\gamma $ –ray photons interact with low energy EBL photons via photon-photon pair production while propagating from the source to the observer and are absorbed in the intergalactic space.", "This absorption leads to the energy and redshift dependent attenuation of the VHE $\\gamma $ –ray photons emitted from the source, which is characterized by a physical quantity called optical depth ($\\tau $ ).", "The detailed procedure for the estimation of $\\tau $ as a function of the source redshift ($z$ ) and energy of the VHE photons ($E$ ) for different models of EBL photon density is discussed in [43].", "The observed ($F_{obs}$ ) and intrinsic ($F_{int}$ ) VHE spectra of a source at cosmological distance are related as $F_{obs}=F_{int}~~e^{-\\tau (E,z)}$ where e$^{-\\tau }$ is referred to as the EBL attenuation factor.", "This energy dependent attenuation makes the observed VHE $\\gamma $ –ray spectrum softer than the emitted intrinsic spectrum of the source.", "If $F_{obs}$ and $F_{int}$ are described by a power law with photon spectral indices $\\Gamma _{obs}$ and $\\Gamma _{int}$ respectively, then it is expected that $\\Gamma _{int}~ < ~\\Gamma _{obs}$ for a given source.", "If the TeV $\\gamma $ –rays are produced by 
the IC scattering in the Klein-Nishina (KN) regime, the intrinsic VHE spectrum is softer than in the Thomson regime [44].", "However, the hard VHE spectra of the EHBL type of blazars contradict a leptonic origin and suggest a hadronic origin of the TeV photons from these sources.", "We have used the most recent and updated EBL model proposed by Franceschini & Rodighiero [45] to estimate the optical depth ($\\tau $ ) for $\\gamma $ –ray photons in the energy range 50 GeV–15 TeV emitted from the sources at different redshifts in the range $z$ = 0.034-0.230.", "The corresponding EBL attenuation factor as a function of energy for the EHBLs described in the Appendix is shown in Figure REF .", "It is evident that the EBL absorption effect on the TeV $\\gamma $ –ray spectra of Mrk 501 is minimal as compared to the other EHBLs reported in Figure REF .", "Therefore, the observed TeV spectra of Mrk 501 can be approximated as the intrinsic emission spectra which best reflect the physical process for the production of TeV $\\gamma $ –ray photons at the source.", "Figure: EBL attenuation factor (e$^{-\\tau }$ ) as a function of TeV $\\gamma $ –ray photon energy for well known EHBLs (Appendix) at different redshifts ($z$ = 0.125-0.230) and for the blazar Mrk 501 at $z$ = 0.034.", "The EBL model of Franceschini & Rodighiero [45] is used to estimate the optical depth ($\\tau $ ) for a given energy of VHE $\\gamma $ –ray photon ($E$ ) emitted from a source at redshift ($z$ ).", "The distribution of the power law spectral indices ($\\Gamma $ ) in the VHE $\\gamma $ –ray band for Mrk 501 and the well known EHBLs is shown in Figure REF (a).", "All the spectral index values have been obtained from the literature reported in the online TeV catalog (http://tevcat.uchicago.edu).", "For Mrk 501, we have used the spectral index measurements available in the literature since the discovery of the VHE $\\gamma $ –ray emission from this source in 1996 to date.", "The VHE observations of Mrk 501 are divided into two categories 
namely low and high/flaring states.", "In the low state, the VHE $\\gamma $ –ray emission from this source is found to be described by an average power law spectral index of 2.41$\\pm $ 0.05 whereas in the flaring state the average spectral index is 2.24$\\pm $ 0.06.", "This indicates that the spectral behaviour of VHE $\\gamma $ –ray emission from Mrk 501 does not change significantly during the flaring states of the source.", "However, a harder-when-brighter behaviour, an observational characteristic of TeV blazars, has been observed for Mrk 501 on several occasions during its flaring activity in the VHE regime [46], [47].", "In Figure REF (a), the intrinsic spectral indices of the well known EHBLs (Appendix) differ significantly from their observed values indicating the effect of EBL absorption due to the relatively large redshifts of the sources in the range $z$ = 0.125-0.230.", "However, the values of VHE spectral indices for Mrk 501 are found to be consistent with the intrinsic spectral indices for EHBLs (EHBL-Intrinsic) within the error bars.", "This implies that the observed TeV $\\gamma $ –ray emission from Mrk 501 exhibits spectral characteristics similar to the EHBLs within the statistical uncertainty.", "Figure: Comparison of power law spectral indices of Mrk 501 in VHE ($E >$ 50 GeV) and HE ($E >$ 100 MeV) $\\gamma $ –ray bands with the well known EHBLs (Appendix) as observed so far.", "(a) VHE spectral indices are taken from the literature, based on observations using various ground-based atmospheric Cherenkov telescopes like Whipple, HEGRA, MAGIC, VERITAS, H.E.S.S. and TACTIC reported in the TeV catalog.", "(b) HE spectral indices have been obtained from the Fermi-LAT observations.", "For Mrk 501, only maximum, minimum and average values of the spectral indices are shown." 
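The softening induced by the attenuation factor e$^{-\\tau (E,z)}$ of the previous paragraphs can be made concrete with a toy optical depth that grows linearly with energy (a stand-in for a full EBL model such as [45]; all names and the linear $\\tau $ are our illustrative assumptions):

```python
import numpy as np

def observed_flux(e_tev, f0, gamma_int, tau_at_1tev):
    """Attenuate an intrinsic power law F_int = f0 * E**(-gamma_int)
    by the EBL factor exp(-tau), with the toy model tau = tau1 * E."""
    return f0 * e_tev ** (-gamma_int) * np.exp(-tau_at_1tev * e_tev)

# For this toy tau the local observed index is gamma_int + tau1 * E,
# so the observed spectrum is everywhere softer than the intrinsic one:
e = np.array([0.5, 1.0, 2.0, 4.0])
f = observed_flux(e, 1.0, 1.8, 0.5)
local_index = -np.diff(np.log(f)) / np.diff(np.log(e))
print(np.all(local_index > 1.8))  # True
```

This is the mechanism behind $\\Gamma _{int} < \\Gamma _{obs}$ : the steeper the rise of $\\tau $ with energy, the stronger the apparent softening of the observed spectrum.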
], [ "HE or MeV-GeV $\\gamma $ –ray spectrum", "The Large Area Telescope on board the Fermi Gamma-ray Space Telescope satellite (Fermi-LAT) provides HE $\\gamma $ –ray observations of blazars in the energy range 100 MeV to more than 500 GeV [48].", "The distribution of HE spectral indices derived from the Fermi-LAT observations of Mrk 501 and other EHBLs (Appendix) is shown in Figure REF (b).", "For Mrk 501, HE $\\gamma $ –ray emission is observed to follow a power law distribution with an average photon spectral index of 1.73$\\pm $ 0.03 in the energy range 0.1-300 GeV.", "This is consistent with the LAT spectral index for a sample of EHBLs (EHBL-LAT) as shown in Figure REF (b) and reported in [49].", "We have also used the results from the long term Fermi-LAT observations on Mrk 501 and EHBLs reported in the most recently released LAT catalogs, namely the Third Fermi-LAT catalog (3FGL) and the Third catalog of hard Fermi-LAT sources (3FHL), in Figure REF (b).", "The 3FGL catalog is based on data in the energy range 0.1–300 GeV from the first four years of Fermi-LAT observations [50] whereas 3FHL includes the first seven years of data in the 10 GeV–2 TeV energy range [51].", "The values of LAT photon spectral index 1.716$\\pm $ 0.016 and 1.61$\\pm $ 0.07 for Mrk 501 reported in the 3FGL and 3FHL catalogs respectively are consistent with the corresponding values for EHBLs within statistical uncertainties.", "The effect of EBL absorption on the Fermi-LAT spectrum of the sources measured in the energy range 0.1-500 GeV is negligible for redshift $z \\le $ 0.5.", "Therefore, the LAT spectral index can be approximated as the intrinsic HE $\\gamma $ –ray index of the sources with redshift $z \\le $ 0.5.", "In the present study, the values of LAT spectral index or HE $\\gamma $ –ray index are observed to be less than 2 for Mrk 501 and other well known EHBLs (Appendix).", "This implies that the IC or $\\gamma $ –ray peak will be shifted to TeV energies in the broad-band SED of these sources in log($E$ ) versus 
log($E^2$ d$N$ /d$E$ ) plane.", "Therefore, the HE spectral characteristics of Mrk 501 from the Fermi-LAT observations support the EHBL nature of the source.", "Figure: Comparison of X-ray emission parameters observed from Mrk 501 with the EHBLs (Appendix) as observed so far.", "(a) Synchrotron peak energy obtained from various X-ray observations performed in the energy range 0.3-79 keV.", "(b) Spectral parameters obtained from the X-ray observations.", "Only maximum, minimum and average values of the spectral parameters are shown for Mrk 501." ], [ "X-ray emission characteristics", "The origin of X-ray emission from blazars is well understood and is ascribed to the synchrotron radiation emitted from the relativistic population of leptons (electrons and positrons) in the jet magnetic field.", "Therefore, the observed characteristics of the X-ray emission of blazars are widely used to characterize different types of blazars.", "We have used some of the important features like synchrotron peak energy and X-ray spectral index derived from the simultaneous observations with NuSTAR and the Neil Gehrels Swift Observatory for Mrk 501 and other EHBLs (Appendix) in the present study.", "The distribution of these characteristic parameters is depicted in Figure REF (a-b).", "The synchrotron peak energy (E$_{syn}^p$ ) shown in Figure REF (a) for different EHBLs has been obtained from the literature [49].", "Detailed broad-band studies on Mrk 501 suggest a synchrotron peak energy between 2-5 keV in the low or flaring state of the source [52], [53].", "However, during the correlated X-ray and TeV flaring activity of Mrk 501 in 1997, the synchrotron peak energy was observed to be $\\sim $ 100 keV with a shift of two orders of magnitude with respect to the low activity state [54].", "The average synchrotron emission from Mrk 501 is observed to peak in the energy range 2–100 keV [55].", "The position of the synchrotron peak in the broad-band SED of Mrk 501 is shifted to higher energies 
during the flaring state than that for the low emission state of the source.", "The X-ray emission from the sample of EHBLs (Appendix) is found to be described by a log-parabolic model with a photon spectral index ($\\alpha $ ) and curvature parameter ($\\beta $ ) in the energy range 0.3-79 keV [49], [56].", "The distribution of the spectral parameters $\\alpha $ and $\\beta $ for all the EHBLs along with Mrk 501 is shown in Figure REF (b).", "It is evident from Figure REF (b) that the X-ray spectral parameters of Mrk 501 are consistent with the corresponding parameters for EHBLs.", "However, during one of its flaring events, the X-ray emission from Mrk 501 was described by a power law with the hardest spectral index of 1.2$\\pm $ 0.1 [54].", "A significant variation in the X-ray spectrum has been observed during the flaring activity of Mrk 501 with a harder-when-brighter trend with respect to previous studies of the source [46], [52], [57].", "Thus, the X-ray emissions of Mrk 501 show features similar to those of EHBLs during the flaring state of the source.", "Figure: Synchrotron peak luminosity ($L_{syn}^p$ ) as a function of synchrotron peak energy for the EHBLs discussed in the Appendix." 
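For a log-parabolic spectrum F(E) $\\propto $ (E/E$_0$ )$^{-(\\alpha + \\beta \\log _{10}(E/E_0))}$ , the standard result is that the SED (E$^2$ F(E)) peaks at E$_0$ 10$^{(2-\\alpha )/(2\\beta )}$ , which is how a hard index $\\alpha < 2$ pushes E$_{syn}^p$ above the reference energy. A sketch (the reference energy E$_0$ = 1 keV and the function name are our assumptions, not values from the paper):

```python
def synchrotron_peak_kev(alpha, beta, e0_kev=1.0):
    """SED peak of a log-parabola with photon index alpha and curvature
    beta: E^2 F(E) is maximal at E_p = E0 * 10**((2 - alpha)/(2*beta))."""
    return e0_kev * 10.0 ** ((2.0 - alpha) / (2.0 * beta))

# A hard spectrum (alpha < 2) pushes the peak well above E0 ...
print(round(synchrotron_peak_kev(alpha=1.6, beta=0.2), 3))  # 10.0
# ... while alpha = 2 leaves it exactly at E0.
print(synchrotron_peak_kev(alpha=2.0, beta=0.3))  # 1.0
```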
], [ "Synchrotron peak luminosity", "From the prediction of the blazar sequence, EHBLs are expected to be the least luminous.", "We have estimated the synchrotron peak luminosity using the results from the X-ray observations reported in [49] for the sample of EHBLs discussed in the Appendix.", "The synchrotron peak luminosity ($L_{syn}^p$ ) has been calculated using the formula: $L_{syn}^p=\\frac{F_p~\\times ~4\\pi d_L^2}{(1+z)^{2-\\alpha }}$ where $F_p$ is the synchrotron peak energy flux, $d_L$ is the luminosity distance and $\\alpha $ is the observed photon spectral index.", "The distribution of the synchrotron peak luminosity as a function of the synchrotron peak energy for the EHBLs is presented in Figure REF .", "The synchrotron peak luminosity for the known EHBLs lies between 10$^{44}$ -10$^{45}$  erg sec$^{-1}$ , which is consistent with the values estimated for blazars in the range 10$^{43}$ -10$^{47}$  erg sec$^{-1}$ [58], [59].", "The values of the synchrotron peak luminosity ($\\sim 10^{45}$  erg sec$^{-1}$ ) for Mrk 501 reported in the literature are in agreement with the values estimated for the EHBLs in the present study [52], [57].", "During the flaring state of Mrk 501, the synchrotron peak position is shifted towards higher energy with a decrease in the synchrotron peak luminosity [54].", "We observe that Mrk 501 follows the blazar sequence like other EHBL type blazars.", "However, more X-ray observations of the EHBLs over a wide energy range are required to test the blazar sequence hypothesis with high confidence." 
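The formula above is straightforward to evaluate. A sketch with illustrative inputs (the flux, distance and index below are representative round numbers chosen by us, not measurements from the paper):

```python
import math

MPC_IN_CM = 3.0857e24  # one megaparsec in centimetres

def synchrotron_peak_luminosity(f_peak, d_l_mpc, z, alpha):
    """L_syn^p = 4*pi*d_L^2 * F_p / (1 + z)**(2 - alpha), as in the text,
    with F_p in erg cm^-2 s^-1 and d_L in Mpc; returns erg s^-1."""
    d_l_cm = d_l_mpc * MPC_IN_CM
    return 4.0 * math.pi * d_l_cm ** 2 * f_peak / (1.0 + z) ** (2.0 - alpha)

# F_p ~ 1e-11 erg/cm^2/s at z ~ 0.14 (d_L roughly 650 Mpc in the adopted
# cosmology) lands in the 1e44-1e45 erg/s window quoted for the sample:
L = synchrotron_peak_luminosity(1e-11, 650.0, 0.14, 1.8)
print(1e44 < L < 1e45)  # True
```

The (1+z)$^{2-\\alpha }$ factor is the K-correction for a power law spectrum; at the low redshifts of this sample it changes the result by only a few percent.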
], [ "Variability", "Mrk 501 is one of the blazars known for its strong variability in all energy bands.", "The X-ray and VHE $\\gamma $ –ray emissions from Mrk 501 are observed to be highly variable during the flaring activity.", "In 2005, the VHE $\\gamma $ –ray flux from this source was observed to vary on a minute scale during a flare [47].", "The high activity state of Mrk 501 in X-ray and TeV $\\gamma $ –ray bands observed in June 2014 was characterized by a minimum variability timescale of a few minutes [60], [61].", "The variability characteristics of Mrk 501 contradict the temporal behaviour of the well known EHBLs (Appendix), where no short term variability is present.", "Therefore, the temporal characteristics of Mrk 501 observed in the X-ray and $\\gamma $ –ray energy bands are not compatible with the nature of EHBLs as observed so far.", "Hence, it is very important to explore the temporal behaviour of more objects which are probable candidates for EHBLs using the long and short term observations available with the X-ray and $\\gamma $ –ray telescopes, since long term variability is expected to be one of the characteristics of this type of blazar.", "In this work, we have explored the nature of the well studied blazar Mrk 501, which is frequently observed in high activity states at X-ray and $\\gamma $ –ray energies.", "We have used the X-ray and $\\gamma $ –ray observational results on Mrk 501 available in the literature to probe its nature as an EHBL.", "We find that the spectral characteristics of Mrk 501 measured in X-ray and $\\gamma $ –ray energy bands are broadly consistent with the behaviour of the well known EHBLs as reported in the literature so far.", "However, the temporal features with high variability at X-ray and VHE $\\gamma $ –ray energies during the flaring state of Mrk 501 are not in agreement with the nature of EHBLs where no variability or long term variability is expected to be present.", "Long term observations in 
the low activity state of Mrk 501 suggest that the source is a strong EHBL candidate in its low emission state.", "Recently, Ahnen et al. (2018) have concluded that the EHBL nature of Mrk 501 is a dynamic characteristic of the source which changes over time [62].", "In this study, the authors have used extensive multi-wavelength observations of Mrk 501 during March-July 2012 to characterize the broad-band emission from the source.", "The authors also suggested that the observational classification of blazars as EHBLs shows some diversity.", "Therefore, a comprehensive multi-wavelength study of a large sample of blazars in the redshift range 0.01$\\le z \\le $ 1 is very important to fully explore the nature and population of the EHBL type of blazars in the extragalactic Universe.", "The EHBL type of blazars is very important for probing the density of EBL photons because of the increased photon statistics at the highest energy end of the spectrum due to their hard intrinsic VHE spectral index.", "The TeV observations of EHBLs at large cosmological redshifts provide an important tool for the study of the history of the Universe through the EBL.", "The hard intrinsic spectrum of EHBL blazars along with the lack of variability also provides a constant VHE flux which is very important for probing the intergalactic magnetic field over a long timescale.", "The intrinsic VHE photon spectral index $\\le $ 2 indicates a hard underlying particle spectrum in the emission region which challenges the acceleration process in the blazar jet.", "Due to their hard $\\gamma $ –ray spectrum, the IC peak in the SED of EHBLs is shifted to TeV energies and this makes their observations with the Fermi-LAT at GeV energies very difficult in short time intervals.", "EHBLs have also been proposed as possible sources of astrophysical neutrinos and ultra-high energy cosmic rays [63], [64].", "Therefore, the identification of new EHBL candidates is one of the strong 
motivations for the $\\gamma $ –ray observations using the upcoming Cherenkov Telescope Array (CTA) observatory and individual systems like H.E.S.S. II and MACE (Major Atmospheric Cherenkov Experiment) with threshold energies below 30 GeV, and for the current generation imaging atmospheric Cherenkov telescopes such as MAGIC and VERITAS." ], [ "Acknowledgements", "We thank the anonymous reviewer for the valuable comments and suggestions that allowed us to improve the contents of the manuscript.", "The authors acknowledge the use of the observational results reported in the literature and in the online TeV catalog." ], [ "Appendix", "The blazar 1ES 0229+200 was initially classified as an HBL in 1995 from its X-ray to radio flux ratio after its discovery in 1992 by the Einstein Slew Survey [65].", "The source is hosted by an elliptical galaxy at redshift $z$ = 0.139 [66].", "The first VHE $\\gamma $ –ray emission from 1ES 0229+200 was discovered by the H.E.S.S. telescopes during 2005–2006 observations above an energy threshold of 580 GeV [67].", "The observed VHE spectrum was described by a power law with a spectral index of 2.50$\\pm $ 0.19 in the energy range 500 GeV–15 TeV.", "The long term monitoring of 1ES 0229+200 by the VERITAS telescope during 2009–2012 detected significant VHE $\\gamma $ –ray emission from the source with the observed spectrum described by a simple power law of spectral index 2.59$\\pm $ 0.12 in the energy range 300 GeV–16 TeV [68].", "The VHE emission detected with the VERITAS telescope indicated long term variability on a yearly timescale.", "Near simultaneous X-ray observations suggested a hard spectrum with an index $\\sim $ 1.7 and the position of the synchrotron peak at an energy above 10 keV in the SED.", "The Fermi-LAT observations suggested that the high energy spectrum of the source can be described by a hard power law with a spectral index of 1.36$\\pm $ 0.25 in the energy range 1–300 GeV [69].", "The blazar 1ES 0347-121 was classified as a BL Lac 
object in 1993 after its discovery in 1992 by the Einstein Slew Survey [70].", "It was observed to harbor a supermassive black hole in the elliptical host galaxy at redshift $z$ = 0.188 [66].", "The H.E.S.S. telescopes first discovered VHE $\\gamma $ –ray emission from this source in 2006 above an energy threshold of 250 GeV [71].", "The observed differential photon spectrum was described by a power law with a spectral index of 3.10$\\pm $ 0.23 and no evidence of short term variability on monthly timescales was observed.", "The Fermi-LAT observations indicated high energy $\\gamma $ –ray emission in the energy range 1-300 GeV to be compatible with a power law photon index of 1.65$\\pm $ 0.17.", "The X-ray emission in the energy range 0.3-8 keV was described by a power law model with a spectral index of 1.99$\\pm $ 0.06.", "The blazar RGB J0710+591 was discovered in 1984 as a fully resolved elliptical galaxy with a nuclear point source at redshift $z$ = 0.125 [72].", "The first VHE emission from this source was detected by VERITAS during 2008–2009 observations with no evidence of variability in the $\\gamma $ –ray light curve above 300 GeV [73].", "The observed VHE $\\gamma $ –ray spectrum was described by a power law with a photon spectral index of 2.69$\\pm $ 0.26 in the energy range 310 GeV to 4.6 TeV.", "The HE $\\gamma $ –ray spectrum in the MeV-GeV band was also fitted by a power law with a spectral index of 1.46$\\pm $ 0.17 using Fermi-LAT observations.", "The X-ray emission from the source was observed to be consistent with a hard photon spectrum having a power law index of 1.86$\\pm $ 0.01 indicating a synchrotron peak energy above 10 keV.", "The host galaxy of RGB J0710+591 was observed to make a significant contribution to the optical emission of the source.", "The blazar 1ES 1101-232 was discovered as a distant VHE $\\gamma $ –ray source in an elliptical host galaxy at redshift $z$ = 0.186 [74].", "The discovery of VHE $\\gamma $ –ray emission from this 
source was reported in 2007 using H.E.S.S. observations performed during 2004–2005 in the energy range above 250 GeV [75].", "The observed VHE spectrum was described by a power law with a spectral index of 2.94$\\pm $ 0.20 in the energy range of 200 GeV to 4 TeV.", "No evidence of variability at any timescale in the VHE light curve of the source was detected.", "The X-ray emission of the source in the energy range 3–15 keV was found to be compatible with a broken power law of spectral indices 2.49$\\pm $ 0.02 and 2.78$\\pm $ 0.16 before and after the break energy ($\\sim $ 8 keV) respectively.", "The peak energy in the first hump of the SED was found to be located between 0.5 keV and 3.5 keV.", "The X-ray and optical emissions from the source were also observed to be constant, like their VHE counterpart.", "The H.E.S.S. telescopes discovered VHE $\\gamma $ –ray emission from the point source HESS J1943+213 located in the Galactic plane and spatially coincident with the hard X-ray unidentified source IGR J19443+2117 during the Galactic plane survey between 2005 and 2008 [76].", "The time averaged differential VHE spectrum was described by a power law with a soft photon spectral index of 3.1$\\pm $ 0.3 in the energy range 470 GeV–6 TeV.", "No significant variability was observed in the VHE $\\gamma $ -ray emission of the source during this period.", "The X-ray spectrum of the source was observed to be hard with a spectral index $\\sim $ 1.04 and without any cut-off up to 195 keV.", "The IR observations indicated an elliptical host galaxy with redshift $z$ $\\sim $ 0.14 [76].", "The VERITAS observations of this source between May 2014 and November 2015 revealed VHE $\\gamma $ –ray emission described by a power law with a spectral index of 2.81$\\pm $ 0.12 in the energy range of 180 GeV to 2 TeV [77].", "A near simultaneous observation with the Fermi-LAT in the energy range 3-300 GeV was used to constrain its redshift $z <$ 0.23 [77]." ] ]
1906.04486
[ [ "Importance Resampling for Off-policy Prediction" ], [ "Abstract Importance sampling (IS) is a common reweighting strategy for off-policy prediction in reinforcement learning.", "While it is consistent and unbiased, it can result in high variance updates to the weights for the value function.", "In this work, we explore a resampling strategy as an alternative to reweighting.", "We propose Importance Resampling (IR) for off-policy prediction, which resamples experience from a replay buffer and applies standard on-policy updates.", "The approach avoids using importance sampling ratios in the update, instead correcting the distribution before the update.", "We characterize the bias and consistency of IR, particularly compared to Weighted IS (WIS).", "We demonstrate in several microworlds that IR has improved sample efficiency and lower variance updates, as compared to IS and several variance-reduced IS strategies, including variants of WIS and V-trace which clips IS ratios.", "We also provide a demonstration showing IR improves over IS for learning a value function from images in a racing car simulator." 
], [ "Introduction", "An emerging direction for reinforcement learning systems is to learn many predictions, formalized as value function predictions contingent on many different policies.", "The idea is that such predictions can provide a powerful abstract model of the world.", "Some examples of systems that learn many value functions are the Horde architecture composed of General Value Functions (GVFs) [37], [20], systems that use options [35], [29], predictive representation approaches [36], [28], [31] and systems with auxiliary tasks [7].", "Off-policy learning is critical for learning many value functions with different policies, because it enables data to be generated from one behavior policy to update the values for each target policy in parallel.", "The typical strategy for off-policy learning is to reweight updates using importance sampling (IS).", "For a given state $s$ , with action $a$ selected according to behavior $\mu $ , the IS ratio is the ratio between the probability of the action under the target policy $\pi $ and the behavior: $\frac{\pi (a|s)}{\mu (a|s)}$ .", "The update is multiplied by this ratio, adjusting the action probabilities so that the expectation of the update is as if the actions were sampled according to the target policy $\pi $ .", "Though the IS estimator is unbiased and consistent [8], [27], it can suffer from high or even infinite variance due to large magnitude IS ratios, in theory [1] and in practice [25], [16], [17].", "There have been some attempts to modify off-policy prediction algorithms to mitigate this variance.", "(There is substantial literature on variance reduction for a related problem, off-policy policy evaluation, which estimates only a single number or value for a policy (e.g., see [38]); the resulting algorithms differ substantially, and are not appropriate for learning the value function.)", "Weighted IS (WIS) algorithms have been introduced [25], [16], [15], which normalize each update by the sample
average of the ratios.", "These algorithms improve learning over standard IS strategies, but are not straightforward to extend to nonlinear function approximation.", "In the offline setting, a reweighting scheme called importance sampling with unequal support [39] was introduced to account for samples where the ratio is zero, in some cases significantly reducing variance.", "Another strategy is to rescale or truncate the IS ratios, as used by V-trace [4] for learning value functions and Tree-Backup [24], Retrace [21] and ABQ [17] for learning action-values.", "Truncation of IS ratios in V-trace can incur significant bias, and this additional truncation parameter needs to be tuned.", "An alternative to reweighting updates is to instead correct the distribution before updating the estimator, using weighted bootstrap sampling: resampling a new set of data from the previously generated samples [33], [2].", "Consider a setting where a buffer of data is stored, generated by a behavior policy.", "Samples for policy $\pi $ can be obtained by resampling from this buffer, proportionally to $\frac{\pi (a|s)}{\mu (a|s)}$ for state-action pairs $(s,a)$ in the buffer.", "In the sampling literature, this strategy has been proposed under the name Sampling Importance Resampling (SIR) [26], [33], [5], and has been particularly successful for Sequential Monte Carlo sampling [5], [32].", "Such resampling strategies have also been popular in classification, with over-sampling or under-sampling typically being preferred to weighted (cost-sensitive) updates [14].", "A resampling strategy has several potential benefits for off-policy prediction.", "(We explicitly use the term prediction rather than policy evaluation to make it clear that we are not learning value functions for control; rather, our goal is to learn value functions solely for the sake of prediction.)", "Resampling could even have larger benefits for learning approaches, as compared to averaging or numerical integration
problems, because updates accumulate in the weight vector and change the optimization trajectory of the weights.", "For example, very large importance sampling ratios could destabilize the weights.", "This problem does not occur for resampling, as instead the same transition will be resampled multiple times, spreading a large magnitude update across multiple updates.", "At the other extreme, IS will waste updates on transitions with very small IS ratios.", "By correcting the distribution before updating, standard on-policy updates can be applied.", "The magnitudes of the updates vary less—because updates are not multiplied by very small or very large importance sampling ratios—potentially reducing the variance of stochastic updates and simplifying learning rate selection.", "We hypothesize that resampling (a) learns in fewer updates to the weights, because it focuses computation on samples that are likely under the target policy, and (b) is less sensitive to learning parameters and to target and behavior policy specification.", "In this work, we investigate the use of resampling for online off-policy prediction for known, unchanging target and behavior policies.", "We first introduce Importance Resampling (IR), which samples transitions from a buffer of (recent) transitions according to IS ratios.", "These sampled transitions are then used for on-policy updates.", "We show that IR has the same bias as WIS, and that it can be made unbiased and consistent with the inclusion of a batch correction term—even under a sliding window buffer of experience.", "We provide additional theoretical results characterizing when we might expect the variance to be lower for IR than for IS.", "We then empirically investigate IR on three microworlds and a racing car simulator, learning from images, highlighting that (a) IR is less sensitive to learning rate than IS and V-trace (IS with clipping) and (b) IR converges more quickly in terms of the number of updates."
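The resampling alternative described above can be sketched in a few lines. The following is our own illustrative Python (not the paper's code, which is in Julia): drawing from a buffer proportionally to the IS ratios makes the empirical action distribution match the target policy, without multiplying any update by a ratio.

```python
import random

def resample_minibatch(rhos, k, rng):
    # SIR / IR: draw k buffer indices proportionally to the IS ratios
    # rho_i = pi(a_i | s_i) / mu(a_i | s_i).
    return rng.choices(range(len(rhos)), weights=rhos, k=k)

# Toy check: one state, two actions; behavior mu = (0.5, 0.5) and
# target pi = (0.9, 0.1), so rho = 1.8 for action 0 and 0.2 for action 1.
rng = random.Random(0)
actions = [0 if rng.random() < 0.5 else 1 for _ in range(10000)]  # drawn from mu
rhos = [1.8 if a == 0 else 0.2 for a in actions]

idx = resample_minibatch(rhos, 50000, rng)
frac_a0 = sum(actions[i] == 0 for i in idx) / len(idx)
# frac_a0 is close to pi(a=0) = 0.9: resampling corrects the action distribution
```

The buffer layout and function names here are hypothetical; the point is only that `rng.choices` with `weights=rhos` implements the weighted bootstrap described in the text.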
], [ "Background", "We consider the problem of learning General Value Functions (GVFs) [37].", "The agent interacts in an environment defined by a set of states $\mathcal {S}$ , a set of actions $\mathcal {A}$ and Markov transition dynamics, with probability $\mathrm {P}(s^{\prime }|s,a)$ of transitioning to state $s^{\prime }$ when taking action $a$ in state $s$ .", "A GVF is defined for policy $\pi : \mathcal {S}\times \mathcal {A}\rightarrow [0,1]$ , cumulant $c: \mathcal {S}\times \mathcal {A}\times \mathcal {S}\rightarrow \mathbb {R}$ and continuation function $\gamma : \mathcal {S}\times \mathcal {A}\times \mathcal {S}\rightarrow [0,1]$ , with $C_{t+1} \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}c(S_t, A_t, S_{t+1})$ and $\gamma _{t+1} \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\gamma (S_t, A_t, S_{t+1})$ for a (random) transition $(S_t, A_t, S_{t+1})$ .", "The value for a state $s \in \mathcal {S}$ is $V(s) \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\mathbb {E}_\pi \left[ G_t | S_t = s\right]$ , where $G_t \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}C_{t+1} + \gamma _{t+1} C_{t+2} + \gamma _{t+1} \gamma _{t+2} C_{t+3} + \ldots $ .", "The operator $\mathbb {E}_\pi $ indicates an expectation with actions selected according to policy $\pi $ .", "GVFs encompass standard value functions, where the cumulant is a reward.", "Otherwise, GVFs enable predictions about discounted sums of other signals into the future, when following a target policy $\pi $ .", "These values are typically estimated using parametric function approximation, with weights $\theta \in \mathbb {R}^d$ defining approximate values $V_\theta (s)$ .", "In off-policy learning, transitions are sampled according to behavior
policy, rather than the target policy.", "To get an unbiased sample of an update to the weights, the action probabilities need to be adjusted.", "Consider on-policy temporal difference (TD) learning, with update $ \\alpha _t\\delta _t\\nabla _\\theta V_{\\theta }(s)$ for a given $S_t = s$ , for learning rate $\\alpha _t \\in \\mathbb {R}^+$ and TD-error $\\delta _t \\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}C_{t+1} + \\gamma _{t+1}V_{\\theta }(S_{t+1}) - V_{\\theta }(s)$ .", "If actions are instead sampled according to a behavior policy $\\mu : \\mathcal {S}\\times \\mathcal {A}\\rightarrow [0,1]$ , then we can use importance sampling (IS) to modify the update, giving the off-policy TD update $\\alpha _t\\rho _t\\delta _t\\nabla _\\theta V_{\\theta }(s)$ for IS ratio $\\rho _t \\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}\\frac{\\pi (A_t | S_t)}{\\mu (A_t | S_t)}$ .", "Given state $S_t = s$ , if $\\mu (a | s) > 0$ when $\\pi (a | s) > 0$ , then the expected value of these two updates are equal.", "To see why, notice that $\\mathbb {E}_\\mu \\left[\\alpha _t\\rho _t\\delta _t\\nabla _\\theta V_{\\theta }(s) |S_t = s\\right]= \\alpha _t\\nabla _\\theta V_{\\theta }(s)\\mathbb {E}_\\mu \\left[\\rho _t\\delta _t |S_t = s\\right]$ which equals $\\mathbb {E}_\\pi \\left[\\alpha _t\\rho _t\\delta _t\\nabla _\\theta V_{\\theta }(s) |S_t = s\\right]$ because $\\mathbb {E}_\\mu \\left[\\rho _t\\delta _t |S_t = s\\right]&= \\sum _{a\\in \\mathcal {A}} \\mu (a| s) \\frac{\\pi (a| s)}{\\mu (a| s)} \\mathbb {E}\\left[\\delta _t |S_t = s, A_t = a\\right]= \\ \\mathbb {E}_\\pi \\left[\\delta _t |S_t = s\\right].$ Though unbiased, IS can be high-variance.", "A lower variance alternative is Weighted IS (WIS).", "For a batch consisting of transitions $\\lbrace (s_i, a_i, s_{i+1}, c_{i+1}, \\rho _i)\\rbrace _{i=1}^n$ , batch WIS uses a normalized estimate for the update.", "For example, an offline 
batch WIS TD algorithm, denoted WIS-Optimal below, would use the update $ \alpha _t \frac{\rho _t}{\sum _{i=1}^n \rho _i} \delta _t\nabla _\theta V_{\theta }(s)$ .", "Obtaining an efficient WIS update is not straightforward, however, when learning online, and has resulted in algorithms in the SGD setting (i.e., $n=1$ ) specialized to tabular [25] and linear functions [16], [15].", "We nonetheless use WIS as a baseline in the experiments and theory." ], [ "Importance Resampling", "In this section, we introduce Importance Resampling (IR) for off-policy prediction and characterize its bias and variance.", "A resampling strategy requires a buffer of samples, from which we can resample.", "Replaying experience from a buffer was introduced as a biologically plausible way to reuse old experience [11], [12], and has become common for improving sample efficiency, particularly for control [19], [30].", "In the simplest case—which we assume here—the buffer is a sliding window of the most recent $n$ samples, $\lbrace (s_i, a_i, s_{i+1}, c_{i+1}, \rho _i)\rbrace _{i=t-n}^t$ , at time step $t > n$ .", "We assume samples are generated by taking actions according to behavior $\mu $ .", "The transitions are generated with probability $d_\mu (s) \mu (a | s) \mathrm {P}(s^{\prime } | s, a)$ , where $d_\mu : \mathcal {S}\rightarrow [0,1]$ is the stationary distribution for policy $\mu $ .", "The goal is to obtain samples according to $d_\mu (s) \pi (a | s) \mathrm {P}(s^{\prime } | s, a)$ , as if we had taken actions according to policy $\pi $ from states $s\sim d_\mu $ .", "(The assumption that states are sampled from $d_\mu $ underlies most off-policy learning algorithms.", "Only a few attempt to adjust the probabilities $d_\mu $ to $d_\pi $ , either by multiplying IS ratios before a transition [25] or by directly estimating state distributions [6], [13].", "In this work, we focus on using resampling to correct the action distribution—the standard setting.", "We expect, however, that some insights will extend to how to use resampling to correct the state distribution, particularly because wherever IS ratios are used it should be straightforward to use our resampling approach.)", "The IR algorithm is simple: resample a mini-batch of size $k$ on each step $t$ from the buffer of size $n$ , proportionally to $\rho _i$ in the buffer.", "Using the resampled mini-batch, we can update our value function using standard on-policy approaches, such as on-policy TD or on-policy gradient TD.", "The key difference to IS and WIS is that the distribution itself is corrected before the update, whereas IS and WIS correct the update itself.", "This small difference, however, can have larger ramifications practically, as we show in this paper.", "We consider two variants of IR: with and without bias correction.", "For a point $i_j$ sampled from the buffer, let $\Delta _{i_j}$ be the on-policy update for that transition.", "For example, for TD, $\Delta _{i_j} = \delta _{i_j} \nabla _\theta V_\theta (s_{i_j})$ .", "The first step for either variant is to sample a mini-batch of size $k$ from the buffer, proportionally to $\rho _i$ .", "Bias-Corrected IR (BC-IR) additionally pre-multiplies with the average ratio in the buffer $\bar{\rho } \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\tfrac{1}{n} \sum _{i=1}^n \rho _i$ , giving the following estimators for the update direction: $X_{\mathrm {IR}}\mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\tfrac{1}{k} \sum _{j=1}^k \Delta _{i_j} \hspace{56.9055pt}X_{\mathrm {BC}}\mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\tfrac{\bar{\rho }}{k} \sum _{j=1}^k \Delta _{i_j}$ .", "BC-IR negates bias introduced by the average ratio in the buffer deviating significantly from the true mean.", "For reasonably large buffers, $\bar{\rho }$ will be close to 1, making IR and BC-IR
have near-identical updates (since $\bar{\rho } \approx \mathbb {E}[\rho (a|s)] = \mathbb {E}[\frac{\pi (a|s)}{\mu (a|s)}] = \sum _{s,a} \frac{\pi (a|s)}{\mu (a|s)}\mu (a|s) d_{\mu }(s) = 1$ ).", "Nonetheless, they do have different theoretical properties, particularly for small buffer sizes $n$ , so we characterize both.", "Across most results, we make the following assumption.", "Assumption 1 A buffer $B_t = \lbrace X_{t+1}, ..., X_{t+n}\rbrace $ is constructed from the most recent $n$ transitions sampled by time $t+n$ , which are generated sequentially from an irreducible, finite MDP with a fixed policy $\mu $ .", "To denote expectations under $p(x) = d_\mu (s) \mu (a | s) \mathrm {P}(s^{\prime } | s, a)$ and $q(x) = d_\mu (s) \pi (a | s) \mathrm {P}(s^{\prime } | s, a)$ , we overload the notation from above, using operators $\mathbb {E}_\mu $ and $\mathbb {E}_\pi $ respectively.", "To reduce clutter, we write $\mathbb {E}$ to mean $\mathbb {E}_\mu $ , because most expectations are under the sampling distribution.", "All proofs can be found in Appendix ."
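The two estimators can be written directly from their definitions. A minimal sketch follows (our own illustrative Python and naming, not the paper's code); the on-policy updates $\Delta _{i_j}$ are represented as plain numbers:

```python
import random

def ir_update(deltas, rhos, k, rng, bias_correct=False):
    # Sample k indices proportionally to rho_i, then average the
    # corresponding on-policy updates Delta_{i_j}.
    idx = rng.choices(range(len(deltas)), weights=rhos, k=k)
    x_ir = sum(deltas[i] for i in idx) / k           # X_IR
    if not bias_correct:
        return x_ir
    rho_bar = sum(rhos) / len(rhos)                  # average ratio in the buffer
    return rho_bar * x_ir                            # X_BC = rho_bar * X_IR

# Sanity check: with all ratios equal to 1 (on-policy data), rho_bar = 1 and
# both variants reduce to a plain mean of the updates.
rng = random.Random(1)
deltas = [0.5, -1.0, 2.0, 0.25]
x_bc = ir_update(deltas, [1.0] * 4, 1000, rng, bias_correct=True)
# x_bc is a noisy estimate of mean(deltas) = 0.4375
```

In practice the updates would be gradient vectors rather than scalars; only the sampling and averaging structure is the point here.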
], [ "Bias of IR", "We first show that IR is biased, and that its bias is actually equal to that of WIS-Optimal, in Theorem REF .", "Theorem 3.1 [Bias for a fixed buffer of size $n$ ] Assume a buffer $B$ of $n$ transitions sampled i.i.d. according to $p(x = (s,a,s^{\prime })) = d_\mu (s) \mu (a | s) \mathrm {P}(s^{\prime } | s, a)$ .", "Let $X_{\mathrm {WIS}^*}\mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\sum _{i=1}^{n} \frac{\rho _i}{\sum _{j=1}^n \rho _j} \Delta _i$ be the WIS-Optimal estimator of the update.", "Then, $\mathbb {E}[X_{\mathrm {IR}}] = \mathbb {E}[X_{\mathrm {WIS}^*}]$ and so the bias of $X_{\mathrm {IR}}$ is proportional to $\mathrm {Bias}(X_{\mathrm {IR}}) = \mathbb {E}[X_{\mathrm {IR}}] - \mathbb {E}_\pi [\Delta ] \propto \frac{1}{n} (\mathbb {E}_\pi [\Delta ] \sigma _\rho ^2 - \sigma _{\rho , \Delta } \sigma _\rho \sigma _\Delta )$ where $\mathbb {E}_\pi [\Delta ]$ is the expected update across all transitions, with actions from $S$ taken by the target policy $\pi $ ; $\sigma _\rho ^2 = \mathrm {Var}(\tfrac{1}{n}\sum _{j=1}^n \rho _j)$ ; $\sigma _\Delta ^2 = \mathrm {Var}(\tfrac{1}{n}\sum _{i=1}^{n} \rho _i \Delta _i)$ ; and covariance $\sigma _{(\rho ,\Delta )} = \mathrm {Cov}(\tfrac{1}{n}\sum _{j=1}^n \rho _j,\tfrac{1}{n}\sum _{i=1}^{n} \rho _i \Delta _i)$ .", "Theorem REF is the only result which follows a different set of assumptions, primarily due to using the bias characterization of $X_{\mathrm {WIS}^*}$ found in [22].", "The bias of IR will be small for reasonably large $n$ , both because it is proportional to $1/n$ and because larger $n$ will result in lower variance of the average ratios and average update for the buffer in Equation (REF ).", "In particular, as $n$ grows, these variances decay proportionally to $1/n$ .", "Nonetheless, for smaller buffers, such bias could have an impact.", "We can, however, easily mitigate this bias with a
bias-correction term, as shown in the next corollary and proven in Appendix REF .", "Corollary 3.1.1 BC-IR is unbiased: $\mathbb {E}[X_{\mathrm {BC}}] = \mathbb {E}_\pi [\Delta ]$ ." ], [ "Consistency of IR", "Consistency of IR with an increasing buffer, $n \rightarrow \infty $ , is a relatively straightforward extension of prior results for SIR, with or without the bias correction, and follows from the derived bias of both estimators (see Theorem REF in Appendix REF ).", "More interesting, and reflective of practice, is consistency with a fixed length buffer and increasing interactions with the environment, $t \rightarrow \infty $ .", "IR, without bias correction, is asymptotically biased in this case; in fact, its asymptotic bias is the one characterized above for a fixed length buffer in Theorem REF .", "BC-IR, on the other hand, is consistent, even with a sliding window, as we show in the following theorem.", "Theorem 3.2 Let $B_t = \lbrace X_{t+1}, ..., X_{t+n}\rbrace $ be the buffer of the most recent $n$ transitions sampled according to Assumption REF .", "Define the sliding-window estimator $X_{T} \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\frac{1}{T} \sum _{t=1}^T X_{\mathrm {BC}}^{(t)}$ .", "If $\mathbb {E}_\pi [|\Delta | ] < \infty $ , then $X_{T}$ converges to $\mathbb {E}_\pi [\Delta ]$ almost surely as $T \rightarrow \infty $ ."
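The bias and its correction can be checked numerically. The following Monte Carlo sketch is our own construction (not from the paper): a single-state problem with two actions and a tiny buffer of $n=2$ , where $\mathbb {E}_\pi [\Delta ] = 0.8$ is known in closed form. The empirical mean of $X_{\mathrm {BC}}$ matches it, while plain IR exhibits the small-buffer bias.

```python
import random

def one_trial(rng, bias_correct):
    # Single state, two actions; behavior mu = (0.5, 0.5), target pi = (0.9, 0.1).
    # On-policy updates: Delta(a=0) = +1, Delta(a=1) = -1, so E_pi[Delta] = 0.8.
    buffer = []
    for _ in range(2):                          # tiny buffer, n = 2
        a = 0 if rng.random() < 0.5 else 1      # action from mu
        rho = 1.8 if a == 0 else 0.2            # pi(a)/mu(a)
        delta = 1.0 if a == 0 else -1.0
        buffer.append((rho, delta))
    rhos = [b[0] for b in buffer]
    _rho_i, delta_i = rng.choices(buffer, weights=rhos, k=1)[0]
    if bias_correct:
        return (sum(rhos) / len(rhos)) * delta_i   # X_BC
    return delta_i                                  # X_IR

rng = random.Random(2)
trials = 40000
mean_bc = sum(one_trial(rng, True) for _ in range(trials)) / trials
mean_ir = sum(one_trial(rng, False) for _ in range(trials)) / trials
# mean_bc is close to 0.8 (unbiased); mean_ir is noticeably biased for n = 2
```

The magnitude of the IR bias here is exaggerated by the deliberately tiny buffer; per Theorem REF it shrinks proportionally to $1/n$.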
], [ "Variance of Updates", "It might seem that resampling avoids high variance in updates, because it does not reweight with large magnitude IS ratios.", "The notion of effective sample size from statistics, however, provides some intuition about why large magnitude IS ratios can also negatively affect IR, not just IS.", "The effective sample size is between 1 and $n$ , with one estimator $ \left(\sum _{i=1}^{n} \rho _i \right)^2/\sum _{i=1}^{n} \rho _i^2$ [9], [18].", "When the effective sample size is low, this indicates that most of the probability is concentrated on a few samples.", "For high magnitude ratios, IR will repeatedly sample the same transitions, and potentially never sample some of the transitions with small IS ratios.", "Fortunately, we find that, despite this dependence on effective sample size, IR can significantly reduce variance over IS.", "In this section, we characterize the variance of the BC-IR estimator.", "We choose this variant of IR because it is unbiased, and so characterizing its variance is a fairer comparison to IS.", "We define the mini-batch IS estimator $X_{\mathrm {IS}}\mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\frac{1}{k}\sum _{j=1}^k\rho _{{z_j}}\Delta _{{z_j}}$ , where the indices ${z_j}$ are sampled uniformly from $\lbrace 1, \ldots , n\rbrace $ .", "This contrasts with the indices $i_1, \ldots , i_k$ for $X_{\mathrm {BC}}$ , which are sampled proportionally to $\rho _i$ .", "We begin by characterizing the variance under a fixed dataset $B$ .", "For convenience, let $\mu _{B} = \mathbb {E}_\pi [\Delta | B] $ .", "We characterize the sum of the variances of each component in the update estimator, which equivalently corresponds to the normed deviation of the update from its mean, $\mathbb {V}(\Delta \ | \ B) \mathrel {\overset{\makebox{[}0pt]{\mbox{\normalfont \tiny \sffamily def}}}{=}}\operatorname{tr}\mathrm {Cov}(\Delta \ | \ B) = {\textstyle \sum _{m=1}^d \mathrm {Var}(\Delta _m \ |\ B)}= \mathbb {E}[\Vert \Delta - \mu _{B} \Vert _2^2 \ |\ B ]$ for an unbiased stochastic update $\Delta \in \mathbb {R}^d$ .", "We show in two theorems that BC-IR has lower variance than IS, under two different conditions on the norm of the update.", "We first start with more general conditions, and then provide a theorem for conditions that are likely only true in early learning.", "Theorem 3.3 Assume that, for a given buffer $B$ , $ \Vert \Delta _j \Vert _2^2 > \frac{c}{\rho _j}$ for samples where $\rho _j \ge \bar{\rho }$ , and that $ \Vert \Delta _j \Vert _2^2 < \frac{c}{\rho _j}$ for samples where $\rho _j < \bar{\rho }$ , for some $c > 0$ .", "Then the BC-IR estimator has lower variance than the IS estimator: $\mathbb {V}(X_{\mathrm {BC}}\ | \ B) < \mathbb {V}(X_{\mathrm {IS}}\ | \ B)$ .", "The conditions in Theorem REF preclude having update norms for samples with small $\rho $ be quite large—larger than a number $\propto \frac{1}{\rho }$ —and a small norm for samples with large $\rho $ .", "These conditions can be relaxed to a statement on average, where the cumulative weighted magnitude of the update norm for samples with $\rho $ below the median needs to be smaller than for samples with $\rho $ above the mean (see the proof in Appendix REF ).", "We next consider a setting where the magnitude of the update is independent of the given state and action.", "We expect this condition to hold in early learning, where the weights are randomly initialized, and thus randomly incorrect across the state-action space.", "As learning progresses, and value estimates become more accurate in some states, it is unlikely for this condition to hold.", "Theorem 3.4 Assume $\rho $ and the magnitude of the update $\Vert \Delta \Vert _2^2$ are independent: $\mathbb {E}[\rho _j \Vert \Delta _j \Vert _2^2 \ | \ B] = \mathbb {E}[\rho _j \ | \ B] \ \mathbb {E}[ \Vert \Delta _j \Vert _2^2 \ | \ B]$ .", "Then the BC-IR
estimator will have equal or lower variance than the IS estimator: $\mathbb {V}(X_{\mathrm {BC}}\ | \ B) \le \mathbb {V}(X_{\mathrm {IS}}\ | \ B)$ .", "These results have focused on the variance of each estimator for a fixed buffer, which provides insight into the variance of updates when executing the algorithms.", "We would, however, also like to characterize variability across buffers, especially for smaller buffers.", "Fortunately, such a characterization is a simple extension of the above results, because variability for a given buffer already demonstrates variability due to different samples.", "It is easy to check that $\mathbb {E}[\mathbb {E}[X_{\mathrm {BC}} \ | \ B]] = \mathbb {E}[\mathbb {E}[X_{\mathrm {IS}} \ | \ B]] = \mathbb {E}_\pi [\Delta ]$ .", "The variances can be written using the law of total variance: $\mathbb {V}(X_{\mathrm {BC}}) = \mathbb {E}[\mathbb {V}(X_{\mathrm {BC}}\ | \ B) ] + \mathbb {V}(\mathbb {E}[X_{\mathrm {BC}}\ | \ B]) = \mathbb {E}[\mathbb {V}(X_{\mathrm {BC}}\ | \ B) ] + \mathbb {V}(\mu _{B})$ and $\mathbb {V}(X_{\mathrm {IS}}) = \mathbb {E}[\mathbb {V}(X_{\mathrm {IS}}\ | \ B) ] + \mathbb {V}(\mu _{B})$ , so that $\mathbb {V}(X_{\mathrm {BC}}) - \mathbb {V}(X_{\mathrm {IS}}) = \mathbb {E}[\mathbb {V}(X_{\mathrm {BC}}\ | \ B) - \mathbb {V}(X_{\mathrm {IS}}\ | \ B)]$ , with the expectation across buffers.", "Therefore, the analysis of $\mathbb {V}(X_{\mathrm {BC}}\ | \ B)$ directly applies."
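Both the effective-sample-size estimator and the variance comparison can be illustrated with a small simulation. This sketch is our own (synthetic buffer and updates; the update magnitudes are constructed independently of $\rho$ , mirroring the early-learning condition of Theorem REF):

```python
import random

def effective_sample_size(rhos):
    # ESS estimator (sum rho)^2 / sum rho^2; lies between 1 and n.
    return sum(rhos) ** 2 / sum(r * r for r in rhos)

def var(samples):
    m = sum(samples) / len(samples)
    return sum((x - m) ** 2 for x in samples) / len(samples)

# Fixed buffer with high-variance ratios; Delta values are +/-1, assigned
# independently of rho.
n, k, reps = 100, 10, 5000
rhos = [0.1] * 50 + [3.0] * 50
deltas = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]
rho_bar = sum(rhos) / n

rng = random.Random(3)
x_is = [sum(rhos[z] * deltas[z] for z in rng.choices(range(n), k=k)) / k
        for _ in range(reps)]
x_bc = [rho_bar * sum(deltas[i]
                      for i in rng.choices(range(n), weights=rhos, k=k)) / k
        for _ in range(reps)]
# Empirically, Var(X_BC | B) is well below Var(X_IS | B) for this buffer,
# because X_IS multiplies by ratios as large as 3 while X_BC does not.
```

Here the ESS is roughly $n/2$ despite the skewed ratios, and the simulated variances reflect the ordering of Theorem REF; none of the specific constants come from the paper.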
], [ "Empirical Results", "We investigate the two hypothesized benefits of resampling as compared to reweighting: improved sample efficiency and reduced variance.", "These benefits are tested in two microworld domains—a Markov chain and the Four Rooms domain—where exhaustive experiments can be conducted.", "We also provide a demonstration that IR reduces sensitivity over IS and V-trace in a car simulator, TORCs, when learning from images.", "Experimental code for every domain except TORCs can be found at https://mkschleg.github.io/Resampling.jl.", "We compare IR and BC-IR against several reweighting strategies, including importance sampling (IS); two online approaches to weighted importance sampling, WIS-Minibatch with weighting $\rho _i/\sum _{j=1}^k \rho _j$ and WIS-Buffer with weighting $\rho _i/\tfrac{k}{n}\sum _{j=1}^n\rho _j$ ; and V-trace, which corresponds to clipping importance weights [4].", "(Retrace, ABQ and Tree-Backup also use clipping to reduce variance, but they are designed for learning action-values and for mitigating variance in eligibility traces; when the trace parameter $\lambda = 0$ —as we assume here—there are no IS ratios and these methods become equivalent to using Sarsa(0) for learning action-values.)", "We also compare to WIS-TD(0) [15], when applicable, which uses an online approximation to WIS, with a stepsize selection strategy (as described in Appendix REF ).", "This algorithm uses only one sample at a time, rather than a mini-batch, and so is only included in Figure REF .", "Where appropriate, we also include baselines using On-policy sampling; WIS-Optimal, which uses the whole buffer to get an update; and Sarsa(0), which learns action-values—which does not require IS ratios—and then produces the estimate $V(s) = \sum _a \pi (s,a) Q(s,a)$ .", "WIS-Optimal is included as an optimal baseline, rather than as a competitor, as it estimates the update using the whole buffer on every step.", "In all the experiments, the data is generated
off-policy.", "We compute the absolute value error (AVE) or the absolute return error (ARE) on every step.", "For the sensitivity plots, we take the average over all the interactions as specified for the environment — resulting in MAVE and MARE respectively.", "The error bars represent the standard error over runs, which are featured on every plot — although not visible in some instances.", "For the microworlds, the true value function is found using dynamic programming with threshold $10^{-15}$ , and we compute AVE over all the states.", "For TORCs and continuous Four Rooms, the true value function is approximated using rollouts from a random subset of states generated when running the behavior policy $\mu $ , and the ARE is computed over this subset.", "For the TORCs domain, the same subset of states is used for each run due to computational constraints, and we report the mean squared return error (MSRE).", "Plots showing sensitivity over the number of updates show results for complete experiments with updates evenly spread over all the interactions.", "A tabular representation is used in the microworld experiments, tile-coded features with 64 tilings and 8 tiles are used in continuous Four Rooms, and a convolutional neural network is used for TORCs, with an architecture previously defined for self-driving cars [3]."
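As a concrete reading of the error measures, a minimal sketch of the AVE computation (our own naming and code; the paper gives no implementation):

```python
def absolute_value_error(v_hat, v_true):
    # AVE: mean absolute difference between estimated and true values,
    # computed over a set of states on every step; MAVE then averages
    # this quantity over all interactions of a run.
    assert len(v_hat) == len(v_true)
    return sum(abs(a - b) for a, b in zip(v_hat, v_true)) / len(v_true)

ave = absolute_value_error([0.5, 1.0, 0.0], [1.0, 1.0, 0.5])
# -> (0.5 + 0.0 + 0.5) / 3 = 1/3
```

ARE is analogous, with rollout-estimated returns in place of the dynamic-programming values.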
], [ "Investigating Convergence Rate", "We first investigate the convergence rate of IR.", "We report learning curves in Four Rooms, as well as sensitivity to the learning rate.", "The Four Rooms domain [34] has four rooms in an 11x11 grid world.", "The four rooms are positioned in a grid pattern, with each room having two adjacent rooms.", "Each adjacent room is separated by a wall with a single connecting hallway.", "The target policy takes the down action deterministically.", "The cumulant for the value function is 1 when the agent hits a wall and 0 otherwise.", "The continuation function is $\gamma =0.9$ , with termination when the agent hits a wall.", "The resulting value function can be thought of as a distance to the bottom wall.", "The behavior policy is uniform random everywhere except for 25 randomly selected states, which take the action down with probability 0.05, with the remaining probability split equally amongst the other actions.", "The choice of behavior and target policy induces high magnitude IS ratios.", "Figure: Four Rooms experiments ($n=2500$ , $k=16$ , 25 runs): left Learning curves for each method, with updates every 16 steps.", "IR and WIS-Optimal are overlapping.", "center Sensitivity over the number of interactions between updates.", "right Learning rate sensitivity plot.", "As shown in Figure REF , IR has noticeable improvements over the reweighting strategies tested.", "The fact that IR resamples more important transitions from the replay buffer seems to significantly increase the learning speed.", "Further, IR has a wider range of usable learning rates.", "The same effect is seen even as we reduce the total number of updates, where the uniform sampling methods perform significantly worse as the number of interactions between updates increases—suggesting improved sample efficiency.", "WIS-Buffer performs almost equivalently to IS, because for reasonably sized buffers, its normalization factor $\tfrac{1}{n}\sum _{j=1}^n\rho _j \approx 1$ because
$\\mathbb {E}[\\rho ] = 1$ .", "WIS-Minibatch and V-trace both reduce the variance significantly, with their bias having only a limited impact on the final performance compared to IS.", "Even the most aggressive clipping parameter for V-trace—a clipping of 1.0— outperforms IS.", "The bias may have limited impact because the target policy is deterministic, and so only updates for exactly one action in a state.", "Sarsa—which is the same as Retrace(0)—performs similarly to the reweighting strategies.", "The above results highlight the convergence rate improvements from IR, in terms of number of updates, without generalization across values.", "Conclusions might actually be different with function approximation, when updates for one state can be informative for others.", "For example, even if in one state the target policy differs significantly from the behavior policy, if they are similar in a related state, generalization could overcome effective sample size issues.", "We therefore further investigate if the above phenomena arise under function approximation with RMSProp learning rate selection.", "Figure: Convergence rates in Continuous Four Rooms averaged over 25 runs with 100000 interactions with the environment.", "left uniform random behavior policy and target policy which takes the down action with probability 0.90.9 and probability 0.1/30.1 / 3 for all other actions.", "Learning used incremental updates (as specified in appendix ).", "right uniform random behavior and target policy with persistent down action selection learned with mini-batch updates with RMSProp.We conduct two experiments similar to above, in a continuous state Four Rooms variant.", "The agent is a circle with radius 0.1, and the state consists of a continuous tuple containing the x and y coordinates of the agent's center point.", "The agent takes an action in one of the 4 cardinal directions moving $0.5 \\pm \\mathcal {U}(0.0, 0.1)$ in that directions with random drift in the orthogonal 
direction sampled from $\mathcal {N}(0.0,0.01)$ .", "The representation is a tile-coded feature vector with 64 tilings and 8 tiles.", "We provide results for both mini-batch updating (as above) and incremental updating (i.e., updating on each transition of a mini-batch incrementally; see appendix REF for details).", "For the mini-batch experiment, the target policy deterministically takes the down action.", "For the incremental experiment, the target policy takes the down action with probability $0.9$ and selects all other actions with probability $0.1 / 3$ .", "We find that generalization can mitigate some of the differences between IR and IS above in some settings, but in others the difference remains just as stark (see Figure REF and Appendix REF ).", "If we use the behavior policy from the tabular domain, which skews the behavior in a sparse set of states, the nearby states mitigate this skew.", "However, if we use a behavior policy that selects all actions uniformly, then again IR obtains noticeable gains over IS and V-trace, for reducing the required number of updates, as shown in Figure REF .", "We find similar results for the incremental setting, Figure REF (left), where resampling still outperforms all other methods in terms of convergence rates.", "Given WIS-TD(0)'s significant degradation in performance as the number of updates decreases, we also compare with using WIS-TD(0) when sampling according to IR, denoted IR+WIS-TD(0).", "Interestingly, this method outperforms all others — albeit only slightly against IR with a constant learning rate.", "This result leads us to believe RMSProp may be a poor optimizer choice for this setting.", "Expanded results can be found in Appendix REF ."
], [ "Investigating Variance", "To better investigate the update variance we use a Markov chain, where we can more easily control dissimilarity between $\\mu $ and $\\pi $ , and so control the magnitude of the IS ratios.", "The Markov chain is composed of 8 non-terminating states and 2 terminating states on the ends of the chain, with a cumulant of 1 on the transition to the right-most terminal state and 0 everywhere else.", "We consider policies with probabilities [left, right] equal in all states: $\\mu = [0.9, 0.1], \\pi =[0.1,0.9]$ ; further policy settings can be found in Appendix REF .", "We first measure the variance of the updates for fixed buffers.", "We compute the variance of the update—from a given weight vector—by simulating the many possible updates that could have occurred.", "We are interested in the variance of updates both for early learning—when the weight vector is quite incorrect and updates are larger—and later learning.", "To obtain a sequence of such weight vectors, we use the sequence of weights generated by WIS-Optimal.", "As shown in Figure REF , the variance of IR is lower than that of IS, particularly in early learning, where the difference is stark.", "Once the weight vector has largely converged, the variance of IR and IS is comparable and near zero.", "Figure: Learning rate sensitivity in TORCs, averaged over 10 runs.", "V-trace has clipping parameter 1.0.", "All the methods performed worse with a higher learning rate than shown here, so we restrict to this range.", "We can also evaluate the update variance by proxy using learning rate sensitivity curves.", "As seen in Figure REF (left) and (center), IR has the lowest sensitivity to learning rates, on par with On-Policy sampling.", "IS has the highest sensitivity, along with WIS-Buffer and WIS-Minibatch.", "Various clipping parameters with V-trace are also tested.", "V-trace does provide some level of variance reduction but incurs more bias as the clipping becomes more aggressive.", "Figure: 
Learning rate sensitivity plots in the Random Walk Markov Chain, with buffer size $n = 15000$ and mini-batch size $k=16$ .", "Averaged over 100 runs.", "The policies, written as [probability left, probability right], are $\\mu = [0.9,0.1], \\pi =[0.1,0.9]$ ; left learning rate sensitivity plot for all methods but V-trace.", "center learning rate sensitivity for V-trace with various clipping parameters; right Variance study for IS, IR, and WISBatch.", "The x-axis corresponds to the training iteration, with variance reported for the weights at that iteration generated by WIS-Optimal.", "These plots show a correlation between the sensitivity to learning rate and the magnitude of variance." ], [ "Demonstration on a Car Simulator", "We use the TORCs racing car simulator to perform scaling experiments with neural networks to compare IR, IS, and V-trace.", "The simulator produces 64x128 cropped grayscale images.", "We use an underlying deterministic steering controller that produces steering actions $a_{det} \\in [-1,+1]$ and draw the executed action from a Gaussian $a \\sim \\mathcal {N}(a_{det}, 0.1)$ .", "The target policy is a Gaussian $\\mathcal {N}(0.15, 0.0075)$ , which corresponds to steering left.", "Pseudo-termination (i.e., $\\gamma = 0$ ) occurs when the car nears the center of the road, and the cumulant becomes 1.", "Otherwise, the cumulant is zero and $\\gamma = 0.9$ .", "The policy is specified using continuous action distributions and results in IS ratios as high as $\\sim 1000$ and highly variable updates for IS.", "Again, we can see that IR provides benefits over IS and V-trace in Figure REF .", "There is even more generalization from the neural network in this domain than in Four Rooms, where we found generalization did reduce some of the differences between IR and IS.", "Yet, IR still obtains the best performance, and avoids some of the variance seen in IS for two of the learning rates.", "Additionally, BC-IR actually 
performs differently here, having worse performance for the largest learning rate.", "This suggests IR has an effect in reducing variance." ], [ "Conclusion", " In this paper we introduced a new approach to off-policy learning: Importance Resampling.", "We showed that IR is consistent, and that the bias is the same as for Weighted Importance Sampling.", "We also provided an unbiased variant of IR, called Bias-Corrected IR.", "We empirically showed that (a) IR has lower learning rate sensitivity than IS and V-trace, which is IS with varying clipping thresholds; (b) the variance of updates for IR is much lower in early learning than for IS; and (c) IR converges faster than IS and other competitors, in terms of the number of updates.", "These results confirm the theory presented for IR, which states that the variance of updates for IR is lower than for IS in two settings, one being an early learning setting.", "Such lower variance also explains why IR can converge faster in terms of the number of updates, for a given buffer of data.", "The algorithm and results in this paper suggest new directions for off-policy prediction, particularly for faster convergence.", "Resampling is promising for scaling to learning many value functions in parallel, because many fewer updates can be made for each value function.", "A natural next step is a demonstration of IR in such a parallel prediction system.", "Resampling from a buffer also opens up questions about how to further focus updates.", "One such option is using an intermediate sampling policy.", "Another option is including prioritization based on error, such as was done for control with prioritized sweeping [23] and prioritized replay [30]."
], [ "Acknowledgments", "We would like to thank Huawei for their support, and especially for allowing a portion of this work to be completed during Matthew’s internship in the summer of 2018.", "We also would like to acknowledge University of Alberta, Alberta Machine Intelligence Institute, IVADO, and NSERC for their continued funding and support, as well as Compute Canada (www.computecanada.ca) for the computing resources used for this work." ], [ "Mini-Batch Algorithms", "We consider three weighted importance sampling updates as competitors to IR.", "$n$ is the size of the experience replay buffer, and $k$ is the size of a single mini-batch.", "WIS-Minibatch and WIS-Buffer both follow a similar protocol to IS, in that they uniformly sample a mini-batch from the experience replay buffer and use this to update the value functions.", "The difference comes in the scaling of the update.", "The first, WIS-Minibatch, uses the sum of the importance weights $\\rho _i$ in the sampled mini-batch, while WIS-Buffer uses the sum of importance weights in the experience replay buffer.", "WIS-Buffer is also scaled by the size of the buffer and brought to the same effective scale as the other updates with $\\frac{1}{k}$ .", "WIS-Optimal follows a different approach and performs the best possible version of WIS, where the gradient descent update is calculated from the whole experience replay buffer.", "We do not provide analysis on the bias or consistency of WIS-Minibatch or WIS-Buffer, but they are natural versions of WIS one might try.", "$\\Delta \\theta &= \\frac{\\sum _i^k \\rho _i \\delta _i \\nabla _\\theta V(s_i;\\theta )}{\\sum _j^k\\rho _j}&\\hspace{28.45274pt} \\text{WIS-Minibatch}\\\\\\Delta \\theta &= \\frac{n}{k} \\frac{\\sum _i^k \\rho _i \\delta _i \\nabla _\\theta V(s_i;\\theta )}{\\sum _j^n\\rho _j}&\\hspace{28.45274pt} \\text{WIS-Buffer}\\\\\\Delta \\theta &= \\frac{\\sum _i^n \\rho _i \\delta _i \\nabla _\\theta V(s_i;\\theta )}{\\sum _j^n\\rho _j}&\\hspace{28.45274pt} 
\\text{WIS-Optimal}$" ], [ "Incremental Algorithm", "While implementing an efficient true WIS algorithm for mini-batch updating is beyond the scope of this work, we compare WIS-TD(0) to the incremental versions of IR, IS, V-trace, and WISBatch.", "The difference between the mini-batch and incremental algorithms is how the updates are calculated.", "In the incremental scheme, a random mini-batch of data is similarly sampled from the buffer.", "We then use each sample individually to update our value function.", "We do this to more naturally compare our baselines to WIS-TD(0) [15].", "WIS-TD(0) has parameters $u_{0} \\in \\lbrace \\frac{1}{64}, 1, 5, 10, 50\\rbrace * 64, \\mu \\in 10^{-2:0.25:1}$ , and $\\eta = \\frac{\\mu }{u_0}$ .", "WIS-TD(0) follows the update equations: $\\mathbf {u}_{i+1} &= (\\mathbf {1} -\\eta \\phi _i \\circ \\phi _i) \\circ \\mathbf {u}_i + \\rho _i \\phi _i \\circ \\phi _i \\quad \\triangleright \\circ \\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}\\text{ element-wise product}\\\\\\alpha _{i+1} &= \\mathbf {1} \\oslash \\mathbf {u}_{i+1} \\quad \\triangleright \\oslash \\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}\\text{ element-wise division}\\\\\\bar{\\delta }_i &= C_i + \\gamma _i \\theta _i^\\top \\phi _i^\\prime - \\theta _{i-1}^\\top \\phi _i \\\\\\theta _{i+1} &= \\theta _i +\\alpha _{i+1} \\circ \\rho _i(\\theta _{i-1}^\\top \\phi _i - \\theta _i^\\top \\phi _i)\\phi _i +\\rho _i \\bar{\\delta }_i \\alpha _{i+1} \\circ \\phi _i$ where $\\theta \\in \\mathbb {R}^d$ is the weight vector of the value function, $\\phi _i \\in \\mathbb {R}^d$ is the feature vector of the i-th transition in the experience replay buffer, and $\\phi _i^\\prime \\in \\mathbb {R}^d$ is the feature vector of the next state of the i-th transition in the experience replay buffer." 
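As a concrete reading of these update equations, here is a minimal sketch of one incremental WIS-TD(0) step. The feature dimension, the parameter values, and the single one-hot transition are illustrative assumptions, not the experiment's settings:

```python
import numpy as np

d = 8                     # feature dimension (an assumption)
eta, gamma_i = 0.01, 0.9  # illustrative step-size and discount values

u = np.ones(d)            # usage vector u_0 (an assumption)
theta = np.zeros(d)       # current weights theta_i
theta_prev = np.zeros(d)  # previous weights theta_{i-1}

def wis_td0_step(theta, theta_prev, u, phi, phi_next, cumulant, rho):
    """One WIS-TD(0) update; '*' and '/' play the element-wise roles above."""
    u_new = (1.0 - eta * phi * phi) * u + rho * phi * phi   # u_{i+1}
    alpha = np.where(u_new > 0.0, 1.0 / u_new, 0.0)         # 1 (/) u_{i+1}
    delta_bar = cumulant + gamma_i * (theta @ phi_next) - theta_prev @ phi
    theta_new = (theta
                 + alpha * rho * ((theta_prev - theta) @ phi) * phi
                 + rho * delta_bar * alpha * phi)
    return theta_new, theta, u_new   # theta_new becomes theta_{i+1}

phi = np.eye(d)[2]        # one-hot stand-in for a tile-coded feature vector
phi_next = np.eye(d)[3]
theta, theta_prev, u = wis_td0_step(theta, theta_prev, u, phi, phi_next,
                                    cumulant=1.0, rho=2.0)
```

The element-wise usage vector gives each feature its own effective step size, so features that repeatedly appear with large importance weights are updated more conservatively.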
], [ "Bias of IR", "Theorem REF (Bias for a fixed buffer of size $n$ ) Assume a buffer $B$ of $n$ transitions sampled i.i.d. according to $p(x = (s,a,s^{\\prime })) = d_\\mu (s) \\mu (a | s) \\mathrm {P}(s^{\\prime } | s, a)$ .", "Let $X_{\\mathrm {WIS}^*}\\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}\\sum _{i=1}^{n} \\frac{\\rho _i}{\\sum _{j=1}^n \\rho _j} \\Delta _i$ be the WIS-Optimal estimator of the update.", "Then, $\\mathbb {E}[X_{\\mathrm {IR}}] = \\mathbb {E}[X_{\\mathrm {WIS}^*}]$ and so the bias of $X_{\\mathrm {IR}}$ is proportional to $\\mathrm {Bias}(X_{\\mathrm {IR}}) = \\mathbb {E}[X_{\\mathrm {IR}}] - \\mathbb {E}_\\pi [\\Delta ] \\propto \\frac{1}{n} (\\mathbb {E}_\\pi [\\Delta ] \\sigma _\\rho ^2 - \\sigma _{\\rho , \\Delta } \\sigma _\\rho \\sigma _\\Delta )$ where $\\mathbb {E}_\\pi [\\Delta ]$ is the expected update across all transitions, with actions from $S$ taken by the target policy $\\pi $ ; $\\sigma _\\rho ^2 = \\mathrm {Var}(\\tfrac{1}{n}\\sum _{j=1}^n \\rho _j)$ ; $\\sigma _\\Delta ^2 = \\mathrm {Var}(\\tfrac{1}{n}\\sum _{i=1}^{n} \\rho _i \\Delta _i)$ ; and correlation $\\sigma _{\\rho ,\\Delta } = \\mathrm {Cov}(\\tfrac{1}{n}\\sum _{j=1}^n \\rho _j,\\tfrac{1}{n}\\sum _{i=1}^{n} \\rho _i \\Delta _i)/(\\sigma _\\rho \\sigma _\\Delta )$ .", "Notice first that when we weight with $\\rho _i$ , this is equivalent to weighting with $\\frac{d_\\mu (S_i) \\pi (A_i | S_i) \\mathrm {P}(S_{i+1} | S_i, A_i)}{d_\\mu (S_i) \\mu (A_i | S_i) \\mathrm {P}(S_{i+1} | S_i, A_i)}$ , and so is the correct IS ratio for the transition.", "$\\mathbb {E}[&X_{\\mathrm {IR}}] = \\mathbb {E}\\left[ \\mathbb {E}[X_{\\mathrm {IR}}| B ] \\right]= \\mathbb {E}\\Big [ \\mathbb {E}\\Big [ \\tfrac{1}{k} \\sum _{j=1}^k \\Delta _{i_j} | B \\Big ] \\Big ] \\\\& = \\mathbb {E}\\Big [ \\tfrac{1}{k} \\sum _{j=1}^k \\mathbb {E}[\\Delta _{i_j} | B ] \\Big ] \\ \\ \\ \\ \\ \\ \\ \\ \\triangleright \\mathbb {E}[\\Delta _{i_j} | B ] \\!=\\!\\!", "\\sum _{i=1}^{n} 
\\!\\tfrac{\\rho _i}{\\sum _{j=1}^n \\rho _j} \\Delta _i \\\\&= \\mathbb {E}\\Big [ \\sum _{i=1}^{n} \\frac{\\rho _i}{\\sum _{j=1}^n \\rho _j} \\Delta _i \\Big ] \\\\&= \\mathbb {E}[X_{\\mathrm {WIS}^*}].$ The bias of $X_{\\mathrm {IR}}$ is therefore the same as that of $X_{\\mathrm {WIS}^*}$ , which is characterized in [22], completing the proof." ], [ "Proof of Unbiasedness of BC-IR", "Corollary REF BC-IR is unbiased: $\\mathbb {E}[X_{\\mathrm {BC}}] = \\mathbb {E}_\\pi [\\Delta ]$ .", "$\\mathbb {E}[X_{\\mathrm {BC}}] &= \\mathbb {E}\\Big [ \\tfrac{\\bar{\\rho } }{k} \\sum _{j=1}^k \\mathbb {E}[\\Delta _{i_j} | B ] \\Big ]= \\mathbb {E}\\Big [ \\bar{\\rho } \\sum _{i=1}^{n} \\frac{\\rho _i}{\\sum _{j=1}^n \\rho _j} \\Delta _i \\Big ]\\\\&= \\mathbb {E}\\Big [ \\tfrac{1}{n} \\sum _{i=1}^{n} \\rho _i \\Delta _i \\Big ]= \\tfrac{1}{n} \\sum _{i=1}^{n} \\mathbb {E}\\Big [ \\frac{\\pi (A_i|S_i)}{\\mu (A_i|S_i)} \\Delta _i \\Big ] \\\\&= \\tfrac{1}{n} \\sum _{i=1}^{n} \\mathbb {E}\\Big [ \\frac{d_\\mu (S_i) \\pi (A_i | S_i) \\mathrm {P}(S_{i+1} | S_i, A_i)}{d_\\mu (S_i) \\mu (A_i | S_i) \\mathrm {P}(S_{i+1} | S_i, A_i)} \\Delta _i \\Big ] \\\\&= \\tfrac{1}{n} \\sum _{i=1}^{n} \\mathbb {E}_\\pi \\left[ \\Delta _i \\right]= \\mathbb {E}_\\pi \\left[ \\Delta \\right].$ The last equality follows from the fact that the samples are identically distributed." 
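The corollary is easy to check numerically. The sketch below is a toy two-action setup of our own (the policies, buffer size, and per-action update values are assumptions) verifying that the Monte Carlo average of BC-IR estimates matches $\mathbb{E}_\pi[\Delta]$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem (assumptions): one state, two actions, Delta depends on action.
mu = np.array([0.5, 0.5])          # behavior policy mu(a)
pi = np.array([0.1, 0.9])          # target policy pi(a)
delta_of = np.array([1.0, -1.0])   # update Delta(a) for each action
target = float(pi @ delta_of)      # E_pi[Delta] = 0.1 - 0.9 = -0.8

n, k, reps = 500, 32, 4000
estimates = []
for _ in range(reps):
    a = rng.choice(2, size=n, p=mu)                 # buffer of actions under mu
    rho = pi[a] / mu[a]                             # IS ratios for the buffer
    idx = rng.choice(n, size=k, p=rho / rho.sum())  # resample prop. to rho
    estimates.append(rho.mean() * delta_of[a[idx]].mean())  # BC-IR estimate
mc_mean = float(np.mean(estimates))
```

Averaged over many independent buffers, the bias-corrected estimator lands on the target expectation, which is exactly the statement of the corollary; dropping the `rho.mean()` factor recovers the (WIS-like) biased IR estimator.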
], [ "Consistency of the resampling distribution with a growing buffer", "We show that the distribution when following a resampling strategy is consistent: as $n \\rightarrow \\infty $ , the resampling distribution converges to the true distribution.", "Our approach closely follows that of [33], but we recreate it here for convenience.", "Proposition B.1 Let $X_n = \\lbrace x_1, x_2, ..., x_n\\rbrace $ be a buffer of data sampled i.i.d.", "according to proposal density $p(x_i)$ .", "Let $q(x_i)$ be some distribution of interest with associated random variable $Q$ and assume the proposal distribution samples everywhere where $q(\\cdot )$ is non-zero, i.e., $supp(q)\\subseteq supp(p)$ .", "Also, let $Y$ be a discrete random variable taking values $x_i$ with probability $ \\propto \\frac{q(x_i)}{p(x_i)}$ .", "Then, $Y$ converges in distribution to $Q$ as $n \\rightarrow \\infty $ .", "Let $\\rho _i = \\frac{q(x_i)}{p(x_i)}$ .", "From the probability mass function of $Y$ , we have that: $\\mathbb {P}[Y \\le a] &= \\sum _{i=1}^n \\mathbb {P}[Y = x_i] \\mathbb {1}\\lbrace x_i \\le a\\rbrace \\\\&= \\frac{n^{-1} \\sum _{i=1}^n \\rho _i \\mathbb {1}\\lbrace x_i \\le a\\rbrace }{n^{-1} \\sum _{i=1}^n \\rho _i}\\\\&\\xrightarrow{a.s.}\\frac{\\mathbb {E}_p[\\rho (x) \\mathbb {1}\\lbrace x \\le a\\rbrace ]}{\\mathbb {E}_p[\\rho (x)]} \\\\&= \\frac{1\\cdot \\int _{-\\infty }^a \\frac{q(x)}{p(x)} p(x) dx+ 0\\cdot \\int _{a}^\\infty \\frac{q(x)}{p(x)} p(x) dx}{\\int _{-\\infty }^\\infty \\frac{q(x)}{p(x)} p(x) dx}\\\\&=\\int _{-\\infty }^a q(x)dx = \\mathbb {P}[Q \\le a]\\\\Y &\\xrightarrow{d} Q$ This means a resampling strategy effectively changes the distribution of random variable $X_n$ to that of $q(x)$ , meaning we can use samples from $Y$ to build statistics about the target distribution $q(x)$ .", "This result motivates using resampling to correct the action distribution in off-policy learning.", "This result can also be used to show that the IR estimators are consistent, with 
$n\\rightarrow \\infty $ ." ], [ "Consistency under a sliding window", "Lemma B.2 Let $B_t = \\lbrace X_{t+1}, ..., X_{t+n}\\rbrace $ be the buffer of the most recent $n$ transitions sampled by time $t+n$ , which are generated sequentially from an irreducible, finite MDP with a fixed policy $\\mu $ .", "We define $X_{\\mathrm {BC}}^{(t)}$ to be the BC-IR estimator for buffer $B_t$ .", "If $\\mathbb {E}_\\pi [|\\Delta |] < \\infty $ , then we can show $\\mathbb {E}[X_{\\mathrm {BC}}^{(t)}] = \\mathbb {E}_\\pi [\\Delta ]$ .", "Using the law of iterated expectations, $ \\mathbb {E}\\left[ X_{\\mathrm {BC}}^{(t)} \\right] = \\mathbb {E}\\left[ \\mathbb {E}[ X_{\\mathrm {BC}}^{(t)} | B_t ] \\right] $ where the outer expectation is over the stationary distribution of $B_t$ and the inner expectation is over the sampling distribution of IR from the buffer $B_t$ .", "Using the definition of $X_{\\mathrm {BC}}^{(t)} | B_t$ , we have that $\\mathbb {E}[ X_{\\mathrm {BC}}^{(t)} | B_t ] &= \\bar{\\rho } \\sum _{i=1}^n \\Delta _i \\frac{\\rho _i}{\\sum _{i=1}^{n} \\rho _i} \\\\&= \\frac{1}{n} \\sum _{i=1}^n \\rho _i \\Delta _i$ Noting that the stationary distribution of $B_t$ is $P(B_t = (x_{t+1}, ..., x_{t+n})) =\\\\ d_X(x_{t+1}) P(x_{t+2}|x_{t+1}) ... 
P(x_{t+n}|x_{t+n-1})$ , where $d_X$ is the stationary distribution of $X_t$ , we can expand the expectation as: $& \\mathbb {E}\\left[ \\frac{1}{n} \\sum _{t=1}^n \\rho _t \\Delta _t \\right] \\\\&= \\sum _{x_1,...x_n} d_X(x_1) p(x_2|x_1)...p(x_n|x_{n-1}) \\left( \\frac{1}{n} \\sum _{t=1}^n \\rho _t \\Delta _t \\right) \\\\&= \\sum _{\\begin{array}{c}s_1, a_1, r_2, s_2, ...,\\\\ s_n, a_n, r_{n+1}, s_{n+1}\\end{array}}d_\\mu (s_1) \\left( \\prod _{i=1}^n \\mu (a_i|s_i) p(s_{i+1}, r_{i+1}|s_i, a_i) \\right) \\left( \\frac{1}{n} \\sum _{t=1}^n \\rho _t \\Delta _t \\right) \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{\\begin{array}{c}s_1, a_1, r_2, s_2, ...,\\\\ s_n, a_n, r_{n+1}, s_{n+1}\\end{array}}d_\\mu (s_1) \\left( \\prod _{i=1}^n \\mu (a_i|s_i) p(s_{i+1}, r_{i+1}|s_i, a_i) \\right) \\left( \\rho _t \\Delta _t \\right) \\\\$ Next, by taking the sums over $(s_1, a_1, ... r_{n+1}, s_{n+1})$ within the products to make the summands depend only on the variables being summed over, we get $&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_1} d_\\mu (s_1)\\sum _{a_1, r_2, s_2} \\mu (a_1|s_1) p(s_2, r_2| s_1, a_1)\\sum _{a_2, r_3, s_3} \\mu (a_2|s_2) p(s_3, r_3| s_2, a_2) ...\\\\&\\sum _{a_t, r_{t+1}, s_{t+1}} \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left( \\rho _t \\Delta _t \\right) \\\\&\\sum _{\\begin{array}{c}a_{t+1}, r_{t+2}, s_{t+2}, ...,\\\\ s_n, a_n, r_{n+1}, s_{n+1}\\end{array}} \\prod _{i=t+2}^n \\mu (a_i|s_i) p(s_{i+1}, r_{i+1}|s_i, a_i) \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_1} d_\\mu (s_1)\\sum _{a_1, r_2, s_2} \\mu (a_1|s_1) p(s_2, r_2| s_1, a_1)\\sum _{a_2, r_3, s_3} \\mu (a_2|s_2) p(s_3, r_3| s_2, a_2) ...\\\\&\\sum _{a_t, r_{t+1}, s_{t+1}} \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left( \\rho _t \\Delta _t \\right)$ This follows since the third line is summing over the probability of all trajectories starting from $s_{t+1}$ and thus is equal to 1.", "Next, we note that, if $C$ is a constant that does not depend on $s_1, a_1, r_2$ , then $\\sum 
_{s_1, a_1, r_2} d_\\mu (s_1) \\mu (a_1|s_1) p(s_2, r_2|s_1, a_1) C = d_\\mu (s_2) C$ since $d_\\mu (s_2)$ is the stationary distribution.", "Continuing from before, by reordering the sums and repeatedly using the above note, we have $&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_2} \\underbrace{\\sum _{s_1, a_1, r_2} d_\\mu (s_1)\\mu (a_1|s_1) p(s_2, r_2| s_1, a_1)}_{d_\\mu (s_2)}\\sum _{a_2, r_3, s_3} \\mu (a_2|s_2) p(s_3, r_3| s_2, a_2) ...\\\\&\\sum _{a_t, r_{t+1}, s_{t+1}} \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left( \\rho _t \\Delta _t \\right) \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_2} d_\\mu (s_2)\\sum _{a_2, r_3, s_3} \\mu (a_2|s_2) p(s_3, r_3| s_2, a_2) ...\\\\&\\sum _{a_t, r_{t+1}, s_{t+1}} \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left( \\rho _t \\Delta _t \\right)\\\\&= ... \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_t, a_t, r_{t+1}, s_{t+1}} d_\\mu (s_t) \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left( \\rho _t \\Delta _t \\right) \\\\$ Recall that $\\Delta _t = \\Delta (s_t, a_t, r_{t+1}, s_{t+1})$ is a function of the transition so we cannot simplify further.", "Finally, $&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_t, a_t, r_{t+1}, s_{t+1}}d_\\mu (s_t) \\mu (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\left(\\frac{\\pi (a_t|s_t)}{\\mu (a_t|s_t)} \\Delta _t \\right) \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\sum _{s_t, a_t, r_{t+1}, s_{t+1}}d_\\mu (s_t) \\pi (a_t|s_t) p(s_{t+1}, r_{t+1}| s_t, a_t) \\Delta _t \\\\&= \\frac{1}{n} \\sum _{t=1}^n \\mathbb {E}_\\pi [\\Delta ] \\\\&= \\mathbb {E}_\\pi [\\Delta ]$ Theorem REF Let $B_t = \\lbrace X_{t+1}, ..., X_{t+n}\\rbrace $ be the buffer of the most recent $n$ transitions sampled by time $t+n$ , which are generated sequentially from an irreducible, finite MDP with a fixed policy $\\mu $ .", "Define the sliding-window estimator $X_{T} \\mathrel {\\overset{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny \\sffamily def}}}{=}}\\frac{1}{T} \\sum _{t=1}^T X_{\\mathrm {BC}}^{(t)}$ .", "Then, if $\\mathbb {E}_\\pi 
[|\\Delta | ] < \\infty $ , $X_{T}$ converges to $\\mathbb {E}_\\pi [\\Delta ]$ almost surely as $T \\rightarrow \\infty $ .", "Let $X_t = (S_t, A_t, R_{t+1}, S_{t+1})$ be a transition.", "Then the sequence $\\lbrace X_t\\rbrace _{t \\in \\mathbb {N}}$ forms an irreducible Markov chain as there is positive probability of eventually visiting any $X^{\\prime }$ starting from any $X$ since this is true for states $S^{\\prime }$ and $S$ in the original MDP (by irreducibility).", "Let $\\lbrace B_t\\rbrace _{t \\in \\mathbb {N}}$ be the sequence of buffers that are observed.", "This also forms an irreducible Markov chain by the same reasoning as above since $\\lbrace X_t\\rbrace _{t \\in \\mathbb {N}}$ is irreducible.", "Additionally, the sequence of pairs $\\lbrace (X_{\\mathrm {BC}}^{(t)}, B_t) \\rbrace _{t \\in \\mathbb {N}}$ is an irreducible Markov chain.", "Using the ergodic theorem (Theorem 4.16 in [10]) on $\\lbrace (X_{\\mathrm {BC}}^{(t)}, B_t) \\rbrace _{t \\in \\mathbb {N}}$ with the projection function $f(x,y)= x$ , we have that $ \\lim _{T \\rightarrow \\infty } \\frac{1}{T} \\sum _{t=1}^T X_{\\mathrm {BC}}^{(t)}= \\mathbb {E}\\left[ X_{\\mathrm {BC}}^{(t)} \\right] $ where the expectation is over the joint stationary distribution of $(X_{\\mathrm {BC}}^{(t)}, B_t)$ .", "Using Lemma REF we can show that $ \\mathbb {E}\\left[ X_{\\mathrm {BC}}^{(t)} \\right] = \\mathbb {E}_\\pi [\\Delta ]$ , completing the proof." 
], [ "Variance of BC-IR and IS", "This lemma characterizes the variance of the BC-IR and IS estimators for a fixed buffer.", "Lemma B.3 Let $\\mu _{B} = \\mathbb {E}_\\pi [\\Delta | B]$ be the mean update on the batch $B$ .", "Denoting the size of the buffer by $n$ and the size of the mini-batch by $k$ , let $X_{IS} = \\frac{1}{k} \\sum _{j=1}^k \\rho _{z_j} \\Delta _{z_j} $ (with each $z_j$ sampled uniformly from $\\lbrace 1, ..., n \\rbrace $ ) be the importance sampling estimator and $X_{BC} =\\frac{1}{k} \\sum _{j=1}^k \\Delta _{i_j}$ (with each $i_j$ being sampled from $\\ell \\in \\lbrace 1, ..., n \\rbrace $ with probability proportional to $\\rho _\\ell $ ) be the bias-corrected importance resampling estimator.", "Then, the variances of the two estimators are given by $\\mathrm {Var}(X_{\\mathrm {IS}}\\ | \\ B) &= \\frac{1}{k} \\left( \\frac{1}{n}\\sum _{j=1}^n \\rho _j^2\\left\\Vert \\Delta _j\\right\\Vert ^2_2 - \\mu _{B}^\\top \\mu _{B} \\right) \\\\\\mathrm {Var}(X_{\\mathrm {BC}}\\ | \\ B) &= \\frac{1}{k} \\left(\\frac{\\bar{\\rho }}{n}\\sum _{j=1}^n \\rho _j \\left\\Vert \\Delta _j\\right\\Vert ^2_2 - \\mu _{B}^\\top \\mu _{B} \\right)$ Since we condition on the buffer $B$ , the only source of randomness is the sampling mechanism.", "Each index is sampled independently, so we have $\\mathrm {Var}(X_{\\mathrm {BC}}\\ | \\ B)= \\frac{1}{k^2} \\sum _{j=1}^k \\mathrm {Var}(\\bar{\\rho } \\Delta _{i_j} \\ | \\ B)= \\frac{1}{k} \\mathrm {Var}(\\bar{\\rho } \\Delta _{i_1} \\ | \\ B)$ and similarly $\\mathrm {Var}(X_{\\mathrm {IS}}\\ | \\ B) = \\frac{1}{k}\\mathrm {Var}(\\rho _{z_1}\\Delta _{z_1}| B)$ . We can further simplify these expressions.", "For the IS estimator $& \\mathrm {Var}(\\rho _{z_1} \\Delta _{z_1} | B) \\\\&= \\mathbb {E}[\\rho _{z_1}^2 \\Delta _{z_1}^\\top \\Delta _{z_1} | B] - \\mathbb {E}[\\rho _{z_1}\\Delta _{z_1} | B]^\\top \\mathbb {E}[\\rho _{z_1}\\Delta _{z_1} | B] \\quad \\text{by definition of $\\mathrm {Var}(\\cdot )$} \\\\&=\\mathbb {E}[\\rho _{z_1}^2 \\Vert \\Delta _{z_1} \\Vert _2^2 | B] - \\mu _{B}^\\top \\mu _{B} \\quad \\text{since $\\rho 
_{z_1}\\Delta _{z_1}|B$ is unbiased for $\\mu _{B}$} \\\\&= \\frac{1}{n}\\sum _{j=1}^n \\rho _j^2 \\Vert \\Delta _j \\Vert _2^2 - \\mu _{B}^\\top \\mu _{B}$ The last line follows from the uniform sampling distribution.", "For the BC-IR estimator, recalling that $\\bar{\\rho } = \\tfrac{1}{n} \\sum _{i=1}^n \\rho _i$ , we follow similar steps, $&\\mathrm {Var}(\\bar{\\rho }\\Delta _{i_1} | B) \\\\&= \\mathbb {E}\\left[\\bar{\\rho }^2 \\Delta _{i_1}^\\top \\Delta _{i_1} | B \\right] -\\mathbb {E}[\\bar{\\rho }\\Delta _{i_1} | B]^\\top \\mathbb {E}[\\bar{\\rho }\\Delta _{i_1} | B] \\\\&= \\mathbb {E}\\left[ \\bar{\\rho }^2 \\Vert \\Delta _{i_1} \\Vert _2^2 | B \\right] - \\mu _{B}^\\top \\mu _{B} \\quad \\text{since $\\bar{\\rho }\\Delta _{i_1} | B$ is unbiased for $\\mu _{B}$} \\\\&= \\sum _{j=1}^n \\bar{\\rho }^2 \\frac{\\rho _j}{\\sum _{i=1}^n \\rho _i} \\Vert \\Delta _j \\Vert _2^2 -\\mu _{B}^\\top \\mu _{B} \\\\&= \\frac{\\bar{\\rho }}{n}\\sum _{j=1}^n \\rho _j \\Vert \\Delta _j \\Vert _2^2 - \\mu _{B}^\\top \\mu _{B}$ The fourth line follows from the sampling distribution of the $i_j$ .", "The following two theorems present conditions under which the BC-IR estimator has lower variance than the IS estimator.", "Theorem REF Assume that $ \\Vert \\Delta _j \\Vert _2^2 > \\frac{c}{\\rho _j}$ for samples where $\\rho _j \\ge \\bar{\\rho }$ , and that $ \\Vert \\Delta _j \\Vert _2^2 < \\frac{c}{\\rho _j}$ for samples where $\\rho _j < \\bar{\\rho }$ , for some $c > 0$ .", "Then the BC-IR estimator has lower variance than the IS estimator.", "We show $ \\mathrm {Var}(X_{\\mathrm {IS}}| B) - \\mathrm {Var}(X_{\\mathrm {BC}}| B) > 0$ : $&\\mathrm {Var}(X_{\\mathrm {IS}}| B) - \\mathrm {Var}(X_{\\mathrm {BC}}| B)= \\frac{1}{nk} \\sum _{j=1}^n \\left\\Vert \\Delta _j\\right\\Vert ^2_2 (\\rho _j^2 - \\bar{\\rho } \\rho _j) \\\\&= \\frac{1}{nk} \\sum _{s: \\rho _s < \\bar{\\rho }} \\underbrace{\\left\\Vert \\Delta _s\\right\\Vert ^2_2}_{\\le c/\\rho _s} \\rho _s \\underbrace{(\\rho _s- \\bar{\\rho })}_{\\le 0}+ \\frac{1}{nk} \\sum _{l: 
\\rho _l \\ge \\bar{\\rho }} \\underbrace{\\left\\Vert \\Delta _l\\right\\Vert ^2_2}_{> c/\\rho _l} \\rho _l \\underbrace{(\\rho _l- \\bar{\\rho } )}_{\\ge 0} \\\\&> \\frac{1}{nk} \\sum _{s: \\rho _s < \\bar{\\rho }} \\frac{c}{\\rho _s} \\rho _s \\left(\\rho _s- \\bar{\\rho } \\right)+ \\frac{1}{nk} \\sum _{l: \\rho _l \\ge \\bar{\\rho }} \\frac{c}{\\rho _l} \\rho _l \\left(\\rho _l- \\bar{\\rho } \\right) \\\\&= \\frac{c}{nk} \\sum _{j=1}^n (\\rho _j - \\bar{\\rho }) = 0$ Theorem REF Assume $\\rho $ and the magnitude of the update $\\Vert \\Delta \\Vert _2^2$ are independent: $\\mathbb {E}[\\rho _j \\Vert \\Delta _j \\Vert _2^2 \\ | \\ B] = \\mathbb {E}[\\rho _j \\ | \\ B] \\ \\mathbb {E}[ \\Vert \\Delta _j \\Vert _2^2 \\ | \\ B]$ Then the BC-IR estimator will have equal or lower variance than the IS estimator.", "Because of the condition, we can further simplify the variance equations from Lemma REF .", "Let $c = \\mathbb {E}[ \\Vert \\Delta _j \\Vert _2^2 \\ | \\ B]$ .", "Then for BC-IR we have $\\frac{\\bar{\\rho }}{nk} \\sum _{j=1}^n \\rho _j \\left\\Vert \\Delta _j\\right\\Vert ^2= \\frac{1}{k}\\bar{\\rho } \\mathbb {E}\\left[ \\rho _j \\left\\Vert \\Delta _j\\right\\Vert ^2 | B \\right]= \\frac{1}{k} \\bar{\\rho } \\bar{\\rho } c= \\frac{1}{k} \\bar{\\rho }^2 c$ and for IS we have $\\frac{1}{nk}\\sum _{j=1}^n \\rho _j^2 \\Vert \\Delta _j \\Vert _2^2= \\frac{1}{k} \\mathbb {E}\\left[\\rho _j^2 \\left\\Vert \\Delta _j\\right\\Vert ^2_2 | B \\right]= \\frac{c}{k} \\mathbb {E}\\left[\\rho _j^2 | B \\right]$ Now when we take the difference, we get $\\mathrm {Var}(X_{\\mathrm {IS}}| B) - \\mathrm {Var}(X_{\\mathrm {BC}}| B) &= \\frac{c}{k} (\\mathbb {E}\\left[\\rho _j^2 | B \\right] - \\bar{\\rho }^2)\\\\&= \\frac{c}{k} \\hat{\\sigma }^2(\\rho _j \\ | \\ B)$ where $\\hat{\\sigma }^2(\\rho _j)$ is the sample variance of the importance weights $\\lbrace \\rho _j\\rbrace _{j=1}^n$ for $B$ .", "Because the sample variance is nonnegative and $c \\ge 0$ , the BC-IR estimator will have equal or 
lower variance than the IS estimator." ], [ "Variance of BC-IR and WIS for a fixed dataset", "The variance of BC-IR as compared to IS discussed in Section REF is only one comparison we can make.", "Similarly to bias, we can characterize the variance of the IR estimator relative to WIS-Optimal.", "$X_{\\mathrm {WIS}^*}$ is able to use a batch update on all the data in the buffer, which should result in a low-variance estimate but is an unrealistic algorithm to use in practice.", "Instead, it provides a benchmark, where the goal is to obtain similar variance to $X_{\\mathrm {WIS}^*}$ , but within realistic computational restrictions.", "Because of the relationship between IR and WIS, as used in Theorem REF , we can characterize the variance of $X_{\\mathrm {IR}}$ relative to $X_{\\mathrm {WIS}^*}$ using the law of total variance: $\\mathrm {Var}(X_{\\mathrm {IR}}) &= \\mathrm {Var}\\left( \\mathbb {E}[X_{\\mathrm {IR}}| B] \\right) + \\mathbb {E}\\left[ \\mathrm {Var}(X_{\\mathrm {IR}}| B) \\right]\\\\&= \\mathrm {Var}(X_{\\mathrm {WIS}^*}) + \\mathbb {E}\\left[ \\mathrm {Var}(X_{\\mathrm {IR}}| B) \\right]$ where the variability is due to having randomly sampled buffers $B$ and random sampling from $B$ .", "The second term corresponds to the noise introduced by sampling a mini-batch of $k$ transitions from the buffer $B$ , instead of using the entire buffer like WIS. For more insight, we can expand this second term, $\\mathbb {E}\\left[ \\mathrm {Var}(X_{\\mathrm {IR}}| B) \\right] = \\mathbb {E}\\left[ (\\tfrac{1}{k} \\sum _{j=1}^k \\Delta _{i_j} - \\tfrac{1}{n} \\sum _{i=1}^n \\Delta _{i})^2 | B\\right]$ , where we consider the variance independently for each element of $\\Delta _i$ and so apply the square element-wise.", "The variability is not due to IS ratios, and instead arises from variability in the updates themselves.", "Therefore, the variance of IR corresponds to the variance of WIS, with some additional variance due to this variability around the average update in the buffer."
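The fixed-buffer variance formulas from Lemma B.3 are easy to sanity-check by simulation. The buffer below is an assumption of ours (exponential ratios, Gaussian scalar updates) chosen so that the ratios and squared update magnitudes are independent, the condition of the last theorem above:

```python
import numpy as np

rng = np.random.default_rng(2)

n, k = 1000, 8
rho = rng.exponential(1.0, size=n)      # assumed ratios for a fixed buffer B
delta = rng.normal(0.0, 1.0, size=n)    # assumed scalar updates Delta_j

# Closed-form variances from Lemma B.3 (scalar case, so ||Delta||^2 = delta^2).
mu_B = np.mean(rho * delta)
var_is = (np.mean(rho**2 * delta**2) - mu_B**2) / k
var_bc = (rho.mean() * np.mean(rho * delta**2) - mu_B**2) / k

def is_est():
    idx = rng.integers(0, n, size=k)                # uniform indices z_j
    return np.mean(rho[idx] * delta[idx])

def bc_est():
    idx = rng.choice(n, size=k, p=rho / rho.sum())  # i_j proportional to rho
    return rho.mean() * np.mean(delta[idx])

# Empirical variances over repeated mini-batches from the same fixed buffer.
sim_is = float(np.var([is_est() for _ in range(20000)]))
sim_bc = float(np.var([bc_est() for _ in range(20000)]))
```

The simulated variances match the closed forms, and since the independence condition holds here, the BC-IR variance comes out below the IS variance by roughly the predicted $\frac{c}{k}\hat{\sigma}^2(\rho_j)$ gap.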
], [ "Markov Chain", "This section contains the full set of Markov chain experiments using several different policies.", "Results can be found in Figure REF and Figure REF .", "See figure captions for more details.", "Figure: Sensitivity curves for Markov Chain experiments with policy action probabilities [left, right] left $\\mu = [0.5,0.5], \\pi =[0.1,0.9]$ ; center $\\mu = [0.9,0.1], \\pi =[0.1,0.9]$ ; right $\\mu = [0.99,0.01], \\pi =[0.01,0.99]$ .", "Figure: Learning rate sensitivity plots for V-Trace (with the same settings as Figure ).", "Three clipping parameters were chosen, including 1.0, $0.5\\rho _\\text{max}$ and $0.9\\rho _\\text{max}$ , where $\\rho _\\text{max}$ is the maximum possible IS ratio.", "For $1.0\\rho _\\text{max}$ , updates under V-trace become exactly equivalent to IS.", "Figure: SGD: Target policy: top persistent down, bottom favored down.", "Behaviour policy: left State Variant, center State Weight Variant, right Uniform.", "Sample efficiency plots."
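For reference, the random-walk chain used in these experiments can be generated in a few lines. Everything here follows the description above except the start state, which the text does not specify; starting in the middle of the chain is our assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# 8 non-terminating states 0..7; terminal states at -1 (left) and 8 (right).
# The cumulant is 1 only on the transition into the right-most terminal.
MU = np.array([0.9, 0.1])   # behavior [P(left), P(right)]
PI = np.array([0.1, 0.9])   # target   [P(left), P(right)]

def episode():
    """One behavior-policy episode of (s, a, cumulant, s', rho) tuples."""
    s, transitions = 4, []          # middle start state (an assumption)
    while 0 <= s <= 7:
        a = int(rng.choice(2, p=MU))        # 0 = left, 1 = right
        s_next = s - 1 if a == 0 else s + 1
        cumulant = 1.0 if s_next == 8 else 0.0
        transitions.append((s, a, cumulant, s_next, PI[a] / MU[a]))
        s = s_next
    return transitions

transitions = episode()
ratios = [t[4] for t in transitions]
```

With these policies the per-transition IS ratios take only two values, $0.1/0.9 = 1/9$ and $0.9/0.1 = 9$, which is what makes the dissimilarity between $\mu$ and $\pi$, and hence the ratio magnitudes, easy to control.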
], [ "Continuous Four Rooms", "The continuous Four Rooms environment is an 11x11 2D continuous world with walls set up as in the original Four Rooms grid world.", "The agent is a circle with radius 0.1, and the state consists of a continuous tuple containing the x and y coordinates of the agent's center point.", "The agent takes an action in one of the 4 cardinal directions moving $0.5 \\pm \\mathcal {U}(0.0, 0.1)$ in that direction, with random drift in the orthogonal direction sampled from $\\mathcal {N}(0.0,0.01)$ .", "The simulation takes 10 intermediary steps to more accurately detect collisions.", "We use three behavior policies in our experiments.", "Uniform: the agent selects all actions uniformly; State Variant: the agent selects all actions uniformly except in pre-determined subsections of the environment, where the agent takes the down action with likelihood 0.1 and the rest is distributed evenly over the other actions; State Weight Variant: the agent selects all actions uniformly except in pre-determined subsections where the pmf is defined randomly.", "We also use two target policies.", "Persistent Down: the agent always takes the down action; Favored Down: the agent takes the down action with likelihood 0.9 and selects uniformly among the other actions with likelihood 0.1.", "We use a cumulant function which indicates collision with a wall and a termination function which terminates on collision and is 0.9 otherwise for all value functions.", "We present results using SGD and RMSProp over many algorithms and parameter settings in Figures REF , REF , and REF .", "Figure: RMSProp Target policy: top persistent down, bottom favored down.", "Behaviour policy: left State Variant, center State Weight Variant, right Uniform.", "Sample efficiency plots.", "Figure: Incremental Experiments Target policy: top persistent down, bottom favored down.", "Behaviour policy: left State Variant, center State Weight Variant, right Uniform.", "Sample efficiency plots.", "Figure: SGD: 
Target Policy: Top: persistent down, Bottom favored down.", "Behaviour Policy: left State Variant center State Weight Variant right Uniform.", "Learning Rate Sensitivity" ] ]
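The transition dynamics described above (step length $0.5 \pm \mathbb {U}(0.0, 0.1)$ along the chosen cardinal direction, Gaussian drift orthogonal to it, and 10 intermediary sub-steps for collision detection) can be sketched as follows. This is a minimal sketch, not the authors' code: the function names are invented, only the outer walls of the 11x11 box are modeled (the paper does not give the interior wall layout), and the 0.01 in $\mathbb {N}(0.0,0.01)$ is read as a variance (std 0.1).

```python
import random

STEP_MEAN, STEP_JITTER = 0.5, 0.1   # move 0.5 + U(0, 0.1) along the action axis
DRIFT_STD = 0.1                     # N(0, 0.01) orthogonal drift, 0.01 read as variance
DIRS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(x, y, action, rng=random):
    """One transition of the continuous four-rooms agent (outer walls only)."""
    dx, dy = DIRS[action]
    dist = STEP_MEAN + rng.uniform(0.0, STEP_JITTER)
    drift = rng.gauss(0.0, DRIFT_STD)
    # displacement: full step along the action axis, drift on the orthogonal axis
    mx = dx * dist + (0.0 if dx else drift)
    my = dy * dist + (0.0 if dy else drift)
    # 10 intermediary sub-steps; stop at the first one that would leave the box
    for _ in range(10):
        nx, ny = x + mx / 10.0, y + my / 10.0
        if not (0.1 <= nx <= 10.9 and 0.1 <= ny <= 10.9):  # radius-0.1 agent in 11x11 world
            break
        x, y = nx, ny
    return x, y
```

A real reimplementation would replace the bounding-box test with the four-rooms wall geometry; the sub-stepping loop is what keeps a fast-moving agent from tunneling through a wall between checks.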
1906.04328
[ [ "Adaptative significance levels in linear regression models with known\n variance" ], [ "Abstract The Full Bayesian Significance Test (FBST) for precise hypotheses was presented by Pereira and Stern [Entropy 1(4) (1999) 99-110] as a Bayesian alternative to the traditional significance test based on the p-value.", "The FBST is based on the evidence in favor of the null hypothesis (H).", "An important practical issue for the implementation of the FBST is the determination of how large the evidence must be in order to decide on its rejection.", "In Classical significance tests, it is known that the p-value decreases as the sample size increases, so by setting a single significance level, one is usually led to the rejection of H.", "In the FBST procedure, the evidence in favor of H exhibits the same behavior as the p-value when the sample size increases.", "This suggests that the cut-off point used to define the rejection of H in the FBST should be a function of the sample size.", "In this work, the scenario of Linear Regression Models with known variance under the Bayesian approach is considered, and a method to find a cut-off value for the evidence in the FBST is presented by minimizing the linear combination of the averaged type I and type II error probabilities for a given sample size and also for a given dimension of the parametric space." 
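As a one-dimensional illustration of the evidence the abstract refers to: when the posterior of a scalar parameter is $N(m, s^2)$ and H states $\theta = 0$, the tangential set is $\{\theta : |\theta - m| < |m|\}$, so $ev(\text{H};\mathrm{y}) = 2\,(1-\Phi(|m|/s))$. A minimal sketch under these assumptions (the function name and the toy normal posterior are illustrative, not the paper's regression setting):

```python
from math import erf, sqrt

def fbst_evidence_normal(m, s):
    """ev(H; y) for H: theta = 0 when the posterior is theta | y ~ N(m, s^2).

    The posterior density exceeds its value at 0 exactly on {|theta - m| < |m|},
    so ev = 1 - P(|Z| < |m|/s) = 2 * (1 - Phi(|m|/s)).
    """
    z = abs(m) / s
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z
    return 2.0 * (1.0 - phi)
```

This agrees with the chi-square form of the evidence used later in the paper, specialized to $p = 1$: $ev = 1 - P(\chi^2_1 < (m/s)^2)$.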
], [ "Introduction", "The main goal of our work is to determine how small the Bayesian evidence in the FBST should be in order to reject the null hypothesis.", "Therefore, considering the concepts in Pereira (1985), in Oliveira (2014) and the recent work of Pereira et al.", "(2017) and Gannon et al.", "(2019) related to adaptive significance levels (levels that are functions of the sample size, obtained from the generalized form of the Neyman-Pearson Lemma), we propose to establish a cut-off value $k^*$ for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ as a function of the sample size $n$ and the dimension of the parametric space $d$ , i.e., $ k^*= k^*(n,d)$ with $k^*\\in [0,1]$ , such that $k^*$ minimizes the linear combination of the averaged type I and type II error probabilities, $a\\alpha _{\\varphi }+b\\beta _{\\varphi }$ .", "We will focus on model selection for Linear Regression Models with known variance." ], [ "Methodology", "Consider the normal linear regression model $\\mathrm {y}=\\mathrm {X}\\theta +\\varepsilon , \\quad \\varepsilon \\sim N_n(0,\\sigma ^2\\mathbb {I}_n),$ where $\\mathrm {y}=(y_1,\\dots , y_n)^{\\top }$ is an $n \\times 1$ vector of observations, $\\mathrm {X}=(x_1,\\dots , x_n)^{\\top }$ is an $n \\times p$ matrix of known coefficients with $x_i=(1,x_{i1},\\dots , x_{ip-1})^{\\top }$ , $\\theta =(\\theta _{1}^{\\top },\\theta _{2}^{\\top })^{\\top }$ is a $p \\times 1$ vector of parameters, and $\\varepsilon =(\\varepsilon _1,\\dots , \\varepsilon _n)^{\\top }$ is an $n\\times 1$ vector of random errors.", "Suppose that the residual error variance $\\sigma ^2$ is known; then $\\mathrm {y}\\vert \\theta \\sim N_n(\\mathrm {X}\\theta ,\\sigma ^2\\mathbb {I}_n)$ .", "The natural conjugate prior family is the family of normal distributions.", "Suppose therefore that $\\theta $ has the $N_p(\\mathrm {m}_0,\\mathrm {W}_0)$ prior distribution $g(\\theta ) \\propto \\exp \\left\\lbrace -\\frac{(\\theta -\\mathrm
{m}_0)^{\\top }\\mathrm {W}_0^{-1} (\\theta -\\mathrm {m}_0)}{2} \\right\\rbrace .$ Then, the posterior distribution of $\\theta $ is $\\theta \\vert \\mathrm {y}\\sim N_p(\\mathrm {m}^*,\\mathrm {W}^*)$ , with $\\mathrm {m}^*&= (\\mathrm {W}_0^{-1}+\\sigma ^{-2}\\mathrm {X}^{\\top }\\mathrm {X})^{-1}(\\mathrm {W}_0^{-1}\\mathrm {m}_0+\\sigma ^{-2}\\mathrm {X}^{\\top }\\mathrm {y}),\\\\\\mathrm {W}^*&=(\\mathrm {W}_0^{-1}+\\sigma ^{-2}\\mathrm {X}^{\\top }\\mathrm {X})^{-1}.$ If $\\theta _1$ has $s$ elements and $\\theta _2$ has $r$ elements, write $\\mathrm {m}_0= \\left(\\begin{array}{c}\\mathrm {m}_{0,1}\\\\\\mathrm {m}_{0,2}\\\\\\end{array}\\right), ~~~~~~\\mathrm {W}_0= \\left(\\begin{array}{cc}\\mathrm {W}_{0,11} & \\mathrm {W}_{0,12} \\\\\\mathrm {W}_{0,21}& \\mathrm {W}_{0,22}\\end{array}\\right),$ where $\\mathrm {m}_{0,1}$ is $s\\times 1$ , $\\mathrm {W}_{0,11}$ is $s\\times s$ , $\\mathrm {m}_{0,2}$ is $r\\times 1$ , $\\mathrm {W}_{0,22}$ is $r\\times r$ .", "So, $\\theta _1 \\sim N_s\\left( \\mathrm {m}_{0,1},\\mathrm {W}_{0,11}\\right),\\qquad \\theta _2 \\sim N_r\\left( \\mathrm {m}_{0,2},\\mathrm {W}_{0,22}\\right).$ Using general results on multivariate normal distributions, $\\theta _1 \\vert \\theta _2 \\sim N_s(\\mathrm {m}_{0,1.2}(\\theta _2),\\mathrm {W}_{0,11.2}),$ where $\\mathrm {m}_{0,1.2}(\\theta _2)=\\mathrm {m}_{0,1}+\\mathrm {W}_{0,12}\\mathrm {W}_{0,22}^{-1}(\\theta _2-\\mathrm {m}_{0,2})$ and $\\mathrm {W}_{0,11.2}=\\mathrm {W}_{0,11}-\\mathrm {W}_{0,12}\\mathrm {W}_{0,22}^{-1}\\mathrm {W}_{0,21}$ .", "A corresponding result holds if we change $\\mathrm {m}_0$ to $\\mathrm {m}^*$ and $\\mathrm {W}_0$ to $\\mathrm {W}^*$ .", "Definition 1 Let $f(\\theta \\vert \\mathrm {y})$ be the posterior density of $\\theta $ given the observed sample.", "Consider a sharp hypothesis $\\text{\\textbf {H}}: \\theta \\in \\Theta _\\text{\\textbf {H}}$ and let ${T_{\\mathrm {y}}=\\left\\lbrace \\theta \\in \\Theta : f(\\theta \\vert \\mathrm
{y})>\\text{sup}_\\text{\\textbf {H}}f(\\theta \\vert \\mathrm {y}) \\right\\rbrace }$ be the set tangential to H. The measure of evidence in favor of H is defined as ${ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)=1-P(\\theta \\in T_{\\mathrm {y}}\\vert \\mathrm {y})}$ .", "The FBST is the procedure that rejects H whenever $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ is small (Pereira et al., 2008).", "Suppose that we want to test the hypotheses $\\text{\\textbf {H}}&: \\theta _2=0\\nonumber \\\\\\text{\\textbf {A}}&: \\theta _2\\ne 0$ The tangential set to the null hypothesis is $T_{\\mathrm {y}}=\\left\\lbrace (\\theta _1,\\theta _2)\\in \\Theta : f(\\theta _1,\\theta _2 \\vert \\mathrm {y})>\\underset{\\text{\\textbf {H}}}{\\operatorname{sup}}f(\\theta _1,\\theta _2\\vert \\mathrm {y}) \\right\\rbrace , $ and, since $(\\theta -\\mathrm {m}^*)^{\\top }(\\mathrm {W}^*)^{-1} (\\theta -\\mathrm {m}^*) \\sim \\chi ^2_p$ , the evidence in favor of H is $\\small ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)=1-P\\left(\\chi ^2_p<-2\\log \\left\\lbrace \\left[\\underset{\\text{\\textbf {H}}}{\\operatorname{sup}}f(\\theta _1,\\theta _2\\vert \\mathrm {y})\\right] \\left|\\mathrm {W}^*\\right| ^{1/2}\\,(2\\pi )^{p/2}\\right\\rbrace \\right),$ where $\\underset{\\text{\\textbf {H}}}{\\operatorname{sup}}f(\\theta _1,\\theta _2\\vert \\mathrm {y})=f(\\mathrm {m}^*_{1.2}(\\theta _2=0),\\, 0\\vert \\mathrm {y})$ .", "Consider $\\varphi (\\mathrm {y})$ as the test such that $\\varphi (\\mathrm {y})= \\left\\lbrace \\begin{array}{l} 0 \\quad \\mbox{if} \\quad ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)> k\\\\ 1 \\quad \\mbox{if} \\quad ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)\\le k. 
\\end{array} \\right.\\;$ Thus, define the set $\\Psi = \\left\\lbrace \\mathrm {y}\\in \\Omega : ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)\\le k\\right\\rbrace .", "$ The averaged error probabilities can be expressed in terms of the Bayesian prior predictive densities under the respective hypotheses as follows $\\alpha _{\\varphi }&= P(\\varphi (\\mathrm {y})=1\\vert \\text{\\textbf {H}}) \\nonumber \\\\&= \\int _{\\mathrm {y}\\in {\\Psi }}f_\\text{\\textbf {H}}(\\mathrm {y})\\,d\\mathrm {y}\\nonumber \\\\&=\\int _{\\mathrm {y}\\in {\\Psi }}\\,\\int _{\\textbf {H}} f(\\mathrm {y}\\vert \\theta _1,\\theta _2) \\, g_\\textbf {H}(\\theta _1,\\theta _2)\\, d\\theta _1\\,d\\theta _2\\nonumber \\\\&=\\int _{\\mathrm {y}\\in {\\Psi }}\\,\\int _{\\textbf {H}} f(\\mathrm {y}\\vert \\theta _1,\\theta _2) \\, g(\\theta _1\\vert \\theta _2=0)\\, d\\theta _1\\,d\\theta _2\\\\[5pt]&=\\int _{\\mathrm {y}\\in {\\Psi }}\\,\\int _{\\theta _1 \\in \\mathbb {R}^s} f(\\mathrm {y}\\vert \\theta _1,\\theta _2=0) \\, g(\\theta _1\\vert \\theta _2=0)\\, d\\theta _1\\nonumber \\\\[5pt]&=\\int _{\\mathrm {y}\\in {\\Psi }}{\\small N_n\\left( \\mathrm {X}\\mathrm {C}\\mathrm {m}_{0,1.2}(\\theta _2=0),\\,\\left(\\sigma ^2\\mathbb {I}_n+(\\mathrm {X}\\mathrm {C})\\mathrm {W}_{0,11.2}(\\mathrm {X}\\mathrm {C})^{\\top }\\right)\\right),}$ where $\\mathrm {C}_{(s+r) \\times s}=[\\mathbb {I}_s,0_{s\\times r}]^{\\top }$ .", "$\\beta _{\\varphi }&=P(\\varphi (\\mathrm {y})=0\\vert \\text{\\textbf {A}})\\nonumber \\\\&= \\int _{\\mathrm {y}\\notin {\\Psi }} f_\\text{\\textbf {A}}(\\mathrm {y})\\,d\\mathrm {y}\\nonumber \\\\&=\\int _{\\mathrm {y}\\notin {\\Psi }} \\int _{\\textbf {A}} f(\\mathrm {y}\\vert \\theta ) \\, g_\\textbf {A}(\\theta )\\, d\\theta \\nonumber \\\\[3pt]&=\\int _{\\mathrm {y}\\notin {\\Psi }} \\int _{\\textbf {A}} f(\\mathrm {y}\\vert \\theta ) \\, g(\\theta )\\, d\\theta \\nonumber \\\\[3pt]&=\\int _{\\mathrm {y}\\notin {\\Psi }} N_n\\left( \\mathrm {X}\\mathrm 
{m}_0,\\left(\\sigma ^2\\mathbb {I}_n+\\mathrm {X}\\mathrm {W}_0\\mathrm {X}^{\\top }\\right)\\right).$ So, the adaptive cut-off value $k^{*}$ for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ will be the $k$ that minimizes $a\\alpha _{\\varphi }+b\\beta _{\\varphi }$ .", "Finally, define $\\varphi ^{*}(\\mathrm {y})$ as the test such that $\\varphi ^{*}(\\mathrm {y})= \\left\\lbrace \\begin{array}{l} 0 \\quad \\mbox{if} \\quad ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)> k^{*}\\\\1 \\quad \\mbox{if} \\quad ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)\\le k^{*}.", "\\end{array} \\right.\\;$ The optimal averaged error probabilities, which depend on the sample size, will be $\\alpha _{\\varphi ^{*}}^{*}= P(\\varphi ^{*}(\\mathrm {y})=1\\vert \\text{\\textbf {H}}), \\quad \\beta _{\\varphi ^{*}}^{*}=P(\\varphi ^{*}(\\mathrm {y})=0\\vert \\text{\\textbf {A}}).$" ], [ "Results", "Table: Cut-off values $k^*$ for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ as a function of $n$ , with $d=\\text{dim}(\\Theta )$ , $a=b=1$ .", "Figure: Cut-off values $k^*$ for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ as a function of $n$ , with $d=\\text{dim}(\\Theta )$ , $a=b=1$ .", "By increasing $n$ , $k^*$ shows a decreasing trend, which means that the influence of the sample size on the determination of the cut-off for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ is very relevant.", "On the other hand, it is possible to notice the differences in the results between the two models.", "Then, the cut-off value for $ev\\left(\\text{\\textbf {H}};\\mathrm {y}\\right)$ will depend not only on the sample size but also on the dimension of the parametric space.", "More specifically, the $k^*$ value is greater when $d$ is higher.", "Figure: Optimal averaged error probabilities ($\\alpha ^{*}_{\\varphi ^{*}}$ , $\\beta ^{*}_{\\varphi ^{*}}$ and $\\alpha ^{*}_{\\varphi ^{*}}+\\beta ^{*}_{\\varphi ^{*}}$ ) as a function of $n$ , $a=b=1$ .", "With this 
procedure, increasing the sample size implies that the probabilities of both kinds of error, and their linear combination, decrease; in contrast, in most cases, when a single significance level independent of the sample size is set, only the type II error probability decreases." ], [ "References", " Gannon, M. A., Pereira, C. A.", "B. and Polpo, A.", "(2019).", "Blending Bayesian and classical tools to define optimal sample-size-dependent significance levels.", "The American Statistician, 73(sup1), 213-222.", "Oliveira, M. C. (2014).", "Definição do nível de significância em função do tamanho amostral.", "Dissertação de Mestrado, Universidade de São Paulo, Instituto de Matemática e Estatística.", "Departamento de Estatística, São Paulo.", "Pereira, C. A.", "B., Nakano, E. Y., Fossaluza, V., Esteves, L. G., Gannon, M. A. and Polpo, A.", "(2017).", "Hypothesis tests for Bernoulli experiments: Ordering the sample space by Bayes factors and using adaptive significance levels for decisions.", "Entropy, 19(12), 696.", "Pereira, C. A.", "B., Stern, J. M. and Wechsler, S. (2008).", "Can a significance test be genuinely Bayesian?", "Bayesian Analysis 3(1), 79-100.", "Pereira, C. A.", "B.", "(1985).", "Teste de hipóteses definidas em espaços de diferentes dimensões: visão Bayesiana e interpretação Clássica.", "Tese de Livre Docência, Universidade de São Paulo, Instituto de Matemática e Estatística.", "Departamento de Estatística, São Paulo.", "Pereira, C. A.", "B. and Stern, J. M. (1999).", "Evidence and credibility: Full Bayesian significance test for precise hypotheses.", "Entropy 1(4), 99-110." ] ]
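The cut-off search described in the Methodology can be mimicked by Monte Carlo in a toy conjugate model. The sketch below uses a scalar mean with prior $N(0,1)$ and unit noise variance, simulates the prior predictive under H and under A, and scans a grid of $k$ for the value minimizing $a\alpha_{\varphi}+b\beta_{\varphi}$; all names, the toy model, and the simulation sizes are illustrative assumptions, not the paper's closed-form regression computation.

```python
import random
from math import erf, sqrt

def evidence(ys, n):
    """ev(H; y) for H: theta = 0 under prior theta ~ N(0,1) and y_i | theta ~ N(theta, 1)."""
    m = sum(ys) / (n + 1)                     # posterior mean
    s = 1.0 / sqrt(n + 1)                     # posterior std
    z = abs(m) / s
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

def adaptive_cutoff(n, a=1.0, b=1.0, sims=4000, rng=random):
    """Empirical k* minimizing a*alpha + b*beta for sample size n."""
    ev_H, ev_A = [], []
    for _ in range(sims):
        # draw y from the prior predictive under H (theta = 0) ...
        ev_H.append(evidence([rng.gauss(0.0, 1.0) for _ in range(n)], n))
        # ... and under A (theta drawn from the prior)
        theta = rng.gauss(0.0, 1.0)
        ev_A.append(evidence([rng.gauss(theta, 1.0) for _ in range(n)], n))
    best_k, best_risk = 0.0, float("inf")
    for k in [i / 200.0 for i in range(201)]:
        alpha = sum(e <= k for e in ev_H) / sims   # P(reject | H)
        beta = sum(e > k for e in ev_A) / sims     # P(accept | A)
        risk = a * alpha + b * beta
        if risk < best_risk:
            best_k, best_risk = k, risk
    return best_k, best_risk
```

Repeating the call over a range of $n$ reproduces, in this toy setting, the qualitative behavior reported in the Results: the empirical $k^*$ tends to shrink as $n$ grows, since the evidence under A concentrates near zero.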
1906.04222
[ [ "Numerical computations of split Bregman method for fourth order total\n variation flow" ], [ "Abstract The split Bregman framework for Osher-Sol\\'e-Vese (OSV) model and fourth order total variation flow are studied.", "We discretize the problem by piecewise constant function and compute $\\nabla(-\\Delta_{\\mathrm{av}})^{-1}$ approximately and exactly.", "Furthermore, we provide a new shrinkage operator for Spohn's fourth order model.", "Numerical experiments are demonstrated for fourth order problems under periodic boundary condition." ], [ "Introduction", "A gradient flow has been of great interest in mathematics and mathematical physics because several evolution equations can be regarded as the gradient flows.", "For example, mathematical models in materials sciences including the Allen-Cahn equation and mean curvature flow can be regarded as second order $L^2$ -gradient flows.", "The Cahn-Hilliard equation is interpreted as a fourth-order $H^{-1}$ -gradient flow.", "We are interested in several important examples of gradient flows which are of the form $\\dfrac{\\partial u}{\\partial t} \\in -\\partial _H E(u)\\mbox{ for }t>0,$ where $H$ is a Hilbert space, $E:H\\rightarrow \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ is a convex, lower semi-continuous functional and the subdifferential $\\partial _H$ is defined as $\\partial _HE(u) = \\left\\lbrace p\\in H : E(v)-E(u)\\ge (p,v-u)_H\\mbox{ for all }v\\in H\\right\\rbrace .$ In this paper, we consider gradient flows (REF ) with convex energy $E$ but may be very singular.", "We give a few examples.", "Spohn [36] has proposed a mathematical model for the relaxation of a crystalline surface below the roughening temperature; $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\beta \\dfrac{\\nabla u}{|\\nabla u|}+|\\nabla u|^{p-2}\\nabla u\\right)\\right),$ where $\\beta >0$ and $p>1$ .", "Kashima [24] has presented this model as a fourth order $H^{-1}$ -gradient flow for energy functional $E(u) = \\beta 
\\displaystyle \\int _{\\Omega }|Du| + \\dfrac{1}{p}\\int _{\\Omega }|Du|^p.$ Furthermore, the total variation flow, which is the gradient flow for total variation energy, has been well studied in image processing.", "In 1992, Rudin, Osher and Fatemi [35] introduced the total variation to image denoising and reconstruction.", "Their model, which is known as the Rudin-Osher-Fatemi (ROF) model, is described as $u = \\mathop {\\mathrm {argmin}}_{u\\in L^2(\\Omega )} \\left\\lbrace \\displaystyle \\int _{\\Omega }|Du|+\\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{L^2(\\Omega )}^2\\right\\rbrace ,$ where $\\Omega \\subset \\mathbb {R}^2$ is a bounded domain and $f:\\Omega \\rightarrow \\mathbb {R}$ is a given noisy image.", "This introduces the second order total variation flow $u_t = \\operatorname{div}\\left(\\dfrac{\\nabla u}{|\\nabla u|}\\right) + \\lambda (u-f).$", "On the other hand, Osher, Solé and Vese [33] introduced the $H^{-1}$ fidelity and provided the Osher-Solé-Vese (OSV) model $u=\\mathop {\\mathrm {argmin}}_{u\\in H^{-1}(\\Omega )}\\left\\lbrace \\displaystyle \\int _{\\Omega }|Du|+\\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}(\\Omega )}^2\\right\\rbrace ,$ where $H^{-1}(\\Omega ) = (H^1_0(\\Omega ))^*$ .", "Their model performs better on textured or oscillatory images.", "Equation (REF ) gives the fourth order total variation flow $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\dfrac{\\nabla u}{|\\nabla u|}\\right)\\right)+\\lambda (u-f).$", "Performing numerical computations for the ROF model, the OSV model and total variation flow is difficult because of the singularity.", "Several studies have suggested numerical schemes for the ROF model and second order total variation flow.", "In particular, the split Bregman framework is well-known as an efficient solver for the ROF model.", "The aim of this paper is to provide a new numerical scheme, which is based on the backward Euler method and split Bregman framework, for fourth order total variation flow and 
Spohn's fourth order model.", "Numerical experiments are demonstrated for fourth order problems under periodic boundary condition.", "The split Bregman method, which is based on the Bregman iterative scheme [2], has been studied and performed in image processing (for example, see [31]).", "Goldstein and Osher [20] have proposed the alternating split Bregman method.", "Their method separates the “$L^1$ ” minimization part and the “$L^2$ ” part.", "The alternating split Bregman method has several advantages.", "They have mentioned that the “$L^2$ ” part is differentiable, and the shrinking method can be applied to the “$L^1$ ” part for the ROF model.", "Therefore it is an extremely efficient solver and is easy to code.", "The split Bregman framework can be performed for second order total variation flow easily.", "For example, $u_t = \\operatorname{div}\\left(\\dfrac{\\nabla u}{|\\nabla u|}\\right)$ introduces the subdifferential formulation $u_t\\in -\\partial F(u)$ , where $F(u) = \\int _{\\Omega }|Du|$ .", "We let $u_t\\approx (u^{k+1}-u^k)/\\tau $ and apply the backward Euler method to the subdifferential formulation, then we obtain $u^{k+1} = \\mathop {\\mathrm {argmin}}_{u\\in L^2(\\Omega )}\\left\\lbrace \\displaystyle \\int _{\\Omega }|Du|+\\dfrac{1}{2\\tau }\\Vert u-u^k\\Vert _{L^2(\\Omega )}^2\\right\\rbrace ,$ where $\\tau $ is the time step size.", "This is essentially the same problem as the ROF model, therefore the split Bregman framework can be applied to second order total variation flow.", "In this paper, we propose the split Bregman framework for the OSV model (REF ), fourth order total variation flow $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\dfrac{\\nabla u}{|\\nabla u|}\\right)\\right)$ and Spohn's fourth order model (REF ).", "For simplicity, we consider the one-dimensional torus $\\mathbb {T}$ .", "We introduce a spatial discretization by piecewise constant functions and then compute $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}v_h$ for piecewise constant 
function $v_h$ approximately or exactly.", "We apply the discrete gradient and discrete inverse Laplacian in our first scheme.", "In our second scheme, we calculate the inverse Laplacian for piecewise constant functions directly by using the second degree B-spline, which is a continuously differentiable piecewise polynomial.", "The problem can be reduced to a minimization problem on Euclidean space, which is included in earlier studies for the ROF model.", "Therefore we can apply the split Bregman framework to fourth order problems.", "Several theoretical results such as the convergence [5] can be applied to our scheme directly.", "Both of our schemes are demonstrated for fourth order problems, and we can check that they perform quite well.", "Furthermore, we introduce a new shrinkage operator for Spohn's fourth order model.", "This enables us to perform the numerical experiment for the relaxation of a crystalline surface below the roughening temperature quite effectively.", "Our scheme can be extended to fourth order problems on the two-dimensional torus.", "We also suggest a shrinkage operator for the two-dimensional Spohn model.", "Let us quickly overview some earlier results.", "There are many mathematical studies for the second and fourth order total variation flow.", "The well-posedness for fourth order total variation flow can be proved by considering the right hand side in (REF ) as a subdifferential of a total variation in $H^{-1}(\\Omega )$ (see [24]).", "This enables us to use the theory of maximal monotone operators [27], [3].", "On the other hand, Elliott and Smitheman [9] have proved the well-posedness for fourth order total variation flow by using the Galerkin method.", "Adjusting the methods in [15], Giga, Kuroda and Matsuoka [16] have established the extinction time estimate under Dirichlet boundary condition.", "Numerical computations which include anisotropic diffusion are performed in [29] for second order models.", "Note that even for the 
second order total variation flow (REF ), because of singularity at $\\nabla u=0$ , the speed of evolution is determined by nonlocal quantities.", "Therefore the definition of the solution itself is nontrivial.", "For the second order model, the comparison principle holds, and the theory of viscosity solutions is applicable to show well-posedness for a wide class of total variation type equations [12], [13].", "However, for the fourth order problem, the comparison principle does not hold in general (see [11]), and the theory of viscosity solutions is not applicable.", "For more details of mathematical analysis, we refer the reader to [11] and references therein.", "Several studies have considered the fourth order problem under periodic boundary condition.", "Kashima [25] has studied the characterization of subdifferential in $H_{\\mathrm {av}}^{-1}(\\mathbb {T}^d)$ .", "The exact profile of the fourth order total variation flow has been studied in [11].", "The extinction time estimate under periodic boundary condition has been established in [15].", "A duality based numerical scheme which applies the forward-backward splitting has been proposed in [17].", "Kohn and Versieux [26] have performed the numerical computation for Spohn's model.", "Their numerical scheme is based on the backward Euler method, mixed finite element method and regularization for singularity.", "They have proved the convergence by combining the regularization error estimate with temporal and spatial semidiscretization error estimates.", "The application of the split Bregman framework to crystalline flow has also been studied through what is called a level-set method.", "A crystalline mean curvature flow has been proposed independently in [1] and [37].", "Oberman, Osher, Takei and Tsai [30] have proposed applying the split Bregman framework to the level-set equation of mean curvature flow.", "Požár [34] has studied self-similar solutions of three dimensional crystalline mean curvature flow and 
presented a numerical scheme which is based on the finite element method and split Bregman framework.", "However, all calculations given there are for the second order model.", "A level-set method for mean curvature flow interprets the motion of curvature flow by a level-set of a solution of $\\dfrac{\\partial u}{\\partial t}-|\\nabla u|\\operatorname{div}\\left(\\dfrac{\\nabla u}{|\\nabla u|}\\right)=0.$", "It is a powerful tool to calculate evolution which experiences topological change.", "It was introduced by Osher and Sethian [32] as a numerical way to calculate the mean curvature flow.", "Note that the level-set mean curvature equation (REF ) looks similar to (REF ).", "However, the singularity of (REF ) at $\\nabla u=0$ is weaker than that of (REF ) because of the multiplier $|\\nabla u|$ .", "Therefore it is not necessary to study nonlocal quantities for the level-set mean curvature equation.", "Its analytic foundation, such as well-posedness and the comparison principle, has been established in [8], [10].", "For more details, we refer the readers to [14].", "Very recently, the analytic foundation of the level-set method has been extended to crystalline flow by Požár and the first author [18], [19] and Chambolle, Morini and Ponsiglione [7] and with Novaga [6] independently.", "Their methods are quite different.", "This paper is organized as follows.", "Section states the definition of $H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ and the total variation.", "We introduce the discretization by piecewise constant functions in Section .", "Furthermore, we propose two schemes for reducing $\\Vert \\cdot \\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}$ to a Euclidean norm.", "Section presents the split Bregman framework for the OSV model and fourth order total variation flow problem.", "In Section , we describe the shrinking method for Spohn's model.", "This report presents numerical examples on the one-dimensional torus in Section .", "Finally, we extend our scheme to two-dimensional 
fourth order problems under periodic boundary condition in Section .", "First, we review some of the standard facts on Fourier analysis for the one-dimensional torus $\\mathbb {T} = \\mathbb {R}/\\mathbb {Z}$ .", "In this paper, we regard $\\mathbb {T}$ as an interval $[0,1]$ with periodic boundary condition.", "The Fourier transform for $f\\in L^1(\\mathbb {T})$ and the definition of the Sobolev space on $\\mathbb {R}$ are explained in [21] and [22], respectively.", "Let $\\mathcal {D}(\\mathbb {T})$ be the complex-valued function space $C^{\\infty }(\\mathbb {T})$ endowed with the usual test function topology and $\\mathcal {D}^{\\prime }(\\mathbb {T})$ be its dual.", "The Fourier coefficient $\\widehat{f}_T(\\xi )\\in \\mathbb {C}$ for $f\\in \\mathcal {D}^{\\prime }(\\mathbb {T})$ is defined by the generalized Fourier transform (for example, see [23]); $\\widehat{f}_T(\\xi ) = \\langle f,e^{-2\\pi i\\xi x}\\rangle _{\\mathcal {D}^{\\prime }(\\mathbb {T}),\\mathcal {D}(\\mathbb {T})}.$", "The generalized Fourier transform satisfies similar properties to the Fourier transform for $f\\in L^1(\\mathbb {T})$ , for example, $\\widehat{df/dx}_T(\\xi ) = -\\left\\langle f,\\dfrac{d}{dx}e^{-2\\pi i\\xi x}\\right\\rangle _{\\mathcal {D}^{\\prime }(\\mathbb {T}),\\mathcal {D}(\\mathbb {T})} = 2\\pi i\\xi \\widehat{f}_T(\\xi )$ for all $f\\in \\mathcal {D}^{\\prime }(\\mathbb {T})$ .", "Furthermore, the Fourier coefficients $\\widehat{f}_T(\\xi )\\in \\mathbb {C}$ satisfy $f(x) = \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}\\widehat{f}_T(\\xi )e^{2\\pi i\\xi x}$ for all $f\\in \\mathcal {D}^{\\prime }(\\mathbb {T})$ (see [23]).", "In this Fourier series, the convergence should be understood in the natural topology of $\\mathcal {D}^{\\prime }(\\mathbb {T})$ .", "It is well-known that $\\mathcal {D}(\\mathbb {T})$ is dense in $L^2(\\mathbb {T})$ , therefore we have $L^2(\\mathbb {T})\\simeq (L^2(\\mathbb {T}))^*\\subset \\mathcal {D}^{\\prime }(\\mathbb {T})$ , where $(L^2(\\mathbb 
{T}))^*$ is the dual space of $L^2(\\mathbb {T})$ (for example, see [4]).", "Using the generalized Fourier transform, the Lebesgue space and the Sobolev space on $\\mathbb {T}$ are defined as follows: $L^2(\\mathbb {T}) = \\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}|\\widehat{f}_T(\\xi )|^2<\\infty \\right\\rbrace ,$ $H^1(\\mathbb {T}) = \\left\\lbrace f\\in L^2(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}\\xi ^2|\\widehat{f}_T(\\xi )|^2<\\infty \\right\\rbrace =\\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}(1+\\xi ^2)|\\widehat{f}_T(\\xi )|^2<\\infty \\right\\rbrace ,$ $H^{-1}(\\mathbb {T}) = \\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}(1+\\xi ^2)^{-1}|\\widehat{f}_T(\\xi )|^2<\\infty \\right\\rbrace .$ Note that the duality pairing can be described formally as $\\langle f,g\\rangle _{H^{-1}(\\mathbb {T}),H^1(\\mathbb {T})} = \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}\\widehat{f}_T(\\xi )\\overline{\\widehat{g}_T(\\xi )} = \\displaystyle \\int _{\\mathbb {T}}f(x)\\overline{g(x)}~dx$ for all $f\\in H^{-1}(\\mathbb {T})$ and $g\\in H^1(\\mathbb {T})$ ." 
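A discrete sketch of these Fourier-side computations: on an equispaced grid over $[0,1)$ the coefficients $\widehat{f}_T(\xi )$ can be approximated by the FFT (exactly, for band-limited grid functions), which yields both the inverse Laplacian of a mean-zero function and its $H^{-1}$-type norm by scaling each mode by $4\pi ^2\xi ^2$. The grid size, test function, and numpy dependency are illustrative assumptions.

```python
import numpy as np

def inv_laplacian_torus(f):
    """(-Delta_av)^{-1} f for a mean-zero grid function on [0,1) via the FFT."""
    N = f.size
    xi = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies 0, 1, ..., -1
    fhat = np.fft.fft(f) / N                   # Fourier coefficients f_hat(xi)
    uhat = np.zeros_like(fhat)
    nz = xi != 0
    uhat[nz] = fhat[nz] / (4.0 * np.pi**2 * xi[nz] ** 2)   # u_hat = f_hat / (4 pi^2 xi^2)
    return np.real(np.fft.ifft(uhat) * N)

def h_minus1_norm(f):
    """||f||_{H^{-1}_av} = sqrt(sum_{xi != 0} |f_hat(xi)|^2 / (4 pi^2 xi^2))."""
    N = f.size
    xi = np.fft.fftfreq(N, d=1.0 / N)
    fhat = np.fft.fft(f) / N
    nz = xi != 0
    return np.sqrt(np.sum(np.abs(fhat[nz]) ** 2 / (4.0 * np.pi**2 * xi[nz] ** 2)))
```

For $f(x)=\cos (2\pi x)$ this recovers $u = f/(4\pi ^2)$ and $\Vert f\Vert _{H^{-1}_{\mathrm {av}}} = \Vert \nabla u\Vert _{L^2} = 1/(2\sqrt{2}\,\pi )$, matching the norm identity derived in the next subsection.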
], [ "The inverse Laplacian $(-\\Delta _{\\mathrm {av}})^{-1}$", "We consider the functions on $\\mathbb {T}$ whose averages are equal to zero.", "Let $L^2_{\\mathrm {av}}(\\mathbb {T}) &= \\left\\lbrace f\\in L^2(\\mathbb {T}) : \\displaystyle \\int _{\\mathbb {T}}f(x)~dx=0\\right\\rbrace ,\\\\H^1_{\\mathrm {av}}(\\mathbb {T}) &= L^2_{\\mathrm {av}}(\\mathbb {T})\\cap H^1(\\mathbb {T}) = \\left\\lbrace f\\in H^1(\\mathbb {T}) : \\displaystyle \\int _{\\mathbb {T}}f(x)~dx = 0\\right\\rbrace ,\\\\H^{-1}_{\\mathrm {av}}(\\mathbb {T}) &= \\left\\lbrace f\\in H^{-1}(\\mathbb {T}) : \\langle f,1\\rangle _{H^{-1}(\\mathbb {T}),H^1(\\mathbb {T})} =0\\right\\rbrace .$ These definitions agree with the following ones: $L^2_{\\mathrm {av}}(\\mathbb {T}) = \\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\ne 0}|\\widehat{f}_T(\\xi )|^2<\\infty \\mbox{ and }\\widehat{f}_T(0)=0\\right\\rbrace ,$ $H^1_{\\mathrm {av}}(\\mathbb {T}) = \\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\ne 0}\\xi ^2|\\widehat{f}_T(\\xi )|^2<\\infty \\mbox{ and }\\widehat{f}_T(0) = 0\\right\\rbrace ,$ $H^{-1}_{\\mathrm {av}}(\\mathbb {T}) = \\left\\lbrace f\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\sum _{\\xi \\ne 0}\\xi ^{-2}|\\widehat{f}_T(\\xi )|^2<\\infty \\mbox{ and }\\widehat{f}_T(0) = 0\\right\\rbrace .$ It is easy to check that each of these spaces is a Hilbert space endowed with the inner products $(f,g)_{L^2_{\\mathrm {av}}(\\mathbb {T})}&=\\displaystyle \\sum _{\\xi \\ne 0}\\widehat{f}_T(\\xi )\\overline{\\widehat{g}_T(\\xi )},\\\\(f,g)_{H^1_{\\mathrm {av}}(\\mathbb {T})}&=\\displaystyle \\sum _{\\xi \\ne 0}4\\pi ^2\\xi ^2\\widehat{f}_T(\\xi )\\overline{\\widehat{g}_T(\\xi )},\\\\(f,g)_{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}&=\\displaystyle \\sum _{\\xi \\ne 0}\\dfrac{1}{4\\pi ^2}\\xi ^{-2}\\widehat{f}_T(\\xi )\\overline{\\widehat{g}_T(\\xi )},$ respectively.", "These inner products introduce 
the norms $\\Vert \\cdot \\Vert _{L^2_{\\mathrm {av}}(\\mathbb {T})}$ , $\\Vert \\cdot \\Vert _{H^1_{\\mathrm {av}}(\\mathbb {T})}$ and $\\Vert \\cdot \\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}$ .", "It is easy to check that $\\Vert f\\Vert _{L^2_{\\mathrm {av}}(\\mathbb {T})}&=\\Vert f\\Vert _{L^2(\\mathbb {T})} &\\qquad \\mbox{for all }f\\in L^2_{\\mathrm {av}}(\\mathbb {T}),\\\\\\Vert f\\Vert _{H^1_{\\mathrm {av}}(\\mathbb {T})} &= \\Vert df/dx\\Vert _{L^2(\\mathbb {T})} &\\qquad \\mbox{for all }f\\in H^1_{\\mathrm {av}}(\\mathbb {T}).$", "Fix $u\\in H^1_{\\mathrm {av}}(\\mathbb {T})$ arbitrarily.", "Let $c(\\xi ) = 4\\pi ^2\\xi ^2\\widehat{u}_T(\\xi )\\in \\mathbb {C}$ and $f(x) = \\sum _{\\xi \\in \\mathbb {Z}}c(\\xi )e^{2\\pi i\\xi x}$ , then we have $c(0)=0$ and $\\sum _{\\xi \\ne 0}\\xi ^{-2}|c(\\xi )|^2 = 16\\pi ^4\\sum _{\\xi \\ne 0}\\xi ^2|\\widehat{u}_T(\\xi )|^2<\\infty $ .", "This implies $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ .", "Moreover, $-\\Delta u(x) = \\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}\\widehat{(-\\Delta u)}_T(\\xi )e^{2\\pi i\\xi x}=\\displaystyle \\sum _{\\xi \\in \\mathbb {Z}}4\\pi ^2\\xi ^2\\widehat{u}_T(\\xi )e^{2\\pi i\\xi x} = f(x),$ where $-\\Delta u = -d^2u/dx^2$ .", "Consequently, $u=(-\\Delta _{\\mathrm {av}})^{-1}f$ defines $(-\\Delta _{\\mathrm {av}})^{-1}:H^{-1}_{\\mathrm {av}}(\\mathbb {T})\\rightarrow H^1_{\\mathrm {av}}(\\mathbb {T})$ .", "We call this operator the inverse Laplacian.", "Let $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ and $u=(-\\Delta _{\\mathrm {av}})^{-1}f\\in H^1_{\\mathrm {av}}(\\mathbb {T})$ , then we have $\\Vert f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 &= \\Vert -\\Delta u\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2\\\\&= \\displaystyle \\sum _{\\xi \\ne 0}\\dfrac{1}{4\\pi ^2}\\xi ^{-2}\\widehat{(-\\Delta u)}_T(\\xi )\\overline{\\widehat{(-\\Delta u)}_T(\\xi )}\\\\&= \\displaystyle \\sum _{\\xi \\ne 0}4\\pi ^2\\xi ^2|\\widehat{u}_T(\\xi )|^2 = \\Vert u\\Vert 
_{H^1_{\\mathrm {av}}(\\mathbb {T})}^2.$ This implies $\\Vert f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})} =\\Vert (-\\Delta _{\\mathrm {av}})^{-1}f\\Vert _{H^1_{\\mathrm {av}}(\\mathbb {T})}= \\Vert \\nabla (-\\Delta _{\\mathrm {av}})^{-1}f\\Vert _{L^2(\\mathbb {T})}$ for all $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ , where $\\nabla = d/dx$ ." ], [ "Bounded variation and $H^{-1}$ fidelity for the torus $\\mathbb {T}$", "We recall the spaces of functions of bounded variation on the one-dimensional torus.", "Definition 1 (Definition 3.3.13 of [21]) For a measurable function $f$ on $\\mathbb {T}$ which is defined everywhere, we define the total variation as $\\displaystyle \\int _{\\mathbb {T}}|Df| = \\operatorname{ess} \\sup \\left\\lbrace \\displaystyle \\sum _{j=1}^M|f(x_j)-f(x_{j-1})| : 0=x_0<x_1<\\dots <x_M=1\\right\\rbrace ,$ where the supremum is taken over all partitions of the interval $[0,1]$ .", "We say that $f$ is of bounded variation if the total variation of $f$ is finite.", "Moreover, we define $BV(\\mathbb {T}) = \\left\\lbrace v\\in \\mathcal {D}^{\\prime }(\\mathbb {T}) : \\displaystyle \\int _{\\mathbb {T}}|Dv|<\\infty \\right\\rbrace .$ Remark In the definition, $D$ can be regarded as the distributional derivative, and $Df$ can be identified with a signed Borel measure.", "Remark The total variation on a general open set $\\Omega \\subset \\mathbb {R}^d$ is defined as $\\displaystyle \\int _{\\Omega }|Dv| = \\sup \\left\\lbrace -\\int _{\\Omega }v(x)\\operatorname{div}\\phi (x)~dx : \\phi \\in C^{\\infty }_0(\\Omega ;\\mathbb {R}^d)\\mbox{ and }\\Vert \\phi \\Vert _{L^{\\infty }(\\Omega )}\\le 1\\right\\rbrace ,$ and the space of functions of bounded variation is defined as $BV(\\Omega ) = \\left\\lbrace v\\in L^1(\\Omega ) : \\displaystyle \\int _{\\Omega }|Dv|<\\infty \\right\\rbrace .$ It is well-known that if $v\\in W^{1,1}(\\Omega )$ , then $\\displaystyle \\int _{\\Omega }|Dv| = \\int _{\\Omega }|\\nabla v|~dx = 
|v|_{W^{1,1}(\\Omega )},$ and therefore $W^{1,1}(\\Omega )\\subset BV(\\Omega )\\subset L^1(\\Omega )$ .", "We define the functional $\\Phi :H^{-1}_{\\mathrm {av}}(\\mathbb {T})\\rightarrow \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ as $\\Phi (v) =\\left\\lbrace \\begin{array}{ll}\\displaystyle \\int _{\\mathbb {T}}|Dv|&\\mbox{if }v\\in BV(\\mathbb {T})\\cap H^{-1}_{\\mathrm {av}}(\\mathbb {T}),\\\\\\infty &\\mbox{otherwise.", "}\\end{array}\\right.$ Note that $\\Phi :H^{-1}_{\\mathrm {av}}(\\mathbb {T})\\rightarrow \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ is nonnegative, proper, lower semi-continuous and convex.", "In this paper, we consider the gradient flow equation of the form $(\\mbox{gradient flow})\\left\\lbrace \\begin{array}{rll}\\dfrac{du}{dt}(t) &\\in -\\partial _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\Phi (u(t))&\\mbox{for a.e.", "}t>0,\\\\u(\\cdot ,0) &=u_0\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T}),&\\end{array}\\right.$ where the subdifferential $\\partial _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}$ is defined as $\\partial _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}F(u) = \\left\\lbrace p\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T}) : F(v)-F(u)\\ge (p,v-u)_{H^{-1}_{\\mathrm {av}}(\\mathbb {T})} \\mbox{ for all }v\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})\\right\\rbrace $ for any convex functional $F:H^{-1}_{\\mathrm {av}}(\\mathbb {T})\\rightarrow \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ and $u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ .", "It is well-known that the theory of maximal monotone operators shows the existence and uniqueness of a solution $u\\in C([0,\\infty ),H^{-1}_{\\mathrm {av}}(\\mathbb {T}))$ to equation (REF ) (for example, see [27]).", "Let $\\tau >0$ be the temporal step size.", "We consider the backward Euler method for the gradient flow equation (REF ); for given $u^k\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ , find $u^{k+1}\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ such that $\\dfrac{u^{k+1}-u^k}{\\tau } \\in -\\partial 
_{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\Phi (u^{k+1}).$ This can be reduced to solving the following minimization problem: $u^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\left\\lbrace \\Phi (u)+\\dfrac{1}{2\\tau }\\Vert u-u^k\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2\\right\\rbrace .$ Since $\\Phi $ is convex and the quadratic term is strictly convex, such $u^{k+1}$ is uniquely determined.", "The convergence of the backward Euler method has been proved in [27].", "Note that equation (REF ) is similar to the OSV model [33] which can be described as $(\\mbox{OSV})\\left\\lbrace \\begin{array}{l}\\mbox{Find $u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ such that}\\\\u = \\mathop {\\mathrm {argmin}}\\left\\lbrace \\Phi (u) + \\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2\\right\\rbrace ,\\end{array}\\right.$ where $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ is given data and $\\lambda >0$ is an artificial parameter.", "The existence result in convex analysis (for example, see [4]) gives that the minimizer $u\\in BV(\\mathbb {T})\\cap H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ exists.", "Hereafter, we consider the following minimization problem: find $u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ such that $(\\mathrm {P}0)\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\left\\lbrace \\Phi (u)+\\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2\\right\\rbrace ,$ where $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ is given data or $f=u^k$ , and $\\lambda $ is a given parameter or $\\lambda = 1/\\tau $ .", "This covers both (OSV) and the backward Euler method for (gradient flow).", "Furthermore, $(\\mbox{P}0)$ introduces the following constrained problem: $(\\mathrm {P}1)\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\left\\lbrace \\displaystyle \\int _{\\mathbb {T}}|d|+\\dfrac{\\lambda }{2}\\Vert 
u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 : d=Du\\right\\rbrace .$ Remark When we consider Spohn's model $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\beta \\dfrac{\\nabla u}{|\\nabla u|}+|\\nabla u|^{p-2}\\nabla u\\right)\\right),$ the subdifferential formulation is given as $u_t \\in -\\partial _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\widetilde{\\Phi }(u)$ , where $\\widetilde{\\Phi }(u) = \\beta \\displaystyle \\int _{\\mathbb {T}}|Du| + \\dfrac{1}{p}\\int _{\\mathbb {T}}|Du|^p.$ Therefore the backward Euler method yields $u^{k+1} = \\mathop {\\mathrm {argmin}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\left\\lbrace \\widetilde{\\Phi }(u)+\\dfrac{1}{2\\tau }\\Vert u-u^k\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2\\right\\rbrace .$ Then we consider the constrained problem $\\mathop {\\mathrm {minimize}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\left\\lbrace \\beta \\displaystyle \\int _{\\mathbb {T}}|d|+\\dfrac{1}{p}\\int _{\\mathbb {T}}|d|^p + \\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 : d=Du\\right\\rbrace .$" ], [ "Discretization for the minimization problem", "We introduce the (spatial) discretization for the problem $(\\mbox{P}1)$ .", "Let $N\\in \\mathbb {N}$ be the partition number, $h=1/N$ and $x_n = nh$ .", "We regard $x_0 = x_N$ , then $\\lbrace x_n\\rbrace _{n=0}^N$ gives a uniform partition of $\\mathbb {T}$ .", "Furthermore, we let $x_{n+1/2} = (x_n+x_{n+1})/2 = (n+1/2)h$ for $n = -1,0,\\dots , N$ , where $x_{-1/2}$ and $x_{N+1/2}$ are identified with $x_{N-1/2}$ and $x_{1/2}$ , respectively.", "Then we define the following spaces of piecewise constant functions: $V_{h} &= \\left\\lbrace v_h:\\mathbb {T}\\rightarrow \\mathbb {R} : v_h|_{I_n}\\in \\mathbb {P}_0(I_n)\\mbox{ for all }n=0,\\dots ,N\\right\\rbrace ,\\\\V_{h0} &= \\left\\lbrace v_h = \\displaystyle \\sum _{n=1}^Nv_n1_{I_n}\\in V_h : \\displaystyle \\sum _{n=1}^Nv_n = 0\\right\\rbrace ,\\\\\\widehat{V}_{h} &= \\lbrace 
d_h:I\\rightarrow \\mathbb {R} : d_h|_{[x_{n-1},x_n)}\\in \\mathbb {P}_0([x_{n-1},x_n))\\mbox{ for all }n=1,\\dots ,N\\rbrace ,$ where $I=[0,1]$ , $I_n = [x_{n-1/2},x_{n+1/2})$ , $\\mathbb {P}_0(I_n)$ is the space of constant functions on the interval $I_n$ and $1_{I_{n}}$ is its characteristic function.", "Note that $V_{h0}$ is a finite dimensional subspace of $H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ .", "Furthermore, we define $D_h:V_{h0}\\rightarrow \\widehat{V}_h\\cap L^2_{\\mathrm {av}}(I)$ as $D_hv_h = \\displaystyle \\sum _{n=1}^N(v_{n}-v_{n-1})1_{[x_{n-1},x_n)},$ where $v_0$ is identified with $v_N$ .", "Let $d_h=D_hv_h\\in \\widehat{V}_h$ , $\\textbf {d} = (d_1,\\dots , d_N)^{\\mathrm {T}}\\in \\mathbb {R}^N$ for $d_h=\\sum _{n=1}^Nd_n1_{[x_{n-1},x_n)}$ , $\\widetilde{\\textbf {v}} = (v_1,v_2,\\dots , v_{N})^{\\mathrm {T}}\\in \\mathbb {R}^N$ for $v_h=\\sum _{n=1}^Nv_n1_{I_n}\\in V_{h0}$ , then we have $\\Phi (v_h) = \\dfrac{\\Vert D_hv_h\\Vert _{L^1(I)}}{h} = \\dfrac{\\Vert d_h\\Vert _{L^1(I)}}{h} = h\\Vert \\nabla _h\\widetilde{\\textbf {v}}\\Vert _1 = \\Vert \\textbf {d}\\Vert _1,$ where $\\nabla _h:\\mathbb {R}^N\\rightarrow \\mathbb {R}^N$ is the discrete gradient $\\nabla _h = h^{-1}\\begin{pmatrix}1&0&\\dots &0&-1\\\\-1&1&\\dots &0&0\\\\\\vdots &&\\ddots &&\\vdots \\\\0&0&\\dots &-1&1\\end{pmatrix}\\in \\mathbb {R}^{N\\times N}.$ Note that $D_hv_h\\in \\widehat{V}_h\\cap L^2_{\\mathrm {av}}(I)\\subset L^2_{\\mathrm {av}}(I)$ for all $v_h\\in V_{h0}$ ; however, $D_hv_h\\notin H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ because it does not satisfy the periodic boundary condition (see Figure REF ).", "Figure: An example of $v_h\\in V_{h0}$ and $D_hv_h\\in \\widehat{V}_h$.", "Here we introduce the discretized problem for (P1); $(\\mathrm {P}1)_h\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{u_h\\in V_{h0}}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1 + \\dfrac{\\lambda }{2}\\Vert u_h-f_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 : d_h = 
D_h u_h\\in \\widehat{V}_h\\right\\rbrace ,$ where $f_h\\in V_{h0}$ is given data or $f_h=u_h^k$ , and $\\textbf {d} = (d_1,\\dots , d_N)^{\\mathrm {T}}$ for $d_h=\\sum _{n=1}^Nd_n1_{[x_{n-1},x_n)}$ .", "Furthermore, we introduce the unconstrained problem $(\\mathrm {P}2)_h\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{u_h\\in V_{h0}, d_h\\in \\widehat{V}_h}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1 + \\dfrac{\\lambda }{2}\\Vert u_h-f_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 + \\dfrac{\\mu }{2}\\Vert d_h - D_h u_h\\Vert _{L^2(I)}^2\\right\\rbrace .$ Remark In this paper, we use $\\Vert d_h-D_hu_h\\Vert _{L^2(I)}^2$ .", "This enables us to apply the shrinking method to the minimization problem in the split Bregman framework." ], [ "Corresponding matrix form", "We reduce $(\\mathrm {P}2)_h$ to the matrix formulation.", "Let $\\textbf {d} = (d_1,\\dots , d_N)^{\\mathrm {T}}\\in \\mathbb {R}^N$ for $d_h=\\sum _{n=1}^Nd_n1_{[x_{n-1},x_n)}$ , $\\widetilde{\\textbf {u}} = (u_1,\\dots , u_N)^{\\mathrm {T}}\\in \\mathbb {R}^N$ and $\\textbf {u}=(u_1,\\dots ,u_{N-1})^{\\mathrm {T}}\\in \\mathbb {R}^{N-1}$ for $u_h = \\sum _{n=1}^Nu_n1_{I_n}\\in V_{h0}$ , then we have $d_h-D_hu_h = \\displaystyle \\sum _{n=1}^N(d_n-(u_n-u_{n-1}))1_{[x_{n-1},x_n)} = (\\textbf {d}-S_N\\widetilde{\\textbf {u}})\\cdot (1_{[x_0,x_1)},\\dots , 1_{[x_{N-1},x_N)})^{\\mathrm {T}},$ where $S_N=h\\nabla _h\\in \\mathbb {R}^{N\\times N}$ .", "Furthermore, $u_h\\in V_{h0}$ implies $u_N = -\\sum _{n=1}^{N-1}u_n$ , that is, $\\widetilde{\\textbf {u}} = R_N\\textbf {u}$ , where $R_N = \\begin{pmatrix}1&0&\\dots &0&\\\\0&1&\\dots &0&\\\\\\vdots &&\\ddots &\\vdots \\\\0&0&\\dots &1\\\\-1&-1&\\dots &-1\\end{pmatrix}\\in \\mathbb {R}^{N\\times (N-1)}.$ Therefore $\\dfrac{\\mu }{2}\\Vert d_h-D_hu_h\\Vert _{L^2(I)}^2= \\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2.$ Next, we consider two expressions of $\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2$ for $v_h\\in V_{h0}$ 
.", "Recall that equation (REF ) implies $\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 = \\Vert \\nabla (-\\Delta _{\\mathrm {av}})^{-1}v_h\\Vert _{L^2(\\mathbb {T})}^2.$ We propose two schemes for computing $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}$ .", "The first scheme is to approximate $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}$ by using the discrete gradient $\\nabla _h\\in \\mathbb {R}^{N\\times N}$ and the discrete Laplacian $-\\Delta _h = \\nabla _h^{\\mathrm {T}}\\nabla _h = h^{-2}S_N^{\\mathrm {T}}S_N = h^{-2}\\begin{pmatrix}2&-1&0&\\dots &-1\\\\-1&2&-1&\\dots &0\\\\\\vdots &&\\ddots &&\\vdots \\\\-1&0&0&\\dots &2\\end{pmatrix}\\in \\mathbb {R}^{N\\times N}.$ Let $\\widetilde{\\textbf {v}} = (v_1,v_2,\\dots , v_{N})^{\\mathrm {T}}\\in \\mathbb {R}^N$ and $\\textbf {v}=(v_1,\\dots ,v_{N-1})^{\\mathrm {T}}\\in \\mathbb {R}^{N-1}$ for $v_h\\in V_{h0}$ , then $\\widetilde{\\textbf {v}}=R_N\\textbf {v}$ .", "We define $\\textbf {w}\\in \\mathbb {R}^{N-1}$ and $\\widetilde{\\textbf {w}}\\in \\mathbb {R}^N$ for $w_h\\in V_{h0}$ in the same way.", "Letting $\\widetilde{\\textbf {v}}= -\\Delta _h\\widetilde{\\textbf {w}}$ implies $R_N\\textbf {v} = -\\Delta _hR_N\\textbf {w}.$ Multiplying by the (unique) pseudo-inverse matrix $L_N = \\dfrac{1}{N}\\begin{pmatrix}N-1&-1&\\dots &-1&-1\\\\-1&N-1&\\dots &-1&-1\\\\\\vdots &&\\ddots &&\\vdots \\\\-1&-1&\\dots &N-1&-1\\end{pmatrix}\\in \\mathbb {R}^{(N-1)\\times N}$ yields $\\textbf {v} = L_N(-\\Delta _h)R_N\\textbf {w} = h^{-2}L_NS_N^{\\mathrm {T}}S_NR_N\\textbf {w}$ .", "For simplicity of notation, we let $A_N &= L_NS_N^{\\mathrm {T}}S_NR_N,\\\\(-\\Delta _{\\mathrm {av}})_h &= h^{-2}A_N = L_N(-\\Delta _h)R_N.$ It is easy to check that $A_N = \\begin{pmatrix}3&0&1&1&\\dots &1&1&1\\\\-1&2&-1&0&\\dots &0&0&0\\\\0&-1&2&-1&\\dots &0&0&0\\\\\\vdots &&&\\ddots &&&\\vdots \\\\0&0&0&0&\\dots &-1&2&-1\\\\1&1&1&1&\\dots &1&0&3\\end{pmatrix}\\in \\mathbb {R}^{(N-1)\\times (N-1)}$ satisfies $\\det A_N=N^2\\ne 0$ , 
therefore $(-\\Delta _{\\mathrm {av}})_h = h^{-2}A_N$ is invertible.", "This implies $\\left\\lbrace \\begin{array}{rl}\\widetilde{\\textbf {v}} &= -\\Delta _h\\widetilde{\\textbf {w}},\\\\\\widetilde{\\textbf {w}}&= R_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v}.\\end{array}\\right.$ Our first scheme is to approximate $(-\\Delta _{\\mathrm {av}})^{-1}$ by $R_N(-\\Delta _{\\mathrm {av}})_h^{-1}$ , instead of $(-\\Delta _h)^{-1}$ which does not exist.", "This yields $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}v_h\\approx (\\nabla _hR_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v})\\cdot (1_{[x_0,x_1)},\\dots ,1_{[x_{N-1},x_N)})^{\\mathrm {T}},$ that is, $\\Vert \\nabla (-\\Delta _{\\mathrm {av}})^{-1}v_h\\Vert _{L^2(I)}^2 &\\approx \\left\\Vert \\left(\\nabla _hR_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v}\\right)\\cdot (1_{[x_0,x_1)},\\dots ,1_{[x_{N-1},x_N)})\\right\\Vert _{L^2(I)}^2\\\\&= h\\Vert \\nabla _hR_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v}\\Vert _2^2\\\\&= h^3\\Vert S_NR_NA_N^{-1}\\textbf {v}\\Vert _2^2.$ For simplicity, we let $J = S_NR_NA_N^{-1}\\in \\mathbb {R}^{N\\times (N-1)}$ , then our first scheme can be described as $\\dfrac{\\lambda }{2}\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 \\approx \\dfrac{\\lambda h^3}{2}\\Vert J\\textbf {v}\\Vert _2^2$ for all $v_h\\in V_{h0}$ .", "Remark When we apply the $H^{-s}_{\\mathrm {av}}(\\mathbb {T})$ norm $(0<s<1)$ to $(\\mbox{P}1)_h$ , the discrete inverse Laplacian $(-\\Delta _{\\mathrm {av}})_h^{-s}$ can be introduced by the discrete Fourier transform (for example, see [17]).", "Figure: The second degree B-spline basis functions.", "Our second scheme is to compute $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}$ directly.", "It requires second degree piecewise polynomials with continuous derivative.", "We define the second degree periodic B-spline basis functions (see Figure REF ) $B_n(x) = \\left\\lbrace \\begin{array}{rl}\\dfrac{(x-x_{n-\\frac{3}{2}})^2}{2h^2}&\\mbox{if }x\\in 
I_{n-1},\\\\\\dfrac{(x-x_{n-\\frac{1}{2}})(x_{n+\\frac{1}{2}}-x)}{h^2}+\\dfrac{1}{2}&\\mbox{if }x\\in I_n,\\\\\\dfrac{(x_{n+\\frac{3}{2}}-x)^2}{2h^2}&\\mbox{if }x\\in I_{n+1},\\\\0&\\mbox{otherwise}.\\end{array}\\right.$ We identify $B_{-1}\\equiv B_{N-1}$ , $B_0\\equiv B_N$ and $B_1\\equiv B_{N+1}$ .", "The B-spline basis functions have continuous derivative (see Figure REF ) $\\nabla B_n(x) =\\left\\lbrace \\begin{array}{rl}(x-x_{n-\\frac{3}{2}})h^{-2}&\\mbox{if }x\\in I_{n-1},\\\\2(x_n-x)h^{-2}&\\mbox{if }x\\in I_n,\\\\-(x_{n+\\frac{3}{2}}-x)h^{-2}&\\mbox{if }x\\in I_{n+1},\\\\0&\\mbox{otherwise}.\\end{array}\\right.$ Therefore we have $-\\Delta B_n(x) = \\left\\lbrace \\begin{array}{rl}-h^{-2}&\\mbox{if }x\\in I_{n-1},\\\\2h^{-2}&\\mbox{if }x\\in I_n,\\\\-h^{-2}&\\mbox{if }x\\in I_{n+1},\\\\0&\\mbox{otherwise}.\\end{array}\\right.$ Fix $v_h\\in V_{h0}$ arbitrarily, then there exists $w_h\\in \\operatorname{span}\\lbrace B_1,\\dots , B_N\\rbrace $ such that $w_h = (-\\Delta _{\\mathrm {av}})^{-1}v_h\\in H^1_{\\mathrm {av}}(\\mathbb {T})$ .", "It is easy to check that $\\displaystyle \\int _{\\mathbb {T}}B_n(x)~dx = h\\mbox{ for all }n=1,2,\\dots , N.$ Let $\\sum _{n=1}^Nw_n=0$ , then equation (REF ) implies $\\sum _{n=1}^Nw_nB_n\\in H^1_{\\mathrm {av}}(\\mathbb {T})$ .", "Furthermore, we let $w_h = \\displaystyle \\sum _{n=1}^N w_nB_n\\in H^1_{\\mathrm {av}}(\\mathbb {T}),\\ \\widetilde{\\textbf {w}}=(w_1,\\dots ,w_N)^{\\mathrm {T}}\\in \\mathbb {R}^N \\mbox{ and } \\textbf {w}\\in \\mathbb {R}^{N-1} \\mbox{ with } \\widetilde{\\textbf {w}}=R_N\\textbf {w}.$ Then we have $v_h=-\\Delta w_h = \\displaystyle \\sum _{n=1}^Nw_n (-\\Delta B_n) = h^{-2}\\sum _{n=1}^N(-w_{n-1}+2w_n-w_{n+1})1_{I_n}\\in V_{h0}.$ This implies $R_N\\textbf {v}=\\widetilde{\\textbf {v}}= -\\Delta _h\\widetilde{\\textbf {w}}=-\\Delta _hR_N\\textbf {w}.$ Multiplying by the pseudo-inverse matrix $L_N$ yields $\\textbf {v} =(-\\Delta _{\\mathrm {av}})_h\\textbf {w}= h^{-2}A_N\\textbf 
{w}.$ Therefore we have $\\textbf {w} = (-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v}=h^2A_N^{-1}\\textbf {v}.$ Figure: The derivative of the second degree B-spline basis functions.", "The definition, combined with equation (REF ), gives $\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 = \\Vert \\nabla w_h\\Vert _{L^2(\\mathbb {T})}^2$ , where $\\nabla w_h = \\displaystyle \\sum _{n=1}^Nw_n\\nabla B_n$ is a piecewise linear function which satisfies $\\nabla w_h(x_{n-1/2}) = (w_n-w_{n-1})h^{-1}\\mbox{ for all }n=1,\\dots , N.$ This implies $\\nabla w_h = \\displaystyle \\sum _{n=1}^N(w_n-w_{n-1})h^{-1}\\phi _{n-1/2} = (\\nabla _h\\widetilde{\\textbf {w}})\\cdot (\\phi _{1/2},\\dots ,\\phi _{N-1/2})^{\\mathrm {T}},$ where $\\phi _{n-1/2}(x) = \\left\\lbrace \\begin{array}{rl}(x-x_{n-3/2})h^{-1}&\\mbox{if }x\\in I_{n-1},\\\\(x_{n+1/2}-x)h^{-1}&\\mbox{if }x\\in I_n,\\\\0&\\mbox{otherwise}.\\end{array}\\right.$ We identify $\\phi _{-1/2}=\\phi _{N-1/2}$ , $\\phi _{1/2}=\\phi _{N+1/2}$ (see Figure REF ).", "It is easy to check that $\\displaystyle \\int _{\\mathbb {T}}\\phi _{n-1/2}(x)\\phi _{m-1/2}(x)~dx = \\left\\lbrace \\begin{array}{rl}2h/3&\\mbox{if }n=m,\\\\h/6&\\mbox{if }|n-m|=1,\\\\0&\\mbox{otherwise}\\end{array}\\right.$ for all $n=1,\\dots ,N$ .", "Therefore we have $\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 &= \\Vert \\nabla w_h\\Vert _{L^2(\\mathbb {T})}^2\\\\&= (\\nabla _h\\widetilde{\\textbf {w}})^{\\mathrm {T}}\\begin{pmatrix}2h/3&h/6&0&\\dots &0&h/6\\\\h/6&2h/3&h/6&\\dots &0&0\\\\\\vdots &&\\ddots &&&\\vdots \\\\h/6&0&0&\\dots &h/6&2h/3\\end{pmatrix}\\nabla _h\\widetilde{\\textbf {w}}\\\\&= \\dfrac{1}{h}(S_NR_N\\textbf {w})^{\\mathrm {T}}M_NS_NR_N\\textbf {w},$ where $M_N = \\begin{pmatrix}2/3&1/6&0&\\dots &0&1/6\\\\1/6&2/3&1/6&\\dots &0&0\\\\\\vdots &&\\ddots &&&\\vdots \\\\1/6&0&0&\\dots &1/6&2/3\\end{pmatrix}\\in \\mathbb {R}^{N\\times N}.$ Let $T=\\begin{pmatrix}a&0&\\dots &0&b\\\\b&a&\\dots &0&0\\\\\\vdots &&\\ddots 
&&\\\\0&0&\\dots &b&a\\end{pmatrix}\\in \\mathbb {R}^{N\\times N},$ where $a=\\dfrac{\\sqrt{3}+1}{2\\sqrt{3}}$ and $b = \\dfrac{\\sqrt{3}-1}{2\\sqrt{3}}$ , then $T^{\\mathrm {T}}T=M_N$ .", "Summarizing the above argument, our second scheme can be described as $\\dfrac{\\lambda }{2}\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 &= \\dfrac{\\lambda }{2h}(S_NR_N\\textbf {w})^{\\mathrm {T}}M_NS_NR_N\\textbf {w}\\\\&= \\dfrac{\\lambda }{2h}(S_NR_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v})^{\\mathrm {T}}T^{\\mathrm {T}}TS_NR_N(-\\Delta _{\\mathrm {av}})_h^{-1}\\textbf {v}\\\\&=\\dfrac{\\lambda h^3}{2}\\Vert TS_NR_NA_N^{-1}\\textbf {v}\\Vert _2^2.$ Let $H = TS_NR_NA_N^{-1} = TJ\\in \\mathbb {R}^{N\\times (N-1)}$ for simplicity of notation, then we have $\\dfrac{\\lambda }{2}\\Vert v_h\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}^2 = \\dfrac{\\lambda h^3}{2}\\Vert H\\textbf {v}\\Vert _2^2.$ Applying equations (REF ), (REF ) and (REF ) to $(\\mbox{P}2)_h$ implies the following two discretized problems; $&\\displaystyle \\mathop {\\mathrm {minimize}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}, \\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert J(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2\\right\\rbrace ,\\\\&\\displaystyle \\mathop {\\mathrm {minimize}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}, \\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert H(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2\\right\\rbrace ,$ where $\\textbf {f}\\in \\mathbb {R}^{N-1}$ is given as $f\\in V_{h0}$ or $\\textbf {f} = \\textbf {u}^k$ .", "Recall that the matrix $J$ is introduced by the approximation $\\nabla (-\\Delta _{\\mathrm {av}})^{-1}\\approx \\nabla _hR_N(-\\Delta _{\\mathrm {av}})_h^{-1}$ .", "On the other hand, we obtain $H$ by using $\\nabla 
(-\\Delta _{\\mathrm {av}})^{-1}$ exactly.", "Therefore (REF ) can be regarded as an approximation of (), which is equivalent to $(\\mbox{P}2)_h$ .", "Figure: The piecewise linear basis functions" ], [ "Split Bregman framework", "In this section, we review the alternating split Bregman framework in [20] for the problem $(\\mbox{P}3K)_h\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}, \\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2\\right\\rbrace ,$ where $K\\in \\mathbb {R}^{N\\times (N-1)}$ is equal to $J$ or $H$ .", "Recall that $(\\mbox{P}3K)_h$ is an approximation of the discrete problem for $(\\mbox{P}0)$ ; $(\\mbox{P}0K)_h\\quad \\displaystyle \\mathop {\\mathrm {minimize}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}}\\left\\lbrace \\Vert S_NR_N\\textbf {u}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2\\right\\rbrace .$ Let $\\Psi (\\textbf {u},\\textbf {d}) = \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2.$ The Bregman method replaces $\\Psi (\\textbf {u},\\textbf {d})$ by the Bregman distance and iteratively solves $(\\textbf {u}^{k+1},\\textbf {d}^{k+1}) = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}, \\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace D_{\\Psi }^{\\textbf {p}^k}((\\textbf {u},\\textbf {d}),(\\textbf {u}^k,\\textbf {d}^k)) + \\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2\\right\\rbrace ,$ where the Bregman distance $D_{\\Psi }^{\\textbf {p}^k}$ is defined as $D_{\\Psi }^{\\textbf {p}^k}((\\textbf {u},\\textbf {d}),(\\textbf {u}^k,\\textbf {d}^k)) = \\Psi (\\textbf {u},\\textbf {d}) - \\Psi (\\textbf {u}^k,\\textbf {d}^k)-\\textbf {p}_u^k\\cdot (\\textbf {u}-\\textbf {u}^k)-\\textbf {p}_d^k\\cdot (\\textbf 
{d}-\\textbf {d}^k),$ and $\\textbf {p}^k = (\\textbf {p}_u^k,\\textbf {p}_d^k)\\in \\mathbb {R}^{N-1}\\times \\mathbb {R}^N$ is defined as $\\textbf {p}_u^{k+1} &= \\textbf {p}_u^k - \\mu h(S_NR_N)^{\\mathrm {T}}(S_NR_N\\textbf {u}^{k+1}-\\textbf {d}^{k+1})\\mbox{ and }\\textbf {p}_u^0 =\\textbf {0}\\in \\mathbb {R}^{N-1}\\\\\\textbf {p}_d^{k+1} &= \\textbf {p}_d^k - \\mu h(\\textbf {d}^{k+1}-S_NR_N\\textbf {u}^{k+1})\\mbox{ and }\\textbf {p}_d^0=\\textbf {0}\\in \\mathbb {R}^N.$ Since $\\Psi :\\mathbb {R}^{N-1}\\times \\mathbb {R}^N\\rightarrow \\mathbb {R}$ is convex and lower semi-continuous, the Bregman distance $D_{\\Psi }^{\\textbf {p}^k}(\\cdot ,(\\textbf {u}^k,\\textbf {d}^k))$ is also convex and lower semi-continuous.", "Applying the usual existence result of convex analysis (see [4]) shows that a minimizer $(\\textbf {u}^{k+1},\\textbf {d}^{k+1})$ exists.", "Furthermore, by using induction we can show that $\\left(\\Psi (\\textbf {u},\\textbf {d}) +\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}-\\alpha ^{k}\\Vert _2^2\\right) - \\left(D_{\\Psi }^{\\textbf {p}^k}((\\textbf {u},\\textbf {d}),(\\textbf {u}^k,\\textbf {d}^k)) + \\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}\\Vert _2^2\\right)$ is independent of $(\\textbf {u},\\textbf {d})$ , where $\\alpha ^{k+1}\\in \\mathbb {R}^N$ is defined as $\\alpha ^{k+1} = \\alpha ^k -(\\textbf {d}^{k+1}-S_NR_N\\textbf {u}^{k+1})\\mbox{ and }\\alpha ^{0}=\\textbf {0}.$ This implies that the minimizer $(\\textbf {u}^{k+1},\\textbf {d}^{k+1})$ of problem (REF ) satisfies $(\\textbf {u}^{k+1},\\textbf {d}^{k+1}) = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}, \\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}-\\alpha ^{k}\\Vert _2^2\\right\\rbrace .$ This is the split Bregman iteration for the problem 
$(\\mbox{P}3K)_h$ .", "Finally, we apply the alternating split Bregman algorithm and obtain $(\\mathrm {P}4K)_h\\quad \\left\\lbrace \\begin{array}{rl}\\textbf {u}^{k+1} &= \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}}\\left\\lbrace \\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2 + \\dfrac{\\mu h}{2}\\Vert \\textbf {d}^k-S_NR_N\\textbf {u}-\\alpha ^{k}\\Vert _2^2\\right\\rbrace ,\\\\\\textbf {d}^{k+1} &= \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\Vert \\textbf {d}\\Vert _1+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}^{k+1}-\\alpha ^{k}\\Vert _2^2\\right\\rbrace ,\\\\\\alpha ^{k+1} &= \\alpha ^k-\\textbf {d}^{k+1}+S_NR_N\\textbf {u}^{k+1},\\end{array}\\right.$ where $\\textbf {f}\\in \\mathbb {R}^{N-1}$ is given data or $\\textbf {f}=\\textbf {u}^k$ , $\\alpha ^0 = \\textbf {0}$ , $\\textbf {u}^0$ is given as $\\textbf {0}$ or the initial condition, and $\\textbf {d}^0 = S_NR_N\\textbf {u}^0$ .", "This satisfies the following convergence result.", "Lemma 1 (Theorem 3.2 of [5]) Suppose that $(\\mbox{P}0K)_h$ has a minimizer $\\textbf {u}^*\\in \\mathbb {R}^{N-1}$ , then $\\textbf {u}^k$ which is determined by $(\\mbox{P}4K)_h$ satisfies $\\lim _{k\\rightarrow \\infty }\\Vert S_NR_N\\textbf {u}^{k+1}\\Vert _1 + \\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}^{k+1}-\\textbf {f})\\Vert _2^2 = \\Vert S_NR_N\\textbf {u}^*\\Vert _1 + \\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}^*-\\textbf {f})\\Vert _2^2.$ Furthermore, if a minimizer $\\textbf {u}^*$ of $(\\mbox{P}0K)_h$ is unique, then $\\lim _{k\\rightarrow \\infty }\\Vert \\textbf {u}^{k+1}-\\textbf {u}^*\\Vert _2=0$ .", "The functional in () is differentiable with respect to $\\textbf {u}$ , and the minimization () can be reduced to the shrinking method $(\\textbf {d}^{k+1})_n = \\operatorname{shrink}\\left((S_NR_N\\textbf {u}^{k+1}+\\alpha ^{k})_n,\\dfrac{1}{\\mu h}\\right),$ where $(\\textbf {v})_n$ is the $n$-th entry of the vector $\\textbf {v}$ and $\\operatorname{shrink}(\\rho ,a) = \\dfrac{\\rho }{|\\rho |}\\max \\lbrace |\\rho |-a,0\\rbrace .$ Therefore, the problem $(\\mbox{P}4K)_h$ introduces $(\\mbox{P}5K)_h\\quad \\left\\lbrace \\begin{array}{rl}\\textbf {u}^{k+1} &= \\left(\\lambda h^3K^{\\mathrm {T}}K+\\mu h(S_NR_N)^{\\mathrm {T}}S_NR_N\\right)^{-1}\\left(\\lambda h^3K^{\\mathrm {T}}K\\textbf {f}+\\mu h(S_NR_N)^{\\mathrm {T}}(\\textbf {d}^k-\\alpha ^{k})\\right),\\\\(\\textbf {d}^{k+1})_n &= \\operatorname{shrink}\\left((S_NR_N\\textbf 
{u}^{k+1}+\\alpha ^{k})_n,\\dfrac{1}{\\mu h}\\right)\\mbox{ for all }n=1,\\dots , N,\\\\\\alpha ^{k+1} &= \\alpha ^k-\\textbf {d}^{k+1}+S_NR_N\\textbf {u}^{k+1},\\end{array}\\right.$ where $\\textbf {f}\\in \\mathbb {R}^{N-1}$ is given data or $\\textbf {f}=\\textbf {u}^k$ , $\\alpha ^0 = \\textbf {0}$ , $\\textbf {u}^0$ is given as $\\textbf {0}$ or the initial condition, and $\\textbf {d}^0 = S_NR_N\\textbf {u}^0$ .", "Figure: The difference between $K=J$ and $K=H$." ], [ "Shrinking method for Spohn's model", "In this section, we consider the split Bregman framework for Spohn's model $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\beta \\dfrac{\\nabla u}{|\\nabla u|}+|\\nabla u|^{p-2}\\nabla u\\right)\\right),$ which can be regarded as the gradient flow problem for the energy functional $\\widetilde{\\Phi }(u) = \\beta \\displaystyle \\int _{\\mathbb {T}}|Du| + \\dfrac{1}{p}\\int _{\\mathbb {T}}|Du|^p,$ where $\\beta >0$ and $p>1$ .", "This energy appears in a model for the relaxation of a crystalline surface below the roughening temperature (for example, see [26]).", "If we substitute $w = (Du)^p$ , the alternating split Bregman method introduces a nonlinear problem.", "In this paper we always assume $p=3$ , and we apply the constraint $d=Du$ to $|Du|^3$ .", "The alternating split Bregman method implies $\\left\\lbrace \\begin{array}{rl}\\textbf {u}^{k+1} &= \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N-1}}\\left\\lbrace \\dfrac{\\lambda h^3}{2}\\Vert K(\\textbf {u}-\\textbf {f})\\Vert _2^2 + \\dfrac{\\mu h}{2}\\Vert \\textbf {d}^k-S_NR_N\\textbf {u}-\\alpha ^{k}\\Vert _2^2\\right\\rbrace ,\\\\\\textbf {d}^{k+1} &= \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}\\in \\mathbb {R}^N}\\left\\lbrace \\beta \\Vert \\textbf {d}\\Vert _1+\\dfrac{1}{p}\\Vert \\textbf {d}\\Vert _p^p+\\dfrac{\\mu h}{2}\\Vert \\textbf {d}-S_NR_N\\textbf {u}^{k+1}-\\alpha ^{k}\\Vert _2^2\\right\\rbrace ,\\\\\\alpha ^{k+1} &= \\alpha ^k-\\textbf {d}^{k+1}+S_NR_N\\textbf {u}^{k+1}.\\end{array}\\right.$ We consider the Euler-Lagrange equation for equation (); $\\beta \\left(\\dfrac{(\\textbf {d}^{k+1})_n}{|(\\textbf {d}^{k+1})_n|}\\right)_{1\\le n\\le N} + \\left((\\textbf {d}^{k+1})_n|(\\textbf {d}^{k+1})_n|^{p-2}\\right)_{1\\le n\\le N} +\\mu h(\\textbf {d}^{k+1}-S_NR_N\\textbf {u}^{k+1}-\\alpha ^k)=0.$ For simplicity of notation, we let $x=(\\textbf {d}^{k+1})_n$ , $a=1/(\\mu h)>0$ and $\\rho = (S_NR_N\\textbf {u}^{k+1}+\\alpha ^k)_n$ .", "This, combined with $p=3$ , gives $\\beta \\dfrac{x}{|x|}+x|x|+\\dfrac{1}{a}(x-\\rho )=0.$ Suppose 
that $x>0$, then we have $a\\beta <\\rho $ and $x=\\dfrac{1}{2}\\left(-\\dfrac{1}{a}+\\sqrt{\\dfrac{1}{a^2}-4\\left(\\beta -\\dfrac{\\rho }{a}\\right)}\\right).$ In a similar way, supposing $x<0$ yields $\\rho <-a\\beta $ and $x = \\dfrac{1}{2}\\left(\\dfrac{1}{a}-\\sqrt{\\dfrac{1}{a^2}-4\\left(\\beta +\\dfrac{\\rho }{a}\\right)}\\right).$ If $-a\\beta \\le \\rho \\le a\\beta $ , we let $x=0$ .", "These observations provide the shrinking operator of the form $x = \\dfrac{\\rho }{2a|\\rho |}\\left(-1+\\sqrt{1+4a\\max \\lbrace |\\rho |-a\\beta ,0\\rbrace }\\right).$ Applying this to equation () gives $\\left\\lbrace \\begin{array}{rl}\\textbf {u}^{k+1} &= \\left(\\lambda h^3K^{\\mathrm {T}}K+\\mu h(S_NR_N)^{\\mathrm {T}}S_NR_N\\right)^{-1}\\left(\\lambda h^3K^{\\mathrm {T}}K\\textbf {f}+\\mu h(S_NR_N)^{\\mathrm {T}}(\\textbf {d}^k-\\alpha ^{k})\\right),\\\\(\\textbf {d}^{k+1})_n &=\\dfrac{\\mu h\\rho _n^{k+1}}{2|\\rho _n^{k+1}|}\\left(-1+\\sqrt{1+\\dfrac{4}{\\mu h}\\max \\left\\lbrace |\\rho _n^{k+1}|-\\dfrac{\\beta }{\\mu h},0\\right\\rbrace }\\right) \\mbox{ for all }n=1,\\dots , N,\\\\\\alpha ^{k+1} &= \\alpha ^k-\\textbf {d}^{k+1}+S_NR_N\\textbf {u}^{k+1},\\end{array}\\right.$ where $\\rho _n^{k+1}=(S_NR_N\\textbf {u}^{k+1}+\\alpha ^k)_n$ .", "Figure: Numerical examples of the gradient flow." 
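The closed-form shrinking operator derived above is simple to implement and verify. A minimal NumPy sketch (the function names are ours; `shrink` is the soft-thresholding operator used in $(\mbox{P}5K)_h$ and `shrink_spohn` the generalized operator for $p=3$):

```python
import numpy as np

def shrink(rho, a):
    # soft-thresholding: the minimizer of |x| + (x - rho)^2 / (2a)
    return np.sign(rho) * np.maximum(np.abs(rho) - a, 0.0)

def shrink_spohn(rho, a, beta):
    # generalized shrinking operator for p = 3: the minimizer of
    #   beta|x| + |x|^3/3 + (x - rho)^2 / (2a),
    # given by x = rho/(2a|rho|) * (-1 + sqrt(1 + 4a*max{|rho| - a*beta, 0}))
    inner = np.maximum(np.abs(rho) - a * beta, 0.0)
    return np.sign(rho) / (2.0 * a) * (-1.0 + np.sqrt(1.0 + 4.0 * a * inner))

# sanity check: for |rho| > a*beta the output solves the Euler-Lagrange
# equation beta*x/|x| + x|x| + (x - rho)/a = 0
rho, a, beta = 2.0, 0.5, 1.0
x = float(shrink_spohn(rho, a, beta))
residual = beta * np.sign(x) + x * abs(x) + (x - rho) / a
```

For $|\rho |\le a\beta $ the operator returns exactly 0, which is the dead zone responsible for the flat parts of the evolved profiles.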
], [ "Example 1: Comparison of two schemes", "Here we show numerical examples of $(\\mbox{P}5K)_h$ .", "Note that equation () implies that $\\mu $ should satisfy $\\mu =O(h^{-1})$ .", "Moreover, this and equation () yield that $\\lambda = O(h^{-3})$ is necessary for reasonable computation.", "In this paper, we always regard $\\mathbb {T}$ as the interval $[0,1]$ with periodic boundary condition.", "Our first numerical example is the gradient flow (REF ) with the initial condition $u^0(x) = \\left\\lbrace \\begin{array}{ll}10(4-\\log 5)&\\mbox{if }|x-1/2|\\le 1/10,\\\\\\dfrac{5}{|x-1/2|}-10(1+\\log 5)&\\mbox{otherwise.", "}\\end{array}\\right.$ Note that a similar example is computed in [17].", "They essentially apply the matrix $J$ and compute the gradient flow problem without the split Bregman method.", "Their scheme requires $\\tau = \\lambda ^{-1} = O(h^5)$ for the $H^{-1}_{\\mathrm {av}}$ fidelity.", "We check the difference between $K=J$ and $K=H$ .", "Figure REF shows two numerical results with the same parameters $N=40$ , $\\lambda =h^{-3}$ and $\\mu =5h^{-1}$ .", "Numerical results $\\textbf {u}^k\\in \\mathbb {R}^{N-1}$ are represented as piecewise constant functions $u_h^k\\in V_{h0}$ .", "They are different because the matrix $J$ is introduced by the discrete gradient and the discrete inverse Laplacian.", "This difference is expected to be small if we consider sufficiently small $h$ .", "Figure REF shows the evolution of numerical solutions for $N=200$ , $\\lambda =h^{-3}$ and $\\mu =5h^{-1}$ .", "We infer from them that (REF ) can provide sufficiently accurate results." 
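The matrices behind the two schemes compared in this example can be assembled and cross-checked in a few lines. A NumPy sketch following the definitions of $S_N$, $R_N$, $L_N$, $A_N$, $J$ and $H=TJ$ given earlier (the helper name is ours):

```python
import numpy as np

def build_scheme_matrices(N):
    I = np.eye(N)
    S = I - np.roll(I, -1, axis=1)  # S_N = h * nabla_h: periodic backward differences
    # R_N embeds R^{N-1} into the zero-mean vectors via u_N = -sum_{n<N} u_n
    R = np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])
    # L_N: pseudo-inverse of R_N, with entries delta_ij - 1/N
    L = np.hstack([np.eye(N - 1), np.zeros((N - 1, 1))]) - 1.0 / N
    A = L @ S.T @ S @ R              # A_N = L_N S_N^T S_N R_N
    J = S @ R @ np.linalg.inv(A)     # first scheme:  J = S_N R_N A_N^{-1}
    a = (np.sqrt(3) + 1) / (2 * np.sqrt(3))
    b = (np.sqrt(3) - 1) / (2 * np.sqrt(3))
    T = a * I + b * np.roll(I, -1, axis=1)  # circulant T with T^T T = M_N
    H = T @ J                        # second scheme: H = T J
    return S, R, L, A, J, H, T

N = 8
S, R, L, A, J, H, T = build_scheme_matrices(N)
```

Since the eigenvalues of $M_N$ lie in $[1/3,1]$ , the two fidelity terms satisfy $\Vert H\textbf {v}\Vert _2^2\le \Vert J\textbf {v}\Vert _2^2\le 3\Vert H\textbf {v}\Vert _2^2$ , so for fixed $N$ the schemes can differ by at most a bounded factor.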
], [ "Example 2: Discontinuity and symmetry", "Our second numerical example for (REF ) is $u^0(x) = \\left\\lbrace \\begin{array}{ll}-a(1/4-r)^3&\\mbox{if }0<x<r\\mbox{ or }1-r<x<1,\\\\a(x-1/4)^3&\\mbox{if }r<x<1/2-r,\\\\a(1/4-r)^3&\\mbox{if }1/2-r<x<1/2+r,\\\\-a(x-3/4)^3&\\mbox{if }1/2+r<x<1-r,\\end{array}\\right.$ where $a = 450$ and $r=1/15$ .", "In [11], a class of initial data including (REF ) as an example has been studied analytically.", "They rigorously proved that the solution becomes discontinuous instantaneously.", "Their analysis gives an exact profile of the fourth order gradient flow.", "Note that, because of the uniqueness of the solution, the symmetry of the initial profile is preserved during the evolution.", "We can check that our numerical result approximately reproduces the discontinuity and the symmetry (see Figure REF , REF ).", "We use $K=J$ , $N=200$ , $\\lambda =25h^{-3}$ and $\\mu =15h^{-1}$ .", "Furthermore, we note that we can easily compute until $\\textbf {u}^k\\approx \\textbf {0}$ , because our scheme is stable for $\\tau =\\lambda ^{-1}=O(h^3)$ .", "Figure: Second numerical examples of the gradient flow." 
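The symmetry just mentioned can be made explicit. Evaluating the initial profile numerically (a small sketch; the function name is ours) shows that the data are continuous and odd under a half-period shift, $u^0(x+1/2)=-u^0(x)$, so the zero mean and this symmetry are inherited exactly by the discrete initial data.

```python
A, R = 450.0, 1.0 / 15.0  # a and r from the example

def u0(x):
    """Initial profile of Example 2 on the periodic interval [0, 1)."""
    x = x % 1.0
    if x < R or x > 1.0 - R:
        return -A * (0.25 - R) ** 3
    if x < 0.5 - R:
        return A * (x - 0.25) ** 3
    if x < 0.5 + R:
        return A * (0.25 - R) ** 3
    return -A * (x - 0.75) ** 3
```

On a grid of midpoints the values cancel in pairs $(x, x+1/2)$, which is the discrete counterpart of the symmetry preserved by the flow.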
], [ "Example 3: Extinction time", "Our third example for (REF ) is $u^0(x) = -\\cos (2\\pi x),$ which gives $\\Vert u^0\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})} = \\dfrac{1}{2\\sqrt{2}\\pi }.$ Figure REF shows the evolution of the numerical solution for the third example.", "We use $N=200$ , $\\lambda =20h^{-3}$ and $\\mu =30h^{-1}$ for Figure REF .", "Recall that our numerical scheme can compute the evolution until $\\textbf {u}^k\\approx \\textbf {0}$ easily.", "Furthermore, applying the extinction time estimate [15] to the one-dimensional torus implies $T^*(u^0) \\le C^*\\Vert u^0\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})},$ where $T^*(u^0)$ is the extinction time for the initial condition $u^0\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ and the constant $C^*$ satisfies $\\Vert f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})}\\le C^*\\int _{\\mathbb {T}}|Df|$ for all $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ .", "It is easy to check that $\\Vert f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T})} = \\left(\\displaystyle \\sum _{\\xi \\ne 0}\\dfrac{1}{4\\pi ^2}\\xi ^{-2}|\\widehat{f}_T(\\xi )|^2\\right)^{1/2} \\le \\dfrac{1}{2\\pi }\\Vert f\\Vert _{L^2(\\mathbb {T})} \\le \\dfrac{1}{2\\pi }\\Vert f\\Vert _{L^{\\infty }(\\mathbb {T})} \\le \\dfrac{1}{2\\pi }\\int _{\\mathbb {T}}|Df|$ for all $f\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T})$ .", "Therefore, the extinction time for $u^0(x) = -\\cos (2\\pi x)$ can be estimated as $T^*(u^0) \\le \\dfrac{1}{4\\sqrt{2}\\pi ^2} \\approx 1.7911224 \\times 10^{-2}.$ The numerical solution is expected to be “extinct” in $k \\le \\dfrac{T^*(u^0)}{\\tau } \\lessapprox 1.7911224\\tau ^{-1}\\times 10^{-2}.$ Table: Time step $k$ which satisfies $\\Vert \\textbf {u}^k\\Vert _{\\infty }<10^{-4}$ , $10^{-6}$ and $10^{-8}$ .Table REF shows the time step number $k$ such that $\\Vert \\textbf {u}^k\\Vert _{\\infty }<10^{-4}$ , $10^{-6}$ and $10^{-8}$ for each set of parameters.", "This result shows that we can get $\\Vert \\textbf {u}^k\\Vert _{\\infty }\\lessapprox \\tau $ in a reasonable number of iterations, as expected in (REF ); however, more iterations are required to obtain smaller $\\Vert \\textbf {u}^k\\Vert _{\\infty }$ .", "Figure: Numerical results for $u^0(x) = -\\cos (2\\pi x)$ ." ], [ "Example 4: Spohn's model", "Our fourth example is the split Bregman framework for Spohn's fourth order model (REF ), which is described in Section .", "Recall that we suppose that $p=3$ in this paper.", "Therefore we can apply the shrinkage operator (REF ) to the split Bregman framework for Spohn's model.", "Figure REF shows the numerical example for $u^0(x) = -\\cos (2\\pi x)$ , $\\beta = 0.5$ , $N=200$ , $\\lambda =50h^{-3}$ and $\\mu =30h^{-1}$ ." ], [ "Two dimensional case", "The fourth order total variation flow and Spohn's model on the two dimensional torus $\\mathbb {T}^2$ can be computed in a way similar to the one dimensional case.", "We can define $L^2_{\\mathrm {av}}(\\mathbb {T}^2)$ , $H^1_{\\mathrm {av}}(\\mathbb {T}^2)$ , $H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)$ and $(-\\Delta _{\\mathrm {av}})^{-1}:H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)\\rightarrow H^1_{\\mathrm {av}}(\\mathbb {T}^2)$ by the generalized Fourier transform.", "First, the fourth order isotropic total variation flow introduces the constraint problem $\\mathop {\\mathrm {minimize}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}\\left\\lbrace \\displaystyle \\int _{\\mathbb {T}^2}|(d_x,d_y)|+\\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}^2 : d_x = D_xu\\mbox{ and }d_y = D_yu\\right\\rbrace ,$ where $D_x$ and $D_y$ are the distributional derivatives with respect to each variable.", "Note that $\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}^2=\\Vert \\nabla _x(-\\Delta _{\\mathrm {av}})^{-1}(u-f)\\Vert _{L^2(\\mathbb {T}^2)}^2+\\Vert \\nabla _y(-\\Delta _{\\mathrm {av}})^{-1}(u-f)\\Vert _{L^2(\\mathbb {T}^2)}^2,$ where $\\nabla _x=\\partial /\\partial x$ and $\\nabla _y = \\partial /\\partial y$ .", 
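The $H^{-1}_{\mathrm {av}}$ quantities above can also be evaluated spectrally. The sketch below is our own helper, not part of the scheme: it computes the norm on the 2-torus through the Fourier characterization, in analogy with the one-dimensional sum used in Example 3, and reproduces the value $1/(2\sqrt{2}\pi )$ for $g=\cos (2\pi x)$ quoted there.

```python
import numpy as np

def h_minus1_av_norm(g):
    """H^{-1}_av norm of a zero-mean grid function on the torus [0,1)^2,
    via ||g||^2 = sum_{xi != 0} |ghat(xi)|^2 / (4 pi^2 |xi|^2)."""
    ny, nx = g.shape
    ghat = np.fft.fft2(g) / (nx * ny)        # Fourier coefficients ghat(xi)
    kx = np.fft.fftfreq(nx, d=1.0 / nx)      # integer frequencies
    ky = np.fft.fftfreq(ny, d=1.0 / ny)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = np.inf                        # exclude the zero (mean) mode
    return float(np.sqrt(np.sum(np.abs(ghat) ** 2 / (4.0 * np.pi**2 * k2))))
```

For a pure harmonic the discrete Fourier coefficients are exact, so the helper matches the analytic value to machine precision.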
"Let $N_x$ , $N_y$ be the partition number, $h_x=1/N_x$ , $h_y = 1/N_y$ , $x_n = nh_x$ and $y_n = nh_y$ .", "Furthermore, we let $Q_{n_x,n_y} = [x_{n_x-1/2},x_{n_x+1/2})\\times [y_{n_y-1/2},y_{n_y+1/2})$ and $\\widehat{Q}_{n_x,n_y} = [x_{n_x-1},x_{n_x})\\times [y_{n_y-1},y_{n_y})$ .", "Then we consider the space of piecewise constant functions $V_h &= \\left\\lbrace v_h :\\mathbb {T}^2\\rightarrow \\mathbb {R} : v_h|_{Q_{n_x,n_y}}\\in \\mathbb {P}_0(Q_{n_x,n_y})\\mbox{ for all }n_x,n_y\\right\\rbrace .\\\\V_{h0} &= \\left\\lbrace v_h = \\displaystyle \\sum _{n_x=1,n_y=1}^{N_x,N_y}v_{n_x,n_y}1_{Q_{n_x,n_y}}\\in V_h : \\sum _{n_x=1,n_y=1}^{N_x,N_y}v_{n_x,n_y}=0\\right\\rbrace ,\\\\\\widehat{V}_h &= \\left\\lbrace d_h : \\Omega \\rightarrow \\mathbb {R} : d_h|_{\\widehat{Q}_{n_x,n_y}}\\in \\mathbb {P}_0(\\widehat{Q}_{n_x,n_y})\\mbox{ for all }n_x,n_y\\right\\rbrace ,$ where $\\Omega = [0,1)^2$ .", "Any element $d_h\\in \\widehat{V}_h$ is described as $d_h = \\sum _{n_x,n_y}d_{n_x,n_y}1_{\\widehat{Q}_{n_x,n_y}}$ .", "Let $\\widetilde{\\textbf {v}} &= (v_{1,1},\\dots ,v_{N_x,1},v_{1,2},\\dots ,v_{N_x,2},\\dots ,v_{N_x-1,N_y}, v_{N_x,N_y})^{\\mathrm {T}}\\in \\mathbb {R}^{N_xN_y}\\\\\\textbf {v} &= (v_{1,1},\\dots ,v_{N_x,1},v_{1,2},\\dots ,v_{N_x,2},\\dots ,v_{N_x-1,N_y})^{\\mathrm {T}}\\in \\mathbb {R}^{N_xN_y-1}\\\\\\textbf {d} &= (d_{1,1},\\dots ,d_{N_x,1},d_{1,2},\\dots ,d_{N_x,2},\\dots ,d_{N_x,N_y})^{\\mathrm {T}}\\in \\mathbb {R}^{N_xN_y}$ for $v_h\\in V_{h0}$ and $d_h\\in \\widehat{V}_h$ .", "We define $D_{xh},D_{yh}:V_{h0}\\rightarrow \\widehat{V}_h\\cap L^2_{\\mathrm {av}}(\\Omega )$ as $D_{xh}v_h = \\displaystyle \\sum _{n_x,n_y}(v_{n_x,n_y}-v_{n_x-1,n_y})1_{Q_{n_x,n_y}},\\quad D_{yh}v_h = \\sum _{n_x,n_y}(v_{n_x,n_y}-v_{n_x,n_y-1})1_{Q_{n_x,n_y}}.$ This gives $\\Vert d_{xh}-D_{xh}u_h\\Vert _{L^2(\\Omega )}^2 &= h_xh_y\\Vert \\textbf {d}_x-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}\\Vert _2^2,\\\\\\Vert d_{yh}-D_{yh}u_h\\Vert _{L^2(\\Omega )}^2 &= h_xh_y\\Vert 
\\textbf {d}_y-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}\\Vert _2^2,$ where $R_{N_xN_y}\\in \\mathbb {R}^{(N_xN_y)\\times (N_xN_y-1)}$ is defined as in equation (REF ) and $\\nabla _{xh}$ , $\\nabla _{yh}$ are the discrete gradients $\\nabla _{xh} = h_x^{-1}I_{N_y}\\otimes S_{N_x},\\quad \\nabla _{yh}=h_y^{-1}S_{N_y}\\otimes I_{N_x},$ where $I_N\\in \\mathbb {R}^{N\\times N}$ is the identity matrix and $\\otimes $ is the Kronecker product.", "Then our discretized problem is described as $\\begin{array}{rl}\\displaystyle \\mathop {\\mathrm {minimize}}_{\\textbf {u}\\in \\mathbb {R}^{N_xN_y-1},\\textbf {d}_x, \\textbf {d}_y\\in \\mathbb {R}^{N_xN_y}}&\\Biggl \\lbrace \\Vert \\textbf {d}_{xy}\\Vert _1+\\dfrac{\\lambda h_xh_y}{2}\\left(\\Vert K_x(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\Vert K_y(\\textbf {u}-\\textbf {f})\\Vert _2^2\\right)\\\\&\\qquad +\\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}\\Vert _2^2+\\Vert \\textbf {d}_y-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}\\Vert _2^2\\right)\\Biggr \\rbrace ,\\end{array}$ where $\\textbf {d}_{xy}\\in \\mathbb {R}^{N_x\\times N_y}$ is defined as $d_{xy,n_x,n_y} = \\sqrt{d_{x,n_x,n_y}^2 + d_{y,n_x,n_y}^2}\\mbox{ for all }1\\le n_x\\le N_x\\mbox{ and }1\\le n_y\\le N_y,$ and $K_x, K_y\\in \\mathbb {R}^{(N_xN_y)\\times (N_xN_y-1)}$ are deduced from $\\nabla _x(-\\Delta _{\\mathrm {av}})^{-1}$ and $\\nabla _y(-\\Delta _{\\mathrm {av}})^{-1}$ , respectively.", "For example, we can approximate the inverse Laplacian by using $(-\\Delta _{\\mathrm {av}})_h = L_{N_xN_y}(\\nabla _{xh}^{\\mathrm {T}}\\nabla _{xh}+\\nabla _{yh}^{\\mathrm {T}}\\nabla _{yh})R_{N_xN_y}.$ This yields that our first scheme for the two dimensional case is described as $K_x=J_x$ and $K_y=J_y$ , where $J_x = \\nabla _{xh}R_{N_xN_y}(-\\Delta _{\\mathrm {av}})_h^{-1},\\quad J_y = \\nabla _{yh}R_{N_xN_y}(-\\Delta _{\\mathrm {av}})_h^{-1}.$ If we let $h_x=h_y=h$ , then it is required that $\\lambda = O(h^{-4})$ and $\\mu =O(h^{-2})$ .", "The split Bregman framework gives $\\textbf {u}^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N_xN_y-1}}&\\left\\lbrace \\dfrac{\\lambda h_xh_y}{2}\\left(\\Vert K_x(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\Vert K_y(\\textbf {u}-\\textbf {f})\\Vert _2^2\\right)\\right.\\\\&\\qquad + \\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x^k-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}-\\alpha _x^{k}\\Vert _2^2\\right.\\\\&\\qquad \\qquad \\left.\\left.+\\Vert \\textbf {d}_y^k-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}-\\alpha _y^{k}\\Vert _2^2\\right)\\right\\rbrace ,$ $(\\textbf {d}_x^{k+1},\\textbf {d}_y^{k+1}) = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}_x,\\textbf {d}_y\\in \\mathbb {R}^{N_xN_y}}&\\left\\lbrace \\Vert \\textbf {d}_{xy}\\Vert _1+\\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _x^{k}\\Vert _2^2\\right.\\right.\\\\&\\left.\\left.\\qquad +\\Vert \\textbf {d}_y-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _y^{k}\\Vert _2^2\\right)\\right\\rbrace ,$ $\\alpha _x^{k+1} = \\alpha _x^k-\\textbf {d}_x^{k+1}+h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1},\\quad \\alpha _y^{k+1} = \\alpha _y^k-\\textbf {d}_y^{k+1}+h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1},$ where $\\textbf {f}\\in \\mathbb {R}^{N_xN_y-1}$ is given data or $\\textbf {f}=\\textbf {u}^k$ , $\\alpha _x^0=\\alpha _y^0 = \\textbf {0}$ , $\\textbf {u}^0$ is given as $\\textbf {0}$ or the initial condition, and $\\textbf {d}_x^0 = h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^0$ , $\\textbf {d}_y^0 = h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^0$ .", "Note that the equation () is essentially the same formulation as that of the split Bregman framework for the second order isotropic problem, which is mentioned in [20].", "The Euler-Lagrange equation for equation () yields $\\dfrac{(\\textbf {d}_x^{k+1})_n}{|(\\textbf {d}_{xy}^{k+1})_n|}+\\mu h_xh_y\\left(\\textbf {d}_x^{k+1}-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _x^k\\right)_n=0,\\\\\\dfrac{(\\textbf {d}_y^{k+1})_n}{|(\\textbf {d}_{xy}^{k+1})_n|}+\\mu 
h_xh_y\\left(\\textbf {d}_y^{k+1}-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _y^k\\right)_n=0$ for all $n=1,\\dots , N_xN_y$ .", "We consider the approximation $\\dfrac{(\\textbf {d}_x^{k+1})_n}{|(\\textbf {d}_{xy}^{k+1})_n|} \\approx \\dfrac{(\\textbf {d}_x^{k+1})_n}{|(\\textbf {d}_{x}^{k+1})_n|}\\cdot \\dfrac{|s_{x,n}^k|}{s_n^k},\\quad \\dfrac{(\\textbf {d}_y^{k+1})_n}{|(\\textbf {d}_{xy}^{k+1})_n|} \\approx \\dfrac{(\\textbf {d}_y^{k+1})_n}{|(\\textbf {d}_{y}^{k+1})_n|}\\cdot \\dfrac{|s_{y,n}^k|}{s_n^k},$ where $s_n^k = \\sqrt{(s_{x,n}^k)^2+(s_{y,n}^k)^2},\\quad s_{x,n}^k = (h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1}+\\alpha _x^k)_n,\\quad s_{y,n}^k = (h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}+\\alpha _y^k)_n.$ Applying them to equations (REF ) and () gives the following shrinkage formulas, which are equivalent to those of [20]: $(d_x^{k+1})_n = \\dfrac{s_{x,n}^k}{|s_{x,n}^k|}\\max \\left\\lbrace |s_{x,n}^k|-\\dfrac{|s_{x,n}^k|}{\\mu h_xh_ys_n^k},0\\right\\rbrace ,\\quad (d_y^{k+1})_n = \\dfrac{s_{y,n}^k}{|s_{y,n}^k|}\\max \\left\\lbrace |s_{y,n}^k|-\\dfrac{|s_{y,n}^k|}{\\mu h_xh_ys_n^k},0\\right\\rbrace .$ Figure REF shows the numerical result of the fourth order isotropic total variation flow (REF ) in $\\mathbb {T}^2$ with initial data $u^0(x,y) = x(x-1)y(y-1)-1/36$ .", "We use $N_x=N_y=40$ , $\\lambda =5h^{-4}$ and $\\mu =20h^{-2}$ .", "Next, we consider the fourth order anisotropic total variation flow $u_t = -\\Delta \\left(\\operatorname{div}\\left(\\dfrac{\\nabla _xu}{|\\nabla _xu|},\\dfrac{\\nabla _yu}{|\\nabla _yu|}\\right)\\right).$ Figure: Numerical results of two-dimensional problems.", "Letting $F(u) = \\int _{\\mathbb {T}^2}\\left(|D_xu|+|D_yu|\\right)$ implies that formally we have $\\left(\\Delta \\left(\\operatorname{div}\\left(\\dfrac{\\nabla _xu}{|\\nabla _xu|},\\dfrac{\\nabla _yu}{|\\nabla _yu|}\\right)\\right),v-u\\right)_{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)} &= \\left(-\\operatorname{div}\\left(\\dfrac{\\nabla _xu}{|\\nabla 
_xu|},\\dfrac{\\nabla _yu}{|\\nabla _yu|}\\right),v-u\\right)_{L^2_{\\mathrm {av}}(\\mathbb {T}^2)}\\\\&= \\displaystyle \\int _{\\mathbb {T}^2}\\left(\\dfrac{\\nabla _xu\\overline{\\nabla _xv}}{|\\nabla _xu|}-|\\nabla _xu|+\\dfrac{\\nabla _yu\\overline{\\nabla _yv}}{|\\nabla _yu|}-|\\nabla _yu|\\right)\\\\&\\le F(v)-F(u),$ therefore $u_t\\in -\\partial _{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)} F$ .", "We apply the backward Euler method and obtain $u^{k+1} = \\mathop {\\mathrm {argmin}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}\\left\\lbrace \\displaystyle \\int _{\\mathbb {T}^2}\\left(|D_xu|+|D_yu|\\right)+\\dfrac{1}{2\\tau }\\Vert u-u^k\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}^2\\right\\rbrace ,$ which introduces the constraint problem $\\mathop {\\mathrm {minimize}}_{u\\in H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}\\left\\lbrace \\displaystyle \\int _{\\mathbb {T}^2}\\left(|d_x|+|d_y|\\right)+\\dfrac{\\lambda }{2}\\Vert u-f\\Vert _{H^{-1}_{\\mathrm {av}}(\\mathbb {T}^2)}^2 : d_x = D_xu\\mbox{ and }d_y = D_yu\\right\\rbrace .$ Combining this with the split Bregman framework gives $\\textbf {u}^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N_xN_y-1}}&\\left\\lbrace \\dfrac{\\lambda h_xh_y}{2}\\left(\\Vert K_x(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\Vert K_y(\\textbf {u}-\\textbf {f})\\Vert _2^2\\right)\\right.\\\\&\\qquad + \\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x^k-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}-\\alpha _x^{k}\\Vert _2^2\\right.\\\\&\\qquad \\qquad \\left.\\left.+\\Vert \\textbf {d}_y^k-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}-\\alpha _y^{k}\\Vert _2^2\\right)\\right\\rbrace ,$ $\\textbf {d}_x^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}_x\\in \\mathbb {R}^{N_xN_y}}\\left\\lbrace \\Vert \\textbf {d}_x\\Vert _1+\\dfrac{\\mu h_xh_y}{2}\\Vert \\textbf {d}_x-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _x^{k}\\Vert _2^2\\right\\rbrace ,\\quad \\textbf {d}_y^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}_y\\in \\mathbb {R}^{N_xN_y}}\\left\\lbrace \\Vert \\textbf {d}_y\\Vert _1+\\dfrac{\\mu h_xh_y}{2}\\Vert \\textbf {d}_y-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _y^{k}\\Vert _2^2\\right\\rbrace ,$ $\\alpha _x^{k+1} = \\alpha _x^k-\\textbf {d}_x^{k+1}+h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1},\\quad \\alpha _y^{k+1} = \\alpha _y^k-\\textbf {d}_y^{k+1}+h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}.$", "We can apply the shrinking method (REF ) to equations () and ().", "Figure REF presents the evolution of the fourth order anisotropic total 
variation flow for $u^0(x,y) = x(x-1)y(y-1)-1/36$ , $N_x=N_y=40$ , $\\lambda =5h^{-4}$ and $\\mu =20h^{-2}$ .", "For the second order anisotropic total variation flow, Łasica, Moll and Mucha [28] have considered a rectangular domain $\\Omega \\subset \\mathbb {R}^2$ or $\\Omega =\\mathbb {R}^2$ and rigorously proved that if the initial profile is piecewise constant, then the exact solution is piecewise constant.", "We can infer from our numerical experiment REF that their theoretical result also holds for the fourth order anisotropic total variation flow.", "Finally, we consider the two dimensional Spohn fourth order model.", "The split Bregman framework provides $\\textbf {u}^{k+1} = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {u}\\in \\mathbb {R}^{N_xN_y-1}}&\\left\\lbrace \\dfrac{\\lambda h_xh_y}{2}\\left(\\Vert K_x(\\textbf {u}-\\textbf {f})\\Vert _2^2+\\Vert K_y(\\textbf {u}-\\textbf {f})\\Vert _2^2\\right)\\right.\\\\&\\qquad + \\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x^k-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}-\\alpha _x^{k}\\Vert _2^2\\right.\\\\&\\qquad \\qquad \\left.\\left.+\\Vert \\textbf {d}_y^k-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}-\\alpha _y^{k}\\Vert _2^2\\right)\\right\\rbrace ,$ $(\\textbf {d}_x^{k+1},\\textbf {d}_y^{k+1}) = \\displaystyle \\mathop {\\mathrm {argmin}}_{\\textbf {d}_x,\\textbf {d}_y\\in \\mathbb {R}^{N_xN_y}}&\\left\\lbrace \\beta \\Vert \\textbf {d}_{xy}\\Vert _1+\\dfrac{1}{p}\\Vert \\textbf {d}_{xy}\\Vert _p^p\\right.\\\\&\\qquad +\\dfrac{\\mu h_xh_y}{2}\\left(\\Vert \\textbf {d}_x-h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _x^{k}\\Vert _2^2\\right.\\\\&\\qquad \\qquad \\left.\\left.+\\Vert \\textbf {d}_y-h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}-\\alpha _y^{k}\\Vert _2^2\\right)\\right\\rbrace ,$ $\\alpha _x^{k+1} = \\alpha _x^k-\\textbf {d}_x^{k+1}+h_x\\nabla _{xh}R_{N_xN_y}\\textbf {u}^{k+1},\\quad \\alpha _y^{k+1} = \\alpha _y^k-\\textbf {d}_y^{k+1}+h_y\\nabla _{yh}R_{N_xN_y}\\textbf {u}^{k+1}.$", "The Euler-Lagrange equation for () can be approximated by equation (REF ).", "In this paper, we always suppose that $p=3$ .", "Note 
that the approximation (REF ) implies $|(\\textbf {d}_{xy}^{k+1})_n| \\approx |(\\textbf {d}_x^{k+1})_n|\\cdot \\dfrac{s_n^k}{|s_{x,n}^k|}\\quad \\mbox{ and }\\quad |(\\textbf {d}_{xy}^{k+1})_n| \\approx |(\\textbf {d}_y^{k+1})_n|\\cdot \\dfrac{s_n^k}{|s_{y,n}^k|}.$ We obtain the approximated Euler-Lagrange equations $\\beta \\dfrac{(\\textbf {d}_x^{k+1})_n}{|(\\textbf {d}_{x}^{k+1})_n|}\\cdot \\dfrac{|s_{x,n}^k|}{s_n^k}+(\\textbf {d}_x^{k+1})_n|(\\textbf {d}_x^{k+1})_n|\\cdot \\dfrac{s_n^k}{|s_{x,n}^k|}+\\mu h_xh_y((\\textbf {d}_x^{k+1})_n-s_{x,n}^k)=0,\\\\\\beta \\dfrac{(\\textbf {d}_y^{k+1})_n}{|(\\textbf {d}_{y}^{k+1})_n|}\\cdot \\dfrac{|s_{y,n}^k|}{s_n^k}+(\\textbf {d}_y^{k+1})_n |(\\textbf {d}_y^{k+1})_n|\\cdot \\dfrac{s_n^k}{|s_{y,n}^k|}+\\mu h_xh_y((\\textbf {d}_y^{k+1})_n-s_{y,n}^k)=0.$ In a way similar to the one dimensional case, we provide the shrinkage operators of the form $(\\textbf {d}_x^{k+1})_n &= \\dfrac{\\mu h_xh_y|s_{x,n}^k|}{2s_n^k}\\cdot \\dfrac{s_{x,n}^k}{|s_{x,n}^k|}\\left(-1+\\sqrt{1+\\dfrac{4s_n^k}{\\mu h_xh_y|s_{x,n}^k|}\\max \\left\\lbrace |s_{x,n}^k|-\\dfrac{\\beta |s_{x,n}^k|}{\\mu h_xh_ys_n^k},0\\right\\rbrace }\\right),\\\\(\\textbf {d}_y^{k+1})_n &= \\dfrac{\\mu h_xh_y|s_{y,n}^k|}{2s_n^k}\\cdot \\dfrac{s_{y,n}^k}{|s_{y,n}^k|}\\left(-1+\\sqrt{1+\\dfrac{4s_n^k}{\\mu h_xh_y|s_{y,n}^k|}\\max \\left\\lbrace |s_{y,n}^k|-\\dfrac{\\beta |s_{y,n}^k|}{\\mu h_xh_ys_n^k},0\\right\\rbrace }\\right).$ Figure REF shows the numerical result of the split Bregman framework for Spohn's fourth order model.", "We use $p=3$ , $\\beta =0.25$ , $N_x=N_y=40$ , $\\lambda =1.25h^{-4}$ and $\\mu =5h^{-2}$ .", "Moreover, we use the initial value $u^0(x,y) = x(x-1)y(y-1)-1/36$ , which is considered in [26].", "We can obtain similar numerical results quite efficiently by the split Bregman framework." 
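A direct implementation of these component-wise operators can be checked against the approximated Euler-Lagrange equations. The sketch below (vectorized, with our own function name) does exactly that, writing $c=\mu h_xh_y$.

```python
import numpy as np

def spohn_shrink_2d(sx, sy, mu, hx, hy, beta):
    """Component-wise shrinkage for the 2D Spohn model (p = 3), as written above."""
    c = mu * hx * hy
    s = np.sqrt(sx**2 + sy**2)
    out = []
    with np.errstate(divide="ignore", invalid="ignore"):
        for comp in (sx, sy):
            # threshold beta*|s_comp| / (c*s), clamped at zero
            m = np.maximum(np.abs(comp) - beta * np.abs(comp) / (c * s), 0.0)
            d = (c * comp / (2.0 * s)
                 * (-1.0 + np.sqrt(1.0 + 4.0 * s / (c * np.abs(comp)) * m)))
            out.append(np.where(comp == 0.0, 0.0, d))
    return out[0], out[1]
```

Above the threshold the returned components make the residuals of the approximated Euler-Lagrange equations vanish up to rounding, which is a useful unit test when assembling the full 2D sweep.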
], [ "Conclusion", "In this study, we propose a new numerical scheme for the OSV model, the fourth order total variation flow and Spohn's fourth order model.", "Our scheme is based on the split Bregman framework for the ROF model and the second order total variation flow.", "We demonstrate several numerical examples for one dimensional and two dimensional problems under periodic boundary conditions.", "We use the parameters $\\lambda =O(h^{-3})$ , $\\mu =O(h^{-1})$ for the one dimensional case, and $\\lambda =O(h^{-4})$ , $\\mu =O(h^{-2})$ for the two dimensional case.", "For the fourth order total variation flow, our numerical results approximately represent the flat facet and the discontinuity, as expected from the theoretical result for the exact profile.", "Furthermore, we propose new shrinkage operators for Spohn's model.", "Numerical results for Spohn's model show facets and relaxation." ], [ "Acknowledgement", "A part of the work of the second author was done when he was a postdoc fellow at the University of Tokyo.", "Its hospitality is gratefully acknowledged.", "The work of the first author was partly supported by the Japan Society for the Promotion of Science through the grants No.", "26220702 (Kiban S), No.", "19H00639 (Kiban A), No.", "18H05323 (Kaitaku), No.", "17H01091 (Kiban A) and No.", "16H03948 (Kiban B)." ]
1906.04394
[ [ "Tunneling interferometry and measurement of thickness of ultrathin\n metallic Pb(111) films" ], [ "Abstract Spectra of the differential tunneling conductivity for ultrathin lead films grown on Si(111)7x7 single crystals with a thickness from 9 to 50 monolayers have been studied by low-temperature scanning tunneling microscopy and spectroscopy.", "The presence of local maxima of the tunneling conductivity is typical for such systems.", "The energies of maxima of the differential conductivity are determined by the spectrum of quantum-confined states of electrons in a metallic layer and, consequently, the local thickness of the layer.", "It has been shown that features of the microstructure of substrates, such as steps of monatomic height, structural defects, and inclusions of other materials covered with a lead layer, can be visualized by bias-modulation scanning tunneling spectroscopy." ], [ "Introduction", "The main trend in the development of modern solid–state electronics is to reduce dimensions of logical elements, sensors, and conductors connecting them [1].", "A consequence of a transition from macro- to nanoscale is an increase in the influence of quantum confinement effects and effects of discreteness of electric charge and structural disorder on the transport properties of nanoelectronic devices [2].", "In this work we discuss the diagnostics of the quality of deposited metallic layers and the presence of foreign inclusions on the example of ultrathin Pb films.", "Such films appear to be convenient objects for studying quantum–size effects in normal and superconducting metal films [3], [4], [5], [6], [7], [8], [9], peculiarities of the growth of metal nanoislands [9], [10], [11], vortex states in superconducting nanostructures [12], [13], and electronic properties of superconductor–normal metal hybrid structures [14].", "The main methods for studying electronic states in Pb films are low-temperature scanning tunneling microscopy (STM) and spectroscopy 
[3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], transport measurements [15], [16], and photoemission studies [16], [17], [18], [19], [20], [21].", "Peaks of the differential tunneling conductivity were detected for some values of the potential $U^{\\,}_n$ of ultrathin Pb film and island samples by tunneling spectroscopy methods [3], [6], [7], [8], [5], [9], [13].", "A correlation between the spectrum of the $U^{\\,}_n$ values and the local thickness of the Pb layer was found and interpreted in terms of the resonant tunneling of electrons through quantum-confined levels in a quasi-two-dimensional electron gas.", "The energy of quantum-confined levels for electrons in a one-dimensional potential well is determined by the Bohr–Sommerfeld quantization rule [4] $\\varphi ^{\\,}_1 + \\varphi ^{\\,}_2 + 2k^{\\,}_{\\perp ,n} D = 2\\pi n,$ where $\\varphi ^{\\,}_{1}$ and $\\varphi ^{\\,}_{2}$ are the phase shifts of the electronic wave reflected from the metal–vacuum and metal–substrate interfaces, respectively; $k^{\\,}_{\\perp ,n}$ is the spectrum of allowed values of the wave vector transverse with respect to the interfaces; $D$ is the thickness of the layer; and $n$ is the number of nodes of a standing electron wave.", "The authors of [4], [18], [19], [20] tried to extract the phases $\\varphi ^{\\,}_{1}$ and $\\varphi ^{\\,}_{2}$ from tunneling and photoemission spectroscopy data.", "The authors of [4] demonstrated the possibility of visualizing structure of atoms of the lower interface under the metal layer by the scanning tunneling microscopy due to the dependence of the phase shift $\\varphi ^{\\,}_{2}$ on the lateral coordinates.", "Quantum–confined states and the corresponding features of the tunneling conductivity or optical properties were also revealed in Ag and Cu films [19], [22] and In islands [17], [23].", "Local tunneling spectroscopy [3], [6], [7], [8], [5], [9], [13] performed for a limited number of points cannot reliably determine the 
boundaries of regions with a constant thickness of the Pb layer.", "As far as we know, the application of scanning bias–modulation spectroscopy, which involves simultaneous acquisition of a topographic image and a map of the density of states for a given energy, for ultrathin Pb films has not yet been discussed.", "Figure: (a) Scanning tunneling microscopy image 230$\\times $230 nm$^2$ of the surface of the Pb island on top of Si(111)$7\\times 7$ obtained at $U=+2.00$ V and $I=50$ pA. (b) Schematic diagram of the structure of the studied area.", "The thick solid lines indicate the boundaries of the terraces at the upper interface of the Pb film.", "The dashed lines show the steps of monoatomic height in the Si(111) substrate.", "Numbers indicate the nominal thickness measured from the level of the wetting layer and expressed in units of $d^{\,}_{ML}$ .", "(c) Profiles of the surface along the A–B and C–D lines.", "(d–h) Differential conductivity $dI/dU$ versus $U$ at several points of the surface marked in panel (a) with the thicknesses of the Pb layer of 9 ML (d), 10 ML (e), 11 ML (f), 12 ML (g), and within the wetting layer (h); the measurements were performed under the initial condition at $U=+2.00$ V and $I=200$ pA.", "In this work we analyze the possibility of determining the thickness of Pb films and visualizing features of the microstructure of the substrate (steps of monoatomic height and foreign inclusions) under the Pb layer by low–temperature bias–modulation scanning tunneling spectroscopy." 
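The Bohr–Sommerfeld rule above already fixes the energy scale of the effects discussed below: with hard-wall phase shifts the level spacing near $E_F$ is $\Delta E=\pi \hbar v_F/D$. The short sketch below evaluates it for Pb as a consistency check; the physical constants are standard CODATA values, and $d_{ML}=0.285$ nm and the fitted slope $A=0.0753$ eV$^{-1}$ are the values reported in the Results section.

```python
import math

HBAR = 1.0545718e-34   # Planck constant over 2*pi, J s
EV = 1.602176634e-19   # J per eV
D_ML = 0.285e-9        # Pb(111) interlayer spacing, m
A = 0.0753             # fitted slope of 1/DeltaE vs (N + 3), 1/eV

def fermi_velocity():
    """v_F = d_ML / (pi * hbar * A), with A converted from 1/eV to 1/J."""
    return D_ML / (math.pi * HBAR * A / EV)

def level_spacing_eV(n_ml):
    """Delta E = pi * hbar * v_F / D near E_F, with D = d_ML * (N + 3)."""
    return math.pi * HBAR * fermi_velocity() / (D_ML * (n_ml + 3)) / EV
```

For a 17 ML island this gives about 0.66 eV, which is the size of the peak separations visible in the spectra below.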
], [ "Experimental procedure", "The preparation of the surface of substrates, the thermal deposition of Pb and the investigation of the electrophysical properties of Pb nanostructures are performed on a UHV LT SPM Omicron Nanotechnology ultrahigh–vacuum setup.", "The topography of prepared structures is studied by scanning tunneling microscopy at a temperature of 78 K in the regime of a given tunneling current at a constant potential $U$ of the sample with respect to the tip of a tunneling microscope.", "Etched tungsten wires with apexes cleaned by electron bombardment in ultrahigh vacuum are used as tips.", "The quality of prepared tips was tested by scanning Si(111)$7\\times 7$ and Au(111) surfaces.", "The electronic properties of Pb nanostructures are studied by scanning tunneling spectroscopy, which involves the measurement of local current–voltage ($I-U$ ) characteristics of a tip–sample tunnel junction or a series of such characteristics at a fixed position of the tip.", "Furthermore, bias–modulation tunneling spectroscopy provides maps of local differential conductivity $dI/dU$ as functions of the coordinates for a given average bias voltage by the filtration of the oscillating component of $I$ at the modulation frequency of the tip potential $f_0=11.111$  kHz using a Stanford Research SR 830 lock–in amplifier.", "The thermal deposition of Pb was performed in two stages.", "First, lead (Alfa Aesar, purity of 99.99%) was deposited from a Mo crucible by an EFM3 electron–beam evaporator on a preliminarily prepared Si(111)$7\\times 7$ surface at a pressure of $2\\times 10^{-10}$ mbar at a rate of about 0.01 nm/min at room temperature for 6 min.", "Such a procedure ensured the formation of the amorphous wetting Pb layer without islands, which was confirmed by the subsequent scanning tunneling microscopy analysis.", "Then, to obtain two–dimensional Pb islands, lead was deposited on the wetting layer at a pressure of $6\\times 10^{-10}$ mbar at a rate of about 0.5 nm/min at 
room temperature for 4 min, which corresponds to the average thickness of the deposited layer of 6 ML of lead." ], [ "Results and discussion", "Figure REF a shows a typical topographic image of the Pb/Si(111)$7\\times 7$ surface aligned with respect to the region with the lowest height.", "It is well known [3], [6], [7], [8] that the growth of Pb nanostructures occurs through the Stranski–Krastanov mechanism, which results in the formation of ultrathin islands with typical lateral dimensions from several tens to hundreds of nanometers.", "The A–B and C–D profiles shown in Fig.", "REF c indicate a quantized change in the height; consequently, the minimal change in the height of terraces should be associated with the thickness of a monolayer (ML) of lead atoms: $d^{\,}_{ML}=0.28\\pm 0.01$  nm.", "The estimate $d^{\,}_{ML}$ for our films coincides with the distance between atomic planes for single-crystal lead in the (111) direction: $d^{\,}_{ML}=a/\\sqrt{3}=0.285$  nm, where $a=0.495$  nm is the lattice constant.", "Since all heights in Figs. REF a–REF c are measured from the amorphous wetting layer [3], the actual thickness of the film $D$ is the sum of the parameter $d^{\,}_w$ , which is determined by the thickness of the wetting layer and a finite radius of localization of wave functions beyond the layer, and the nominal thickness $d$ of islands on the wetting layer.", "The available spectrum of heights in Fig.", "REF a can be expressed in terms of the number of monolayers $N=d/d^{\,}_{ML}$ and the following schematic representation can be proposed (see Fig.", "REF b).", "Figure: (a) Positions of the maxima $E^{\,}_n$ of the tunneling conductivity versus the nominal thickness of the layer $N=d/d^{\,}_{ML}$ for the Pb/Si(111)$7\\times 7$ system.", "The symbols $\\Box $ mark the expected positions of the quantum–confined levels in the model specified by Eq.", "() taking into account that $D=d^{\,}_{ML}\\,(N+3)+0.02$ nm .", "The thin and 
thick solid lines correspond to constant principal quantum numbers $\\ell $ and the dashed lines correspond to integer values of the parameter $p=2\\ell -3N$ .", "(b) Difference $\\Delta E = E^{\,}_{\\ell }-E^{\,}_{\\ell -1}$ (for energies near the Fermi level) versus $N$ for Pb/Si(111)$7\\times 7$ ($\\bullet $) and Pb/HOPG ($\\circ $).", "The linear extrapolation of the dependence $1/\\Delta E$ on the thickness of the layer makes it possible to estimate the thickness of the wetting layer for the Pb/Si(111)$7\\times 7$ system as $d^{\,}_w\\simeq 3d^{\,}_{ML}$ .", "(c) Spectrum $E^{\,}_{\\ell }(k^{\,}_{\\perp ,\\ell })$ reconstructed from the measurements shown in panel (a); $k_F\\simeq 1.593$ Å$^{-1}$ in the extended band scheme.", "The solid line corresponds to the spectrum $E(k)$ of bulk Pb in the (111) direction .", "Figures REF d–REF h show the dependences of $dI/dU$ on $U$ obtained for the Pb/Si(111)$7\\times 7$ system at various points.", "It was found that tunneling spectra at various points of the surface with the same thickness are identical (see Figs.", "REF d–REF e).", "An important feature of the resulting spectra is the presence of almost equidistant peaks of the conductivity; the positions of these peaks depend on the local thickness of the layer [3], [6], [7], [8].", "Such peaks of the conductivity are absent in local tunneling measurements within the wetting Pb layer in the range of $\\pm 2\\,$ V. 
Using the measurements on several Pb islands on Si(111)$7\\times 7$ with a thickness of 2.5 nm (9 ML) to 14.3 nm (50 ML), we composed the $E^{\\,}_m-N$ diagram indicating the positions of maxima of the conductivity versus the nominal thickness of the Pb layer expressed in units of $d^{\\,}_{ML}$ (Fig.", "REF a).", "An interval between two resolved quasistationary states $\\Delta E=E^{\\,}_{\\ell }-E^{\\,}_{\\ell -1}$ near the Fermi level $E^{\\,}_F$ can be estimated.", "It is easy to see that the dependence of $\\Delta E^{-1}$ on $N$ can be approximated with a good accuracy by a linear function: $\\Delta E^{-1}\\simeq A\\,(N+3)$ , where the parameter $A=0.0753\\,$ eV$^{-1}$ was determined by the least squares approximation; consequently, $d^{\\,}_{w}\\simeq 3d^{\\,}_{ML}$ [3].", "The interval between quantum–confined levels for the wetting layer should be $\\Delta E=(3A)^{-1}=4.5\\,$ eV, which explains the absence of pronounced peaks of the tunneling conductivity in the considered range of bias voltages.", "Preliminary data obtained for the Pb/HOPG system indicate that the parameters $d^{\\,}_w$ and $v^{\\,}_F$ can depend on the material of the substrate.", "Therefore, the $E^{\\,}_m-N$ diagram (see Fig.", "REF a) is not universal and is sensitive to the material of the substrate.", "The obtained data can be interpreted within the simplest model of localization of a particle with the effective mass $m^*$ in a one-dimensional potential well with the width $D$ and infinite walls.", "In this case, $\\varphi ^{\\,}_{1,2}=\\pi /2$ and solutions of Eq.", "(REF ) are states with $k^{\\,}_{\\perp ,\\ell }=\\pi \\ell /D$ , where $\\ell =1, 2, \\ldots $ is the number of electronic half–waves.", "Then, $E^{\\,}_{\\ell } = E^{\\,}_0 + \\frac{\\hbar ^2k^{2}_{\\perp ,\\ell }}{2m^*} \\simeq E^{\\,}_F + \\hbar v^{\\,}_F\\,\\left(\\frac{\\pi \\ell }{D} - k^{\\,}_F\\right),$ where $E^{\\,}_0 = E^{\\,}_F - \\hbar ^2k^{2}_{F}/2m^*$ is the bottom of the conduction band and $k^{\\,}_F$ 
and $v^{\\,}_F=\\hbar k^{\\,}_F/m^*$ are the wave vector and semiclassical velocity at the Fermi level, respectively.", "Since $D=d^{\\,}_{ML}\\,(N+3)$ we obtain $\\Delta E = \\pi \\hbar v^{\\,}_F/D$ and $d (\\Delta E)^{-1}/dN = d^{\\,}_{ML}/(\\pi \\hbar v^{\\,}_F)$ , what makes it possible to determine the Fermi velocity for electrons in the Pb(111) film as $v^{\\,}_F = d^{\\,}_{ML}/(\\pi \\hbar A)\\simeq 1.83\\times 10^8\\,$ cm/s.", "The resulting value is in agreement with the data reported in [3], [5].", "Expression (REF ) for each $m$ value predicts a hyperbolic dependence of the energy on $N$ if the number of monolayers is treated as a continuous parameter (thin and thick solid lines in Fig.", "REF a).", "It is noteworthy that one of the peaks of the conductivity for films with $N=17$ and 26 is observed at zero bias and, consequently, allowed $k^{\\,}_{\\perp }$ values should be close to $k^{\\,}_{F}$ .", "Counting the number of possible hyperbolic lines between resonance states on the Fermi level for $N=17$ and 26, we find that the principal quantum numbers of these states differ from each other by 13.", "Let $\\ell ^{\\,}_0$ be the number of the quantum state corresponding to the peak at zero bias for $N=17$ ; then, $\\ell ^{\\,}_0/(17+3)=(\\ell ^{\\,}_0+13)/(26+3)$ , from which $\\ell ^{\\,}_0=29$ .", "All other peaks can be indexed automatically (see Fig.", "REF a).", "The presented way of numbering of peaks gives the values coinciding with the data from other works [5].", "Consequently, we can estimate $k^{\\,}_F=\\pi \\ell ^{\\,}_0/[d^{\\,}_{ML}\\,(17+3)]=15.93\\,$ nm$^{-1}$ in the extended band scheme and $m^*=\\hbar k^{\\,}_F/v^{\\,}_F=1.01\\,m^{\\,}_0$ , where $m^{\\,}_0=9.1\\times 10^{-31}\\,$ kg is the mass of the free electron.", "Furthermore, it is easy to transform the $E^{\\,}_{\\ell }-N$ diagram to the $E^{\\,}_{\\ell }(k^{\\,}_{\\perp ,{\\ell }})$ dependence (see Fig.", "REF c).", "As expected, the dependence of $E^{\\,}_{\\ell }$ on 
$k^{\\,}_{\\perp ,\\ell }$ at $E\\simeq E^{\\,}_F$ is close to a linear function, which is in agreement with experimental data and calculations for bulk Pb [6], [24].", "The Fermi wavelength can be estimated as $\\lambda ^{\\,}_{F}=2\\pi /k^{\\,}_{F}=0.394\\,$ nm; therefore, the ratio $\\lambda ^{\\,}_{F}/d^{\\,}_{ML}$ close to 4/3 [5].", "Consequently, the energy of an electronic state with the index $\\ell $ (i.e., the number of half-waves) near $E^{\\,}_F$ for the film with a thickness of $N$ ML should be close to the energy of the state with the index $\\ell +3$ for the film with a thickness of $(N+2)$ ML.", "Using Eq.", "(REF ), we write the energy of states for the integer index $p=2\\ell -3N$ in the form $E^{\\,}_p \\simeq E^{\\,}_F + \\hbar v^{\\,}_F\\,\\left(\\frac{3}{2}\\,\\frac{\\pi }{d^{\\,}_{ML}}\\,\\frac{(N+p/3)}{(N+3)} - k^{\\,}_F\\right),$ The dependences of $E^{\\,}_p$ on $N$ for various $p$ values are shown in Fig.", "REF a.", "It is remarkable that the energy of states with $p=9$ in our model is independent of the thickness: $E^{\\,}_{p=9} - E^{\\,}_F \\simeq \\frac{\\hbar ^2 k^{\\,}_F}{m^*}\\,\\left(\\frac{3}{2}\\,\\frac{\\pi }{d^{\\,}_{ML}} - k^{\\,}_F\\right),$ The positions of peaks of the conductivity in the range of 0.6 to 0.7 eV that are observed for films with odd $N$ values are in good agreement with the estimate (REF ).", "A small experimentally observed slope of the dependence of $E^{\\,}_{p=9}$ on $N$ indicates that the parameter $d^{\\,}_w$ , which is determined by the thickness of the wetting layer and a finite localization radius of wave functions beyond the Pb film, is not precisely equal to $3d^{\\,}_{ML}$ and the relation $\\lambda ^{\\,}_F/d^{\\,}_{ML}=4/3$ is valid approximately.", "Figure: (a) Scanning tunneling microscopy image 690×\\times 690 nm 2 ^2 of the Pb/Si(111)7×77\\times 7 surface obtained at and I=300I=300~pA.", "(b) Schematic of the structure of the studied area.", "The thick solid and dashed lines show 
monatomic-height steps in the Si(111) substrate.", "Numbers indicate the nominal thickness in terms of $d^{\,}_{ML}$ .", "(c–e) Maps of the differential conductivity (the average tunneling current is $I=300$ pA, the modulation amplitude of the tip potential is 50 mV, and the frequency is 11.111 kHz) at bias voltages of (c), (d) and (e); brighter (darker) regions correspond to higher (lower) tunneling conductivity.", "Figure: (a) Scanning tunneling microscopy image 450$\times $350 nm$^2$ of the Pb/Si(111)$7\times 7$ surface obtained at $U=+0.30$ V and $I=200$ pA.", "(b) Map of the differential conductivity ($U=+0.30$ V and $I=200$ pA, modulation amplitude is 50 mV).", "The circle indicates a foreign inclusion that is not visible in the topography but becomes noticeable at scanning in the bias–modulation mode.", "Figure: (a) Scanning tunneling microscopy image 115$\times $92 nm$^2$ of the Pb/Si(111)$7\times 7$ surface acquired at $U=+0.20$ V and $I=200$ pA.", "(b) The profile along the A–B line.", "(c) Map of the local differential tunneling conductivity $dI/dU$ at $U=+0.20$ V, $I=200$ pA and a bias modulation amplitude of 50 mV; the labels indicate the local thickness of the Pb films with respect to the wetting Pb layer.", "Figure REF shows the results of the bias–modulation scanning tunneling spectroscopy study of the topography and the local differential conductivity of Pb nanoislands.", "The comparison of the topographic image (Fig. REF a) with maps of the local density of states at various energies (Figs. REF d–REF f) clearly shows that regions with an identical thickness of the Pb layer correspond to regions of an equal intensity on differential conductivity maps.", "This correspondence makes it possible to identify terraces of Pb islands with the same thickness even when an island is located in a complicated system of monatomic steps in the substrate or when the scanning region includes a part of the island with the surrounding
wetting layer (Figs.", "REF and REF ).", "In particular, according to the diagram shown in Fig.", "REF a, a measurement at $U=+0.6\\,$ V makes it possible to identify terraces with an odd number of monolayers with respect to the wetting layer because one of the conductivity peaks for such terraces lies near +0.6 V (see Fig.", "REF d).", "A single differential conductivity map is obviously insufficient to reconstruct the thicknesses of all regions, but several maps recorded at different biases significantly simplify the interpretation of a topographic image.", "It is noteworthy that bias–modulation scanning tunneling spectroscopy allows to reveal hidden details of the image (e.g., steps in the substrate and defects), which are completely covered with a metal layer and, hence, are invisible on the topographic image.", "For example, an invisible cluster of a different material is revealed under a Pb island with the atomically smooth surface in Fig.", "REF b.", "We would like to illustrate a procedure of the determination of the local thickness of a Pb film based on a single map of the differential tunneling conductance.", "The topographic image in Fig.", "REF a indicates that there are one developed monoatomic step and one appearing monoatomic step, caused by two screw dislocations, on the top surface of the Pb film.", "The map of the local differential conductance shown in Fig.", "1c points to the additional monoatomic steps on the bottom surface of the Pb film.", "Thus, one can conclude that there are three regions of different nominal thickness: $N_0$ (bright areas in Fig.", "REF c), $N_0+1$ (dark areas in Fig.", "REF c) and close to $N_0+2$ (area of the intermediate intensity in Fig.", "REF c).", "Taking into account the diagram in Fig.", "REF a, one can conclude that the peak of tunneling conductance at +0.2 V should correspond to the Pb film with $N_0=12$ .", "As a consequence, the bright and dark areas in the $dI/dU$ map should be attributed to $N=12$ and 
$N=13$ , respectively, while the thickness for the area of the intermediate intensity within the appearing monatomic step varies from $N=13$ to $N=14$ (see labels in Fig. REF c).", "This conclusion was supported by local STS measurements, which are similar to those shown in Figs. REF f–g." ], [ "Conclusion", "Pb nanoislands grown on the Si(111)$7\times 7$ surface have been experimentally studied by low–temperature scanning tunneling microscopy and bias–modulation scanning tunneling spectroscopy.", "It has been shown that the spectra of the local differential tunneling conductivity include pronounced conductivity peaks.", "The energies of the corresponding quasistationary states depend on the thickness of the Pb layer at the location of the tip of the scanning tunneling microscope.", "A method for indexing the conductivity peaks has been proposed on the basis of the unambiguous determination of quantum numbers for states at the Fermi level.", "Within the simplest model of a particle in a potential well with infinite walls, the resonance energy of the conductivity peak, which is almost independent of the film thickness, has been calculated.", "The relation of the energy of this peak to the microscopic parameters of the film ($d^{\,}_{ML}$ , $k^{\,}_F$ , and $m^*$ ) has been discussed.", "It has been shown that bias–modulation scanning tunneling spectroscopy at liquid nitrogen temperatures allows visualizing hidden defects under a metal layer with a thickness of 15 nm with subnanometer spatial resolution.", "In particular, monatomic steps in the substrate, as well as foreign inclusions under the metal layer, which are not manifested in a topographic image, have been revealed." ], [ "Acknowledgements", "We are grateful to V. S. Stolyarov and S. I.
Bozhko for valuable remarks and assistance in the work.", "The work was performed with the use of equipment at the Common Research Center 'Physics and Technology of Micro- and Nanostructures' at the Institute for Physics of Microstructures, Russian Academy of Sciences (Nizhny Novgorod, Russia).", "The work of S. S. U. and A. Yu. A. was supported by the Russian Foundation for Basic Research (project nos. 15-42-02416 and 16-02-00727).", "The work of A. V. P. was supported by the Russian Science Foundation (project no. 15-12-10020).", "[1] G. E. Moore, Electronics 38, 114 (1965).", "[2] D. K. Ferry and S. M. Goodnick, Transport in Nanostructures, 2nd ed., Cambridge University Press, 670 p. (2009).", "[3] I. B. Altfeder, K. A. Matveev, and D. M. Chen, Phys. Rev. Lett. 78, 2815 (1997).", "[4] I. B. Altfeder, V. Narayanamurti, and D. M. Chen, Phys. Rev. Lett. 88, 206801 (2002).", "[5] W. B. Su, S. H. Chang, W. B. Jian, C. S. Chang, L. J. Chen, and T. T. Tsong, Phys. Rev. Lett. 86, 5116 (2001).", "[6] I-Po Hong, C. Brun, F. Patthey, I. Yu. Sklyadneva, X. Zubizarreta, R. Heid, V. M. Silkin, P. M. Echenique, K. P. Bohnen, E. V. Chulkov, and W.-D. Schneider, Phys. Rev. B 80, 081409 (2009).", "[7] D. Eom, S. Qin, M. Y. Chou, and C. K. Shih, Phys. Rev. Lett. 96, 027005 (2006).", "[8] K. Wang, X. Zhang, M. M. T. Loy, T.-C. Chiang, and X. Xiao, Phys. Rev. Lett. 102, 076801 (2009).", "[9] C. C. Hsu, W. H. Lin, Y. S. Ou, W. B. Su, C. S. Chang, C. I. Wu, and T. T. Tsong, Surf. Sci. 604, 1 (2010).", "[10] D. A. Fokin, S. I. Bozhko, V. Dubost, F. Debontridder, A. M. Ionov, T. Cren, and D. Roditchev, Phys. Status Solidi C 7, 165 (2010).", "[11] C.-S. Jiang, S.-C. Li, H.-B. Yu, D. Eom, X.-D. Wang, Ph. Ebert, J.-F. Jia, Q.-K. Xue, and C.-K. Shih, Phys. Rev. Lett. 92, 106104 (2004).", "[12] T. Cren, D. Fokin, F. Debontridder, V. Dubost, and D. Roditchev, Phys. Rev. Lett. 102, 127005 (2009).", "[13] S. A. Moore, J. Fedor, and M. Iavarone, Supercond. Sci. Technol. 28, 045003 (2015).", "[14] V. Cherkez, J. C. Cuevas, C. Brun, T. Cren, G. Menard, F. Debontridder, V. S. Stolyarov, and D. Roditchev, Phys. Rev. X 4, 011033 (2014).", "[15] M. Jalochowski and E. Bauer, Phys. Rev. B 38, 5272 (1988).", "[16] N. Miyata, K. Horikoshi, T. Hirahara, S. Hasegawa, C. M. Wei, and I. Matsuda, Phys. Rev. B 78, 245405 (2008).", "[17] J. H. Dil, J. W. Kim, Th. Kampen, K. Horn, and A. R. H. F. Ettema, Phys. Rev. B 73, 161308 (2006).", "[18] A. Mans, J. H. Dil, A. R. H. F. Ettema, and H. H. Weitering, Phys. Rev. B 66, 195410 (2002).", "[19] M. Milun, P. Pervan, and D. P. Woodruff, Rep. Prog. Phys. 65, 99 (2002).", "[20] D. A. Ricci, Y. Liu, T. Miller, and T.-C. Chiang, Phys. Rev. B 79, 195433 (2009).", "[21] B. Slomski, F. Meier, J. Osterwalder, and J. H. Dil, Phys. Rev. B 83, 035409 (2011).", "[22] T.-C. Chiang, Surf. Sci. Rep. 39, 181 (2000).", "[23] I. B. Altfeder, X. Liang, T. Yamada, D. M. Chen, and V. Narayanamurti, Phys. Rev. Lett. 92, 226404 (2004).", "[24] D. A. Papaconstantopoulos, Handbook of the Band Structure of Elemental Solids: From Z=1 to Z=112, 2nd ed., Springer, 655 p. (2015)." ] ]
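The quantum-well estimates above (the Fermi velocity $v_F$, the wave vector $k_F$, the effective mass $m^*$, and the Fermi wavelength $\lambda_F$) can be checked with a short numerical sketch. The Pb(111) interlayer spacing $d_{ML}=0.286$ nm used below is an assumed value: it is consistent with the quoted numbers but is not stated explicitly in the text.

```python
import math

# Experimental least-squares slope of 1/DeltaE versus N quoted in the text.
A = 0.0753            # eV^-1
d_ML = 0.286e-9       # m; Pb(111) interlayer spacing (assumed value)
hbar_eV = 6.582e-16   # eV*s
hbar_J = 1.0546e-34   # J*s
m0 = 9.1e-31          # kg, free-electron mass

# Fermi velocity from d(DeltaE^-1)/dN = d_ML/(pi*hbar*v_F).
v_F = d_ML / (math.pi * hbar_eV * A)               # m/s
# The zero-bias peak for N = 17 with l0 = 29 half-waves fixes k_F.
k_F = math.pi * 29 / (d_ML * (17 + 3))             # 1/m
m_eff = hbar_J * k_F / v_F                         # kg
lam_F = 2 * math.pi / k_F                          # m
dE_wetting = 1 / (3 * A)                           # eV, level spacing of the wetting layer

print(f"v_F = {v_F * 100:.3g} cm/s")               # ~1.8e8 cm/s
print(f"k_F = {k_F / 1e9:.4g} nm^-1")              # ~15.93 nm^-1
print(f"m*/m0 = {m_eff / m0:.3g}")                 # ~1.0
print(f"lambda_F = {lam_F * 1e9:.3g} nm")          # ~0.394 nm
print(f"DeltaE(wetting) = {dE_wetting:.2g} eV")    # ~4.4 eV
```

All printed values reproduce the estimates quoted in the text within rounding.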
1906.04472
[ [ "Low Rank Approximation at Sublinear Cost" ], [ "Abstract Low Rank Approximation (LRA) of an m-by-n matrix is a hot research subject, fundamental for Matrix and Tensor Computations and Big Data Mining and Analysis.", "Computations with LRA can be performed at sublinear cost -- by using much fewer than mn memory cells and arithmetic operations -- but can we compute LRA at sublinear cost?", "Yes and no.", "No, because the spectral, Frobenius, and all other norms of the error matrix of LRA output by any sublinear cost deterministic or randomized algorithm exceed their minimal values for LRA by infinitely large factors for the worst case input and even for the inputs from the small families of our Appendix.", "Yes, because for about two decades Cross-Approximation (C-A) iterations, running at sublinear cost, have been consistently computing close LRA worldwide.", "We provide new insight into that \"yes\" and \"no\" coexistence by identifying C-A iterations as recursive sketching algorithms for LRA that use sampling test matrices and run at sublinear cost.", "As we prove in good accordance with our numerical tests, already at a single recursive step they compute close LRA, except for a narrow class of hard inputs, which tends to shrink in the recursive process.", "We also discuss enhancing the power of sketching by means of using leverage scores."
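The "yes" side of the abstract can be made concrete with a toy computation: a single cross-approximation-style step that reads only a few rows and columns of the input. This is an illustrative sketch (synthetic data, random row and column sampling, and a regularized pseudo inverse of the cross are all assumptions of the example), not the paper's algorithm verbatim:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, l = 500, 400, 5, 15

# Synthetic input that admits a close rank-r approximation.
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) \
    + 1e-6 * rng.standard_normal((m, n))

# Read only l columns, l rows, and their l-by-l cross of M:
# about (m + n) * l entries in total, far fewer than m * n.
J = rng.choice(n, l, replace=False)              # sampled column indices
I = rng.choice(m, l, replace=False)              # sampled row indices
C, R = M[:, J], M[I, :]
# Regularized pseudo inverse of the cross (drops tiny singular values).
U = np.linalg.pinv(M[np.ix_(I, J)], rcond=1e-3)
M_hat = C @ U @ R                                # rank <= l approximation

err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
print(f"relative Frobenius error: {err:.1e}")
```

For the adversarial inputs of the "no" side (e.g., a matrix whose only large entry sits at an unsampled position), the same procedure fails badly; this is exactly the coexistence the abstract describes.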
], [ "Introduction", "1.1.", "Background for LRA.", "Low rank approximation (LRA) of a matrix is a hot research area of Numerical Linear Algebra and Computer Science with applications to fundamental matrix and tensor computations and data mining and analysis (see the surveys [21], [8], [26], [22], [4], [27], [48], [50], and pointers to huge bibliography therein).", "Next we outline our study of LRA, to be formalized later.It is customary in this field to rely on a very informal basic concept of low rank approximation; in high level description of our LRA algorithms we also use some other informal concepts such as “large\", “small\", “ill-\" and “well-conditioned\", “near\", “close\", and “approximate\", quantified in context; we complement this description with formal presentation and analysis.", "The size of the matrices defining Big Data (e.g., unfolding matrices of multidimensional tensors) is frequently so large that realistically only a small fraction of all their entries can fit computer memory, although quite typically these matrices admit their LRA.", "One can operate with low rank matrices at sublinear computational cost, but can we compute LRA at sublinear cost?", "Yes and no.", "No, because the spectral, Frobenius, and all other norms of the error matrix of LRA output by any sublinear cost deterministic or randomized algorithm exceed their minimal values for $LRA$ by infinitely large factors for the worst case input and even for the inputs from the small families of Appendix .", "Yes, because for about two decades Cross-Approximation (C-A)This concept has been coined in [44].", "(aka Adaptive Cross-Approximation) iterations, running at sublinear cost (see [1], [5], [2], [17], [31], [30]), have been consistently computing close LRA worldwide.", "Some nontrivial supporting techniques have been developed for these iterations, but they only partly explain this “yes\" and “no\" coexistence.", "1.2.", "Sketching algorithms – briefly.", "To provide a new 
insight into it we study another class of popular algorithms for LRA, called sketching algorithms.", "With a high probability (whp) they compute a nearly optimal rank-$r$ approximation of an $m\times n$ matrix $M$ by using auxiliary matrices $MH$ with $n\times l$ random dense multipliers $H$ , called test matrices, for $l$ a little exceeding $r$ .In [47] one chooses a $k \times m$ left test matrix $F$ for $r<k<m$ and then computes an auxiliary matrix $FMH$ .", "In Section REF we do not choose a matrix $F$ but compute it as a submatrix of a permutation matrix that minimizes the spectral norm $||(FMH)^+||_2$ of the Moore–Penrose pseudo inverse of the matrix $FMH$ for a given matrix $MH$ .", "(Alternatively, one could have minimized the Frobenius norm $||(FMH)^+||_F$ .)", "Namely, the spectral and Frobenius norms of the output error matrices of these algorithms are within a small constant factor of the optimal ones, defined by the truncated SVD of an input matrix (see (REF )),Various randomized sketching algorithms compute LRA that whp reaches an optimal Frobenius or even spectral error norm bound within a factor of $1+\epsilon $ from the optimum for any fixed tolerance $\epsilon >0$ (cf.", "[21]), but the proofs of such bounds involve sketches of large sizes (see [47]).", "The sketching algorithm of [47] avoids such a deficiency at the price of increasing the factor $1+\epsilon $ a little, say, to 2, while the algorithms of [11] support a proof of nearly optimal bounds for LRA and empirically work by using sketches of quite reasonable sizes.", "provided that the test matrix $H$ lies neither in nor near the span of the matrix $V_1$ of the $r$ top right singular vectors of $M$ associated with its $r$ top (largest) singular values, that is, provided that the $r$ th top singular value $\sigma _r(V_1^*H)=1/||(V_1^*H)^+||_2$ is not small.", "This provision is satisfied whp for any matrix $M$ of numerical rank $r$ if $H$ is a Gaussian random, Rademacher's, SRHT, or SRFT matrix.Here
and hereafter “Gaussian\" stands for “standard Gaussian (normal) random\"; “SRHT and SRFT\" are the acronyms for “Subsampled Random Hadamard and Fourier transforms\"; Rademacher's are the matrices filled with independent identically distributed (iid) variables, each equal to 1 or $-1$ with probability 1/2.", "1.3.", "Dual sketching algorithms.", "The cited sketching algorithms involve all $mn$ entries of $M$ , and our goal is to get the same effect by processing much fewer than $mn$ entries of $M$ .", "One can explore a variety of ways towards this goal.", "Maybe the simplest way is to randomly sample $l$ columns of the matrix $M$ or equivalently to let $H$ be the $n\\times l$ leftmost submatrix of a random $n\\times n$ permutation matrix under a fixed randomization model.", "It is easy to see that in this case only $nl$ elements of $M$ are accessed, and the calculation itself is given \"for free\".", "Obviously, not for all inputs $M$ the matrix $MH$ lies whp near the $r$ -top subspace, generated by the $r$ top right singular vectors of $M$ , but in our numerical experiments with various sparse test matrices $H$ we quite consistently output close LRA.", "To explain these test results and to construct a theory, we introduce a probabilistic structure in the space of input matrices $M$ , thus defining dual sketching in comparison to customary primal one, where this matrix is fixed.", "Since the output error norm of such a sketching algorithm is essentially defined by the product $V_1^*H$ , we first assume that $V_1^*$ is a Q factor of QR or QRP factorization of a Gaussian matrix, $H$ has orthonormal columns, and $l-r$ is reasonably large.", "Then we prove that whp a sketching algorithm outputs LRA whose both Frobenius and spectral error norms are within a factor of $\\sqrt{1 + 16~n/l}$ from their optima; this is essentially Theorem REF – our basic result.", "While this choice of probabilistic structure is most relevant to the estimation of the output errors of a 
sketching algorithm, it is not most relevant to LRA computation in the real world.", "In Section REF we estimate the errors of LRA by means of dual sketching under a rather natural model: we fix a test matrix $H$ with orthonormal columns and let an $m \times n$ input matrix of rank $r$ be defined as a matrix of the form $M = A\Sigma B +E$ for a perturbation matrix $E$ and for $A\Sigma B$ being a random pseudo SVD of a rank-$r$ matrix of size $m\times n$ .", "Namely, we let $\Sigma $ be an $r\times r$ diagonal matrix with $r$ positive diagonal entries (as in SVD) and let $A$ and $B$ be scaled Gaussian matrices, rather than orthogonal matrices of singular vectors of SVD.By saying “LRA\" we assume that $r$ is much smaller than $\min \lbrace m,n\rbrace $ ; then we can motivate the above definition of random pseudo SVD by recalling (e.g., from [13] or [41]) that $\frac{1}{\sqrt{k}}G$ is close to an orthogonal matrix whp for $r\ll k$ and a $k\times r$ Gaussian matrix $G$ .", "See another motivation of independent interest in Appendix .", "We call such a matrix $M$ a perturbation of a two-sided factor-Gaussian matrix (with expected rank $r$ ), but most of our study (in particular Theorem REF and our main result, recalled below) applies to a more general class of perturbed right factor-Gaussian matrices (with expected rank $r$ ) of the form $AG+E$ where $G$ is an $r\times n$ Gaussian matrix and $A$ is an $m \times r$ matrix of full rank $r$ .", "Under a rather mild assumption about the spectrum of the singular values of an input matrix $M$ we prove that our sublinear cost sketching algorithm outputs a close rank-$r$ approximation whp on the defined probability space: namely, our bound on the output errors under this randomization model only increases from a factor of $\sqrt{1 + 16~n/l}$ versus the optimum to a factor of $\sqrt{1 + 100~n/l}$ , and this is essentially Theorem REF – our main result!", "Both Theorems REF and REF apply to approximation by matrices of any
fixed rank $r$ , and we refer to LRA rather than to rank-$r$ approximation simply because we compute such an approximation at the cost of order $r^2\max \lbrace m,n\rbrace $ , which only becomes sublinear when $r^2=o(\min \lbrace m,n\rbrace )$ .", "1.4.", "Concurrent and recursive sketching algorithms.", "How meaningful is the latter result?", "Our definitions in Theorems REF and REF for the classes of random matrices that admit their LRA are quite natural for various real world applications of LRA,In a natural variation one can substitute the orthogonal factors Q of QR or another orthogonal factorization of independent Gaussian matrices for the Gaussian factors $A$ and $B$ in our definitions of factor-Gaussian matrices; Theorem REF can be readily extended under this model.", "but are odd for some other ones, and this is the case for any relevant definition of that kind.", "In spite of such odds our cited numerical tests for both synthetic and real world inputs, some from [21], are in good accordance with our formal study.", "It is plausible that for input matrices of a narrow class the output error norms can be much greater than both in our tests and under our models, but we can try to further narrow this class by reapplying the same algorithm with distinct pairs of test matrices $F$ and $H$ .", "We can do this concurrently for various pairs $F$ and $H$ or recursively, by alternating the computation of right and left test matrices $H$ and $F$ , respectively.", "Given a right test matrix $H$ , one can compute a left test matrix $F$ by trying to minimize the spectral or Frobenius norm of the matrix $(FMH)^+$ (see footnote $^3$ ), but given a left test matrix $F$ we can similarly compute a right test matrix $H$ by trying to minimize such a norm.", "By recursively alternating this recipe we naturally arrive at a recursive dual sketching process.", "In the case of sampling (subpermutation) test matrices $F$ and $H$ it turns into C-A iterations!", "This simple
observation seems to be novel and provides a new insight: so far the popular LRA algorithms by means of sketching and by means of C-A iterations have been supported by distinct well-developed techniques and distinct numerical tests, and our current work reveals that all these algorithms can be studied as a single class of recursive sketching LRA algorithms, running at sublinear cost for most of inputs that admit their LRA.", "Our progress should motivate new effort in the field of sublinear cost LRA.", "In Section we briefly discuss this issue and focus on enhancing the power of recursive sketching by means of using leverage scores of [11].", "1.5.", "Quality of dual LRA and its iterative refinement.", "Our upper bounds of Theorems REF and REF on the output error norm of LRA are too high for practical use and seem to be of only qualitative interest.", "Already these dual upper bounds, however, are significantly smaller than the known best upper bounds on the output error norms of C-A iterations, are much smaller than the error norms of any sublinear cost algorithm for the worst case input, and greatly exceed nearly optimal errors consistently reached empirically in C-A iterations; our bounds also tend to exceed even the output error norms observed in our numerical tests.", "Furthermore, if the optimal bound for $M$ is small enough, then the computed LRA is close enough in order to allow us to initialize an iterative refinement LRA algorithm, for which we can apply the algorithms of [35] and [25]; both of them run at sublinear cost, and the algorithm of [25] is actually randomized recursive sketching (see the end of Section ).", "1.6.", "Some earlier works.", "In the immense bibliography on LRA (see some pointers to it in the very first sentence of this Introduction), we recall the papers [21] and [47], from which we departed towards our slightly different (dual) understanding of sketching LRA algorithms; our duality approach continues the earlier study in [39], 
[40], [36], and [37].", "It was proved in [24] that a single two-step non-degenerating C-A loop outputs a close LRA of a matrix that admits a sufficiently close LRA – much closer than we require in Theorem REF .", "1.7.", "Organization of the paper.", "Section is devoted to background on matrix computations.", "In Section we recall sketching algorithms for LRA.", "In Section we recall deterministic error bounds for the outputs of these algorithms.", "In Section we prove error bounds for our dual LRA algorithms – our main results.", "In Section we cover numerical tests, the contribution of the third and fourth authors.", "In Section we comment on concurrent and recursive application of sketching algorithms and their link to C-A iterations.", "In the concluding Section we comment on some directions of further study, in particular by means of randomization with the leverage scores of [11].", "In Appendix we recall the known estimates for the norms of a Gaussian matrix and its pseudo inverse.", "In Appendix we prove that pre-processing with Gaussian multipliers turns any matrix that allows its LRA into a perturbed factor-Gaussian matrix.", "In Appendix we recall the error bounds for some known sketching algorithms.", "In Appendix we specify some small families of input matrices on which any sublinear cost LRA algorithm fails.", "In Appendix we generate two important families of sparse test matrices.", "In Appendix we estimate the volume and $r$ -projective volume of a factor-Gaussian matrix, which are basic parameters in the study of C-A iterations."
], [ "Some definitions", "“$\ll $ \" and “$\gg $ \" mean “much less than\" and “much greater than\", respectively.", "“Flop\" stands for “floating point arithmetic operation\".", "$\mathbb {R}^{p\times q}$ and $\mathbb {C}^{p\times q}$ denote the classes of $p\times q$ real and complex matrices, respectively.", "We assume dealing with real matrices throughout, except for the matrices defined by Fourier transform in Section , and so apart from that section the Hermitian transpose $M^*$ of $M$ turns into its transpose $M^T$ .", "A “perturbation of a matrix\" means a matrix perturbation having a small relative norm unless we specify otherwise.", "An $m\times n$ matrix $M$ is called unitary provided that $M^*M=I_n$ or $MM^*=I_m$ .", "A real unitary matrix is said to be orthogonal or orthonormal.", "$||\cdot ||_2$ and $||\cdot ||_F$ denote the spectral and Frobenius matrix norms, respectively; we write $|||\cdot |||$ where a property holds for both of these norms (cf.", "[21]).", "A (compact) singular value decomposition (SVD) of an $m\times n$ matrix $M$ of rank $\rho $ (cf.", "[3]) is the decomposition $M=U\Sigma V^*$ where $\Sigma =\operatorname{diag}(\sigma _j)_{j=1}^{\rho }$ is the diagonal matrix of the singular values of $M$ , $\sigma _1\ge \sigma _2\ge \dots \ge \sigma _\rho >0$ , and $U\in \mathbb {C}^{m\times \rho }$ and $V\in \mathbb {C}^{n\times \rho }$ are two unitary (orthogonal) matrices of the associated left and right singular spaces, respectively.", "$M_{r}$ is the rank-$r$ truncation, obtained from a rank-$\rho $ matrix $M$ for $\rho \ge r$ by setting $\sigma _j(M)=0$ for $j>r$ .", "Its SVD is said to be the $r$-top SVD of $M$ (cf.", "[11]); $M_r$ is a rank-$r$ approximation of $M$ whose spectral and Frobenius error norms $\tilde{\sigma }_{r+1}(M):=|||M-M_r|||$ are minimal (Eckart–Young–Mirsky theorem) (cf.", "[16]); $\tilde{\sigma }_{r+1}(M)= \sigma _{r+1}(M)$ under spectral norm and
$\\tilde{\\sigma }_{r+1}(M)^2=\\sum _{j>r}\\sigma _{j}(M)^2$ under Frobenius norm.", "$\\operatorname{rank}(M)$ denotes the rank of a matrix $M$ .", "$\\epsilon $ -$\\operatorname{rank}(M)$ is argmin$_{|||E|||\\le \\epsilon |||M|||}\\operatorname{rank}(M+E)$ ; it is called numerical rank, $\\operatorname{nrank}(M)$ if a tolerance $\\epsilon $ is fixed in context, typically being linked to machine precision or the level of relative errors of the computations (see [16]).", "$M^+$ denotes the Moore – Penrose pseudo inverse of $M$ .", "For a matrix $M=(m_{i,j})_{i,j=1}^{m,n}$ and two sets $\\mathcal {I}\\subseteq \\lbrace 1,\\dots ,m\\rbrace $ and $\\mathcal {J}\\subseteq \\lbrace 1,\\dots ,n\\rbrace $ , define the submatrices $M_{\\mathcal {I},:}:=(m_{i,j})_{i\\in \\mathcal {I}; j=1,\\dots , n},M_{:,\\mathcal {J}}:=(m_{i,j})_{i=1,\\dots , m;j\\in \\mathcal {J}},~{\\rm and}~M_{\\mathcal {I},\\mathcal {J}}:=(m_{i,j})_{i\\in \\mathcal {I};j\\in \\mathcal {J}}.$ $\\textrm {Span}(M_{1,:}^T, M_{2,:}^T, ..., M_{m,:}^T )$ denotes the row space of a matrix $M = (m_{i,j})_{i,j=1}^{m,n}=(M_{i,:}^T)_{i=1}^m=(M_{:,j})_{j=1}^n,$ and $\\textrm {Span}(M_{:, 1}, M_{:, 2}, ..., M_{:, n})$ denotes its column space." ], [ "Auxiliary results", "Lemma 2.1 [The norm of the pseudo inverse of a matrix product (cf.", "[3]).]", "Suppose that $A\\in \\mathbb {R}^{k\\times r}$ , $B\\in \\mathbb {R}^{r\\times l}$ , and the matrices $A$ and $B$ have full rank $r\\le \\min \\lbrace k,l\\rbrace $ .", "Then $|||(AB)^+|||\\le |||A^+|||~~|||B^+|||$ .", "Lemma 2.2 [The impact of a perturbation of a matrix on its singular values (see [16]).]", "For $m\\ge n$ and a pair of ${m\\times n}$ matrices $M$ and $M+E$ it holds that $|\\sigma _j(M+E)-\\sigma _j(M)|\\le ||E||_2~{\\rm for}~j=1,\\dots ,n. 
$ Lemma 2.3 (The impact of a perturbation of a matrix on its singular space, adapted from [18].)", "Let $M$ be an $m\\times n$ matrix of rank $r < \\min (m, n)$ where $M =\\begin{bmatrix}U_r & U_{\\perp }\\end{bmatrix}~\\begin{bmatrix}\\Sigma _r & 0 \\\\0 & 0\\end{bmatrix}~\\begin{bmatrix}V_r^T \\\\V_{\\perp }^T\\end{bmatrix}$ is its SVD, and let $E$ be a perturbation matrix such that $\\delta = \\sigma _r(M) - 2~||E||_2 > 0~{\\rm and}~||E||_F \\le \\frac{\\delta }{2}.$ Then there exists a matrix such that $P\\in \\mathbb {R}^{(n-r)\\times r}$ , $||P||_F < 2~\\frac{||E||_F}{\\delta } < 1$ , and the columns of the matrix $\\tilde{V} = V_r + V_{\\perp }P$ span the right leading singular subspace of $\\tilde{M} = M + E$ .", "Remark 2.1 Matrix $\\tilde{V}$ from the above does not necessarily have orthogonal columns, but one can readily prove that the matrix $(V_r + V_{\\perp }P)(I_r + P^TP)^{-1/2}$ has orthonormal columns, i.e.", "$(I_r + P^TP)^{-1/2}$ normalizes $\\tilde{V}$ ." ], [ "Gaussian and factor-Gaussian\nmatrices", "Definition 2.1 A matrix is Gaussian if all its entries are iid standard Gaussian variables.", "Hereafter we consider constant matrices (as opposed to random matrices) to be matrices all of whose entries are constants (rather than random variables), and we specify which matrices are constant whenever confusion may arise.", "Theorem 2.1 [Non-degeneration of a Gaussian matrix.]", "Suppose that $M\\in \\mathbb {R}^{p\\times q}$ is a constant matrix, $r\\le \\operatorname{rank}(M)$ , and $F$ and $H$ are $r\\times p$ and $q\\times r$ independent Gaussian matrices, respectively.", "Then the matrices $F$ , $H$ , $FM$ , and $MH$ have full rank $r$ with probability 1.", "Rank deficiency of matrices $H$ , $FM$ , and $MH$ is equivalent to turning into 0 the determinants $\\det (FF^T)$ , $\\det (H^TH)$ , $\\det ((MH^T)MH)$ , and $\\det (FM(FM)^T)$ , respectively.", "The claim follows because these equations define algebraic varieties of lower dimension in the 
linear space of the entries, considered as independent variables (cf., e.g., [6]).", "Remark 2.2 Hereafter, whenever the rank deficiency of such matrices with probability 0 is immaterial for deducing our probability estimates, we condition on the event that they do have full rank.", "Lemma 2.4 [Orthogonal Invariance.]", "[46].", "Suppose that $G$ is an $m\\times n$ Gaussian matrix and that $S\\in \\mathbb {R}^{k\\times m}$ and $T\\in \\mathbb {R}^{n\\times k}$ , for $k\\le \\min \\lbrace m,n\\rbrace ,$ are constant matrices with orthonormal rows and orthonormal columns, respectively.", "Then $SG$ and $GT$ are random matrices and have the distribution of $k\\times n$ and $m\\times k$ Gaussian random matrices, respectively.", "Definition 2.2 [Factor-Gaussian matrices.]", "Let $A\\in \\mathbb {R}^{m\\times r}$ , $B\\in \\mathbb {R}^{r\\times n}$ , and $C\\in \\mathbb {R}^{r\\times r}$ be three constant well-conditioned matrices of full rank $r<\\min \\lbrace m,n\\rbrace $ .", "Let $G_1$ and $G_2$ be $m\\times r$ and $r\\times n$ independent Gaussian matrices, respectively.", "Then we call the matrices $G_1B$ , $AG_2$ , and $G_1 C G_2$ left, right, and two-sided factor-Gaussian matrices of rank $r$ , respectively.", "Theorem 2.2 The distribution of two-sided $m\\times n$ factor-Gaussian matrices $G_{m,r} C G_{r,n}$ does not change if in its definition we replace the factor $C$ by an appropriate diagonal matrix $\\Sigma =\\operatorname{diag}(\\sigma _j)_{j=1}^r$ such that $\\sigma _1\\ge \\sigma _2\\ge \\dots \\ge \\sigma _r>0$ .", "Let $C=U_C\\Sigma _C V_C^*$ be an SVD.", "Then $A=G_{m,r}U_C$ and $B=V_C^*G_{r,n}$ have the distributions of independent $m\\times r$ and $r\\times n$ Gaussian matrices, respectively, by virtue of Lemma REF .", "Hence $G_{m,r} C G_{r,n}$ has the same distribution as $A\\Sigma _C B$ ."
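As a concrete illustration of Definition 2.2 and of the full-rank claim of Theorem 2.1, here is a minimal numpy sketch (the sizes $m$, $n$, $r$ and the diagonal middle factor are illustrative assumptions) that generates a two-sided factor-Gaussian matrix and confirms that it has rank exactly $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 5

# Two independent Gaussian factors and a well-conditioned r x r middle factor.
G1 = rng.standard_normal((m, r))
G2 = rng.standard_normal((r, n))
C = np.diag(np.linspace(2.0, 1.0, r))

M = G1 @ C @ G2          # two-sided factor-Gaussian matrix of rank r
assert np.linalg.matrix_rank(M) == r

# Per Theorem 2.2, replacing C by the diagonal factor of its SVD leaves the
# distribution unchanged; here C is already diagonal with decreasing entries.
```

The rank assertions hold with probability 1, in agreement with Theorem 2.1.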
], [ "Algorithm 3.1 Range Finder [21] (see Remark REF ).", "Input: An $m\\times n$ matrix $M$ and a target rank $r$ .", "Output: Two matrices $X\\in \\mathbb {R}^{m\\times l}$ and $Y\\in \\mathbb {R}^{l\\times n}$ defining an LRA $\\tilde{M}=XY$ of $M$ .", "Initialization: Fix an integer $l$ , $r\\le l\\le n$ , and an $n\\times l$ matrix $H$ of full rank $l$ .", "Computations: Compute the $m\\times l$ matrix $MH$ .", "Fix a nonsingular $l\\times l$ matrix $T^{-1}$ and output the $m\\times l$ matrix $X:=MHT^{-1}$ .", "Output an $l\\times n$ matrix $Y:= {\\rm argmin}_V ~|||XV-M|||=X^+M$ .", "Column sketching turns into column subset selection if $H$ is a sampling matrix, that is, a full rank submatrix of a permutation matrix.", "Algorithm 3.2 Transposed Range Finder.", "Input: As in Algorithm REF .", "Output: Two matrices $X\\in \\mathbb {R}^{k\\times n}$ and $Y\\in \\mathbb {R}^{m\\times k}$ defining an LRA $\\tilde{M}=YX$ of $M$ .", "Initialization: Fix an integer $k$ , $r\\le k\\le m$ , and a $k\\times m$ matrix $F$ of full numerical rank $k$ .", "Computations: Compute the $k\\times n$ matrix $FM$ .", "Fix a nonsingular $k\\times k$ matrix $S^{-1}$ ; then output the $k\\times n$ matrix $X:=S^{-1}FM$ .", "Output an $m\\times k$ matrix $Y:= {\\rm argmin}_V ~|||VX-M|||=MX^+$ .", "Row sketching turns into row subset selection if $F$ is a sampling matrix.", "The following algorithm combines row and column sketching.", "For $S$ being the $k\\times k$ identity matrix $I_k$ , it turns into the algorithm of [47], whose origin is traced back to [49].", "Algorithm 3.3 Row and Column Sketching.", "Input: As in Algorithm REF .", "Output: Two matrices $X\\in \\mathbb {R}^{m\\times l}$ and $Y\\in \\mathbb {R}^{l\\times n}$ defining an LRA $\\tilde{M}=XY$ of $M$ .", "Initialization: Fix two integers $k$ and $l$ , $r\\le k\\le m$ and $r\\le l\\le n$ ; fix two matrices $F\\in \\mathbb {R}^{k\\times m}$ and $H\\in \\mathbb {R}^{n\\times l}$ of full numerical ranks and two nonsingular
matrices $S\\in \\mathbb {R}^{k\\times k}$ and $T\\in \\mathbb {R}^{l\\times l}$ .", "Computations: 1.", "Output the matrix $X=MHT^{-1 }\\in \\mathbb {R}^{m\\times l}$ .", "2.", "Compute the matrices $U:=S^{-1}FM\\in \\mathbb {R}^{k\\times n}$ and $W:=S^{-1}FX\\in \\mathbb {R}^{k\\times l}$ .", "3.", "Output the $l\\times n$ matrix $Y:={\\rm argmin}_V |||WV-U|||=W^+U$ .", "Remark 3.1 In Algorithms REF , REF , and REF , $XY=MH(MH)^+M$ , $YX=M(FM)^+FM$ , and $XY=MH(FMH)^+FM$ , respectively, independently of the choice of the nonsingular matrices $S$ and $T$ if the matrices $MH$ , $FM$ , and $FMH$ have full ranks $l$ , $k$ , and $\\min \\lbrace k,l\\rbrace $ , respectively, but the choice of the matrices $S$ and $T$ affects the conditioning of the matrices $X=MHT^{-1}$ , $X=S^{-1}FM$ , and $W=S^{-1}FMHT^{-1}$ .", "E.g., they are ill-conditioned if $S=I_k$ , $T=I_l$ , $l>r$ , and $k>r$ , but $X$ is orthogonal in Algorithm REF if $T$ is the factor $R$ in the thin QR factorization of $MH$ , as well as if $T=R\\Pi $ where $R$ and $\\Pi $ are the factors of a rank-revealing QR$\\Pi $ factorization of $MH$ .", "Remark 3.2 By applying Algorithm REF to the transpose matrix $M^*$ we obtain Algorithm 3.4.", "It begins with column sketching followed by row sketching.", "We only study Algorithms REF and REF for an input $M$ , but they turn into Algorithms REF and 3.4 for the input $M^*$ ."
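Algorithm 3.1 can be sketched in a few lines of numpy; following Remark 3.1 we take $T$ to be the R factor of the thin QR factorization of $MH$, so that $X$ is orthogonal (the Gaussian test matrix and all sizes are illustrative assumptions):

```python
import numpy as np

def range_finder(M, l, rng=None):
    """Minimal sketch of Algorithm 3.1: X = MH T^{-1}, Y = X^+ M.

    T is taken to be the R factor of the thin QR factorization of MH,
    which makes X orthogonal (cf. Remark 3.1)."""
    rng = rng or np.random.default_rng(0)
    n = M.shape[1]
    H = rng.standard_normal((n, l))   # dense Gaussian test matrix (assumed)
    Q, R = np.linalg.qr(M @ H)        # MH = QR, so X = MH R^{-1} = Q
    X = Q
    Y = X.T @ M                       # X^+ = X^T since X is orthogonal
    return X, Y

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))  # rank 8
X, Y = range_finder(A, l=8, rng=rng)
assert np.allclose(X @ Y, A)          # XY = M when rank(M) = r = l
```

When $\operatorname{rank}(M)=r=l$ the output satisfies $XY=M$ exactly, since $XY$ is the orthogonal projection of $M$ onto the range of $MH$.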
], [ "Deterministic error bounds of Range Finder", "Next we recall the known estimates for the output errors of Algorithm REF for any input under both spectral and Frobenius matrix norms.", "Theorem 4.1 [21].", "Suppose that Algorithm REF has been applied to a matrix $M$ with a test matrix $H$ and let $M=\\begin{pmatrix} U_1& U_2\\end{pmatrix}\\begin{pmatrix}\\Sigma _1& \\\\&\\Sigma _2 \\end{pmatrix}\\begin{pmatrix}~V_1^*\\\\~V_2^*\\end{pmatrix}~~{\\rm and}~M_r=U_1\\Sigma _1 V_1^*$ be SVDs of the matrices $M$ and its rank-$r$ truncation $M_r$ , respectively.", "[$\\Sigma _2=O$ and $XY=M$ if $\\operatorname{rank}(M)=r$ .", "The $r$ columns of $V_1$ are the $r$ top right singular vectors of $M$ .]", "Write $C_1=V^*_1H,~C_2=V^*_2H.$ Assume that $||H||_2 \\le 1$ and $\\operatorname{rank}(C_1) = r$ .", "Then $|||M-XY|||^2\\le |||\\Sigma _2|||^2+|||\\Sigma _2C_2C_1^+|||^2.$ Corollary 4.1 Under the assumptions of Theorem REF it holds that $|||M-XY|||/\\tilde{\\sigma }_{r+1}(M) \\le (1+|||C_1^+|||^2)^{1/2} ~{\\rm for}~C_1=V_1^*H$ and for $\\tilde{\\sigma }_{r+1}(M)$ of (REF ).", "The corollary follows from (REF ) because $|||\\Sigma _2|||=\\tilde{\\sigma }_{r+1}(M),~ |||C_2|||\\le 1,~{\\rm and}~|||\\Sigma _2C_2C_1^+|||\\le |||\\Sigma _2|||~~|||C_2|||~~|||C_1^+|||.$ (REF ) implies that the output LRA is optimal under both spectral and Frobenius matrix norms up to a factor of $(1+|||C_1^+|||^2)^{1/2}$ ." 
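The bound of Corollary 4.1 is deterministic, so it can be checked numerically; the sketch below (spectral norm; all sizes and the spectrum of $M$ are illustrative assumptions) builds $M$ from a prescribed SVD, applies Range Finder with a test matrix $H$ scaled so that $||H||_2\le 1$, and compares the error with the bound:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r, l = 60, 40, 5, 8

# Matrix with a prescribed spectrum: fast decay after the first r values.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([1 / (j + 1) for j in range(r)] + [1e-6] * (n - r))
M = U @ np.diag(s) @ V.T              # singular values s, right vectors V

H = rng.standard_normal((n, l))
H /= np.linalg.norm(H, 2)             # enforce ||H||_2 <= 1

Q, _ = np.linalg.qr(M @ H)            # X with orthonormal columns
err = np.linalg.norm(M - Q @ (Q.T @ M), 2)

C1 = V[:, :r].T @ H                   # C1 = V_1^* H
bound = s[r] * np.sqrt(1 + np.linalg.norm(np.linalg.pinv(C1), 2) ** 2)
assert err <= bound                   # Corollary 4.1, spectral norm
```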
], [ "Impact of pre-multiplication on the errors of ", "The following deterministic error bounds, contributed to the overall error bounds of Algorithm REF at the pre-multiplication stage, are dominated by the error bounds contributed by Range Finder.", "Theorem 4.2 [The impact of pre-multiplication on LRA errors.]", "Suppose that Algorithm REF outputs a matrix $XY$ for $Y=(FX)^+FM$ and that $m\\ge k\\ge l=\\operatorname{rank}(X)$ .", "Then $M-XY=W(M-XX^+M)~{\\rm for}~W=I_m-X(FX)^+F,$ $|||M-XY|||\\le |||W|||~~|||M-XX^+M|||,~~|||W|||\\le |||I_m|||+|||X|||~~|||F|||~~|||(FX)^+|||.$ Recall that $Y=(FX)^+FM$ and notice that $(FX)^+FX=I_l$ if $k\\ge l=\\operatorname{rank}(FX)$ .", "Therefore $Y=X^+M+(FX)^+F(M-XX^+M)$ .", "Consequently (REF ) and (REF ) hold.", "Next we complement the bounds on the norm $||M-XX^+M||_2$ of the previous subsection with bounds on the norms $||(FX)^+||_2$ and $||W||_2$ .", "For any fixed constant $h>1$ various known algorithms (see our pointers to them in Section REF ) output sampling matrices $F$ of size $k\\times m$ satisfying $||(FX)^+||_2\\le ||X^+||_2\\sqrt{(m-k)kh^2+1}.$ For $W=I_m-X(FX)^+F$ of (REF ), it follows that $||W||_2\\le 1+||X^+||_2\\sqrt{(m-k)kh^2+1}$ ; this upper bound turns into approximately $ ||X^+||_2\\sqrt{mk+1}$ for $m\\gg k$ and $h\\approx 1$ , although it was pessimistic in our numerical tests, where the output errors tended to decrease, frequently significantly, when we doubled or tripled $k$ .", "In particular [32] computes such a matrix $F$ in $O(mk^2)$ flops, that is, at sublinear cost for $k^2=o(n)$ ."
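The factorization $M-XY=W(M-XX^+M)$ of Theorem 4.2 is an algebraic identity whenever $FX$ has full column rank; here is a direct numerical check (all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, l, k = 50, 40, 6, 10               # k >= l

M = rng.standard_normal((m, n))
X = rng.standard_normal((m, l))          # any full-column-rank X
F = rng.standard_normal((k, m))          # FX has full rank l w.p. 1

Y = np.linalg.pinv(F @ X) @ (F @ M)      # Y = (FX)^+ FM
W = np.eye(m) - X @ np.linalg.pinv(F @ X) @ F
P = M - X @ np.linalg.pinv(X) @ M        # P = M - X X^+ M

assert np.allclose(M - X @ Y, W @ P)     # identity of Theorem 4.2
```

The check mirrors the proof: since $(FX)^+FX=I_l$, expanding $W(M-XX^+M)$ cancels the term $X(FX)^+FXX^+M = XX^+M$.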
], [ "Output error norm bounds for dual sketching algorithms", "Given the matrices $MHT^{-1}$ and $S^{-1}FM$ , we can perform Algorithm REF at an arithmetic cost in $O(kln)$ , which is sublinear if $kl=o(m)$ .", "If also $l^2=o(m)$ and $k^2=o(n)$ , then for proper sparse (e.g., sampling) test matrices $F$ and $H$ we can also compute the matrices $MHT^{-1}$ and $S^{-1}FM$ at sublinear cost and then perform the entire Algorithm REF at sublinear cost.", "Of course, in this case the algorithm has arbitrarily large output errors for a worst case input (cf.", "Appendix ), in spite of the reasonable upper bound at the pre-multiplication stage proved in the previous subsection.", "Next we prove that the Range Finder stage can fail only with a low probability if $H$ is any fixed, possibly sparse, test matrix of full rank and if a random input matrix $M$ admits LRA; we also estimate the output error norm in this case." ], [ "Auxiliary results", "We first deduce a lower bound on the smallest singular value of a matrix product $QH$ , where the rows of $Q$ form an orthonormal basis of a randomly generated linear subspace and where $H$ is any fixed full rank matrix, possibly sparse.", "Lemma 5.1 Suppose that $G$ and $H$ are $r\\times n$ and $n\\times l$ matrices, respectively, $r < l < n$ , $GH$ has full rank $r$ , and $Q$ is an $r\\times n$ matrix with orthonormal rows such that $Q$ and $G$ have the same row space.", "Then $\\sigma _r(QH) \\ge \\frac{\\sigma _r(GH)}{\\sigma _1(G)}.$ Without loss of generality assume that $G = RQ$ for a nonsingular matrix $R\\in \\mathbb {R}^{r\\times r}$ .", "Then $\\sigma _r(QH) = \\sigma _r(R^{-1}GH) \\ge \\sigma _r(GH)\\cdot \\sigma _r(R^{-1}).$ Furthermore $\\sigma _r(R^{-1}) = 1/\\sigma _1(R) = 1/\\sigma _1(G)$ because the matrices $R$ and $G$ share their singular values, and the lemma follows.", "Hereafter write $e:=2.71828182\\dots $ .", "Lemma 5.2 Suppose that $G$ is an $r\\times n$ Gaussian matrix, $H$ is an $n\\times l$ matrix with orthonormal columns, $n > l > r$ , $l \\ge 4$ , $Q$ is a matrix with
orthonormal rows, and $Q$ and $G$ share their row space.", "Fix two positive parameters $t_1$ and $t_2<1$ .", "Then $\\sigma _r(QH) \\ge \\frac{t_2}{e} \\cdot \\frac{l - r + 1}{\\sqrt{l}~\\big (t_1 + \\sqrt{r} + \\sqrt{n}\\big )} \\ $ with a probability no less than $1 - \\exp (-t_1^2/2) - (t_2)^{l-r}$ .", "As follows from the Orthogonal Invariance of Gaussian matrices (cf.", "Lemma REF ), $GH$ has the distribution of an $r\\times l$ Gaussian matrix, and so in this proof we assume that the matrix $GH$ has full rank (see Remark REF ).", "Next deduce from claim (i) of Theorem REF that $\\operatorname{Probability}\\lbrace \\sigma _1(G) > t_1 + \\sqrt{r} + \\sqrt{n} \\rbrace < \\exp (-t_1^2/2)~{\\rm for}~t_1 \\ge 0.$ Furthermore deduce from claim (ii) of Theorem REF that $\\operatorname{Probability}\\Big \\lbrace \\sigma _r(GH) \\le t_2\\cdot \\frac{l - r + 1}{e\\sqrt{l}} \\Big \\rbrace \\le (t_2)^{l - r}~{\\rm for}~t_2 < 1~{\\rm and}~l \\ge 4.$ Combine the latter two inequalities by applying the union bound ($\\operatorname{Probability}\\big \\lbrace \\cup _{i=1}^n A_i\\big \\rbrace \\le \\sum _{i=1}^n \\operatorname{Probability}\\big \\lbrace A_i\\big \\rbrace $ for any events $A_1, \\dots , A_n$ ), apply Lemma REF , and obtain Lemma REF .", "Lemma REF implies that whp $\\sigma _r(QH)$ is at least of order $\\sqrt{l/n}$ .", "Next we specify this estimate under some reasonable bounds on $n$ and $l$ .", "Corollary 5.1 For $n, l, r, G, Q$ , and $H$ defined in Lemma REF , further assume that $n > 36r$ and $ l > 22(r - 1)$ .", "Then $\\operatorname{Probability}\\big \\lbrace \\sigma _r(QH) \\ge \\frac{1}{4} \\sqrt{l/n} \\big \\rbrace > 1 - \\exp \\Big (-\\frac{n}{72}\\Big ) - \\exp \\Big (-\\frac{l-r}{20}\\Big ).$ Write $t_1: = \\frac{1}{3}\\sqrt{n} - \\sqrt{r}$ and $t_2: = \\frac{1}{3}~\\frac{le}{l-r+1}$ , recall that $n > 36r$ and $ l > 22(r - 1)$ , and then readily verify that $t_1 > \\frac{\\sqrt{n}}{6}$ and $\\exp (-0.05) > 0.95 > t_2 > 0$ .",
"Finally apply Lemma REF under these bounds on $t_1$ and $t_2$ .", "Remark 5.1 We can extend our lower bounds of Lemma REF and Corollary REF on $\\sigma _r(QH)$ to the case where $H$ is any matrix of full rank $l$ rather than a matrix with orthonormal columns if we decrease these bounds by a factor of $\\sigma _l(H)$ ." ], [ "Output errors of Range Finder for a matrix with a random singular space", "Next we estimate the output errors of Range Finder under the randomization model where the leading right singular space of an input matrix is randomly generated.", "Assumption 5.1 Fix two constant matrices $\\Sigma _r =\\operatorname{diag}(\\sigma _j)_{j=1}^r~{\\rm and }~\\Sigma _{\\perp } =\\begin{bmatrix}\\operatorname{diag}(\\sigma _j)_{j=r+1}^n\\\\O_{m-n,n-r}\\end{bmatrix}\\in \\mathbb {R}^{(m-r)\\times (n-r)}$ such that $r < n \\le m$ , $\\sigma _1\\ge \\sigma _2\\ge \\dots \\ge \\sigma _n \\ge 0$ , and $\\sigma _r > 0$ .", "Let $G$ be an $r\\times n$ Gaussian matrix (having full rank $r$ with probability 1) and let $Q\\in \\mathbb {R}^{n\\times r}$ and $Q_{\\perp }\\in \\mathbb {R}^{n\\times (n-r)}$ be two matrices whose column sets make up orthonormal bases of the row space of $G$ and of its orthogonal complement, respectively.", "Furthermore let $M = U\\cdot \\begin{bmatrix}\\Sigma _r & 0\\\\ 0 & \\Sigma _\\perp \\end{bmatrix} \\cdot \\begin{bmatrix}Q^T\\\\Q_\\perp ^T\\end{bmatrix}$ be an SVD where the right singular space of $M$ is random and $U$ is an arbitrary orthogonal matrix.", "Remark 5.2 (i) We can readily extend our study to the case where $m < n$ , just by defining $\\Sigma _\\perp =\\operatorname{diag}(\\sigma _{r+1},\\dots , \\sigma _{m},0,\\dots ,0)$ such that Theorem REF below still holds.", "(ii) The matrix $M$ from the above assumption is random only in the sense that its leading rank-$r$ right singular space comes from a Gaussian matrix $G$ .", "The matrix $M$ is not uniquely determined by $G$ , since the matrices $Q$ , $Q_{\\perp }$ , and $U$ satisfying the assumption
are non-unique, but this is immaterial for our analysis.", "Theorem 5.1 [Errors of Range Finder for an input with a random singular space.]", "Suppose that $G$ is an $r\\times n$ Gaussian matrix, $H\\in \\mathbb {R}^{n\\times l}$ is a constant matrix, $n > 36r$ , $l>22(r-1)$ , and $r \\le l < \\min (m, n)$ .", "Apply Algorithm REF to the matrix $M$ of (REF ) by using the test matrix $H$ and let $X$ , $Y$ denote the output matrices.", "(i) If $H$ has orthonormal columns, then $|||M - XY |||/\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1 + 16n/l}$ with a probability no less than $1 - \\exp (-\\frac{n}{72}) - \\exp (-\\frac{l-r}{20})$ .", "(ii) If $H$ has full rank $l$ , then $|||M - XY |||/\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1 + 16\\kappa ^2(H)n/l}$ with a probability no less than $1 - \\exp (-\\frac{n}{72}) - \\exp (-\\frac{l-r}{20})$ .", "In our probability estimates we can assume that the random matrices $G$ and $GH$ have full rank $r$ (see Theorem REF and Remark REF ).", "Consider SVD $M = U\\cdot \\begin{bmatrix}\\Sigma _r & 0\\\\ 0 & \\Sigma _\\perp \\end{bmatrix} \\cdot \\begin{bmatrix}Q^T\\\\Q_\\perp ^T\\end{bmatrix},$ write $C_1 := Q^TH$ , apply Corollary REF , and deduce that $|||M - XY |||/\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1 + (||C_1^+||_2)^2}.$ (i) Recall from Corollary REF that $\\operatorname{Probability}\\big \\lbrace \\sigma _r(C_1) = \\sigma _r(Q^TH) \\ge \\frac{1}{4} \\sqrt{l/n} \\big \\rbrace > 1 - \\exp (-\\frac{n}{72}) - \\exp (-\\frac{l-r}{20}).$ (ii) Let $H = U_H\\Sigma _HV_H^T$ be a compact SVD.", "Then $\\sigma _r(C_1)\\ge \\sigma _r(Q^TU_H)\\sigma _r(H)$ .", "Similarly to (REF ) obtain that $\\operatorname{Probability}\\big \\lbrace \\sigma _r(C_1) \\ge \\frac{1}{4} \\sqrt{l/n} \\cdot \\sigma _r(H) \\big \\rbrace &\\ge \\operatorname{Probability}\\big \\lbrace \\sigma _r(Q^TU_H) \\ge \\frac{1}{4} \\sqrt{l/n}\\big \\rbrace \\\\&> 1 - \\exp (-\\frac{n}{72}) - \\exp (-\\frac{l-r}{20}).$ Combine Theorem REF , equation $||Q_\\perp ||_2 = 1$ , 
and the bound $\\sigma _r(C_1) \\ge \\frac{1}{4} \\sqrt{l/n} \\cdot \\sigma _r(H)$ and obtain $|||M - XY |||/\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1 + ||Q_\\perp ^THC_1^+ ||_2^2} \\le \\sqrt{1 + 16\\kappa ^2(H)n/l}.$ Bound the output errors of Algorithms REF and 3.4 by combining the estimates of this section and Section REF and by transposing the input matrix $M$ ." ], [ "Output errors of Range Finder for a perturbed factor-Gaussian input", "Assumption 5.2 For an $r\\times n$ Gaussian matrix $G$ and a constant matrix $A\\in \\mathbb {R}^{m\\times r}$ of full rank $r< \\min (m, n)$ define the matrices $B: = \\frac{1}{\\sqrt{n}}\\cdot G$ and $\\tilde{M}: = AB$ and call $M: = \\tilde{M} + E$ a perturbed right factor-Gaussian matrix if the Frobenius norm of a perturbation matrix $E$ is sufficiently small in comparison to $\\sigma _r(A)$ .", "Theorem 5.2 [Errors of Range Finder for a perturbed right factor-Gaussian matrix.]", "Suppose that we are given an $r\\times n$ Gaussian random matrix $G$ and constant matrices $H\\in \\mathbb {R}^{n\\times l}$ , $A\\in \\mathbb {R}^{m\\times r}$ , and $E \\in \\mathbb {R}^{m\\times n}$ for $r\\le l < \\min (m ,n)$ .", "Let $\\tilde{M}$ be a right factor-Gaussian matrix of Assumption REF and let $M = \\tilde{M} + E$ for a perturbation matrix $E$ .", "Apply Algorithm REF to the matrix $M$ with a test matrix $H$ , and let $X$ and $Y$ denote the output matrices.", "Assume that $n > 36r$ and $l>22(r-1)$ .", "(i) Suppose that the matrix $H$ has orthonormal columns and that $||E||_F \\le \\frac{\\sigma _r(A)}{48\\sqrt{n/l} + 6}$ .", "Then $|||M-XY|||/\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1+ 100n/l}$ with a probability no less than $1 - \\exp (-\\frac{l-r}{20}) - \\exp (-\\frac{n-r}{20}) - \\exp (-\\frac{n}{72})$ .", "(ii) Assume that $H$ has full rank and that $||E||_F \\le \\frac{\\sigma _r(A)}{12}\\min (1, \\frac{1}{4\\sqrt{n/l}\\cdot \\sigma _l(H) + 0.5})$ and let $\\kappa (H) = ||H||_2||H^+||_2$ denote the spectral condition number 
of $H$ .", "Then $|||M-XY|||/\\tilde{\\sigma }_{r+1}(M)\\le \\sqrt{1 + 100\\kappa ^2(H)n/l }$ with a probability no less than $1 - \\exp (-\\frac{l-r}{20}) - \\exp (-\\frac{n-r}{20}) - \\exp (-\\frac{n}{72}).$ Recall that the matrices $B$ , $AB$ , and $BH$ have full rank with probability 1 (see Theorem REF ) and assume that they do have full rank (see Remark REF ).", "Let $M = \\begin{bmatrix}U_r & U_{\\perp } \\end{bmatrix} \\begin{bmatrix}\\Sigma _r & 0 \\\\ 0 & \\Sigma _\\perp \\end{bmatrix} \\begin{bmatrix}V_r^T \\\\ V_\\perp ^T\\end{bmatrix}\\textrm { and }\\tilde{M} = \\begin{bmatrix}\\tilde{U}_r & \\tilde{U}_{\\perp } \\end{bmatrix} \\begin{bmatrix}\\tilde{\\Sigma }_r & 0 \\\\ 0 & 0 \\end{bmatrix} \\begin{bmatrix}\\tilde{V}_r^T \\\\ \\tilde{V}_\\perp ^T\\end{bmatrix}$ be SVDs, where $V_r$ and $\\tilde{V}_r$ are the matrices of the $r$ top right singular vectors of $M$ and $\\tilde{M}$ , respectively.", "Define $C_1 = V_r^TH$ and $\\tilde{C}_1 := \\tilde{V}_r^TH$ as in (REF ).", "Then $||| M - XY ||| /\\tilde{\\sigma }_{r+1}(M) \\le \\sqrt{1 + ||V_\\perp ^THC_1^+ ||_2^2}$ by virtue of Theorem REF (that is, [21]).", "Recall that $||V_\\perp ||_2 = ||H||_2 = 1$ , and therefore we only need to deduce that $\\sigma _r(C_1) > \\frac{1}{10}\\sqrt{l/n}$ whp in order to prove claim (i).", "Recall that the columns of $V_r$ span the row space of an $r\\times n$ Gaussian matrix $G$ and deduce from Corollary REF that $\\sigma _r(\\tilde{C}_1) > \\frac{1}{4}\\sqrt{l/n}$ with a probability no less than $1 - \\exp (-\\frac{l-r}{20}) - \\exp (-n/72)$ .", "To complete the proof of claim (i), it remains to verify that whp the perturbation $E$ only slightly alters the leading right singular space of $\\tilde{M}$ and that $\\sigma _r(C_1)$ is close to $\\sigma _r(\\tilde{C}_1)$ .", "If the norm of the perturbation matrix $||E||_F$ is sufficiently small, then by virtue of Lemma REF there exists a matrix $P$ such that $\\tilde{V}_r + \\tilde{V}_\\perp P$ and $V_r$ have the same 
column space and that furthermore $||P||_F \\le \\frac{2||E||_F}{\\sigma _r(\\tilde{M}) - 2||E||_F }$ .", "This implies a desired bound on the differences of the smallest positive singular values of $C_1$ and $\\tilde{C}_1$ ; next we supply the details.", "Claim (ii) of Theorem REF implies that whp the $r$ -th singular value of $\\tilde{M} = AB$ is not much less than the $r$ -th singular value of $A$ .", "Readily deduce from Corollary REF that $\\operatorname{Probability}\\lbrace \\sigma _r(\\tilde{M}) < \\sigma _r(A)/3 \\rbrace \\le \\operatorname{Probability}\\lbrace \\sigma _r(B) = \\frac{1}{\\sqrt{n}} \\sigma _r(G) < 1/3\\rbrace \\le e^{-(n-r)/20}.$ Then deduce that $||E||_F \\le \\frac{\\sigma _r(A)}{48\\sqrt{n/l} + 6}\\le \\frac{\\sigma _r (\\tilde{M}) }{16\\sqrt{n/l} + 2}$ with a probability no less than $1 - e^{-(n-r)/20}$ .", "It follows that there exists a matrix $P$ of Lemma REF such that $||P||_2 \\le \\frac{1}{8}\\sqrt{l/n}$ .", "Conditioning on such an event and also on the event that $\\sigma _r(\\tilde{C}_1) > \\frac{1}{4}\\sqrt{l/n}$ , obtain $\\sigma _r(V_r^TH) &= \\sigma _r\\big ((I_r + P^TP)^{-1/2}(\\tilde{V}_r^T + P^T\\tilde{V}_\\perp ^T)H\\big ) \\\\&\\ge \\sigma _r\\big ((I_r + P^TP)^{-1/2}\\big ) \\sigma _r(\\tilde{V}_r^TH + P^T\\tilde{V}_\\perp ^TH)\\\\&\\ge \\frac{1}{\\sqrt{1 + (\\sigma _1(P))^2}} \\big ( \\sigma _r(\\tilde{C}_1) - \\sigma _1(P)\\big ) > \\frac{1}{10} \\sqrt{l/n}.", "$ Equality (REF ) holds because the matrix $\\tilde{V}_r + \\tilde{V}_\\perp P$ is normalized by $(I_r + P^TP)^{-1/2}$ (see Remark REF ) and has the same column span as $V_r$ .", "By applying the union bound deduce that inequality () holds with a probability no less than $1 - \\exp (-\\frac{l-r}{20}) - \\exp (-\\frac{n-r}{20}) - \\exp (-\\frac{n}{72})$ .", "To prove claim (ii), we essentially need to show that $\\sigma _r(C_1) = \\sigma _r(V_r^TH) \\ge \\frac{1}{10}\\sqrt{l/n}\\cdot \\sigma _l(H)$ , and then the claim will follow readily from inequality (REF 
).", "Let $H = U_H\\Sigma _HV_H^T$ be a compact SVD, such that $U_H\\in \\mathbb {R}^{n\\times l}$ , $\\Sigma _H\\in \\mathbb {R}^{l\\times l}$ , and $V_H\\in \\mathbb {R}^{l\\times l}$ , and obtain that $\\sigma _r(\\tilde{C}_1) \\ge \\sigma _r(\\tilde{V}_r^TU_H)\\sigma _l(\\Sigma _H)$ and $\\operatorname{Probability}\\lbrace \\sigma _r(\\tilde{C}_1) < \\frac{1}{4}\\sqrt{l/n}\\cdot \\sigma _l(H)\\rbrace < \\exp (-\\frac{l-r}{20}) + \\exp (-\\frac{n}{72}).$ Next bound $\\sigma _r(C_1)$ by showing that the column spaces of $V_r$ and $\\tilde{V}_r$ are sufficiently close to one another if the perturbation $V_r-\\tilde{V}_r$ is sufficiently small.", "Assume that $||E||_F \\le \\frac{\\sigma _r(A)}{12}$ ; then the assumptions of Lemma REF hold whp.", "By applying the same argument as in the proof of claim (i), deduce that $||E||_F \\le \\min \\Big ( \\frac{\\sigma _r (\\tilde{M})}{4}, \\frac{\\sigma _r (\\tilde{M}) }{16\\sqrt{n/l}\\cdot \\sigma _l(H) + 2}\\Big )$ with a probability no less than $1 - e^{-(n-r)/20}$ .", "It follows that there exists a matrix $P$ of Lemma REF such that $||P||_2 \\le \\frac{1}{8}\\sqrt{l/n}\\cdot \\sigma _l(H)$ .", "Hence $\\sigma _r(C_1)\\ge \\frac{1}{10}\\sqrt{l/n}\\cdot \\sigma _l(H)$ whp." ], [ "Numerical tests", "In this section we cover our tests of dual sublinear cost variants of Algorithm REF .", "The standard normal distribution function randn of MATLAB has been applied in order to generate Gaussian matrices.", "The MATLAB function \"svd()\" has been applied in order to calculate the $\\epsilon $ -rank, i.e., the number of singular values exceeding $\\epsilon $ , for $\\epsilon =10^{-6}$ .", "The tests for Tables REF –REF have been performed on a 64-bit Windows machine with an Intel i5 dual-core 1.70 GHz processor by using custom software programmed in C++ and compiled with LAPACK version 3.6.0 libraries."
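For completeness, here is a Python analogue of the $\epsilon$-rank computation used in our tests (the SVD-generated input matrix below is an illustrative example in the style of class I):

```python
import numpy as np

def eps_rank(M, eps=1e-6):
    """Number of singular values of M exceeding eps (the eps-rank)."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > eps))

rng = np.random.default_rng(5)
n, r = 64, 8
# SVD-generated matrix: r leading singular values 1/j, the rest 1e-10.
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
s = np.array([1 / (j + 1) for j in range(r)] + [1e-10] * (n - r))
M = U @ np.diag(s) @ V.T
assert eps_rank(M) == r
```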
], [ "Input matrices for ", "We generated the following classes of input matrices $M$ for testing LRA algorithms.", "Class I: $M=U_M\\Sigma _M V_M^*$ , where $U_M$ and $V_M$ are the Q factors of the thin QR orthogonalization of $n\\times n$ Gaussian matrices, $\\Sigma _M=\\operatorname{diag}(\\sigma _j)_{j=1}^n$ ; $\\sigma _j=1/j,~j=1,\\dots ,r$ , $\\sigma _j=10^{-10},~j=r+1,\\dots ,n$ (cf.", "[H02, Section 28.3]), and $n=256,512,1024$ .", "(Hence $||M||_2=1$ and $||M^+||_2=10^{10}$ .)", "Class II: (i) The matrices $M$ of the discretized single-layer Laplacian operator of [21]: $[S\\sigma ](x) = c\\int _{\\Gamma _1}\\log {|x-y|}\\sigma (y)dy,x\\in \\Gamma _2$ , for two circles $\\Gamma _1 = C(0,1)$ and $\\Gamma _2 = C(0,2)$ on the complex plane.", "We arrived at the matrices $M=(m_{ij})_{i,j=1}^n$ , $m_{i,j} = c\\int _{\\Gamma _{1,j}}\\log |2\\omega ^i-y|dy$ , for a constant $c$ such that $||M||=1$ and for the arc $\\Gamma _{1,j}$ of $\\Gamma _1$ defined by the angles in the range $[\\frac{2j\\pi }{n},\\frac{2(j+1)\\pi }{n}]$ .", "(ii) The matrices that approximate the inverse of a large sparse matrix obtained from a finite-difference operator of [21].", "Class III: The dense matrices of five classes with smaller ratios of “numerical rank/$n$ \" from the built-in test problems in Regularization Tools, which came from discretization (based on Galerkin or quadrature methods) of the Fredholm Integral Equations of the first kind (see http://www.math.sjsu.edu/singular/matrices and http://www2.imm.dtu.dk/$\\sim $ pch/Regutools ; for more details see Chapter 4 of the Regularization Tools Manual at http://www.imm.dtu.dk/$\\sim $ pcha/Regutools/RTv4manual.pdf): baart: Fredholm Integral Equation of the first kind, shaw: one-dimensional image restoration model, gravity: 1-D gravity surveying model problem, wing: problem with a discontinuous solution, foxgood: severely ill-posed problem.", "We used $1024\\times 1024$ SVD-generated input matrices of class I having numerical rank $r = 32$ , $400
\\times 400$ Laplacian input matrices of class II(i) having numerical rank $r = 36$ , $408 \\times 800$ matrices having numerical rank $r = 145$ and representing finite-difference inputs of class II(ii), and $1000 \\times 1000$ matrices of class III (from the San Jose University database), having numerical rank 4, 6, 10, 12, and 25 for the matrices of the classes wing, baart, foxgood, shaw, and gravity, respectively." ], [ "Five families of sparse test matrices", "We generated our $n\\times (r+p)$ test matrices for random $p=1,2, \\dots , 21$ by using 3-ASPH, 3-APH, and Random permutation matrices.", "When the overestimation parameter $p$ was considerable, we actually computed LRA of numerical rank larger than $r$ , and so LRA was frequently closer to an input matrix than the optimal rank-$r$ approximation.", "Accordingly, the output error norms in our tests ranged from about $10^{-4}$ to $10^{4}$ relative to the optimal errors.", "We obtained every 3-APH and every 3-ASPH matrix by applying three Hadamard's recursive steps (REF ) followed by random column permutation defined by random permutation of the integers from 1 to $n$ inclusive.", "While generating a 3-ASPH matrix we also applied random scaling with a diagonal matrix $D=\\operatorname{diag}(d_i)_{i=1}^n$ where we have chosen the values of random iid variables $d_i$ under the uniform probability distribution from the set $\\lbrace -4, -3, -2, -1, 0,1 ,2, 3, 4\\rbrace $ .", "We used the following families of test matrices: (0) Gaussian (for control), (1) sum of a 3-ASPH and a permutation matrix, (2) sum of a 3-ASPH and two permutation matrices, (3) sum of a 3-ASPH and three permutation matrices, (4) sum of a 3-APH and three permutation matrices, and (5) sum of a 3-APH and two permutation matrices." 
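The construction of our sparse test matrices can be sketched as follows, assuming that the recursive step (REF) is the standard Hadamard doubling $H_{2s}=\begin{bmatrix}H_s & H_s\\ H_s & -H_s\end{bmatrix}$; the function name, the truncation to $l$ columns, and the omission of scaling by default are our illustrative assumptions:

```python
import numpy as np

def three_aph(n, l, rng, scale=False):
    """Sparse n x l test matrix via three Hadamard doubling steps
    (assumed form of the recursion) followed by a random column
    permutation; with scale=True, random diagonal scaling gives a
    3-ASPH-style matrix.  Requires n divisible by 8."""
    H = np.array([[1.0]])
    for _ in range(3):                    # three recursive steps
        H = np.block([[H, H], [H, -H]])   # H_{2s} = [[H, H], [H, -H]]
    A = np.kron(H, np.eye(n // 8))        # 8 nonzeros per column
    if scale:
        d = rng.choice([-4, -3, -2, -1, 0, 1, 2, 3, 4], size=n)
        A = np.diag(d) @ A
    perm = rng.permutation(n)             # random column permutation
    return A[:, perm][:, :l]              # keep l permuted columns

rng = np.random.default_rng(6)
T = three_aph(64, 12, rng)
assert T.shape == (64, 12)
assert np.count_nonzero(T) == 8 * 12      # 8 nonzeros per column
```

Multiplication by such a matrix costs $O(nl)$ flops, which keeps the sketching stage sublinear for small $l$.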
], [ "Test results", "In Tables REF –REF we display the average relative error norm $ \\frac{\\Vert M - \\tilde{M} \\Vert _2}{\\Vert M - M_{nrank}\\Vert _2}$ in our tests repeated 100 times for each class of input matrices and each size of an input matrix and a test matrix for Algorithm REF or for each size of an input and left-hand and right-hand test matrices for Algorithm REF .", "In all our tests we applied test matrices of the six families of the previous subsection.", "In Tables REF –REF we display the average relative error norm for the output of Algorithm REF ; in our tests it ranged from about $10^{-3}$ to $10^{1}$ .", "The numbers in parentheses in the first line of Tables REF and REF show the numerical rank of input matrices.", "In Tables REF –REF we display the average relative error norm for the output of Algorithm REF applied to the same input matrices from classes I–III as in our experiments for Algorithm REF .", "In these tests we used $n\\times \\ell $ and $\\ell \\times m$ test matrices for $\\ell = r+p$ and $k=c\\ell $ for $c=1,2,3$ and random $p=1,2, \\dots , 21$ .", "Table: Relative error norms in tests for matrices of classes I and IITable: Relative error norms for input matrices of class III (of San Jose University database)Table: Relative error norms for input matrices of class III (of San Jose University database)Table: Relative error norms in tests for matrices of classes I and IITable: Relative error norms for input matrices of class III (of San Jose University database)Table: Relative error norms for input matrices of class III (of San Jose University database)" ], [ "Sparse test matrices in\nsketching algorithms", "Algorithms REF and 3.4 output accurate LRA whp if the multipliers $F$ and $H$ (test matrices) are Gaussian, SRHT, SRFT, or Rademacher's matrices (cf.", "[21], [45], [10]), but multiplication by such matrices runs at superlinear cost.", "Our heuristic recipe is to apply these algorithms with a small variety of sparse fixed or 
random test matrices $F_i$ and/or $H_i$ , $i=1,2,\\dots $ , such that the overall computational cost stays sublinear, to monitor the accuracy of the output LRA, and to apply the stopping criteria of [35] and [38].", "One can apply a selected sketching algorithm concurrently for selected pairs of test matrices $F_i$ and $H_i$ .", "The papers [36], [37], and [38] cover various families of sparse test matrices.", "We used orthogonalized sums of these matrices and sampling matrices as test matrices in our numerical tests.", "By generalizing this recipe one can use products (cf.", "[21]) or lower degree polynomials of the matrices of the basic families instead of using their sums." ], [ "CUR LRA", "Suppose that Algorithms REF and 3.4 are applied with the identity auxiliary matrices $S$ and $T$ and sampling test matrices.", "Then the output LRA is particularly memory efficient: it turns into the product $CUR$ where $C=MH$ and $R=FM$ are submatrices made up of two sets of columns and rows of $M$ , respectively, and the matrix $U=G^+$ for $G=FMH$ “unites\" the submatrices $C$ and $R$ in the LRA of $M$ .", "By following [38] we call $G$ the generator and $U$ the nucleus of CUR LRA.", "(In the pioneering papers [20] and [19], as well as in [31] and [30], the matrix $U$ is called the germ and is replaced by $G$ ; accordingly CGR replaces $CUR$ .)", "We call a CUR LRA with nucleus $U=G^+$ canonical and recall that $CUR=M$ for a matrix $M$ if and only if $\\operatorname{rank}(M)=\\operatorname{rank}(G)$ , by virtue of a simple argument of [38]; this is obvious where $k=l=r$ , in which case $G$ is an $r\\times r$ matrix.", "Hence the CUR LRA is quite close to $M$ if $M=\\tilde{M}+E$ where $\\operatorname{rank}(\\tilde{M})=\\operatorname{rank}(G)$ , $E\\approx 0$ , and the matrix $G$ is well-conditioned.", "If $G$ is rank deficient or ill-conditioned, we would seek another generator.", "Here we do not use non-canonical nuclei such as $U=C^+MR^+$ [28], $U=G_r^+$ , and $U=G(\\tau )^+$ [31], [30], where $G_r$ is the rank-$r$ truncation of a generator
$G$ and $G(\\tau )$ is obtained by setting to 0 the singular values of $G$ exceeded by a fixed positive tolerance $\\tau $ .", "Notice that the CUR LRA built on the nucleus $U=C^+MR^+$ is within a factor of $2+\\epsilon $ of optimal under the Frobenius norm for any positive $\\epsilon $ [28], but the computation of this nucleus runs at superlinear cost, while the canonical nucleus $U=G^+$ as well as the nuclei $U=G_r^+$ and $U=G(\\tau )^+$ are computed at sublinear cost provided that we are given a generator $G$ of a sufficiently small size." ], [ "Cross-Approximation (", "We define C-A iterations as recursive sketching algorithms that combine the recipes of Sections 1.4 and REF and use sampling test matrices.", "Our analysis in the previous section supports the well-known empirical efficiency of these iterations; this link and support are apparently novel.", "It is instructive to recall the original definition and support of these iterations and then compare the two definitions and their studies; we do this next."
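The recalled fact that $CUR=M$ if and only if $\operatorname{rank}(M)=\operatorname{rank}(G)$ is easy to confirm numerically in the case $k=l=r$ (all sizes and the random choice of the index sets are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 40, 30, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

I = rng.choice(m, size=r, replace=False)   # r row indices
J = rng.choice(n, size=r, replace=False)   # r column indices

C = M[:, J]                 # column submatrix
R = M[I, :]                 # row submatrix
G = M[np.ix_(I, J)]         # r x r generator
U = np.linalg.pinv(G)       # canonical nucleus

# CUR = M exactly when rank(M) = rank(G) = r (the k = l = r case).
assert np.linalg.matrix_rank(G) == r
assert np.allclose(C @ U @ R, M)
```

Only $C$, $R$, and the small nucleus $U$ need to be stored, which is the memory efficiency claimed above.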
], [ "Volume maximization and ", "Definition 7.1 For three integers $k$ , $l$ , and $r$ such that $1\\le r\\le \\min \\lbrace k,l\\rbrace $ , define the volume $v_2(M):=\\prod _{j=1}^{\\min \\lbrace k,l\\rbrace } \\sigma _j(M)$ of a $k\\times l$ matrix $M$ and its $r$ -projective volume $v_{2,r}(M):=\\prod _{j=1}^r \\sigma _j(M)$ such that $v_{2,r}(M)=v_{2}(M)~{\\rm if}~r=\\min \\lbrace k,l\\rbrace $ , $v_2^2(M)=\\det (MM^*)$ if $k\\ge l$ ; $v_2^2(M)=\\det (M^*M)$ if $k\\le l$ , $v_2^2(M)=|\\det (M)|^2$ if $k = l$ .", "Given five integers $k$ , $l$ , $m$ , $n$ , and $r$ such that $1\\le r\\le \\min \\lbrace k,l\\rbrace $ , a real $h>1$ , and an $m\\times n$ matrix $W$ , its $k\\times l$ submatrix $G$ has weakly $h$ -maximal volume (resp.", "$r$ -projective volume) in $W$ if $v_2(G)$ (resp.", "$v_{2,r}(G)$ ) is maximal up to a factor of $h$ among all $k\\times l$ submatrices of $W$ that differ from $G$ in a single row and/or a single column.", "We write maximal for 1-maximal and drop “weakly\" if the volume maximization is over all submatrices of a fixed size $k\\times l$ .", "For an $m\\times n$ matrix $W=(w_{i,j})_{i,j=1}^{m,n}$ define its Chebyshev norm $||W||_C:=\\max _{i,j=1}^{m,n}|w_{i,j}|$ such that (cf.", "[16]) $||W||_C\\le ||W||_2 \\le ||W||_F\\le \\sqrt{mn}~||W||_C.$ Recall the following result from [31]: Theorem 7.1 Suppose that $W_{k,l}=W_{\\mathcal {I},\\mathcal {J}}$ is a $k\\times l$ submatrix of an $m\\times n$ matrix $W$ , $U=W_{k,l,r}^+$ is the canonical nucleus of a CUR LRA of $W$ , $E=W-CUR$ , $h\\ge 1$ , and the $r$ -projective volume of $W_{\\mathcal {I},\\mathcal {J}}$ is locally $h$ -maximal.", "Then $||E||_C\\le h~f(k,l,r)~\\sigma _{r+1}(W)~~{\\rm for}~~f(k,l,r):=\\sqrt{\\frac{(k+1)(l+1)}{(k-r+1)(l-r+1)}}.$ In particular if $r=\\min \\lbrace k,l\\rbrace $ , then the $r$ -projective volume turns into volume and $f(k,l,r):=\\sqrt{\\frac{(k+1)(l+1)}{(|l-k|+1)}}.$" ], [ "Volume maximization and ", "Theorem REF motivates one 
to reduce the computation of a $k\\times l$ CUR LRA of an $m\\times n$ matrix $M$ to the computation of a $k\\times l$ submatrix of $M$ having weakly $h$ -maximal $r$ -projective volume for a fixed $h>1$ .", "The customary recipe towards this goal is to alternate the computation of $k\\times l$ submatrices in $k\\times n$ and $m\\times l$ matrices $W$ being horizontal and vertical submatrices made up of $k$ rows and $l$ columns of $M$ , respectively (see Figure 1); hence the $r$ -projective volume of the submatrix recursively increases up to a factor of $h$ .", "Figure: Three successive C–A steps output three striped matrices.", "The known algorithms compute a $k\\times l$ submatrix of weakly $h$ -maximal volume (rather than $r$ -projective volume) of an $m\\times l$ or $k\\times n$ matrix $W$ at arithmetic cost in $O(ml^2)$ or $O(nk^2)$ , respectively.", "These cost bounds are sublinear for $k^2=o(n)$ and $l^2=o(m)$ , and the computations are numerically stable for $h>1$ .", "(See [15], [32], [17], [30], [31], and the references therein; see [30] and [31] towards tentative extension of these algorithms to the maximization of $r$ -projective volume rather than volume.)", "Clearly, the alternation process defines precisely the same C-A iterations as recursive sketching algorithms of Section REF , obtained by combining the recipes of Sections 1.4 and REF and using sampling test matrices.", "This remains true also if we first orthogonalize the input submatrix before applying the algorithms of the above papers for the computation of the submatrix of the matrix $W$ .", "Indeed (i) under the definition of Section REF the orthogonalization is by means of computing the auxiliary matrices $S$ and $T$ from QR or QRP factorization of the products $FM$ and $MH$ of an input matrix $M$ and of sampling test matrices $F$ and $H$ , respectively, and does not change the output matrix $F$ or $H$ , that is, does not change the output set of column or row indices, respectively (see Remark REF ); 
(ii) under the customary definition of C-A iterations these indices define a $k\\times l$ submatrix of weakly $h$ -maximal volume or $r$ -projective volume in $m\\times l$ or $k\\times n$ submatrix of $M$ ; one can immediately observe that maximization is invariant in multiplication of the latter matrix by a nonsingular $l\\times l$ or $k\\times k$ matrix, respectively, in particular by the inverse of the R or RP factor of QR or QRP factorization of $W$ , respectively." ], [ "Stopping and initialization of ", "Suppose that for some $h\\ge 1$ and $h^{\\prime }\\ge 1$ two successive C-A iterations output $k\\times l$ submatrices of weakly $h$ -maximal and weakly $h^{\\prime }$ -maximal volumes in their input $k\\times n$ and $m\\times l$ submatrices of a given $m\\times n$ matrix $M$ of rank $\\min \\lbrace k,l\\rbrace $ , respectively.", "Then, as was observed in [24], the $k\\times l$ submatrix of $M$ output by the second C-A step has weakly $hh^{\\prime }$ -maximal volume as well as $r$ -projective volume in $M$ for any positive integer $r\\le \\min \\lbrace k,l\\rbrace $ , and the error bound of Theorem REF can be applied.", "This observation can be readily extended to the case where $M=\\tilde{M}+E$ , $\\operatorname{rank}(\\tilde{M})=\\operatorname{rank}(G)$ , and $E\\approx 0$ , such that $\\log (||E||_2)$ is of at most orders $\\min \\lbrace k,l\\rbrace $ or $r$ for maximization of volume or $r$ -projective volume, respectively.", "One cannot rely on a milder bound on the error norm $||E||_2$ because a perturbation of $M$ by $E$ may change all singular values of $M$ by $||E||_2$ (see Lemma REF ).", "More surprising for the study of sublinear cost $LRA$ is the main result of [24], that is, a deterministic sublinear cost algorithm, which computes a CUR LRA of any symmetric positive semi-definite matrix $M$ such that $||M-CUR||_C\\le (1+\\epsilon )\\sigma _{r+1}(M)$ for any positive $\\epsilon $ .", "It relies on distinct nontrivial techniques rather than C-A 
iterations.", "If two successive C-A iterations output the same $k\\times l$ submatrix of $M$ , then its volume or $r$ -projective volume is already weakly $h$ -maximal in both vertical $m\\times l$ and horizontal $k\\times n$ submatrices of $M$ .", "One can tentatively use this $k\\times l$ submatrix as the generator of CUR LRA, apply Theorem REF , and recall (REF ) for the transition to error norm bounds in spectral and Frobenius norms.", "For small $k$ and $l$ one can monitor the increase of the volume and $r$ -projective volume in a C-A iteration at sublinear cost, but their initial deviation from their weak maxima is harder to estimate.", "One can compute an initial generator for CUR LRA with an $r!$-maximal volume by applying the algorithm of [9], which is essentially Gaussian elimination with complete pivoting and thus has superlinear cost.", "The common initialization recipe is by the algorithm of [1], which is essentially Gaussian elimination with partial pivoting and thus has sublinear cost, although no good estimates are known for how far the output volume is from weak maximum.", "Such an estimate for the $r$ -projective volume of any $k\\times l$ submatrix of an $m\\times n$ two-sided factor-Gaussian matrix with expected rank $r<\\min \\lbrace k,l\\rbrace $ is proved in Appendix .", "Namely, this volume is weakly $h$ -maximal with a probability at least $1-\\exp (-cr)$ for $\\log (h)=O(r)$ and a positive constant $c$ .", "Remark REF , however, explains why the application domain of this result is much narrower than, say, that of Theorem REF ."
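The stopping criterion above (terminate when two successive steps return the same submatrix) is easy to exercise on a toy C-A alternation. The sketch below uses a cheap largest-norm heuristic as a stand-in for the volume-maximization algorithms cited in the text, so it is an illustration of the alternation and stopping logic only, not of those algorithms:

```python
import numpy as np

def ca_alternation(M, k, max_iters=50):
    """Toy C-A alternation: alternately reselect k columns of the current
    horizontal (k x n) submatrix and k rows of the current vertical (m x k)
    submatrix, stopping when two successive steps output the same row set.
    The largest-norm selection rule is only a heuristic proxy for the
    volume-maximizing subroutines discussed in the text."""
    rows = np.arange(k)
    cols = np.arange(k)
    for _ in range(max_iters):
        cols = np.argsort(-np.linalg.norm(M[rows, :], axis=0))[:k]
        new_rows = np.argsort(-np.linalg.norm(M[:, cols], axis=1))[:k]
        if set(new_rows) == set(rows):
            break  # stopping criterion: the same submatrix twice
        rows = new_rows
    return np.sort(rows), np.sort(cols)

rng = np.random.default_rng(2)
k = 3
M = rng.standard_normal((60, k)) @ rng.standard_normal((k, 80))  # rank k
rows, cols = ca_alternation(M, k)
G = M[np.ix_(rows, cols)]
E = M - M[:, cols] @ np.linalg.pinv(G) @ M[rows, :]
print(np.linalg.norm(E))  # tiny: rank(G) = rank(M), so CUR reproduces M
```

Since the input has exact rank $k$ and the selected $k\times k$ generator is (generically) nonsingular, the resulting CUR reproduces $M$ up to rounding, in line with the rank condition recalled earlier.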
], [ "Output error bounds for ", "Let us specify the output error norm bounds for C-A iterations based on their two definitions.", "Under the customary definition we arrive at a bound of order of $h\\sqrt{(k+l)mn}$ on both spectral and Frobenius norms by relying on Theorem REF and assuming that the computed $k\\times l$ generator $G$ for CUR LRA of $M$ has a weakly $h$ -maximal volume for $h\\ge 1$ .", "We can decrease the bound by a factor of $\\sqrt{k+l}$ if a generator $G$ has weakly $h$ -maximal $r$ -projective volume, but the designers of C-A iterations are still struggling towards ensuring this maximization property [31], [30].", "Moreover, the known sublinear cost algorithms compute a generator that has weakly $h$ -maximal volume for bounded $h$ only for vertical or horizontal submatrices $W$ , each made up of a small number of columns or rows of $M$ , and not for the matrix $M$ itself.", "Next recall sketching Algorithm REF , which represents a single C-A iteration for sampling test matrices $F$ and $H$ .", "By virtue of Theorem REF its Range Finder stage contributes a factor $\\sqrt{1+100n/l}$ to its spectral and Frobenius output error norm bounds in the case of an $n\\times l$ sampling test matrix $H$ and an $m\\times n$ perturbed right factor-Gaussian input matrix.", "In Section REF we proved an overall error norm bound for Algorithm REF by multiplying this estimate by a factor of $|||W|||$ for $W=I_m-X(FX)^+F$ , a $k\\times m$ sampling matrix $F$ of our choice, and an $m\\times l$ matrix $X=MHT^{-1}$ , defined at the Range Finder stage of the algorithm.", "By properly choosing matrix $T^{-1}$ at that stage we can make $X$ orthogonal.", "Then the known sublinear cost algorithms compute $F$ such that $||W||_2\\le 1+||X(FX)^+||_2\\le 1+\\sqrt{(m-k)kh^2+1}$ for any fixed $h>1$ (cf.", "(REF )).", "For $k\\ll m$ this contributes roughly a factor of $\\sqrt{mk}$ to the overall error norm bound.", "Hence in the case of a perturbed right factor-Gaussian input 
the overall spectral error norm has at worst order $\\sqrt{mnk/l}$ ." ], [ "Conclusions", "We hope that our work will become a springboard for interesting research avenues.", "For example, recursive sketching algorithms with sampling test matrices are friendly to the case of a sparse input matrix $M$ , and it is tempting to compute a close $CUR$ $LRA$ at dual sublinear cost relative to the number of nonzero entries of $M$ , but it seems hard to define a proper probabilistic structure in the space of sparse matrices.", "For another example, empirical study of combinations of C-A and other recursive sketching iterations with recursive pre-processing with sparse multipliers can be motivated by our Theorem REF and Remark REF of Appendix .", "In the rest of this section we demonstrate how our dual sketching and recursive sketching algorithms can benefit from incorporation of the celebrated techniques of randomization with leverage scores (cf.", "[11] and [25]).", "Namely, we decrease the bound on the output error norm of Algorithms REF and 3.4 to the level of Theorem REF up to a factor of $1+\\epsilon $ for any fixed positive $\\epsilon <1$ ; one can be motivated to choose $\\epsilon $ not too small, say, to let $\\epsilon =1/2$ , because the algorithm samples order of $1/\\epsilon ^2$ rows and because a factor of $1+\\epsilon $ for $\\epsilon =1/2$ means only a minor increase of the error norm bound of the Range Finder stage.", "We first modify Algorithm REF : keeping its Range Finder stage we replace its Stages 2 and 3 with the approximation of an $l\\times n$ matrix $Y=\\arg \\min _V|||XV-M|||,$ that is, of one minimizing the (Frobenius or spectral) norm $|||XY-M|||$ .", "Next by using randomization we decrease the error norm factor of $1+\\sqrt{(m-k)kh^2+1}$ , reached by the deterministic algorithm of Section REF , to $1+\\epsilon $ for any fixed positive $\\epsilon <1$ .", "We fix three positive numbers $\\beta \\le 1$ , $\\epsilon <1$ , and $\\delta <1$ , fix a 
positive integer $r\\le l$ (target rank), write $k:=1296 \\beta ^{-1}r^2\\epsilon ^{-2}\\delta ^{-4},$ and apply a randomized algorithm of [11], which runs at sublinear cost if $k^2+l^2=o(\\min \\lbrace m,n\\rbrace )$ and which computes a sampling test matrix $F$ and the matrix $Y=(FX)^+FM$ such that $||XY-M||_F\\le (1+\\epsilon )||XX^+M-M||_F$ with a probability at least $1-\\delta $ .", "This is [25], adapted from [11] (cf.", "footnote $^4$ ).", "The randomized algorithm of [11] computes a sampling matrix $S$ and a diagonal re-scaling matrix $D$ in a single process based on using leverage scores; then it computes the matrix $Y=(D^TS^TX)^+D^TS^TM=(S^TX)^+S^TM$ , thus defining a desired sampling matrix $F=S^T$ .", "By means of the recipe of Section 1.4 we can extend this computation to recursive sketching iterations for CUR  LRA; [25] proved that such a recursive process defines iterative refinement that converges for a sufficiently close initial CUR  LRA.", "Appendix" ], [ "Norms of a Gaussian matrix and its pseudo inverse", "$\\Gamma (x)=\\int _0^{\\infty }\\exp (-t)t^{x-1}dt$ denotes the Gamma function.", "Hereafter $\\nu _{p, q}$ denotes the random variable representing the spectral norm of a $p\\times q$ Gaussian random matrix and $\\nu ^+_{p, q}$ denotes the random variable representing the spectral norm of the pseudo inverse of a $p\\times q$ Gaussian matrix.", "Theorem A.1 [Norms of a Gaussian matrix.", "See [12] and our Definition REF .]", "Probability$\\lbrace \\nu _{m,n}>t+\\sqrt{m}+\\sqrt{n}\\rbrace \\le \\exp (-t^2/2)$ for $t\\ge 0$ , $\\mathbb {E}(\\nu _{m,n})\\le \\sqrt{m}+\\sqrt{n}$ .", "Theorem A.2 [Norms of the pseudo inverse of a Gaussian matrix (see Definition REF ).]", "(i) Probability $\\lbrace \\nu _{m,n}^+\\ge m/x^2\\rbrace <\\frac{x^{m-n+1}}{\\Gamma (m-n+2)}$ for $m\\ge n\\ge 2$ and all positive $x$ , (ii) Probability $\\lbrace \\nu _{m,n}^+\\ge t\\frac{e\\sqrt{m}}{m-n+1}\\rbrace \\le t^{n-m}$ for all $t\\ge 1$ provided that $m\\ge 4$ , 
(iii) $\\mathbb {E}(\\nu ^+_{m,n})\\le \\frac{e\\sqrt{m}}{m-n}$ provided that $m\\ge n+2\\ge 4$ .", "See [7] for claim (i), [21] for claims (ii) and (iii), and [43] for claim (iv).", "Theorem REF implies reasonable probabilistic upper bounds on the norm $\\nu _{m,n}^+$ even where the integer $|m-n|$ is close to 0; whp the upper bounds of Theorem REF on the norm $\\nu ^+_{m,n}$ decrease very fast as the difference $|m-n|$ grows from 1." ], [ "Randomized pre-processing of lower rank matrices", "The following simple results (cf.", "[38]), where $A \\preceq B$ (resp.", "$A \\succeq B$ ) means that $A$ is statistically less (resp.", "greater) than or equal to $B$ , show that pre-processing with Gaussian multipliers $G$ and $H$ transforms any matrix that admits $LRA$ into a perturbation of a factor-Gaussian matrix.", "Theorem B.1 Consider five integers $k$ , $l$ , $m$ , $n$ , and $r$ satisfying the bounds $r\\le k\\le m,~r\\le l\\le n$ , an $m\\times n$ well-conditioned matrix $M$ of rank $r$ , $k\\times m$ and $n\\times l$ Gaussian matrices $G$ and $H$ , respectively, and the norms $\\nu _{p,q}$ and $\\nu _{p,q}^+$ of Definition REF .", "Then (i) $GM$ is a left factor-Gaussian matrix of expected rank $r$ such that $||GM||_2\\preceq ||M||_2~\\nu _{k,r}~{\\rm and}~||(GM)^+||_2\\preceq ||M^+||_2~\\nu _{k,r}^+,$ (ii) $MH$ is a right factor-Gaussian matrix of expected rank $r$ such that $||MH||_2 \\preceq ||M||_2~\\nu _{r,l}~{\\rm and}~||(MH)^+||_2\\preceq ||M^+||_2~\\nu _{r,l}^+,$ (iii) $GMH$ is a two-sided factor-Gaussian matrix of expected rank $r$ such that $||GMH||_2\\preceq ||M||_2~\\nu _{k,r}\\nu _{r,l}~{\\rm and}~||(GMH)^+||_2\\preceq ||M^+||_2~\\nu _{k,r}^+\\nu _{r,l}^+.$ Remark B.1 Based on this theorem we can readily extend our results on $LRA$ of perturbed factor-Gaussian matrices to all matrices that admit $LRA$ and are pre-processed with Gaussian multipliers.", "We cannot perform such pre-processing at sublinear cost, but empirically pre-processing at sublinear cost with various sparse 
orthogonal multipliers works as efficiently [36], [37]." ], [ "The error bounds for the known sketching algorithms", "In the next theorem we write $\\sigma _{F,r+1}^2(M):=\\sum _{j>r}\\sigma _j^2(M)$ .", "Theorem C.1 (i) Let $2\\le r\\le l-2$ and apply Algorithm REF with a Gaussian test matrix $H$ .", "Then (cf.", "[21], where the norms of $M-XY$ are also estimated in probability) $\\mathbb {E}||M-XY||_F^2\\le \\Big (1+\\frac{r}{l-r-1}\\Big )~\\sigma _{F,r+1}^2(M),$ $\\mathbb {E}||M-XY||_2\\le \\Big (1+\\sqrt{\\frac{r}{l-r-1}}~\\Big )~\\sigma _{r+1}(M)+\\frac{e\\sqrt{l}}{l-r} \\sigma _{F,r+1}(M).$ (ii) Let $4[\\sqrt{r}+\\sqrt{8\\log (rn)}]^2\\log (r)\\le l\\le n$ and apply Algorithm REF with an SRHT or SRFT test matrix $H$ .", "Then (cf.", "[45], [21]) $|||M-XY|||\\le \\sqrt{1+7n/l}~~\\tilde{\\sigma }_{r+1}(M)~{\\rm with~probability}~1-O(1/r).$ [47] shows that the output LRA $XY$ of Algorithm REF applied with Gaussian test matrices $F$ and $H$ satisfies $\\mathbb {E}||M-XY||_F^2\\le \\frac{kl}{(k-l)(l-r)}\\sigma _{F,r+1}^2(M)~{\\rm if}~k>l>r.$", "In words, the expected output error norm $\\mathbb {E}||M-XY||_F$ is within a factor of $\\Big (\\frac{kl}{(k-l)(l-r)}\\Big )^{1/2}$ from its minimum value $\\sigma _{F,r+1}(M)$ ; this factor is just 2 for $k=2l=4r$ .", "Remark C.1 Clarkson and Woodruff prove in [10] that Algorithm REF reaches the bound $\\sigma _{r+1}(M)$ within a factor of $1+\\epsilon $ whp if the test matrices $F\\in \\mathcal {G}^{k\\times m}$ and $H\\in \\mathcal {G}^{n\\times l}$ are Rademacher's matrices and if $k$ and $l$ are sufficiently large, having order of $r/\\epsilon $ and $r/\\epsilon ^2$ for small $\\epsilon $ , respectively.", "Tropp et al.", "argue in [47] that LRA is not practical if the numbers $k$ and $l$ of row and column samples are large; iterative refinement of LRA at sublinear cost in [35] can be a partial remedy."
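Bound (i) of Theorem C.1 is easy to probe numerically: draw a Gaussian test matrix $H$, take $X$ as an orthogonal basis of $MH$ and $Y=X^*M$, and compare $||M-XY||_F$ with $\sigma_{F,r+1}(M)$. The sizes and the synthetic decaying spectrum below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, l = 200, 150, 5, 12
# synthetic input with a sharply decaying spectrum beyond rank r
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.concatenate([np.linspace(10, 5, r), 1e-3 * np.ones(n - r)])
M = U[:, :n] * s @ V.T

H = rng.standard_normal((n, l))  # Gaussian test matrix
X, _ = np.linalg.qr(M @ H)       # Range Finder: orthogonal basis of M H
Y = X.T @ M                      # least-squares second stage
err_F = np.linalg.norm(M - X @ Y)
tail = np.sqrt(np.sum(s[r:] ** 2))  # sigma_{F,r+1}(M)
# err_F should be a small multiple of the optimal Frobenius tail
print(err_F, (1 + r / (l - r - 1)) ** 0.5 * tail)
```

With oversampling $l-r=7$ the observed error is typically close to the expectation-level bound of claim (i); the bound itself holds in expectation, so an individual run can exceed it slightly.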
], [ "Small families of hard inputs for sublinear cost ", "Any sublinear cost LRA algorithm fails on the following small families of LRA inputs.", "Example D.1 Let $\\Delta _{i,j}$ denote an $m\\times n$ matrix of rank 1 filled with 0s except for its $(i,j)$ th entry filled with 1.", "The $mn$ such matrices $\\lbrace \\Delta _{i,j}\\rbrace _{i,j=1}^{m,n}$ form a family of $\\delta $ -matrices.", "We also include the $m\\times n$ null matrix $O_{m,n}$ filled with 0s into this family.", "Now fix any sublinear cost algorithm; it does not access the $(i,j)$ th entry of its input matrices for some pair of $i$ and $j$ .", "Therefore it outputs the same approximation of the matrices $\\Delta _{i,j}$ and $O_{m,n}$ , with an undetected error at least 1/2.", "Arrive at the same conclusion by applying the same argument to the set of $mn+1$ small-norm perturbations of the matrices of the above family and to the $mn+1$ sums of the latter matrices with any fixed $m\\times n$ matrix of low rank.", "Finally, the same argument shows that a posteriori estimation of the output errors of an LRA algorithm applied to the same input families cannot run at sublinear cost.", "This example actually covers randomized LRA algorithms as well.", "Indeed suppose that with a positive constant probability an LRA algorithm does not access $K$ entries of an input matrix and apply this algorithm to two matrices of low rank whose difference at all these entries is equal to a large constant $C$ .", "Then clearly with a positive constant probability the algorithm has errors at least $C/2$ at at least $K/2$ of these entries.", "The paper [34] shows, however, that close LRA of a matrix that admits sufficiently close LRA can be computed at sublinear cost in two successive C-A iterations unless we initiate these iterations with a degeneracy, that is, with a submatrix of smaller numerical rank.", "Such a poor initialization occurs when we apply C-A iterations to the matrix families of Example REF , but we 
readily compute close LRA if we recursively perform C-A iterations and avoid degeneracy of the above kind at least at the initialization of a single C-A step." ], [ "Generation of two families of sparse test matrices", "In this section we specify two particular families of sparse test matrices, which were highly efficient in our tests when we applied them both themselves and in combination with other sparse test matrices.", "We define test matrices of these families by means of abridging the classical recursive processes of the generation of $n\\times n$ SRHT and SRFT matrices for $n=2^t$ .", "These matrices are obtained from the $n\\times n$ dense matrices $H_n$ of Walsh-Hadamard transform (cf.", "[26]) and $F_n$ of discrete Fourier transform (DFT) at $n$ points (cf.", "[33]), respectively.", "Recursive representation in $t$ recursive steps enables multiplication of the matrices $H_n$ and $F_n$ by a vector in $2tn$ additions and subtractions and $O(tn)$ flops, respectively.", "We end these processes in $d$ recursive steps for a fixed recursion depth $d$ , $1\\le d\\le t$ , and obtain the $d$ -abridged Hadamard (AH) and Fourier (AF) matrices $H_{d,d}$ and $F_{d,d}$ , respectively, such that $H_{t,t}=H_n$ and $F_{t,t}=F_n$ .", "Namely write $H_{d,0}:=F_{d,0}:=I_{n/2^d}$ , ${\\bf i}:=\\sqrt{-1}$ , and $\\omega _{s}:=\\exp (2\\pi {\\bf i}/s)$ , denoting a primitive $s$ -th root of 1, and then specify two recursive processes: $H_{d,0}:=I_{n/2^d},~H_{d,i+1}:=\\begin{pmatrix}H_{d,i} & H_{d,i} \\\\H_{d,i} & -H_{d,i}\\end{pmatrix}~{\\rm for}~i=0,1,\\dots ,d-1,$ $F_{d,i+1}:=\\widehat{P}_{i+1}\\begin{pmatrix}F_{d,i}&~~F_{d,i}\\\\F_{d,i}\\widehat{D}_{i+1}&-F_{d,i}\\widehat{D}_{i+1}\\end{pmatrix},~\\widehat{D}_{i+1}:=\\operatorname{diag}\\Big (\\omega _{2^{i+1}}^{j}\\Big )_{j=0}^{2^i-1},~i=0,1,\\dots ,d-1,$ where $\\widehat{P}_{i}$ denotes the $2^i\\times 2^i$ matrix of odd/even permutations such that $\\widehat{P}_{i}{\\bf u}={\\bf v}$ , ${\\bf u}=(u_j)_{j=0}^{2^{i}-1}$ , 
${\\bf v}=(v_j)_{j=0}^{2^{i}-1}$ , $v_j=u_{2j}$ , $v_{j+2^{i-1}}=u_{2j+1}$ , $j=0,1,\\ldots ,2^{i-1}-1$ .", "For $d=t$ this is a decimation in frequency (DIF) radix-2 representation of FFT.", "Transposition turns it into the decimation in time (DIT) radix-2 representation of FFT.", "For any fixed pair of $d$ and $i$ , each of the matrices $H_{d,i}$ (resp.", "$F_{d,i}$ ) is orthogonal (resp.", "unitary) up to scaling and has $2^d$ nonzero entries in every row and column.", "Now make up test matrices $F$ and $H$ of $k\\times m$ and $n\\times l$ submatrices of $F_{d,d}$ and $H_{d,d}$ , respectively.", "Then in view of sparseness of $F_{d,d}$ or $H_{d,d}$ , we can compute the products $FM$ and $MH$ by using $O(kn2^d)$ and $O(lm2^d)$ flops, respectively, and they are just additions or subtractions in the case of submatrices of $H_{d,d}$ .", "Define the $d$ –Abridged Scaled and Permuted Hadamard (ASPH) matrices, $PDH_{d,d}$ , and $d$ –Abridged Scaled and Permuted Fourier (ASPF) $n\\times n$ matrices, $PD^{\\prime }F_{d,d}$ , where $P$ is a random sampling matrix, $D$ is the matrix of Rademacher's or another random integer diagonal scaling, and $D^{\\prime }$ is a matrix of random unitary diagonal scaling.", "Likewise define the families of ASH, ASF, APH, and APF matrices, $DH_{n,d}$ , $DF_{n,d}$ , $H_{n,d}P$ , and $F_{n,d}P$ , respectively.", "Each random permutation or scaling contributes up to $n$ random parameters.", "We can involve more random parameters by applying random permutation and scaling also to some or all intermediate matrices $H_{d,i}$ and $F_{d,i}$ for $i=0,1,\\dots ,d$ .", "The first $k$ rows for $r\\le k\\le n$ or first $l$ columns for $r\\le l\\le n$ of $H_{d,d}$ and $F_{d,d}$ form $d$ -abridged Hadamard or Fourier test matrices, which turn into SRHT or SRFT matrices, respectively, for $d=t$ .", "For $k$ and $l$ of order $r\\log (r)$ Algorithm REF with an SRHT or SRFT test matrix outputs whp accurate LRA of any matrix $M$ admitting LRA (see [21]), but 
in our tests the output was consistently accurate even with sparse abridged SRHT or SRFT test matrices, typically even when we computed them just in three recursive steps." ], [ "Volume of a factor-Gaussian matrix", "In this section we let random or randomized inputs have the distribution of a two-sided factor-Gaussian matrix with expected rank $r$ .", "For an $m\\times r$ Gaussian matrix $G$ , with $r \\le m$ , it is well-known that $ \\operatornamewithlimits{{\\it v}_2}(G)^2 \\sim \\prod _{i=1}^{r} \\chi ^2_{m-i+1},$ where $\\chi ^2_{m-i+1}$ denotes independent $\\chi ^2$ random variables with $m-i+1$ degrees of freedom, for $i=1,\\dots ,r$ .", "Next we recall two results, for the concentration of $\\chi ^2$ random variables and for the statistical order of $r\\cdot \\operatornamewithlimits{{\\it v}_2}(G)^{2/r}$ and $\\chi ^2$ random variables with appropriate degrees of freedom, respectively.", "Based on these results, we prove the concentration of the volume of a Gaussian matrix.", "Lemma F.1 (adapted from [23]) Let $Z\\sim \\chi ^2_r$ for a positive integer $r$ .", "Then $ \\operatorname{Probability}\\big \\lbrace \\frac{Z}{r} \\ge 1 + \\theta \\big \\rbrace \\le \\exp (-\\frac{\\theta r}{4})$ for any $\\theta > 4$ , and $ \\operatorname{Probability}\\big \\lbrace \\frac{Z}{r} \\le 1 - \\phi \\big \\rbrace \\le \\exp (-\\frac{\\phi ^2 r}{4})$ for any $\\phi > 0$ .", "Theorem F.1 ([29]) Let $m \\ge r \\ge 2$ and let $G$ be an $m\\times r$ Gaussian matrix.", "Then $\\chi ^2_{r(m-r+1) + \\frac{(r-1)(r-2)}{2}} \\succeq r\\operatornamewithlimits{{\\it v}_2}(G)^{2/r} \\succeq \\chi ^2_{r(m-r+1)}.$ Next we estimate the volume of a Gaussian matrix.", "Lemma F.2 Let $m \\ge r \\ge 2$ and let $G$ be an $m\\times r$ Gaussian matrix.", "Then $ \\operatorname{Probability}\\big \\lbrace \\operatornamewithlimits{{\\it v}_2}(G) \\ge (1+\\theta )^{r/2} (m-r/2)^{r/2}\\big \\rbrace \\le \\exp {\\big (-\\frac{\\theta }{4}(mr-\\frac{r^2}{2} - \\frac{r}{2} + 1) \\big )}$ for 
$\\theta > 4$ , and $ \\operatorname{Probability}\\big \\lbrace \\operatornamewithlimits{{\\it v}_2}(G) \\le (1-\\phi )^{r/2} (m-r+1)^{r/2}\\big \\rbrace \\le \\exp {\\big (-\\frac{\\phi ^2}{4}r(m-r+1)\\big )} $ for $\\phi > 0$ .", "Combine Lemma REF and Theorem REF .", "We also need the following result, proved in [31] (see an alternative proof in [24]).", "Theorem F.2 For an $m\\times q$ matrix $G$ , a $q\\times n$ matrix $H$ , and their product $W=GH$ , it holds that $v_{2,r}(W)\\le v_{2,r}(G)v_{2,r}(H)$ for $1\\le r\\le q$ .", "Next we assume that $\\min \\lbrace m, n\\rbrace \\gg r$ and that $p$ and $q$ are two sufficiently large integers; then we prove that the volume of any fixed $p\\times q$ submatrix of an $m\\times n$ two-sided factor-Gaussian matrix with expected rank $r$ has a reasonably large lower bound whp.", "Theorem F.3 Let $W = G\\Sigma H$ be an $m\\times n$ two-sided factor-Gaussian matrix with expected rank $r \\le \\min \\lbrace m, n\\rbrace $ .", "Let $\\mathcal {I}$ and $\\mathcal {J}$ be row and column index sets such that $|\\mathcal {I}| = p \\ge r$ and $|\\mathcal {J}| = q \\ge r$ .", "Let $\\phi $ be a positive number.", "Then $ \\operatornamewithlimits{{\\it v}_{2, r}}{\\big ( W_{\\mathcal {I}, \\mathcal {J}}\\big )} \\ge (1-\\phi )^r(p-r+1)^{r/2}(q-r+1)^{r/2}\\operatornamewithlimits{{\\it v}_{2, r}}\\big (\\Sigma \\big ) $ with a probability no less than $1 - \\exp {\\big (-\\frac{\\phi ^2}{4}r(p-r+1)\\big )} - \\exp {\\big (-\\frac{\\phi ^2}{4}r(q-r+1)\\big )}$ .", "Recall Theorem REF and obtain $ \\operatornamewithlimits{{\\it v}_{2, r}}{\\big (W_{\\mathcal {I}, \\mathcal {J}}\\big )} = \\operatornamewithlimits{{\\it v}_2}\\big (G_{\\mathcal {I}, :}\\big )\\operatornamewithlimits{{\\it v}_2}\\big (\\Sigma \\big ) \\operatornamewithlimits{{\\it v}_2}\\big (H_{:, \\mathcal {J}}\\big ),$ where $G_{\\mathcal {I}, :}$ and $H_{:, \\mathcal {J}}$ are independent Gaussian matrices.", "Complete the proof by applying Lemma REF and the Union Bound.", "Next we extend the latter 
theorem to the estimation of the volume of a two-sided factor-Gaussian matrix.", "Due to the volume concentration of a Gaussian matrix, it is unlikely that the maximum volume of a matrix in a set of moderate number of Gaussian matrices greatly exceeds the volume of a fixed matrix in this set.", "Based on this observation, we estimate weak maximization of the volume of any fixed submatrix of a two-sided factor-Gaussian matrix.", "Lemma F.3 Let $G_1, G_2, \\dots , G_M$ be a collection of $M$ Gaussian matrices of size $m\\times r$ with $m \\ge r$ .", "Then $\\frac{\\max _{1\\le i\\le M} \\big (\\operatornamewithlimits{{\\it v}_2}(G_i)\\big )}{\\operatornamewithlimits{{\\it v}_2}(G_1)} \\le \\Big ( \\frac{(1+\\theta )(1+r/m)}{1-\\phi } \\Big )^{r/2}$ whp specified in the proof.", "Define $V_{max} := \\max _{1\\le i\\le M} \\big (\\operatornamewithlimits{{\\it v}_2}(G_i)\\big )$ .", "Apply Lemma REF and the Union Bound, and obtain that $ \\operatorname{Probability}\\big \\lbrace V_{max}\\ge (1+\\theta )^{r/2} (m-r/2)^{r/2}\\big \\rbrace \\le M\\cdot \\exp {\\Big (-\\frac{\\theta }{4}\\Big (mr-\\frac{r^2}{2} - \\frac{r}{2} + 1\\Big ) \\Big )} $ for $\\theta > 4$ .", "Moreover, $ \\operatorname{Probability}\\big \\lbrace \\operatornamewithlimits{{\\it v}_2}(G_1) \\le (1-\\phi )^{r/2} (m-r+1)^{r/2}\\big \\rbrace \\le \\exp {\\big (-\\frac{\\phi ^2}{4}r(m-r+1)\\big )} $ for $\\phi > 0$ .", "Now assume that $m > 2r$ and readily deduce that $\\frac{m - r/2}{ m - r + 1} < 1 + \\frac{r}{m}.$", "Combine these results and obtain that inequality (REF ) holds with a probability no less than $1 - M\\cdot \\exp {\\big (-\\frac{\\theta }{4}(mr-\\frac{r^2}{2} - \\frac{r}{2} + 1) \\big )} - \\exp {\\big (-\\frac{\\phi ^2}{4}r(m-r+1)\\big )}$ .", "Remark F.1 The exponent $r/2$ in the volume ratio may be disturbing but is natural because $\\operatornamewithlimits{{\\it v}_2}(G_i)$ is essentially the volume of an $r$ -dimensional parallelepiped, and a difference in each dimension will 
contribute to the difference in the volume.", "The impact of factor $M$ on the probability estimates can be mitigated with parameter $m$ , that is, the probability is high and even close to 1 if $m$ is set sufficiently large.", "Namely, let $m \\ge 1 + r + \\frac{4\\ln M}{r\\theta }.$ Then we can readily deduce that $M\\cdot \\exp \\Big (-\\frac{\\theta }{4}\\Big (mr-\\frac{r^2}{2} - \\frac{r}{2} + 1\\Big )\\Big ) < \\exp \\Big (-\\frac{\\theta }{4}r\\Big )$ and $\\exp {\\Big (-\\frac{\\phi ^2}{4}r(m-r+1)\\Big )} < \\exp \\Big ( -\\frac{\\phi ^2}{2}r\\Big ).$ Theorem F.4 Let $W = G\\Sigma H $ be an $m\\times n$ two-sided factor-Gaussian matrix with expected rank $r$ and let $r < \\min \\big (m, n\\big )$ .", "Let $\\mathcal {I}$ and $\\mathcal {J}$ be row and column index sets such that $|\\mathcal {I}| = p > 2r$ and $|\\mathcal {J}| = q > 2r$ .", "Let $\\theta > 4$ and $\\phi > 0$ be two parameters, and further assume that $p \\ge 1 + r + \\frac{4\\ln (m^2/4)}{r\\theta }$ and $q \\ge 1 + r + \\frac{4\\ln (n^2/4)}{r\\theta }$ .", "Then $W_{\\mathcal {I}, \\mathcal {J}}$ is a submatrix with $\\big (\\frac{1+\\theta }{1-\\phi }\\big )^r\\big (\\frac{(p+r)(q+r)}{pq} \\big )^{r/2}$ -weakly maximal $r$ -projective volume with a probability no less than $1 - 2\\exp {\\big (-\\frac{\\theta }{4}r \\big )}-2\\exp {\\big (-\\frac{\\phi ^2}{2}r\\big )}$ .", "Notice that there are $p(m-p) \\le m^2/4$ submatrices of $G$ that have size $p\\times r$ and differ from $G_{\\mathcal {I}, :}$ by one row and that likewise there are $q(n-q) \\le n^2/4$ submatrices that have size $q\\times r$ of $H$ and differ from $H_{:, \\mathcal {J}}$ by one column.", "Now let $\\mathcal {I}^{\\prime }$ and $\\mathcal {J}^{\\prime }$ be any pair of row and column index sets that differ from $\\mathcal {I}$ and $\\mathcal {J}$ by a single index, respectively.", "Then Lemma REF and Remark REF together imply that $\\frac{\\operatornamewithlimits{{\\it v}_2}\\big (G_{\\mathcal {I}^{\\prime }, :}\\big )}{\\operatornamewithlimits{{\\it v}_2}\\big (G_{\\mathcal {I}, :}\\big )} \\le \\Big ( \\frac{(1+\\theta )(1+\\frac{r}{p})}{1-\\phi } \\Big )^{r/2}~\\textrm { and }~\\frac{\\operatornamewithlimits{{\\it v}_2}\\big (H_{:, \\mathcal {J}^{\\prime }}\\big )}{\\operatornamewithlimits{{\\it v}_2}\\big (H_{:, \\mathcal {J}}\\big )} \\le \\Big ( \\frac{(1+\\theta )(1+\\frac{r}{q})}{1-\\phi } \\Big )^{r/2}$ with a probability no less than $1 - 2\\exp {\\big (-\\frac{\\theta }{4}r \\big )}-2\\exp {\\big (-\\frac{\\phi ^2}{2}r\\big )}$ .", "Recall that $\\operatornamewithlimits{{\\it v}_{2, r}}\\big ( W_{\\mathcal {I}, \\mathcal {J}} \\big )= \\operatornamewithlimits{{\\it v}_2}\\big ( G_{\\mathcal {I}, :}\\big )\\operatornamewithlimits{{\\it v}_2}\\big (\\Sigma \\big )\\operatornamewithlimits{{\\it v}_2}\\big ( H_{:, \\mathcal {J}}\\big ),$ and similarly for $W_{\\mathcal {I}^{\\prime }, \\mathcal {J}^{\\prime }}$ .", "Inequality (REF ) implies that $\\frac{\\operatornamewithlimits{{\\it v}_{2, r}}\\big ( W_{\\mathcal {I}^{\\prime }, \\mathcal {J}^{\\prime }}\\big )}{\\operatornamewithlimits{{\\it v}_{2, r}}\\big ( W_{\\mathcal {I}, \\mathcal {J}}\\big )} \\le \\Big (\\frac{1+\\theta }{1-\\phi }\\Big )^r\\Big (\\frac{(p+r)(q+r)}{pq} \\Big )^{r/2}.$", "Remark F.2 Recall from the beginning of Section REF that such results as Theorems REF and REF can only be extended from a factor-Gaussian matrix into its very small neighborhood.", "Acknowledgements: Our research has been supported by NSF Grants CCF–1563942 and CCF–1733834 and PSC CUNY Award 69813 00 48.", "We also thank E. E. Tyrtyshnikov for the challenge of providing formal support for empirical efficiency of C-A iterations, and we very much appreciate the reviewers' thoughtful comments, which helped us to improve our initial draft most significantly." ] ]
1906.04327
[ [ "Asymptotic Guarantees for Learning Generative Models with the\n Sliced-Wasserstein Distance" ], [ "Abstract Minimum expected distance estimation (MEDE) algorithms have been widely used for probabilistic models with intractable likelihood functions and they have become increasingly popular due to their use in implicit generative modeling (e.g.", "Wasserstein generative adversarial networks, Wasserstein autoencoders).", "Emerging from computational optimal transport, the Sliced-Wasserstein (SW) distance has become a popular choice in MEDE thanks to its simplicity and computational benefits.", "While several studies have reported empirical success on generative modeling with SW, the theoretical properties of such estimators have not yet been established.", "In this study, we investigate the asymptotic properties of estimators that are obtained by minimizing SW. We first show that convergence in SW implies weak convergence of probability measures in general Wasserstein spaces.", "Then we show that estimators obtained by minimizing SW (and also an approximate version of SW) are asymptotically consistent.", "We finally prove a central limit theorem, which characterizes the asymptotic distribution of the estimators and establish a convergence rate of $\\sqrt{n}$, where $n$ denotes the number of observed data points.", "We illustrate the validity of our theory on both synthetic data and neural networks." 
], [ "Introduction", "Minimum distance estimation (MDE) is a generalization of maximum-likelihood inference, where the goal is to minimize a distance between the empirical distribution of a set of independent and identically distributed (i.i.d.)", "observations $Y_{1:n} =(Y_1,\\dots ,Y_n)$ and a family of distributions indexed by a parameter $\\theta $ .", "The problem is formally defined as follows [1], [2]: $\\hat{\\theta }_n = \\mathrm {argmin}_{\\theta \\in \\Theta }\\ \\mathbf {D}(\\hat{\\mu }_n, \\mu _\\theta ) \\;,$ where $\\mathbf {D}$ denotes a distance (or a divergence in general) between probability measures, $\\mu _\\theta $ denotes a probability measure indexed by $\\theta $ , $\\Theta $ denotes the parameter space, and $\\hat{\\mu }_n= \\frac{1}{n} \\sum \\nolimits _{i=1}^n \\delta _{Y_i}$ denotes the empirical measure of $Y_{1:n}$ , with $\\delta _Y$ being the Dirac distribution with mass on the point $Y$ .", "When $\\mathbf {D}$ is chosen as the Kullback-Leibler divergence, this formulation coincides with maximum likelihood estimation (MLE) [2].", "While MDE provides a fruitful framework for statistical inference, when working with generative models, solving the optimization problem in (REF ) might be intractable since it might be impossible to evaluate the probability density function associated with $\\mu _\\theta $ .", "Nevertheless, in various settings, even if the density is not available, one can still generate samples from the distribution $\\mu _\\theta $ , and such samples turn out to be useful for making inference.", "More precisely, under such settings, a natural alternative to (REF ) is the minimum expected distance estimator, which is defined as follows [3]: $ \\hat{\\theta }_{n, m} = \\mathrm {argmin}_{\\theta \\in \\Theta }\\ \\mathbb {E}\\left[ \\mathbf {D}(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right] \\;.$ Here, $\\hat{\\mu }_{\\theta , m} = \\frac{1}{m} \\sum \\nolimits _{i=1}^m \\delta _{Z_i}$ denotes the empirical distribution of 
$Z_{1:m}$ , that is, a sequence of i.i.d.", "random variables with distribution $\\mu _\\theta $ .", "This algorithmic framework has computationally favorable properties since one can replace the expectation with a simple Monte Carlo average in practical applications.", "In the context of MDE, distances that are based on optimal transport (OT) have become increasingly popular due to their computational and theoretical properties [4], [5], [6], [7], [8].", "For instance, if we replace the distance $\\mathbf {D}$ in (REF ) with the Wasserstein distance (defined in sec:prel-techn-backgr below), we obtain the minimum expected Wasserstein estimator [3].", "In the classical statistical inference setting, the typical use of such an estimator is to infer the parameters of a measure whose density does not admit an analytical closed-form formula [2].", "On the other hand, in the implicit generative modeling (IGM) setting, this estimator forms the basis of two popular IGM strategies: Wasserstein generative adversarial networks (GAN) [4] and Wasserstein variational auto-encoders (VAE) [5] (cf.", "[9] for their relation).", "The goal of these two methods is to find the best parametric transport map $T_\\theta $ , such that $T_\\theta $ transforms a simple distribution $\\mu $ (e.g.", "standard Gaussian or uniform) to a potentially complicated data distribution $\\hat{\\mu }_n$ by minimizing the Wasserstein distance between the transported distribution $\\mu _\\theta = T_{\\theta \\sharp }\\mu $ and $\\hat{\\mu }_n$ , where $\\sharp $ denotes the push-forward operator, to be defined in the next section.", "In practice, $T_\\theta $ is typically chosen as a neural network, for which it is often impossible to evaluate the density of the induced measure $\\mu _\\theta $ .", "However, one can easily generate samples from $\\mu _\\theta $ by first generating a sample from $\\mu $ and then applying $T_\\theta $ to that sample, making minimum expected distance estimation (REF ) feasible for this 
setting.", "Motivated by its practical success, the theoretical properties of this estimator have recently come under investigation [10], [11] and very recently Bernton et al.", "[3] have established the consistency (for the general setting) and the asymptotic distribution (for the one-dimensional setting) of this estimator.", "Even though estimation with the Wasserstein distance has served as a fertile ground for many generative modeling applications, except for the case when the measures are supported on $\\mathbb {R}^1$ , the computational complexity of minimum Wasserstein estimators rapidly becomes excessive with increasing problem dimension, and developing accurate and efficient approximations is a highly non-trivial task.", "Therefore, there have been several attempts to use more practical alternatives to the Wasserstein distance [12], [6].", "In this context, the Sliced-Wasserstein (SW) distance [13], [14], [15] has been an increasingly popular alternative to the Wasserstein distance, which is defined as an average of one-dimensional Wasserstein distances and can therefore be computed efficiently.", "While several studies have reported empirical success on generative modeling with SW [16], [17], [18], [19], the theoretical properties of such estimators have not yet been fully established.", "Bonnotte [14] proved that SW is a proper metric, and in compact domains SW is equivalent to the Wasserstein distance, hence convergence in SW implies weak convergence in compact domains.", "[14] also analyzed the gradient flows based on SW, which then served as a basis for a recently proposed IGM algorithm [18].", "Finally, recent studies [16], [20] investigated the sample complexity of SW and established bounds for the SW distance between two measures and their empirical instantiations.", "In this paper, we investigate the asymptotic properties of estimators given in (REF ) and (REF ) when $\\mathbf {D}$ is replaced with the SW distance.", "We first 
prove that convergence in SW implies weak convergence of probability measures defined on general domains, which generalizes the results given in [14].", "Then, by using similar techniques to the ones given in [3], we show that the estimators defined by (REF ) and (REF ) are consistent, meaning that, as the number of observations $n$ increases, the estimates will get closer to the data-generating parameters.", "We finally prove a central limit theorem (CLT) in the multidimensional setting, which characterizes the asymptotic distribution of these estimators and establishes a convergence rate of $\\sqrt{n}$ .", "The CLT that we prove is stronger than the one given in [3] in the sense that it is not restricted to the one-dimensional setting.", "We support our theory with experiments that are conducted on both synthetic and real data.", "We first consider a classical statistical inference setting with a Gaussian model and a multidimensional $\\alpha $ -stable model whose density is not available in closed-form.", "In both models, the experiments validate our consistency and CLT results.", "We further observe that, especially for high-dimensional problems, the estimators obtained by minimizing SW have significantly better computational properties when compared to the ones obtained by minimizing the Wasserstein distance, as expected.", "In the IGM setting, we consider the neural network-based generative modeling algorithm proposed in [16] and show that our results hold in the real-data setting as well." 
], [ "Preliminaries and Technical Background", "We consider a probability space $(\\Omega , \\mathcal {F}, \\mathbb {P})$ with associated expectation operator $\\mathbb {E}$ , on which all the random variables are defined.", "Let $(Y_k)_{k \\in \\mathbb {N}}$ be a sequence of random variables associated with observations, where each observation takes value in $\\mathsf {Y}\\subset \\mathbb {R}^{d}$ .", "We assume that these observations are i.i.d.", "according to $\\mu _\\star \\in \\mathcal {P}(\\mathsf {Y})$ , where $\\mathcal {P}(\\mathsf {Y})$ stands for the set of probability measures on $\\mathsf {Y}$ .", "A statistical model is a family of distributions on $\\mathsf {Y}$ and is denoted by $\\mathcal {M} = \\lbrace \\mu _\\theta \\in \\mathcal {P}(\\mathsf {Y}),\\ \\theta \\in \\Theta \\rbrace $ , where $\\Theta \\subset \\mathbb {R}^{d_\\theta }$ is the parameter space.", "In this paper, we focus on parameter inference for purely generative models: for all $\\theta \\in \\Theta $ , we can generate i.i.d.", "samples $(Z_{k})_{k \\in \\mathbb {N}^*}\\in \\mathsf {Y}^{\\mathbb {N}^*}$ from $\\mu _\\theta $ , but the associated likelihood is numerically intractable.", "In the sequel, $(Z_k)_{k \\in \\mathbb {N}^*}$ denotes an i.i.d.", "sequence from $\\mu _{\\theta }$ with $\\theta \\in \\Theta $ , and for any $m \\in \\mathbb {N}^*$ , $\\hat{\\mu }_{\\theta , m} = (1/m) \\sum _{i=1}^m \\delta _{Z_i}$ denotes the corresponding empirical distribution.", "Throughout our study, we assume that the following conditions hold: (1) $\\mathsf {Y}$ , endowed with the Euclidean distance $\\rho $ , is a Polish space, (2) $\\Theta $ , endowed with the distance $\\rho _\\Theta $ , is a Polish space, (3) $\\Theta $ is a $\\sigma $ -compact space, i.e.", "the union of countably many compact subspaces, and (4) parameters are identifiable, i.e.", "$\\mu _\\theta = \\mu _{\\theta ^{\\prime }}$ implies $\\theta = \\theta ^{\\prime }$ .", "We endow $\\mathcal {P}(\\mathsf {Y})$ with the 
Lévy-Prokhorov distance $\\mathbf {d}_{\\mathcal {P}}$ , which metrizes the weak convergence by [21] since $\\mathsf {Y}$ is assumed to be a Polish space.", "We denote by $\\mathcal {Y}$ the Borel $\\sigma $ -field of $(\\mathsf {Y},\\rho )$ .", "Wasserstein distance.", "For $p \\ge 1$ , we denote by $\\mathcal {P}_p(\\mathsf {Y})$ the set of probability measures on $\\mathsf {Y}$ with finite $p$ 'th moment: $\\mathcal {P}_p(\\mathsf {Y}) = \\left\\lbrace \\mu \\in \\mathcal {P}(\\mathsf {Y})\\,:\\;\\int _{\\mathsf {Y}} \\Vert y - y_0 \\Vert ^p \\mathrm {d}\\mu (y) < +\\infty , \\, \\text{ for some $y_0 \\in \\mathsf {Y}$}\\right\\rbrace $ .", "The Wasserstein distance of order $p$ between any $\\mu , \\nu \\in \\mathcal {P}_p(\\mathsf {Y})$ is defined by [22], $\\mathbf {W}_p^{p}(\\mu , \\nu ) = \\inf _{\\gamma \\in \\Gamma (\\mu , \\nu )} \\left\\lbrace \\int _{\\mathsf {Y}\\times \\mathsf {Y}} \\Vert x - y \\Vert ^p \\mathrm {d}\\gamma (x,y) \\right\\rbrace \\;,$ where $\\Gamma (\\mu , \\nu )$ is the set of probability measures $\\gamma $ on $(\\mathsf {Y}\\times \\mathsf {Y},\\mathcal {Y}\\otimes \\mathcal {Y})$ satisfying $\\gamma (\\mathsf {A}\\times \\mathsf {Y}) = \\mu (\\mathsf {A})$ and $\\gamma (\\mathsf {Y}\\times \\mathsf {A}) = \\nu (\\mathsf {A})$ for any $\\mathsf {A}\\in \\mathcal {B}(\\mathsf {Y})$ .", "The space $\\mathcal {P}_p(\\mathsf {Y})$ endowed with the distance $\\mathbf {W}_p$ is a Polish space by [22] since $(\\mathsf {Y}, \\rho )$ is assumed to be Polish.", "The one-dimensional case is a favorable scenario for which computing the Wasserstein distance of order $p$ between $\\mu , \\nu \\in \\mathcal {P}_p(\\mathbb {R})$ becomes relatively easy since it has a closed-form formula, given by [23]: $\\mathbf {W}_p^p(\\mu , \\nu ) = \\int _{0}^1 \\left| F_\\mu ^{-1}(t) - F_\\nu ^{-1}(t) \\right|^p \\mathrm {d}t = \\int _{\\mathbb {R}} \\left| s - F_\\nu ^{-1}(F_\\mu (s)) \\right|^p \\mathrm {d}\\mu (s)\\;,$ where $F_\\mu $ and $F_\\nu $ denote the cumulative distribution functions (CDF) of $\\mu $ 
and $\\nu $ respectively, and $F_\\mu ^{-1}$ and $F_\\nu ^{-1}$ are the quantile functions of $\\mu $ and $\\nu $ respectively.", "For empirical distributions, (REF ) is calculated by simply sorting the $n$ samples drawn from each distribution and computing the average cost between the sorted samples.", "Sliced-Wasserstein distance.", "The analytical form of the Wasserstein distance for one-dimensional distributions is an attractive property that gives rise to an alternative metric referred to as the Sliced-Wasserstein (SW) distance [13], [15].", "The idea behind SW is to first obtain a family of one-dimensional representations for a higher-dimensional probability distribution through linear projections, and then compute the average of the Wasserstein distances between these one-dimensional representations.", "More formally, let $\\mathbb {S}^{d-1} = \\left\\lbrace u\\in \\mathbb {R}^d\\,:\\;\\Vert u\\Vert = 1\\right\\rbrace $ be the $d$ -dimensional unit sphere, and denote by $\\left\\langle \\cdot ,\\cdot \\right\\rangle $ the Euclidean inner-product.", "For any $u\\in \\mathbb {S}^{d-1}$ , we define $u^{\\star }$ , the linear form associated with $u$ , for any $y \\in \\mathsf {Y}$ by $u^{\\star }(y) = \\left\\langle u,y \\right\\rangle $ .", "The Sliced-Wasserstein distance of order $p$ is defined for any $\\mu ,\\nu \\in \\mathcal {P}_p(\\mathsf {Y})$ as, $\\mathbf {SW}_p^{p}(\\mu , \\nu ) = \\int _{\\mathbb {S}^{d-1}} \\mathbf {W}_p^p(u^{\\star }_{\\sharp } \\mu , u^{\\star }_{\\sharp } \\nu ) \\mathrm {d}\\sigma (u)$ where $\\sigma $ is the uniform distribution on $\\mathbb {S}^{d-1}$ and for any measurable function $f :\\mathsf {Y}\\rightarrow \\mathbb {R}$ and $\\zeta \\in \\mathcal {P}(\\mathsf {Y})$ , $f_{\\sharp }\\zeta $ is the push-forward measure of $\\zeta $ by $f$ , i.e.", "for any $\\mathsf {A}\\in \\mathcal {B}(\\mathbb {R})$ , $f_{\\sharp }\\zeta (\\mathsf {A}) = \\zeta (f^{-1}(\\mathsf {A}))$ where $f^{-1}(\\mathsf {A}) = \\lbrace y \\in \\mathsf {Y}\\, : \\, f(y) \\in \\mathsf 
{A}\\rbrace $ .", "$\\mathbf {SW}_p$ is a distance on $\\mathcal {P}_p(\\mathsf {Y})$ [14] and has important practical implications: in practice, the integration in (REF ) is approximated using a Monte Carlo scheme that randomly draws a finite set of samples from $\\sigma $ on $\\mathbb {S}^{d-1}$ and replaces the integral with a finite-sample average.", "Therefore, the evaluation of the SW distance between $\\mu , \\nu \\in \\mathcal {P}_p(\\mathsf {Y})$ has significantly lower computational requirements than the Wasserstein distance, since it consists in solving several one-dimensional optimal transport problems, which have closed-form solutions." ], [ "Asymptotic Guarantees for Minimum Sliced-Wasserstein Estimators", "We define the minimum Sliced-Wasserstein estimator (MSWE) of order $p$ as the estimator obtained by plugging $\\mathbf {SW}_p$ in place of $\\mathbf {D}$ in (REF ).", "Similarly, we define the minimum expected Sliced-Wasserstein estimator (MESWE) of order $p$ as the estimator obtained by plugging $\\mathbf {SW}_p$ in place of $\\mathbf {D}$ in (REF ).", "In the rest of the paper, MSWE and MESWE will be denoted by $\\hat{\\theta }_{n}$ and $\\hat{\\theta }_{n,m}$ respectively.", "We present the asymptotic properties that we derived for MSWE and MESWE, namely their existence and consistency.", "We study their measurability in subsec:measurability of the supplementary document.", "We also formulate a CLT that characterizes the asymptotic distribution of MSWE and establishes a convergence rate for any dimension.", "We provide all the proofs in sec:postponed-proofs of the supplementary document.", "Note that, since the Sliced-Wasserstein distance is an average of one-dimensional Wasserstein distances, some proofs are, inevitably, similar to the proofs done in [3].", "However, the adaptation of these techniques to the SW case is made possible by the identification of novel properties regarding the topology induced by the SW distance, which, to the best of our knowledge, we establish for the 
first time in this study." ], [ "Topology induced by the Sliced-Wasserstein distance", "We begin this section with a useful result which we believe is interesting on its own and implies that the topology induced by $\\mathbf {SW}_p$ on $\\mathcal {P}_p(\\mathbb {R}^d)$ is finer than the weak topology induced by the Lévy-Prokhorov metric $\\mathbf {d}_{\\mathcal {P}}$ .", "Theorem 1 Let $p \\in [1, +\\infty )$ .", "Convergence in $\\mathbf {SW}_p$ implies weak convergence in $\\mathcal {P}(\\mathbb {R}^d)$ .", "In other words, if ${\\mu _k}$ is a sequence of measures in $\\mathcal {P}_p(\\mathbb {R}^d)$ satisfying $ \\lim _{k \\rightarrow +\\infty } \\mathbf {SW}_p(\\mu _k, \\mu ) = 0$ , with $\\mu \\in \\mathcal {P}_p(\\mathbb {R}^d)$ , then ${\\mu _k} \\xrightarrow{}\\mu $ .", "The property that convergence in $\\mathbf {SW}_p$ implies weak convergence has already been proven in [14], but only for compact domains.", "While the implication of weak convergence is one of the most crucial requirements that a distance metric should satisfy, to the best of our knowledge, this implication has not been proved for general domains before.", "In [14], the main proof technique was based on showing that $\\mathbf {SW}_p$ is equivalent to $\\mathbf {W}_p$ in compact domains, whereas we follow a different path and use the Lévy characterization." 
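The sliced construction described above translates directly into a short numerical routine. The sketch below is our own plain-NumPy illustration (not the paper's code): it approximates the SW distance between two empirical measures by averaging one-dimensional Wasserstein distances, each computed with the sorting formula, over random directions drawn on the unit sphere.

```python
import numpy as np

def sliced_wasserstein(X, Y, p=2, n_projections=200, seed=0):
    """Monte Carlo estimate of the SW_p distance between the empirical
    measures of two equal-size samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # directions uniformly distributed on the unit sphere S^{d-1}
    U = rng.normal(size=(n_projections, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    total = 0.0
    for u in U:
        # project, then sort: sorting solves the 1-D transport problem
        x, y = np.sort(X @ u), np.sort(Y @ u)
        total += np.mean(np.abs(x - y) ** p)
    return (total / n_projections) ** (1.0 / p)

rng = np.random.default_rng(42)
A = rng.normal(0.0, 1.0, size=(2000, 5))
B = rng.normal(0.0, 1.0, size=(2000, 5))   # same distribution as A
C = rng.normal(3.0, 1.0, size=(2000, 5))   # mean-shifted distribution
d_close = sliced_wasserstein(A, B)
d_far = sliced_wasserstein(A, C)
assert 0.0 <= d_close < d_far  # SW is small between nearby laws
```

The cost is dominated by sorting, i.e. $O(Kn\log n)$ for $K$ projections, which is what makes SW attractive compared to solving a $d$-dimensional transport problem.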
], [ "Existence and consistency of MSWE and MESWE", "In our next set of results, we will show that both MSWE and MESWE are consistent, in the sense that, when the number of observations $n$ increases, the estimators will converge to a parameter $\\theta _\\star $ that minimizes the ideal problem $\\theta \\mapsto \\mathbf {SW}_p(\\mu _\\star ,\\mu _\\theta )$ .", "Before we make this argument more precise, let us first present the assumptions that will imply our results.", "A 1 The map $\\theta \\mapsto \\mu _\\theta $ is continuous from $(\\Theta ,\\rho _{\\Theta })$ to $(\\mathcal {P}(\\mathsf {Y}),\\mathbf {d}_{\\mathcal {P}})$ , i.e.", "for any sequence $(\\theta _n)_{n \\in \\mathbb {N}}$ in $\\Theta $ , satisfying $\\lim _{n \\rightarrow +\\infty } \\rho _\\Theta (\\theta _n, \\theta ) = 0$ , we have ${\\mu _{\\theta _n}} \\xrightarrow{}\\mu _\\theta $ .", "A 2 The data-generating process is such that $\\lim _{n \\rightarrow +\\infty } \\mathbf {SW}_p(\\hat{\\mu }_n, \\mu _\\star ) = 0$ , $\\mathbb {P}$ -almost surely.", "A 3 There exists $\\epsilon > 0$ , such that setting $\\epsilon _\\star = \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta )$ , the set $\\Theta ^\\star _\\epsilon = \\lbrace \\theta \\in \\Theta : \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta ) \\le \\epsilon _\\star + \\epsilon \\rbrace $ is bounded.", "These assumptions are mostly related to the identifiability of the statistical model and the regularity of the data-generating process.", "They are arguably mild assumptions, analogous to those that have already been considered in the literature [3].", "Note that, without thm:SWpmetrizesPp, the formulation and use of assumption:datagen in our proofs in the supplementary document would not be possible.", "In the next result, we establish the consistency of MSWE.", "Theorem 2 (Existence and consistency of MSWE) Assume assumption:continuousmap, assumption:datagen and assumption:boundedset.", "There exists $\\mathsf {E}\\in \\mathcal {F}$ with $\\mathbb {P}(\\mathsf 
{E}) = 1$ such that, for all $\\omega \\in \\mathsf {E}$ , $\\lim _{n \\rightarrow +\\infty } \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) &= \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta ), \\; \\text{ and } \\\\\\limsup _{n \\rightarrow +\\infty } \\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) &\\subset \\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta ) \\;, $ where $\\hat{\\mu }_n$ is defined by (REF ).", "Besides, for all $\\omega \\in \\mathsf {E}$ , there exists $n(\\omega )$ such that, for all $n \\ge n(\\omega )$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ is non-empty.", "Our proof technique is similar to the one given in [3].", "This result shows that, when the number of observations goes to infinity, the estimate $\\hat{\\theta }_n$ will converge to a global minimizer of the problem $\\min _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta )$ .", "In our next result, we prove a similar property for MESWEs as $\\min (m,n)$ goes to infinity.", "In order to increase clarity, and without loss of generality, in this setting, we consider $m$ as a function of $n$ such that $\\lim _{n \\rightarrow +\\infty } m(n) = +\\infty $ .", "Now, we derive an analogous version of Theorem REF for MESWE.", "For this result, we need to introduce another continuity assumption.", "A 4 If $\\lim _{n \\rightarrow +\\infty } \\rho _\\Theta (\\theta _n, \\theta ) = 0$ , then $\\lim _{n \\rightarrow +\\infty } \\mathbb {E}[ \\mathbf {SW}_p(\\mu _{\\theta _n}, \\hat{\\mu }_{\\theta _n, n}) | Y_{1:n} ] = 0$ .", "The next theorem establishes the consistency of MESWE.", "Theorem 3 (Existence and consistency of MESWE) Assume assumption:continuousmap, assumption:datagen, assumption:boundedset and assumption:sw32.", "Let $(m(n))_{n \\in \\mathbb {N}^*}$ be an increasing sequence satisfying $\\lim _{n \\rightarrow +\\infty } 
m(n) = +\\infty $ .", "There exists a set $\\mathsf {E}\\subset \\Omega $ with $\\mathbb {P}(\\mathsf {E}) = 1$ such that, for all $\\omega \\in \\mathsf {E}$ , $\\lim _{n \\rightarrow +\\infty } \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] &= \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta ), \\; \\text{ and } \\\\\\limsup _{n \\rightarrow +\\infty } \\mathrm {argmin}_{\\theta \\in \\Theta }\\ \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] &\\subset \\mathrm {argmin}_{\\theta \\in \\Theta }\\ \\mathbf {SW}_p(\\mu _\\star , \\mu _\\theta ) \\;, $ where $\\hat{\\mu }_n$ and $ \\hat{\\mu }_{\\theta , m(n)}$ are defined by (REF ) and (REF ) respectively.", "Besides, for all $\\omega \\in \\mathsf {E}$ , there exists $n(\\omega )$ such that, for all $n \\ge n(\\omega )$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta }\\ \\mathbb {E}[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ]$ is non-empty.", "Similar to Theorem REF , this theorem shows that, when the number of observations goes to infinity, the estimator obtained with the expected distance will converge to a global minimizer." 
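To make the consistency statements concrete, here is a minimal one-dimensional toy experiment of our own (not the paper's code; the model, grid, and sample sizes are illustrative choices): in one dimension the SW distance coincides with the ordinary Wasserstein distance, so a location-model estimator can be approximated by a grid search using the sorting formula, and its error shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_star = 2.0                      # data-generating location
grid = np.linspace(0.0, 4.0, 81)      # candidate parameters (step 0.05)

def sw_location_estimator(n):
    """Grid-search estimator of the location of N(theta, 1) from n samples,
    minimizing the 1-D W_2 distance (= SW_2 in 1-D) computed by sorting.
    The model measure is approximated by a generated sample, MESWE-style."""
    X = np.sort(rng.normal(theta_star, 1.0, size=n))
    Z = np.sort(rng.normal(0.0, 1.0, size=n))  # common noise for mu_theta
    losses = [np.mean((X - (theta + Z)) ** 2) for theta in grid]
    return grid[int(np.argmin(losses))]

errors = {n: abs(sw_location_estimator(n) - theta_star)
          for n in (50, 500, 5000)}
assert errors[5000] < 0.2  # the estimate concentrates around theta_star
```

This mirrors the theorem's message only qualitatively: as the number of observations increases, the minimizer of the empirical SW objective settles near the data-generating parameter.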
], [ "Convergence of MESWE to MSWE", "In practical applications, we can only use a finite number of generated samples $Z_{1:m}$ .", "In this subsection, we analyze the case where the observations $Y_{1:n}$ are kept fixed while the number of generated samples increases, i.e.", "$m \\rightarrow +\\infty $ , and we show that, in this scenario, MESWE converges to MSWE, assuming the latter exists.", "Before deriving this result, we formulate a technical assumption below.", "A 5 For some $\\epsilon > 0$ and $\\epsilon _n = \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n, \\mu _\\theta )$ , the set $\\Theta _{\\epsilon , n} = \\lbrace \\theta \\in \\Theta : \\mathbf {SW}_p(\\hat{\\mu }_n, \\mu _\\theta ) \\le \\epsilon _n + \\epsilon \\rbrace $ is bounded almost surely.", "Theorem 4 (MESWE converges to MSWE as $m \\rightarrow +\\infty $ ) Assume assumption:continuousmap, assumption:sw32 and assumption:boundedsetepsn.", "Then, $\\lim _{m \\rightarrow +\\infty } \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] &= \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n, \\mu _\\theta ) \\\\\\limsup _{m \\rightarrow +\\infty } \\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] &\\subset \\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n, \\mu _\\theta ) $ Besides, there exists $m^*$ such that, for any $ m \\ge m^*$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right]$ is non-empty.", "This result shows that MESWE is indeed promising in practice, as one can get more accurate estimates by increasing $m$ ." 
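Theorem 4's message admits a quick one-dimensional numerical sketch (our own toy setup, with arbitrarily chosen sample sizes): for fixed observations, the Monte Carlo expected-distance objective approaches the fixed-model objective as the number $m$ of generated samples grows.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=500)    # fixed observations (n = 500)
t = (np.arange(200) + 0.5) / 200      # quantile levels for the 1-D W_2

def w2(a, b):
    # 1-D Wasserstein-2 via the quantile-function formula
    return np.sqrt(np.mean((np.quantile(a, t) - np.quantile(b, t)) ** 2))

def meswe_objective(theta, m, n_rep=20):
    # Monte Carlo estimate of E[ W_2(mu_hat_n, mu_hat_{theta, m}) | Y_{1:n} ]
    return np.mean([w2(X, rng.normal(theta, 1.0, size=m))
                    for _ in range(n_rep)])

# quasi-exact MSWE objective at theta = 0, via a very large model sample
reference = w2(X, rng.normal(0.0, 1.0, size=200000))
gap_small_m = abs(meswe_objective(0.0, m=50) - reference)
gap_large_m = abs(meswe_objective(0.0, m=5000) - reference)
assert gap_large_m < gap_small_m  # larger m brings MESWE closer to MSWE
```

The extra error for small $m$ comes entirely from the sampling noise of the generated dataset, which is exactly the quantity that vanishes in the theorem's limit.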
], [ "Rate of convergence and the asymptotic distribution", "In our last set of theoretical results, we investigate the asymptotic distribution of MSWE and we establish a rate of convergence.", "We now suppose that we are in the well-specified setting, i.e.", "there exists $\\theta _\\star $ in the interior of $\\Theta $ such that $\\mu _{\\theta _\\star } = \\mu _\\star $ , and we consider the following two assumptions.", "For any $u \\in \\mathbb {S}^{d-1}$ and $t \\in \\mathbb {R}$ , we define $F_\\theta (u,t) = \\int _\\mathsf {Y}\\mathbb {1}_{\\left(-\\infty ,t\\right]} (\\left\\langle u,y \\right\\rangle ) \\mathrm {d}\\mu _\\theta (y) $ .", "Note that for any $u \\in \\mathbb {S}^{d-1}$ , $F_\\theta (u,\\cdot )$ is the cumulative distribution function (CDF) associated to the measure $u^{\\star }_{\\sharp }\\mu _\\theta $ .", "A 6 For all $\\epsilon > 0$ , there exists $\\delta > 0$ such that $\\inf _{\\theta \\in \\Theta :\\ \\rho _{\\Theta }(\\theta , \\theta _\\star ) \\ge \\epsilon } \\mathbf {SW}_1(\\mu _{\\theta _\\star }, \\mu _\\theta ) > \\delta \\;.$ Let $\\mathcal {L}^1(\\mathbb {S}^{d-1}\\times \\mathbb {R})$ denote the class of functions that are absolutely integrable on the domain $\\mathbb {S}^{d-1}\\times \\mathbb {R}$ , with respect to the measure $\\mathrm {d}\\sigma \\otimes \\mathrm {Leb}$ , where $\\mathrm {Leb}$ denotes the Lebesgue measure.", "A 7 Assume that there exists a measurable function $D_{\\star } = (D_{\\star ,1}, \\dots , D_{\\star ,d_\\theta }) : \\mathbb {S}^{d-1}\\times \\mathbb {R}\\mapsto \\mathbb {R}^{d_\\theta }$ such that for each $i =1, \\dots , d_\\theta $ , $ D_{\\star ,i} \\in \\mathcal {L}^1(\\mathbb {S}^{d-1}\\times \\mathbb {R})$ and $\\int _{\\mathbb {S}^{d-1}} \\int _\\mathbb {R}\\left| F_{\\theta }(u,t) - F_{\\theta _\\star }(u,t) - \\langle \\theta - \\theta _\\star , D_{\\star }(u,t) \\rangle \\right| \\mathrm {d}t \\mathrm {d}\\sigma (u) = \\epsilon ( \\rho _\\Theta ( \\theta , \\theta _\\star ) ) \\;,$ where $\\epsilon : 
\\mathbb {R}_+ \\rightarrow \\mathbb {R}_+$ satisfies $\\lim _{t \\rightarrow 0} \\epsilon (t) = 0$ .", "Besides, $\\lbrace D_{\\star ,i}\\rbrace _{i=1}^{d_\\theta }$ are linearly independent in $\\mathcal {L}^1(\\mathbb {S}^{d-1}\\times \\mathbb {R})$ .", "For any $u \\in \\mathbb {S}^{d-1}$ , and $t \\in \\mathbb {R}$ , define: $\\hat{F}_{n}(u,t) = n^{-1} \\operatorname{card}\\lbrace i \\in \\lbrace 1, \\dots , n\\rbrace : \\left\\langle u,Y_i \\right\\rangle \\le t \\rbrace $ , where $\\operatorname{card}$ denotes the cardinality of a set, and for any $u \\in \\mathbb {S}^{d-1}$ , $\\hat{F}_n(u,\\cdot )$ is the CDF associated to the measure $u^{\\star }_{\\sharp }\\hat{\\mu }_n$ .", "A 8 There exists a random element $G_{\\star } : \\mathbb {S}^{d-1}\\times \\mathbb {R}\\mapsto \\mathbb {R}$ such that the stochastic process $\\sqrt{n} ( \\hat{F}_n - F_{\\theta _\\star } )$ converges weakly in $\\mathcal {L}^1(\\mathbb {S}^{d-1}\\times \\mathbb {R})$ to $G_\\star $ .", "Under mild assumptions on the tails of $u^{\\star }_{\\sharp }\\mu _\\star $ for any $u \\in \\mathbb {S}^{d-1}$ , we believe that one can prove that assumption:weakconvergencewithoutnorm holds in general by extending [24] and [25]. 
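The assumptions above are phrased through projected CDFs because, in one dimension, the order-1 Wasserstein distance equals the integrated absolute CDF difference. The sketch below is our own numerical check (arbitrary Gaussian samples) that the quantile formula and the CDF formula agree on empirical measures.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=4000)
y = rng.normal(1.0, 1.5, size=4000)

# W_1 via quantile functions (equal sample sizes: sort and average)
w1_quantiles = np.mean(np.abs(np.sort(x) - np.sort(y)))

# W_1 via CDFs: integrate |F_x(s) - F_y(s)| ds on a fine grid
s = np.linspace(-9.0, 11.0, 20000)
Fx = np.searchsorted(np.sort(x), s, side="right") / x.size
Fy = np.searchsorted(np.sort(y), s, side="right") / y.size
w1_cdfs = np.sum(np.abs(Fx - Fy)) * (s[1] - s[0])

assert abs(w1_quantiles - w1_cdfs) < 1e-2  # the two formulas agree
```

This identity is what lets the linearization of the projected CDFs in A7 control the SW objective direction by direction.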
Theorem 5 Assume assumption:continuousmap, assumption:datagen, assumption:boundedset, assumption:wellseparation, assumption:formderivative and assumption:weakconvergencewithoutnorm.", "Then, the asymptotic distribution of the goodness-of-fit statistic is given by, $\\sqrt{n} \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_1(\\hat{\\mu }_n, \\mu _\\theta ) \\xrightarrow{}\\inf _{\\theta \\in \\Theta } \\int _{\\mathbb {S}^{d-1}} \\int _\\mathbb {R}\\left| G_{\\star }(u,t) - \\langle \\theta , D_{\\star }(u,t) \\rangle \\right| \\mathrm {d}t \\mathrm {d}\\sigma (u), \\quad \\text{ as } n \\rightarrow +\\infty \\;,$ where $\\hat{\\mu }_n$ is defined by (REF ).", "Theorem 6 Assume assumption:continuousmap, assumption:datagen, assumption:boundedset, assumption:wellseparation, assumption:formderivative and assumption:weakconvergencewithoutnorm.", "Suppose also that the random map $\\theta \\mapsto \\int _{\\mathbb {S}^{d-1}} \\int _\\mathbb {R}\\left| G_{\\star }(u,t) - \\langle \\theta , D_{\\star }(u,t) \\rangle \\right| \\mathrm {d}t \\mathrm {d}\\sigma (u)$ has a unique infimum almost surely.", "Then, MSWE with $p = 1$ satisfies, $\\sqrt{n} ( \\hat{\\theta }_n - \\theta _\\star ) \\xrightarrow{}\\mathrm {argmin}_{\\theta \\in \\Theta } \\int _{\\mathbb {S}^{d-1}} \\int _\\mathbb {R}\\left| G_{\\star }(u,t) - \\langle \\theta , D_{\\star }(u,t) \\rangle \\right| \\mathrm {d}t \\mathrm {d}\\sigma (u), \\quad \\text{ as } n \\rightarrow +\\infty \\;,$ where $\\hat{\\theta }_n$ is defined by (REF ) with $\\mathbf {SW}_1$ in place of $ \\mathbf {D}$ .", "These results show that the estimator and the associated goodness-of-fit statistic will converge in distribution to a random variable, where the rate of convergence is $\\sqrt{n}$ .", "Note that $G_{\\star }$ is defined as a random element (see assumption:weakconvergencewithoutnorm), therefore we cannot claim that the convergence in distribution derived in thm:asymptotic1 and REF implies convergence in probability.", "This CLT is also 
inspired by [3], where they identified the asymptotic distribution associated to the minimum Wasserstein estimator.", "However, since $\\mathbf {W}_p$ admits an analytical form only when $d=1$ , their result is restricted to the scalar case, and in their conclusion, [3] conjecture that the rate of the minimum Wasserstein estimators would depend negatively on the dimension of the observation space.", "On the contrary, since $\\mathbf {SW}_p$ is defined in terms of one-dimensional $\\mathbf {W}_p$ distances, we circumvent the curse of dimensionality and our result holds for any finite dimension.", "While the perceived computational burden has created pessimism in the machine learning community about the use of Wasserstein-based methods in high-dimensional settings, which motivated the rise of regularized optimal transport [26], we believe that our findings provide an interesting counter-example to this conception." ], [ "Experiments", "We conduct experiments on synthetic and real data to empirically confirm our theorems.", "We explain in appendix:computational of the supplementary document the optimization methods used to find the estimators.", "Specifically, we can use a stochastic iterative optimization algorithm (e.g., stochastic gradient descent).", "Note that, since we calculate (expected) SW with Monte Carlo approximations over a finite set of projections (and a finite number of `generated datasets'), MSWE and MESWE fall into the category of doubly stochastic algorithms.", "Our experiments on synthetic data actually show that using only one random projection and one randomly generated dataset at each iteration of the optimization process is enough to illustrate our theorems.", "We provide the code to reproduce the experiments.", "See https://github.com/kimiandj/min_swe.", "Figure: Probability density estimates of the MSWE $\\hat{\\sigma }^2_n$ of order 1, centered and rescaled by $\\sqrt{n}$ , on the 10-dimensional Gaussian model for different values of $n$ .", "Figure: Min.", "SW estimation on 
Gaussians in $\\mathbb {R}^{10}$ .", "fig:resultsgaussianexpa and fig:resultsgaussianexpb show the mean squared error between $(\\mathbf {m}_\\star , \\sigma ^2_\\star ) = (\\mathbf {0}, 1)$ and the MSWE $(\\hat{\\mathbf {m}}_n, \\hat{\\sigma }^2_n)$ (resp.", "the MESWE $(\\hat{\\mathbf {m}}_{n,n}, \\hat{\\sigma }^2_{n,n})$ ) for $n$ from 10 to 10 000, illustrating Theorems  and .", "fig:resultsgaussianexpc shows the error between $(\\hat{\\mathbf {m}}_n, \\hat{\\sigma }^2_n)$ and $(\\hat{\\mathbf {m}}_{n,m}, \\hat{\\sigma }^2_{n,m})$ for 2000 observations and $m$ from 10 to 10 000, to illustrate thm:cvgmeswetomswe.", "Results are averaged over 100 runs; the shaded areas represent the standard deviation.", "Multivariate Gaussian distributions: We consider the task of estimating the parameters of a 10-dimensional Gaussian distribution using our SW estimators: we are interested in the model $\\mathcal {M}= \\left\\lbrace \\mathcal {N}(\\mathbf {m}, \\sigma ^2\\mathbf {I})\\ :\\ \\mathbf {m}\\in \\mathbb {R}^{10},\\ \\sigma ^2 > 0 \\right\\rbrace $ and we draw i.i.d.", "observations with $(\\mathbf {m}_\\star , \\sigma ^2_\\star ) = (\\mathbf {0}, 1)$ .", "The advantage of this simple setting is that the density of the generated data has a closed-form expression, which makes MSWE tractable.", "We empirically verify our central limit theorem: for different values of $n$ , we compute the order-1 MSWE 500 times using one random projection, then we estimate the density of $\\hat{\\sigma }^2_n$ with a kernel density estimator.", "fig:resultsgaussianasymptotic shows the distributions centered and rescaled by $\\sqrt{n}$ for each $n$ , and confirms the convergence rate that we derived (thm:asymptotic2).", "To illustrate the consistency property in thm:existenceconsistencymswe, we approximate the MSWE of order 2 for different numbers of observed data $n$ using one random projection and we report for each $n$ 
the mean squared error between the estimated mean and variance and the data-generating parameters $(\\mathbf {m}_\\star , \\sigma ^2_\\star )$ .", "We proceed the same way to study the consistency of MESWE (thm:existenceconsistencymeswe), which we approximate using one random projection and one generated dataset $z_{1:m}$ of size $m = n$ for different values of $n$ .", "We also verify the convergence of MESWE to MSWE (thm:cvgmeswetomswe): we compute these estimators on a fixed set of $n = 2000$ observations for different $m$ , and we measure the error between them for each $m$ .", "Results are shown in fig:resultsgaussianexp.", "We see that our estimators indeed converge to $(\\mathbf {m}_\\star , \\sigma ^2_\\star )$ as the number of observations increases (Figures REF , REF ), and on a fixed observed dataset, MESWE converges to MSWE as we generate more samples (fig:resultsgaussianexpc).", "Multivariate elliptically contoured stable distributions: We focus on parameter inference for a subclass of multivariate stable distributions, called elliptically contoured stable distributions and denoted by $\\mathcal {E}\\alpha \\mathcal {S}_c$ [27].", "Stable distributions refer to a family of heavy-tailed probability distributions that generalize Gaussian laws and appear as the limit distributions in the generalized central limit theorem [28].", "These distributions have many attractive theoretical properties and have proven useful in modeling financial data [29] or audio signals [30], [31].", "While special univariate cases include the Gaussian, Lévy and Cauchy distributions, the density of stable distributions has no general analytic form, which restricts their practical application, especially in the multivariate case.", "If $Y \\in \\mathbb {R}^d\\sim \\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {\\Sigma }, \\mathbf {m})$ , then its joint characteristic function is defined for any $\\mathbf {t}\\in \\mathbb {R}^d$ as $ \\mathbb {E}[ \\exp (i\\mathbf {t}^T Y) ] = 
\\exp \\left( - (\\mathbf {t}^T \\mathbf {\\Sigma }\\mathbf {t})^{\\alpha / 2} + i \\mathbf {t}^T \\mathbf {m}\\right)$ , where $\\mathbf {\\Sigma }$ is a positive definite matrix (akin to a correlation matrix), $\\mathbf {m}\\in \\mathbb {R}^d$ is a location vector (equal to the mean if it exists) and $\\alpha \\in (0, 2)$ controls the thickness of the tail.", "Even though their densities cannot be evaluated easily, it is straightforward to sample from $\\mathcal {E}\\alpha \\mathcal {S}_c$ [27]; it is therefore particularly relevant here to apply MESWE instead of MLE.", "To demonstrate the computational advantage of MESWE over the minimum expected Wasserstein estimator [3], we consider i.i.d.", "observations in $\\mathbb {R}^d$ drawn from $\\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {I}, \\mathbf {m}_\\star )$ where each component of $\\mathbf {m}_\\star $ is 2 and $\\alpha = 1.8$ , and $\\mathcal {M}= \\left\\lbrace \\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {I}, \\mathbf {m})\\ :\\ \\mathbf {m}\\in \\mathbb {R}^d\\right\\rbrace $ .", "The Wasserstein distance on multivariate data is either computed exactly by solving the linear program in (REF ), or approximated by solving a regularized version of this problem with Sinkhorn's algorithm [12].", "The MESWE is approximated using 10 random projections and 10 sets of generated samples.", "Then, following the approach in [3], we use the gradient-free optimization method Nelder-Mead to minimize the Wasserstein and SW distances.", "We report in fig:resultscomparison the mean squared error between each estimate and $\\mathbf {m}_\\star $ , as well as their average computational time for different values of the dimension $d$ .", "We see that MESWE provides the same quality of estimation as its Wasserstein-based counterparts while considerably reducing the computational time, especially in higher dimensions.", "We focus on this model in $\\mathbb {R}^{10}$ and we illustrate the consistency of the MESWE $\\hat{\\mathbf 
{m}}_{n,m}$ , approximated with one random projection and one generated dataset, in the same way as for the Gaussian model: see fig:resultsalphastableexpa.", "To confirm the convergence of $\\hat{\\mathbf {m}}_{n,m}$ to the MSWE $\\hat{\\mathbf {m}}_n$ , we fix $n=100$ observations and we compute the mean squared error between the two approximate estimators (using one random projection and one generated dataset) for different values of $m$ (fig:resultsalphastableexpb).", "Note that the MSWE is approximated with the MESWE obtained for a large enough value of $m$ : $\\hat{\\mathbf {m}}_n \\approx \\hat{\\mathbf {m}}_{n, 10\\,000}$ .", "Figure: Min.", "SW estimation for the location parameter of multivariate elliptically contoured stable distributions.", "fig:resultscomparison compares the quality of the estimation provided by SW and Wasserstein-based estimators as well as their average computational time, for different values of the dimension $d$ .", "fig:resultsalphastableexpa and fig:resultsalphastableexpb illustrate, for $d=10$ , the consistency of the MESWE $\\hat{\\mathbf {m}}_{n,m}$ and its convergence to the MSWE $\\hat{\\mathbf {m}}_n$ .", "Results are averaged over 100 runs; the shaded areas represent the standard deviation.", "High-dimensional real data using GANs: Finally, we run experiments on image generation using the Sliced-Wasserstein Generator (SWG), an alternative GAN formulation based on the minimization of the SW distance [16].", "Specifically, the generative modeling approach consists in introducing a random variable $Z$ which takes values in $\\mathsf {Z}$ with a fixed distribution, and then transforming $Z$ through a neural network.", "This defines a parametric function $T_\\theta : \\mathsf {Z}\\rightarrow \\mathsf {Y}$ that is able to produce images from a distribution $\\mu _\\theta $ .", "The goal is to optimize the neural network parameters such that the generated images are close to the observed ones.", "[16] proposes to minimize the SW distance 
between $\\mu _\\theta $ and the real data distribution over $\\theta $ as the generator objective, and to train on the MESWE in practice.", "For our experiments, we design a neural network with the fully-connected configuration given in [16] and we use the MNIST dataset, made of 60 000 training images and 10 000 test images of size $28 \\times 28$ .", "Our training objective is the MESWE of order 2 approximated with 20 random projections and 20 different generated datasets.", "We study the consistent behavior of the MESWE by training the neural network on different sizes $n$ of training data and different numbers $m$ of generated samples, and by comparing the final training loss and test loss to the ones obtained when learning on the whole training dataset ($n = 60\\,000$ ) and $m = 200$ .", "Results are averaged over 10 runs and shown in fig:resultsnnexpa, where the shaded areas correspond to the standard deviation over the runs.", "We observe that our results confirm thm:existenceconsistencymeswe.", "We would like to point out that, in all of our experiments, the random projections used in the Monte Carlo average that estimates the integral in (REF ) were picked uniformly on $\\mathbb {S}^{d-1}$ (see appendix:computational in the supplementary document for more details).", "The sampling on $\\mathbb {S}^{d-1}$ directly impacts the quality of the resulting approximation of SW, and might induce variance in practice when learning generative models.", "On the theoretical side, studying the asymptotic properties of SW-based estimators obtained with a finite number of projections is an interesting question (e.g., their behavior might depend on the sampling method or the number of projections used).", "We leave this study for future research." 
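The one-dimensional reduction underlying all of these experiments can be sketched in a few lines. The following is a minimal illustration (our own sketch, not the released code at the repository above) of the Monte Carlo approximation of the order-1 SW distance between two equal-size empirical samples; the function name and its parameters are ours:

```python
import numpy as np

def sliced_wasserstein_1(x, y, n_proj=50, rng=None):
    """Monte Carlo estimate of the order-1 Sliced-Wasserstein distance
    between two equal-size empirical samples x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Directions drawn uniformly on the unit sphere S^{d-1}.
    u = rng.normal(size=(n_proj, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    total = 0.0
    for ui in u:
        # W_1 between the projected empirical measures: for equal sample
        # sizes, it is the mean absolute difference of sorted projections.
        total += np.mean(np.abs(np.sort(x @ ui) - np.sort(y @ ui)))
    return total / n_proj
```

Replacing the average over many projections by a single random projection per optimization step (and a single generated dataset for MESWE) gives the doubly stochastic scheme described above.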
], [ "Conclusion", "The Sliced-Wasserstein distance has been an attractive metric choice for learning in generative models, where the densities cannot be computed directly.", "In this study, we investigated the asymptotic properties of estimators that are obtained by minimizing SW and the expected SW. We showed that (i) convergence in SW implies weak convergence of probability measures in general Wasserstein spaces, (ii) the estimators are consistent, and (iii) the estimators converge to a random variable in distribution at the rate $\\sqrt{n}$ .", "We validated our mathematical results on both synthetic data and neural networks.", "We believe that our techniques can be further applied to extensions of SW such as [20], [33], [34]." ], [ "Acknowledgements", "The authors are grateful to Pierre Jacob for his valuable comments on an earlier version of this manuscript.", "This work is partly supported by the French National Research Agency (ANR) as a part of the FBIMATRIX project (ANR-16-CE23-0014) and by the industrial chair Machine Learning for Big Data from Télécom ParisTech.", "Alain Durmus acknowledges support from Polish National Science Center grant: NCN UMO-2018/31/B/ST1/00253." 
], [ "Convergence and lower semi-continuity", "Definition 1 (Weak convergence) Let ${\\mu _k}$ be a sequence of probability measures on $\\mathsf {Y}$ .", "We say that $\\mu _k$ converges weakly to a probability measure $\\mu $ on $\\mathsf {Y}$ , and write ${\\mu _k} \\xrightarrow{}\\mu $ (or $\\mu _k \\xrightarrow{}\\mu $ ), if for any continuous and bounded function $f$ : $\\mathsf {Y}\\rightarrow \\mathbb {R}$ , we have $ \\lim _{k \\rightarrow +\\infty }\\int f\\ \\mathrm {d}\\mu _k = \\int f\\ \\mathrm {d}\\mu \\;.$ Definition 2 (Epi-convergence) Let $\\Theta $ be a metric space and $f : \\Theta \\rightarrow \\mathbb {R}$ .", "Consider a sequence ${f_k}$ of functions from $\\Theta $ to $\\mathbb {R}$ .", "We say that the sequence ${f_k}$ epi-converges to a function $f : \\Theta \\rightarrow \\mathbb {R}$ , and write ${f_k} \\xrightarrow{}f$ , if for each $\\theta \\in \\Theta $ , $\\liminf _{k \\rightarrow \\infty } f_k(\\theta _k) &\\ge f(\\theta ) \\; \\text{ for every sequence } (\\theta _k)_{k \\in \\mathbb {N}} \\text{ such that } \\lim _{k \\rightarrow +\\infty } \\theta _k = \\theta \\;,\\\\\\text{ and } \\quad \\limsup _{k \\rightarrow \\infty } f_k(\\theta _k) &\\le f(\\theta ) \\; \\text{ for some sequence } (\\theta _k)_{k \\in \\mathbb {N}} \\text{ such that } \\lim _{k \\rightarrow +\\infty } \\theta _k = \\theta \\;.$ An equivalent and useful characterization of epi-convergence is given in [35], which we paraphrase in Proposition REF after recalling the definition of lower semi-continuous functions.", "Definition 3 (Lower semi-continuity) Let $\\Theta $ be a metric space and $f : \\Theta \\rightarrow \\mathbb {R}$ .", "We say that $f$ is lower semi-continuous (l.s.c.)", "on $\\Theta $ if for any $\\theta _0 \\in \\Theta $ , $\\liminf _{\\theta \\rightarrow \\theta _0} f(\\theta ) \\ge f(\\theta _0) \\;.$ Proposition 1 (Characterization of epi-convergence via minimization, Proposition 7.29 of [35]) Let $\\Theta $ be a metric space and $f : \\Theta 
\\rightarrow \\mathbb {R}$ be a l.s.c. function.", "The sequence ${f_k}$ , with $f_k : \\Theta \\rightarrow \\mathbb {R}$  for any $k \\in \\mathbb {N}$ , epi-converges to $f$ if and only if (a) $\\liminf _{k \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} f_k(\\theta ) \\ge \\inf _{\\theta \\in \\mathsf {K}} f(\\theta )$ for every compact set $\\mathsf {K}\\subset \\Theta $ ; (b) $\\limsup _{k \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {O}} f_k(\\theta ) \\le \\inf _{\\theta \\in \\mathsf {O}} f(\\theta )$ for every open set $\\mathsf {O}\\subset \\Theta $ .", "[35], paraphrased below, gives asymptotic properties for the infimum and argmin of epi-convergent functions and will be useful to prove the existence and consistency of our estimators.", "Theorem 7 (Inf and argmin in epi-convergence, Theorem 7.31 of [35]) Let $\\Theta $ be a metric space, $f : \\Theta \\rightarrow \\mathbb {R}$ be a l.s.c.", "function and ${f_k}$ be a sequence with $f_k : \\Theta \\rightarrow \\mathbb {R}$  for any $k \\in \\mathbb {N}$ .", "Suppose ${f_k} \\xrightarrow{}f$ with $- \\infty < \\inf _{\\theta \\in \\Theta } f(\\theta ) < \\infty $ .", "(a) It holds that $\\lim _{k \\rightarrow \\infty } \\inf _{\\theta \\in \\Theta } f_k(\\theta ) = \\inf _{\\theta \\in \\Theta } f(\\theta )$ if and only if for every $\\eta > 0$ there exist a compact set $\\mathsf {K}\\subset \\Theta $ and $N \\in \\mathbb {N}$ such that for any $k \\ge N$ , $\\inf _{\\theta \\in \\mathsf {K}} f_k(\\theta ) \\le \\inf _{\\theta \\in \\Theta } f_k(\\theta ) + \\eta \\;.$ (b) In addition, $\\limsup _{k\\rightarrow \\infty } \\mathrm {argmin}_{\\theta \\in \\Theta } f_k(\\theta ) \\subset \\mathrm {argmin}_{\\theta \\in \\Theta } f(\\theta )$ ." ], [ "Preliminary results", "In this section, we gather technical results regarding the lower semi-continuity of (expected) Sliced-Wasserstein distances and the measurability of MSWE, which will be needed in our proofs." 
], [ "Lower semi-continuity of Sliced-Wasserstein distances", "Lemma 1 (Lower semi-continuity of $\\mathbf {SW}_p$ ) Let $p \\in [1,\\infty )$ .", "The Sliced-Wasserstein distance of order $p$ is lower semi-continuous on $\\mathcal {P}_p(\\mathsf {Y}) \\times \\mathcal {P}_p(\\mathsf {Y})$ endowed with the topology of weak convergence, i.e.", "for any sequences ${\\mu _k}$ and ${\\nu _k}$ of $\\mathcal {P}_p(\\mathsf {Y})$ which converge weakly to $\\mu \\in \\mathcal {P}_p(\\mathsf {Y})$ and $\\nu \\in \\mathcal {P}_p(\\mathsf {Y})$ respectively, we have: $\\mathbf {SW}_p(\\mu ,\\nu ) \\le \\liminf _{k \\rightarrow +\\infty } \\mathbf {SW}_p(\\mu _k,\\nu _k) \\;.$ First, by the continuous mapping theorem, if a sequence ${\\mu _k}$ of elements of $\\mathcal {P}_p(\\mathsf {Y})$ converges weakly to $\\mu $ , then for any continuous function $f : \\mathsf {Y}\\rightarrow \\mathbb {R}$ , ${f_{\\sharp } \\mu _k}$ converges weakly to $f_{\\sharp } \\mu $ .", "In particular, for any $u\\in \\mathbb {S}^{d-1}$ , $u^{\\star }_{\\sharp }\\mu _k \\xrightarrow{}u^{\\star }_{\\sharp }\\mu $ since $u^{\\star }$ is a bounded linear form, thus continuous.", "Let $p \\in [1,\\infty )$ .", "We introduce two sequences ${\\mu _k}$ and ${\\nu _k}$ of elements of $\\mathcal {P}_p(\\mathsf {Y})$ such that $\\mu _k \\xrightarrow{}\\mu $ and $\\nu _k \\xrightarrow{}\\nu $ .", "We show that for any $u\\in \\mathbb {S}^{d-1}$ , $\\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu , u^{\\star }_{\\sharp }\\nu ) \\le \\liminf _{k \\rightarrow +\\infty } \\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu _k , u^{\\star }_{\\sharp }\\nu _k) \\;.$ Indeed, if (REF ) holds, then the proof is completed using the definition of the Sliced-Wasserstein distance (REF ) and Fatou's Lemma.", "Let $u\\in \\mathbb {S}^{d-1}$ .", "For any $k \\in \\mathbb {N}$ , let $\\gamma _k\\in \\mathcal {P}(\\mathbb {R}\\times \\mathbb {R})$ be an optimal transference plan between $u^{\\star }_{\\sharp }\\mu _k$ and $u^{\\star }_{\\sharp }\\nu _k$ for the Wasserstein 
distance of order $p$ , which exists by [22], i.e.", "$\\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu _k , u^{\\star }_{\\sharp }\\nu _k) = \\int _{\\mathbb {R}\\times \\mathbb {R}} \\left|a-b \\right|^p \\mathrm {d}\\gamma _k(a,b) \\;.$ Note that by [22] and Prokhorov's Theorem, ${\\gamma _k}$ is sequentially compact in $\\mathcal {P}(\\mathbb {R}\\times \\mathbb {R})$ for the topology associated with the weak convergence.", "Now, consider a subsequence ${\\gamma _{\\phi _1(k)}}$ where $\\phi _1 : \\mathbb {N}\\rightarrow \\mathbb {N}$ is increasing such that $\\lim _{k \\rightarrow +\\infty } \\int _{\\mathbb {R}\\times \\mathbb {R}} \\left|a-b \\right|^p \\mathrm {d}\\gamma _{\\phi _1(k)}(a,b) &= \\lim _{k \\rightarrow +\\infty } \\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu _{\\phi _1(k)}, u^{\\star }_{\\sharp }\\nu _{\\phi _1(k)}) \\nonumber \\\\&= \\liminf _{k \\rightarrow +\\infty } \\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu _k, u^{\\star }_{\\sharp }\\nu _k) \\;.", "$ Since ${\\gamma _k}$ is sequentially compact, ${\\gamma _{\\phi _1(k)}}$ is sequentially compact as well, and therefore there exist an increasing function $\\phi _2 : \\mathbb {N}\\rightarrow \\mathbb {N}$ and a probability distribution $\\gamma \\in \\mathcal {P}(\\mathbb {R}\\times \\mathbb {R})$ such that ${\\gamma _{\\phi _2(\\phi _1(k))}}$ converges weakly to $\\gamma $ .", "Then, we obtain by (REF ), $\\int _{\\mathbb {R}\\times \\mathbb {R}} \\left|a-b \\right|^p \\mathrm {d}\\gamma (a,b) = \\lim _{k \\rightarrow +\\infty } \\int _{\\mathbb {R}\\times \\mathbb {R}} \\left|a-b \\right|^p \\mathrm {d}\\gamma _{\\phi _2(\\phi _1(k))}(a,b) = \\liminf _{k \\rightarrow +\\infty } \\mathbf {W}_p^p(u^{\\star }_{\\sharp }\\mu _k, u^{\\star }_{\\sharp }\\nu _k) \\;.$ If we show that $\\gamma \\in \\Gamma (u^{\\star }_{\\sharp }\\mu , u^{\\star }_{\\sharp }\\nu )$ , it will conclude the proof of (REF ) by definition of the Wasserstein distance (REF ).", "But for any continuous and bounded function $f : \\mathbb {R}\\rightarrow \\mathbb {R}$ , since for any $k \\in \\mathbb 
{N}$ , $\\gamma _k \\in \\Gamma (u^{\\star }_{\\sharp }\\mu _k, u^{\\star }_{\\sharp }\\nu _k)$ , and ${\\mu _k},{\\nu _k}$ converge weakly to $\\mu $ and $\\nu $ respectively, we have: $\\int _{\\mathbb {R}\\times \\mathbb {R}} f(a) \\mathrm {d}\\gamma (a,b) = \\lim _{k \\rightarrow +\\infty } \\int _{\\mathbb {R}\\times \\mathbb {R}} f(a) \\mathrm {d}\\gamma _{\\phi _2(\\phi _1(k))}(a,b) =\\lim _{k \\rightarrow +\\infty } \\int _{\\mathbb {R}} f(a) \\mathrm {d}u^{\\star }_{\\sharp }\\mu _{\\phi _2(\\phi _1(k))}(a) \\\\ = \\int _{\\mathbb {R}} f(a) \\mathrm {d}u^{\\star }_{\\sharp }\\mu (a) \\;,$ and similarly $\\int _{\\mathbb {R}\\times \\mathbb {R}} f(b) \\mathrm {d}\\gamma (a,b) = \\int _{\\mathbb {R}} f(b) \\mathrm {d}u^{\\star }_{\\sharp }\\nu (b) \\;.$ This shows that $\\gamma \\in \\Gamma (u^{\\star }_{\\sharp }\\mu , u^{\\star }_{\\sharp }\\nu )$ and therefore, (REF ) is true.", "We conclude by applying Fatou's Lemma.", "By a direct application of lem:swsemicontinuous, we have the following result.", "Corollary 1 Assume assumption:continuousmap.", "Then, $(\\mu , \\theta ) \\mapsto \\mathbf {SW}_p(\\mu , \\mu _\\theta )$ is lower semi-continuous in $\\mathcal {P}_p(\\mathsf {Y}) \\times \\Theta $ .", "Lemma 2 (Lower semi-continuity of $\\mathbb {E}\\mathbf {SW}_p$ ) Let $p \\in [1, \\infty )$ and $m \\in \\mathbb {N}^*$ .", "Denote for any $\\mu \\in \\mathcal {P}_p(\\mathsf {Y})$ , $\\hat{\\mu }_m = (1/m) \\sum _{i=1}^m \\delta _{Z_i}$ , where $Z_{1:m}$ are i.i.d.", "samples from $\\mu $ .", "Then, the map $(\\nu , \\mu ) \\mapsto \\mathbb {E}\\left[ \\mathbf {SW}_p(\\nu , \\hat{\\mu }_m) \\right]$ is lower semi-continuous on $\\mathcal {P}_p(\\mathsf {Y}) \\times \\mathcal {P}_p(\\mathsf {Y})$ endowed with the topology of weak convergence.", "We consider two sequences $(\\mu _k)_{k \\in \\mathbb {N}}$ and $(\\nu _k)_{k \\in \\mathbb {N}}$ of probability measures in $\\mathsf {Y}$ , such that ${\\mu _k} \\xrightarrow{}\\mu $ and ${\\nu _k} \\xrightarrow{}\\nu $ , and we fix $m \\in \\mathbb {N}^*$ .", "By Skorokhod's 
representation theorem, there exist a probability space $(\\tilde{\\Omega },\\tilde{\\mathcal {F}},\\tilde{\\mathbb {P}})$ , a sequence of random variables $(\\tilde{X}_k^{1},\\ldots ,\\tilde{X}_k^{m})_{k \\in \\mathbb {N}}$ and a random variable $(\\tilde{X}^1,\\ldots ,\\tilde{X}^m)$ defined on $\\tilde{\\Omega }$ such that for any $k \\in \\mathbb {N}$ and $i \\in \\lbrace 1,\\ldots ,m\\rbrace $ , $\\tilde{X}_k^i$ has distribution $\\mu _k$ , $\\tilde{X}^i$ has distribution $\\mu $ and $(\\tilde{X}_k^{1},\\ldots ,\\tilde{X}_k^{m})_{k \\in \\mathbb {N}^*}$ converges to $(\\tilde{X}^1,\\ldots ,\\tilde{X}^m)$ , $\\tilde{\\mathbb {P}}$ -almost surely.", "We then show that the sequence of (random) empirical distributions $(\\hat{\\mu }_{k,m})_{k \\in \\mathbb {N}}$ defined by $\\hat{\\mu }_{k,m} = (1/m) \\sum _{i=1}^m \\delta _{\\tilde{X}^{i}_k}$ weakly converges to $\\hat{\\mu }_{m} = (1/m) \\sum _{i=1}^m \\delta _{\\tilde{X}^i}$ , $\\tilde{\\mathbb {P}}$ -almost surely.", "Note that it is sufficient to show that for any deterministic sequence $(x_k^{1},\\ldots ,x_k^{m})_{k \\in \\mathbb {N}^*}$ which converges to $(x^1,\\ldots ,x^m)$ , i.e.", "$\\lim _{k \\rightarrow +\\infty } \\max _{i\\in \\lbrace 1,\\ldots ,m\\rbrace } \\rho (x_k^i,x^i) = 0$ , the sequence of empirical distributions $(\\hat{\\nu }_{k,m})_{k \\in \\mathbb {N}}$ defined by $\\hat{\\nu }_{k,m} = (1/m) \\sum _{i=1}^m \\delta _{x^i_k}$ weakly converges to $\\hat{\\nu }_{m} = (1/m) \\sum _{i=1}^m \\delta _{x^i}$ .", "Note that since the Lévy-Prokhorov metric $\\mathbf {d}_{\\mathcal {P}}$ metrizes the weak convergence by [21], we only need to show that $\\lim _{k \\rightarrow +\\infty } \\mathbf {d}_{\\mathcal {P}}(\\hat{\\nu }_{k,m},\\hat{\\nu }_m) = 0$ .", "More precisely, since for any probability measures $\\zeta _1$ and $\\zeta _2$ , $\\mathbf {d}_{\\mathcal {P}}(\\zeta _1,\\zeta _2) = \\inf \\left\\lbrace \\epsilon >0 \\, : \\, \\text{ for any $\\mathsf {A}\\in \\mathcal {Y}$, } \\zeta _1(\\mathsf {A}) \\le \\zeta 
_2(\\mathsf {A}^{\\epsilon }) + \\epsilon \\text{ and } \\zeta _2(\\mathsf {A}) \\le \\zeta _1(\\mathsf {A}^{\\epsilon }) + \\epsilon \\right\\rbrace \\;,$ where $\\mathcal {Y}$ is the Borel $\\sigma $ -field of $(\\mathsf {Y},\\rho )$ and for any $\\mathsf {A}\\in \\mathcal {Y}$ , $\\mathsf {A}^{\\epsilon } = \\lbrace x \\in \\mathsf {Y}\\, : \\, \\rho (x,y) < \\epsilon \\text{ for some } y \\in \\mathsf {A}\\rbrace $ , we get $\\mathbf {d}_{\\mathcal {P}}(\\hat{\\nu }_{k,m},\\hat{\\nu }_{m}) \\le 2 \\max _{i\\in \\lbrace 1,\\ldots ,m\\rbrace } \\rho (x_k^i,x^i) \\;,$ and therefore $\\lim _{k \\rightarrow +\\infty } \\mathbf {d}_{\\mathcal {P}}(\\hat{\\nu }_{k,m},\\hat{\\nu }_{m}) = 0$ , so that ${\\hat{\\nu }_{k,m}}$ weakly converges to $\\hat{\\nu }_m$ .", "Finally, we have that $\\hat{\\mu }_{k,m} = (1/m) \\sum _{i=1}^m \\delta _{\\tilde{X}^{i}_k}$ weakly converges to $\\hat{\\mu }_{m} = (1/m) \\sum _{i=1}^m \\delta _{\\tilde{X}^i}$ , $\\tilde{\\mathbb {P}}$ -almost surely, and we obtain the final result using the lower semi-continuity of the Sliced-Wasserstein distance derived in lem:swsemicontinuous and Fatou's lemma, which give $\\tilde{\\mathbb {E}}\\left[ \\mathbf {SW}_p(\\nu , \\hat{\\mu }_m) \\right] \\le \\tilde{\\mathbb {E}}\\left[ \\liminf _{k \\rightarrow \\infty } \\mathbf {SW}_p(\\nu _k, \\hat{\\mu }_{k,m}) \\right] \\le \\liminf _{k \\rightarrow \\infty } \\tilde{\\mathbb {E}}\\left[ \\mathbf {SW}_p(\\nu _k, \\hat{\\mu }_{k,m}) \\right] \\;,$ where $\\tilde{\\mathbb {E}}$ is the expectation corresponding to $\\tilde{\\mathbb {P}}$ .", "The following corollary is a direct consequence of Lemma REF .", "Corollary 2 Assume assumption:continuousmap.", "Then, $(\\nu , \\theta ) \\mapsto \\mathbb {E}[ \\mathbf {SW}_p(\\nu , \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} ]$ is lower semi-continuous on $\\mathcal {P}(\\mathsf {Y}) \\times \\Theta $ ." 
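The pairing step in this proof — bounding the distance between the perturbed and limit empirical measures by the largest atom displacement — can be checked numerically in the one-dimensional Wasserstein case. The following sketch is our own illustration (the helper `w1_empirical` and the test values are not from the paper); it verifies that pairing atom $i$ with atom $i$ is an admissible coupling, so the distance never exceeds the maximum displacement:

```python
import numpy as np

def w1_empirical(a, b):
    """Exact 1-D Wasserstein-1 distance between two equal-size empirical
    measures: mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
atoms = rng.normal(size=50)  # atoms x^1, ..., x^m of the limit measure
for k in [1, 10, 100, 1000]:
    # Perturbed atoms x_k^i -> x^i as k grows.
    perturbed = atoms + rng.uniform(-1.0, 1.0, size=50) / k
    shift = np.max(np.abs(perturbed - atoms))
    # The identity pairing is one admissible coupling, hence
    # W_1(nu_k, nu) <= max_i |x_k^i - x^i|.
    assert w1_empirical(perturbed, atoms) <= shift + 1e-12
```

The same monotone shrinking of the distance as the atoms converge is what drives the almost-sure weak convergence used above.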
], [ "Measurability of the MSWE and MESWE", "The measurability of the MSWE and MESWE follows from the application of [36], also used in [37] and [3], which we recall in thm:corollary1.", "Theorem 8 (Corollary 1 in [36]) Let $\\mathsf {U},\\mathsf {V}$ be Polish spaces and $f$ be a real-valued Borel measurable function defined on a Borel subset $\\mathsf {D}$ of $\\mathsf {U}\\times \\mathsf {V}$ .", "We denote by $\\operatorname{proj}(\\mathsf {D})$ the set defined as $\\operatorname{proj}(\\mathsf {D})= \\lbrace u\\ :\\ \\text{there exists } v \\in \\mathsf {V},\\ (u,v) \\in \\mathsf {D}\\rbrace \\;.$ Suppose that for each $u \\in \\operatorname{proj}(\\mathsf {D})$ , the section $\\mathsf {D}_u = \\lbrace v \\in \\mathsf {V}\\ :\\ (u,v) \\in \\mathsf {D}\\rbrace $ is $\\sigma $ -compact and $f(u, \\cdot )$ is lower semi-continuous with respect to the relative topology on $\\mathsf {D}_u$ .", "Then, the sets $\\operatorname{proj}(\\mathsf {D})$ and $\\mathsf {I}= \\lbrace u \\in \\operatorname{proj}(\\mathsf {D}),\\ \\text{for some } v \\in \\mathsf {D}_u,\\ f(u,v) = \\inf _{\\mathsf {D}_u} f_u \\rbrace $ are Borel.", "For each $\\epsilon > 0$ , there is a Borel measurable function $\\phi _\\epsilon $ satisfying, for $u \\in \\operatorname{proj}(\\mathsf {D})$ , $f(u, \\phi _\\epsilon (u)) &= \\inf _{\\mathsf {D}_u} f_u, &\\text{if }\\; u \\in \\mathsf {I}, \\;\\;\\;& \\\\&\\le \\epsilon + \\inf _{\\mathsf {D}_u} f_u, &\\text{if }\\; u \\notin \\mathsf {I}, \\;\\;\\;&\\text{and }\\;\\; \\inf _{\\mathsf {D}_u} f_u \\ne -\\infty \\\\&\\le - \\epsilon ^{-1}, &\\text{if }\\; u \\notin \\mathsf {I}, \\;\\;\\;&\\text{and }\\;\\; \\inf _{\\mathsf {D}_u} f_u = -\\infty \\;.$ Theorem 9 (Measurability of the MSWE) Assume assumption:continuousmap.", "For any $n \\ge 1$ and $\\epsilon > 0$ , there exists a Borel measurable function $\\hat{\\theta }_{n,\\epsilon } : \\Omega \\rightarrow \\Theta $ that satisfies: for any $\\omega \\in \\Omega $ , $\\hat{\\theta }_{n,\\epsilon }(\\omega ) \\in \\left\\lbrace 
\\begin{array}{ll}\\mathrm {argmin}_{\\theta \\in \\Theta } \\;\\;\\; \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta ), & \\;\\; \\text{if this set is non-empty,} \\\\\\lbrace \\theta \\in \\Theta \\ :\\ \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\epsilon _\\star + \\epsilon \\rbrace , & \\;\\; \\text{otherwise,}\\end{array}\\right.$ where $\\epsilon _\\star = \\inf _{\\theta \\in \\Theta } \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ .", "The proof consists in showing that the conditions of Theorem REF are satisfied.", "The empirical measure $\\hat{\\mu }_n(\\omega )$ depends on $\\omega \\in \\Omega $ only through $y = (y_1, \\dots , y_n) \\in \\mathsf {Y}^n$ , so we can consider it as a function on $\\mathsf {Y}^n$ rather than on $\\Omega $ .", "We introduce $\\mathsf {D}= \\mathsf {Y}^n \\times \\Theta $ .", "Since $\\mathsf {Y}$ is Polish, $\\mathsf {Y}^n$ ($n \\in \\mathbb {N}^*$ ) endowed with the product topology is Polish.", "For any $y \\in \\mathsf {Y}^n$ , the set $\\mathsf {D}_y = \\lbrace \\theta \\in \\Theta ,\\ (y, \\theta ) \\in \\mathsf {D}\\rbrace = \\Theta $ is assumed to be $\\sigma $ -compact.", "The map $y \\mapsto \\hat{\\mu }_n(y)$ is continuous for the weak topology (see the proof of lemma:lscEsw), as well as the map $\\theta \\mapsto \\mu _\\theta $ according to assumption:continuousmap.", "We deduce by Corollary REF that the map $(\\mu , \\theta ) \\mapsto \\mathbf {SW}_p(\\mu , \\mu _\\theta )$ is l.s.c.", "for the weak topology.", "Since the composition of a lower semi-continuous function with a continuous function is l.s.c., the map $(y, \\theta ) \\mapsto \\mathbf {SW}_p(\\hat{\\mu }_n(y), \\mu _\\theta )$ is l.s.c.", "for the weak topology, thus measurable, and for any $y \\in \\mathsf {Y}^n$ , $\\theta \\mapsto \\mathbf {SW}_p(\\hat{\\mu }_n(y), \\mu _\\theta )$ is l.s.c.", "on $\\Theta $ .", "A direct application of Theorem REF finalizes the proof.", "Theorem 10 (Measurability of the MESWE) Assume assumption:continuousmap.", "For any $n \\ge 1$ , $m \\ge 1$ and 
$\\epsilon > 0$ , there exists a Borel measurable function $\\hat{\\theta }_{n,m,\\epsilon } : \\Omega \\rightarrow \\Theta $ that satisfies: for any $\\omega \\in \\Omega $ , $\\hat{\\theta }_{n,m,\\epsilon }(\\omega ) \\in \\left\\lbrace \\begin{array}{ll}\\mathrm {argmin}_{\\theta \\in \\Theta } \\;\\;\\; \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right], & \\; \\text{if this set is non-empty,} \\\\\\big \\lbrace \\theta \\in \\Theta \\ :\\ \\mathbb {E}\\left[ \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right] \\le \\epsilon _* + \\epsilon \\big \\rbrace , & \\; \\text{otherwise,}\\end{array}\\right.$ where $\\epsilon _* = \\inf _{\\theta \\in \\Theta } \\mathbb {E}[ \\mathbf {SW}_p(\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} ]$ .", "The proof can be done similarly to the proof of thm:measurability: we verify that we can apply thm:corollary1 using coro:eswsemicontinuous2 instead of coro:swsemicontinuous2." 
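The measurable $\epsilon$-approximate minimizer $\phi_\epsilon$ of Theorem 8 can be pictured in a discretized toy setting. The sketch below is our own illustration (the function `eps_argmin` and the finite grid are not part of the theorem's construction, which works on general Polish spaces): when an exact minimizer exists on the grid it is returned, and otherwise any point within $\epsilon$ of the infimum would do.

```python
import numpy as np

def eps_argmin(f, thetas, eps):
    """Return a theta whose objective value is within eps of the best
    value over a finite candidate grid (a crude finite-dimensional
    stand-in for the measurable eps-minimizer phi_eps of Theorem 8)."""
    vals = np.array([f(t) for t in thetas])
    best = vals.min()
    # First candidate achieving the eps-approximate minimum.
    return thetas[int(np.argmax(vals <= best + eps))]
```

In the paper's setting, $u$ plays the role of the observed data $y$ and $f(u,\cdot)$ is the (lower semi-continuous) SW objective, which is what makes such a selection Borel measurable in $u$.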
], [ "Proof of thm:SWpmetrizesPp", "Lemma 3 Let ${\\mu _k}$ be a sequence of probability measures on $\\mathbb {R}^d$ and $\\mu $ a probability measure on $\\mathbb {R}^d$ such that $\\lim _{k \\rightarrow \\infty } \\mathbf {SW}_1(\\mu _k, \\mu ) = 0 \\;.$ Then, there exists an increasing function $\\phi : \\mathbb {N} \\rightarrow \\mathbb {N}$ such that the subsequence ${\\mu _{\\phi (k)}}$ converges weakly to $\\mu $ .", "By definition, we have that $\\lim _{k \\rightarrow \\infty } \\int _{\\mathbb {S}^{d-1}} \\mathbf {W}_1(u^{\\star }_{\\sharp }\\mu _k, u^{\\star }_{\\sharp }\\mu ) \\mathrm {d}\\sigma (u) = 0 \\;.$ Therefore, by [38], there exists an increasing mapping $\\phi : \\mathbb {N} \\rightarrow \\mathbb {N}$ such that for $\\sigma $ -almost every ($\\sigma $ -a.e.)", "$u \\in \\mathbb {S}^{d-1}$ , $\\lim _{k \\rightarrow \\infty } \\mathbf {W}_1(u^{\\star }_{\\sharp }\\mu _{\\phi (k)}, u^{\\star }_{\\sharp }\\mu ) = 0 $ .", "By [22], this implies that for $\\sigma $ -a.e.", "$u \\in \\mathbb {S}^{d-1}$ , ${u^{\\star }_{\\sharp }\\mu _{\\phi (k)}} \\xrightarrow{}u^{\\star }_{\\sharp }\\mu $ .", "Lévy's characterization [39] gives that, for $\\sigma $ -a.e.", "$u \\in \\mathbb {S}^{d-1}$ and any $s \\in \\mathbb {R}$ , $\\lim _{k \\rightarrow \\infty } \\Phi _{u^{\\star }_{\\sharp }\\mu _{\\phi (k)}}(s) = \\Phi _{u^{\\star }_{\\sharp }\\mu }(s) \\;,$ where, for any distribution $\\nu \\in \\mathcal {P}(\\mathbb {R}^p)$ , $\\Phi _\\nu $ denotes the characteristic function of $\\nu $ and is defined for any $v \\in \\mathbb {R}^p$ as $\\Phi _\\nu (v) = \\int _{\\mathbb {R}^p} \\mathrm {e}^{\\mathrm {i}\\langle v, w \\rangle } \\mathrm {d}\\nu (w) \\;.$ Then, we can conclude that for Lebesgue-almost every $z \\in \\mathbb {R}^d$ , $ \\lim _{k \\rightarrow \\infty } \\Phi _{\\mu _{\\phi (k)}}(z) = \\Phi _{\\mu }(z) \\;.$ We can now show that ${\\mu _{\\phi (k)}} \\xrightarrow{}\\mu $ , i.e.", "by [21] for any $f: \\mathbb {R}^d \\rightarrow \\mathbb {R}$ continuous with compact support, $\\lim _{n 
\\rightarrow \\infty } \\int _{\\mathbb {R}^d} f(z) \\mathrm {d}\\mu _n(z) = \\int _{\\mathbb {R}^d} f(z) \\mathrm {d}\\mu (z) \\;.$ Let $f : \\mathbb {R}^d \\rightarrow \\mathbb {R}$ be a continuous function with compact support and $\\sigma > 0$ .", "Consider the function $f_\\sigma $ defined for any $x \\in \\mathbb {R}^d$ as $f_\\sigma (x) = (2\\pi \\sigma ^2)^{-d/2} \\int _{\\mathbb {R}^d} f(x-z) \\exp \\left(-\\Vert z\\Vert ^2/{2\\sigma ^2} \\right) \\mathrm {d}\\mathrm {Leb}(z) = f \\ast g_\\sigma (x) \\;,$ where $g_\\sigma $ is the density of the $d$ -dimensional Gaussian with zero mean and covariance matrix $\\sigma ^2 \\mathbf {I}_d$ , and $\\ast $ denotes the convolution product.", "We first show that (REF ) holds with $f_{\\sigma }$ in place of $f$ .", "Since for any $z \\in \\mathbb {R}^d$ , $\\mathbb {E}\\left[ \\mathrm {e}^{\\mathrm {i}\\left\\langle G,z \\right\\rangle } \\right] = \\mathrm {e}^{-\\Vert z\\Vert ^2/(2\\sigma ^2)}$ if $G$ is a $d$ -dimensional Gaussian random variable with zero mean and covariance matrix $(1/\\sigma ^2) \\operatorname{I}_d$ , by Fubini's theorem we get for any $k \\in \\mathbb {N}$ $\\nonumber \\int _{\\mathbb {R}^d} f_\\sigma (z) \\mathrm {d}\\mu _{\\phi (k)}(z) &= \\int _{\\mathbb {R}^d} \\int _{\\mathbb {R}^d} f(w) g_\\sigma (z-w) \\mathrm {d}w \\mathrm {d}\\mu _{\\phi (k)}(z) \\\\\\nonumber &= \\int _{\\mathbb {R}^d} \\int _{\\mathbb {R}^d} f(w) (2\\pi \\sigma ^2)^{-d/2} \\int _{\\mathbb {R}^d} \\mathrm {e}^{i \\langle z-w, x \\rangle } g_{1/\\sigma }(x) \\mathrm {d}x \\mathrm {d}w \\mathrm {d}\\mu _{\\phi (k)}(z) \\\\\\nonumber &= \\int _{\\mathbb {R}^d} \\int _{\\mathbb {R}^d} (2\\pi \\sigma ^2)^{-d/2} f(w) \\mathrm {e}^{-i\\langle w,x \\rangle } g_{1/\\sigma }(x) \\Phi _{\\mu _{\\phi (k)}}(x) \\mathrm {d}x \\mathrm {d}w \\\\&= (2\\pi \\sigma ^2)^{-d/2} \\int _{\\mathbb {R}^d} \\mathcal {F}[f](x) g_{1/\\sigma }(x) \\Phi _{\\mu _{\\phi (k)}}(x) \\mathrm {d}x \\;,$ where $\\mathcal 
{F}[f](x) = \\int _{\\mathbb {R}^d} f(w) \\mathrm {e}^{-\\mathrm {i}\\left\\langle w,x \\right\\rangle } \\mathrm {d}w$ denotes the Fourier transform of $f$ , which exists since $f$ is assumed to have a compact support.", "In an analogous manner, we prove that $\\int _{\\mathbb {R}^d} f_\\sigma (z) \\mathrm {d}\\mu (z) = (2\\sigma ^2)^{-d/2} \\int _{\\mathbb {R}^d} \\mathcal {F}[f](x) g_{1/\\sigma }(x) \\Phi _{\\mu }(x) \\mathrm {d}x \\;.$ Now, using that $\\mathcal {F}[f]$ is bounded by $\\int _{\\mathbb {R}^d} |f(w)| \\mathrm {d}w < +\\infty $ since $f$ has compact support, we obtain that, for any $k \\in \\mathbb {N}$ and $x \\in \\mathbb {R}^d$ , $\\left| \\mathcal {F}[f](x) g_{1/\\sigma }(x) \\Phi _{\\mu _{\\phi (k)}}(x) \\right| \\le g_{1/\\sigma }(x) \\int _{\\mathbb {R}^d} |f(w)| \\mathrm {d}w \\;.$ By (REF ), (REF ), (REF ) and Lebesgue's Dominated Convergence Theorem, we obtain $\\lim _{k \\rightarrow \\infty } \\int _{\\mathbb {R}^d} (2\\sigma ^2)^{-d/2} \\mathcal {F}[f](x) g_{1/\\sigma }(x) \\Phi _{\\mu _{\\phi (k)}}(x) \\mathrm {d}x &= \\int _{\\mathbb {R}^d} (2\\sigma ^2)^{-d/2} \\mathcal {F}[f](x) g_{1/\\sigma }(x) \\Phi _\\mu (x) \\mathrm {d}x \\nonumber \\\\\\lim _{k \\rightarrow \\infty } \\int _{\\mathbb {R}^d} f_{\\sigma }(z) \\mathrm {d}\\mu _{\\phi (k)}(z) &= \\int _{\\mathbb {R}^d} f_\\sigma (z) \\mathrm {d}\\mu (z) \\;.", "$ We can now complete the proof of (REF ).", "For any $\\sigma > 0$ , we have $\\left| \\int _{\\mathbb {R}^d} f(z) \\mathrm {d}\\mu _{\\phi (k)}(z) - \\int _{\\mathbb {R}^d} f(z) \\mathrm {d}\\mu (z) \\right| \\le 2\\sup _{z \\in \\mathbb {R}^d} \\left| f(z) - f_\\sigma (z) \\right| \\\\+ \\left| \\int _{\\mathbb {R}^d} f_\\sigma (z) \\mathrm {d}\\mu _{\\phi (k)}(z) - \\int _{\\mathbb {R}^d} f_{\\sigma }(z) \\mathrm {d}\\mu (z) \\right| \\;.", "$ Therefore by (REF ), for any $\\sigma >0$ , we get $\\limsup _{k \\rightarrow +\\infty } \\left| \\int _{\\mathbb {R}^d} f(z) \\mathrm {d}\\mu _{\\phi (k)}(z) - \\int _{\\mathbb {R}^d} 
f(z) \\mathrm {d}\\mu (z) \\right|\\\\ \\le 2\\sup _{z \\in \\mathbb {R}^d} \\left| f(z) - f_\\sigma (z) \\right| \\;.$ Finally [40] implies that $\\lim _{\\sigma \\rightarrow 0} \\sup _{z \\in \\mathbb {R}^d} | f_\\sigma (z) - f(z) | = 0$ which concludes the proof.", "[Proof of Theorem REF ] Now, assume that $\\lim _{k \\rightarrow \\infty } [p](\\mu _k, \\mu ) = 0$ and that $(\\mu _k)_{k \\in \\mathbb {N}}$ does not converge weakly to $\\mu $ .", "Therefore, $\\mathbf {d}_{\\mathcal {P}}(\\mu _k, \\mu )$ does not converge to 0, where $\\mathbf {d}_{\\mathcal {P}}$ denotes the Lévy-Prokhorov metric, and there exists $\\epsilon > 0$ and a subsequence ${\\mu _{\\psi (k)}}$ with $\\psi : \\mathbb {N}\\rightarrow \\mathbb {N}$ increasing, such that for any $k \\in \\mathbb {N}$ , $ \\mathbf {d}_{\\mathcal {P}}(\\mu _{\\psi (k)}, \\mu ) > \\epsilon \\;.$ In addition, by Hölder's inequality, we know that $[1](\\mu _k, \\mu ) \\le [p](\\mu _k, \\mu )$ , thus by (REF ), $\\lim _{k \\rightarrow \\infty } [1](\\mu _{\\psi (k)}, \\mu ) = 0$ .", "Then, according to Lemma REF , there exists a subsequence ${\\mu _{\\phi (\\psi (k))}}$ with $\\phi : \\mathbb {N}\\rightarrow \\mathbb {N}$ increasing, such that $\\mu _{\\phi (\\psi (k))} \\xrightarrow{}\\mu $ which is equivalent to $\\lim _{k \\rightarrow \\infty } \\mathbf {d}_{\\mathcal {P}}(\\mu _{\\phi (\\psi (k))}, \\mu ) = 0$ , which contradicts (REF ).", "We conclude that (REF ) implies ${\\mu _k} \\xrightarrow{}\\mu $ ."
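The sliced distance at the heart of this proof can be illustrated numerically. The sketch below is a hypothetical illustration, not the paper's implementation: it assumes the sliced-Wasserstein distance is estimated by a Monte Carlo average over random directions drawn uniformly on $\mathbb {S}^{d-1}$, with each one-dimensional Wasserstein distance computed through the sorted (quantile) coupling of equal-size samples; the names `sliced_wasserstein`, `n_proj` and the sample sizes are illustrative choices.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, p=1, seed=None):
    # Monte Carlo SW_p: average the 1D Wasserstein-p distance between the
    # projected samples over n_proj directions drawn uniformly on the sphere.
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_proj, x.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    # For equal-size samples on the line, W_p couples the sorted points.
    cost = np.abs(np.sort(x @ u.T, axis=0) - np.sort(y @ u.T, axis=0)) ** p
    return cost.mean() ** (1.0 / p)

rng = np.random.default_rng(0)
target = rng.normal(size=(2000, 3))   # samples from mu
far = target + 2.0                    # a measure far from mu
near = target + 0.1                   # a measure close to mu
sw_far = sliced_wasserstein(far, target, seed=1)
sw_near = sliced_wasserstein(near, target, seed=1)
assert 0.0 < sw_near < sw_far         # SW_1 shrinks as the measures approach mu
```

Consistently with the theorem, driving the perturbation to zero drives the estimated SW_1 to zero, mirroring the equivalence with weak convergence established above.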
], [ "Minimum Sliced-Wasserstein estimators: Proof of thm:existenceconsistencymswe", "[Proof of thm:existenceconsistencymswe] This result is proved analogously to the proof of Theorem 2.1 in [3].", "The key step is to show that the function $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ $\\mathbb {P}$ -almost surely, and then apply Theorem 7.31 of [35] (recalled in Theorem REF ).", "First, by assumption:continuousmap and coro:swsemicontinuous2, the map $\\theta \\mapsto [p](\\mu , \\mu _\\theta )$ is l.s.c.", "on $\\Theta $ for any $\\mu \\in \\mathcal {P}_p(\\mathsf {Y})$ .", "Therefore by assumption:boundedset, there exists $\\theta _\\star \\in \\Theta $ such that $[p](\\mu _\\star , \\mu _{\\theta _\\star }) = \\epsilon _\\star $ and the set $\\Theta ^\\star _\\epsilon $ is non-empty as it contains $\\theta _\\star $ , closed by lower semi-continuity of $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ , and bounded.", "$\\Theta ^\\star _\\epsilon $ is thus compact, and we conclude again by lower semi-continuity that the set $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta )$ is non-empty [41].", "Consider the event given by assumption:datagen, $\\mathsf {E}\\in \\mathcal {F}$ such that $\\mathbb {P}(\\mathsf {E}) = 1$ and for any $\\omega \\in \\mathsf {E}$ , $\\lim _{n \\rightarrow \\infty } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) = 0$ .", "Then, we prove that $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ $\\mathbb {P}$ -almost surely using the characterization in [35], i.e.", "we verify that, for any $\\omega \\in \\mathsf {E}$ , the two conditions below hold: $\\text{for every compact set } \\mathsf {K}\\subset \\Theta $ and every open set $\\mathsf {O}\\subset \\Theta $ , $\\begin{aligned}\\liminf _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} 
[p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )& \\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\mu _\\star , \\mu _\\theta ) \\\\\\limsup _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )& \\le \\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta ) \\;.\\end{aligned}$ We fix $\\omega $ in $\\mathsf {E}$ .", "Let $\\mathsf {K}\\subset \\Theta $ be a compact set.", "By lower semi-continuity of $\\theta \\mapsto [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ , there exists $\\theta _n = \\theta _n(\\omega ) \\in \\mathsf {K}$ such that for any $n \\in \\mathbb {N}$ , $\\inf _{\\theta \\in \\mathsf {K}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) = [p](\\hat{\\mu }_n(\\omega ), \\mu _{\\theta _n})$ .", "We consider the subsequence ${\\hat{\\mu }_{\\phi (n)}}$ where $\\phi : \\mathbb {N} \\rightarrow \\mathbb {N}$ is increasing such that $[p](\\hat{\\mu }_{\\phi (n)}(\\omega ), \\mu _{\\theta _{\\phi (n)}})$ converges to $\\liminf _{n \\rightarrow \\infty } [p](\\hat{\\mu }_n(\\omega ), \\mu _{\\theta _n}) = \\liminf _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ .", "Since $\\mathsf {K}$ is compact, there also exist an increasing function $\\psi : \\mathbb {N} \\rightarrow \\mathbb {N}$ and $\\bar{\\theta } \\in \\mathsf {K}$ such that $\\lim _{n \\rightarrow \\infty } \\rho _\\Theta (\\theta _{\\psi (\\phi (n))}, \\bar{\\theta }) = 0$ .", "Therefore, we have $\\liminf _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) &= \\lim _{n \\rightarrow \\infty } [p](\\hat{\\mu }_{\\phi (n)}(\\omega ), \\mu _{\\theta _{\\phi (n)}}) \\nonumber \\\\&= \\lim _{n \\rightarrow \\infty } [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\mu _{\\theta _{\\psi (\\phi (n))}}) \\nonumber \\\\&= \\liminf _{n \\rightarrow \\infty } [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\mu _{\\theta 
_{\\psi (\\phi (n))}}) \\nonumber \\\\&\\ge [p](\\mu _\\star , \\mu _{\\bar{\\theta }}) \\\\&\\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\mu _\\star , \\mu _\\theta ) \\;, \\nonumber $ where (REF ) is obtained by lower semi-continuity since $\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ) \\xrightarrow{}\\mu _\\star $ by assumption:datagen and Theorem REF , and $\\mu _{\\theta _{\\psi (\\phi (n))}} \\xrightarrow{}\\mu _{\\bar{\\theta }}$ by assumption:continuousmap.", "We conclude that the first condition in (REF ) holds.", "Now, we fix $\\mathsf {O}\\subset \\Theta $ open.", "By definition of the infimum, there exists a sequence ${\\theta _n}$ in $\\mathsf {O}$ such that $\\lbrace [p](\\mu _\\star , \\mu _{\\theta _n})\\rbrace _{n \\in \\mathbb {N}}$ converges to $\\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta )$ .", "For any $n \\in \\mathbb {N}$ , $\\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le [p](\\hat{\\mu }_n(\\omega ), \\mu _{\\theta _n})$ .", "Therefore, $&\\limsup _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\limsup _{n \\rightarrow \\infty } [p](\\hat{\\mu }_n(\\omega ), \\mu _{\\theta _n}) \\nonumber \\\\& \\qquad \\qquad \\qquad \\le \\limsup _{n \\rightarrow \\infty } \\big ( [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) + [p](\\mu _\\star , \\mu _{\\theta _n}) \\big ) \\; \\text{by the triangle inequality} \\\\&\\qquad \\qquad \\qquad \\le \\limsup _{n \\rightarrow \\infty } [p](\\mu _\\star , \\mu _{\\theta _n}) \\text{ by {assumption:datagen}} \\\\&\\qquad \\qquad \\qquad = \\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta ) \\;\\; \\text{by definition of ${\\theta _n}$} \\;.$ This shows that the second condition in (REF ) holds, and hence, the sequence of functions $\\theta \\mapsto [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Now, 
we apply Theorem 7.31 of [35].", "First, by [35], () immediately follows from the epi-convergence of $\\theta \\mapsto [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Next, we show that [35] can be applied by showing that for any $\\eta > 0$ there exists a compact set $\\mathsf {B}\\subset \\Theta $ and $N \\in \\mathbb {N}$ such that, for all $n \\ge N$ , $\\inf _{\\theta \\in \\mathsf {B}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) + \\eta \\;.$ In fact, we simply show that there exists a compact set $\\mathsf {B}\\subset \\Theta $ and $N \\in \\mathbb {N}$ such that, for all $n \\ge N$ , $\\inf _{\\theta \\in \\mathsf {B}} [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) = \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ .", "On one hand, the second condition in (REF ) gives us $\\limsup _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\inf _{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta ) = \\epsilon _\\star \\;.$ We deduce that there exists $n_{\\epsilon /4}(\\omega )$ such that, for $n \\ge n_{\\epsilon /4}(\\omega )$ , $ \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\epsilon _\\star + \\epsilon /4$ , where $\\epsilon $ is given by assumption:boundedset.", "When $n \\ge n_{\\epsilon /4}(\\omega )$ , the set $\\widehat{\\Theta }_{\\epsilon /2} = \\lbrace \\theta \\in \\Theta : [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\le \\epsilon _\\star + \\frac{\\epsilon }{2} \\rbrace $ is non-empty as it contains $\\theta ^*$ defined as $[p](\\hat{\\mu }_n(\\omega ), \\mu _{\\theta ^*}) = \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ .", "On the other hand, by assumption:datagen, there exists $n_{\\epsilon /2}(\\omega )$ such that, for $n \\ge n_{\\epsilon 
/2}(\\omega )$ , $ [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) \\le \\frac{\\epsilon }{2} \\;.$ Let $n \\ge n_*(\\omega ) = \\max \\lbrace n_{\\epsilon /4}(\\omega ), n_{\\epsilon /2}(\\omega ) \\rbrace $ and $\\theta \\in \\widehat{\\Theta }_{\\epsilon /2}$ .", "By the triangle inequality, $[p](\\mu _\\star , \\mu _\\theta ) &\\le [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) + [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\\\&\\le \\epsilon _\\star + \\epsilon \\text{\\;\\;\\; since $\\theta \\in \\widehat{\\Theta }_{\\epsilon /2}$ and by (\\ref {eqn:mswe_n_eps2})}$ This means that, when $n \\ge n_*(\\omega )$ , $\\widehat{\\Theta }_{\\epsilon /2} \\subset \\Theta ^\\star _\\epsilon $ , and since $\\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ is attained in $\\widehat{\\Theta }_{\\epsilon /2}$ , we have $\\inf _{\\theta \\in \\Theta ^\\star _\\epsilon } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) = \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta ) \\;.", "$ As shown in the first part of the proof $\\Theta ^{\\star }_{\\epsilon }$ is compact and then by [35], (REF ) is a direct consequence of (REF )-(REF ) and the epi-convergence of $\\theta \\mapsto [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Finally, by the same reasoning that was done earlier in this proof for $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta )$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\theta )$ is non-empty for $n \\ge n_*(\\omega )$ ." 
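The minimum sliced-Wasserstein estimator whose existence and consistency are established above can be made concrete with a small numerical sketch. This is a hypothetical illustration, not the paper's code: it assumes a two-dimensional Gaussian location model for $\mu _\theta $, a grid search in place of a proper optimizer, and shared model randomness across candidate parameters; `sw1`, `theta_star` and the sample sizes are illustrative names and choices.

```python
import numpy as np

def sw1(x, y, n_proj=100, seed=None):
    # Monte Carlo SW_1 between equal-size empirical measures:
    # quantile (sorted) coupling of the projected samples in 1D.
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_proj, x.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.abs(np.sort(x @ u.T, axis=0) - np.sort(y @ u.T, axis=0)).mean()

rng = np.random.default_rng(0)
theta_star = 1.5
data = theta_star + rng.normal(size=(1000, 2))    # observations, i.e. hat{mu}_n
noise = rng.normal(size=(1000, 2))                # shared model randomness
grid = np.linspace(-3.0, 3.0, 61)                 # candidate parameters theta
losses = [sw1(data, th + noise, seed=1) for th in grid]
theta_hat = grid[int(np.argmin(losses))]          # minimum SW estimator
assert abs(theta_hat - theta_star) < 0.25
```

In line with the consistency result, the minimizer of the empirical objective $\theta \mapsto [p](\hat{\mu }_n, \mu _\theta )$ lands close to the data-generating parameter once $n$ is moderately large.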
], [ "Existence and consistency of the MESWE: Proof of thm:existenceconsistencymeswe", "[Proof of thm:existenceconsistencymeswe] This result is proved analogously to the proof of [3].", "The key step is to show that the function $\\theta \\mapsto \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ]$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ , and then apply [35], which we recall in Theorem REF .", "First, since we assume assumption:continuousmap and assumption:boundedset, we can apply the same reasoning as in the proof of Theorem REF to show that the set $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta )$ is non-empty.", "Consider the event given by assumption:datagen, $\\mathsf {E}\\in \\mathcal {F}$ such that $\\mathbb {P}(\\mathsf {E}) = 1$ and for any $\\omega \\in \\mathsf {E}$ , $\\lim _{n \\rightarrow \\infty } [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) = 0$ .", "Then, we prove that $\\theta \\mapsto \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ]$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ $\\mathbb {P}$ -almost surely using the characterization of [35], i.e.", "we verify that, for any $\\omega \\in \\mathsf {E}$ , the two conditions below hold: for every compact set $\\mathsf {K}\\subset \\Theta $ and for every open set $\\mathsf {O}\\subset \\Theta $ , $\\begin{aligned}\\liminf _{n \\rightarrow +\\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\mu _\\star , \\mu _\\theta ) \\\\\\limsup _{n \\rightarrow +\\infty } \\inf _{\\theta \\in \\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta )\\end{aligned}$ We fix $\\omega $ in 
$\\mathsf {E}$ .", "Let $\\mathsf {K}\\subset \\Theta $ be a compact set.", "By assumption:continuousmap and Corollary REF , the mapping $\\theta \\mapsto \\mathbb {E}[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ]$ is l.s.c., so there exists $\\theta _n = \\theta _n(\\omega ) \\in \\mathsf {K}$ such that for any $n \\in \\mathbb {N}$ , $\\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] = \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta _n, m(n)}) | Y_{1:n} \\right]$ .", "We consider the subsequence ${\\hat{\\mu }_{\\phi (n)}}$ where $\\phi : \\mathbb {N} \\rightarrow \\mathbb {N}$ is increasing such that $\\mathbb {E}[ [p](\\hat{\\mu }_{\\phi (n)}(\\omega ), \\hat{\\mu }_{\\theta _{\\phi (n)}, m(\\phi (n))}) | Y_{1:n} ]$ converges to $\\liminf _{n \\rightarrow \\infty } \\mathbb {E}[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta _n, m(n)}) | Y_{1:n} ] = \\liminf _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ]$ .", "Since $\\mathsf {K}$ is compact, there also exist an increasing function $\\psi : \\mathbb {N} \\rightarrow \\mathbb {N}$ and $\\bar{\\theta } \\in \\mathsf {K}$ such that $\\lim _{n \\rightarrow \\infty } \\rho _\\Theta (\\theta _{\\psi (\\phi (n))}, \\bar{\\theta }) = 0$ .", "Therefore, we have: $&\\liminf _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\nonumber \\\\&= \\lim _{n \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_{\\phi (n)}(\\omega ), \\hat{\\mu }_{\\theta _{\\phi (n)}, m(\\phi (n))}) | Y_{1:n} \\right] \\nonumber \\\\&= \\lim _{n \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\hat{\\mu }_{\\theta _{\\psi (\\phi (n))}, 
m(\\psi (\\phi (n)))}) | Y_{1:n} \\right] \\nonumber \\\\&= \\liminf _{n \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\hat{\\mu }_{\\theta _{\\psi (\\phi (n))}, m(\\psi (\\phi (n)))}) | Y_{1:n} \\right] \\nonumber \\\\&\\ge \\liminf _{n \\rightarrow \\infty } \\left\\lbrace [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\mu _{\\theta _{\\psi (\\phi (n))}}) - \\mathbb {E}\\left[ [p](\\mu _{\\theta _{\\psi (\\phi (n))}}, \\hat{\\mu }_{\\theta _{\\psi (\\phi (n))}, m(\\psi (\\phi (n)))}) | Y_{1:n} \\right] \\right\\rbrace \\\\&\\ge \\liminf _{n \\rightarrow \\infty } [p](\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ), \\mu _{\\theta _{\\psi (\\phi (n))}}) - \\limsup _{n \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\mu _{\\theta _{\\psi (\\phi (n))}}, \\hat{\\mu }_{\\theta _{\\psi (\\phi (n))}, m(\\psi (\\phi (n)))}) | Y_{1:n} \\right] \\nonumber \\\\&\\ge [p](\\mu _\\star , \\mu _{\\bar{\\theta }}) \\\\&\\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\mu _\\star , \\mu _\\theta ) \\nonumber $ where (REF ) follows from the triangle inequality, and () is obtained on one hand by lower semi-continuity since $\\hat{\\mu }_{\\psi (\\phi (n))}(\\omega ) \\xrightarrow{}\\mu _\\star $ by assumption:datagen and Theorem REF and $\\mu _{\\theta _{\\psi (\\phi (n))}} \\xrightarrow{}\\mu _{\\bar{\\theta }}$ by assumption:continuousmap, and on the other hand by assumption:sw32 which gives $\\limsup _{n \\rightarrow \\infty } \\mathbb {E}[ [p](\\mu _{\\theta _{\\psi (\\phi (n))}}, \\hat{\\mu }_{\\theta _{\\psi (\\phi (n))}, m(\\psi (\\phi (n)))}) | Y_{1:n} ] = 0$ .", "We conclude that the first condition in (REF ) holds.", "Now, we fix $\\mathsf {O}\\subset \\Theta $ open.", "By definition of the infimum, there exists a sequence ${\\theta _n}$ in $\\mathsf {O}$ such that $[p](\\mu _\\star , \\mu _{\\theta _n})$ converges to $\\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta )$ .", "For any $n \\in \\mathbb {N}$ , $\\inf 
_{\\theta \\in \\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta _n, m(n)}) | Y_{1:n} \\right]$ .", "Therefore, $&\\limsup _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\limsup _{n \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta _n, m(n)}) | Y_{1:n} \\right] \\nonumber \\\\& \\qquad \\qquad \\qquad \\le \\limsup _{n \\rightarrow \\infty } \\left\\lbrace [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) + [p](\\mu _\\star , \\mu _{\\theta _n}) + \\mathbb {E}\\left[ [p](\\mu _{\\theta _n}, \\hat{\\mu }_{\\theta _n, m(n)}) | Y_{1:n} \\right] \\right\\rbrace \\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\text{by the triangle inequality} \\\\&\\qquad \\qquad \\qquad = \\limsup _{n \\rightarrow \\infty } [p](\\mu _\\star , \\mu _{\\theta _n}) \\;\\; \\text{ by {assumption:datagen} and {assumption:sw32}} \\nonumber \\\\&\\qquad \\qquad \\qquad = \\inf _{\\theta \\in \\mathsf {O}} [p](\\mu _\\star , \\mu _\\theta ) \\;\\; \\text{by definition of ${\\theta _n}$.", "}$ This shows that the second condition in (REF ) holds, and hence, the sequence of functions $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ epi-converges to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Now, we apply Theorem 7.31 of [35].", "First, by [35], () immediately follows from the epi-convergence of $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Next, we show that [35] holds by finding, for any $\\eta > 0$ , a compact set $\\mathsf {B}\\subset \\Theta $ and $N \\in \\mathbb {N}$ such that, 
for all $n \\ge N$ , $\\inf _{\\theta \\in \\mathsf {B}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] + \\eta \\;.$ In fact, we simply show that there exists a compact set $\\mathsf {B}\\subset \\Theta $ and $N \\in \\mathbb {N}$ such that, for all $n \\ge N$ , $\\inf _{\\theta \\in \\mathsf {B}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] = \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ .", "On one hand, the second condition in (REF ) gives us $\\limsup _{n \\rightarrow \\infty } \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\inf _{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta ) = \\epsilon _\\star \\;.$ We deduce that there exists $n_{\\epsilon / 6}(\\omega )$ such that, for $n \\ge n_{\\epsilon / 6}(\\omega )$ , $\\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\epsilon _\\star + \\frac{\\epsilon }{6},$ with the $\\epsilon $ of assumption:boundedset.", "When $n \\ge n_{\\epsilon / 6}(\\omega )$ , the set $\\widehat{\\Theta }_{\\epsilon /3} = \\lbrace \\theta \\in \\Theta : \\mathbb {E}[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} ] \\le \\epsilon _\\star + \\frac{\\epsilon }{3} \\rbrace $ is non-empty as it contains $\\theta ^*$ defined as $\\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta ^*, m(n)}) | Y_{1:n} \\right] = \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ .", "On the other hand, by assumption:datagen, there 
exists $n_{\\epsilon /3}(\\omega )$ such that, for $n \\ge n_{\\epsilon /3}(\\omega )$ , $ [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) \\le \\frac{\\epsilon }{3} \\;.$ Finally, by assumption:sw32, there exists $n^{\\prime }_{\\epsilon /3}(\\omega )$ such that, for $n \\ge n^{\\prime }_{\\epsilon /3}(\\omega )$ , $ \\mathbb {E}\\left[ [p](\\mu _\\theta , \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\le \\frac{\\epsilon }{3} \\;.$ Let $n \\ge n_*(\\omega ) = \\max \\lbrace n_{\\epsilon /6}(\\omega ), n_{\\epsilon /3}(\\omega ), n^{\\prime }_{\\epsilon /3}(\\omega ) \\rbrace $ and $\\theta \\in \\widehat{\\Theta }_{\\epsilon /3}$ .", "By the triangle inequality, $[p](\\mu _\\star , \\mu _\\theta ) &\\le [p](\\hat{\\mu }_n(\\omega ), \\mu _\\star ) + \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] + \\mathbb {E}\\left[ [p](\\mu _\\theta , \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\\\{}&\\le \\epsilon _\\star + \\epsilon \\text{\\;\\;\\; since } \\theta \\in \\widehat{\\Theta }_{\\epsilon /3} \\text{ and by } (\\ref {eqn:n_eps_3}) \\text{ and } (\\ref {eqn:n_prime_eps_3})$ This means that, when $n \\ge n_*(\\omega )$ , $\\widehat{\\Theta }_{\\epsilon /3} \\subset \\Theta ^\\star _\\epsilon $ with $\\Theta ^\\star _\\epsilon $ as defined in assumption:boundedset, and since $\\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ is attained in $\\widehat{\\Theta }_{\\epsilon /3}$ , we have $\\inf _{\\theta \\in \\Theta ^\\star _{\\epsilon }} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] &= \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right] \\;.", "$ By [35], (REF ) is a direct consequence of (REF ) and the epi-convergence of $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), 
\\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ to $\\theta \\mapsto [p](\\mu _\\star , \\mu _\\theta )$ .", "Finally, by the same reasoning that was done earlier in this proof for $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\mu _\\star , \\mu _\\theta )$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta , m(n)}) | Y_{1:n} \\right]$ is non-empty for $n \\ge n_*(\\omega )$ ." ], [ "Convergence of the MESWE to the MSWE: Proof of thm:cvgmeswetomswe", "[Proof of thm:cvgmeswetomswe] Here again, the result follows from applying [35], paraphrased in Theorem REF .", "First, by assumption:continuousmap and Corollary REF , the map $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ is l.s.c.", "on $\\Theta $ .", "Therefore, there exists $\\theta _n \\in \\Theta $ such that $[p](\\hat{\\mu }_n, \\mu _{\\theta _n}) = \\epsilon _n$ .", "The set $\\Theta _{\\epsilon , n}$ with the $\\epsilon $ from assumption:boundedsetepsn is non-empty as it contains $\\theta _n$ , closed by lower semi-continuity of $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ , and bounded.", "$\\Theta _{\\epsilon , n}$ is thus compact, and we conclude again by lower semi-continuity that the set $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\hat{\\mu }_n, \\mu _\\theta )$ is non-empty [41].", "Then, we prove that $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ epi-converges to $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ as $m \\rightarrow \\infty $ using the characterization in [35], i.e.", "we verify that: for every compact set $\\mathsf {K}\\subset \\Theta $ and every open set $\\mathsf {O}\\subset \\Theta $ , $\\begin{aligned}\\liminf _{m \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right]& \\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\hat{\\mu }_n, \\mu 
_\\theta ) \\\\\\limsup _{m \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right]& \\le \\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n, \\mu _\\theta ) \\;.\\end{aligned}$ Let $\\mathsf {K}\\subset \\Theta $ be a compact set.", "By assumption:continuousmap and Corollary REF , for any $m \\in \\mathbb {N}$ , the map $\\theta \\mapsto \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} ]$ is l.s.c., so there exists $\\theta _m \\in \\mathsf {K}$ such that $\\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} ] = \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _m, m}) | Y_{1:n} ]$ .", "We consider the subsequence $\\lbrace \\hat{\\mu }_{\\theta _{\\phi (m)}, \\phi (m)} \\rbrace _{m \\in \\mathbb {N}}$ where $\\phi : \\mathbb {N} \\rightarrow \\mathbb {N}$ is increasing such that $\\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _{\\phi (m)}, \\phi (m)}) | Y_{1:n} ]$ converges to $\\liminf _{m \\rightarrow \\infty } \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _m, m}) | Y_{1:n} ] = \\liminf _{m \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} ]$ .", "Since $\\mathsf {K}$ is compact, there also exist an increasing function $\\psi : \\mathbb {N} \\rightarrow \\mathbb {N}$ and $\\bar{\\theta } \\in \\mathsf {K}$ such that $\\lim _{m \\rightarrow \\infty } \\rho _\\Theta (\\theta _{\\psi (\\phi (m))}, \\bar{\\theta }) = 0$ .", "Therefore, we have $&\\liminf _{m \\rightarrow \\infty } \\inf _{\\theta \\in \\mathsf {K}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\nonumber \\\\&= \\lim _{m \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _{\\phi (m)}, \\phi (m)}) | Y_{1:n} \\right] \\nonumber \\\\&= \\lim _{m 
\\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _{\\psi (\\phi (m))}, \\psi (\\phi (m))}) | Y_{1:n} \\right] \\nonumber \\\\&= \\liminf _{m \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _{\\psi (\\phi (m))}, \\psi (\\phi (m))}) | Y_{1:n} \\right] \\nonumber \\\\&\\ge \\liminf _{m \\rightarrow \\infty } [ [p](\\hat{\\mu }_n, \\mu _{\\theta _{\\psi (\\phi (m))}}) - \\mathbb {E}\\left[ [p](\\mu _{\\theta _{\\psi (\\phi (m))}}, \\hat{\\mu }_{\\theta _{\\psi (\\phi (m))}, \\psi (\\phi (m))}) | Y_{1:n} \\right] ] \\\\&\\ge \\liminf _{m \\rightarrow \\infty } [p](\\hat{\\mu }_n, \\mu _{\\theta _{\\psi (\\phi (m))}}) - \\limsup _{m \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\mu _{\\theta _{\\psi (\\phi (m))}}, \\hat{\\mu }_{\\theta _{\\psi (\\phi (m))}, \\psi (\\phi (m))}) | Y_{1:n} \\right] \\nonumber \\\\&\\ge [p](\\hat{\\mu }_n, \\mu _{\\bar{\\theta }}) \\\\&\\ge \\inf _{\\theta \\in \\mathsf {K}} [p](\\hat{\\mu }_n, \\mu _\\theta ) \\nonumber $ where (REF ) results from the triangle inequality and () is obtained by assumption:sw32 on one hand and by lower semi-continuity on the other hand since $\\mu _{\\theta _{\\psi (\\phi (m))}} \\xrightarrow{}\\mu _{\\bar{\\theta }}$ by assumption:continuousmap.", "We conclude that the first condition in (REF ) holds.", "Now, we fix $\\mathsf {O}\\subset \\Theta $ open.", "By definition of the infimum, there exists a sequence $(\\theta _m)_{m \\in \\mathbb {N}}$ in $\\mathsf {O}$ such that $[p](\\hat{\\mu }_n, \\mu _{\\theta _m})$ converges to $\\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n, \\mu _\\theta )$ .", "For any $m \\in \\mathbb {N}$ , $\\inf _{\\theta \\in \\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\le \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _m, m}) | Y_{1:n} \\right]$ .", "Therefore, $&\\limsup _{m \\rightarrow \\infty } \\inf _{\\theta \\in 
\\mathsf {O}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\\\&\\le \\limsup _{m \\rightarrow \\infty } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta _m, m}) | Y_{1:n} \\right] \\nonumber \\\\&\\le \\limsup _{m \\rightarrow \\infty } [ [p](\\hat{\\mu }_n, \\mu _{\\theta _m}) + \\mathbb {E}\\left[ [p](\\mu _{\\theta _m}, \\hat{\\mu }_{\\theta _m, m}) | Y_{1:n} \\right] ] \\;\\; \\text{by the triangle inequality} \\\\&\\le \\limsup _{m \\rightarrow \\infty } [p](\\hat{\\mu }_n, \\mu _{\\theta _m}) \\text{ by {assumption:sw32}} \\\\&= \\inf _{\\theta \\in \\mathsf {O}} [p](\\hat{\\mu }_n, \\mu _\\theta ) \\;\\; \\text{by definition of $(\\theta _m)_{m \\in \\mathbb {N}}$}$ This shows that the second condition in (REF ) holds, and hence, the sequence of functions $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ epi-converges to $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ .", "Now, we apply [35].", "By [35], () immediately follows from the epi-convergence of $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ to $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ .", "Next, we show that [35] holds by finding for any $\\eta > 0$ a compact set $\\mathsf {B}\\subset \\Theta $ and $M \\in \\mathbb {N}$ such that, for all $m \\ge M$ , $\\inf _{\\theta \\in \\mathsf {B}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\le \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] + \\eta \\;.$ In fact, we simply show that there exists a compact set $\\mathsf {B}\\subset \\Theta $ and $M \\in \\mathbb {N}$ such that, for all $m \\ge M$ , $\\inf _{\\theta \\in \\mathsf {B}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] = \\inf _{\\theta \\in \\Theta } \\mathbb 
{E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right]$ .", "On one hand, the second condition in (REF ) gives us $\\limsup _{m \\rightarrow \\infty } \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\le \\inf _{\\theta \\in \\Theta } [p](\\hat{\\mu }_n, \\mu _\\theta ) = \\epsilon _n \\;.$ We deduce that there exists $m_{\\epsilon /4}$ such that, for $m \\ge m_{\\epsilon /4}$ , $ \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\le \\epsilon _n + \\frac{\\epsilon }{4} \\;.$ with the $\\epsilon $ of assumption:boundedsetepsn.", "When $m \\ge m_{\\epsilon /4}$ , the set $\\Theta _{\\epsilon /2} = \\lbrace \\theta \\in \\Theta : \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} ] \\le \\epsilon _n + \\frac{\\epsilon }{2} \\rbrace $ is non-empty as it contains $\\theta ^*$ defined as $\\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ^*, m}) | Y_{1:n} ] = \\inf _{\\theta \\in \\Theta } \\mathbb {E}[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} ]$ .", "On the other hand, by assumption:sw32, there exists $m_{\\epsilon /2}$ such that, for $m \\ge m_{\\epsilon /2}$ , $ \\mathbb {E}\\left[ [p](\\mu _\\theta , \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\le \\frac{\\epsilon }{2} \\;.$ Let $\\theta $ belong to $\\Theta _{\\epsilon /2}$ and $m \\ge m_* = \\max \\lbrace m_{\\epsilon /4}, m_{\\epsilon /2} \\rbrace $ .", "By the triangle inequality, $[p](\\hat{\\mu }_n, \\mu _\\theta ) &\\le \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] + \\mathbb {E}\\left[ [p](\\mu _\\theta , \\hat{\\mu }_{\\theta , m}) | Y_{1:n} \\right] \\\\&\\le \\epsilon _n + \\epsilon \\text{\\;\\;\\; since $\\theta \\in \\Theta _{\\epsilon /2}$ and by (\\ref {eqn:m_eps2})}$ This means that, when $m \\ge m_*$ , $\\Theta _{\\epsilon /2} \\subset \\Theta 
_{\\epsilon , n}$ , and since $\\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ is attained in $\\Theta _{\\epsilon /2}$ , $\\inf _{\\theta \\in \\Theta _{\\epsilon , n}} \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right] = \\inf _{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right] \\;.", "$ By [35], (REF ) is a direct consequence of (REF ) and the epiconvergence of $\\theta \\mapsto \\mathbb {E}\\left[ [p](\\hat{\\mu }_n(\\omega ), \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ to $\\theta \\mapsto [p](\\hat{\\mu }_n, \\mu _\\theta )$ .", "Finally, by the same reasoning that was done earlier in this proof for $\\mathrm {argmin}_{\\theta \\in \\Theta } [p](\\hat{\\mu }_n, \\mu _\\theta )$ , the set $\\mathrm {argmin}_{\\theta \\in \\Theta } \\mathbb {E}\\left[ [p](\\hat{\\mu }_n, \\hat{\\mu }_{\\theta ,m}) | Y_{1:n} \\right]$ is non-empty for $m \\ge m_*$ ." ], [ "Proof of Rate of convergence and asymptotic distribution: Proof of thm:asymptotic1 and thm:asymptotic2", "[Proof of thm:asymptotic1 and thm:asymptotic2] The proof of thm:asymptotic1 and thm:asymptotic2 consists in showing that the conditions of Theorem 4.2 and Theorem 7.2 in [42] respectively are satisfied: conditions (i), (ii) and (iii) follow from assumption:wellseparation, assumption:formderivative and assumption:weakconvergencewithoutnorm." 
], [ "Computational Aspects", "The MSWE and MESWE are in general computationally intractable, partly because the Sliced-Wasserstein distance requires an integration over infinitely many projections.", "In this section, we review the numerical methods used to approximate these two estimators.", "Approximation of $[p]$ : We recall the definition of the SW distance below.", "$[p]^{p}(\\mu , \\nu ) = \\int _{\\mathbb {S}^{d-1}} [p]^p(u^{\\star }_{\\sharp } \\mu , u^{\\star }_{\\sharp } \\nu ) \\mathrm {d}\\sigma (u) \\;,$ where $\\sigma $ is the uniform distribution on $\\mathbb {S}^{d-1}$ and for any measurable function $f :\\mathsf {Y}\\rightarrow \\mathbb {R}$ and $\\zeta \\in \\mathcal {P}(\\mathsf {Y})$ , $f_{\\sharp }\\zeta $ is the push-forward measure of $\\zeta $ by $f$ .", "We approximate the integral in (REF ) by selecting a finite set of projections $\\mathsf {U}\\subset \\mathbb {S}^{d-1}$ and computing the empirical average: $ [p]^{p}(\\mu , \\nu ) \\approx \\frac{1}{\\operatorname{card}(\\mathsf {U})}\\sum _{u \\in \\mathsf {U}} [p]^p(u^{\\star }_{\\sharp } \\mu , u^{\\star }_{\\sharp } \\nu )$ The quality of this approximation depends on the sampling of $\\mathbb {S}^{d-1}$ .", "In our work, we use random samples picked uniformly on $\\mathbb {S}^{d-1}$ , as proposed in [43] and explained hereafter (see paragraph “Sampling schemes”).", "The Wasserstein distance between two one-dimensional probability densities $\\mu $ and $\\nu $ as defined in (REF ) is also estimated by replacing the integrals with a Monte Carlo estimate, and we can use two distinct methods to approximate this quantity.", "The first approximation we consider is given by, $[p]^p(\\mu , \\nu ) \\approx \\frac{1}{K} \\sum _{k=1}^K \\left| \\tilde{F}_\\mu ^{-1}(t_k) - \\tilde{F}_\\nu ^{-1}(t_k) \\right|^p \\;,$ where $\\lbrace t_k\\rbrace _{k=1}^K$ are uniform and independent samples from $\\left[0,1\\right]$ and for $\\xi \\in \\lbrace \\mu ,\\nu \\rbrace $ , $\\tilde{F}_\\xi ^{-1}$ is a
linear interpolation of $\\bar{F}^{-1}_{\\xi }$ which denotes either the exact quantile function of $\\xi $ if $\\xi $ is discrete, or an approximation by a Monte Carlo procedure.", "This last option is justified by the Glivenko-Cantelli Theorem.", "The second approximation is given by, $[p]^p(\\mu , \\nu ) \\approx \\frac{1}{K} \\sum _{k=1}^K \\left| s_k - \\tilde{F}_\\nu ^{-1}(\\tilde{F}_\\mu (s_k)) \\right|^p \\;,$ where $\\lbrace s_k\\rbrace _{k=1}^K$ are independent samples from $\\mu $ and for $\\xi \\in \\lbrace \\mu ,\\nu \\rbrace $ , $\\tilde{F}_\\xi $ (resp.", "$\\tilde{F}_\\xi ^{-1}$ ) is a linear interpolation of $\\bar{F}_{\\xi }$ (resp.", "$\\bar{F}^{-1}_{\\xi }$ ) which denotes either the exact cumulative distribution function (resp.", "quantile function) of $\\xi $ if $\\xi $ is discrete or an approximation by a Monte Carlo procedure.", "Sampling schemes: We explain the methods that we used to generate i.i.d.", "samples from the uniform distribution on the $d$ -dimensional sphere $\\mathbb {S}^{d-1}$ and from multivariate elliptically contoured stable distributions.", "Uniform sampling on the sphere.", "To sample from $\\mathbb {S}^{d-1}$ , we form the $d$ -dimensional vector $\\mathbf {s}$ by drawing each of its $d$ components from the standard normal distribution $\\mathcal {N}(0, 1)$ and we normalize it: $\\mathbf {s^{\\prime }} = \\mathbf {s}/ \\Vert \\mathbf {s}\\Vert _2$ , so that $\\mathbf {s^{\\prime }}$ lies on the sphere.", "Sampling from multivariate elliptically contoured stable distributions.", "We recall that if $Y \\in \\mathbb {R}^d$ is $\\alpha $ -stable and elliptically contoured, i.e.", "$Y \\sim \\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {\\Sigma }, \\mathbf {m})$ , then its joint characteristic function is defined as, for any $\\mathbf {t}\\in \\mathbb {R}^d$ , $ \\mathbb {E}[ \\exp (i\\mathbf {t}^T Y) ] = \\exp \\left( - (\\mathbf {t}^T \\mathbf {\\Sigma }\\mathbf {t})^{\\alpha / 2} + i \\mathbf {t}^T \\mathbf
{m}\\right) \\;,$ where $\\mathbf {\\Sigma }$ is a positive definite matrix (akin to a correlation matrix), $\\mathbf {m}\\in \\mathbb {R}^d$ is a location vector (equal to the mean if it exists) and $\\alpha \\in (0, 2)$ controls the thickness of the tail.", "Elliptically contoured stable distributions are scale mixtures of multivariate Gaussian distributions [28], whose densities are intractable, but can easily be simulated [27]: let $A \\sim \\mathcal {S}_{\\alpha /2}(\\beta , \\gamma , \\delta )$ be a one-dimensional positive $(\\alpha / 2)$ -stable random variable with $\\beta = 1$ , $\\gamma = 2\\cos (\\frac{\\pi \\alpha }{4})^{2/\\alpha }$ and $\\delta = 0$ , and $G \\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {\\Sigma })$ .", "Then, $Y = \\sqrt{A}G + \\mathbf {m}$ has (REF ) as characteristic function.", "Optimization methods: Computing the MSWE and MESWE implies minimizing the (expected) Sliced-Wasserstein distance over the set of parameters.", "In our experiments, we used different optimization methods as we detail below.", "Multivariate Gaussian distributions.", "We derive the explicit gradient expressions of the approximate $[2]^2$ distance with respect to the mean and scale parameters $\\mathbf {m}$ and $\\sigma ^2$ , and we use the ADAM stochastic optimization method with the default parameter settings suggested in [32].", "For the MSWE, we use (REF ) to approximate the one-dimensional Wasserstein distance, and we evaluate directly the Gaussian density of the generated samples, utilizing the fact that the projection of a Gaussian of parameters $(\\mathbf {m}, \\sigma ^2 \\mathbf {I})$ along $u \\in \\mathbb {S}^{d-1}$ is a 1D normal distribution of parameters $(\\langle u, \\mathbf {m}\\rangle , \\sigma ^2 \\langle u, u \\rangle )$ .", "In this case, the gradient of the approximate $[2]^2$ between $\\mu = \\mathcal {N}(\\mathbf {m}, \\sigma ^2\\mathbf {I})$ and the empirical distribution associated to $n$ samples drawn by $\\mathcal {N}(\\mathbf 
{m}_\\star , \\sigma ^2_\\star \\mathbf {I})$ , denoted by $\\hat{\\nu }$ , is given by, $\\nabla _\\mathbf {m}[2]^2(\\mu , \\hat{\\nu }) = \\frac{1}{\\operatorname{card}(\\mathsf {U})\\operatorname{card}(\\mathsf {S})} \\sum _{u \\in \\mathsf {U}, s \\in \\mathsf {S}} \\bigg ( \\left| s - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\nu }}^{-1}(\\tilde{F}_{u^{\\star }_{\\sharp }\\mu }(s)) \\right|^2 \\mathcal {N}(s ; \\langle u, \\mathbf {m}\\rangle , \\sigma ^2 {u}^2)& \\\\\\frac{s - \\langle u, \\mathbf {m}\\rangle }{\\sigma ^2 {u}^2} u \\bigg ),& \\\\\\nabla _{\\sigma ^2} [2]^2(\\mu , \\hat{\\nu }) = \\frac{1}{\\operatorname{card}(\\mathsf {U})\\operatorname{card}(\\mathsf {S})} \\sum _{u \\in \\mathsf {U}, s \\in \\mathsf {S}} \\bigg ( \\left| s - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\nu }}^{-1}(\\tilde{F}_{u^{\\star }_{\\sharp }\\mu }(s)) \\right|^2 \\mathcal {N}(s ; \\langle u, \\mathbf {m}\\rangle , \\sigma ^2 {u}^2) \\\\\\frac{1}{2 \\sigma ^2 }\\left(\\frac{(s - \\langle u, \\mathbf {m}\\rangle )^2}{\\sigma ^2 {u}^2} - 1 \\right) \\bigg ),&$ where $\\mathsf {U}\\subset \\mathbb {S}^{d-1}$ is a finite set of random projections picked uniformly on $\\mathbb {S}^{d-1}$ , $\\mathsf {S}$ is a finite subset in $\\mathbb {R}$ , and for any $s \\in \\mathsf {S}$ , $\\mathcal {N}(s ; \\langle u, \\mathbf {m}\\rangle , \\sigma ^2 {u}^2)$ denotes the density function of the Gaussian of parameters $(\\langle u, \\mathbf {m}\\rangle , \\sigma ^2 {u}^2)$ evaluated at $s$ .", "For the MESWE, we use (REF ) and evaluate the empirical distribution of generated samples instead of their normal density.", "Therefore, the gradient of the approximate $[2]^2$ between the empirical distributions corresponding to one generated dataset of $m$ samples drawn from $\\mathcal {N}(\\mu , \\sigma ^2\\mathbf {I})$ and $n$ samples drawn from $\\mathcal {N}(\\mu _\\star , \\sigma ^2_\\star \\mathbf {I})$ , respectively denoted by $\\hat{\\mu }$ and $\\hat{\\nu }$ , is obtained with, $\\nabla 
_\\mathbf {m}[2]^2(\\hat{\\mu }, \\hat{\\nu }) &= \\frac{-2}{\\operatorname{card}(\\mathsf {U}).K} \\sum _{u \\in \\mathsf {U}} \\sum _{k=1}^K \\left| \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\mu }}^{-1}(t_k) - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\nu }}^{-1}(t_k) \\right| u \\;, \\\\\\nabla _{\\sigma ^2} [2]^2(\\hat{\\mu }, \\hat{\\nu }) &= \\frac{1}{\\operatorname{card}(\\mathsf {U}).K} \\sum _{u \\in \\mathsf {U}} \\sum _{k=1}^K \\left| \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\mu }}^{-1}(t_k) - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\nu }}^{-1}(t_k) \\right| \\frac{\\langle u, \\mathbf {m}\\rangle - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\mu }}^{-1}(t_k)}{\\sigma ^2} \\;.$ Multivariate elliptically contoured stable distributions.", "When comparing MESWE to MEWE, we approximate these estimators using the derivative-free optimization method Nelder-Mead (implemented in Scipy), following the approach in [3].", "When illustrating the theoretical properties of MESWE, we proceed in the same way as for the multivariate Gaussian experiment: we compute the explicit gradient expression of the approximate $[2]^2$ distance with respect to the location parameter $\\mathbf {m}$ , and we use the ADAM stochastic optimization method with the default settings.", "eq:alphagradient gives the formula of the gradient of the approximate $[2]^2$ between the empirical distributions of one generated dataset of $m$ samples drawn from $\\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {I}, \\mathbf {m})$ and $n$ samples drawn from $\\mathcal {E}\\alpha \\mathcal {S}_c(\\mathbf {I}, \\mathbf {m}_\\star )$ , respectively denoted by $\\hat{\\mu }$ and $\\hat{\\nu }$ , with respect to $\\mathbf {m}$ .", "$ \\nabla _\\mathbf {m}[2]^2(\\hat{\\mu }, \\hat{\\nu }) = \\frac{-2}{\\operatorname{card}(\\mathsf {U}).K} \\sum _{u \\in \\mathsf {U}} \\sum _{k=1}^K \\left| \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\mu }}^{-1}(t_k) - \\tilde{F}_{u^{\\star }_{\\sharp }\\hat{\\nu }}^{-1}(t_k) \\right| u 
\\;.$ High-dimensional real data using GANs.", "We use the ADAM optimizer provided by TensorFlow GPU.", "Computing infrastructure: The experiment comparing the computational time of MESWE and MEWE was conducted on a daily-use laptop (CPU Intel Core i7, 1.90GHz $\\times $ 8 and 16GB of RAM).", "The neural network experiment was run on a cluster with 4 relatively modern GPUs." 
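The projection-based approximation above is straightforward to implement. The sketch below is illustrative, not the authors' code: the function names and sample sizes are our own choices. It draws uniform projections on $\mathbb {S}^{d-1}$ by normalizing Gaussian vectors, and estimates $[p]^{p}$ between two empirical samples using linearly interpolated empirical quantiles at random levels $t_k$, as in the first one-dimensional approximation described above.

```python
import numpy as np

def sample_sphere(n_proj, d, rng):
    """Uniform samples on S^{d-1}: normalize standard Gaussian vectors."""
    u = rng.standard_normal((n_proj, d))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

def sliced_wasserstein_pp(x, y, n_proj=50, n_quantiles=100, p=2, rng=None):
    """Monte Carlo estimate of SW_p^p between the empirical measures of x and y.

    For each random projection u, the 1D W_p^p is approximated by comparing
    (linearly interpolated) empirical quantiles at random levels t_k in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    u = sample_sphere(n_proj, x.shape[1], rng)
    t = rng.uniform(0.0, 1.0, size=n_quantiles)  # random quantile levels t_k
    xp, yp = x @ u.T, y @ u.T                    # 1D projections, shape (n, n_proj)
    total = 0.0
    for j in range(n_proj):
        qx = np.quantile(xp[:, j], t)  # linear interpolation by default
        qy = np.quantile(yp[:, j], t)
        total += np.mean(np.abs(qx - qy) ** p)
    return total / n_proj
```

By construction the estimate vanishes for two identical samples and grows with, e.g., a translation between the two clouds.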
1906.04516
[ [ "Testing dissipative dark matter in causal thermodynamics" ], [ "Abstract In this paper we study the consistency of a cosmological model representing a universe filled with a one-component dissipative dark matter fluid, in the framework of the causal Israel-Stewart theory, where a general expression arising from perturbation analysis for the relaxation time $\\tau$ is used.", "This model is described by an exact analytic solution recently found in [N. Cruz, E. Gonz\\'alez and G. Palma, Gen. Rel.", "Grav.", "\\textbf{52}, 62 (2020)], which depends on several model parameters as well as integration constants, allowing the use of Type Ia Supernovae and Observational Hubble data to perform a stringent observational test.", "The constraint regions found for the parameters of the solution allow the existence of an accelerated expansion of the universe at late times, after the domination era of the viscous pressure, which holds without the need of including a cosmological constant.", "Nevertheless, the fitted parameter values lead to drawbacks such as a very large non-adiabatic contribution to the speed of sound, and some inconsistencies, not totally conclusive, with the description of the dissipative dark matter as a fluid, which is nevertheless a common feature of these kinds of models." 
], [ "Introduction", "It is well accepted that nowadays the cosmological data consistently indicates that the expansion of the Universe began to accelerate [1], [2], [3], [4], [5] around $z=0.64$  [6].", "Thus, every model used to describe the cosmic background evolution must display this transition in its dynamics.", "Of course, $\\Lambda $ CDM presents this transition as well and it can be understood as the transition between the dark matter (DM) dominant era and the era dominated by the dark energy (DE).", "Nevertheless, despite the fact that the $\\Lambda $ CDM model has been very successful in explaining the cosmological data, it presents the following weak points from the theoretical point of view: i) Why is the estimated value of $\\Lambda $ 120 orders of magnitude smaller than the physically predicted one?", "This is the well-known cosmological constant problem [7], [8], [9], [10], [11], [12], [13], which can be represented mainly by the two following open questions: a) Why does the observed vacuum energy have such an unnaturally small but non-vanishing value?, and b) Why do we observe the vacuum density to be so close to the matter density, even though their ratio can vary up to 120 orders of magnitude during the cosmic evolution?", "(known as the coincidence problem) [14], [15].", "This model presents serious specific observational challenges and tensions as well (for a brief review see for example [16]).", "As an alternative to $\\Lambda $ CDM, DM unified models do not invoke a cosmological constant.", "In the framework of general relativity, non-perfect fluids drive the accelerated expansion due to the negativeness of the viscous pressure, which appears due to the presence of bulk viscosity.", "Therefore, a Cold DM (CDM) viscous component represents a kind of unified DM model that could, in principle, explain the above-mentioned transition of the acceleration without the inclusion of a DE component.", "It is worth mentioning that measurements of the Hubble
constant show tension with the values obtained from large scale structure (LSS) and Planck CMB data, which can be alleviated when viscosity is included in the DM component [17].", "The new era of gravitational wave detectors has also opened the possibility to detect dissipative effects in DM and DE through the dispersion and dissipation experienced by these waves propagating in a non-perfect fluid.", "Some constraints on those effects were found in [18].", "For neutralino CDM it was pointed out in [19] that a bulk viscosity appears in the CDM fluid due to the energy transferred from the CDM fluid to the radiation fluid.", "From the point of view of cosmological perturbations, it has been shown that viscous fluid dynamics provides a simple and accurate framework for extending the description of cosmological perturbations into the nonlinear regime  [20].", "Dissipative DM also appears as a residing component in a hidden sector, and can reproduce several observational properties of disk galaxies [21], [22].", "Alternatively, the direct study of astrophysical scenarios, such as the Bullet Cluster, has been an arena to place constraints on the DM-DM elastic scattering cross-section [23], [24].", "Simulations of this cluster with the inclusion of self-interacting DM and gas were performed in [25], finding a cross-section of around $\\sigma /m = 2cm^{2}g^{-1}$ .", "Another study presents an indication that self-interacting DM would require a cross-section that varies with the relative velocity between DM particles in order to modify the structure of dwarf galaxy dark matter haloes [26].", "In spite of the fact that the Bullet Cluster revealed that the baryonic matter has a viscosity much larger than the DM viscosity, its dissipative negative pressure contribution to the accelerated expansion of the universe can be neglected due to its very low density in comparison with that of the DM.", "At background level, where a homogeneous and isotropic space describes the Universe as a
whole, only bulk viscosity is present in the cosmic fluid and the dissipative pressure must be described by some relativistic thermodynamical approach to non-perfect fluids.", "This implies a crucial point in a fully consistent physical description of the expansion of the Universe using dissipative processes to generate the transition.", "While in the $\\Lambda $ CDM model the acceleration is due to a cosmological constant and the entropy remains constant, in the case of non-perfect fluids it is necessary to find a solution that not only consistently describes the kinematics of the Universe, but also satisfies the thermodynamical requirements.", "In the case of a description of viscous fluids, Eckart's theory [27], [28] has been widely investigated due to its simplicity and became the starting point to shed some light on the behavior of the dissipative effects in late-time cosmology [29], [30], [31], [32] or in inflationary scenarios [33].", "In the framework of an interacting dark sector, a recent work explores a flat universe with a radiation component and a viscous fluid (DM plus baryons) that interacts with a perfect fluid (DE) [34].", "Also a $\\Lambda $ CDM model with a dissipative DM, where the viscosity is a polynomial function of the redshift, has been constrained in [35].", "Nevertheless, it is a well-known result that Eckart's theory has non-causal behavior, presenting the problem of superluminal propagation velocities and some instabilities.", "So, from the point of view of a consistent description of the relativistic thermodynamics of non-perfect fluids, it is necessary to include a causal description such as the one given by the Israel-Stewart (IS) theory [36], [37], [38], [39], [40], [41], [42].", "The aim of this paper is to constrain the free parameters appearing in the recent exact cosmological solutions found in [43], for a universe filled only with a dissipative dark matter component.", "The constraint was
done by using the Supernova Ia (SNe Ia) data and the Observational Hubble Data (OHD).", "These solutions were found in the framework of the causal thermodynamics described by the IS theory, and are compatible with a transition between decelerated and accelerated expansion at background level, within a certain range of the parameters involved.", "Since the solution found describes a universe containing only dissipative DM as its main component, it should only be considered as an adequate approximation for the late time evolution, where cold DM dominates.", "In this sense, these models cannot be expected to be fairly representative of the early time evolution, where ultrarelativistic matter dominates.", "For these solutions, a barotropic EoS was assumed for the fluid that fills the universe, i.e., $p=\\left(\\gamma -1\\right)\\rho ,$ where $p$ is the barotropic pressure, and $\\rho $ is the energy density.", "These solutions describe the evolution of the universe with dissipative DM with positive pressure, therefore the EoS parameter considered lies in the range $1\\le \\gamma <2$ , where $\\gamma =1$ corresponds to a particular solution.", "Furthermore, the following Ansatz for the bulk viscosity coefficient, $\\xi (\\rho )$ , $\\xi (\\rho )=\\xi _{0}\\rho ^{s}, $ was chosen, which has been widely considered as a suitable relation between the bulk viscosity and the energy density of the main fluid.", "$\\xi _{0}$ must be a positive constant because of the second law of thermodynamics [44].", "The nonlinear ordinary differential equation of the IS theory obtained with the above assumptions has been solved, for example, for different values of the parameter $s$ in [45]; for $s=1/4$ and stiff matter in  [46].", "Inflationary solutions were found in [47].", "Stability of inflationary solutions was investigated in [48], [49].", "For an extensive review on viscous cosmology in early and late times, see [50].", "It is worth mentioning that in the formulation of the
thermodynamical approaches of relativistic viscous fluids it is assumed that the viscous pressure must be lower than the equilibrium pressure of the fluid (the near-equilibrium condition).", "Whenever there are solutions with acceleration at some stage, like, for example, bulk viscous inflation at early times, or a transition between decelerated and accelerated expansion at late times, the above condition cannot be fulfilled.", "Therefore, the application of the above approach is not clearly justified in such situations.", "To overcome this issue, deviations from the near-equilibrium condition have been considered within a nonlinear extension of IS, such as the one proposed in [51].", "Using this proposal, nonlinear extensions in accelerated eras occurring at early times, like inflation, or at late times, like phantom behavior, were investigated in [52] and [53], respectively.", "The current accelerated expansion was addressed with a nonlinear model for viscosity in [54].", "Also, a phase space analysis of a cosmological model with both viscous radiation and non-viscous dust was performed in [55], where the viscous pressure obeys a nonlinear evolution equation.", "It is worth mentioning that in [56] it was shown that the inclusion of a cosmological constant along with a dissipative DM component allows one to obey the near-equilibrium condition within, in principle, the linear IS theory.", "The analytical solution we will analyse in the present article was obtained using the general expression for the relaxation time $\\tau $  [42], derived from the study of the causality and stability of the IS theory in [57] $\\frac{\\xi }{\\left(\\rho +p\\right)\\tau }=c_{b}^{2},$ where $c_{b}$ is the speed of bulk viscous perturbations (non-adiabatic contribution to the speed of sound in a dissipative fluid without heat flux or shear viscosity).", "Since the dissipative speed of sound $V$ is given by $V^{2}= c_{s}^{2}+c_{b}^{2}$ , where $c_{s}^{2}=(\\partial p/\\partial \\rho
)_{s}$ is the adiabatic contribution, then for a barotropic fluid $c_{s}^{2}=\\gamma -1$ and thus $c_{b}^{2}=\\epsilon \\left(2-\\gamma \\right)$ with $0<\\epsilon \\le 1$ , known as the causality condition.", "The solution generalizes the one found in [58], where the particular expression $\\tau =\\xi /\\rho $ was used, together with the particular values $s=1/2$ and $\\gamma =1$ .", "In a previous work, which included Eq.", "(REF ) for the relaxation time and a pressureless main fluid, the IS equation was solved by using an Ansatz for the viscous pressure [59].", "The conclusion indicates that the full causal theory seems to be disfavored by the cosmological data.", "Nevertheless, in the truncated version of the theory, similar results to those of the $\\Lambda $ CDM model were found for a bulk viscous speed within the interval $10^{-11}\\ll c_{b}^{2}\\lesssim 10^{-8}$ .", "This last constraint on $c_{b}^{2}$ , even though it was obtained with a suitable Ansatz and does not represent an exact solution of the theory, teaches us that the non-adiabatic contribution to the speed of sound must be very small to be consistent with the cosmological data.", "The free parameters of the general analytical solution that we will constrain against observational data in the present article are $h$ , $\\xi _{0}$ , $\\epsilon $ and $\\gamma $ .", "In the case of one CDM component taken from the beginning, only $h$ , $\\xi _{0}$ and $\\epsilon $ remain free and we find the constraints to obtain a solution that presents a transition from decelerated to accelerated expansion.", "We will also analyse the constraints for the case where all parameters are taken free.", "Using the observational constraints obtained for the parameters $h$ , $\\xi _{0}$ and $\\epsilon $ for both cases, $\\gamma =1$ and $\\gamma $ free, we will discuss the consistency of a fluid description during the cosmic evolution of the exact solutions representing a dissipative DM component.", "To
this end, we evaluate the consistency condition $\\tau H < 1$ in terms of the constrained parameter values, with $\\tau $ being the relaxation time and $H$ the Hubble parameter.", "This paper is organized as follows: In section II we briefly describe the causal Israel-Stewart theory and recall the general evolution equation for the Hubble parameter $H$ .", "We also write down the constraints on the parameters of the model in order to guarantee a consistent fluid description.", "In section III we present the expressions corresponding to the analytic solution found in [43] for arbitrary $\\gamma $ and for the dust case, $\\gamma =1$ , respectively.", "In section IV we constrain the free parameters of the solutions with the Supernovae Ia (SNe Ia) and Observational Hubble Data (OHD).", "In section V we discuss these results, comparing them with the $\\Lambda $ CDM model, and examine the implications of the obtained parameter values, including the thermodynamic ones.", "Finally, in section VI we present our conclusions, taking into account the kinematic properties of the solutions, as well as the consistency of a fluid description." 
], [ "Israel-Stewart formalism", "The model of a dissipative DM component is described by Einstein's equations for a flat FRW metric: $3H^{2}=\\rho , $ and $ 2\\dot{H}+3H^{2}=-p-\\Pi , $ where natural units defined by $8\\pi G=c=1$ were used.", "In addition, in the IS framework, the transport equation for the viscous pressure $\\Pi $ reads [37] $\\tau \\dot{\\Pi }+\\Pi = -3\\xi (\\rho ) H-\\frac{1}{2}\\tau \\Pi \\left(3H+\\frac{\\dot{\\tau }}{\\tau }-\\frac{\\dot{\\xi (\\rho )}}{\\xi (\\rho )}-\\frac{\\dot{T}}{T}\\right), $ where “dot\" accounts for the derivative with respect to the cosmic time, $H$ is the Hubble parameter and $T$ is the barotropic temperature, which takes the form $T=T_{0}\\rho ^{\\left(\\gamma -1\\right)/\\gamma }$ (Gibbs integrability condition when $p=\\left(\\gamma -1\\right)\\rho $ ), with $T_{0}$ being a positive parameter.", "Using Eqs.", "(REF ), (REF ) in Eq.", "(REF ) we obtain the following expression for the relaxation time $\\tau =\\frac{\\xi _{0}}{\\epsilon \\gamma \\left(2-\\gamma \\right)}\\rho ^{s-1}=\\frac{3^{s-1}\\xi _{0}}{\\epsilon \\gamma \\left(2-\\gamma \\right)}H^{2(s-1)} .", "$ In order to obtain a differential equation in terms of the Hubble parameter, it is necessary to evaluate the ratios $\\dot{\\tau }/\\tau ,\\,\\,\\dot{\\xi }/\\xi $ and $\\dot{T}/T$ , which appear in Eq.", "(REF ).", "From Eqs.", "(REF ) and (REF ) the expression for the viscous pressure and its time derivative can be obtained.", "Using the above expressions a nonlinear second-order differential equation can be obtained for the Hubble parameter: $\\begin{split}& \\ddot{H}+3H\\dot{H}+(3)^{1-s}\\xi _{0}^{-1}\\epsilon \\gamma \\left(2-\\gamma \\right)H^{2-2s}\\dot{H}-\\frac{(2\\gamma -1)}{\\gamma }H^{-1}\\dot{H}^{2}+\\frac{9}{4}\\gamma \\left[1-2\\epsilon \\left(2-\\gamma \\right)\\right]H^{3} \\\\& +\\frac{1}{2}(3)^{2-s}\\xi _{0}^{-1}\\epsilon \\gamma ^{2}\\left(2-\\gamma \\right)H^{4-2s}=0.", "\\end{split}$ We refer the reader to the
technical details in ref.", "[43].", "As we shall see in the next section, the exact solution was obtained for the special case $s=1/2$ , which allows an important simplification of Eq.", "(REF ).", "In fact, in this case the simple form $H\\left(t\\right)=A\\left(t_{s}-t\\right)^{-1}$ is a solution of Eq.", "(REF ) with a phantom behavior, with $A>0$ , $\\epsilon =1$ and the restriction $ 0<\\gamma <3/2$  [60].", "Besides, the solution $H\\left(t\\right)=A\\left(t-t_{s}\\right)^{-1}$ can represent accelerated universes if $A>1$ , Milne universes if $A=1$ and decelerated universes if $A<1$ , all of them having an initial singularity at $t=t_{s}$  [61].", "As mentioned in Section I, an important issue that we will discuss after constraining the parameters $\\xi _{0}$ and $\\epsilon $ , for both cases $\\gamma =1$ and $\\gamma $ free, is whether the obtained values satisfy the condition for keeping the fluid description of the dissipative dark matter component, expressed by the constraint $\\tau H<1$ .", "Using Eq.", "(REF ) for the case $s=1/2$ and Eq.", "(REF ), the above inequality leads to the following constraint between the parameters $\\xi _{0}$ , $\\epsilon $ and $\\gamma $ $\\frac{\\xi _{0}}{\\sqrt{3}\\epsilon \\gamma (2-\\gamma )} \\ll 1 .$ We will discuss this condition later using the values of $\\xi _{0}$ and $\\epsilon $ , with and without a choice of the $\\gamma $ value, obtained from the cosmological data of SNe Ia observations." 
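A quick numerical check of this condition is immediate: for $s=1/2$ the product $\tau H$ reduces to a constant. The sketch below evaluates it for hypothetical parameter values (chosen for illustration only, not the ones fitted later in the paper).

```python
import numpy as np

def fluid_consistency_ratio(xi0, eps, gamma):
    """For s = 1/2, tau*H reduces to the constant xi0 / (sqrt(3)*eps*gamma*(2 - gamma));
    the fluid description of the dissipative DM requires this ratio to be well below 1."""
    return xi0 / (np.sqrt(3.0) * eps * gamma * (2.0 - gamma))

# Hypothetical values (illustration only): dust (gamma = 1), eps = 0.1, xi0 = 0.01
ratio = fluid_consistency_ratio(xi0=0.01, eps=0.1, gamma=1.0)
```

For these illustrative values the ratio is about 0.06, so the condition would hold; whether the observationally fitted values satisfy it is precisely the question examined in the following sections.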
], [ "The exact cosmological solutions", "Now, we will briefly discuss the two solutions of Eq.", "(REF ) found in [43] for $s=1/2$ and for the special cases of $\\gamma \\ne 1$ and $\\gamma =1$ .", "i) In the case of $\\gamma \\ne 1$ , the solution of Eq.", "(REF ) can be written as a function of the redshift $z$ as $H(z)=C_{3}\\left(1+z\\right)^{\\alpha }\\cosh ^{\\gamma }{\\left[\\beta \\left(\\ln {\\left(1+z\\right)}+C_{4}\\right)\\right]}, $ where $C_{3}$ and $C_{4}$ are constants given by $C_{3}=\\frac{H_{0}}{\\cosh ^{\\gamma }{\\left(\\beta C_{4}\\right)}}=H_{0}\\left[1-\\frac{\\left(q_{0}+1-\\alpha \\right)^{2}}{\\gamma ^{2}\\beta ^{2}}\\right]^{\\frac{\\gamma }{2}}, $ $C_{4}=\\frac{1}{\\beta }\\mathop {\\mathrm {arctanh}}\\left[\\frac{\\left(q_{0}+1\\right)-\\alpha }{\\gamma \\beta }\\right], $ $\\alpha =\\frac{\\sqrt{3}\\gamma }{2\\xi _{0}}\\left[\\sqrt{3}\\xi _{0}+\\epsilon \\gamma \\left(2-\\gamma \\right)\\right], $ $\\beta =\\frac{\\sqrt{3}}{2\\xi _{0}}\\sqrt{6\\xi _{0}^{2}\\epsilon \\left(2-\\gamma \\right)+\\epsilon ^{2}\\gamma ^{2}\\left(2-\\gamma \\right)^{2}}.", "$ In the above expressions $H_{0}$ and $q_{0}$ are the Hubble and deceleration parameters at the present time, respectively, where the deceleration parameter is defined through $q=-1-\\dot{H}/H^{2}$ .", "The initial condition $a_{0}=1$ is also used.", "This solution has a constraint that arises from Eqs.", "(REF ) and (REF ) that reads $\\left(\\alpha -\\gamma \\beta \\right)-1<q_{0}<\\left(\\alpha +\\gamma \\beta \\right)-1.", "$ Since the value of $q_{0}$ will be taken from the observed data, we will check if the above constraints are fulfilled for the values determined for the parameters $\\xi _{0}$ , $\\epsilon $ and $\\gamma $ after constraining with the SNe Ia data.", "ii) In the case of $\\gamma =1$ , the solution of Eq.", "(REF ) can be written as $H(z)=H_{0}\\left[C_{1}\\left(1+z\\right)^{m_{1}}+C_{2}\\left(1+z\\right)^{m_{2}}\\right], $ where $H_{0}$ is the Hubble
parameter at the present time, and $m_{1}=\frac{\sqrt{3}}{2\xi _{0}}\left(\sqrt{3}\xi _{0}+\epsilon +\sqrt{6\xi _{0}^{2}\epsilon +\epsilon ^{2}}\right), $ $m_{2}=\frac{\sqrt{3}}{2\xi _{0}}\left(\sqrt{3}\xi _{0}+\epsilon -\sqrt{6\xi _{0}^{2}\epsilon +\epsilon ^{2}}\right), $ $C_{1}=\frac{\left(q_{0}+1\right)-m_{2}}{m_{1}-m_{2}}, $ $C_{2}=\frac{m_{1}-\left(q_{0}+1\right)}{m_{1}-m_{2}}.", "$ In the above equations $q_{0}$ is the deceleration parameter at the present time, and the conditions $a_{0}=1$ and $C_{1}+C_{2}=1$ have been set.", "This solution was previously found and discussed in [58], but with a particular relation for the relaxation time of the form $\xi _{0}\rho ^{s-1}$ (which corresponds to $\alpha =\xi _{0}$ of our Ansatz), instead of the more general relation of Eq.", "(REF ), which was used to obtain Eq.", "(REF ) in [43].", "Moreover, this solution has three different behaviors depending on the signs of the constants $C_{1}$ and $C_{2}$ .", "Thus, for fitting purposes, we limit our analysis to the solution that is most similar to the $\Lambda $ CDM model, which corresponds to the Hubble parameter fulfilling the constraint $m_{2}-1<q_{0}<m_{1}-1, $ which leads to an always positive Hubble parameter." 
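The two solutions above can be transcribed directly from the quoted expressions. The following Python sketch (an illustrative transcription, not the authors' code; the parameter values in the calls are arbitrary trial values satisfying the quoted constraints) evaluates $H(z)$ for both cases and recovers $H(0)=H_{0}$ by construction:

```python
import math

def hubble_gamma_ne_1(z, H0, q0, xi0, eps, gamma):
    """H(z) for gamma != 1: C3*(1+z)^alpha * cosh^gamma[beta*(ln(1+z)+C4)],
    with alpha, beta, C3, C4 as defined in the text."""
    s3 = math.sqrt(3.0)
    alpha = (s3 * gamma / (2.0 * xi0)) * (s3 * xi0 + eps * gamma * (2.0 - gamma))
    beta = (s3 / (2.0 * xi0)) * math.sqrt(
        6.0 * xi0**2 * eps * (2.0 - gamma) + eps**2 * gamma**2 * (2.0 - gamma)**2)
    # The constraint (alpha - gamma*beta) - 1 < q0 < (alpha + gamma*beta) - 1
    # keeps the arctanh argument inside (-1, 1).
    C4 = math.atanh(((q0 + 1.0) - alpha) / (gamma * beta)) / beta
    C3 = H0 / math.cosh(beta * C4)**gamma
    return C3 * (1.0 + z)**alpha * math.cosh(beta * (math.log(1.0 + z) + C4))**gamma

def hubble_gamma_1(z, H0, q0, xi0, eps):
    """H(z) for gamma = 1: H0*[C1*(1+z)^m1 + C2*(1+z)^m2] with C1 + C2 = 1."""
    s3 = math.sqrt(3.0)
    root = math.sqrt(6.0 * xi0**2 * eps + eps**2)
    m1 = (s3 / (2.0 * xi0)) * (s3 * xi0 + eps + root)
    m2 = (s3 / (2.0 * xi0)) * (s3 * xi0 + eps - root)
    C1 = ((q0 + 1.0) - m2) / (m1 - m2)
    C2 = (m1 - (q0 + 1.0)) / (m1 - m2)
    return H0 * (C1 * (1.0 + z)**m1 + C2 * (1.0 + z)**m2)

# Both solutions reduce to H0 = 70.0 at z = 0 (up to floating-point rounding).
print(hubble_gamma_ne_1(0.0, 70.0, -0.6, 5.0, 0.5, 1.2))
print(hubble_gamma_1(0.0, 70.0, -0.6, 1.0, 0.5))
```

These functions give the theoretical $H(z)$ that enters the merit functions of the next section.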
], [ "Constraining the solutions with Supernova Ia and Observational Hubble data sets", "We shall constrain the free parameters of the solutions presented in the above section with the Supernova Ia data (SNe Ia) and the Observational Hubble Data (OHD).", "To do so, we compute the best-fit parameters with the affine-invariant Markov Chain Monte Carlo Method (MCMC) [62], implemented in the pure-Python code emcee [63], by setting 50 chains or “walkers" with 10000 steps and 10000 burn-in steps; the latter in order to let the walkers explore the parameter space and settle in the maximum of the probability density.", "As a likelihood function we use the following Gaussian distribution $\mathcal {L}\propto \exp {\left(-\frac{\chi _{I}^{2}}{2}\right)}, $ where $\chi _{I}^{2}$ is the merit function with $I$ representing each data set, namely SNe Ia, OHD and their joint analysis $\chi _{joint}^{2}=\chi _{SNe}^{2}+\chi _{OHD}^{2}$ .", "Therefore, to impose the constraint, we use the Pantheon SNe Ia sample [64] and the compilation of OHD provided by Magaña et al. 
[65].", "The first sample consists of 1048 data points in the redshift range $0.01\le z\le 2.3$ , which is a compilation of 279 SNe Ia discovered by the Pan-STARRS1 (PS1) Medium Deep Survey, combined with the distance estimates of SNe Ia from the Sloan Digital Sky Survey (SDSS), Supernova Legacy Survey (SNLS), and various low-z and Hubble Space Telescope (HST) samples, where the distance estimator is obtained using a modified version of the Tripp formula [66] with two nuisance parameters calibrated to zero with the method “BEAMS with Bias Correction” (BBC) proposed by Kessler and Scolnic [67].", "Hence, the observational distance modulus for each SNe Ia at a certain redshift $z_{i}$ is given by the formula $\mu _{i} = m_{B,i}-\mathcal {M} ,$ where $m_{B,i}$ is the apparent B-band magnitude of a fiducial SNe Ia and $\mathcal {M}$ is a nuisance parameter.", "Considering that the Pantheon sample gives directly the corrected apparent magnitude for each SNe Ia, the merit function for the SNe Ia data sample can be constructed in matrix notation as $\chi _{SNe}^{2}=\mathbf {M}^{\dagger }\mathbf {C^{-1}}\mathbf {M} ,$ where $\mathbf {M}=\mathbf {m}_{B}-\mu _{th}-\mathcal {M}$ and $\mathbf {C}$ is the total covariance matrix, given by $\mathbf {C}=\mathbf {D}_{stat}+\mathbf {C}_{sys} ,$ where the diagonal matrix $\mathbf {D}_{stat}$ denotes the statistical uncertainties obtained by adding in quadrature the uncertainties of $m_{B}$ for each redshift, associated with the BBC method, while $\mathbf {C}_{sys}$ denotes the systematic uncertainties in the BBC approach.", "On the other hand, the theoretical distance modulus for each SNe Ia at a certain redshift $z_{i}$ in a flat FLRW spacetime for a given model is defined through the relation $\mu _{th}\left(z_{i},\theta \right)=5\log _{10}{\left[\frac{d_{L}\left(z_{i},\theta \right)}{Mpc}\right]}+\bar{\mu }, $ where $\theta $ encompasses the free parameters of the respective 
solution, $\\bar{\\mu }=5\\log _{10}{(c)}+25$ with $c$ the speed of light and $d_{L}$ is the luminosity distance which takes the form $d_{L}(z_{i},\\theta )=(1+z_{i})\\int _{0}^{z_{i}}{\\frac{dz^{\\prime }}{H(z^{\\prime },\\theta )}} .$ In order to reduce the number of free parameters and marginalized over the nuisance parameter $\\mathcal {M}$ , we define $\\bar{\\mathcal {M}}=\\bar{\\mu }+\\mathcal {M}$ and the merit function (REF ) can be expanded as [68] $\\chi ^{2}_{SNe}=A(z,\\theta )-2B(z,\\theta )\\bar{\\mathcal {M}}+C(z,\\theta )\\bar{\\mathcal {M}}^{2} ,$ where $A(z,\\theta )=\\mathbf {M}(z,\\theta ,\\bar{\\mathcal {M}}=0)^{\\dagger }\\mathbf {C^{-1}}\\mathbf {M}(z,\\theta ,\\bar{\\mathcal {M}}=0),$ $B(z,\\theta )=\\mathbf {M}(z,\\theta ,\\bar{\\mathcal {M}}=0)^{\\dagger }\\mathbf {C^{-1}}\\mathbf {1},$ $C(z,\\theta )=\\mathbf {1}^{\\dagger }\\mathbf {C^{-1}}\\mathbf {1}.$ Hence, minimizing the expression (REF ) whit respect to $\\bar{\\mathcal {M}}$ gives $\\bar{\\mathcal {M}}=B/C$ and the merit function reduces to $\\chi _{SNe}^{2}\\Bigr \\vert _{min}=A(z,\\theta )-\\frac{B(z,\\theta )^{2}}{C} ,$ that clearly only depends on the free parameters of the respective solution.", "It is important to note that the merit function given by (REF ) provides the same information as the function given by (REF ); this is because the best fit parameters minimize the merit function.", "Then, $\\chi ^{2}_{min}$ gives an indication of the goodness of fit: the smaller its value, the better is the fit.", "In the second one, the sample consist in 51 data points in the redshift range $0.07\\le z\\le 2.36$ , where 31 data points are obtained by the Differential Age (DA) method [69] which implies that these data points are model independent.", "The remaining 20 points come from BAO measurements, assuming that the $H(z)$ data obtained come from independent measurements.", "Hence, the merit function for the OHD data sample can be constructed as $\\chi ^{2}_{OHD}=\\sum 
_{i=1}^{51}{\\left[\\frac{H_{i}-H_{th}(z_{i},\\theta )}{\\sigma _{H,i}}\\right]^{2}} ,$ where $H_{i}$ is the observational Hubble parameter at redshift $z_{i}$ with associated error $\\sigma _{H,i}$ , provided by the OHD sample considered, $H_{th}$ is the theoretical Hubble parameter at the same redshift, provided by the solutions, and $\\theta $ encompasses the free parameters of the respective solution.", "The two cosmological solutions are contrasted with the SNe Ia and OHD data through their corresponding Hubble parameter (REF ) and (REF ).", "Because the solutions correspond to only matter as a dominant component, we have to impose $\\Omega _{m}=1$ .", "So, for the solution with $\\gamma \\ne 1$ their free parameters are $\\theta =\\lbrace H_{0},\\xi _{0},\\epsilon ,\\gamma \\rbrace $ and the solution with $\\gamma = 1$ are $\\theta =\\lbrace H_{0},\\xi _{0},\\epsilon \\rbrace $ .", "Even more, dimensionless parameters for the fit are required, where $\\epsilon $ and $\\gamma $ are already dimensionless.", "So, we replace $H_{0}$ for the dimensionless Hubble parameter $h$ , where $H_{0}=100\\;km\\;s^{-1}\\;Mpc^{-1}\\times h ,$ and a dimensionless $\\xi _{0}$ required the following redefinition $\\xi _{0}\\rightarrow H_{0}^{1-2s}\\xi _{0}, $ where, considering that the solutions are obtained for $s=1/2$ , then $\\xi _{0}$ it is in particular also dimensionless.", "In consequence, for $h$ we use a Gaussian prior according to the value reporting by A. G. 
Riess et al.", "in [70] of $H_{0}=73.24\pm 1.74 \;km\;s^{-1}\;Mpc^{-1}$ , which is measured with a $2.4\%$ uncertainty, for $\epsilon $ and $\gamma $ we use the flat priors $0<\epsilon <1$ and $1<\gamma <2$ , and for $\xi _{0}$ we make the change of variable $\xi _{0}=\xi _{0}(x)=\tan {(x)}$ , for which we use the flat prior $0<x<\pi /2$ ; the latter in order to simplify the sampling of the full parameter space used in the MCMC analysis.", "It is important to mention that in both cases we use the present value of the deceleration parameter, $q_{0}=-0.6$ , as an initial condition [4].", "In the solution with $\gamma \ne 1$ we need to use as a prior the restriction given by Eq.", "(REF ), in order to avoid a complex Hubble parameter during the fit; and in the solution with $\gamma =1$ we need to use as a prior the restriction given by Eq.", "(REF ), in order to obtain a positive Hubble parameter.", "Moreover, the $a$ parameter in the emcee code is modified in order to obtain a mean acceptance fraction between $0.2$ and $0.5$  [63]." 
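A minimal sketch of the log-posterior sampled by the MCMC could read as follows. The data points and the power-law stand-in model below are invented for illustration (the actual analysis uses emcee with the Pantheon and OHD samples and the exact solutions), but the priors are the ones just described: Gaussian on $h$, flat on $\epsilon$, $\gamma$ and $x$, with $\xi_{0}=\tan(x)$:

```python
import math

def log_posterior(params, z_data, H_data, sigma_data, hubble_model):
    """Log of prior * likelihood for the gamma != 1 solution (OHD part).
    params = (h, x, eps, gamma), with xi0 = tan(x) and H0 = 100*h.
    hubble_model(z, H0, xi0, eps, gamma) returns the theoretical H(z)."""
    h, x, eps, gamma = params
    # Flat priors: 0 < x < pi/2, 0 < eps < 1, 1 < gamma < 2.
    if not (0.0 < x < math.pi / 2 and 0.0 < eps < 1.0 and 1.0 < gamma < 2.0):
        return -math.inf
    # Gaussian prior on h from the local measurement H0 = 73.24 +/- 1.74.
    log_prior = -0.5 * ((100.0 * h - 73.24) / 1.74) ** 2
    xi0 = math.tan(x)
    # OHD merit function: chi2 = sum_i [(H_i - H_th(z_i)) / sigma_i]^2.
    chi2 = sum(((Hi - hubble_model(zi, 100.0 * h, xi0, eps, gamma)) / si) ** 2
               for zi, Hi, si in zip(z_data, H_data, sigma_data))
    return log_prior - 0.5 * chi2

# Toy check with a power-law stand-in for the exact solution (illustrative):
toy_model = lambda z, H0, xi0, eps, gamma: H0 * (1.0 + z) ** 1.5
z, H, s = [0.1, 0.5, 1.0], [75.0, 135.0, 210.0], [5.0, 8.0, 12.0]
lp = log_posterior((0.7324, 0.5, 0.5, 1.5), z, H, s, toy_model)
print(math.isfinite(lp))  # True: the parameters lie inside the prior support
```

In the actual analysis this function would be passed to `emcee.EnsembleSampler` with 50 walkers, and the SNe Ia term $\chi^{2}_{SNe}\vert_{min}=A-B^{2}/C$ would be added for the joint fit.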
], [ "Results and discussion", "Both solutions will be compared with the standard cosmological model $\Lambda $ CDM, whose respective Hubble parameter as a function of the redshift is given by $H_{\Lambda CDM}(z)=H_{0}\sqrt{1-\Omega _{m}+\Omega _{m}\left(1+z\right)^{3}}, $ with the respective free parameters $\theta =\lbrace h,\Omega _{m}\rbrace $ , where for $\Omega _{m}$ we use the flat prior $0<\Omega _{m}<1$ , and for $h$ we use the same Gaussian prior as for the exact cosmological solutions.", "Moreover, in order to compare the goodness of the fits statistically, we will use the Bayesian Information Criterion (BIC) [71], defined as $BIC=\theta _{N}\ln {\left(n\right)}-2\ln {\left(\mathcal {L}_{max}\right)}, $ where $\mathcal {L}_{max}$ is the maximum value of the likelihood function, calculated for the best-fit parameters, $\theta _{N}$ is the total number of free parameters of the model and $n$ is the total number of points of the respective data sample.", "This criterion addresses the problem that the likelihood can always be increased by adding free parameters, which results in over-fitting.", "To do so, the criterion introduces a penalization that depends on both the total number of free parameters of each model and the total number of observational data points.", "The model statistically favored by observations, as compared to the others, corresponds to the one with the smallest value of BIC, where a difference of $2-6$ in BIC between two models is considered as evidence against the model with the higher BIC, a difference of $6-10$ in BIC is already strong evidence, and a difference $>10$ in BIC represents very strong evidence.", "Table: Best-fit values for the free parameter $\xi _{0}$ obtained from the best-fit values of $x$ , indicated in Table REF , and the relation $\xi _{0}=\tan {(x)}$ .The best-fit values for the $\Lambda $ CDM model and the exact cosmological solutions, as well as the goodness-of-fit criteria, are shown in Table REF 
.", "In Figs. REF -REF we depict their respective joint credible regions for combinations of their respective free parameters.", "From them, we are able to conclude that: 1.- The $\Lambda $ CDM model has the lowest values of $\chi ^{2}_{min}$ and BIC, i. e., it is the model most favored by the observations.", "Focusing on the values of $\chi ^{2}_{min}$ , the solution with $\gamma =1$ is as suited to describe the SNe Ia data as the $\Lambda $ CDM model is, with a difference in $\chi ^{2}_{min}$ smaller than $0.3$ .", "However, from the joint credible region plots it is possible to see that the SNe Ia data constrain the free parameters less than the OHD data and the joint data analysis do.", "On the other hand, focusing on the BIC criterion, the smallest BIC difference occurs between the $\Lambda $ CDM model and the solution with $\gamma =1$ , this difference already reaching the value 7.3 for the SNe Ia data.", "This has the consequence that the other solutions for $\gamma \ne 1$ are disfavored by the data.", "Moreover, the observations favor models where the recent accelerated expansion of the Universe is due to DE, instead of models where the acceleration is due to the dissipative effects experienced by the DM.", "Even more, the exact cosmological solution with $\gamma =1$ has lower values of $\chi ^{2}_{min}$ and BIC than the solutions with $\gamma \ne 1$ , i. 
e., the observations favor the solutions where a CDM is considered.", "2.- The main issue of the solutions arises from the best-fit values obtained for $\epsilon $ , which are clearly inconsistent with the range $10^{-11}\ll \epsilon \lesssim 10^{-8}$ reported in [59] as required for consistency with the properties of structure formation.", "3.- In order to fulfill the condition $\tau H\ll 1$ , given by Eq.", "(REF ), it is necessary, in the best scenario, that $\xi _{0}\ll 2\sqrt{3}$ .", "From the values of $\xi _{0}$ shown in Table REF , it is possible to see that the SNe Ia data give, for both solutions and for the lower interval, a value close to $\sqrt{3}$ , the OHD data a value close to $2\sqrt{3}$ , and the joint data analysis values clearly greater than $2\sqrt{3}$ .", "Therefore, the condition $\tau H\ll 1$ is not fulfilled by the exact cosmological solution in either case; nevertheless, there is the possibility that the fluid condition can be fulfilled in some regime, with improved data fits, or under new considerations when studying the proposed cosmological model.", "So, this claim is not conclusive.", "4.- In natural units $\xi _{0}$ is a dimensionless parameter.", "In terms of physical units, it has no viscosity units due to the form in which it was defined.", "Nevertheless, it is possible to evaluate the dissipative pressure, for example, at the present time, in order to get an estimation of the size of the values involved.", "For the present time we obtained that $\Pi \approx 10^{-20}\,Pa$ , which is a very low pressure in comparison, for example, with the values obtained in Eckart's framework (see [72]).", "5.- The possible explanation for the principal drawbacks presented by the exact cosmological solutions for $\gamma \ne 1$ and $\gamma =1$ could be related to the particular choice of the bulk viscosity coefficient (see Eq.", "(REF )), which is in this case proportional to the 
square root of the DM density and is responsible for the recent accelerated expansion of the universe in this model.", "Because $\rho \rightarrow \infty $ when $z\rightarrow \infty $ , and $\rho \rightarrow 0$ when $z\rightarrow -1$ , the bulk viscosity becomes relevant in the past and negligible in the future, which is when the Universe experiences the accelerated expansion.", "Therefore, in order for the bulk viscosity to become relevant at present and future times, it is necessary to increase the value of $\xi _{0}$ , which inevitably prevents fulfilling the near-equilibrium condition $\tau H\ll 1$ ; alternatively, a rise of the $\epsilon $ value would be required.", "This fact can be observed in Figs.", "REF and REF (most clearly in Fig.", "REF ), where for lower values of $\xi _{0}$ larger $\epsilon $ values are obtained, and vice versa.", "It is worth mentioning that, because $\epsilon $ cannot be larger than one, $\xi _{0}$ has a non-zero lower bound; and for this minimum value, $\xi _{0}\rightarrow \infty $ ." 
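The BIC-based model comparison used in this discussion can be illustrated with a short sketch. The $\chi^{2}_{min}$ numbers below are invented for illustration and are not the values of Table REF; for a Gaussian likelihood, $-2\ln\mathcal{L}_{max}$ equals $\chi^{2}_{min}$ up to a model-independent constant that cancels in BIC differences:

```python
import math

def bic(chi2_min, n_params, n_data):
    """BIC = theta_N * ln(n) - 2*ln(L_max), with -2*ln(L_max) ~ chi2_min."""
    return n_params * math.log(n_data) + chi2_min

def evidence_strength(delta_bic):
    """Qualitative reading of a BIC difference between two models."""
    if delta_bic > 10:
        return "very strong"
    if delta_bic > 6:
        return "strong"
    if delta_bic > 2:
        return "positive"
    return "inconclusive"

# Illustrative comparison for n = 1048 SNe Ia points: LCDM (2 free
# parameters) vs the gamma = 1 solution (3 free parameters) with nearly
# equal chi2_min, so the penalty term drives the difference.
bic_lcdm = bic(1026.9, 2, 1048)
bic_gamma1 = bic(1027.2, 3, 1048)
print(evidence_strength(bic_gamma1 - bic_lcdm))  # prints "strong"
```

With nearly equal fits, the extra free parameter alone contributes $\ln(1048)\approx 7$ to the BIC difference, which is why even the $\gamma=1$ solution ends up disfavored against $\Lambda$CDM.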
], [ "Conclusions", "We have tested a cosmological model described by an analytical solution recently found in [43] for $s=1/2$ and for arbitrary $\gamma $ , including the particular case when $\gamma =1$ , by constraining it against Supernovae Ia and Observational Hubble Data.", "The solution gives the time evolution of the Hubble parameter in the framework of the full causal thermodynamics of Israel-Stewart.", "This solution was obtained considering a bulk viscous coefficient with the dependence $\xi =\xi _{0}\rho ^{1/2}$ , the general expression given by Eq.", "(REF ) for the relaxation time, and a fluid with a barotropic EoS $p=\left(\gamma -1\right)\rho $ .", "The results of the constraints still indicate that the $\Lambda CDM$ model is statistically the model most favored by the observations.", "The lesson that we have learned here is that unified DM models succeed in displaying the transition between decelerated and accelerated expansions, which is an essential feature supported by the observational data, without invoking a cosmological constant or some other form of dark energy.", "Nevertheless, as it was found in [59], only a very small value of $\epsilon $ is consistent with structure formation, while the numerical value we found from the best fit to the data leads to inconsistencies with the values required at the perturbative level.", "It is relevant to mention that the exact solution constrained in this paper displays naturally, for some parameter values, the transition between decelerated and accelerated expansions.", "Other solutions found in the literature within the IS framework and for the same choice of the bulk viscosity coefficient (see for instance [73]), which are described by the power-law behavior $H\left( t\right) =A\left( t_{s}-t\right) ^{-1}$ , do not display the same natural transition.", "In fact, depending on the parameter $A$ , they represent monotonically accelerated or decelerated solutions.", "For the case of 
an accelerated expansion a large non-adiabatic contribution to the speed of sound is required.", "These two investigated solutions have in common the choice of a bulk viscosity coefficient which grows with the energy density of DM.", "This Ansatz has been made due to the simplicity the master equation acquires within the IS formalism.", "Nevertheless, from the physical point of view, this choice implies that the negative dissipative pressure grows with the redshift, while the inverse behavior leads to an accelerated late-time expansion.", "The above results indicate that in the framework of the causal thermodynamics theory of dissipative fluids, accelerated solutions can in fact be obtained; nevertheless, the non-adiabatic contribution to the speed of sound happens to be large, in contradiction with the conclusions of the perturbation analysis.", "This result can be inferred when the general expression for the relaxation time $\tau $ , given by Eq.", "(REF ), is used.", "In some previous results, like the one displayed in  [58], where $\epsilon $ is set equal to one from the beginning, the consequences of this drawback were not properly acknowledged.", "This result is also consistent with the mathematical condition found in [43], where the exact solution displays an accelerated expansion only if $\epsilon >1/18$ .", "Moreover, we have shown that the values of the parameters found from the data constraints lead to an inconsistency of the fluid description of the dissipative dark matter component.", "In fact, the best-fit parameters indicate that the required condition $\tau H\ll 1$ cannot be fulfilled by the solution.", "This result is consistent with the basic assumption in the thermodynamic approaches of relativistic viscous fluids, which asserts that the viscous stress should be lower than the equilibrium pressure of the fluid.", "This is the so-called near equilibrium condition.", "When the negative pressure comes only from the dissipation, the above 
condition is not fulfilled.", "The condition $\tau H\ll 1$ means that the particles of the fluid have an interaction rate that allows them to maintain thermal equilibrium, adjusting more rapidly than the natural time-scale defined by the expansion time $H^{-1}$  [42].", "Therefore, it is expected that the condition $\tau H\ll 1$ will not be fulfilled by the parameters of an exact solution when it describes accelerated expansions.", "Nevertheless, as it was previously mentioned, this feature does not rule out the possibility that this condition be fulfilled in some other region of the solution.", "Extensions of the IS approach, which consider non-linear effects, allow deviations from equilibrium.", "This could represent a possible solution to the technical difficulties just mentioned above, and to some extent one such scenario has been explored in  [60] for phantom-type solutions.", "We expect to go further and extend the analytic solution including this non-linear generalization elsewhere.", "We thank Arturo Avelino for useful discussions.", "This article was partially supported by Dicyt from Universidad de Santiago de Chile, through Grants $N^{\circ }$ $041831PA$ (G.P.)", "and $N^{\circ }$ $041831CM$ (N.C.).", "E.G.", "was supported by Proyecto POSTDOC_DICYT, código 041931CM_POSTDOC, Universidad de Santiago de Chile and partially supported by CONICYT-PCHA/Doctorado Nacional/2016-21160331." ] ]
1906.04570
[ [ "Bag of Color Features For Color Constancy" ], [ "Abstract In this paper, we propose a novel color constancy approach, called Bag of Color Features (BoCF), building upon Bag-of-Features pooling.", "The proposed method substantially reduces the number of parameters needed for illumination estimation.", "At the same time, the proposed method is consistent with the color constancy assumption stating that global spatial information is not relevant for illumination estimation and local information (edges, etc.)", "is sufficient.", "Furthermore, BoCF is consistent with color constancy statistical approaches and can be interpreted as a learning-based generalization of many statistical approaches.", "To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model with two variants based on self-attention.", "The BoCF approach and its variants achieve results competitive with the state of the art while requiring far fewer parameters on three benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8." 
], [ "Introduction", "Color constancy in general is the ability of an imaging system to discount the effects of illumination on the observed colors in a scene [1], [2].", "When a person stands in a room lit by a colorful light, the human visual system (HVS) unconsciously removes the lighting effects and the colors are perceived as if they were illuminated by a neutral, white light.", "While this ability is very natural for the HVS, mimicking the same ability in a computer vision system is a challenging and under-constrained problem.", "Given a green pixel, one cannot assert whether it is a green pixel under a white illumination or a white pixel lit with a greenish illumination.", "Nonetheless, illumination estimation is considered an important component of many higher-level computer vision tasks such as object recognition and tracking.", "Thus, it has been extensively studied in order to develop reliable color constancy systems which can achieve illumination invariance to some extent [1], [3].", "The RGB image value $ \rho (x,y)$ in the position $(x,y)$ of an image can be expressed as a function depending on three key factors [3]: the illuminant distribution $ I(x,y, \lambda )$ , the surface reflectance $R(x,y,\lambda )$ and the camera sensitivity $S(\lambda )$ , where $\lambda $ is the wavelength.", "This dependency is expressed as $\rho (x,y) = \int _\lambda I(x,y, \lambda ) R(x,y,\lambda ) S(\lambda ) d \lambda .$ Color constancy methods [3], [4] aim to estimate a uniform projection of $I( x,y,\lambda )$ on the sensor spectral sensitivities $S(\lambda )$ , i.e., $I = I(x,y) = \int _\lambda I(x,y, \lambda ) S(\lambda ) d \lambda ,$ where $I$ is the global illumination of the scene.", "Figure: Building blocks of the BoCF approach for illumination estimationRecently, deep learning approaches, and convolutional neural networks (CNNs) in particular, have become dominant in almost all computer vision tasks, including color constancy [5], [6], [7], [8], due to their ability to take raw images directly as input 
and incorporate feature extraction in the learning process [9].", "Despite their accuracy in estimating illumination across multiple datasets [10], [11], [6], deploying CNN-based approaches on low computational power devices, e.g., mobile devices, is still limited, since most of the high-accuracy deep models are computationally expensive [6], [7], [8], which makes them inefficient in terms of time and energy consumption.", "Additionally, most of the available datasets for illumination estimation are rather small-scale [12], [13], [10] and hence not suitable for training large models.", "For this reason, many state-of-the-art approaches [5], [6] rely on pre-trained networks to overcome this limitation.", "On the other hand, these pre-trained networks [14], [9] are originally trained for a classification task.", "Thus, they are usually agnostic to the illumination color.", "This makes their usage in color constancy counter-intuitive, as the illumination information is distorted in the early pre-trained layers.", "An alternative approach is of course to reduce the number of model parameters in order to use existing datasets, as shallower models, in general, need fewer examples to learn.", "Furthermore, in [13], [15] it is argued that global spatial information is not an important feature in color constancy.", "The local information, i.e., the color distribution and the color gradient distribution (i.e., edges), can be sufficient to extract the illumination information [13].", "Thus, using regular neural network configurations to extract deep features is counter-intuitive in this particular problem.", "To address these drawbacks and challenges, we propose in this paper a novel color constancy deep learning approach called BoCF.", "BoCF uses Bag-of-Features Pooling [16], which takes advantage of the ability of CNNs to learn relevant shallow features while keeping the model suitable for low-power hardware.", "Furthermore, the proposed approach is consistent with the 
assumption that global spatial information is not relevant [13], [15] for color illumination estimation.", "Bag-of-Features Pooling is a neural extension [17], [16] of the famous Bag-of-Features model (BoF), also known as Bag-of-Visual-Words (BoVW) [18], [19].", "BoFs are widely used in computer vision tasks, such as action recognition [20], object detection/recognition, sequence classification [21], and information retrieval [22].", "A BoF layer can be combined with convolutional layers to form a powerful convolutional architecture that is end-to-end trainable using the regular back-propagation algorithm [17].", "The block diagram of the proposed BoCF model is illustrated in Figure REF .", "It consists of three main blocks: a feature extraction block, a Bag of Features block, and an estimation block.", "In the first block, regular convolutional layers are used to extract relevant features.", "Inspired by the assumption that second-order gradient information is sufficient to extract the illumination information [13], we use only two convolutional layers to extract the features.", "In our experiments, we also study and validate this hypothesis empirically.", "In the second block, i.e., the Bag of Features block, the network learns the dictionary using back-propagation [17] over the non-linear transformation provided by the first block.", "This block outputs a histogram representation, which is fed to the last component, i.e., the estimation block, to regress to the scene illumination.", "In most CNN-based approaches used to solve the color constancy problem [5], [6], [7], [8], fully connected layers are connected directly to a flattened version of the last convolutional layer output.", "This increases the number of parameters dramatically, as convolutional layer outputs usually have a high dimensionality.", "In the proposed method, we address this problem by introducing an intermediate pooling block, i.e., the Bag of Features block, between the last convolutional layer 
and the fully connected layers.", "The proposed model achieves results comparable to previous state-of-the-art illumination estimation methods while substantially reducing the number of needed parameters, by up to 95%.", "Additionally, the pooling process natively discards all global spatial information, which is, as discussed earlier, irrelevant for color constancy.", "Using only two convolutional layers in the first block limits the model to only shallow features.", "These two advantages make the proposed approach both consistent and in full corroboration with statistical approaches [13].", "To further improve the performance of the proposed model, we also propose two variants of a self-attention mechanism for the BoCF model.", "In the first variant, we add an attention mechanism between the feature extraction block and the Bag of Features block.", "This mechanism allows the network to dynamically select parts of the image to use for estimating the illumination, while discarding the remaining parts.", "Thus, the network becomes robust to noise and irrelevant features.", "In the second variant, we add an attention mechanism on top of the histogram representation, i.e., between the Bag of Features block and the estimation block.", "In this way, we allow the network to learn to adaptively select the elements of the histogram which best encode the illuminant information.", "The model looks over the whole histogram after the spatial information has been discarded and generates a proper representation according to the current context (histogram).", "The introduced dynamics will be shown in the experiments to enhance the model performance with respect to all evaluation metrics and across all the datasets.", "The main contributions of the paper are as follows: We propose a novel CNN-based color constancy algorithm, called BoCF, based on Bag-of-Features Pooling.", "The proposed model is both shallow and able to achieve competitive results across multiple datasets compared to 
the state of the art.", "We establish explicit links between BoCF and prior statistical methods for illumination estimation and show that the proposed method can be framed as a learning-based generalization of many statistical approaches.", "This powerful approach fills the gap and provides the missing links between CNN-based approaches and static approaches.", "We propose two novel attention mechanisms for BoCF that can further improve the results.", "To the best of our knowledge, this is the first work which combines an attention mechanism with Bag-of-Features Pooling.", "The proposed method is extensively evaluated on three datasets, leading to competitive performance with respect to the existing state of the art, while substantially reducing the number of parameters.", "The rest of this paper is organized as follows.", "Section provides the background of color constancy approaches and a brief review of the Bag-of-Features Pooling technique and the attention mechanism used in this work.", "Section details the proposed approach along with the two attention-mechanism-based variants.", "Section introduces the datasets and the evaluation metrics used in this work along with the evaluation procedure.", "Section presents the experimental results on three datasets: ColorChecker RECommended [12], NUS8 dataset [13], and INTEL-TUT version 2 [10].", "In Section , we highlight the links between our approach and many existing methods and show how our approach can be considered a generic framework for expressing existing approaches.", "Section concludes the paper.", "Typically, two types of color constancy approaches are distinguished, namely static methods and supervised methods.", "The former involves methods with static parameter settings that do not need any labeled image data for learning the model, while the latter are data-driven approaches that learn to estimate the illuminant in a supervised manner using labeled data.", "Static methods exploit the statistical or 
physical properties of a scene by making assumptions about the nature of colors.", "They can be classified into two categories: methods based on low-level statistics [23], [24], [25], [26] and methods based on the physics-based dichromatic reflection model [4], [27], [15], [28].", "A number of approaches belonging to the first category were unified by Van de Weijer et al.", "[25] into a single framework expressed as follows: $\rho ^{gt}(n,p, \sigma ) = \frac{1}{k} (\int _x \int _y | \bigtriangledown ^n \rho _{\sigma }(x,y) |^p dxdy)^{ \frac{1}{p}},$ where $n$ denotes the derivative order, $p$ the Minkowski norm and $k$ the normalization constant for $\rho ^{gt}$ .", "Also, $\rho _{\sigma }(x,y) = \rho (x,y) * g_{\sigma }(x,y) $ denotes the image convolution with a Gaussian filter with a scale parameter $\sigma $ .", "This framework allows for deriving different algorithms simply by setting the appropriate values for $n$ , $p$ and $\sigma $ .", "The well-known Gray-World method [24], corresponding to $(n = 0,p =1, \sigma = 0)$ , assumes that under a neutral illumination the average reflectance in a scene is achromatic and the illumination is estimated as the shift of the image average color from gray.", "White-Patch [23] $(n = 0,p = \infty , \sigma = 0)$ , assumes that the maximum values of RGB color channels are caused by a perfectly reflecting surface in the scene.", "Therefore, the illumination components correspond to these maximum values.", "Besides the Gray-World and White-Patch methods, which make use of the color distribution in the scene to build their estimations, the Gray-Edge method [25] utilizes image derivatives.", "Instead of the global average color, Gray-Edge methods $(n = 1,p = p, \sigma =\sigma )$ assume that the average color of edges or the gradient of edges is gray.", "The illuminant's color is then estimated as the shift of the average edge color from gray.", "Physics-based dichromatic reflection models estimate the illumination by 
analyzing the scene and exploiting the physical interactions between the objects and the illumination.", "The main assumption of most methods in this category is that all pixels of a surface form a plane in RGB color space.", "As the scene contains multiple surfaces, this results in multiple planes.", "The intersection between these planes is used to compute the color of the light source [27].", "Lee et al.", "[15] exploited the bright areas in the captured scene to obtain an estimate of the illuminant color.", "In this work, we establish links between our proposed approach, BoCF, and several static methods.", "We show that BoCF can be interpreted as a learning-based extension of several of these approaches." ], [ "Supervised methods", "Supervised methods can be further divided into two main categories: characterization-based methods [29], [30] and training-based methods[31], [32], [5], [6].", "The former involves ’light’ training processes in order to learn the characterization of the camera response in some way, while the latter involves methods that try to learn the illumination directly from the scene.", "Gamut Mapping [29], [30] is one of the most famous characterization-based approaches.", "It assumes that for a given illumination condition, only a limited number of colors can be observed.", "Thus any unexpected variation in the observed colors is caused by the light source illuminant.", "The set of colors that can occur under a given illumination, called canonical gamut, is first learned in a supervised manner.", "In the evaluation, an input gamut which represents the set of colors used to acquire the scene is constructed.", "The illumination is then estimated by mapping this input gamut to the canonical gamut.", "Another group of training-based methods combines different illumination estimation approaches and learns a model that uses the best performing method or a combination of methods to estimate the illuminant of each input based on the scene 
characteristics [31].", "Bianco et al.", "used indoor/outdoor classification to select the optimal color constancy algorithm given an input image[32].", "Lu et al.", "proposed an approach which exploits 3D scene information for estimating the color of a light source [33].", "However, these methods tend to overfit and fail to generalize to all scene types.", "The first attempt to use CNN for solving the illuminant estimation problem was established by Bianco et al.", "[5], where they adopted a CNN architecture operating on small local patches to overcome the data shortage.", "In the testing phase, a map of local estimates is pooled to obtain one global illuminant estimate using median or mean pooling.", "Hu et al.", "[6] introduced a pooling layer, namely confidence-weighted pooling.", "In their fully convolutional network, they incorporate learning the confidence of each patch of the image in an end-to-end learning process.", "Patches in an image can carry different confidence weights according to their estimated accuracy in predicting the illumination.", "Shi et al.", "[7] proposed a network with two interacting sub-networks to estimate the illumination.", "One sub-network, called the hypothesis network, is used to generate multiple plausible illuminant estimations depending on the patches in the scene.", "The second sub-network, called the selection network, is trained to select the best estimate generated by the first sub-network.", "Inspired by the success of GAN in image to image translation[34], Das et al.", "formulated the illumination estimation task as an image-to-image translation task [35] and used a GAN to solve it.", "However, these CNN-based methods suffer from certain weaknesses: computational complexity and disconnection with both the illumination assumption[13] and the prior static methods, e.g., Grey-World [24] and White-Patch [23].", "This paper attempts to cure these drawbacks by proposing a novel CNN approach, BoCF, which discards the global 
spatial information in agreement with [13] and [25], and is competitive with the training-based methods while using only 5% of the parameters.", "Passalis and Tefas proposed a Bag-of-Features Pooling (BoFP) layer [16], [17], which is a neural extension of the Bag-of-Features model (BoF).", "The BoFP layer can be combined with convolutional layers to form a powerful architecture which can be trained end-to-end using the regular back-propagation algorithm [17], [36].", "In this work, we use this pooling technique to learn the codebook of color features, hence the name Bag of Color Features (BoCF).", "This pooling discards all the global spatial information and outputs a fixed-length histogram representation.", "This allows us to reduce the large number of parameters usually needed when linking convolutional layers to fully connected layers.", "Furthermore, discarding global spatial information forces the network to learn to extract the illumination without global spatial inference, thus improving model robustness and adhering to the illumination assumption [13].", "As an additional novel feature compared to the prior works using Bag-of-Features Pooling [17], [36], we propose introducing an attention mechanism to enable the model to discard noise and focus only on relevant parts of the input representation.", "To the best of our knowledge, this is the first work which combines attention mechanisms with Bag-of-Features Pooling."
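The unified static framework discussed above can be made concrete with a short sketch. The following NumPy code handles only the $n=0$, $\sigma = 0$ instances of the framework (Grey-World for $p=1$, Shades-of-Grey for a general $p$, White-Patch for $p=\infty $); the toy image and the reddish illuminant are illustrative assumptions, not data from the paper:

```python
import numpy as np

def grey_framework(img, n=0, p=1, sigma=0):
    """Hedged sketch of the unified static framework e(n, p, sigma).

    img: H x W x 3 float array. Only n = 0 (no derivative) and
    sigma = 0 (no Gaussian smoothing) are handled here, which covers
    Grey-World (p = 1), Shades-of-Grey (general p), and White-Patch
    (p = infinity).
    """
    flat = img.reshape(-1, 3)
    if np.isinf(p):                       # White-Patch: per-channel maximum
        e = flat.max(axis=0)
    else:                                 # Minkowski p-norm per channel
        e = (np.abs(flat) ** p).mean(axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)          # normalize to a unit illuminant

# Toy scene: grey-ish surfaces lit by a reddish illuminant.
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(8, 8, 3)) * np.array([1.5, 1.0, 1.0])
est_gw = grey_framework(img, p=1)         # Grey-World estimate
est_wp = grey_framework(img, p=np.inf)    # White-Patch estimate
```

Both estimates recover a red-dominant illuminant for this toy scene; the derivative-based instances (Grey-Edge) would additionally require the $\nabla ^n$ and Gaussian-smoothing steps.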
], [ "Attention mechanisms", "Attention mechanisms were introduced in NLP [37] for sequence-to-sequence (seq2seq) models in order to tackle the problem of short-term memory faced in machine translators.", "They allow a machine translator to see the full information contained in the original input and then generate the proper translation for the current word.", "More specifically, they allow the model to focus on local or global features, as needed.", "Self-attention [38], also known as intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the same sequence.", "In other words, the attention mask is computed directly from the original sequence.", "This idea has been exported to many other problems in NLP and computer vision such as machine reading [39], text summarization [40], [41], and image description generation [42].", "In [42], self-attention is applied to an image to enable the network to generate an attention mask and focus on the region of interest in the original image.", "Attention in deep learning can be broadly interpreted as a mask of importance weights.", "In order to evaluate the importance of a single element, such as a pixel or a feature in general, for the final inference, one can form an attention vector by estimating how strongly the element is correlated with the other elements and use this attention vector as a mask when evaluating the final output [42].", "Let $\\textbf {x} = [x_1 , ..., x_n ] \\in \\mathbb {R}^n$ be a vector.", "The goal of a self-attention mechanism is to learn to generate a mask vector $\\textbf {v} \\in \\mathbb {R}^n $ depending only on x, which encodes the importance weights of the elements of x.", "Let $f$ be a mapping function between x and v. 
The dependency can be expressed as follows: $\\textbf {v} = f(\\textbf {x}) = [v_1, ..., v_n],$ under the constraint: $\\sum _{i=1}^n v_i = 1.$ After computing the mask vector v, the final output of the attention layer y is computed as follows: $\\textbf {y} = [y_1, ..., y_n] = [x_1v_1, ..., x_nv_n].$ The concept of attention, i.e., focusing on particular regions to extract the illumination information in color constancy, can be traced back to many statistical approaches.", "For example, White-Patch reduces this region to the pixel with the highest RGB values.", "Other methods, such as [15], focus on the bright areas in the captured scene, called specular highlights.", "Instead of making such a strong assumption on the relevant regions, in BoCF we allow the model to learn to extract these regions dynamically.", "To the best of our knowledge, this is the first work which uses attention mechanisms in the color constancy problem.", "In order to reduce the number of parameters needed to learn the illumination [6], [7], we propose a novel color constancy approach based on the Bag-of-Features Pooling [17], called herein the BoCF approach.", "The proposed approach along with the novel attention variants is illustrated in Figure REF .", "The proposed model has three main blocks, namely the feature extraction, the Bag of Features, and the illumination estimation blocks.", "In the first block, a nonlinear transformation of a raw image is obtained.", "In the second block, a histogram representation of this transform is compiled.", "This histogram is used in the third block to estimate the illumination.", "Figure: Proposed approach (basic, no attention) along with attention variants (Attention1 and Attention2)."
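The generic self-attention masking described above (a mask $\textbf {v} = f(\textbf {x})$ summing to one, applied element-wise to $\textbf {x}$) can be sketched as follows; the vector size and the fully connected parameters $W$ and $b$ are illustrative assumptions:

```python
import numpy as np

def softmax(a):
    a = a - a.max()                 # shift for numerical stability
    e = np.exp(a)
    return e / e.sum()

def self_attention(x, W, b):
    """Minimal sketch of the masking scheme above: v = f(x), y = x * v.

    The mapping f is a single fully connected layer with a softmax
    activation; W and b are hypothetical (randomly drawn) parameters.
    """
    v = softmax(W @ x + b)          # importance weights, summing to 1
    y = x * v                       # element-wise masking of the input
    return y, v

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
W = rng.standard_normal((5, 5))
b = np.zeros(5)
y, v = self_attention(x, W, b)
```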
], [ "Feature extraction", "The feature extraction algorithm takes a raw image as input and outputs a nonlinear transformation representing the image features.", "A CNN is used in this block.", "CNNs are known for their ability to extract relevant features directly from raw images.", "Technically, any CNN architecture can be used in this block.", "However, we observed in our experiments that only two convolutional layers, each followed by a downsampling layer, e.g., max-pooling, yield satisfactory results.", "This is in accordance with the assumption of statistical methods that second-order information is enough to estimate the illumination [13], [25].", "After a raw image is fed to the feature extraction block, the output of the last convolutional layer is used to extract feature vectors that are subsequently fed to the next block.", "The number of extracted feature vectors depends on the size of the feature map and the used filter size as described in [17]." ], [ "Bag of Features ", "The Bag of Features is essentially the codebook (dictionary) learning component.", "The output features of the previous block are pooled using the Bag-of-Features Pooling and mapped to a final histogram representation.", "During training, the network optimizes the codebook using the traditional back-propagation.", "The output of this block is a histogram of a fixed size, i.e., the size of the codebook, which is a hyper-parameter that needs to be carefully tuned to avoid over-fitting.", "This approach discards all global spatial information.", "As described in [17], the Bag-of-Features Pooling is composed of two sub-layers: an RBF layer that measures the similarity of the input features to the RBF centers and an accumulation layer that builds the histogram of the quantized feature vectors.", "The normalized output of each RBF neuron can be expressed as $[\\Phi (\\textbf {x})]_k = \\frac{exp(- || \\textbf {x} - \\textbf {v}_k || / \\rho _k )}{ \\sum _m exp(- || \\textbf {x} - \\textbf {v}_m || / \\rho _m ) } ,$ where x is a feature vector, $v_k$ is the center of the k-th RBF neuron, exp is the exponential function, and $\\rho _k$ is a scaling factor.", "The output of the RBF neurons is accumulated in the next layer, compiling the final representation of each image: $\\textbf {S} = \\frac{1}{ N } \\sum _j \\Phi (\\textbf {x}_j) ,$ where N is the number of feature vectors extracted from the last convolutional layer for the image." ], [ "Illumination Estimation", "The Bag of Features layer receives a transformation of the raw image and compiles its histogram representation.", "Then, this histogram is fed to a regressor that estimates the illumination.", "In this work, a multi-layer perceptron with one hidden layer is used for this purpose, although any other estimator with a differentiable loss function can be used.", "Let $\\textbf {x} \\in \\mathbb {R}^n $ be the histogram compiled by the second block.", "The intermediate layer output $\\textbf {h} \\in \\mathbb {R}^m $ can be computed as follows $\\textbf {h} = \\varphi (\\textbf {W}_1\\textbf {x} +\\textbf {b}_1) ,$ where $\\textbf {W}_1 \\in \\mathbb {R}^{n\\times m}$ is the weight matrix, $b_1 \\in \\mathbb {R}^{m} $ is the bias vector, and $\\varphi $ is the Rectified Linear Units (ReLU) activation function [43].", "The final estimate $\\textbf {I} \\in \\mathbb {R}^{3} $ is computed as follows $\\textbf {I} = \\phi (\\textbf {W}_2\\textbf {h} +\\textbf {b}_2) ,$ where $\\textbf {W}_2 \\in \\mathbb {R}^{m\\times 3}$ is the weight matrix, $b_2 \\in \\mathbb {R}^{3} $ is the bias vector, and $\\phi $ is the softmax activation function defined by $\\phi (a_i) = \\frac{\\exp (a_i)}{ \\sum _j \\exp (a_j)} ,$" ], [ "Attention mechanism for BoCF", "We introduce a novel attention mechanism in the BoCF model to enable the algorithm to dynamically learn to focus on a specific region of interest in order to yield a confident output.", "We combine self-attention, described in Section REF , with the Bag-of-Features Pooling
for the color constancy problem.", "We propose two variants of this mechanism which can be applied in our model.", "For the mapping function f in (Eq.", "4), we use a fully connected layer with softmax activation.", "In the first variant, we propose to apply attention on the nonlinear transformation of the image after the feature extraction block.", "This enables the model to learn to ’attend’ to the region of interest in the mapping and to reduce noise before pooling.", "By applying attention at this stage, the number of parameters increases considerably, since one mask weight is needed for every feature.", "In the second variant, we propose to apply the attention mechanism on the histogram representation of the BoCF, i.e., after the global spatial information is discarded.", "This enables the model to dynamically learn to ’attend’ to the relevant parts of the histogram which encode the illuminant information.", "In this variant, the attention mask size is equal to the size of the histogram.", "Thus, the number of additional parameters is relatively small.", "Following the notations of (4) and (5), $\\textbf {x} \\in \\mathbb {R}^n $ is the histogram representation and $\\textbf {v} \\in \\mathbb {R}^n $ is the attention mask, obtained via the fully connected layer as follows: $\\textbf {v} = \\phi (\\textbf {W}\\textbf {x} +\\textbf {b}) ,$ where $\\textbf {W} \\in \\mathbb {R}^{n\\times n}$ is a weight matrix and $\\textbf {b} \\in \\mathbb {R}^{n} $ is the bias vector.", "Using softmax as $\\phi $ ensures that the masking constraint defined in (Eq.", "5) is not violated.", "Finally, y, the final output of the attention mechanism, is computed using the following equation $\\textbf {y} = \\lambda ( \\textbf {v} \\odot \\textbf {x} ) + (1 - \\lambda ) \\textbf {x},$ where $\\odot $ is the element-wise product operator and $\\lambda \\in \\mathbb {R}$ is a weighting parameter between the masked histogram $\\textbf {v} \\odot \\textbf {x} $ and the original histogram $\\textbf {x}$ .", "$\\lambda $ is a learnable parameter in our model.", "Not using $\\lambda $ and outputting only the masked histogram is another option.", "However, we determined experimentally that outputting the weighted sum of both the original and the masked version is more robust and stable for gradient-based optimizers, since it is less susceptible to the random initialization of the attention weights.", "Parameter $\\lambda $ can be optimized using gradient descent in the back-propagation process along with the rest of the parameters.", "Its gradient with respect to the output of the attention block can be obtained via the following equation $\\frac{\\partial \\textbf {y}}{\\partial \\lambda } = \\textbf {v} \\odot \\textbf {x} - \\textbf {x}.$ In this section, we present the experimental setup used in this work.", "In Subsection REF , we introduce the datasets used to test our models.", "In Subsection REF , we report the network architectures of the three blocks used in BoCF.", "In Subsection REF , we detail the evaluation process followed in our experiments.", "Finally, the evaluation metrics used are briefly described in Subsection REF ." ], [ "ColorChecker RECommended dataset", "ColorChecker RECommended dataset [12] is a publicly available updated version of the Gehler-Shi dataset [11]http://www.cs.sfu.ca/ colour/data/shi_gehler/ with a proposed (recommended) ground truth to use for evaluation.", "This dataset contains 568 high-quality indoor and outdoor images acquired by two cameras: Canon 1D and Canon 5D.", "Similar to the works in [5], [6], [7], [8], for the ColorChecker RECommended dataset, we used three-fold cross-validation to evaluate our algorithms."
], [ "NUS-8 Camera Dataset", "NUS-8 is a publicly available datasethttp://cvil.eecs.yorku.ca/projects/public_html/illuminant/illuminant.html, containing 1736 raw images from eight different camera models.", "Each camera has about 210 images.", "Following previous works [13], [6], we perform tests on each camera separately and report the mean of all the results for each evaluation metric.", "As a result, although the total number of images in the NUS-8 dataset is large, each experiment involves using only 210 images for both training and testing." ], [ "INTEL-TUT2", "INTEL-TUT2http://urn.fi/urn:nbn:fi:csc-kata20170901151004490662 is the second version of the publicly available INTEL-TUT dataset [10].", "The main particularity of this dataset is that it contains a large number of images taken by several cameras from different scenes.", "We use this dataset for an extreme testing protocol, the third protocol described in [10].", "The models are trained with images acquired by one camera and containing one type of scene and tested on the other cameras and the other scenes.", "This extreme test is useful to show the robustness of a given model and its ability to generalize across different cameras and scenes.", "INTEL-TUT2 contains images acquired with three different cameras, namely Canon, Nikon, and Mobile.", "For each camera, the images are divided into four sets: field (144 images per camera), lab printouts (300 images per camera), lab real scenes (4 images per camera), and field2.", "The last set, field2, concerns only the Canon camera and has a total of 692 images.", "Figure REF shows some samples from the field, lab printouts, and lab real scenes sets of the three cameras, while Figure REF displays samples from field2 related to the Canon camera.", "We used only the Canon field2 set for training and validation (80% for training and 20% for validation).", "We constructed two test sets.", "The first one, called field in this work, contains all the field images taken by the other camera
models, i.e., Nikon and Mobile.", "The second set, called non-field in this work, contains all the non-field images acquired by Nikon and Mobile.", "Comparing the performance on these two sets allowed us to test both the scene and camera invariance of the model.", "As we are using different camera models in the same experiments, the variation of camera spectral sensitivity needs to be discounted.", "For this purpose, we use Color Conversion Matrix (CCM) based preprocessing [44] to learn the $3\\times 3$ CCM matrices for each camera pair." ], [ "Network architectures", "The BoCF network is composed of three blocks: the feature extraction, the Bag of Features, where the Bag-of-Features Pooling is applied, and the illumination estimation blocks as described in Section .", "The feature extraction block consists of convolution layers followed by max-pooling operators.", "We experiment with two different depths: two and three convolutional layers.", "Thirty convolution filters of size $4 \\times 4$ are used in each layer.", "Max-pooling with a window size of 2 is applied after each layer.", "For the codebook size, i.e., the number of RBF neurons in the Bag of Features block, we experiment with three different values: 50, 150, and 200.", "The illumination estimation block consists of two fully connected layers: the first (hidden) layer has a size of 40 and takes the histogram representation as input, and the second (output) layer has a size of 3 to output the illumination."
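Under the topology just described (30-dimensional convolutional features, a codebook of 150 RBF centers, a 40-unit hidden layer), a forward pass through the Bag of Features and illumination estimation blocks can be sketched as follows; all weights here are random stand-ins for learned parameters:

```python
import numpy as np

def bof_histogram(features, centers, rho):
    # RBF memberships [Phi(x)]_k, normalized per feature vector, then
    # averaged over the N feature vectors to give the histogram S.
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    logits = -d / rho
    phi = np.exp(logits - logits.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)
    return phi.mean(axis=0)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def estimate_illuminant(S, W1, b1, W2, b2):
    # Hidden ReLU layer followed by a 3-way softmax output.
    h = np.maximum(0.0, W1 @ S + b1)
    return softmax(W2 @ h + b2)

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 30))     # N = 100 feature vectors
centers = rng.standard_normal((150, 30))   # codebook of 150 centers
S = bof_histogram(feats, centers, rho=np.ones(150))
W1, b1 = rng.standard_normal((40, 150)), np.zeros(40)
W2, b2 = rng.standard_normal((3, 40)), np.zeros(3)
I = estimate_illuminant(S, W1, b1, W2, b2)
```

The histogram S is non-negative and sums to one by construction, and the softmax output I is a normalized three-channel illuminant estimate.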
], [ "Evaluation procedure", "To evaluate the proposed approach, we used two sets of experiments.", "In the first set, we evaluate different variants of the model to study the effect of the hyper-parameters and validate the effectiveness of each component in our model by conducting ablation studies.", "For this purpose, we used the ColorChecker RECommended dataset.", "In the second set of experiments, we compared our approach with current state-of-the-art approaches on the three datasets.", "For all testing scenarios, we augmented the datasets using the following process: as the size of the original raw images is large, we first randomly cropped $512\\times 512$ patches of each image.", "This ensured getting meaningful patches.", "The crops were then rotated by a random angle between $-30^{\\circ }$ and $+30^{\\circ }$ .", "Finally, we rescaled the RGB values of each patch and its corresponding ground truth by a random factor in the range of [0.8, 1.2].", "Before feeding the sample to the network, we down-sampled it to $227\\times 227$ .", "In testing, the images are resized to $227\\times 227$ to fit the network model.", "Our network was implemented in Keras [45] with TensorFlow backend [46].", "We trained our network end-to-end by back-propagation.", "For optimization, Adam [47] was employed with a batch size of 15 and a learning rate of $3 \\times 10^{-4}$ .", "The model was trained on image patches of size $227\\times 227$ for 3000 epochs.", "The centers of the dictionary were initialized using the k-means algorithm as described in [17].", "The parameter $\\lambda $ , discussed in Section REF , was initialized to 0.5."
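The augmentation pipeline above can be sketched as follows; the random rotation step is omitted for brevity, the nearest-neighbor downsampling is a crude stand-in for proper resizing, and the image size and illuminant values are illustrative assumptions:

```python
import numpy as np

def augment(img, gt, rng):
    """Hedged sketch of the augmentation described above.

    img: H x W x 3 raw image, gt: length-3 ground-truth illuminant.
    The random rotation is omitted here (an image library such as
    scipy.ndimage.rotate would normally perform it).
    """
    h, w, _ = img.shape
    top = rng.integers(0, h - 512 + 1)
    left = rng.integers(0, w - 512 + 1)
    patch = img[top:top + 512, left:left + 512]   # random 512x512 crop
    s = rng.uniform(0.8, 1.2)                     # random intensity rescale,
    patch, gt = patch * s, gt * s                 # applied to patch and label
    idx = np.arange(227) * 512 // 227             # nearest-neighbor downsample
    patch = patch[idx][:, idx]                    # to 227 x 227
    return patch, gt

rng = np.random.default_rng(4)
img = rng.random((600, 800, 3))
patch, gt = augment(img, np.array([0.6, 0.5, 0.4]), rng)
```

Note that the intensity rescale is applied to the ground-truth illuminant as well, so that the label stays consistent with the augmented patch.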
], [ "Evaluation metrics", "We report the mean of the top 25%, the mean, the median, Tukey's trimean, and the mean of the worst 25% of the RAE [48] between the ground truth illuminant and the estimated illuminant, defined as $\\text{{RAE}}(\\rho ^{gt},\\rho ^{Est})= \\cos ^{-1} ({ \\frac{ \\rho ^{gt} \\cdot \\rho ^{Est}}{\\Vert \\rho ^{gt} \\Vert \\Vert \\rho ^{Est} \\Vert } }),$ where $\\rho ^{gt}$ is the ground truth illumination for a given image and $\\rho ^{Est}$ is the estimated illumination." ], [ "Experimental results", "In this section, we provide the experimental evaluation of the proposed method and its variants.", "In Subsection REF , different topologies for the three blocks of BoCF are evaluated on the ColorChecker RECommended dataset and the effect of each block in our model is examined by reporting the results of the ablation studies.", "In Subsection REF , we compare the performance of the proposed models with different state-of-the-art algorithms over the three datasets." ], [ " BoCF performance evaluation", "We first evaluated the accuracy of the different variants of BoCF on the ColorChecker RECommended dataset.", "Table REF presents the comparative results for BoCF using different topologies in the three blocks.", "We evaluate the model using different dictionary sizes in the second block (codewords), different numbers of convolution layers in the first block, and with/without attention.", "Table REF shows that the dictionary size in the Bag-of-Features Pooling block significantly affects the overall performance of the model.", "Using a larger codebook results in a higher risk of overfitting to the training data, while using a smaller codebook restricts the model to only a few codebook centers, which can decrease the overall performance of the model.", "Thus, the choice of this hyperparameter is critical for our model.", "The findings in Table REF confirm this effect and highlight the importance of this hyperparameter.", "By comparing the model
performance using different dictionary sizes, we can see that a dictionary of size 150 yields the best compromise between the number of parameters and the overall performance.", "Using three convolutional layers instead of two in the first block yields slightly better median errors and worse trimean errors.", "However, to keep the model as shallow as possible, we opt for two convolutional layers.", "Table REF shows that models equipped with an attention mechanism perform better than models without attention almost consistently across all error metrics.", "This is expected, as attention mechanisms allow the model to focus only on relevant parts; as a result, the model becomes more robust to noise and to inadequate features.", "The performance boost obtained by both attention variants is more pronounced in terms of the median and trimean errors compared to the non-attention variant.", "By comparing the performance achieved by the two attention variants, we note that the first attention variant yields better performance in terms of the worst 25% error rate, while the second variant yields better median and trimean error rates.", "It should also be remembered that the first variant applies attention over the feature map output of the first convolutional block.", "Thus, it dramatically increases the number of model parameters (over 20 times) compared to the second variant (doubling the number of parameters), which applies the attention over the histogram.", "Figure REF presents a visualization of the attention weights [49] for both attention variants.", "The heat maps demonstrate which regions of the image each model pays attention to so as to output a certain illumination.", "We note a large difference between the two attention variants.", "The first attention variant tends to focus on regions with dense edges and sharp shapes, while the second model focuses on uniform regions to estimate the illumination.", "Figure: Attention mask visualization for three samples from
INTEL-TUT2 dataset.", "The first column contains the input image.", "The second one illustrates the attention mask generated by the first attention variant overlaid on the input image.", "The last column contains the attention masks generated by the second variant of the attention overlaid on the input image.", "Gamma correction was applied for visualization." ], [ "Ablation studies", "To examine the effect of each block in our proposed approach, we conduct ablation studies on the ColorChecker RECommended dataset.", "Table REF reports the results of the basic BoCF approach, the results achieved by removing the feature extraction block, and the results obtained by removing the estimation block, i.e., replacing the fully connected layer in the estimation block with a simple regression.", "We note that removing any block significantly decreases the overall performance of our models.", "Comparing the model with and without the feature extraction block, we note a large drop in performance, especially in terms of the worst 25% error rates, i.e., a $1.8^{\\circ }$ drop compared to a $0.6^{\\circ }$ drop when the estimation block is removed.", "Table: Results of the ablation studies for the BoCF over the RECommended Color Checker Dataset.", "BoCF is the basic BoCF composed of the three blocks.", "In BoCF -1, the feature extraction block is removed, while in BoCF -2 the fully connected layer in the estimation block is substituted with a linear regression."
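The RAE defined earlier and the summary statistics reported in the tables (best-25% mean, mean, median, Tukey's trimean, worst-25% mean) can be computed as in this sketch:

```python
import numpy as np

def rae_degrees(gt, est):
    # Recovery angular error between ground-truth and estimated illuminants
    cos = np.dot(gt, est) / (np.linalg.norm(gt) * np.linalg.norm(est))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def summary(errors):
    # Statistics reported in the tables: best-25% mean, mean, median,
    # Tukey's trimean, and worst-25% mean of the per-image errors.
    e = np.sort(np.asarray(errors, dtype=float))
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    k = max(1, len(e) // 4)
    return {
        "best25": e[:k].mean(),
        "mean": e.mean(),
        "median": med,
        "trimean": (q1 + 2 * med + q3) / 4.0,
        "worst25": e[-k:].mean(),
    }

# A perfect (scale-invariant) estimate and a 45-degree error:
errs = [rae_degrees(np.array([1.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0])),
        rae_degrees(np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]))]
stats = summary(errs)
```

Because the RAE depends only on the angle between the two vectors, a uniform rescaling of the estimate leaves the error unchanged, which is why the first toy estimate scores zero.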
], [ "Comparisons against state-of-the-art", "We compare our BoCF approach with the state-of-the-art methods on the ColorChecker RECommended, NUS-8, and INTEL-TUT2 datasets, which have been widely adopted as benchmark datasets in the literature.", "Tables REF , REF , and REF provide quantitative results for the ColorChecker RECommended, NUS-8, and INTEL-TUT2 datasets, respectively.", "We provide results for the static methods Grey-World, White-Patch, Shades-of-Grey, and General Grey-World.", "The parameter values $n$ , $p$ , $\\sigma $ are set as described in [25].", "In addition, we compare against Pixel-based Gamut, Bright Pixels, Spatial Correlations, Bayesian Color Constancy [11], and six convolutional approaches: Deep Specialized Network for Illuminant Estimation (DS-Net) [7], Bianco CNN [5], Fast Fourier Color Constancy [50], Convolutional Color Constancy [51], Fully Convolutional Color Constancy With Confidence-Weighted Pooling (FC4) [6], and Color Constancy GANs (CC-GANs) [35].", "The results for the ColorChecker RECommended and NUS-8 datasets were taken from the related papers [35], [6].", "From the results for the ColorChecker RECommended and NUS-8 datasets in Tables REF and REF , we note that learning-based methods usually outperform statistics-based methods across all error metrics.", "This can be explained by the fact that statistical approaches rely on certain assumptions in their model.", "These assumptions can be violated in some test samples, which results in high error rates, especially in terms of the worst 25% errors.", "Table REF shows that the proposed method BoCF and its variants achieve competitive results on the ColorChecker RECommended dataset.", "The only models performing slightly better than BoCF are FC4(SqueezeNet) and DS-Net.", "By comparing the number of parameters required by each model given in Table REF , we see that BoCF achieves very competitive results, while using less than 1% of the parameters of FC4(SqueezeNet) and less than 0.1% of the parameters of
DS-Net.", "Compared to Bianco's CNN, we note that our model performs better across all error metrics except for the worst 25% error metric.", "Bianco's CNN operates on patches instead of the full image, which makes it more robust but, at the same time, increases its time complexity, as the network has to produce many local estimates before outputting the global one.", "Table: Number of parameters of different CNN-based approaches", "Results for the NUS-8 dataset are similar to their counterparts on ColorChecker RECommended, as illustrated in Table REF .", "Our models achieve comparable results with FC4 and overall better results compared to DS-Net across all error metrics.", "Bianco's CNN outperforms all the other CNN-based methods.", "As discussed earlier, this can likely be explained by the fact that Bianco's CNN operates on patches while BoCF and FC4 produce global estimates directly.", "Table REF reports the comparative results achieved on the INTEL-TUT2 dataset.", "We note that all the error rates are high, as this is an extreme testing scenario.", "The models are trained and validated using only one type of scene (the field2 set) acquired by one camera model (Canon) and then evaluated over different scene types and different camera models not seen during training, as described in Section REF .", "The proposed BoCF model achieves better overall performance compared to Bianco's CNN and C3AE methods and competitive results compared to FC4.", "By comparing the performance achieved by BoCF with and without attention, we note that both attention mechanisms proposed in this paper significantly boost the performance of our model for all datasets.", "It should also be mentioned that despite requiring far fewer parameters, the second variant of our attention model, where the attention is applied over the histogram representation, performs slightly better than the first variant, where the attention is applied over the feature extraction block.", "Table: Results of BoCF approach
and comparative methods on the RECommended Color Checker Dataset.", "Table: Results of BoCF approach and benchmark methods on the NUS-8 Dataset.", "Table: Results of BoCF approach and benchmark methods on INTEL-TUT2." ], [ "Discussion", "When comparing our approach to the competing methods, it must be pointed out that our approach can be linked to many previous static approaches.", "In Grey-World [24], one takes the average of the RGB channels of the image.", "In the proposed method, this corresponds to using the identity as a feature extractor and using equal weights in the estimation block.", "This way, all histogram bins contribute equally to the estimation.", "White-Patch [23] takes the max across the color channels, which corresponds to giving a high weight to the histogram bin with the highest intensity and giving zero weights to the rest.", "Grey-Edge and its variants [25] correspond to using the first- and second-order derivatives as a feature extractor.", "Thus, the BoCF approach can be interpreted as a learning-based generalization of these statistics-based approaches.", "Instead of using the image directly, we allow the model to learn a suitable non-linear transformation of the original image, through the feature extraction block, and instead of imposing a prior assumption on the contribution of each feature in the estimation, we allow the model to learn the mapping dynamically using the training data.", "It is interesting to note that the attention variants in our approach can be tightly linked to the confidence maps in FC4 [6].", "In FC4, confidence scores are assigned to each patch of the image and a final estimate is generated by a weighted sum of the scores and their corresponding local estimates.", "This way, the network learns to select which features contribute to the estimation and which parts should be discarded.", "Similarly, attention mechanisms learn to dynamically pay attention to the parts encoding the illumination information while discarding
the rest." ], [ "Conclusion", "In this paper, we proposed a novel color constancy method called BoCF, which is composed of three blocks.", "In the first block, called feature extraction, we employ convolutional layers to extract relevant features from the input image.", "In the second block, we apply Bag-of-Features Pooling to learn a codebook and output a histogram.", "The latter is fed into the last block, the estimation block, where the final illumination is estimated.", "This end-to-end model is evaluated and compared with prior works over three datasets: ColorChecker RECommended, NUS-8, and INTEL-TUT2.", "BoCF was able to achieve competitive results compared to state-of-the-art methods while reducing the number of parameters by up to 95%.", "In this paper, we also discussed links between the proposed method and statistics-based methods and we showed how the proposed approach can be interpreted as a supervised extension of these approaches and can act as a generic framework for expressing existing approaches as well as developing new powerful methods.", "In addition, we proposed combining the Bag-of-Features Pooling with two novel attention mechanisms.", "In the first variant, we apply attention over the nonlinear transform of the image after the feature extraction block.", "In the second extension, we apply attention over the histogram representation of the Bag-of-Features Pooling.", "These extensions are shown to improve the overall performance of our model.", "In future work, extensions of the proposed approach could include exploring regularization techniques to ensure diversity in the learned dictionary and improve the generalization capability of the model." 
], [ "Acknowledgment", "This work was supported by the NSF-Business Finland Center for Visual and Decision Informatics project (CVDI) project AMALIA, Dno 3333/31/2018, sponsored by Intel Finland.", "Firas Laakom is a doctoral student at Tampere University, Finland.", "He received his engineering degree from Tunisia Polytechnic School (TPS) in 2018.", "His research interests include deep learning, computer vision and computational intelligence.", "Nikolaos Passalis is a postdoctoral researcher at Tampere University, Finland.", "He has (co-)authored more than 45 journal and conference papers and contributed one chapter to one edited book.", "His research interests include machine learning, information retrieval and computational intelligence.", "Jenni Raitoharju is a postdoctoral research fellow in the Unit of Computing Sciences, Tampere University, Finland.", "She received her PhD in Information Technology at Tampere University of Technology in 2017.", "Her current projects deal with machine learning and pattern recognition in applications such as bio-monitoring, intelligent buildings, and autonomous boats.", "She has (co-)authored 11 journal papers and 20 conference papers.", "Jarno Nikkanen received his M.Sc.", "and Dr.Sc.Tech.", "degrees from Tampere University of Technology in 2001 and 2013, respectively, with subjects in Signal Processing and Software Systems.", "Jarno has 18 years of industry experience in digital imaging topics, starting at Nokia Corporation in 2000 where he developed and productized many digital camera algorithms, and moving to Intel Corporation in 2011 where he is currently working as Intel Principal Engineer and Imaging Technology Architect.", "Jarno holds international patents for over 20 digital camera related inventions.", "Anastasios Tefas is an Associate Professor at the Department of Informatics, Aristotle University of Thessaloniki.", "He has co-authored 100 journal papers, 215 papers in international conferences and contributed 8 chapters 
to edited books in his area of expertise.", "Over 4900 citations have been recorded for his publications and his H-index is 36 according to Google Scholar.", "His current research interests include computational intelligence, deep learning, digital signal and image analysis and retrieval and computer vision.", "Alexandros Iosifidis is an Associate Professor at Aarhus University, Denmark.", "He has contributed to more than ten R&D projects financed by EU, Greek, Finnish, and Danish funding agencies and companies.", "He has co-authored 53 articles in international journals and 78 papers in international conferences proposing novel Machine Learning techniques and their application in a variety of problems.", "Dr. Iosifidis is a Senior Member of IEEE and he served as an Officer of the Finnish IEEE Signal Processing-Circuits and Systems Chapter.", "Moncef Gabbouj received his MS and PhD degrees in EE from Purdue University in 1986 and 1989, respectively.", "Dr. Gabbouj is Professor of Signal Processing at the Department of Computing Sciences, Tampere University, Finland.", "His research interests include Big Data analytics, multimedia analysis, artificial intelligence, machine learning, pattern recognition, nonlinear signal processing, video processing and coding.", "Dr. Gabbouj is a Fellow of the IEEE and member of the Academia Europaea and the Finnish Academy of Science and Letters." ] ]
1906.04445
[ [ "Analyzing the Structure of Attention in a Transformer Language Model" ], [ "Abstract The Transformer is a fully attention-based alternative to recurrent networks that has achieved state-of-the-art results across a range of NLP tasks.", "In this paper, we analyze the structure of attention in a Transformer language model, the GPT-2 small pretrained model.", "We visualize attention for individual instances and analyze the interaction between attention and syntax over a large corpus.", "We find that attention targets different parts of speech at different layer depths within the model, and that attention aligns with dependency relations most strongly in the middle layers.", "We also find that the deepest layers of the model capture the most distant relationships.", "Finally, we extract exemplar sentences that reveal highly specific patterns targeted by particular attention heads." ], [ "Introduction", "Contextual word representations have recently been used to achieve state-of-the-art performance across a range of language understanding tasks [18], [19], [7].", "These representations are obtained by optimizing a language modeling (or similar) objective on large amounts of text.", "The underlying architecture may be recurrent, as in ELMo [18], or based on multi-head self-attention, as in OpenAI's GPT [19] and BERT [7], which are based on the Transformer [28].", "Recently, the GPT-2 model [20] outperformed other language models in a zero-shot setting, again based on self-attention.", "An advantage of using attention is that it can help interpret the model by showing how the model attends to different parts of the input [1], [3].", "Various tools have been developed to visualize attention in NLP models, ranging from attention matrix heatmaps [1], [23], [22] to bipartite graph representations [16], [14], [24].", "A visualization tool designed specifically for multi-head self-attention in the Transformer [28] was introduced in [13], [27].", "We extend the work of 
[13], by visualizing attention in the Transformer at three levels of granularity: the attention-head level, the model level, and the neuron level.", "We also adapt the original encoder-decoder implementation to the decoder-only GPT-2 model, as well as the encoder-only BERT model.", "In addition to visualizing attention for individual inputs to the model, we also analyze attention in aggregate over a large corpus to answer the following research questions: Does attention align with syntactic dependency relations?", "Which attention heads attend to which part-of-speech tags?", "How does attention capture long-distance relationships versus short-distance ones?", "We apply our analysis to the GPT-2 small pretrained model.", "We find that attention follows dependency relations most strongly in the middle layers of the model, and that attention heads target particular parts of speech depending on layer depth.", "We also find that attention spans the greatest distance in the deepest layers, but varies significantly between heads.", "Finally, our method for extracting exemplar sentences yields many intuitive patterns." ], [ "Related Work", "Recent work suggests that the Transformer implicitly encodes syntactic information such as dependency parse trees [10], [21], anaphora [30], and subject-verb pairings [9], [32].", "Other work has shown that RNNs also capture syntax, and that deeper layers in the model capture increasingly high-level constructs [4].", "In contrast to past work that measures a model's syntactic knowledge through linguistic probing tasks, we directly compare the model's attention patterns to syntactic constructs such as dependency relations and part-of-speech tags.", "[21] also evaluated dependency trees induced from attention weights in a Transformer, but in the context of encoder-decoder translation models."
], [ "Stacked Decoder:", "GPT-2 is a stacked decoder Transformer, which inputs a sequence of tokens and applies position and token embeddings followed by several decoder layers.", "Each layer applies multi-head self-attention (see below) in combination with a feedforward network, layer normalization, and residual connections.", "The GPT-2 small model has 12 layers and 12 heads." ], [ "Self-Attention:", "Given an input $x$ , the self-attention mechanism assigns to each token $x_i$ a set of attention weights over the tokens in the input: $\\text{Attn}(x_i) = (\\alpha _{i, 1}(x), \\alpha _{i, 2}(x), ..., \\alpha _{i, i}(x))$ where $\\alpha _{i, j}(x)$ is the attention that $x_i$ pays to $x_j$ .", "The weights are positive and sum to one.", "Attention in GPT-2 is right-to-left, so $\\alpha _{i, j}$ is defined only for $j \\le i$ .", "In the multi-layer, multi-head setting, $\\alpha $ is specific to a layer and head.", "The attention weights $\\alpha _{i, j}(x)$ are computed from the scaled dot-product of the query vector of $x_i$ and the key vector of $x_j$ , followed by a softmax operation.", "The attention weights are then used to produce a weighted sum of value vectors: $\\text{Attention}(Q, K, V) = \\text{softmax}(\\frac{QK^T}{\\sqrt{d_k}})V$ using query matrix $Q$ , key matrix $K$ , and value matrix $V$ , where $d_k$ is the dimension of $K$ .", "In a multi-head setting, the queries, keys, and values are linearly projected $h$ times, and the attention operation is performed in parallel for each representation, with the results concatenated." ], [ "Visualizing Individual Inputs", "In this section, we present three visualizations of attention in the Transformer model: the attention-head view, the model view, and the neuron view.", "Source code and Jupyter notebooks are available at https://github.com/jessevig/bertviz, and a video demonstration can be found at https://vimeo.com/339574955.", "A more detailed discussion of the tool is provided in [29]." 
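The causal (right-to-left) scaled dot-product attention described above can be sketched in a few lines of NumPy. This is an illustrative toy reconstruction for a single head with random query/key vectors, not the GPT-2 implementation; the function name and dimensions are our own:

```python
import numpy as np

def causal_attention_weights(Q, K):
    """Scaled dot-product attention with a causal mask, as in GPT-2:
    token i may only attend to positions j <= i."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                         # raw (n, n) scores
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)  # future positions j > i
    scores = np.where(mask, -np.inf, scores)                # block future tokens
    scores -= scores.max(axis=-1, keepdims=True)            # numerically stable softmax
    w = np.exp(scores)                                      # exp(-inf) -> 0
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, d_k = 5, 8
alpha = causal_attention_weights(rng.standard_normal((n, d_k)),
                                 rng.standard_normal((n, d_k)))
```

Each row of `alpha` is a valid distribution over positions `j <= i`, matching the constraint that the weights are positive, sum to one, and are defined only for `j <= i`.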
], [ "Attention-head View", "The attention-head view (Figure REF ) visualizes attention for one or more heads in a model layer.", "Self-attention is depicted as lines connecting the attending tokens (left) with the tokens being attended to (right).", "Colors identify the head(s), and line weight reflects the attention weight.", "This view closely follows the design of [13], but has been adapted to the GPT-2 model (shown in the figure) and BERT model (not shown).", "Figure: Attention-head view of GPT-2 for layer 4, head 11, which focuses attention on previous token.This view helps focus on the role of specific attention heads.", "For instance, in the shown example, the chosen attention head attends primarily to the previous token position.", "Figure: Model view of GPT-2, for same input as in Figure  (excludes layers 6–11 and heads 6–11).Figure: Neuron view for layer 8, head 6, which targets items in lists.", "Positive and negative values are colored blue and orange, respectively, and color saturation indicates magnitude.", "This view traces the computation of attention (Section ) from the selected token on the left to each of the tokens on the right.", "Connecting lines are weighted based on attention between the respective tokens.", "The arrows (not in visualization) identify the neurons that most noticeably contribute to this attention pattern: the lower arrows point to neurons that contribute to attention towards list items, while the upper arrow identifies a neuron that helps focus attention on the first token in the sequence." 
], [ "Model View", "The model view (Figure REF ) visualizes attention across all of the model's layers and heads for a particular input.", "Attention heads are presented in tabular form, with rows representing layers and columns representing heads.", "Each head is shown in a thumbnail form that conveys the coarse shape of the attention pattern, following the small multiples design pattern [26].", "Users may also click on any head to enlarge it and see the tokens.", "This view facilitates the detection of coarse-grained differences between heads.", "For example, several heads in layer 0 share a horizontal-stripe pattern, indicating that tokens attend to the current position.", "Other heads have a triangular pattern, showing that they attend to the first token.", "In the deeper layers, some heads display a small number of highly defined lines, indicating that they are targeting specific relationships between tokens." ], [ "Neuron View", "The neuron view (Figure REF ) visualizes how individual neurons interact to produce attention.", "This view displays the queries and keys for each token, and demonstrates how attention is computed from the scaled dot product of these vectors.", "The element-wise product shows how specific neurons influence the dot product and hence attention.", "Whereas the attention-head view and the model view show what attention patterns the model learns, the neuron view shows how the model forms these patterns.", "For example, it can help identify neurons responsible for specific attention patterns, as illustrated in Figure REF ." ], [ "Analyzing Attention in Aggregate", "In this section we explore the aggregate properties of attention across an entire corpus.", "We examine how attention interacts with syntax, and we compare long-distance versus short-distance relationships.", "We also extract exemplar sentences that reveal patterns targeted by each attention head." 
], [ "Part-of-Speech Tags", "Past work suggests that attention heads in the Transformer may specialize in particular linguistic phenomena [28], [21], [29].", "We explore whether individual attention heads in GPT-2 target particular parts of speech.", "Specifically, we measure the proportion of total attention from a given head that focuses on tokens with a given part-of-speech tag, aggregated over a corpus: $\\text{P}_{\\alpha }(tag) =\\dfrac{\\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i\\alpha _{i,j}(x) {\\cdot } {1}_{\\text{pos}(x_j) = tag}}{\\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i\\alpha _{i,j}(x)}$ where $tag$ is a part-of-speech tag, e.g., NOUN, $x$ is a sentence from the corpus $X$ , $\\alpha _{i, j}$ is the attention from $x_i$ to $x_j$ for the given head (see Section ), and $\\text{pos}(x_j)$ is the part-of-speech tag of $x_j$ .", "We also compute the share of attention directed from each part of speech in a similar fashion." 
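A direct, if naive, implementation of the $\text{P}_{\alpha }(tag)$ statistic might look as follows; the function name, data layout, and toy corpus are our own illustration, not part of the paper's released analysis code:

```python
import numpy as np

def attention_share_by_tag(sentences, tag):
    """Proportion of one head's total attention directed to tokens carrying
    `tag`, aggregated over a corpus. `sentences` is a list of (alpha, tags)
    pairs, where alpha[i, j] is the attention from token i to token j and
    tags[j] is the part-of-speech tag of token j."""
    num = den = 0.0
    for alpha, tags in sentences:
        for i in range(len(tags)):
            for j in range(i + 1):              # attention is causal: j <= i
                num += alpha[i, j] * (tags[j] == tag)
                den += alpha[i, j]
    return num / den

# toy corpus: one sentence, uniform attention over previous tokens
tags = ["DET", "NOUN", "VERB"]
alpha = np.array([[1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0],
                  [1/3, 1/3, 1/3]])
share = attention_share_by_tag([(alpha, tags)], "NOUN")  # (0.5 + 1/3) / 3
```

The share of attention directed *from* each part of speech is computed analogously, with the indicator applied to `tags[i]` instead of `tags[j]`.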
], [ "Dependency Relations", "Recent work shows that Transformers and recurrent models encode dependency relations [10], [21], [15].", "However, different models capture dependency relations at different layer depths.", "In a Transformer model, the middle layers were most predictive of dependencies [15], [25].", "Recurrent models were found to encode dependencies in lower layers for language models  [15] and in deeper layers for translation models [2].", "We analyze how attention aligns with dependency relations in GPT-2 by computing the proportion of attention that connects tokens that are also in a dependency relation with one another.", "We refer to this metric as dependency alignment: $\\text{DepAl}_{\\alpha } =\\dfrac{\\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i\\alpha _{i,j}(x) dep(x_i, x_j)}{\\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i\\alpha _{i,j}(x)}$ where $dep(x_i, x_j)$ is an indicator function that returns 1 if $x_i$ and $x_j$ are in a dependency relation and 0 otherwise.", "We run this analysis under three alternate formulations of dependency: (1) the attending token ($x_i$ ) is the parent in the dependency relation, (2) the token receiving attention ($x_j$ ) is the parent, and (3) either token is the parent.", "We hypothesized that heads that focus attention based on position—for example, the head in Figure REF that focuses on the previous token—would not align well with dependency relations, since they do not consider the content of the text.", "To distinguish between content-dependent and content-independent (position-based) heads, we define attention variability, which measures how attention varies over different inputs; high variability would suggest a content-dependent head, while low variability would indicate a content-independent head: $\\text{Variability}_{\\alpha } =\\dfrac{\\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i|\\alpha 
_{i,j}(x)-\\bar{\\alpha }_{i,j}|}{ 2\\cdot \\sum \\limits _{x \\in X} \\sum \\limits _{i=1}^{|x|}\\sum \\limits _{j=1}^i\\alpha _{i,j}(x)}$ where $\\bar{\\alpha }_{i,j}$ is the mean of $\\alpha _{i,j}(x)$ over all $x \\in X$ .", "$\\text{Variability}_\\alpha $ represents the mean absolute deviationWe considered using variance to measure attention variability; however, attention is sparse for many attention heads after filtering first-token attention (see Section REF ), resulting in a very low variance (due to $\\alpha _{i,j}(x) \\approx 0$ and $\\bar{\\alpha }_{i,j} \\approx 0$ ) for many content-sensitive attention heads.", "We did not use a probability distance measure, as attention values do not sum to one due to filtering first-token attention.", "of $\\alpha $ over $X$ , scaled to the $[0,1 ]$ interval.The upper bound is 1 because the denominator is an upper bound on the numerator.When computing variability, we only include the first $N$ tokens ($N$ =10) of each $x \\in X$ to ensure a sufficient amount of data at each position $i$ .", "The positional patterns appeared to be consistent across the entire sequence.", "Variability scores for three example attention heads are shown in Figure REF .", "Figure: Attention heads in GPT-2 visualized for an example input sentence, along with aggregate metrics computed from all sentences in the corpus.", "Note that the average sentence length in the corpus is 27.7 tokens.", "Left: Focuses attention primarily on current token position.", "Center: Disperses attention roughly evenly across all previous tokens.", "Right: Focuses on words in repeated phrases." 
], [ "Attention Distance", "Past work suggests that deeper layers in NLP models capture longer-distance relationships than lower layers [2], [21].", "We test this hypothesis on GPT-2 by measuring the mean distance (in number of tokens) spanned by attention for each head.", "Specifically, we compute the average distance between token pairs in all sentences in the corpus, weighted by the attention between the tokens: $\bar{D}_{\alpha } =\dfrac{\sum \limits _{x \in X} \sum \limits _{i=1}^{|x|}\sum \limits _{j=1}^i\alpha _{i,j}(x) \cdot (i - j)}{\sum \limits _{x \in X} \sum \limits _{i=1}^{|x|}\sum \limits _{j=1}^i\alpha _{i,j}(x)}$ We also explore whether heads with more dispersed attention patterns (Figure REF , center) tend to capture more distant relationships.", "We measure attention dispersion based on the entropy of the attention distribution [8]: $\text{Entropy}_{\alpha }(x_i) = - \sum _{j = 1}^{i} \alpha _{i,j}(x)\text{log}(\alpha _{i,j}(x))$", "When computing entropy, we exclude attention to the first (null) token (see Section REF ) and renormalize the remaining weights.", "We exclude tokens that focus over 90% of attention to the first token, to avoid a disproportionate influence from the remaining attention from these tokens.", "Figure REF shows the mean distance and entropy values for three example attention heads.", "Figure: Proportion of attention focused on first token, broken out by layer and head." 
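Both aggregate statistics follow directly from their definitions. The per-sentence helpers below (our own naming, with the corpus-level sums omitted for brevity) illustrate them on a previous-token head and on a uniform attention row:

```python
import numpy as np

def mean_attention_distance(alpha):
    """Average distance (i - j), in tokens, spanned by one head's attention
    for a single sentence, weighted by the attention values."""
    i, j = np.tril_indices(alpha.shape[0])     # causal pairs j <= i
    return float((alpha[i, j] * (i - j)).sum() / alpha[i, j].sum())

def attention_entropy(alpha_row):
    """Entropy of one token's attention distribution, with 0*log(0) := 0."""
    p = alpha_row[alpha_row > 0]
    return float(-(p * np.log(p)).sum())

# previous-token head: every token attends exactly one position back
n = 5
prev = np.eye(n, k=-1)
prev[0, 0] = 1.0
dist = mean_attention_distance(prev)           # (0 + 4 * 1) / 5 = 0.8

# maximally diffuse token: uniform attention over its n candidates
ent = attention_entropy(np.full(n, 1.0 / n))   # log(n)
```

A concentrated head (one attention spike per token) has entropy 0 per row, while uniform attention attains the maximum `log(n)`, matching the dispersion interpretation in the text.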
], [ "Dataset", "We focused our analysis on text from English Wikipedia, which was not included in the training set for GPT-2.", "We first extracted 10,000 articles, and then sampled 100,000 sentences from these articles.", "For the qualitative analysis described later, we used the full dataset; for the quantitative analysis, we used a subset of 10,000 sentences.", "Figure: Each heatmap shows the proportion of total attention that originates from the given part of speech, broken out by layer (vertical axis) and head (horizontal axis).", "Scales vary by tag.", "Results for all tags available in appendix." ], [ "Tools", "We computed attention weights using the pytorch-pretrained-BERT (https://github.com/huggingface/pytorch-pretrained-BERT) implementation of the GPT-2 small model.", "We extracted syntactic features using spaCy [11] and mapped the features from the spaCy-generated tokens to the corresponding tokens from the GPT-2 tokenizer.", "In cases where the GPT-2 tokenizer split a word into multiple pieces, we assigned the features to all word pieces." ], [ "Filtering Null Attention", "We excluded attention focused on the first token of each sentence from the analysis because it was not informative; other tokens appeared to focus on this token by default when no relevant tokens were found elsewhere in the sequence.", "On average, 57% of attention was directed to the first token.", "Some heads focused over 97% of attention to this token on average (Figure REF ), which is consistent with recent work showing that individual attention heads may have little impact on overall model performance [31], [17].", "We refer to the attention directed to the first token as null attention.", "Figure: Proportion of attention directed to various dependency types, broken out by layer.", "Figure: Attention variability by layer / head.", "High values indicate content-dependent heads, and low values indicate content-independent (position-based) heads." 
], [ "Part-of-Speech Tags", "Figure REF shows the share of attention directed to various part-of-speech tags (Eq.", "REF ) broken out by layer and head.", "Most tags are disproportionately targeted by one or more attention heads.", "For example, nouns receive 43% of attention in layer 9, head 0, compared to a mean of 21% over all heads.", "For 13 of 16 tags, a head exists with an attention share more than double the mean for the tag.", "The attention heads that focus on a particular tag tend to cluster by layer depth.", "For example, the top five heads targeting proper nouns are all in the last three layers of the model.", "This may be due to several attention heads in the deeper layers focusing on named entities (see Section REF ), which may require the broader context available in the deeper layers.", "In contrast, the top five heads targeting determiners—a lower-level construct—are all in the first four layers of the model.", "This is consistent with previous findings showing that deeper layers focus on higher-level properties [4], [2].", "Figure REF shows the proportion of attention directed from various parts of speech.", "The values appear to be roughly uniform in the initial layers of the model.", "The reason is that the heads in these layers pay little attention to the first (null) token (Figure REF ), and therefore the remaining (non-null) attention weights sum to a value close to one.", "Thus, the net weight for each token in the weighted sum (Section REF ) is close to one, and the proportion reduces to the frequency of the part of speech in the corpus.", "Beyond the initial layers, attention heads specialize in focusing attention from particular part-of-speech tags.", "However, the effect is less pronounced compared to the tags receiving attention; for 7 out of 16 tags, there is a head that focuses attention from that tag with a frequency more than double the tag average.", "Many of these specialized heads also cluster by layer.", "For example, the top 
ten heads for focusing attention from punctuation are all in the last six layers.", "Figure: Mean attention distance by layer / head (left), and by layer (right).", "Figure: Mean attention entropy by layer / head.", "Higher values indicate more diffuse attention." ], [ "Dependency Relations", "Figure REF shows the dependency alignment scores (Eq.", "REF ) broken out by layer.", "Attention aligns with dependency relations most strongly in the middle layers, consistent with recent syntactic probing analyses [15], [25].", "One possible explanation for the low alignment in the initial layers is that many heads in these layers focus attention based on position rather than content, according to the attention variability (Eq.", "REF ) results in Figure REF .", "Figure REF (left and center) shows two examples of position-focused heads from layer 0 that have relatively low dependency alignment (0.04 and 0.10, respectively); the first head focuses attention primarily on the current token position (which cannot be in a dependency relation with itself) and the second disperses attention roughly evenly, without regard to content.", "An interesting counterexample is layer 4, head 11 (Figure REF ), which has the highest dependency alignment out of all the heads ($\text{DepAl}_{\alpha } = 0.42$ , assuming the relation may be in either direction), but is also the most position-focused ($\text{Variability}_{\alpha } = 0.004$ ).", "This head focuses attention on the previous token, which in our corpus has a 42% chance of being in a dependency relation with the adjacent token.", "As we'll discuss in the next section, token distance is highly predictive of dependency relations.", "One hypothesis for why attention diverges from dependency relations in the deeper layers is that several attention heads in these layers target very specific constructs (Tables REF and REF ) as opposed to more general dependency relations.", "The deepest layers also target longer-range relationships (see next 
section), whereas dependency relations span relatively short distances (3.89 tokens on average).", "We also analyzed the specific dependency types of tokens receiving attention (Figure REF ).", "Subjects (csubj, csubjpass, nsubj, nsubjpass) were targeted more in deeper layers, while auxiliaries (aux), conjunctions (cc), determiners (det), expletives (expl), and negations (neg) were targeted more in lower layers, consistent with previous findings [2].", "For some other dependency types, the interpretations were less clear." ], [ "Attention Distance", "We found that attention distance (Eq.", "REF ) is greatest in the deepest layers (Figure REF , right), confirming that these layers capture longer-distance relationships.", "Attention distance varies greatly across heads ($SD = 3.6$ ), even when the heads are in the same layer, due to the wide variation in attention structures (e.g., Figure REF left and center).", "We also explored the relationship between attention distance and attention entropy (Eq.", "REF ), which measures how diffuse an attention pattern is.", "Overall, we found a moderate correlation ($r=0.61$ , $p < 0.001$ ) between the two.", "As Figure REF shows, many heads in layers 0 and 1 have high entropy (e.g., Figure REF , center), which may explain why these layers have a higher attention distance compared to layers 2–4.", "One counterexample is layer 5, head 1 (Figure REF , right), which has the highest mean attention distance of any head (14.2), and one of the lowest mean entropy scores (0.41).", "This head concentrates attention on individual words in repeated phrases, which often occur far apart from one another.", "We also explored how attention distance relates to dependency alignment.", "Across all heads, we found a negative correlation between the two quantities ($r=-0.73, p <0.001$ ).", "This is consistent with the fact that the probability of two tokens sharing a dependency relation decreases as the distance between them increases; for example, the probability of being in a dependency relation is 0.42 for adjacent tokens, 0.07 for tokens at a distance of 5, and 0.02 for tokens at a distance of 10.", "This holds true up to a distance of 18 tokens; 99.8% of dependency relations occur within this distance.", "The layers (2–4) in which attention spanned the shortest distance also had the highest dependency alignment." ], [ "Qualitative Analysis", "To get a sense of the lexical patterns targeted by each attention head, we extracted exemplar sentences that most strongly induced attention in that head.", "Specifically, we ranked sentences by the maximum token-to-token attention weight within each sentence.", "Results for three attention heads are shown in Tables REF –REF .", "We found other attention heads that detected entities (people, places, dates), passive verbs, acronyms, nicknames, paired punctuation, and other syntactic and semantic properties.", "Most heads captured multiple types of patterns.", "Table: Exemplar sentences for layer 11, head 10, which focuses attention from the end of a noun phrase to the head noun.", "In the first sentence, for example, the head noun is prospects and the remainder of the noun phrase is of Anglo-American assistance in another war with Germany.", "The purpose of this attention pattern is likely to predict the word (typically a verb) that follows the noun phrase, as the head noun is a strong predictor of this." 
], [ "Conclusion", "In this paper, we analyzed the structure of attention in the GPT-2 Transformer language model.", "We found that many attention heads specialize in particular part-of-speech tags and that different tags are targeted at different layer depths.", "We also found that the deepest layers capture the most distant relationships, and that attention aligns most strongly with dependency relations in the middle layers where attention distance is lowest.", "Our qualitative analysis revealed that the structure of attention is closely tied to the training objective; for GPT-2, which was trained using left-to-right language modeling, attention often focused on words most relevant to predicting the next token in the sequence.", "For future work, we would like to extend the analysis to other Transformer models such as BERT, which has a bidirectional architecture and is trained on both token-level and sentence-level tasks.", "Although the Wikipedia sentences used in our analysis cover a diverse range of topics, they all follow a similar encyclopedic format and style.", "Further study is needed to determine how attention patterns manifest in other types of content, such as dialog scripts or song lyrics.", "We would also like to analyze attention patterns in text much longer than a single sentence, especially for new Transformer variants such as the Transformer-XL [6] and Sparse Transformer [5], which can handle very long contexts.", "We believe that interpreting a model based on attention is complementary to linguistic probing approaches (Section ).", "While linguistic probing precisely quantifies the amount of information encoded in various components of the model, it requires training and evaluating a probing classifier.", "Analyzing attention is a simpler process that also produces human-interpretable descriptions of model behavior, though recent work casts doubt on its role in explaining individual predictions [12].", "The results of our analyses were often 
consistent with those from probing approaches." ], [ "Acknowledgements", "Y.B.", "was supported by the Harvard Mind, Brain, and Behavior Initiative." ], [ "Appendix", "Figures REF and REF show the results from Figures REF and REF for the full set of part-of-speech tags." ] ]
1906.04284
[ [ "Decision Dynamics in Groups with Interacting Members" ], [ "Abstract Group decisions involve the combination of evidence accumulation by individual members and direct member-to-member interactions.", "We consider a simplified framework of two deciders, each undergoing a two-alternative forced choice task, with the choices of early-deciding members biasing members who have yet to choose.", "We model decision dynamics as a drift-diffusion process and present analysis of the associated Fokker-Planck equation for the group.", "We show that the probability of coordinated group decisions (both members make the same decision) is maximized by setting the decision threshold of one member to a lower value than its neighbor's.", "This result is akin to a speed-accuracy tradeoff, where the penalty of lowering the decision threshold is choice inaccuracy while the benefit is that earlier decisions have a higher probability of influencing the other member.", "We numerically extend these results to large group decisions, where we show that, with appropriately chosen parameters, a small but vocal component of the population can exert a large influence on the total system." 
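The single-decider drift-diffusion process underlying the model can be sketched with a minimal Euler-Maruyama simulation. The code below is our own illustration (function names are assumptions; the parameter values match those used later in the paper).

```python
import numpy as np

def simulate_decider(mu, D, theta, dt=1e-3, rng=None):
    """Drift-diffusion decider: dX = mu*dt + sqrt(2D)*dW, X(0) = 0.
    Runs until X hits +theta (choice +1) or -theta (choice -1);
    returns (choice, decision_time)."""
    rng = np.random.default_rng() if rng is None else rng
    sig = np.sqrt(2.0 * D * dt)
    x, t = 0.0, 0.0
    while abs(x) < theta:
        x += mu * dt + sig * rng.standard_normal()
        t += dt
    return (1 if x >= theta else -1), t

# With mu = 0.75, D = 1, theta = 1, the + choice should dominate.
rng = np.random.default_rng(0)
results = [simulate_decider(0.75, 1.0, 1.0, rng=rng)[0] for _ in range(500)]
p_plus = np.mean([g == 1 for g in results])
print(p_plus)  # roughly 0.68 for these parameters (up to Euler and sampling error)
```

The empirical fraction of + choices can be checked against the known splitting probability of a drift-diffusion process between two absorbing boundaries, which is about 0.68 here.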
], [ "Introduction", "There is a long history of study of how evidence is integrated and ultimately drives decisions [11], [21], [4].", "An often-used framework is the two-alternative forced choice task (TAFC), where decisions are constrained to be between two alternatives with evidence steadily accumulated over time.", "While the TAFC framework is admittedly oversimplified, it has provided a wealth of data by which to compare and contrast various models of decision processes [4].", "Many models treat TAFC decision dynamics as a drift-diffusion stochastic process [25], [16], [24], [4], [17], where the drift term models the steady accumulation of evidence and the diffusion term models variability in decision making.", "Drift-diffusion models capture observed decision behavior at both single neuron [12] and psychophysical levels [18], and have been shown to perform optimal decision making with appropriate assumptions [4].", "Decisions are not always made by individuals in isolation, but by members of a group where all individuals are actively engaged in the decision process [29], [8].", "There are mixed reports about the benefits (or lack thereof) of deciding within a group, with examples where group decisions are more accurate [27] and others where a systematic group bias is detected [10].", "Nevertheless, group decision making theory has been applied to economics, political science, and animal behavior [1], [3], [7], [2], [13], [8], [22].", "Combining the drift-diffusion dynamics of a population of deciders performing a TAFC task to form a group decision is a natural extension, and several groups have made important advances in this area (see [14] for a review).", "However, these modeling studies rarely consider interactions between deciders during evidence accumulation (but see [23]).", "In this paper we consider a group of deciders each engaged in a TAFC task, with each member modeled as a drift-diffusion process.", "When a member of the group makes a decision it 
communicates its choice to all other members of the group, in the hope of influencing those who have yet to decide.", "In general, introducing coupling between drift-diffusion processes creates significant challenges in any analysis.", "To make our model tractable we consider an instantaneous interaction: when a member decides, it `kicks' all the other members towards the decision it made.", "Following the interaction, the decision makers continue their drift-diffusion processes independent of one another.", "Therefore, other than at a finite set of interaction times, the stochastic processes are independent of each other.", "This type of interaction permits a calculation of the probabilities of group decisions through extensions of the single-decider framework.", "We begin with the simple case of two deciders.", "One of the deciders is biased towards the + choice (without loss of generality).", "By varying the amount of evidence that is required for this decider to make a decision (decision threshold), we aim to maximize the probability that both deciders choose the + decision (++ decision).", "In the case where there is no interaction between the deciders, the optimal solution for the + decider is to require a very large amount of evidence to make a decision.", "In this way, the random noise from the diffusion term is irrelevant and the + decider always makes the + decision.", "Thus, the probability of a joint ++ decision rests solely on the other decider (which the $+$ decider has no influence over).", "We contrast this to the case with decider interaction.", "We find that for large enough decider coupling a finite decision threshold maximizes the probability of a ++ decision.", "A compromise emerges: the + decider sometimes chooses the $-$ decision in error, but gains the possibility of influencing the other decider.", "We conclude by showing that the intuition gained for the two-decider case carries over to a large $N$-decider 
population.", "We begin by considering a system of two deciders, each of which is trying to decide between a + choice and a $-$ choice (Figure REF ).", "The group decision dynamics obey the following pair of Langevin equations [20]: $dX_{1}&=&\\mu _1 dt+\\sqrt{2D}dW_{1}+G_2q\\delta (t-t_2), \\\\dX_{2}&=&\\mu _2 dt+\\sqrt{2D}dW_{2}+G_1q\\delta (t-t_1).", "$ The processes $X_1(t)$ and $X_2(t)$ represent the amount of evidence collected by deciders 1 and 2 at time $t$ , respectively.", "For a given decider $i$ ($i = 1,2$ ), when the evidence $X_i(t)$ reaches a value $+\\theta _i$ it decides on the + choice, but if the evidence reaches $-\\theta _i$ , it decides on the $-$ choice.", "We consider evidence accumulation to be stochastic and include the statistically independent Brownian processes $W_1(t)$ and $W_2(t)$ , with diffusion coefficient $D>0$ .", "Finally, we set $X_1(0)=X_2(0)=0$ so that neither decider is initially biased towards the $+$ or $-$ decision.", "Figure: Modeling group decisions with drift-diffusion dynamics.", "A. 
Instantaneous kick: A decider crosses through the $-$ threshold, and immediately kicks its neighbor across the same threshold.", "B.", "Kick and diffusion: A decider crosses through the $+$ threshold and kicks its neighbor up by an amount $q=\\theta /2$ , after which the neighbor then diffuses across the $+$ threshold.", "The random decision time for decider $i$ is denoted by $t_i$ and $G_i=1$ if $i$ chooses the + choice, while $G_i=-1$ if $i$ chooses the $-$ choice.", "The interaction between the deciders is modeled by the final terms in Eqs. (REF ) and ().", "When decider $i$ chooses the $\\pm $ choice at time $t_i$ , it provides an instantaneous evidence kick of intensity $G_iq$ at time $t_i$ to $X_j$ ($j \\ne i$ ).", "In other words, upon a decision a decider will attempt to influence its neighbor to choose the same choice it did.", "We remark that the coupling term is only relevant if the neighbor decider has yet to decide.", "Assuming decider $i$ decides before decider $j$ , the interaction can be separated into two cases: either decider $i$ kicks $j$ across $\\pm \\theta _j$ instantly (meaning $t_i=t_j$ ; see Figure REF A), or it kicks decider $j$ and $j$ eventually drifts across one of the boundaries at a later time ($t_j > t_i$ ; see Figure REF B).", "An alternative model of decision coupling would be for $\\mu _j \\rightarrow \\mu _j +q$ at time $t_i$ .", "This model would imply that a neighbor's decision is a continual source of evidence to the other decider, which must be accumulated over time to have influence.", "In this study we confine ourselves to the former model, where decisions are communicated in an instant.", "We are interested in the group decision ($G_1,G_2$ ).", "One approach for obtaining the probabilities of ($G_1,G_2$ ) is to estimate them from many Monte-Carlo realizations of Eqs. (REF ) and ().", "Another is to solve an associated Fokker-Planck equation to obtain analytic estimates for these probabilities.", "We follow both of these 
approaches and find that they agree very closely." ], [ "Calculating group decisions with interaction", "The group decision dynamics within our model constitute a two-dimensional problem governed by the concentration $c(x_1,x_2,t)$ .", "With proper normalization (see below) the concentration is the probability density at time $t$ for the evidences $X_1$ and $X_2$ over $x_1 \\in (-\\theta _1,\\theta _1)$ and $x_2\\in (-\\theta _2,\\theta _2)$ , respectively.", "Since $X_1$ and $X_2$ are independent before the first interaction at time $t_i$ , the evolution of the system decouples for $t < t_i$ and we get that $c(x_1,x_2,t)=c_1(x_1,t)c_2(x_2,t)$ .", "The stochastic dynamics of Eqs. (REF ) and () each obey the associated Fokker-Planck equation [20] for $c_i(x,t)$ : $\\frac{\\partial c_i}{\\partial t}=-\\mu _i\\frac{\\partial c_i}{\\partial x}+D\\frac{\\partial ^{2}c_i}{\\partial x^{2}}, \\hspace{28.45274pt} i=1,2.", "$ A decision occurs when $X_i$ reaches one of the boundaries $\\pm \\theta _i$ ; this amounts to supplementing Eq. (REF ) with the absorbing boundary conditions: $c_i\\left( \\theta _i,t\\right) =c_i\\left( -\\theta _i,t\\right) =0,$ for all $t>0.$ Furthermore, there is no evidence accumulation for $t < 0$ so that the concentration at time $t=0$ obeys: $c_i\\left( x,0\\right) =\\delta \\left( x\\right) .$ In general, under these conditions the drift-diffusion equation admits the Fourier series solution: $ c_i\\left( x,t\\right) =\\sum _{m=1}^{\\infty }\\frac{e^{\\frac{\\mu _i x}{2D}}}{2\\theta _i}\\left( -1\\right) ^{m+1}e^{-k_{2m-1}t}\\sin \\left( w_{2m-1}\\left(x+\\theta _i\\right) \\right)$ where $w_{2m-1} & \\equiv \\frac{\\left( 2m-1\\right) \\pi }{2\\theta _i},\\\\k_{2m-1} & \\equiv \\frac{\\mu _i^{2}}{4D}+Dw_{2m-1}^{2}.$ In what follows we wish to calculate the probability that both deciders cross the $+$ threshold (without loss of generality).", "This requires incorporating the interaction that happens at the decision time of the first decider.", 
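The Monte-Carlo route mentioned above can be sketched as follows. This is our own illustrative Euler-scheme implementation of the coupled Langevin pair with the instantaneous kick (helper names are ours; the parameters mirror the values used in the Results section).

```python
import numpy as np

def simulate_pair(mu1, mu2, theta1, theta2, q, D=1.0, dt=1e-3, rng=None):
    """One realization of the coupled two-decider model: independent
    drift-diffusion paths, plus an instantaneous kick of G_i*q to the
    undecided member when the first decision is made. Returns (G1, G2)."""
    rng = np.random.default_rng() if rng is None else rng
    x = [0.0, 0.0]
    mu, theta = [mu1, mu2], [theta1, theta2]
    G = [0, 0]                                  # 0 = undecided
    sig = np.sqrt(2.0 * D * dt)
    while 0 in G:
        for i in (0, 1):
            if G[i] != 0:
                continue
            x[i] += mu[i] * dt + sig * rng.standard_normal()
            if abs(x[i]) >= theta[i]:
                G[i] = 1 if x[i] >= theta[i] else -1
                j = 1 - i
                if G[j] == 0:
                    x[j] += G[i] * q            # kick the undecided member
                    if abs(x[j]) >= theta[j]:   # possibly an instant crossing
                        G[j] = 1 if x[j] >= theta[j] else -1
    return tuple(G)

# Estimate P_{++} for opposing drifts and strong coupling.
rng = np.random.default_rng(1)
n = 500
p_pp = sum(simulate_pair(0.75, -0.75, 1.0, 1.0, q=1.85, rng=rng) == (1, 1)
           for _ in range(n)) / n
print(p_pp)
```

With this strong coupling, whichever member decides first almost always drags its neighbor across the same threshold, so the estimate is dominated by which decider happens to reach a boundary first.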
"The conditioned first passage time (FPT) density of decider $i$ is denoted $f_i^{\\pm }(t)$ , and it describes the probability that the decider will make the $\\pm $ decision at time $t$ .", "Note that $\\int _{0}^{\\infty } f_i^{\\pm }(t) dt$ is the total probability that the decider makes the $\\pm $ choice, and we have that $\\int _{0}^{\\infty }[f^{+ }_i(t)+f^{- }_i(t)] dt = 1$ (i.e., there is always a decision).", "The total FPT density of decider $i$ is then: $f_i(t) \\equiv f_i^{+}(t)+f_i^{- }(t).$ The FPT densities can be computed from the flux of concentration passing the threshold at time $t$ : $f^{\\pm }_i\\left( t\\right) =\\mp \\left. D\\frac{\\partial c_i}{\\partial x} \\right|_{x=\\pm \\theta _i}.$ If we condition on decider $i$ deciding before $j$ ($t_i < t_j$ ) then the FPT density of decider $i$ escaping through the $\\pm \\theta _i$ threshold at time $t_i$ is simply: $ f_{i}^{\\pm }(t_{i}\\vert t_i < t_j) = f_{i}^{\\pm }(t_{i})\\int _{t_{i}}^{\\infty } f_{j}(t_{j})dt_{j}.$ As expected, the conditioned $f_{i}^{\\pm }(t_{i}\\vert t_i < t_j)$ is an asymmetric, single-mode function (Figure REF ).", "Using the truncated Fourier series solution in Eq. (REF ) gives excellent agreement with $f_{i}^{\\pm }(t_{i}\\vert t_i < t_j)$ estimated from direct simulations of Eqs. (REF ) and () (Figure REF ).", "Figure: Simulated and theoretical FPT densities for decider 1 making the + decision first.", "We take $D=1$ , $\\mu _1=-\\mu _2=0.75$ , and $\\theta _1=\\theta _2=1$ .", "For the theoretical FPT density we truncated the Fourier series solution for $c(x,t)$ at mode $m=100$ .", "Immediately before decider $j$ is kicked at time $t_{i}$ , labelled $t_{i}^{\\rightarrow }$ (to avoid cumbersome notation we denote the left and right limits $t \\rightarrow t_i$ as $t_i^{\\rightarrow }$ and $t_i^{\\leftarrow }$ , respectively), the concentration $c_{j}(x,t_i^{\\rightarrow })$ of decider $j$ is given by: $c_{j}(x,t_i^{\\rightarrow })=\\sum 
_{m=1}^{\\infty }\\frac{e^{\\frac{\\mu _{j}x}{2D}}}{2\\theta _{j}}\\left( -1\\right) ^{m+1}e^{-k_{2m-1}t_{i}}\\sin \\left(w_{2m-1}\\left( x+\\theta _{j}\\right) \\right) .$ For decider $j$ to escape at the $\\pm $ threshold, there are two cases to consider: 1) $j$ is kicked across the $\\pm $ threshold instantly ($t_j=t_i$ ; Figure REF A), or 2) $j$ is kicked and then diffuses across the $\\pm $ threshold at a later time ($t_j>t_i$ , Figure REF B).", "The conditional probability that $j$ crosses the $\\pm $ gate can thus be decomposed as: $P(G_j = \\pm 1|i\\text{ crossing }\\pm \\text{ gate at time }t_{i}) \\\\= P(j\\text{ inst. crossing }\\pm \\text{ gate}|i\\text{ crossing }\\pm \\text{ gate at time }t_{i}) \\\\ + P(j\\text{ diff. across }\\pm \\text{ gate}|i\\text{ crossing }\\pm \\text{ gate at time }t_{i}).$ If $X_j$ is between the lower value $L_{j}^{+}\\equiv \\theta _{j}-q$ and the upper value $U_{j}^{+}\\equiv \\theta _{j}$ , then the decider will be kicked instantaneously across the $+\\theta _{j}$ gate (assuming decider $i$ made the + choice).", "Similarly, if decider $j$ is between $L_{j}^{-}\\equiv -\\theta _{j}$ and $U_{j}^{-}\\equiv -\\theta _{j}+q$ , then the decider will be kicked instantaneously across the $-\\theta _{j}$ gate (assuming decider $i$ made the $-$ choice).", "The probability of instantaneous crossing conditioned on $i$ crossing the $\\pm $ gate at time $t_{i}$ is then the probability that the evidence $X_j$ will be in the range $(L_{j}^{\\pm },U_{j}^{\\pm })$ .", "That is, $P(j\\text{ inst. crossing }\\pm \\text{ gate}|i\\text{ crossing }\\pm \\text{ gate at time }t_{i})=\\frac{\\int _{L_{j}^{\\pm }}^{U_{j}^{\\pm }}c_{j}(x,t_i^{\\rightarrow })dx}{\\int _{-\\theta _{j}}^{\\theta _{j}}c_{j}(x,t_i^{\\rightarrow })dx}.$ We can make this expression simpler by defining the density $\\rho _i(x,t)$ as the normalized concentration: $\\rho _i(x,t)\\equiv \\frac{c_i(x,t)}{\\int _{-\\theta _i}^{\\theta _i}c_i(x,t)dx}.$", "This means that 
$\\int _{-\\theta _i}^{\\theta _i}\\rho _i(x,t)dx=1$ for all $t>0$ .", "Then the above equation becomes $P(j\\text{ inst. crossing }\\pm \\text{ gate}|i\\text{ crossing }\\pm \\text{ gate at time }t_{i})=\\int _{L_{j}^{\\pm }}^{U_{j}^{\\pm }}\\rho _{j}(x,t_i^{\\rightarrow })dx.", "$ We must next treat separately the case when decider $j$ is kicked and then diffuses across the threshold.", "At time $t_{i}$ decider $j$ is kicked with magnitude $q$ and we assume that $X_j(t_i^{\\rightarrow }) \\pm q \\in (-\\theta _j,\\theta _j)$ .", "Immediately after the kick the concentration $c_{j}(x,t_{i}^{\\leftarrow })$ of decider $j$ is shifted by magnitude $q$ in the $\\pm x$ direction: $c_{j}(x,t_i^{\\leftarrow }) = c_{j}(x \\mp q,t_i^{\\rightarrow }).$ For times $t>t_i$ we again have independent diffusion and the evidence accumulation of decider $j$ can be obtained from the one-dimensional diffusion process (Eq. (REF )), now with an initial density of $c_{j}(x,t_i^{\\leftarrow })$ (as opposed to $c_j(x,0)=\\delta (x)$ ).", "However, in this case we are only interested in the decision $G_j$ and not the decision time $t_j$ .", "For simple random walk dynamics it is well known that if decider $j$ has evidence $x$ , the probability that it will cross through the $\\pm $ gate, denoted $\\epsilon ^{\\pm }_j(x)$ , is given by [19], [26]: $\\epsilon ^{+}_j(x) &=& \\exp \\left( \\frac{\\mu _j (\\theta _j-x)}{2D}\\right) \\frac{\\sinh [\\mu _j(x+\\theta _j)/(2D)]}{\\sinh [2\\mu _j \\theta _j/(2D)]}, \\\\\\epsilon ^{-}_j(x) &=& 1-\\epsilon ^{+}_j(x).$ If $i$ escapes through the positive gate then decider $j$ is not kicked across instantly if $X_j$ is between $\\Lambda _{j}^{+} \\equiv -\\theta _{j}$ and $\\Omega _{j}^{+} \\equiv \\theta _{j}-q$ .", "Equivalently, if $i$ escapes through the negative gate, decider $j$ is not kicked across instantly if it is between $\\Lambda _{j}^{-} \\equiv -\\theta _{j}+q$ and $\\Omega _{j}^{-} \\equiv \\theta _{j}$ .", "From this we have the probability that 
$j$ is not kicked across instantaneously being $\\int _{\\Lambda _{j}^{\\pm }}^{\\Omega _{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow })dx$ .", "Once decider $j$ has been kicked its probability density is given by $\\rho _{j}(x,t_i^{\\leftarrow })$ .", "Thus, the total conditional probability that decider $j$ diffuses through the $\\pm $ gate is: $P(j \\text{ diff. across } \\pm \\text{ gate} \\vert i \\text{ crossing } \\pm \\text{ gate at time } t_{i}) \\\\ =\\int _{\\Lambda _{j}^{\\pm }}^{\\Omega _{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow }) dx \\int _{-\\theta _{j}}^{\\theta _{j}} \\rho _{j}(x,t_i^{\\leftarrow })\\epsilon ^{\\pm }_{j}(x) dx .$", "Finally, the probability of $j$ crossing the $\\pm $ gate (by whatever means) conditioned on $i$ crossing the same gate at $t_{i}$ is from Eqs. (REF ) and (REF ): $P(G_j = \\pm 1 \\vert i \\text{ crossing } \\pm \\text{ gate at time } t_{i}) \\\\ = \\int _{L_{j}^{\\pm }}^{U_{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow })dx +\\int _{\\Lambda _{j}^{\\pm }}^{\\Omega _{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow }) dx\\int _{-\\theta _{j}}^{\\theta _{j}} \\rho _{j}(x,t_i^{\\leftarrow }) \\epsilon ^{\\pm }_{j}(x) dx.$ Putting this all together, we find that the probability that both deciders cross the $\\pm $ gate (with $i$ crossing before $j$ ) is given by: $P(G_i=G_j =\\pm 1 \\vert t_i < t_j ) \\\\ = \\int _{0}^{\\infty } f_{i}^{\\pm }(t_{i}\\vert t_i < t_j) \\Bigg [ \\int _{L_{j}^{\\pm }}^{U_{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow })dx \\\\ + \\int _{\\Lambda _{j}^{\\pm }}^{\\Omega _{j}^{\\pm }} \\rho _{j}(x,t_i^{\\rightarrow }) dx \\int _{-\\theta _{j}}^{\\theta _{j}} \\rho _{j}(x,t_i^{\\leftarrow }) \\epsilon ^{\\pm }_{j}(x) dx \\Bigg ] dt_{i}.", "$ Without loss of generality we now calculate the probability that both decide on the + choice.", "In this case, we have that Eq. (REF ) is: $P(G_i=G_j = 1 \\vert t_i < t_j ) \\\\ =\\int _{0}^{\\infty } f_{i}^{+}(t_{i}\\vert t_i < t_j) \\Bigg [ \\int _{\\theta 
_{j}-q}^{\\theta _{j}}\\rho _{j}(x,t_i^{\\rightarrow })dx \\\\ +\\int _{-\\theta _{j}}^{-\\theta _{j}+q}\\rho _{j}(x,t_i^{\\rightarrow })dx\\int _{-\\theta _{j}}^{\\theta _{j}}\\rho _{j}(x,t_i^{\\leftarrow })\\epsilon ^{+}_{j}(x)dx\\Bigg ] dt_{i}.$ Finally, the probability of a $+,+$ group decision is then given by the sum of the probabilities for the cases $t_1 < t_2$ and $t_2 < t_1$ .", "This yields: $P_{++} \\equiv \\textrm {Prob}(G_1=1,G_2=1) \\\\ = \\sum _{\\begin{array}{c}i=1\\\\j\\ne i\\end{array}}^2\\int _{0}^{\\infty }f_{i}^{ + }(t_{i}\\vert t_i < t_j) \\Bigg [ \\int _{\\theta _{j}-q}^{\\theta _{j}}\\rho _{j}(x,t_i^{\\rightarrow })dx \\\\ + \\int _{-\\theta _{j}}^{-\\theta _{j}+q}\\rho _{j}(x,t_i^{\\rightarrow })dx\\int _{-\\theta _{j}}^{\\theta _{j}}\\rho _{j}(x,t_i^{\\leftarrow })\\epsilon ^{+ }_{j}(x)dx\\Bigg ] dt_{i}.", "$ Eq. (REF ) naturally decomposes into two terms.", "The first term represents the contribution whereby decider $j$ is kicked across the threshold instantly by the kick it receives from decider $i$ .", "The second term is the contribution when $j$ is kicked by $i$ , and then $j$ diffuses, at a later time, across the threshold." ], [ "Simulation and theory results", "To explore the joint decision dynamics of Eqs. (REF )-() we begin by setting $D=1,$ $\\mu _1=-\\mu _2=0.75,$ $\\theta _{2}=1$ , and varying over $\\theta _{1}$ and $q$ .", "More to the point, we assume that the deciders have opposing drifts so that if $D=0$ and $q=0$ then $P_{++}=0$ .", "However, with $D>0$ and $q>0$ , noise-induced errors and decider interaction will ensure that $P_{++}>0$ .", "Figure: Instantaneous (A) and diffusion (B) components of $P_{++}$ for a two-decider system as $\\theta _1$ varies.", "Here $D=1$ , $\\mu _1=0.75$ , $\\mu _2=-0.75$ , and $\\theta _2=1$ .", "C. 
The solid lines are calculated directly from Eq. (), while the dashed lines are calculated using simulations of Eqs. ()-() with a stochastic Euler scheme ($\\Delta t = 10^{-3}$ , $10^5$ decisions for a fixed $\\theta $ ).", "In all panels we show results for the different interaction strengths ($q$ ) as indicated in the legend.", "For various values of $q$ we consider the contributions to $P_{++}$ that result from an instantaneous kick (Figure REF A) and a kick with diffusion (Figure REF B), as well as their sum (Figure REF C).", "When $q=0.45$ the instantaneous component is much smaller than the kick-and-diffuse component (Figure REF A,B, blue curves).", "This is expected since the coupling is weak relative to the typical distance between $X_i$ and $\\pm \\theta _i$ .", "In contrast, when $q=1.85$ the instantaneous component far outweighs the kick-and-diffuse component (Figure REF A,B, red curves).", "The total probability $P_{++}$ (the sum of the two components) as derived from our analysis in Eq. (REF ) gives a very accurate match to direct simulations of the decision processes described by the Langevin equations in Eqs. (REF )-() (Figure REF C, dashed vs. 
solid curves).", "The central aim of our study is to understand how $\\theta _1$ determines $P_{++}$ .", "For $q=0$ this is straightforward.", "In this case the deciders are independent with $P(G_1=1,G_2=1)= P(G_1=1)P(G_2=1)$ , and maximizing $P_{++}$ is equivalent to maximizing $P(G_1=1)$ .", "When $\\mu _1>0$ , $P(G_1=1)$ only increases with $\\theta _1$ , since large decision thresholds are less susceptible to noise-induced decision errors.", "Thus, for groups without coupling, decider 1 should have as high a decision threshold as possible, to at least be confident in its own decision.", "Decision networks with $q>0$ give an interesting contrast to the independent case.", "For both small and large $q$ , $P_{++}$ is maximized at a finite value of $\\theta _1 < \\theta _2=1$ (Figure REF C, blue and red curves).", "In other words, if a decider wishes to bias the group decision towards its personal bias then it should set its decision threshold to a lower value than if it were deciding in isolation from the group.", "While large $\\theta _1$ mitigates the fluctuations in the decision process, it also forces the decision time $t_1$ to be large.", "Recall that if $t_1 >t_2$ then decider 1 cannot influence decider 2, since decider 2 will have already decided.", "Thus, in coupled networks there is a benefit to deciding early so as to influence the neighbor decider.", "In this way the coupling introduces a form of 'speed-accuracy tradeoff' in the group decision.", "Figure: Values of $P_{++}$ and $\\theta $ at the interior maximum as a function of the coupling $q$ .", "The dashed line in the $P_{++}$ plot is the limiting value of the maximum $P_{++}$ when $q = 0$ .", "The interior maximum in $P_{++}$ as a function of $\\theta _1$ , labeled $\\theta _{max}$ , first appears at $q \\approx 0.45$ (Figure REF ).", "As $q$ increases the value of $\\theta _{max}$ initially decreases.", "This is because for larger $q$ the + decider has more influence on the $-$ 
decider, and to maximize $P_{++}$ through interaction it is best to have a lower value of $\\theta _1$ so that there is a higher probability that $t_1 < t_2$ .", "In a small region around $q=1$ the peak disappears, and then it reappears for larger $q$ .", "After it reappears, $\\theta _{max}$ increases with $q$ .", "This is because for large $q$ , when the + decider makes its choice it will likely immediately kick the other decider, forcing it to make the same choice.", "Thus, for $q>1$ it is best for the + decider to have a higher threshold, thereby increasing the chance that decider 1 will cross the + threshold, which will likely result in the other decider making the + choice as well through an instantaneous kick.", "In the limit $q=2$ the curves saturate because when one of the deciders makes its decision, it will always kick the other one over the same threshold (since $q$ is twice the value of $\\theta _2$ ).", "In general, the value of $P_{++}$ at $\\theta _{max}$ increases with $q$ , since coupling will increase the probability of both making the same decision.", "We remark that for large $q$ the group decision probability $P_{++}$ at $\\theta _{max}$ is larger than in the case $q=0$ and $\\theta \\rightarrow \\infty $ (Figure REF B, dashed vs. 
solid).", "In other words, despite the $+$ decider losing accuracy with a lower decision threshold, with sufficient coupling the probability of the $+,+$ group decision is higher than in the optimal uncoupled case.", "Finally, we asked whether the interior maximum in $P_{++}$ as a function of $\\theta _1$ is a robust feature over a range in $\\mu _2$ and $D$ .", "Overall, $P_{++}$ increases with $D$ since fluctuations are required for decider 2 (with $\\mu _2=-0.75$ ) to cross the positive $\\theta _2$ threshold.", "For a wide range of $D$ a maximum occurs at a specific $\\theta _1 < \\theta _2=1$ (Figure REF A).", "The maximum is also robust to changes in $\\mu _2 <0$ ; however, the maximum disappears when $\\mu _2$ becomes sufficiently positive (Figure REF B).", "In this case decider 2 will have a tendency to cross the positive threshold even without coupling.", "However, for $\\theta _1 < \\theta _2$ there is a larger gain in $P_{++}$ than for $\\theta _1 > \\theta _2$ .", "This is because for $\\theta _1 < \\theta _2$ the interaction can help overcome the fluctuations that cause errors in decider 2.", "The robustness of a maximum in $P_{++}$ at a finite value of $\\theta _1$ holds for large $q$ as well (not shown).", "Figure: Joint probability of a +,+ group decision for various values of the diffusion coefficient $D$ (A) and the bias for decider 2, $\\mu _2$ (B).", "Here $\\mu _1=0.75$ and $q=0.45$ .", "Our analytic theory can be extended to the $N$-decider case (see Appendix).", "However, for $N>2$ the theory is cumbersome and we simply explore the larger population case using numerical simulations.", "We consider a total population of 100 deciders.", "We divide them into two populations, $A$ with $N_A=75$ deciders and $B$ with $N_B=25$ deciders.", "The diffusion coefficient for all the deciders in the population is fixed at $D=1$ .", "As in the two-decider case the drift for deciders in group $A$ is $\\mu = 0.75$ , while the drift for deciders 
in $B$ is $-\\mu $ .", "Members of population $A$ have no influence on any member, so that $q_{AA}=q_{AB}=0$ , while members of $B$ influence everyone in the system with magnitude $q$ , so that $q_{BA}=q_{BB}=q$ (Figure REF A).", "The Langevin equations governing the evidence accumulation for this system are as follows.", "$dX_{Ai}&=&\\mu dt+\\sqrt{2D}dW_{Ai}+\\sum \\limits _{k=1}^{N_B} G_{Bk} q \\delta (t-t_k), \\\\dX_{Bj}&=&-\\mu dt+\\sqrt{2D}dW_{Bj}+ \\sum \\limits _{k=1}^{N_B}G_{Bk} q \\delta (t-t_k)(1-\\delta _{jk}), $ where $1 \\le i \\le 75$ , $1 \\le j \\le 25$ , $t_k$ is the time at which decider $k$ in the B population decides, and $G_{Bk}$ is $\\pm 1$ if decider $k$ chose the $\\pm $ decision.", "The $1-\\delta _{jk}$ term in Eq. (REF ) removes self-coupling within population $B$ .", "Sample realizations show stochastic decision dynamics similar to the two-decider case (Figure REF B).", "Figure: A.", "A schematic of the population.", "Members of B can influence other B members and A members, but A members can influence no one.", "B.", "A demonstration of the cascading effect discussed in the text.", "Populations A (2 members) and B (3 members) are as described in the text, with $q=0.5$ .", "Here, B1 escapes first, kicking all the other members down.", "This almost immediately pushes out B3 through the lower exit, further lowering A1, A2, and B2.", "C. 
Fraction $f_d$ of deciders that make the $-$ choice.", "Let $f_d$ be the fraction of deciders that choose the $-$ decision.", "For $q>0$ this measures the influence that the smaller, interacting part of the population has on the group as a whole; i.e., the more deciders that choose the $-$ decision, the more influence population $B$ has on the total population.", "In the limit $q \\rightarrow 0$ , to maximize $f_d$ we should send $\\theta _B \\rightarrow \\infty $ to maximize the chance that $B$ deciders cross through the negative threshold.", "However, as $q$ increases $f_d$ is maximized at a finite value of $\\theta _B$ (Figure REF C; $\\theta _A=1$ ).", "This maximum can be as high as $f_d \\approx 0.8$ : a quarter of the population has driven 80 percent of all members of the population across the $-$ threshold towards which the 25 members of population $B$ are biased (Figure REF C, $q=0.1$ ).", "Note that, unlike in the two-decider case discussed above, the value of the peak first increases with $q$ , but then it decreases and eventually saturates (Figure REF C, $q=0.5$ and $q=2$ curves).", "This can be explained as follows.", "When the first $B$ decider makes its decision, it will make the $\\pm $ choice and will shift the evidences of all of the other deciders in the $\\pm $ direction.", "If $q$ is large, this will cause a large number of deciders (some of which will, of course, be $B$ deciders) to cross through the $\\pm $ threshold.", "These will in turn shift the remaining deciders, causing some of them to cross, and so on.", "In this way, for large enough values of $q$ , we have a cascading effect: the fate of almost all the deciders is determined by a single (uncertain) decider.", "This dynamic is akin to herd behavior, where members of the group are driven primarily by neighbor decisions rather than their own evidence accumulation [1].", "On the other hand, for smaller values of $q$ one decider will have some influence on the 
system, but not enough to solely influence a large fraction of the population.", "Rather, the system's overall behavior depends on more than one decider and is thus subject to less noise.", "This means that a higher fraction of the deciders will decide on the choice towards which $B$ is biased.", "Our simulations of the $N$-decider system show that larger populations qualitatively match the key feature of the two-decider case.", "Namely, for nonzero coupling it is possible to choose a decision threshold so as to maximize the probability of a coordinated group decision." ], [ "Discussion", "In this paper we modeled interactions between two or more TAFC drift-diffusion models.", "In the two-decider model the deciders were biased towards opposite decisions.", "By varying the threshold for the + decider, we determined how $P_{++}$ (the probability that both deciders choose the + gate) is controlled for various values of the coupling strength $q$ .", "When $q=0$ , i.e., no coupling, $P_{++}$ is maximized for decision threshold $\\theta \\rightarrow \\infty $ .", "Our main finding is that for a large range of $q>0$ , $P_{++}$ is maximized at a finite $\\theta $ for the + decider.", "The intuition for the two-decider case was extended to a large population of coupled deciders, where we demonstrated that with an appropriately chosen threshold a small but strongly coupled subgroup of the population can have a large impact on the group decision.", "In developing the theory of the two-decider system, we had to consider integrals over one of the time variables (see Eq. (REF )).", "In principle, this can be extended to the case with $N$ deciders.", "However, in order to complete this, we need to analyze the combinatorics of the orders in which the deciders make their decisions (see Appendix).", "The analog of (REF ) for the $N$-decider case will be an $N-1$ dimensional integral (see Eq. (REF )).", "Evaluating a large number of these high-dimensional integrals is 
computationally cumbersome.", "Previous studies have considered the collective decision making of groups of drift-diffusion models.", "One class of model considers the accumulation dynamics in populations of uncoupled agents with a threshold decision rule (same as Eqs. (REF )-() with $q=0$ ) combined with a consensus group decision [9], [14].", "In such models the independent diffusion permits a clear analysis to be performed.", "Another class of model considers evidence accumulation in a population where deciders linearly couple their evidence [23], [15].", "Here the interaction is continuous in time, yet the decision mechanics do not involve thresholded evidence; rather, the accumulation is free-running.", "The linearity of the model permits an analysis of the full population accumulation.", "Our framework is distinct from these models in that it combines both interaction during the accumulation process and a threshold decision rule.", "Evidence accumulation is only shared at decision times, and otherwise accumulation is independent between deciders, permitting an analysis of group activity (Eq. (REF )).", "However, both the uncoupled population model with consensus and the linear free-running population model admit analyses that scale well with system size, unlike our model.", "Our model exhibits a form of speed-accuracy tradeoff [28], [5], [6], [4], [23].", "In the TAFC task there is a tradeoff in the decision making process: the decider would like to make the correct decision in the shortest amount of time.", "To increase the probability that it makes the correct choice the decider can increase the amount of evidence that it requires to make a decision (i.e., the threshold $\\theta $ ).", "However, larger decision thresholds increase the amount of time it takes to reach the threshold, and hence decision accuracy and decision speed are at odds with one another.", "In the classical speed-accuracy tradeoff for a single TAFC drift-diffusion model, the reward 
for making a correct decision quickly is expressed via a `cost function' that is added to the model by hand [4].", "However, in our group decision model the speed-accuracy tradeoff emerges from the group interaction.", "The decision agents still want to be accurate, but the incentive for speed is not a built-in cost function.", "Rather, the reward for an individual decider to make its choice quickly (and correctly) is the chance for it to influence other members of the population, and thus increase the fraction of population members that decide its correct choice.", "The single decider drift-diffusion model has been very influential in decision theory [25], [16], [24], [4], [17], in large part because one can compute the threshold value $\\theta $ that will optimize the reward rate received by the decider [4].", "Combining this theory with efficient statistical techniques to estimate the drift and diffusion terms from a collection of decision experiments gives a prescription to test whether deciders are acting optimally.", "Our model provides a theory for the decision outcomes and times from groups of coupled deciders.", "It remains to extend our theory in Eq.", "(REF ) to compute optimal thresholds ($\\theta $ ) and interactions ($q$ ) under a set of task constraints.", "With this in hand it may be possible to determine whether groups of deciders act optimally."
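The speed-accuracy tradeoff described here can be reproduced with a minimal Euler-Maruyama simulation of a single TAFC drift-diffusion decider. This is only a sketch: the parameter values $\mu = D = 1$ and the two thresholds are illustrative assumptions, not taken from the text. Raising the threshold $\theta$ increases both the fraction of correct ($+\theta$) decisions and the mean decision time.

```python
import numpy as np

def simulate_ddm(mu, D, theta, n_trials=2000, dt=0.01, seed=0):
    """Simulate dX = mu dt + sqrt(2 D) dW until |X| >= theta.

    Returns (fraction of +theta crossings, mean first-passage time).
    """
    rng = np.random.default_rng(seed)
    correct, times = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < theta:
            x += mu * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
            t += dt
        correct += x >= theta
        times.append(t)
    return correct / n_trials, float(np.mean(times))

# A small vs. a large decision threshold (assumed values).
acc_lo, t_lo = simulate_ddm(mu=1.0, D=1.0, theta=0.5)
acc_hi, t_hi = simulate_ddm(mu=1.0, D=1.0, theta=2.0)
# The larger threshold buys accuracy at the cost of decision speed.
assert acc_hi > acc_lo and t_hi > t_lo
```

Discretization overshoot makes the simulated accuracies slightly below the continuum values, but the ordering that defines the tradeoff is robust.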
], [ "Theory for $N$ -Decider case", "Suppose we have $N$ deciders where decider $i$ has evidence accumulation described by $D_i,$ $\\mu _i,$ and $\\theta _i$ .", "A decider pair has a coupling $q_{ij}$ representing the influence of $j$ on $i.$ If decider $i$ escapes through the threshold $\\pm \\theta _{i}$ , we assign the value $\\pm 1$ to $G_{i}$ ($1\\le i\\le N)$ .", "We wish to calculate: $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right),$ where ord refers to the decision order under consideration; without loss of generality we take $\\text{ord}=t_{1}<t_{2}<\\cdots <t_{N}.$ This probability can be written as follows $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right) =\\int _{0}^{\\infty }dt_{1}P\\left( G_{1},t_{1},\\text{ord}\\right) P\\left( G_{2},\\ldots ,G_{N}|G_{1},t_{1},\\text{ord}\\right) .$ We have that: $P\\left( G_{1},t_{1},\\text{ord}\\right) =\\int _{\\text{ord}}f_{1}^{G_{1}}\\left( t_{1}\\right){\\displaystyle \\prod \\limits _{k=2}^{N}}f_{k}\\left( t_{k}\\right) dt_{2}\\cdots dt_{N}.$ Define $f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right) \\equiv \\int _{\\text{ord}}f_{1}^{G_{1}}\\left( t_{1}\\right){\\displaystyle \\prod \\limits _{k=2}^{N}}f_{k}\\left( t_{k}\\right) dt_{2}\\cdots dt_{N}$ so that $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right) =\\int _{0}^{\\infty }dt_{1}f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right) P\\left( G_{2},\\ldots ,G_{N}|G_{1},t_{1},\\text{ord}\\right) .$ We now wish to calculate $P\\left( G_{2},\\ldots ,G_{N}|G_{1},t_{1},\\text{ord}\\right)$ .", "Note that decider 2 makes its decision after decider 1 but before decider 3, and so on.", "At each stage there are two possibilities: either the decider is kicked instantly across the desired threshold (which is only possible if $G_{i}=G_{i-1}$ ), or it diffuses and escapes later.", "Now, define the probability density ($t_{j}$ is the time at which decider $j$ makes its decision) as: $\\rho _{i}\\left( x_i,t;G_{1}q_{i1},t_{1},\\ldots ,G_{k}q_{ik},t_{k}\\right)$ to be the 
density of decider $i$ 's evidence at time $t$ after the kicks $G_{1}q_{i1}$ at $t_{1},\\ldots ,G_{k}q_{ik}$ at $t_{k}.$ Define $\\rho _{i}\\left( x_i,t;k\\right) \\equiv \\rho _{i}\\left( x_i,t;G_{1}q_{i1},t_{1},\\ldots ,G_{k}q_{ik},t_{k}\\right).$ If decider 1 escapes at $t_{1},$ decider 2 can escape instantly so that $t_{2}=t_{1},$ or it can escape at some later time $t_{2}>t_{1}.$ In either of these situations, it can kick 3 across instantly, or some time later, and so on, until the $N$ th decider.", "This can be seen below schematically.", "$\\text{decider 1 crosses at }t_{1}\\longrightarrow \\Biggl \\lbrace \\ \\begin{array}[c]{c}2\\text{ kicked inst.}\\longrightarrow \\Biggl \\lbrace \\ \\begin{array}[c]{c}3\\text{ kicked inst.}\\longrightarrow \\cdots \\\\3\\text{ crosses later}\\longrightarrow \\cdots \\end{array}\\\\2\\text{ crosses later}\\longrightarrow \\Biggl \\lbrace \\ \\begin{array}[c]{c}3\\text{ kicked inst.}\\longrightarrow \\cdots \\\\3\\text{ crosses later}\\longrightarrow \\cdots \\end{array}\\end{array}$ For each $2\\le i\\le N$ define the variable $k_{i}$ to be $k_{i}=\\Biggl \\lbrace \\ \\begin{array}[c]{c}0,\\text{ }i\\text{ kicked instantly after }i-1\\\\1,\\text{ }i\\text{ escapes sometime later through correct gate}\\end{array}$ Now, consider the decision of the $i$ th decider.", "Suppose that the previous deciders made their decisions at times $t_{1},\\ldots ,t_{i-1}.$ If $k_{i}=0,$ the probability that decider $i$ makes an instantaneous threshold crossing is: $\\int _{\\ell _{i}^{i-1}}^{u_{i}^{i-1}}\\rho _{i}\\left( x_i,t_{i-1};i-2\\right) dx$ where $\\ell _{i}^{i-1}=\\ell _{i}^{i-1}\\left( G_{i},G_{i-1}\\right) =\\Biggl \\lbrace \\ \\begin{array}[c]{c}\\theta _{i}-G_{i-1}q_{i}^{i-1},\\text{ }G_{i}>0\\\\-\\theta _{i},\\text{ }G_{i}<0\\end{array}$ $u_{i}^{i-1}=u_{i}^{i-1}\\left( G_{i},G_{i-1}\\right) =\\Biggl \\lbrace \\ \\begin{array}[c]{c}\\theta _{i},\\text{ }G_{i}>0\\\\-\\theta _{i}-G_{i-1}q_{i}^{i-1},\\text{ }G_{i}<0.\\end{array}$ The probability that the decision is not made instantly is: $\\int _{L_{i}^{i-1}}^{U_{i}^{i-1}}\\rho _{i}\\left(
x_i,t_{i-1};i-2\\right) dx$ with $L_{i}^{i-1}=L_{i}^{i-1}\\left( G_{i},G_{i-1}\\right) =\\Biggl \\lbrace \\ \\begin{array}[c]{c}-\\theta _{i},\\text{ }G_{i-1}>0\\\\-\\theta _{i}+q_{i}^{i-1},\\text{ }G_{i-1}<0\\end{array}$ $U_{i}^{i-1}=U_{i}^{i-1}\\left( G_{i},G_{i-1}\\right) =\\Biggl \\lbrace \\ \\begin{array}[c]{c}\\theta _{i}-q_{i}^{i-1},\\text{ }G_{i-1}>0\\\\\\theta _{i},\\text{ }G_{i-1}<0\\end{array}.$ Given that decider $i$ does not decide instantly after $i-1$ , the first passage time density $f_{i}^{G_{i}}\\left( t_{i};i-1\\right) $ is obtained by calculating the flux from $\\rho _{i}\\left( x_i,t_{i};i-1\\right) ,$ normalized such that $\\int _{t_{i-1}}^{\\infty }\\left[ f_{i}^{G_{i}}\\left( t_{i};i-1\\right)+f_{i}^{-G_{i}}\\left( t_{i};i-1\\right) \\right] dt_{i}=1.$ Thus the contribution from this case is: $\\int _{L_{i}^{i-1}}^{U_{i}^{i-1}}\\rho _{i}\\left( x_i,t_{i-1};i-2\\right)dx\\int _{t_{i-1}}^{\\infty }f_{i}^{G_{i}}\\left( t_{i};i-1\\right)dt_{i}.$ Note that everything after decider $i$ depends on $t_{i}$ so we are integrating these over $t_{i}$ as well.", "Now, at each stage (i.e., for each value of $i>1$ ), $k_{i}$ can be 0 or $1.$ Summing over all possible values of the $k_{i}$ 's gives us all cases for a given set of decisions ($G$ 's) and a time ordering: $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right) & =\\int _{0}^{\\infty }f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right) P\\left( G_{2},\\ldots ,G_{N}|G_{1},t_{1},\\text{ord}\\right)dt_{1} \\\\& =\\sum _{k_{2},\\ldots ,k_{N}}\\int _{0}^{\\infty }f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right) P\\left( G_{2},\\ldots ,G_{N},k_{2},\\ldots ,k_{N}|G_{1},t_{1},\\text{ord}\\right)dt_{1}$ For a given set of values for $k_{2},\\ldots ,k_{i-1},$ the contribution from decider $i$ is $\\mathcal {C}_{i}\\left( k_{i};k_{2},\\ldots ,k_{i-1}\\right) =\\Biggl \\lbrace \\ \\begin{array}[c]{c}\\int _{\\ell _{i}^{i-1}}^{u_{i}^{i-1}}\\rho _{i}\\left( x_i,t_{i-1};i-2\\right)dx,\\text{ }k_{i}=0\\\\\\int
_{L_{i}^{i-1}}^{U_{i}^{i-1}}\\rho _{i}\\left( x_i,t_{i-1};i-2\\right)dx\\int _{t_{i-1}}^{\\infty }f_{i}^{G_{i}}\\left( t_{i};i-1\\right)dt_{i} ,\\text{ }k_{i}=1.\\end{array}$ As well, $P\\left( G_{2},\\ldots ,G_{N},k_{2},\\ldots ,k_{N}|G_{1},t_{1},\\text{ord}\\right)={\\displaystyle \\prod \\limits _{i=2}^{N}}\\mathcal {C}_{i}\\left( k_{i};k_{2},\\ldots ,k_{i-1}\\right)$ so that $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right) =\\sum _{k_{2},\\ldots ,k_{N}}\\int _{0}^{\\infty }f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right)dt_{1}{\\displaystyle \\prod \\limits _{i=2}^{N}}\\mathcal {C}_{i}\\left( k_{i};k_{2},\\ldots ,k_{i-1}\\right) .$ We can define $\\mathcal {C}_{1}\\equiv \\int _{0}^{\\infty }dt_{1}f_{1,\\text{ord}}^{G_{1}}\\left( t_{1}\\right) $ so that $P\\left( G_{1},\\ldots ,G_{N},\\text{ord}\\right) =\\sum _{k_{2},\\ldots ,k_{N}}{\\displaystyle \\prod \\limits _{i=1}^{N}}\\mathcal {C}_{i}\\left( k_{i};k_{2},\\ldots ,k_{i-1}\\right) .$ As an application consider a system with three deciders; two are in the $A$ population, and one is in the $B$ population.", "Let $D_{A}=D_{B}=D,$ and $\\mu _{B}=-\\mu _{A}=-\\mu .$ The deciders in the $A$ population ($A1$ and $A2$ ) influence the other system members with magnitude $q_{A}$ and the $B$ decider influences the other deciders with magnitude $q_{B}.$ Up to permutations of the $A$ deciders, there are 3 ways of ordering the deciders: $\\text{ord 1}\\text{: } & t_{A1}<t_{A2}<t_{B}\\\\\\text{ord 2}\\text{: } & t_{A1}<t_{B}<t_{A2}\\\\\\text{ord 3}\\text{: } & t_{B}<t_{A1}<t_{A2}.$ Let us calculate $P\\left( G_{1},G_{2},G_{3},\\text{ord 1}\\right)$ (labeling the deciders as $1\\longleftrightarrow A1,$ $2\\longleftrightarrow A2,$ and $3\\longleftrightarrow B$ ).", "Using the scheme outlined above, $A1\\text{ crosses at }t_{A1}\\longrightarrow \\Biggl \\lbrace \\ \\begin{array}[c]{c}k_{2}=0\\longrightarrow \\Biggl \\lbrace \\ \\begin{array}[c]{c}k_{3}=0\\\\k_{3}=1\\end{array}\\\\k_{2}=1\\longrightarrow \\Biggl \\lbrace \\
\\begin{array}[c]{c}k_{3}=0\\\\k_{3}=1\\end{array}\\end{array}$ So, using the formulas for each case, $& P\\left( G_{1},G_{2},G_{3},\\text{ord 1}\\right) \\\\& =\\int _{0}^{\\infty }f_{A,\\text{ord1}}^{G_{1}}\\left( t_{1}\\right)dt_{1}\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{2}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{1}\\right) dx \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{3}^{2}}^{u_{3}^{2}}\\rho _{B}\\left(x,t_{1};1\\right) dx\\\\+\\int _{L_{3}^{2}}^{U_{3}^{2}}\\rho _{B}\\left( x,t_{1};t_{1}\\right)dx\\int _{-\\theta _{B}}^{\\theta _{B}}\\rho _{B}\\left( x,t_{1};t_{1},t_{1}\\right)\\varepsilon _{B}^{G_{3}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{1}\\right) dx\\int _{t_{1}}^{\\infty }dt_{2}f_{A}^{G_{2}}\\left( t_{2};1\\right) \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{3}^{2}}^{u_{3}^{2}}\\rho _{B}\\left(x,t_{2};1\\right) dx\\\\+\\int _{L_{3}^{2}}^{U_{3}^{2}}\\rho _{B}\\left( x,t_{2};t_{1}\\right)dx\\int _{-\\theta _{B}}^{\\theta _{B}}\\rho _{B}\\left( x,t_{2};t_{1},t_{2}\\right)\\varepsilon _{B}^{G_{3}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace $ To get ord 2 from ord 1, we need to swap decider 2 with decider $3.$ $& P\\left( G_{1},G_{2},G_{3},\\text{ord 2}\\right) \\\\& =\\int _{0}^{\\infty }f_{A,\\text{ord2}}^{G_{1}}\\left( t_{1}\\right)dt_{1}\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{3}}\\int _{\\ell _{3}^{1}}^{u_{3}^{1}}\\rho _{B}\\left(x,t_{1}\\right) dx\\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{2}^{3}}^{u_{2}^{3}}\\rho _{A}\\left(x,t_{1};1\\right) dx\\\\+\\int _{L_{2}^{3}}^{U_{2}^{3}}\\rho _{A}\\left( x,t_{1};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{1};t_{1},t_{1}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{3}^{1}}^{U_{3}^{1}}\\rho _{B}\\left( x,t_{1}\\right) dx\\int
_{t_{1}}^{\\infty }f_{B}^{G_{3}}\\left( t_{3};1\\right)dt_{3} \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{2}^{3}}^{u_{2}^{3}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{3}}^{U_{2}^{3}}\\rho _{A}\\left( x,t_{3};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{1},t_{3}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace $ To get ord 3 from ord 2, we switch decider 1 with decider $3.$ $& P\\left( G_{1},G_{2},G_{3},\\text{ord 3}\\right) \\\\& =\\int _{0}^{\\infty }f_{B,\\text{ord3}}^{G_{3}}\\left( t_{3}\\right)dt_{3}\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{3}}\\int _{\\ell _{1}^{3}}^{u_{1}^{3}}\\rho _{A}\\left(x,t_{3}\\right) dx\\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{1}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{3};t_{3}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{3},t_{3}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{1}^{3}}^{U_{1}^{3}}\\rho _{A}\\left( x,t_{3}\\right) dx\\int _{t_{3}}^{\\infty }f_{A}^{G_{1}}\\left( t_{1};1\\right)dt_{1} \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{1}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{3};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{3},t_{1}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace $ Here $\\rho _{A}\\left( x,t;1\\right) $ represents the density of $A$ after the first kick, and $\\rho _{B}\\left( x,t;t_{1},t_{3}\\right) $ is the density of $B$ after the first kick at $t_{1}$ and the second kick at $t_{3},$ and $\\varepsilon _{i}^{\\pm 1}\\left( x\\right) $ is the probability that a decider of type $i$ $\\left( =A\\text{ or }B\\right) $ at some evidence
$x$ will cross threshold $\\pm \\theta _{i}$ .", "Now, for each of the ord 1, ord 2, ord 3, we can swap the two $A$ deciders and get the same answer; these represent all possible time orderings.", "Hence, $& P\\left( G_{1},G_{2},G_{3}\\right) \\\\& =2\\sum _{i=1}^{3}P\\left( G_{1},G_{2},G_{3},\\text{ord }i\\right) \\\\& =2\\int _{0}^{\\infty }f_{A,\\text{ord1}}^{G_{1}}\\left( t_{1}\\right)dt_{1}\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{2}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{1}\\right) dx\\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{3}^{2}}^{u_{3}^{2}}\\rho _{B}\\left(x,t_{1};1\\right) dx\\\\+\\int _{L_{3}^{2}}^{U_{3}^{2}}\\rho _{B}\\left( x,t_{1};t_{1}\\right)dx\\int _{-\\theta _{B}}^{\\theta _{B}}\\rho _{B}\\left( x,t_{1};t_{1},t_{1}\\right)\\varepsilon _{B}^{G_{3}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{1}\\right) dx\\int _{t_{1}}^{\\infty }dt_{2}f_{A}^{G_{2}}\\left( t_{2};1\\right) \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{3}^{2}}^{u_{3}^{2}}\\rho _{B}\\left(x,t_{2};1\\right) dx\\\\+\\int _{L_{3}^{2}}^{U_{3}^{2}}\\rho _{B}\\left( x,t_{2};t_{1}\\right)dx\\int _{-\\theta _{B}}^{\\theta _{B}}\\rho _{B}\\left( x,t_{2};t_{1},t_{2}\\right)\\varepsilon _{B}^{G_{3}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace \\\\& +2\\int _{0}^{\\infty }dt_{1}f_{A,\\text{ord2}}^{G_{1}}\\left( t_{1}\\right)\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{3}}\\int _{\\ell _{3}^{1}}^{u_{3}^{1}}\\rho _{B}\\left(x,t_{1}\\right) dx\\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{2}^{3}}^{u_{2}^{3}}\\rho _{A}\\left(x,t_{1};1\\right) dx\\\\+\\int _{L_{2}^{3}}^{U_{2}^{3}}\\rho _{A}\\left( x,t_{1};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{1};t_{1},t_{1}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{3}^{1}}^{U_{3}^{1}}\\rho _{B}\\left(
x,t_{1}\\right) dx\\int _{t_{1}}^{\\infty }dt_{3}f_{B}^{G_{3}}\\left( t_{3};1\\right) \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{3}}\\int _{\\ell _{2}^{3}}^{u_{2}^{3}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{3}}^{U_{2}^{3}}\\rho _{A}\\left( x,t_{3};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{1},t_{3}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace \\\\& +2\\int _{0}^{\\infty }dt_{3}f_{B,\\text{ord3}}^{G_{3}}\\left( t_{3}\\right)\\left\\lbrace \\begin{array}[c]{c}\\delta _{G_{1}G_{3}}\\int _{\\ell _{1}^{3}}^{u_{1}^{3}}\\rho _{A}\\left(x,t_{3}\\right) dx\\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{1}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{3};t_{3}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{3},t_{3}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right] \\\\+\\int _{L_{1}^{3}}^{U_{1}^{3}}\\rho _{A}\\left( x,t_{3}\\right) dx\\int _{t_{3}}^{\\infty }dt_{1}f_{A}^{G_{1}}\\left( t_{1};1\\right) \\\\ \\left[\\begin{array}[c]{c}\\delta _{G_{2}G_{1}}\\int _{\\ell _{2}^{1}}^{u_{2}^{1}}\\rho _{A}\\left(x,t_{3};1\\right) dx\\\\+\\int _{L_{2}^{1}}^{U_{2}^{1}}\\rho _{A}\\left( x,t_{3};t_{1}\\right)dx\\int _{-\\theta _{A}}^{\\theta _{A}}\\rho _{A}\\left( x,t_{3};t_{3},t_{1}\\right)\\varepsilon _{A}^{G_{2}}\\left( x\\right) dx\\end{array}\\right]\\end{array}\\right\\rbrace $ We want to calculate the probability that 2 or more deciders make the $-$ choice.", "There are four quantities to calculate: $P\\left( -,-,-\\right) ,P\\left( -,-,+\\right) ,P\\left( -,+,-\\right),P\\left( +,-,-\\right) .$ However, note that the last two are the same, due to the symmetry of the $A$ deciders.", "Hence, $P_{majority\\text{ }-}=P\\left( -,-,-\\right) +P\\left( -,-,+\\right)+2P\\left( -,+,-\\right).$"
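Expressions such as these can be sanity-checked by direct Monte Carlo simulation of the coupled accumulators. The sketch below simulates the three-decider example (two $A$ deciders with drift $+\mu$, one $B$ decider with drift $-\mu$) and estimates $P_{majority\;-}$. A single coupling magnitude $q$ is assumed for all pairs, and all numerical values are illustrative assumptions rather than values from the text.

```python
import numpy as np

def run_trial(rng, mu, D, theta, q, dt=0.01):
    """One trial with deciders A1, A2 (drift +mu) and B (drift -mu).

    When a decider crosses +/-theta it emits G = +/-1 and every
    still-undecided decider's evidence receives an instantaneous
    kick G * q, as in the interaction rule described above.
    """
    drift = np.array([mu, mu, -mu])
    x = np.zeros(3)
    G = np.zeros(3)
    undecided = [0, 1, 2]
    while undecided:
        for i in list(undecided):
            if abs(x[i]) < theta:           # diffuse one Euler step
                x[i] += drift[i] * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
            if abs(x[i]) >= theta:          # crossing (by diffusion or by a kick)
                G[i] = np.sign(x[i])
                undecided.remove(i)
                for j in undecided:         # instantaneous kick to the others
                    x[j] += G[i] * q
    return G

rng = np.random.default_rng(1)
trials = np.array([run_trial(rng, mu=1.0, D=1.0, theta=1.0, q=0.3)
                   for _ in range(1000)])
# majority "-" means at least 2 of the 3 deciders chose the - gate
p_majority_minus = np.mean((trials == -1).sum(axis=1) >= 2)
```

Since two of the three deciders drift toward $+\theta$, the estimate stays well below 1/2; such simulations give an independent check on the summed-over-orderings formulas.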
1906.04377
[ [ "Representation Learning-Assisted Click-Through Rate Prediction" ], [ "Abstract Click-through rate (CTR) prediction is a critical task in online advertising systems.", "Most existing methods mainly model the feature-CTR relationship and suffer from the data sparsity issue.", "In this paper, we propose DeepMCP, which models other types of relationships in order to learn more informative and statistically reliable feature representations, and in consequence to improve the performance of CTR prediction.", "In particular, DeepMCP contains three parts: a matching subnet, a correlation subnet and a prediction subnet.", "These subnets model the user-ad, ad-ad and feature-CTR relationship respectively.", "When these subnets are jointly optimized under the supervision of the target labels, the learned feature representations have both good prediction powers and good representation abilities.", "Experiments on two large-scale datasets demonstrate that DeepMCP outperforms several state-of-the-art models for CTR prediction." 
], [ "Introduction", "Click-through rate (CTR) prediction is to predict the probability that a user will click on an item.", "It plays an important role in online advertising systems.", "For example, the ad ranking strategy generally depends on CTR $\\times $ bid, where bid is the benefit the system receives if an ad is clicked by a user.", "Moreover, according to the common cost-per-click charging model, advertisers are only charged once their ads are clicked by users.", "Therefore, in order to maximize the revenue and to maintain a desirable user experience, it is crucial to estimate the CTR of ads accurately.", "CTR prediction has attracted lots of attention from both academia and industry [9], [24], [7].", "For example, the Logistic Regression (LR) model [23] considers linear feature importance and models the predicted CTR as $\\hat{y} = \\sigma (w_0 + \\sum _i w_i x_i),$ where $\\sigma (\\cdot )$ is the sigmoid function, $x_i$ is the $i$ th feature and $w_0, w_i$ are model weights.", "The Factorization Machine (FM) [21] is proposed to further model pairwise feature interactions.", "It models the predicted CTR as $\\hat{y} = \\sigma (w_0 + \\sum _i w_i x_i + \\sum _i \\sum _j \\mathbf {v}_i^T \\mathbf {v}_j x_i x_j),$ where $\\mathbf {v}_i$ is the latent embedding vector of the $i$ th feature.", "In recent years, Deep Neural Networks (DNNs) [14] are exploited for CTR prediction and item recommendation in order to automatically learn feature representations and high-order feature interactions [26], [29], [20], [3].", "To take advantage of both shallow and deep models, hybrid models are also proposed.", "For example, Wide&Deep [2] combines LR and DNN, in order to improve both the memorization and generalization abilities of the model.", "DeepFM [7] combines FM and DNN, which further improves the model ability of learning feature interactions.", "Neural Factorization Machine [8] combines the linearity of FM and the non-linearity of DNN.", "Figure: (a) Classical 
CTR prediction methods model the feature-CTR relationship.", "(b) DeepMCP further models feature-feature relationships, such as the user-ad relationship (dashed curve) and the ad-ad relationship (dotted curve).", "Nevertheless, these models only consider the feature-CTR relationship.", "In contrast, the DeepMCP model proposed in this paper additionally considers feature-feature relationships, such as the user-ad relationship and the ad-ad relationship.", "We illustrate their difference in Figure REF .", "Note that the feature interaction in FM still models the feature-CTR relationship.", "It can be considered as “two features”-CTR, because it models how the feature interaction $\\mathbf {v}_i^T \\mathbf {v}_j x_i x_j$ relates to the CTR $\\hat{y}$ , but does not model whether the two feature representations $\\mathbf {v}_i$ and $\\mathbf {v}_j$ should be similar to each other.", "In particular, our proposed DeepMCP model contains three parts: a matching subnet, a correlation subnet and a prediction subnet.", "They share the same embedding matrix.", "The matching subnet models the user-ad relationship (i.e., whether an ad matches a user's interest) and aims to learn useful user and ad representations.", "The correlation subnet models the ad-ad relationship (i.e., which ads are within a time window in a user's click sequence) and aims to learn useful ad representations.", "The prediction subnet models the feature-CTR relationship and aims to predict the CTR given all the features.", "When these subnets are jointly optimized under the supervision of the target labels, the feature representations are learned in such a way that they have both good prediction powers and good representation abilities.", "Moreover, as the same feature appears in different subnets in different ways, the learned representations are more statistically reliable.", "In summary, the main contributions of this paper are: We propose a new model DeepMCP for CTR prediction.", "Unlike classical CTR
prediction models that mainly consider the feature-CTR relationship, DeepMCP further considers user-ad and ad-ad relationships.", "We conduct extensive experiments on two large-scale datasets to compare the performance of DeepMCP with several state-of-the-art models.", "We make the implementation code of DeepMCP publicly available at https://github.com/oywtece/deepmcp." ], [ "Deep Matching, Correlation and Prediction (DeepMCP) Model", "In this section, we present the DeepMCP model in detail.", "Table: Each row is an instance for CTR prediction.", "The first column is the label (1 - clicked, 0 - unclicked).", "Each of the other columns is a field.", "The instantiation of a field is a feature.", "Figure: DeepMCP - Testing" ], [ "Model Overview", "The task of CTR prediction in online advertising is to estimate the probability of a user clicking on a specific ad.", "Table REF shows some example instances.", "Each instance can be described by multiple fields such as user information (user ID, city, etc.)", "and ad information (creative ID, title, etc.).", "The instantiation of a field is a feature.", "Unlike most existing CTR prediction models that mainly consider the feature-CTR relationship, our proposed DeepMCP model additionally considers the user-ad and ad-ad relationships.", "DeepMCP contains three parts: a matching subnet, a correlation subnet and a prediction subnet (cf.", "Figure REF (a)).", "When these subnets are jointly optimized under the supervision of the target labels, the learned feature representations have both good prediction powers and good representation abilities.", "Another property of DeepMCP is that although all the subnets are active during training, only the prediction subnet is active during testing (cf.", "Figure REF (b)).", "This makes the testing phase rather simple and efficient.", "Figure: Motivating example ($u$ - user features, $a$ - ad features, $o$ - other features).", "Please refer to § for more detail.", "We segregate the features into four
groups: user (e.g., user ID, age), query (e.g., query, query category), ad (e.g., creative ID, ad title) and other features (e.g., hour of day, day of week).", "Each subnet uses a different set of features.", "In particular, the prediction subnet uses all the four groups of features, the matching subnet uses the user, query and ad features, and the correlation subnet uses only the ad features.", "All the subnets share the same embedding matrix.", "Figure: Detailed view of the DeepMCP model.", "The prediction, matching and correlation subnets share the same embedding matrix." ], [ "Motivating Example", "Before we present the details of DeepMCP, we first illustrate the rationale of DeepMCP through a motivating example in Figure REF .", "For simplicity, we only show user features $u$ , ad features $a$ and other features $o$ .", "Because the feature embeddings (i.e., representations) are randomly initialized, when we consider the prediction task only, it is likely that the learned representation of user $u_1$ and that of user $u_2$ are largely different.", "This is because the prediction task does not model the relationship between features.", "As a consequence, it is hard to accurately estimate the pCTR of user $u_2$ on ad $a_3$ .", "If we further consider the matching task which models the user-ad relationship and the correlation task which models the ad-ad relationship, the learned representation of user $u_2$ should be similar to that of user $u_1$ and the representation of ad $a_3$ should be similar to that of ad $a_1$ .", "The pCTR of user $u_2$ on ad $a_3$ would be similar to the pCTR of $u_1$ on $a_3$ (as well as the pCTR of $u_1$ on $a_1$ ).", "As a consequence, the target pCTR is more likely to be accurate." 
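The shared-embedding layout described in the model overview can be sketched as follows. This is a numpy sketch: the vocabulary size, embedding dimension and feature indices are made-up illustrations, while the per-subnet feature groups mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 8                      # number of unique features, embedding dimension
E = rng.normal(size=(N, K))         # the single embedding matrix shared by all subnets

# feature indices of one instance, grouped as in the text (indices are made up)
groups = {
    "user":  [3, 17],               # e.g. user ID, age
    "query": [42],                  # e.g. query category
    "ad":    [101, 256],            # e.g. creative ID, pooled ad-title bi-grams
    "other": [512],                 # e.g. hour of day
}

def concat_embeddings(group_names):
    """Look up each group's feature embeddings and concatenate them."""
    idx = [i for g in group_names for i in groups[g]]
    return E[idx].reshape(-1)       # the long vector m

m_pred  = concat_embeddings(["user", "query", "ad", "other"])  # prediction subnet
m_match = concat_embeddings(["user", "query", "ad"])           # matching subnet
m_corr  = concat_embeddings(["ad"])                            # correlation subnet
assert m_pred.size == 6 * K and m_match.size == 5 * K and m_corr.size == 2 * K
```

Because every subnet reads from the same matrix `E`, gradients from all three losses update the same feature representations, which is the mechanism behind the motivating example above.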
], [ "Prediction Subnet", "The prediction subnet presented here is a typical DNN model.", "It models the feature-CTR relationship (where explicit or implicit feature interactions are modeled).", "It aims to predict the CTR given all the features, supervised by the target labels.", "Nevertheless, the DeepMCP model is flexible that the prediction subnet can be replaced by any other CTR prediction model, such as Wide&Deep [2] and DeepFM [7].", "First, a feature $x_i \\in \\mathbb {R}$ (e.g., a user ID) goes through an embedding layer and is mapped to its embedding vector $\\mathbf {e}_i \\in \\mathbb {R}^K$ , where $K$ is the vector dimension and $\\mathbf {e}_i$ is to be learned.", "The collection of all the feature embeddings is an embedding matrix $\\mathbf {E} \\in \\mathbb {R}^{N\\times K}$ , where $N$ is the number of unique features.", "For multivalent categorical features such as the bi-grams in the ad title, we first map each bi-gram to an embedding vector and then perform sum pooling to generate the aggregated embedding vector of the ad title.", "We then concatenate the embedding vectors from all the features as a long vector $\\mathbf {m}$ .", "The vector $\\mathbf {m}$ then goes through several fully connected (FC) layers with the ReLU activation function ($\\mathrm {ReLU}(x) = \\max (0, x)$ ), in order to exploit high-order nonlinear feature interactions [8].", "Nair and Hinton [[18]] show that ReLU has significant benefits over sigmoid and tanh activation functions in terms of the convergence rate and the quality of obtained results.", "Finally, the output $\\mathbf {z}$ of the last FC layer goes through a sigmoid function to generate the predicted CTR as $\\hat{y} = \\frac{1}{1+\\exp [- (\\mathbf {w}^T \\mathbf {z} + b)]},$ where $\\mathbf {w}$ and $b$ are model parameters to be learned.", "To avoid model overfitting, we apply dropout [25] after each FC layer.", "Dropout prevents feature co-adaptation by setting to zero a portion of hidden units during 
parameter learning.", "All the model parameters are learned by minimizing the average logistic loss on a training set as $ \\mathrm {loss}_p = - \\frac{1}{|\\mathbb {Y}|}\\sum _{y \\in \\mathbb {Y}} [y \\log \\hat{y} + (1 - y) \\log (1 - \\hat{y})],$ where $y \\in \\lbrace 0,1\\rbrace $ is the true label of the target ad corresponding to $\\hat{y}$ and $\\mathbb {Y}$ is the collection of labels." ], [ "Matching Subnet", "The matching subnet models the user-ad relationship (i.e., whether an ad matches a user's interest) and aims to learn useful user and ad representations.", "It is inspired by semantic matching models for web search [10].", "In classical matrix factorization for recommendation [13], the rating score is approximated as the inner product of the latent vectors of the user ID and the item ID.", "In our problem, instead of directly matching the user ID and the ad ID, we perform matching at a higher level, incorporating all the features related to the user and the ad.", "When a user clicks an ad, we assume that the clicked ad is relevant, at least partially, to the user's need (given the query submitted by the user, if any).", "In consequence, we would like the representation of the user features (and the query features) and the representation of the ad features to match well.", "In particular, the matching subnet contains two parts: “user part” and “ad part”.", "The input to the “user part” is the user features (e.g., user ID, age) and query features (e.g., query, query category).", "As in the prediction subnet, a feature $x_i \\in \\mathbb {R}$ first goes through an embedding layer and is mapped to its embedding vector $\\mathbf {e}_i \\in \\mathbb {R}^K$ .", "We then concatenate all the feature embeddings as a long vector $\\mathbf {m}_u \\in \\mathbb {R}^{N_u}$ ($N_u$ is the vector dimension).", "The vector $\\mathbf {m}_u$ then goes through several FC layers in order to learn more abstractive, high-level representations.", "We use $\\tanh $ (rather 
than ReLU) as the activation function of the last FC layer, which is defined as $\\tanh (x) = \\frac{1-\\exp (-2x)}{1+\\exp (-2x)}.$ We will explain the reason later.", "The output of the “user part” is a high-level user representation vector $\\mathbf {v}_u \\in \\mathbb {R}^M$ ($M$ is the vector dimension).", "The input to the “ad part” is the ad features (e.g., creative ID, ad title).", "Similarly, we first map each ad feature to its embedding vector and then concatenate them as a long embedding vector $\\mathbf {m}_a \\in \\mathbb {R}^{N_a}$ ($N_a$ is the vector dimension).", "The vector $\\mathbf {m}_a$ then goes through several FC layers and results in a high-level ad representation vector $\\mathbf {v}_a \\in \\mathbb {R}^M$ .", "Note that, the inputs to the “user” and “ad” parts usually have different sizes, i.e., $N_u \\ne N_a$ (because the number of user features and the number of ad features may not necessarily be the same).", "However, after the matching subnet, $\\mathbf {v}_u$ and $\\mathbf {v}_a$ have the same size $M$ .", "In other words, we project two different sets of features into a common low-dimensional space.", "We then compute the matching score $s$ as $s(\\mathbf {v}_u, \\mathbf {v}_a) = \\frac{1}{1 + \\exp (- \\mathbf {v}_u^T \\mathbf {v}_a)}.$ We do not use ReLU as the activation function of the last FC layer because the output after ReLU will contain lots of zeros, which makes $\\mathbf {v}_u^T \\mathbf {v}_a \\rightarrow 0$ .", "There are at least two choices to model the matching score: point-wise and pair-wise [15].", "In a point-wise model, we could model $s(\\mathbf {v}_u, \\mathbf {v}_a) \\rightarrow 1$ if user $u$ clicks ad $a$ and model $s(\\mathbf {v}_u, \\mathbf {v}_a) \\rightarrow 0$ otherwise.", "In a pair-wise model, we could model $s(\\mathbf {v}_u, \\mathbf {v}_{a_i}) > s(\\mathbf {v}_u, \\mathbf {v}_{a_j}) + \\delta $ where $\\delta >0$ is a margin, if user $u$ clicks ad $a_i$ but not ad $a_j$ .", "We choose the 
point-wise model because it can directly reuse the training dataset for the prediction subnet.", "Formally, we minimize the following loss for the matching subnet $\\mathrm {loss}_m &= - \\frac{1}{|\\mathbb {Y}|}\\sum _{y \\in \\mathbb {Y}} \\big [y(u,a) \\log s(\\mathbf {v}_u, \\mathbf {v}_a) \\nonumber \\\\&+ (1 - y(u,a)) \\log (1 - s(\\mathbf {v}_u, \\mathbf {v}_a)) \\big ], $ where $y(u,a) = 1$ if user $u$ clicks ad $a$ and it is 0 otherwise." ], [ "Correlation Subnet", "The correlation subnet models the ad-ad relationship (i.e., which ads are within a time window in a user's click sequence) and aims to learn useful ad representations.", "The skip-gram model is proposed in [17] to learn useful representations of words in a sequence, where words within a context window have certain correlation.", "It has been widely applied in many tasks to learn useful low-dimensional representations [30], [31].", "In our problem, we apply the skip-gram model to learn useful ad representations, because the clicked ads of a user also form a sequence with certain correlation over time.", "Formally, given a sequence of ads $\\lbrace a_1, a_2, \\cdots , a_L\\rbrace $ clicked by a user, we would like to maximize the average log likelihood as $ll = \\frac{1}{L} \\sum _{i=1}^L \\sum _{-C \\le j \\le C}^{1\\le i+j \\le L, j \\ne 0} \\log p(a_{i+j} | a_i),$ where $L$ is the number of ads in the sequence and $C$ is a context window size.", "The probability $p(a_{i+j} | a_i)$ can be defined in different ways such as softmax, hierarchical softmax and negative sampling [17].", "We choose the negative sampling technique due to its efficiency.", "$p(a_{i+j} | a_i)$ is then defined as $p(a_{i+j} | a_i) = \\sigma (\\mathbf {h}_{a_{i+j}}^T \\mathbf {h}_{a_i}) \\prod _{q=1}^Q \\sigma (-\\mathbf {h}_{a_q}^T \\mathbf {h}_{a_i}),$ where $Q$ is the number of sampled negative ads and $\\sigma (\\mathbf {h}_{a_{i+j}}^T \\mathbf {h}_{a_i}) = \\frac{1}{1 + \\exp (- \\mathbf {h}_{a_{i+j}}^T \\mathbf 
{h}_{a_i})}.$ $\\mathbf {h}_{a_i}$ is a high-level representation vector that involves all the features related to ad $a_i$ and that goes through several FC layers (cf.", "Figure REF ).", "The loss function of the correlation subnet is then given by minimizing the negative average log likelihood as $\\mathrm {loss}_c & = \\frac{1}{L} \\sum _{i=1}^L \\sum _{-C \\le j \\le C}^{1\\le i+j \\le L, j \\ne 0} \\Big [- \\log \\left[\\sigma (\\mathbf {h}_{a_{i+j}}^T \\mathbf {h}_{a_i}) \\right] \\nonumber \\\\& - \\sum _{q=1}^Q \\log \\left[\\sigma (-\\mathbf {h}_{a_q}^T \\mathbf {h}_{a_i}) \\right] \\Big ].", "$" ], [ "Offline Training Procedure", "The final joint loss function of DeepMCP is given by $ \\mathrm {loss} = \\mathrm {loss}_p + \\alpha \\mathrm {loss}_m + \\beta \\mathrm {loss}_c,$ where $\\mathrm {loss}_p$ is the prediction loss in Eq.", "(REF ), $\\mathrm {loss}_m$ is the matching loss in Eq.", "(REF ), $\\mathrm {loss}_c$ is the correlation loss in Eq.", "(REF ), and $\\alpha $ and $\\beta $ are tunable hyperparameters for balancing the importance of different subnets.", "The DeepMCP model is trained by minimizing the joint loss function on a training dataset.", "Since our aim is to maximize the CTR prediction performance, we evaluate the model on a separate validation dataset and record the validation AUC (an evaluation metric, which will be explained in §REF ) during the training procedure.", "The optimal model parameters are obtained at the highest validation AUC." ], [ "Online Procedure", "As we have illustrated in Figure REF (b), in the online testing phase, the DeepMCP model only needs to compute the predicted CTR (pCTR).", "Therefore, only the features from the target ad are needed and only the prediction subnet is active.", "This makes the online phase of DeepMCP rather simple and efficient." 
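The offline objective combines the three subnet losses as $\\mathrm {loss} = \\mathrm {loss}_p + \\alpha \\, \\mathrm {loss}_m + \\beta \\, \\mathrm {loss}_c$. A minimal NumPy sketch of how this joint loss could be assembled; the labels, predicted probabilities, correlation-loss value, and the settings of $\\alpha$ and $\\beta$ below are illustrative, not taken from the paper:

```python
import numpy as np

def logistic_loss(y, y_hat, eps=1e-12):
    """Average logistic (cross-entropy) loss, the form shared by loss_p and loss_m."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

def joint_loss(loss_p, loss_m, loss_c, alpha=0.1, beta=0.1):
    """loss = loss_p + alpha * loss_m + beta * loss_c."""
    return loss_p + alpha * loss_m + beta * loss_c

# Toy labels and outputs (hypothetical values, not from the paper).
y = np.array([1.0, 0.0, 1.0])          # click labels
p_hat = np.array([0.8, 0.2, 0.7])      # prediction-subnet pCTRs
s_hat = np.array([0.9, 0.3, 0.6])      # matching scores s(v_u, v_a)
loss_c = 0.5                           # value of the correlation (skip-gram) loss

total = joint_loss(logistic_loss(y, p_hat), logistic_loss(y, s_hat), loss_c)
```

In training, loss_c would come from the negative-sampling skip-gram objective, and the sum would be minimized jointly by a gradient-based optimizer; at serving time only the prediction subnet is evaluated, as described above.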
], [ "Experiments", "In this section, we conduct experiments on two large-scale datasets to evaluate the performance of DeepMCP and several state-of-the-art methods for CTR prediction." ], [ "Datasets", "Table REF lists the statistics of two large-scale datasets.", "1) Avito advertising datasethttps://www.kaggle.com/c/avito-context-ad-clicks/data.", "This dataset contains a random sample of ad logs from avito.ru, the largest general classified website in Russia.", "We use the ad logs from 2015-04-28 to 2015-05-18 for training, those on 2015-05-19 for validation, and those on 2015-05-20 for testing.", "In CTR prediction, testing is usually the next-day prediction.", "The test set contains $2.3\\times 10^6$ instances.", "The features used include 1) user features such as user ID, IP ID, user agent and user device, 2) query features such as search query, search category and search parameters, 3) ad features such as ad ID, ad title and ad category, and 4) other features such as hour of day and day of week.", "2) Company advertising dataset.", "This dataset contains a random sample of ad impression and click logs from a commercial advertising system in Alibaba.", "We use ad logs of 30 consecutive days during Aug.-Sep. 2018 for training, logs of the next day for validation, and logs of the day after the next day for testing.", "The test set contains $1.9\\times 10^6$ instances.", "The features used also include user, query, ad and other features." 
], [ "Methods Compared", "We compare the following methods for CTR prediction.", "LR.", "Logistic Regression [23].", "It models linear feature importance.", "FM.", "Factorization Machine [21].", "It models both first-order feature importance and second-order feature interactions.", "DNN.", "Deep Neural Network.", "It contains an embedding layer, several fully connected layers and an output layer.", "PNN.", "The Product-based Neural Network in [20].", "It introduces a product layer between the embedding layer and the fully connected layers of DNN.", "Wide&Deep.", "The Wide&Deep model in [2].", "It combines LR (wide part) and DNN (deep part).", "DeepFM.", "The DeepFM model in [7].", "It combines FM (wide part) and DNN (deep part).", "DeepCP.", "A variant of the DeepMCP model, which contains only the correlation and the prediction subnets.", "It is equivalent to setting $\\alpha = 0$ in Eq.", "(REF ).", "DeepMP.", "A variant of the DeepMCP model, which contains only the matching and the prediction subnets.", "It is equivalent to setting $\\beta =0$ in Eq.", "(REF ).", "DeepMCP.", "The DeepMCP model (§), which contains the matching, correlation and prediction subnets." ], [ "Parameter Settings", "We set the embedding dimension of each feature as $K=10$ , because the number of distinct features is huge.", "We set the number of fully connected layers in neural network-based models as 2, with dimensions 512 and 256.", "We set the batch size as 128, the context window size as $C=2$ and the number of negative ads as $Q=4$ .", "The dropout ratio is set to 0.5.", "All the methods are implemented in TensorFlow and optimized by the Adagrad algorithm [4].", "Table: Test AUC and Logloss on two large-scale datasets.", "DNN = Pred, DeepCP = Pred+Corr, DeepMP = Pred+Match, DeepMCP = Pred+Match+Corr."
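As a concrete illustration of the settings above (two fully connected layers of sizes 512 and 256, with $\\tanh$ on the last layer as in the matching subnet), here is a small NumPy sketch of the FC tower; the number of input features and the random weights are hypothetical:

```python
import numpy as np

# Settings from the paper: embedding dim K = 10, FC layer dims 512 and 256.
K, FC_DIMS = 10, (512, 256)

def fc_tower(x, weights, final_tanh=False):
    """Apply a stack of fully connected layers: ReLU activations,
    with tanh on the last layer when final_tanh is True."""
    for i, W in enumerate(weights):
        x = x @ W
        x = np.tanh(x) if (final_tanh and i == len(weights) - 1) else np.maximum(x, 0.0)
    return x

rng = np.random.default_rng(0)
n_feat = 8                                  # hypothetical number of user-side features
m_u = rng.normal(size=n_feat * K)           # concatenated feature embeddings
dims = (n_feat * K,) + FC_DIMS
Ws = [rng.normal(scale=0.05, size=(dims[i], dims[i + 1])) for i in range(len(FC_DIMS))]
v_u = fc_tower(m_u, Ws, final_tanh=True)    # user representation, entries in (-1, 1)
```

The bounded $\\tanh$ output avoids the many zeros a ReLU last layer would produce, which would drive the inner product $\\mathbf {v}_u^T \\mathbf {v}_a$ toward 0.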
], [ "Evaluation Metrics", "We use the following evaluation metrics.", "AUC: the Area Under the ROC Curve over the test set.", "The larger the better.", "It reflects the probability that a model ranks a randomly chosen positive instance higher than a randomly chosen negative instance.", "A small improvement in AUC is likely to lead to a significant increase in online CTR [2].", "Logloss: the value of Eq.", "(REF ) over the test set.", "The smaller the better." ], [ "Effectiveness", "Table REF lists the AUC and Logloss values of different methods.", "It is observed that FM performs much better than LR, because FM models second-order feature interactions while LR models linear feature importance.", "DNN further outperforms FM, because it can learn high-order nonlinear feature interactions [8].", "PNN outperforms DNN because it further introduces a product layer.", "Wide&Deep further outperforms PNN, because it combines LR and DNN, which improves both the memorization and generalization abilities of the model.", "DeepFM combines FM and DNN.", "It performs slightly better than Wide&Deep on the Avito dataset, but slightly worse on the Company dataset.", "We now examine our proposed models.", "DeepCP contains the correlation subnet and the prediction subnet.", "DeepMP contains the matching subnet and the prediction subnet.", "It is observed that both DeepCP and DeepMP outperform the best-performing baseline on the two datasets.", "As the baseline methods only consider the prediction task, these observations show that additionally considering representation learning tasks can aid the performance of CTR prediction.", "It is also observed that DeepMP performs much better than DeepCP.", "It indicates that the matching subnet brings more benefits than the correlation subnet.", "This makes sense because the matching subnet considers both the users and the ads, while the correlation subnet considers only the ads.", "It is observed that DeepMCP, which contains the matching,
correlation and prediction subnets, performs best on both datasets.", "These observations demonstrate the effectiveness of DeepMCP." ], [ "Effect of the Balancing Parameters", "In this section, we examine the effect of the balancing hyperparameters of DeepMCP.", "Figure REF and Figure REF examine $\\alpha $ (matching subnet) and $\\beta $ (correlation subnet) respectively.", "It is observed that the AUCs first increase as a hyperparameter grows, but then decrease as it grows further.", "On the Company dataset, a large $\\beta $ can lead to performance even worse than that of the prediction subnet alone.", "Overall, the matching subnet leads to a larger AUC improvement than the correlation subnet.", "The Company dataset is more sensitive to the $\\beta $ parameter.", "Figure: Company" ], [ "Effect of the Hidden Layer Size", "In this section, we examine the effect of the hidden layer sizes of neural network-based methods.", "In order not to make the figure cluttered, we only show the results of DNN, Wide&Deep and DeepMCP.", "Figure REF plots the AUCs vs. the hidden layer sizes when the number of hidden layers is 2.", "We use a shrinking structure, where the second layer dimension is only half of the first.", "It is observed that when the first layer dimension increases from 128 to 512, AUCs generally increase.", "But when the dimension grows further, the performance may degrade.", "This is possibly because it is more difficult to train a more complex model."
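The shrinking structure used in these experiments (each hidden layer twice the size of the next, ending at 256) can be captured by a one-line helper; the function name is ours:

```python
def shrinking_dims(n_layers, last=256):
    """Hidden-layer sizes where each layer doubles the next, ending at `last`,
    e.g. 3 layers -> [1024, 512, 256]."""
    return [last * 2 ** (n_layers - 1 - i) for i in range(n_layers)]
```

For example, `shrinking_dims(2)` gives the [512, 256] configuration used as the default in the experiments.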
], [ "Effect of the Number of Hidden Layers", "In this section, we examine the effect of the number of hidden layers.", "The dimension settings are: 1 layer - [256], 2 layers - [512, 256], 3 layers - [1024, 512, 256], and 4 layers - [2048, 1024, 512, 256].", "It is observed in Figure REF that when the number of hidden layers increases from 1 to 2, the performance generally increases.", "This is because networks with more hidden layers have greater expressive ability [8].", "But when the number of hidden layers further increases, the performance then decreases.", "This is because it is more difficult to train deeper neural networks.", "Figure: Company" ], [ "CTR Prediction", "CTR prediction has attracted lots of attention from both academia and industry [9], [2], [8], [31].", "Generalized linear models, such as Logistic Regression (LR) [23] and Follow-The-Regularized-Leader (FTRL) [16], have shown decent performance in practice.", "However, a linear model lacks the ability to learn sophisticated feature interactions [1].", "Factorization Machines (FMs) [21], [22] were proposed to model pairwise feature interactions in terms of the latent vectors corresponding to the involved features.", "Field-aware FM [12] and Field-weighted FM [19] further consider the impact of the field that a feature belongs to in order to improve the performance of FM.", "In recent years, Deep Neural Networks (DNNs) have been exploited for CTR prediction and item recommendation in order to automatically learn feature representations and high-order feature interactions [26], [3], [27], [8].", "Zhang et al.", "[29] propose Factorization-machine supported Neural Network (FNN), which pre-trains an FM before applying a DNN.", "Qu et al.", "[20] propose the Product-based Neural Network (PNN) where a product layer is introduced between the embedding layer and the fully connected layer.", "Cheng et al.", "[2] propose Wide&Deep, which combines LR and DNN in order to improve both the memorization and
generalization abilities of the model.", "Guo et al.", "[7] propose DeepFM, which models low-order feature interactions like FM and models high-order feature interactions like DNN.", "He et al.", "[8] propose the Neural Factorization Machine, which combines the linearity of FM and the non-linearity of neural networks.", "Nevertheless, these methods mainly model the feature-CTR relationship.", "Our proposed DeepMCP model further considers user-ad and ad-ad relationships." ], [ "Multi-modal / Multi-task Learning", "Our work is also closely related to multi-modal / multi-task learning, where multiple kinds of information or auxiliary tasks are introduced to help improve the performance of the main task.", "For example, Zhang et al.", "[28] leverage heterogeneous information (i.e., structural content, textual content and visual content) in a knowledge base to improve the quality of recommender systems.", "Gao et al.", "[5] utilize textual content and social tag information, in addition to classical item structure information, for improved recommendation.", "Huang et al.", "[11] introduce context-aware ranking as an auxiliary task in order to better model the semantics of queries in entity recommendation.", "Gong et al.", "[6] propose a multi-task model which additionally learns segment tagging and named entity tagging for slot filling in an online shopping assistant.", "In our work, we address a different problem and we introduce two auxiliary but related tasks (i.e., matching and correlation with shared embeddings) to improve the performance of CTR prediction."
], [ "Conclusion", "In this paper, we propose DeepMCP, which contains a matching subnet, a correlation subnet and a prediction subnet for CTR prediction.", "These subnets model the user-ad, ad-ad and feature-CTR relationships, respectively.", "Compared with classical CTR prediction models that mainly consider the feature-CTR relationship, DeepMCP has greater predictive power and representation ability.", "Experimental results on two large-scale datasets demonstrate the effectiveness of DeepMCP in CTR prediction.", "It is observed that the matching subnet leads to a larger performance improvement than the correlation subnet.", "This is possibly because the former considers both users and ads, while the latter considers ads only." ] ]
1906.04365
[ [ "Analysis of the susceptible-infected-susceptible epidemic dynamics in\n networks via the non-backtracking matrix" ], [ "Abstract We study the stochastic susceptible-infected-susceptible model of epidemic processes on finite directed and weighted networks with arbitrary structure.", "We present a new lower bound on the exponential rate at which the probabilities of nodes being infected decay over time.", "This bound is directly related to the leading eigenvalue of a matrix that depends on the non-backtracking and incidence matrices of the network.", "The dimension of this matrix is N+M, where N and M are the number of nodes and edges, respectively.", "We show that this new lower bound improves on an existing bound corresponding to the so-called quenched mean-field theory.", "Although the bound obtained from a recently developed second-order moment-closure technique requires the computation of the leading eigenvalue of an N^2 x N^2 matrix, we illustrate in our numerical simulations that the new bound is tighter, while being computationally less expensive for sparse networks.", "We also present the expression for the corresponding epidemic threshold in terms of the adjacency matrix of the line graph and the non-backtracking matrix of the given network." 
], [ "Introduction", "Epidemic processes are probably one of the most extensively studied dynamical processes in complex networks [1], [2], [3], [4], [5].", "These processes can be used for modeling the spread of infectious diseases in contact networks, as well as news in (offline or online) social networks, or computer viruses in communication networks, to name a few applications.", "A fundamental question in the analysis of epidemic processes, in the case of both deterministic and stochastic models, is to quantify the total number of nodes being infected by the spread over time.", "In most epidemic models, we find two clearly differentiated dynamical phases: one phase in which an initial infection quickly dies out and another phase in which an infection may propagate to a large fraction of the network.", "The concept of epidemic threshold is used for characterizing the conditions separating these two dynamical phases.", "Most of the existing stochastic epidemic models are Markov processes where the disease-free state is a unique absorbing state.", "This absorbing state is reached with probability one in finite time, regardless of the initial set of infected nodes or the values chosen for the parameters of the model.", "A critical distinction between the two phases described above is the expected time required to reach the disease-free state.", "In the first phase mentioned above, the epidemic dynamics converges exponentially fast towards the absorbing state.", "In contrast, in the second phase, this time can be exponentially long in terms of the number of nodes.", "It is also worth remarking that this observation is not applicable to stochastic epidemic processes taking place in infinite networks [6], [7] or deterministic models taking place in both finite and infinite networks [8], [2], [4], [5] because in both these cases it is possible for the disease to survive forever.", "Therefore, for stochastic epidemic processes in finite networks, the exponential rate 
at which the number of infected individuals decays toward zero, called the decay rate, is a relevant characterization of the dynamics [9], [10], [11].", "Intuitively, if the disease-free equilibrium takes a long time to be reached (in expectation), the decay rate would be close to zero.", "In contrast, if the infection dies out exponentially fast, the decay rate would be a positive value bounded away from zero.", "The decay rate can e.g.", "be used to measure the performance of control strategies aiming to eradicate an epidemic exponentially fast [12], [13], [14], [15].", "Generally speaking, finding the decay rate of stochastic epidemic processes in a large network is computationally hard.", "This is because the number of possible states in the Markovian models typically used to model epidemics over networks grows exponentially in terms of the number of nodes in the network.", "Specifically, the decay rate is given by the leading eigenvalue of the transition-probability matrix of the Markovian model, whose dimension depends exponentially on the number of nodes.", "For example, in the case of the susceptible-infected-susceptible (SIS) model on $N$ nodes, the exponential rate corresponds to the leading eigenvalue of a $2^N \\times 2^N$ transition-rate matrix [11], which is computationally challenging to calculate for large networks.", "An alternative approach to the exact computation of the decay rate is to seek computationally feasible bounds.", "For example, for the SIS model, a lower bound can be obtained using a mean-field approximation.", "This approximation is based on a first-order moment-closure technique allowing us to compute a bound on the decay rate from the leading eigenvalue of an $N\\times N$ matrix [9], [13].", "However, this mean-field approximation can result in a loose bound for many networks [16].", "To increase the accuracy of the approximation, the authors proposed a tighter bound on the decay rate using a second-order moment-closure technique
[16].", "This tighter bound, however, requires the computation of the leading eigenvalue of an $N^2\\times N^2$ matrix, which can be computationally prohibitive when analyzing epidemic processes in large networks.", "In the present work, we derive a new lower bound on the decay rate of the stochastic SIS model in an arbitrary finite network.", "This new bound depends on the leading eigenvalue of an $(N+M) \\times (N+M)$ matrix, where $M$ is the number of directed edges; hence, for sparse networks — such as networks with a bounded maximum degree — the proposed lower bound is computationally more tractable than the bound derived in [16].", "Our lower bound is based on an alternative second-order moment-closure technique aiming to overcome the computational challenges of existing second-order moment-closure techniques.", "The new bound depends on the non-backtracking matrix [17], [18].", "The non-backtracking matrix has recently gained popularity in the network science community because it is the basis of efficient and theoretically appealing techniques for community detection, network centralities, and others (see references in [22]).", "We theoretically prove that the new lower bound is tighter than the first-order lower bound.", "We also show that our new lower bound is numerically more accurate than the bound obtained in [16].", "We also present a new epidemic threshold, which corresponds to our lower bound on the decay rate.", "The new epidemic threshold is given in terms of the adjacency matrix of the line graph and the non-backtracking matrix of the given network." 
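Because the new bound is built on the non-backtracking matrix, a brief sketch of its construction from a directed edge list may be helpful; it follows the standard definition ($H_{\\ell m} = 1$ iff edge $\\ell$ ends where edge $m$ starts and $m$ is not the reverse of $\\ell$), and the toy edge list below is illustrative:

```python
import numpy as np

def non_backtracking(edges):
    """Non-backtracking matrix H of a directed edge list:
    H[l, m] = 1 iff edge l ends where edge m starts and m is not the reverse of l."""
    M = len(edges)
    H = np.zeros((M, M))
    for l, (il, jl) in enumerate(edges):
        for m, (im, jm) in enumerate(edges):
            if jl == im and jm != il:
                H[l, m] = 1.0
    return H

# Directed 3-cycle plus its reverse (hypothetical toy graph).
edges = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
H = non_backtracking(edges)
```

For instance, the walk (0,1) -> (1,2) is non-backtracking, so the corresponding entry of H is 1, whereas (0,1) -> (1,0) retraces the edge and gets a 0.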
], [ "Problem statement", "We start with mathematical preliminaries.", "A directed graph is defined as the pair $\\mathcal {G} = (\\mathcal {V}, \\mathcal {E})$ , where $\\mathcal {V} = \\lbrace v_1, \\cdots , v_N\\rbrace $ is a finite ordered set of nodes, $N$ is the number of nodes, and $\\mathcal {E} \\subset \\mathcal {V} \\times \\mathcal {V}$ is a set of directed edges.", "By definition, $(v, v^{\\prime }) \\in \\mathcal {E}$ indicates that there is an edge from $v$ to $v^{\\prime }$ .", "The adjacency matrix of $\\mathcal {G}$ is an $N\\times N$ matrix of which the $(i, j)$ -th entry is equal to 1 if $(v_i, v_j) \\in \\mathcal {E}$ and 0 otherwise.", "An in-neighbor of $v$ is a node $v^{\\prime }$ such that $(v^{\\prime }, v)\\in \\mathcal {E}$ .", "We denote the identity and the zero matrices by $I$ and $O$ , respectively.", "A real matrix $A$ (or a vector as its special case) is said to be nonnegative, denoted by $A\\ge 0$ , if all the entries of $A$ are nonnegative.", "If all the entries of $A$ are positive, then $A$ is said to be positive.", "We say that $A\\le B$ , where $A$ and $B$ are of the same dimension, whenever $B-A\\ge 0$ .", "A square matrix $A$ is said to be Metzler if all its off-diagonal entries are nonnegative [19].", "If $A$ is Metzler, it holds true that $e^{At} \\ge 0$ for all $t\\ge 0$ [19].", "For a Metzler matrix $A$ , the maximum real part of the eigenvalues of $A$ is denoted by $\\lambda _{\\max }(A)$ .", "For any matrix $A$ , the spectral radius is the largest absolute value of its eigenvalues and denoted by $\\rho (A)$ .", "We study the stochastic SIS model on networks, which is also known as the contact process in the probability theory literature [6].", "This model is defined as follows: let $\\mathcal {G} = (\\mathcal {V}, \\mathcal {E})$ be a directed graph.", "At any given continuous time $t \\ge 0$ , each node is in one of the two possible states, namely, susceptible (i.e.", "healthy) or infected.", "An infected node $v_i$ 
stochastically transitions to the susceptible state at a constant instantaneous rate of $\\delta _i > 0$ , which is called the recovery rate of node $v_i$ .", "Whenever $v_i$ is susceptible and its in-neighbor $v_j$ is infected, then $v_j$ stochastically and independently infects $v_i$ at a constant instantaneous rate of $\\beta _{ji}$ .", "We call $\\beta _{ji} > 0$ the infection rate.", "Note that the present SIS model effectively accommodates directed and weighted networks because the infection rate $\\beta _{ij}$ is allowed to depend on $v_i$ and $v_j$ .", "The SIS model is a continuous-time Markov process with $2^N$ possible states [11], [4], [5] and has a unique absorbing state in which all the $N$ nodes are susceptible.", "Because this absorbing state is reachable from any other state, the dynamics of the SIS model reaches the disease-free absorbing equilibrium in finite time with probability one.", "The aim of the present paper is to study how fast this disease-free equilibrium is reached in expectation.", "This can be quantified via the following definition: Definition 1 Let $p_i(t)$ be the probability that the $i$ th node is infected at time $t$ .", "The decay rate of the SIS model is defined by $\\gamma = - \\limsup _{t\\rightarrow \\infty } \\frac{\\log \\sum _{i=1}^N p_i(t)}{t},$ where all nodes are assumed to be infected at $t=0$ .", "Definition REF states that $\\sum _{i=1}^N p_i(t)$ , which is equal to the expected number of infected nodes at time $t$ , roughly decays exponentially in time as $\\propto e^{-\\gamma t}$ .", "Because the number of infected nodes always becomes zero in finite time, the SIS model always has a positive decay rate (potentially close to zero), even if the infection rate is large.", "The decay rate has theoretically been studied in continuous-time [11] and discrete-time [10] SIS models and is closely related to other quantities of interest, such as the epidemic threshold [11] and the mean time to absorption [9].",
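For very small networks, the decay rate of Definition 1 can be computed exactly by diagonalizing the $2^N$-state Markov generator. The following sketch assumes homogeneous rates; the toy graph and the values of $\\beta$ and $\\delta$ are illustrative:

```python
import numpy as np
from itertools import product

def sis_generator(A, beta, delta):
    """Infinitesimal generator of the exact SIS Markov chain on N nodes.
    States are 0/1 tuples of infection indicators; Q[s, s'] is the rate s -> s'."""
    N = A.shape[0]
    states = list(product([0, 1], repeat=N))
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((2 ** N, 2 ** N))
    for s in states:
        for i in range(N):
            t = list(s)
            if s[i] == 1:                       # recovery of node i at rate delta
                t[i] = 0
                Q[idx[s], idx[tuple(t)]] += delta
            else:                               # infection of node i by infected in-neighbors
                rate = beta * sum(A[j, i] for j in range(N) if s[j] == 1)
                if rate > 0:
                    t[i] = 1
                    Q[idx[s], idx[tuple(t)]] += rate
        Q[idx[s], idx[s]] = -Q[idx[s]].sum()    # diagonal makes each row sum to zero
    return Q

# Complete graph on 3 nodes (toy example).
A = np.ones((3, 3)) - np.eye(3)
Q = sis_generator(A, beta=0.2, delta=1.0)
ev = np.linalg.eigvals(Q)
gamma = -max(e.real for e in ev if abs(e) > 1e-9)  # exact decay rate
```

The single zero eigenvalue corresponds to the disease-free absorbing state; the decay rate is the modulus of the largest real part among the remaining eigenvalues, which is what the bounds discussed below approximate for large $N$.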
"However, exact computation of the decay rate is computationally demanding in practice.", "Even in the homogeneous case, where all nodes share the same infection and recovery rates, the decay rate equals the modulus of the largest real part of the non-zero eigenvalues of a $2^N\\times 2^N$ matrix representing the infinitesimal generator of the Markov chain [11].", "Due to the difficulty of its computation, several approaches have been proposed to bound the decay rate.", "A first-order lower bound, which corresponds to the so-called quenched mean-field approximation [4], is derived as follows [9], [13]: Let $\\mathbf {p}(t) = \\left[p_1(t), \\ldots , p_N(t)\\right]^{\\top }$ , where $\\top $ represents the matrix transposition.", "We define the $N\\times N$ matrices $B$ and $D$ by $B_{ij} ={\\left\\lbrace \\begin{array}{ll}\\beta _{ij},& \\mbox{if $(v_i, v_j)\\in \\mathcal {E}$,}\\\\0,&\\mbox{otherwise,}\\end{array}\\right.", "}$ and $D = \\text{diag}(\\delta _1, \\ldots , \\delta _N),$ where $\\text{diag}(\\alpha _1, \\ldots , \\alpha _N)$ is the $N\\times N$ diagonal matrix whose diagonal elements are equal to $\\alpha _1$ , $\\ldots $ , $\\alpha _N$ .", "Note that matrix $B$ fully contains the information about the adjacency matrix of $\\mathcal {G}$ .", "Then, one can show that $\\mathbf {p}(t)\\le e^{(B^\\top -D)t} \\mathbf {p}(0)$ , which implies that $\\gamma \\ge \\gamma _1 \\equiv -\\lambda _{\\max }(B^\\top - D),$ where we will call $\\gamma _1$ the first-order lower bound.", "Although this lower bound is computationally efficient to find, there can be a large discrepancy between $\\gamma _1$ and the true decay rate $\\gamma $  [16].", "A second lower bound on the decay rate was proposed in a recent study [16] and summarized in .", "This second bound depends on the leading eigenvalue of an $N^2 \\times N^2$ matrix, which is computationally demanding when $N$ is relatively large.", "In this paper, we propose an alternative lower bound on the decay rate that 
is computationally efficient, provably more accurate than the first-order bound, and numerically tighter than the second bound described in ." ], [ "A lower bound on the decay rate", "To state our mathematical results, we label the directed edges of a given network $\\mathcal {G}$ as $\\lbrace e_1, \\ldots , e_M\\rbrace $ , where the $\\ell $ th edge ($1\\le \\ell \\le M$ ) is represented by $e_\\ell = (i_\\ell , j_\\ell )$ , i.e., the edge is directed from node $v_{i_\\ell }$ to node $v_{j_\\ell }$ .", "Although we use notations $i$ and $j$ to represent general nodes $v_i$ and $v_j$ in the following text, $i$ and $j$ with a subscript will exclusively represent the starting and terminating nodes of an edge, thus avoiding confusions.", "Define the incidence matrix $C\\in \\mathbb {R}^{N\\times M}$ of the network $\\mathcal {G}$ by [20], [21] $C_{i\\ell } = {\\left\\lbrace \\begin{array}{ll}1, & \\mbox{if $j_\\ell = i$},\\\\-1, & \\mbox{if $i_\\ell = i$},\\\\0, & \\mbox{otherwise}.\\end{array}\\right.", "}$ Also, define the non-backtracking matrix $H\\in \\mathbb {R}^{M\\times M}$ of $\\mathcal {G}$ by [17], [18] $H_{\\ell m} = {\\left\\lbrace \\begin{array}{ll}1, & \\mbox{if $j_\\ell = i_m$ and $j_m \\ne i_\\ell $},\\\\0, & \\mbox{otherwise.}\\end{array}\\right.", "}$ The main result of the present paper is stated as follows: Theorem 2 Define the $(N+M)\\times (N+M)$ Metzler matrix $\\mathcal {A} = \\begin{bmatrix}-D & C_+ B^{\\prime }\\\\D_2^{\\prime } C_-^\\top & H^\\top B^{\\prime } - B^{\\prime } - D_1^{\\prime } - D_2^{\\prime }\\end{bmatrix},$ where $B^{\\prime } =& \\text{diag}(\\beta _{i_1 j_1}, \\ldots , \\beta _{i_M j_M}),\\\\D_1^{\\prime } =& \\text{diag}(\\delta _{i_1}, \\ldots , \\delta _{i_M}),\\\\D_2^{\\prime } =& \\text{diag}(\\delta _{j_1}, \\ldots , \\delta _{j_M}),$ $C_+ = \\max (C, 0)$ , and $C_- = \\max (-C, 0)$ ; $C_+$ and $C_-$ denote the positive and negative parts of the incidence matrix $C$ , respectively.", "Then, we obtain the following 
lower bound on the decay rate: $\\gamma \\ge \\gamma _2 \\equiv -\\lambda _{\\max }(\\mathcal {A}).$ Define the binary variable $x_i(t)$ such that $x_i(t)=0$ or $x_i(t)=1$ if node $v_i$ is susceptible or infected at time $t$ , respectively.", "The variables $x_1(t)$ , $\\ldots $ , $x_N(t)$ obey a system of stochastic differential equations with Poisson jumps, and the expectation $p_i(t) = E[x_i(t)]$ obeys $\\frac{dp_i}{dt}&= \\left( \\sum _{j=1}^N E[(1-x_i)x_j]\\beta _{ji} \\right) - \\delta _i E[x_i]\\\\&= \\left( \\sum _{j=1}^N \\beta _{ji} q_{ji} \\right) - \\delta _i p_i,$ where $q_{ji}(t) = E[x_j(t)(1-x_i(t))]$ is equal to the joint probability that node $v_j$ is infected and node $v_i$ is susceptible at time $t$ .", "Using the identities $\\sum _{j=1}^N \\beta _{ji} q_{ji}=\\sum _{\\ell =1; j_\\ell = i}^{M} \\beta _{i_\\ell j_\\ell } q_{i_\\ell j_\\ell } = \\sum _{\\ell =1}^{M} [C_+]_{i\\ell }[B^{\\prime }]_{\\ell \\ell } q_{i_\\ell j_\\ell },$ one obtains $\\frac{dp_i}{dt}= \\left(\\sum _{\\ell =1}^{M} [C_+ B^{\\prime }]_{i\\ell } q_{i_\\ell j_\\ell } \\right) - \\delta _i p_i \\quad (i \\in \\lbrace 1, \\ldots , N\\rbrace ).$ Equation (REF ) is equivalent to $\\frac{d\\mathbf {p}}{dt} = C_+ B^{\\prime } \\mathbf {q} - D\\mathbf {p},$ where we remind that $\\mathbf {p}(t) = \\left[p_1(t), \\ldots , p_N(t)\\right]^{\\top }$ and define $\\mathbf {q}(t) \\equiv \\left[ q_{i_1 j_1}(t), \\ldots ,q_{i_M j_M}(t) \\right]^{\\top }.$ Using the notation $p_{ij}(t) \\equiv E[x_i(t) x_j(t)]$ , one obtains $\\frac{dq_{ij}}{dt}&=- \\left(\\sum _{k=1}^N E[x_i (1-x_j) x_k] \\beta _{kj}\\right) + \\delta _j E[x_ix_j]\\\\ &\\quad + \\left(\\sum _{k=1}^N E[(1-x_i) (1-x_j)x_k] \\beta _{ki}\\right) - \\delta _i E[x_i (1-x_j)] \\\\&\\le - \\beta _{ij} q_{ij} + \\delta _j p_{ij}+ \\left(\\sum _{k=1; k\\ne j}^N \\beta _{ki} q_{ki}\\right) - \\delta _i q_{ij}.$ The first term on the right-hand side of the first line in Eq.", "(REF ) represents the rate at which node $v_j$ is 
infected when node $v_i$ is infected and node $v_j$ is susceptible; the second term represents the rate at which $v_j$ recovers when both $v_i$ and $v_j$ are infected; the third term represents the rate at which $v_i$ is infected when both $v_i$ and $v_j$ are susceptible; the fourth term represents the rate at which $i$ recovers when $v_i$ is infected and $v_j$ is susceptible.", "To derive the last inequality in Eq.", "(REF ), for the first term on the right-hand side, we ignored all the $k$ values but $k=i$ in the summation and used $x_i^2 = x_i$ .", "For the third term on the right-hand side, we used $E[(1-x_i) (1-x_j)x_k] \\le E[(1-x_i) x_k]$ .", "By combining Eq.", "(REF ) and $p_{ij} = E[x_ix_j] = E[x_i] - E[x_i(1-x_j)] = p_i - q_{ij}$ , one obtains $\\frac{dq_{i_\\ell j_\\ell }}{dt}\\le - (\\beta _{i_\\ell j_\\ell } + \\delta _{i_\\ell } +\\delta _{j_\\ell }) q_{i_\\ell j_\\ell } + \\delta _{j_\\ell } p_{i_\\ell } + \\sum _{k=1; k\\ne j_\\ell }^N \\beta _{ki_\\ell } q_{k i_\\ell }.$ By combining Eq.", "(REF ) with $[C_-^\\top \\mathbf {p}]_\\ell = \\sum _{i=1}^N [C_-]_{i \\ell } p_i= p_{i_\\ell }$ and $\\begin{multlined}[H^\\top B^{\\prime } \\mathbf {q}]_\\ell = \\sum _{m=1}^{M} H_{m\\ell } B^{\\prime }_{mm} q_m= \\sum ^M_{\\begin{array}{c}m=1; j_m = i_\\ell ,\\\\ j_\\ell \\ne i_m\\end{array}} \\beta _{i_m j_m} q_{i_m j_m}\\\\= \\sum _{k=1; k\\ne j_\\ell }^N \\beta _{k i_\\ell } q_{k i_\\ell },\\end{multlined}$ one obtains $\\frac{dq_{i_\\ell j_\\ell }}{dt}\\le &-([B^{\\prime }]_{\\ell \\ell } + [D^{\\prime }_1]_{\\ell \\ell } + [D^{\\prime }_2]_{\\ell \\ell }) q_{i_\\ell j_\\ell } + [D^{\\prime }_2]_{\\ell \\ell }[C_-^\\top \\mathbf {p}]_\\ell + [H^\\top B^{\\prime } \\mathbf {q}]_\\ell \\\\=&-[B^{\\prime } \\mathbf {q}]_{\\ell } - [D^{\\prime }_1 \\mathbf {q}]_{\\ell } - [D^{\\prime }_2 \\mathbf {q}]_{\\ell }+ [D^{\\prime }_2 C_-^\\top \\mathbf {p}]_\\ell + [H^\\top B^{\\prime } \\mathbf {q}]_\\ell .$ By stacking this inequality with respect to $\\ell $ , 
one observes that there exists an $\\mathbb {R}^{M}_+$ -valued function $\\mathbf {\\epsilon }(t)$ defined for $t\\in [0, \\infty )$ such that $\\frac{d \\mathbf {q}}{dt} = D_2^{\\prime } C_-^\\top \\mathbf {p} + (H^\\top B^{\\prime } - B^{\\prime } - D_1^{\\prime } - D^{\\prime }_2) \\mathbf {q} - \\mathbf {\\epsilon }.$ Equations (REF ) and (REF ) imply $\\frac{d}{dt}\\begin{bmatrix}\\mathbf {p}\\\\ \\mathbf {q}\\end{bmatrix}=\\mathcal {A} \\begin{bmatrix}\\mathbf {p}\\\\ \\mathbf {q}\\end{bmatrix} - \\begin{bmatrix}\\mathbf {0} \\\\ \\mathbf {\\epsilon }\\end{bmatrix}.$ Because $\\mathcal {A}$ is Metzler and $\\mathbf {\\epsilon }(t)$ is entry-wise nonnegative for every $t\\ge 0$ , one obtains $\\begin{bmatrix}\\mathbf {p}(t) \\\\ \\mathbf {q}(t)\\end{bmatrix}&=e^{\\mathcal {A} t}\\begin{bmatrix}\\mathbf {p}(0) \\\\ \\mathbf {q}(0)\\end{bmatrix}-\\int _0^te^{\\mathcal {A}(t-\\tau )}\\begin{bmatrix}\\mathbf {0} \\\\ \\mathbf {\\epsilon }(\\tau )\\end{bmatrix}\\,{\\rm d}\\tau \\\\&\\le e^{\\mathcal {A} t}\\begin{bmatrix}\\mathbf {p}(0) \\\\ \\mathbf {q}(0)\\end{bmatrix},$ which proves Eq.", "(REF ).", "Next, to prove that the new lower bound is tighter than the first-order lower bound, we start by stating (and proving) a convenient adaptation of the classical Perron–Frobenius theorem [25] for nonnegative matrices to the case of Metzler matrices.", "Lemma 3 Let $M$ be an irreducible Metzler matrix.", "There exists a positive vector $\\mathbf {v}$ such that $M \\mathbf {v} = \\lambda _{\\max }(M)\\mathbf {v}$ .", "Assume that there exist a real number $\\mu $ and a nonzero vector $\\mathbf {u}\\ge 0$ such that $M\\mathbf {u} \\le \\mu \\mathbf {u}$ and $M\\mathbf {u}\\ne \\mu \\mathbf {u}$ .", "Then, $\\lambda _{\\max }(M)< \\mu $ .", "Let $\\nu $ be a real number such that matrix $M^{\\prime } = M + \\nu I$ is nonnegative.", "Note that the spectral radius of $M^{\\prime }$ satisfies $\\rho (M^{\\prime }) = \\lambda _{\\max }(M)+\\nu $ .", "To prove the first 
statement, we use the Perron–Frobenius theorem (see Fact 5.b in [25]), which guarantees that $M^{\\prime }$ has a positive eigenvector $\\mathbf {v}$ corresponding to the eigenvalue $\\rho (M^{\\prime })$ .", "The vector $\\mathbf {v}$ satisfies $M\\mathbf {v} = M^{\\prime }\\mathbf {v} - \\nu \\mathbf {v} = \\left[\\rho (M^{\\prime })-\\nu \\right] \\mathbf {v} = \\lambda _{\\max }(M) \\mathbf {v}$ .", "To prove the second statement, assume that a nonzero vector $\\mathbf {u}\\ge 0$ satisfies $M\\mathbf {u} \\le \\mu \\mathbf {u}$ and $M\\mathbf {u}\\ne \\mu \\mathbf {u}$ .", "Then, the nonnegative and irreducible matrix $M^{\\prime }$ satisfies $M^{\\prime } \\mathbf {u} \\le (\\mu + \\nu ) \\mathbf {u}$ and $M^{\\prime } \\mathbf {u} \\ne (\\mu +\\nu ) \\mathbf {u}$ .", "Therefore, the Perron–Frobenius theorem (see Fact 7.b in [25]) guarantees that $\\rho (M^{\\prime })< \\mu +\\nu $ , which yields $\\lambda _{\\max }(M) = \\rho (M^{\\prime })-\\nu < \\mu $ .", "The following theorem proves that the bound proposed in Eq.", "(REF ) improves the first-order bound given by Eq.", "(REF ).", "Theorem 4 If the network is strongly connected, then $\\gamma _2 > \\gamma _1$ .", "Lemma REF .REF implies that the irreducible Metzler matrix $B^\\top - D$ has a positive eigenvector $\\mathbf {v}$ corresponding to the eigenvalue $-\\gamma _1$ , i.e., $(B^\\top - D) \\mathbf {v} = - \\gamma _1 \\mathbf {v}.$ Define the positive $(N+M)$ -dimensional vector $\\mathbf {\\xi }$ as $\\mathbf {\\xi }= \\begin{bmatrix}\\mathbf {v}\\\\ \\mathbf {w}\\end{bmatrix},\\ \\mathbf {w} = C_-^\\top \\mathbf {v}=\\left[ v_{i_1}, \\ldots , v_{i_M}\\right]^{\\top }.$ Let us define $\\mathbf {\\zeta }\\equiv \\mathcal {A} \\mathbf {\\xi }$ and decompose $\\mathbf {\\zeta }$ as $\\mathbf {\\zeta }= \\begin{bmatrix}\\mathbf {\\zeta }_1 \\\\ \\mathbf {\\zeta }_2\\end{bmatrix},$ where $\\mathbf {\\zeta }_1$ and $\\mathbf {\\zeta }_2$ are $N$ - and $M$ -dimensional vectors, respectively.", "Simple 
algebraic manipulations yield $B^\\top = C_+ B^{\\prime } C_-^\\top .$ Therefore, one obtains $C_+ B^{\\prime } \\mathbf {w} = C_+ B^{\\prime } C_-^\\top \\mathbf {v} = B^\\top \\mathbf {v}$ .", "This implies that $\\mathbf {\\zeta }_1 &= -D \\mathbf {v} + C_+ B^{\\prime } \\mathbf {w} \\\\&=-D \\mathbf {v} + B^\\top \\mathbf {v} \\\\&=- \\gamma _1 \\mathbf {v}.$ One also obtains $\\mathbf {\\zeta }_2 &= D_2^{\\prime }C_-^\\top \\mathbf {v} + (H^\\top B^{\\prime } - B^{\\prime } - D_1^{\\prime } - D_2^{\\prime }) \\mathbf {w}\\\\&= (H^\\top B^{\\prime } - B^{\\prime } - D_1^{\\prime }) \\mathbf {w}\\\\&\\le (A_{L(\\mathcal {G})}^\\top B^{\\prime }-B^{\\prime }-D_1^{\\prime })\\mathbf {w},$ where $A_{L(\\mathcal {G})}$ denotes the adjacency matrix of the line graph $L(\\mathcal {G})$ defined by $[A_{L(\\mathcal {G})}]_{\\ell m} = {\\left\\lbrace \\begin{array}{ll}1,& \\mbox{if {$j_\\ell = i_m$}} ,\\\\0,& \\mbox{otherwise}.\\end{array}\\right.", "}$ Matrix $A_{L(\\mathcal {G})}$ satisfies $A_{L(\\mathcal {G})}^{{\\top }} = C_-^\\top C_+$ because $[C_-^\\top C_+]_{\\ell m}&= \\sum _{i=1}^N [C_-]_{i\\ell } [C_+]_{im}\\\\&={\\left\\lbrace \\begin{array}{ll}1,&\\mbox{if $i_\\ell = j_m$},\\\\0,&\\mbox{otherwise.}\\end{array}\\right.", "}$ Because simple algebraic manipulations yield $C_-^\\top D = D_1^{\\prime } C_-^\\top $ , using Eqs.", "(REF ), (REF ), and (REF ), one obtains $(A_{L(\\mathcal {G})}^\\top B^{\\prime } - D_1^{\\prime }) \\mathbf {w}&=C_-^\\top C_+ B^{\\prime } C_-^\\top \\mathbf {v} - D_1^{\\prime } C_-^\\top \\mathbf {v}\\\\&=C_-^\\top (B^\\top - D) \\mathbf {v}\\\\&= -\\gamma _1 C_-^\\top \\mathbf {v}\\\\&=-\\gamma _1 \\mathbf {w}.$ Using Eqs.", "(REF ) and (REF ), one obtains $\\mathbf {\\zeta }_2 \\le -\\gamma _1 \\mathbf {w} - B^{\\prime } \\mathbf {w}.$ Because any entry of $B^{\\prime } \\mathbf {w}$ is positive, Eqs.", "(REF ) and (REF ) guarantee that positive vector $\\mathbf {\\xi }$ satisfies $\\mathcal {A} \\mathbf {\\xi }\\le - \\gamma _1 
\\mathbf {\\xi }$ and $\\mathcal {A} \\mathbf {\\xi }\\ne - \\gamma _1 \\mathbf {\\xi }$ .", "Because $\\mathcal {A}$ is irreducible, as will be shown later, Lemma REF guarantees that $\\lambda _{\\max }(\\mathcal {A}) < - \\gamma _1$ , which implies that $\\gamma _2 > \\gamma _1$ .", "Finally, let us show the irreducibility of matrix $\\mathcal {A}$ or, equivalently, the irreducibility of $\\mathcal {A}^\\top $ .", "We regard the matrix $\\mathcal {A}^\\top $ as the adjacency matrix of a directed graph on $N+M$ nodes denoted by $\\mathcal {G}^{\\prime }$ .", "We label the nodes of $\\mathcal {G}^{\\prime }$ as $p_1$ , $\\ldots $ , $p_N$ , $q_{i_1 j_1}$ , $\\ldots $ , $q_{i_M j_M}$ .", "The first term on the right-hand side of Eq.", "(REF ) implies that $\\mathcal {G}^{\\prime }$ has an edge $(q_{i_\\ell j_\\ell }, p_{j_\\ell })$ for all $\\ell $ , which corresponds to $C_+ B^{\\prime }$ in Eq.", "(REF ).", "The second term on the right-hand side of Eq.", "(REF ) implies that $\\mathcal {G}^{\\prime }$ has an edge $(p_{i_\\ell }, q_{i_\\ell j_\\ell })$ for all $\\ell $ , which corresponds to $D_2^{\\prime } C_-^{\\top }$ in Eq.", "(REF ).", "To show that $\\mathcal {G}^{\\prime }$ is strongly connected, we first consider an arbitrary ordered pair of nodes $p_i$ and $p_j$ in $\\mathcal {G}^{\\prime }$ .", "Let us take a path $v_i = v_{\\iota (0)}$ , $v_{\\iota (1)}$ , ..., $v_{\\iota (s)}=v_j$ in the original graph $\\mathcal {G}$ .", "Then, from the above observation, we see that the graph $\\mathcal {G}^{\\prime }$ contains the path $p_{i} = p_{\\iota (0)}$ , $q_{\\iota (0)\\iota (1)}$ , $p_{\\iota (1)}$ , $q_{\\iota (1)\\iota (2)}$ , ..., $q_{\\iota (s-1)\\iota (s)}$ , $p_{\\iota (s)}=p_j$ .", "Likewise, for an arbitrary ordered pair of nodes $p_i$ and $q_{i_\\ell j_\\ell }$ in $\\mathcal {G}^{\\prime }$ , there is a path in $\\mathcal {G}^{\\prime }$ from $p_i$ to $p_{i_\\ell }$ .", "By appending edge $(p_{i_\\ell }, q_{i_\\ell j_\\ell })$ to the end of this
path, one obtains a path from $p_i$ to $q_{i_\\ell j_\\ell }$ .", "A path from arbitrary $q_{i_\\ell j_\\ell }$ to $p_j$ and one from $q_{i_\\ell j_\\ell }$ to $q_{i_{\\ell ^{\\prime }} j_{\\ell ^{\\prime }} }$ can be similarly constructed.", "Therefore, a path exists between any pair of nodes in $\\mathcal {G}^{\\prime }$ ." ], [ "Epidemic threshold", "In this section, we assume that $\\beta _{ij} = \\beta $ and $\\delta _i = \\delta $ , where $i, j \\in \\lbrace 1, \\ldots , N\\rbrace $ and $\\beta , \\delta > 0$ , and derive conditions under which the expected number of infected individuals decays exponentially fast.", "It holds true that having $\\gamma _1 < 0$ in Eq.", "(REF ) is equivalent to the well-known epidemic threshold $\\beta /\\delta > 1/ \\lambda _{\\max }(A)$ [26], [10], [4], [5].", "Likewise, Theorem REF provides a tighter epidemic threshold given by $\\left(\\beta /\\delta \\right)_{\\rm c} = \\max \\lbrace \\beta /\\delta \\mid \\gamma _2 \\ge 0\\rbrace ,$ where $\\gamma _2$ is defined in Eq.", "(REF ).", "Our following corollary provides an explicit expression of the epidemic threshold in terms of the adjacency matrix of the line graph and the non-backtracking matrix $H$ .", "Corollary 5 Define the matrix $A_{L(\\mathcal {G})}$ by Eq.", "(REF ).", "Then, $\\left(\\frac{\\beta }{\\delta }\\right)_{\\rm c} = \\frac{2}{\\rho (A_{L(\\mathcal {G})}+H)-1}.$ We decompose $\\mathcal {A}$ such that $\\mathcal {A} = R + P,$ where $R = \\begin{bmatrix}-\\delta I & O\\\\\\delta C_-^\\top & -\\beta I - 2\\delta I\\end{bmatrix}$ and $P = \\begin{bmatrix}O & \\beta C_+\\\\O & \\beta H^\\top \\end{bmatrix}.$ Matrix $R$ is a Metzler matrix, all the eigenvalues of $R$ have negative real parts, and matrix $P$ is nonnegative.", "Therefore, Theorem 2.11 in Ref.", "[27] implies that $\\lambda _{\\max }(\\mathcal {A}) < 0$ if and only if $\\rho (R^{-1} P ) < 1$ .", "Because $R^{-1}P = &\\begin{bmatrix}-\\dfrac{1}{\\delta }I & O\\\\-\\dfrac{1}{\\beta +2\\delta 
}C_-^{\\top } & - \\dfrac{1}{\\beta + 2\\delta }I\\end{bmatrix}P\\\\= &\\begin{bmatrix}O & -\\dfrac{\\beta }{\\delta }C_+ \\\\ O & -\\dfrac{\\beta }{\\beta + 2\\delta }(C_-^\\top C_+ + H^\\top )\\end{bmatrix},$ one obtains $\\rho (R^{-1} P) &= \\frac{\\beta }{\\beta +2\\delta }\\rho (C_-^\\top C_+ + H^\\top ) \\\\&= \\frac{\\beta }{\\beta +2\\delta }\\rho (A_{L(\\mathcal {G})}^\\top + H^\\top )\\\\&=\\frac{\\beta }{\\beta +2\\delta }\\rho (A_{L(\\mathcal {G})} + H),$ where we used Eq.", "(REF ).", "Therefore, $\\rho (R^{-1} P) < 1$ if and only if $\\frac{\\beta }{\\delta } < \\frac{2}{\\rho (A_{L(\\mathcal {G})} + H) - 1},$ which is equivalent to Eq.", "(REF ).", "Remark: Corollary REF does not require strong connectedness (i.e., irreducibility of the adjacency matrix) of the network."  ], [ "Numerical results", "In this section, we carry out numerical simulations of the stochastic SIS dynamics for several networks to assess the tightness of the different lower bounds on the decay rate.", "In the following numerical simulations, we set $\\beta _{ij} = \\beta $ and $\\delta _i = \\delta $ , where $i, j \\in \\lbrace 1, \\ldots , N\\rbrace $ , for simplicity.", "We further assume that $\\delta = 1$ without loss of generality (because changing $\\beta $ and $\\delta $ simultaneously by the same factor is equivalent to rescaling the time variable without changing $\\beta $ or $\\delta $ ).", "We ran the stochastic SIS dynamics $10^4$ times starting from the initial condition in which all nodes are infected.", "For each run of the simulations, we measured the number of infected individuals at every integer time (including time 0) until the infection dies out or the maximum time, which is set to $5\\times 10^4$ , is reached.", "Then, at each integer time, we summed the number of infected nodes over all the runs and divided it by $N$ and by the number of runs ($= 10^4$ ), thus obtaining the average fraction of infected nodes, i.e., $\\rho (t)\\equiv \\sum _{i=1}^N p_i(t)/N$
, where $t=0, 1, \\ldots $ .", "We calculated the decay rate from the observed $\\lbrace \\rho (t) : t=0, 1, \\ldots \\rbrace $ as follows: because the fluctuations in $\\rho (t)$ are expected to be large when $\\rho (t)$ is small, we identified the first integer time at which $\\rho (t)$ is less than $10^{-4}$ and discarded $\\rho (t)$ at this and all larger $t$ values.", "Then, because $\\rho (t)$ is expected to decay exponentially in $t$ , we calculated a linear regression between $\\log \\rho (t)$ and $t$ at the remaining integer values of $t$ .", "The negative of the slope of this regression provides a numerical estimate of the decay rate.", "We confirmed that the Pearson correlation coefficient in the linear regression was at least $0.958$ for all networks and all $\\beta $ values.", "The Pearson correlation was typically larger than $0.99$ .", "We used eight undirected and unweighted networks to compare the numerically obtained decay rate and the rigorous lower bounds.", "The lower bounds to be compared are $\\gamma _1$ , $\\gamma _2$ , and the one obtained in our previous study [16], which is denoted by $\\gamma _2^{\\prime }$ (see for a summary).", "Four of the eight networks used were created by generative models with $N=100$ nodes.", "First, we generated a regular random graph in which all nodes had degree six, resulting in 300 undirected edges (therefore, $M=600$ directed edges).", "Second, we used the Barabási-Albert (BA) model to generate a power-law degree distribution with an exponent of 3 when $N$ is large [28].", "We set the parameters $m_0=3$ and $m=3$ , where $m_0$ is the initial number of nodes forming a clique in the process of growing a network, and $m$ is the number of edges that each new node initially brings into the network.", "With these parameter values, the mean degree is approximately equal to $2m = 6$ .", "The generated network had 294 undirected edges.", "Third, we used a cycle graph, where each node had degree
two (by definition), and there were 100 undirected edges.", "These three models lack the community structure that many empirical contact networks have.", "Therefore, as a fourth network, we used the Lancichinetti–Fortunato–Radicchi (LFR) model that can generate networks with community structure [29].", "The LFR model creates networks having a heterogeneous degree distribution and a heterogeneous distribution of community size.", "A small value of the parameter $\\mu $ corresponds to a strong community structure.", "We set $\\mu =0.1$ .", "We set the mean degree to six, the largest degree to $N/4 = 25$ , the power-law exponent for the degree distribution to two, and the power-law exponent for the distribution of community size to one.", "The network had 319 undirected edges.", "We also used four real-world networks, for which we ignored the direction and weight of the edges.", "First, we used the dolphin social network, which has $N=62$ nodes and 159 undirected edges [30].", "A node represents a bottlenose dolphin individual.", "An edge indicates frequent association between two dolphins.", "This network is connected.", "Second, we used the largest connected component (LCC) of a coauthorship network of researchers in network science, which has $N=379$ nodes and 914 undirected edges [31].", "A node represents a researcher publishing in fields related to network science.", "An edge indicates that two researchers have coauthored a paper at least once.", "Third, we used the LCC of an email network, which has $N=1,133$ nodes and $5,451$ undirected edges [32].", "A node represents a member of the University Rovira i Virgili, Tarragona, Spain.", "An edge is an email exchange relationship between a pair of members.", "Fourth, we used the LCC of the hamsterster network, which has $N=1,788$ nodes and $12,476$ undirected edges [33].", "A node represents a user of the website hamsterster.com.", "An edge is a friendship relationship between two users.", "For a range of values of
$\\beta $ , we compare decay rates obtained numerically with the three lower bounds described in this paper for the eight networks mentioned above.", "The results are shown in Fig.", "REF .", "It should be noted that the decay rate and its bounds are equal to one for $\\beta =0$ because we set $\\delta =1$ for normalization.", "The bound $\\gamma _2$ proposed in the present study is considerably tighter than the first-order bound, $\\gamma _1$ , for some networks, in particular, the cycle (Fig.", "REF (c)).", "The improvement tends to be more pronounced for smaller networks.", "We also find that $\\gamma _2$ is tighter than $\\gamma _2^{\\prime }$ for all the networks and infection rates, even though $\\gamma _2$ is easier to calculate than $\\gamma _2^{\\prime }$ .", "For example, for the regular random graph (Fig.", "REF (a)) and the cycle (Fig.", "REF (c)), $\\gamma _2$ is close to the numerically estimated decay rate for small to moderate values of $\\beta $ , which is not the case for either $\\gamma _2^{\\prime }$ or $\\gamma _1$ ."
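The epidemic threshold of the corollary is also straightforward to evaluate numerically. The following self-contained Python sketch (our own illustration; the function names and the choice of a 9-node cycle as test graph are ours, not from the paper) assembles the line-graph adjacency matrix $A_{L(\mathcal{G})}$ and the non-backtracking matrix $H$ from the directed-edge list of an undirected graph, and compares the threshold $2/(\rho(A_{L(\mathcal{G})}+H)-1)$ with the classical estimate $1/\lambda_{\max}(A)$:

```python
def spectral_radius(M, iters=500):
    # Power iteration; sufficient for the small nonnegative matrices used here.
    n = len(M)
    v = [1.0] * n
    r = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in w)
        v = [x / r for x in w]
    return r

def thresholds(edges, n):
    """Classical threshold 1/lambda_max(A) and the improved threshold
    2/(rho(A_L + H) - 1) for an undirected graph on n nodes."""
    # Expand each undirected edge {i, j} into the directed edges (i, j), (j, i).
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    m = len(directed)
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0
    AL = [[0.0] * m for _ in range(m)]  # line graph: edge l -> edge k iff head(l) = tail(k)
    H = [[0.0] * m for _ in range(m)]   # non-backtracking: additionally k is not the reversal of l
    for l, (il, jl) in enumerate(directed):
        for k, (ik, jk) in enumerate(directed):
            if jl == ik:
                AL[l][k] = 1.0
                if il != jk:
                    H[l][k] = 1.0
    S = [[AL[l][k] + H[l][k] for k in range(m)] for l in range(m)]
    return 1.0 / spectral_radius(A), 2.0 / (spectral_radius(S) - 1.0)

# Cycle on 9 nodes: lambda_max(A) = 2 and rho(A_L + H) = 3, so the classical
# threshold is 0.5 while the improved threshold is 1.0.
cycle = [(i, (i + 1) % 9) for i in range(9)]
classical, improved = thresholds(cycle, 9)
print(classical, improved)  # 0.5 1.0
```

For the cycle the improved threshold doubles the classical estimate, consistent with the pronounced improvement of $\gamma_2$ over $\gamma_1$ observed for the cycle in the experiments above.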
], [ "Conclusions", "We have introduced a lower bound on the decay rate of the SIS model on arbitrary directed and weighted networks.", "The new bound is based on a new second-order moment-closure technique aiming to improve both the computational cost and the accuracy of an existing second-order bound.", "It is equal to the leading eigenvalue of an $(N+M)\\times (N+M)$ Metzler matrix depending on the non-backtracking and incidence matrices of the network (Eq.", "(REF )).", "Therefore, for sparse networks, the dimension of this matrix grows quasi-linearly.", "Furthermore, we have shown that the new bound, $\\gamma _2$ , is tighter than the first-order lower bound, $\\gamma _1$ , which is equal to the leading eigenvalue of an $N\\times N$ matrix depending directly on the adjacency matrix.", "Non-backtracking matrices of networks have been employed for analyzing properties of stochastic epidemic processes on networks, such as the epidemic threshold of the SIS model [34], [35] and the susceptible-infected-recovered (SIR) model [36], [37], [38], [39].", "The non-backtracking matrix more accurately describes unidirectional state-transition dynamics, such as the SIR dynamics, than the adjacency matrix does because unidirectional dynamics implies that contagions do not backtrack, i.e.", "if node $v_i$ has infected its neighbor $v_j$ , $v_j$ does not re-infect $v_i$ .", "For the same reason, the non-backtracking matrix also predicts the percolation threshold for networks better than the adjacency matrix [40], [41].", "However, the same logic does not apply to the SIS model, in which re-infection through the same edge can happen indefinitely many times.", "This is the basis of a recent criticism of the application of the non-backtracking matrix to the SIS model [42].", "For some networks, the epidemic threshold of the SIS model that does not take into account backtracking infection paths [34], [35] is not accurate [42].", "Although $\\gamma _2$ and the corresponding epidemic
threshold that we have derived use the non-backtracking matrix, they are mathematical bounds and do not suffer from the inaccuracy caused by the neglect of backtracking infection paths.", "The present study has shown a new and rigorous use of the non-backtracking matrix in understanding the SIS model on networks.", "By following the derivation of the epidemic threshold via $\\gamma _1$ , we derived the epidemic threshold based on $\\gamma _2$ .", "The new epidemic threshold is always larger than that based on $\\gamma _1$ , which is the reciprocal of the largest eigenvalue of the adjacency matrix.", "Because $\\gamma _2$ improves upon $\\gamma _1$ , we expect that the new epidemic threshold is a better estimate than that based on $\\gamma _1$ .", "This point warrants future work.", "Likewise, the eigenvalue statistics for the adjacency matrix of scale-free networks yield intricate relationships between the epidemic threshold based on $\\gamma _1$ and statistics of the node degrees [43].", "How such a result translates to the case of the epidemic threshold based on $\\gamma _2$ also warrants future work." ], [ "Lower bound on the decay rate derived in Ref.
{{cite:f39c5dc7a3f824cf44e1f8a17ece160f039e5d7b}}", "In the proof of Theorem REF , we have used the following inequality for bounding $q_{ij} = E[x_i(1-x_j)]$ (see Eq.", "(REF )): $E[(1-x_i) (1-x_j)x_k] \\le E[(1-x_i) x_k],$ in which the inequality $x_j \\ge 0$ is used; we have presumed that node $j$ is susceptible.", "In contrast, in our previous study [16], we used $E[(1-x_i) (1-x_j)x_k] \\le E[(1-x_j) x_k],$ which was based on $x_i\\ge 0$ .", "The use of Eq.", "(REF ) led to the following lower bound on the decay rate [16]: Theorem 6 Assume that there exist positive numbers $\\beta _1$ , ..., $\\beta _N$ such that $\\beta _{ij} = \\beta _j$ for all $i, j \\in \\lbrace 1, \\ldots , N\\rbrace $ .", "Let $A$ be the adjacency matrix of $\\mathcal {G}$ and its $(i, j)$ th entry be $a_{ij}$ .", "Define the $N^2\\times N^2$ Metzler matrix $\\mathcal {B} = \\begin{bmatrix}-D & \\bigoplus _{i=1}^N (\\beta _i A_{i, \\backslash \\lbrace i\\rbrace })\\\\\\operatornamewithlimits{col}_{1\\le i\\le N}(\\delta _i V_i) &\\ \\ \\bigoplus _{i=1}^N\\left(- \\Gamma _i + \\operatornamewithlimits{col}_{j\\ne i} \\beta _j A_{j, \\backslash \\lbrace i\\rbrace }\\right)\\end{bmatrix},$ where $\\bigoplus _{i=1}^N M_i$ is the block-diagonal matrix containing matrices $M_1$ , $\\ldots $ , $M_N$ as the diagonal blocks, $\\backslash \\lbrace i\\rbrace $ denotes all the columns except the $i$ th column, $V_i\\in \\mathbb {R}^{(N-1)\\times N}$ is the matrix obtained by removing the $i$ th row from the $N\\times N$ identity matrix, $\\Gamma _i =\\text{diag}(\\overline{\\gamma }_{i,1}, \\ldots , \\overline{\\gamma }_{i,i-1}, \\overline{\\gamma }_{i,i+1}, \\ldots , \\overline{\\gamma }_{i,N})$ , and $\\overline{\\gamma }_{i,j} = \\delta _i + \\delta _j +a_{ij}\\beta _i$ .", "Then, the decay rate $\\gamma $ satisfies $\\gamma \\ge \\gamma _2^{\\prime } \\equiv -\\lambda _{\\max }(\\mathcal {B}).$" ], [ "Acknowledgments", "We thank Claudio Castellano for valuable comments on the manuscript.",
"National Science Foundation (CAREER-ECCS-1651433 to V.M.P.)", "and Japan Society for the Promotion of Science (JP18K13777 to M.O.)." ] ]
1906.04269
[ [ "Reinforcement Learning of Minimalist Numeral Grammars" ], [ "Abstract Speech-controlled user interfaces facilitate the operation of devices and household functions for laymen.", "State-of-the-art language technology scans the acoustically analyzed speech signal for relevant keywords that are subsequently inserted into semantic slots to interpret the user's intent.", "In order to develop proper cognitive information and communication technologies, simple slot-filling should be replaced by utterance meaning transducers (UMT) that are based on semantic parsers and a \\emph{mental lexicon}, comprising syntactic, phonetic and semantic features of the language under consideration.", "This lexicon must be acquired by a cognitive agent during interaction with its users.", "We outline a reinforcement learning algorithm for the acquisition of the syntactic morphology and arithmetic semantics of English numerals, based on minimalist grammar (MG), a recent computational implementation of generative linguistics.", "Number words are presented to the agent by a teacher in the form of utterance meaning pairs (UMP), where the meanings are encoded as arithmetic terms from a suitable term algebra.", "Since MG encodes universal linguistic competence through inference rules, thereby separating innate linguistic knowledge from the contingently acquired lexicon, our approach unifies generative grammar and reinforcement learning, hence potentially resolving the still pending Chomsky-Skinner controversy."
], [ "Introduction", "Speech-controlled user interfaces such as Amazon's Alexa, Apple's Siri or Cortana by Microsoft substantially facilitate the operation of devices and household functions for laymen.", "Instead of using keyboard and display as input-output interfaces, the operator pronounces requests or instructions to the device and listens to its responses.", "State-of-the-art language technology scans the acoustically analyzed speech signal for relevant keywords that are subsequently inserted into semantic frames [1] to interpret the user's intent.", "This slot filling procedure [2], [3], [4] is based on large language corpora that are evaluated by standard machine learning methods, such as conditional random fields [3] or deep neural networks [4].", "The necessity to overcome traditional slot filling techniques by proper cognitive information and communication technologies has already been emphasized by Allan [5].", "His research group trains semantic parsers from large language data bases such as WordNet or VerbNet that are constrained by hand-crafted expert knowledge and semantic ontologies [2], [6], [7].", "One particular demand on cognitive user interfaces is the processing and understanding of numerals, e.g.", "in instructions like “increase the heating to 22.5 degrees”, where the device might respond with a sensor registration: “the current room temperature is 18.3 degrees” [8].", "Numerals are an important research domain in cognitive linguistics and language technology [9], [10], [11], [12], [13], [14].", "They exhibit typological differences among languages but share a simple arithmetic semantics.", "Illustrative examples are different morphologies in German ($\\texttt {zweiundvierzig} = 2 + 40$ ) or English ($\\texttt {fourtytwo} = 40;2$ ), and also different base systems in German ($\\texttt {achtzig} = 8 \\times 10$ ) or French ($\\texttt {quatre\\text{-}vingts} = 4 \\times 20$ ) [11].", "Linguistically, numerals are
regarded as modifiers [12] with a particular syntactic morphology that should be described by a suitable grammar formalism.", "This grammar must store numeral morphemes together with their arithmetic semantics in a data base, called the mental lexicon.", "It should be complex enough to account for the wealth of linguistic typology and constrained enough to exclude ungrammatical compositions such as $\\texttt {zweizig}$ in German or $\\texttt {twoty}$ in English [11].", "Recent research in computational linguistics has demonstrated that quite different grammar formalisms, such as categorial grammar [15], tree-adjoining grammar [16], multiple context free grammar (MCFG) [17], range concatenation grammar [18], and minimalist grammar [19], [20] converge toward universal description models [21], [22].", "Minimalist grammar has been developed by Stabler [19] to mathematically codify Chomsky's Minimalist Program [23] in the generative grammar framework.", "A minimalist grammar (MG) consists of a mental lexicon storing linguistic signs as arrays of syntactic, phonetic and semantic features, on the one hand, and of two structure-building functions, called “merge” and “move”, on the other hand.", "Syntactic features in the lexicon are, e.g., the linguistic base categories noun ($\\texttt {n}$ ), verb ($\\texttt {v}$ ), adjective ($\\texttt {a}$ ), or, in the present context, numeral ($\\texttt {num}$ ).", "These are syntactic heads selecting other categories either as complements or as adjuncts.", "The structure generation is controlled by selector categories that are “merged” together with their selected counterparts.", "Moreover, one distinguishes between licensors and licensees, triggering the movement of maximal projections.", "An MG does not comprise any phrase structure rules; all syntactic information is encoded in the feature array of the mental lexicon.", "Furthermore, syntax and compositional semantics can be combined via the lambda calculus [24], [25], while MG 
parsing can be implemented by compilation into an equivalent MCFG [26].", "One important property of MGs is their effective learnability in the sense of Gold's formal learning theory [27].", "Specifically, MG can be acquired by positive examples [28], [29] from linguistic dependence graphs [30], [31], which is consistent with psycholinguistic findings on early-child language acquisition [32], [33], [34].", "However, learning through positive examples only could easily lead to overgeneralization.", "According to Pinker [33], this could effectively be avoided through reinforcement learning [35], [36].", "Although there is little psycholinguistic evidence for reinforcement learning in human language acquisition [37], [38], we outline a machine learning algorithm for the acquisition of an MG mental lexicon of numeral morphology and semantics through reinforcement learning in this contribution." ], [ "Numeral Grammar", "Our language acquisition approach for numeral grammar combines methods from computational linguistics, formal logic, and abstract algebra.", "Our algorithm starts from utterance meaning pairs (UMPs) $u = \\langle e, \\sigma \\rangle \\:,$ where $e \\in E$ is the spoken or written utterance, given as the exponent of a linguistic sign [39].", "Technically, exponents are strings taken from the Kleene hull of some finite alphabet, $A$ , i.e.", "$E = A^*$ .", "The sign's semantics $\\sigma \\in \\Sigma $ is a logical term, usually expressed by means of lambda calculus."
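To make the UMP datatype concrete, here is a minimal Python sketch (our own illustration; the curried operators anticipate the arithmetic term algebra of the following subsection, and all identifiers are our choices): the semantics slot holds an executable arithmetic term, so the meaning of fourtytwo evaluates to 42.

```python
from collections import namedtuple

# A sign's utterance-meaning pair <e, sigma>: the exponent e is a string over
# the alphabet A, the semantics sigma an (executable) arithmetic term.
UMP = namedtuple("UMP", ["exponent", "semantics"])

# Curried (Schoenfinkel) versions of + and x: PLUS(y)(x) = x + y, TIMES(y)(x) = x * y.
PLUS = lambda y: lambda x: x + y
TIMES = lambda y: lambda x: x * y

# sigma = +( x(10^1)(4) )( x(10^0)(2) ), the Polish-notation term for 42
u = UMP("fourtytwo", PLUS(TIMES(10 ** 1)(4))(TIMES(10 ** 0)(2)))
print(u.exponent, u.semantics)  # fourtytwo 42
```

Because the curried operators are plain closures, the semantics of a numeral is built by function application alone, which mirrors the order of lambda application discussed next.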
], [ "Numeral Semantics", "The straightforward meaning of a numeral, say $\\texttt {fourtytwo}$ , is a number concept, such as 42.", "However, from a computational point of view, the UMP $\\langle \\texttt {fourtytwo}, 42 \\rangle $ simply relates a symbolic string $\\texttt {fourtytwo}$ to another symbolic string 42, without making the exponent and the semantics of the sign operationally accessible.", "This is achieved by interpreting digit strings in a $g$ -adic number system.", "In the decimal system with $g = 10$ , we have $42 = 4 \\times 10 + 2 \\times 1 = \\sum _{k = 1}^n a_k g^{k-1}$ with $n$ coefficients $0 \\le a_k \\le g - 1$ ($n$ the number of digits).", "Equation (REF ) can directly be written as a tree-like arithmetic term structure Figure: Arithmetic term tree for 42.Using the binary operators $+(x, y) = x + y$ and $\\times (x, y) = x \\times y$ , and writing them in the unary Schönfinkel representation $+(x, y) &=& +(y)(x) = x + y \\\\\\times (x, y) &=& \\times (y)(x) = x \\times y$ where $+(y)$ is regarded as a function $f: x \\mapsto (+(y))(x) = x + y =f(x)$ , and $\\times (y)$ as another function $g: x \\mapsto (\\times (y))(x) = x \\times y = g(x)$ , respectively, we obtain an expression of the arithmetic term algebra [39] in Polish notation $\\sigma = +( \\times (10^1)(4) )( \\times (10^0)(2) )$ that will be interpreted as the meaning of the numeral $\\texttt {fourtytwo}$ in the sequel [13], [14].", "Hence, the correct UMP for 42 is $u = \\langle \\texttt {fourtytwo}, +( \\times (10^1)(4) )( \\times (10^0)(2) ) \\rangle \\:.$" ], [ "Minimalist Grammar", "Following Kracht [39], we regard a linguistic sign as an ordered triple $z = \\langle e , t , \\sigma \\rangle $ with the same exponent $e \\in E$ and semantics $\\sigma \\in \\Sigma $ as in the UMP (REF ).", "In addition, $t \\in T$ is a syntactic type that we encode by means of minimalist grammar (MG) in its chain representation [20].", "The type controls the generation of syntactic structure 
and hence the order of lambda application, analogously to the typed lambda calculus in Montague semantics.", "An MG consists of a data base, the mental lexicon, containing signs as arrays of syntactic, phonetic and semantic features, and of two structure-generating functions, called “merge” and “move”.", "Syntactic features are the basic types $b \\in B$ from a finite set $B$ , with $b = \\texttt {n}, \\texttt {v}, \\texttt {a}, \\texttt {num}$ , etc., together with a set of their respective selectors $S = \\lbrace \\texttt {=}b | b \\in B \\rbrace $ that are unified by the “merge” operation.", "Moreover, one distinguishes between a set of licensers $L_+ = \\lbrace \\texttt {+}l | l \\in L \\rbrace $ and another set of their corresponding licensees $L_- = \\lbrace \\texttt {-}l | l \\in L \\rbrace $ triggering the “move” operation.", "$L$ is another finite set of movement identifiers.", "$F = B \\cup S \\cup L_+ \\cup L_-$ is called the feature set.", "Finally, one has a two-element set $C = \\lbrace \\texttt {:\\!:}, \\texttt {:} \\rbrace $ of categories, where “$\\texttt {:\\!:}$ ” indicates simple, lexical categories while “$\\texttt {:}$ ” denotes complex, derived categories.", "The ordering of syntactic features is prescribed as regular expressions, i.e.", "$T = C (S \\cup L_+)^* B L_-^*$ is the set of syntactic types [19], [20].", "The set of linguistic signs is then given as $Z = E \\times T \\times \\Sigma $ [39].", "Let $e_1, e_2 \\in E$ be exponents, $\\sigma _1, \\sigma _2 \\in \\Sigma $ semantic terms in the lambda calculus, $f \\in B \\cup L$ one feature identifier, $\\mathbf {t}, \\mathbf {t}_1, \\mathbf {t}_2 \\in F^*$ feature strings compatible with the regular types in $T$ , $\\cdot \\in C$ and $\\mathbf {z}, \\mathbf {z}_1, \\mathbf {z}_2 \\in Z^*$ sequences of signs, then $\\langle e_1 , \\texttt {:\\!:} \\texttt {=}f \\mathbf {t}_1, \\sigma _1 \\rangle $ and $\\langle e_2 , \\texttt {:} f , \\sigma _2 \\rangle $ form signs in the
sense of (REF ).", "A sequence of signs is called a minimalist expression, and the first sign of an expression is called its head, controlling the structure building through “merge” and “move” as follows.", "The MG function “merge” is defined through inference schemata $&&\\dfrac{\\langle e_1, \\texttt {:\\!:=}f \\mathbf {t}, \\sigma _1 \\rangle \\quad \\langle e_2, \\cdot f, \\sigma _2 \\rangle \\mathbf {z}}{\\langle e_1e_2, \\texttt {:} \\mathbf {t}, \\sigma _1\\sigma _2 \\rangle \\mathbf {z}}\\,\\text{merge-1} \\:, \\\\&&\\dfrac{\\langle e_1, \\texttt {:=}f \\mathbf {t}, \\sigma _1 \\rangle \\mathbf {z}_1 \\quad \\langle e_2, \\cdot f, \\sigma _2 \\rangle \\mathbf {z}_2}{\\langle e_2e_1, \\texttt {:} \\mathbf {t}, \\sigma _1 \\sigma _2 \\rangle \\mathbf {z}_1 \\mathbf {z}_2}\\,\\text{merge-2} \\:, \\\\&&\\dfrac{\\langle e_1,\\cdot \\texttt {=}f \\mathbf {t}_1, \\sigma _1 \\rangle \\mathbf {z}_1 \\quad \\langle e_2, \\cdot f \\mathbf {t}_2, \\sigma _2 \\rangle \\mathbf {z}_2}{\\langle e_1, \\texttt {:} \\mathbf {t}_1, \\sigma _1 \\rangle \\mathbf {z}_1\\langle e_2, \\texttt {:} \\mathbf {t}_2, \\sigma _2 \\rangle \\mathbf {z}_2}\\,\\text{merge-3} \\:, $ Correspondingly, “move” is given through $&&\\dfrac{\\langle e_1, \\texttt {:+}f \\mathbf {t}, \\sigma _1 \\rangle \\mathbf {z}_1\\langle e_2, \\texttt {:-}f, \\sigma _2 \\rangle \\mathbf {z}_2}{\\langle e_2e_1, \\texttt {:} \\mathbf {t}, \\sigma _1\\sigma _2 \\rangle \\mathbf {z}_1 \\mathbf {z}_2}\\,\\text{move-1} \\:, \\\\&&\\dfrac{\\langle e_1, \\texttt {:+}f \\mathbf {t}_1, \\sigma _1 \\rangle \\mathbf {z}_1\\langle e_2, \\texttt {:-}f \\mathbf {t}_2, \\sigma _2 \\rangle \\mathbf {z}_2}{\\langle e_1, \\texttt {:} \\mathbf {t}_1, \\sigma _1 \\rangle \\mathbf {z}_1\\langle e_2, \\texttt {:} \\mathbf {t}_2, \\sigma _2 \\rangle \\mathbf {z}_2}\\,\\text{move-2} \\:.$ where only one sign with licensee $\\texttt {-}f$ may appear in the expression licensed by $\\texttt {+}f$ in the head.", "This so-called 
shortest movement constraint (SMC) guarantees syntactic locality demands [19], [20].", "A minimalist derivation terminates when all syntactic features except for a single distinguished start symbol, in our case $\\texttt {num}$ , have been consumed.", "The meaning of rules (REF – ) and their applicability become clear in the next section." ], [ "Reinforcement Learning", "The language learner is a cognitive agent $L$ in a state $X_t$ , to be identified with $L$ 's mental lexicon at training time $t$ .", "At time $t = 0$ , $L$ is initialized as a tabula rasa with empty lexicon $X_0 \\leftarrow \\emptyset $ and exposed to UMPs produced by a continuously counting teacher $T$ .", "The first UMPs given by $T$ are $u_1 = \\langle \\texttt {one}, 1 \\rangle $ , $u_2 = \\langle \\texttt {two}, 2 \\rangle $ , $u_3 = \\langle \\texttt {three}, 3 \\rangle $ , and so forth.", "Note that we assume that $T$ presents complete UMPs, rather than singular utterances, to $L$ .", "Thus we avoid the symbol grounding problem of first assigning meanings $\\sigma $ to uttered exponents $e$ [40], which will be addressed in future research.", "Moreover, we assume that $L$ is instructed to reproduce $T$ 's counting based on its own numeric understanding.", "This provides a feedback loop and therefore the applicability of reinforcement learning [35], [36].", "As long as $L$ is not able to detect patterns or common similarities in $T$ 's UMPs, it simply adds new entries directly to its mental lexicon, assuming that all numerals have base type $\\texttt {num}$ .", "Hence, $L$ 's state $X_t$ evolves according to the update rule $X_t \\leftarrow X_{t - 1} \\cup \\lbrace \\langle e_t, \\texttt {:\\!: num}, \\sigma _t \\rangle \\rbrace \\:,$ when $u_t = \\langle e_t, \\sigma _t \\rangle $ is the UMP presented at time $t$ by $T$ .", "In this way, the mental lexicon $X_{12}$ of simplex numerals in Tab. REF has been acquired at time $t = 12$ .", "Table: Content of the minimalist lexicon $X_{12}$ 
of language learner $L$ at time $t = 12$ .", "The learner is thus able to perfectly reproduce the learned entries directly via data base query.", "As a consequence, the teacher $T$ rewards $L$ , thus signalling that it has correctly learned the lexicon $X_{12}$ .", "When the teacher continues counting: $u_{13} = \\langle \\texttt {thirteen}, +(\\times (10^1)(1))(3) \\rangle $ , $u_{14} = \\langle \\texttt {fourteen}, +(\\times (10^1)(1))(4) \\rangle $ , $u_{15} = \\langle \\texttt {fifteen}, +(\\times (10^1)(1))(5) \\rangle $ and so on, the learner's pattern matching faculty detects a common affix $\\texttt {teen}$ in the exponents, and a common function $x \\mapsto +(\\times (10^1)(1))(x)$ in the semantics of UMPs $u_{13}, u_{14}, \\dots u_{19}$ .", "Thus, in a first step, UMP $u_{13}$ is still added to the lexicon according to update rule (REF ), $X_{13} \\leftarrow X_{12} \\cup \\lbrace \\langle \\texttt {thirteen}, \\texttt {:\\!: num} , +(\\times (10^1)(1))(3) \\rangle \\rbrace \\:.$ However, at time $t = 14$ , pattern matching, segmentation and lambda abstraction are performed, leading to a revision [28], [29] $X_{14} &\\leftarrow & X_{13} \\setminus \\lbrace \\langle \\texttt {thirteen}, \\texttt {:\\!: num} , +(\\times (10^1)(1))(3) \\rangle \\rbrace \\\\X_{14} &\\leftarrow & X_{14} \\cup \\lbrace \\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\rbrace \\\\X_{14} &\\leftarrow & X_{14} \\cup \\lbrace \\langle \\texttt {thir}, \\texttt {:\\!: num}, 3 \\rangle \\rbrace \\:,$ such that in (REF ) the previously learned lexicon $X_{13}$ is revised by removing the entry for the composite $\\texttt {thirteen}$ , followed by adding the complex morpheme $\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle $ in (), and completed in ().", "Since the morpheme $\\langle \\texttt {four}, \\texttt {:\\!: num}, 4 \\rangle $ is already contained in the 
lexicon, further updating is not required at this time.", "Next, $L$ has to correctly reproduce the UMPs $u_{13}$ and $u_{14}$ by invoking its utterance-meaning transducer (UMT) [14].", "Consider $u_{13}$ , which is now ambiguous with respect to the lexicon entries for 3.", "First, $L$ may access data base entries $\\langle \\texttt {thir}, \\texttt {:\\!: num}, 3 \\rangle $ and $\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle $ and derive the following UMP according to the MG rules (REF – ) $&&\\dfrac{\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\qquad \\langle \\texttt {thir}, \\texttt {:\\!: num}, 3 \\rangle }{\\langle \\texttt {thirteen}, \\texttt {: num}, (\\lambda x.+(\\times (10^1)(1))(x))(3) \\rangle } \\: \\text{merge-2} \\:. \\nonumber \\\\$ This yields the correct semantics with the lambda calculus $(\\lambda x.+(\\times (10^1)(1))(x))(3) = +(\\times (10^1)(1))(3) = 13$ and the uttered exponent $\\texttt {thirteen}$ , generated by the UMT [14], is well-formed and will be rewarded by the teacher.", "However, $L$ may alternatively select data base entries $\\langle \\texttt {three}, \\texttt {:\\!: num}, 3 \\rangle $ and $\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle $ as well.", "Then $&&\\dfrac{\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\qquad \\langle \\texttt {three}, \\texttt {:\\!: num}, 3 \\rangle }{\\langle \\texttt {threeteen}, \\texttt {: num}, (\\lambda x.+(\\times (10^1)(1))(x))(3) \\rangle } \\: \\text{merge-2} \\nonumber \\\\$ will be derived instead.", "Although it has the correct semantics 13, uttering the exponent $\\texttt {threeteen}$ will be rejected by $T$ .", "Upon the resulting punishment, $L$ has to reconfigure its mental lexicon by introducing additional 
licenser/licensee pairs, here denoted as $\\texttt {+k}/\\texttt {-k}$ [28], [29].", "Table REF displays the result of this reorganization process at some time $n$ later than $t = 19$ when all possible ungrammaticalities have been abandoned.", "Table: Content of the minimalist lexicon $X_n$ of language learner $L$ after punishment reorganization at time $n$ .", "Now only the data base selection $\\langle \\texttt {thir}, \\texttt {:\\!: num} \\ \\texttt {-k}, 3 \\rangle $ and $\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle $ leads to a grammatical derivation of the UMT [14], $&&\\dfrac{\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\qquad \\langle \\texttt {thir}, \\texttt {:\\!: num} \\ \\texttt {-k}, 3 \\rangle }{\\langle \\texttt {teen}, \\texttt {: +k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\langle \\texttt {thir}, \\texttt {: -k}, 3 \\rangle } \\: \\text{merge-3} \\nonumber \\\\&&\\dfrac{\\langle \\texttt {teen}, \\texttt {: +k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\langle \\texttt {thir}, \\texttt {: -k}, 3 \\rangle }{\\langle \\texttt {thirteen}, \\texttt {: num}, (\\lambda x.+(\\times (10^1)(1))(x))(3) \\rangle \\:,} \\: \\text{move-1} \\:, \\nonumber \\\\$ while its ambiguous counterpart $&&\\dfrac{\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\qquad \\langle \\texttt {three}, \\texttt {:\\!: num}, 3 \\rangle }{\\langle \\texttt {threeteen}, \\texttt {: +k} \\ \\texttt {num}, (\\lambda x.+(\\times (10^1)(1))(x))(3) \\rangle } \\: \\text{merge-2} \\nonumber \\\\$ cannot be further processed due to a missing licensee $\\texttt {-k}$ .", "The same argument applies to the ambiguous entries $\\langle \\texttt {five}, \\texttt {:\\!: num}, 5 \\rangle $ and $\\langle 
\\texttt {fif}, \\texttt {:\\!: num} \\ \\texttt {-k}, 5 \\rangle $ where only the latter successfully derives $\\langle \\texttt {fifteen}, \\texttt {: num}, +(\\times (10^1)(1))(5) \\rangle $ .", "Note that the currently learned grammar also derives the exponent $\\texttt {eightteen}$ instead of $\\texttt {eighteen}$ ; this could be corrected by either learning an additional entry $\\langle \\texttt {eigh}, \\texttt {:\\!: num} \\ \\texttt {-k}, 8 \\rangle $ and revising $\\langle \\texttt {eight}, \\texttt {:\\!: num}, 8 \\rangle $ , or, perhaps more appropriately, by introducing additional phonotactic rules operating on abstract graphon representations [10].", "Moreover, since simplex numerals such as $\\texttt {four}$ , $\\texttt {six}$ , $\\texttt {seven}$ , and $\\texttt {nine}$ must not possess any features other than $\\texttt {num}$ , they would be doubled in a more rigorous treatment, resulting in four additional lexicon entries.", "From a semantic point of view, the lexicon state in Tab. REF is not yet satisfactory, because another step of lambda abstraction can be applied to entry $\\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle $ , entailing the semantics of plain addition $\\lambda x.+(\\times (10^1)(1))(x) = (\\lambda y.\\lambda x.+(y)(x))(\\times (10^1)(1)) \\:.$ Incorporating this into the training process gives another updating dynamics $X_m &\\leftarrow & X_{m - 1} \\setminus \\lbrace \\langle \\texttt {teen}, \\texttt {: =num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\rbrace \\\\X_m &\\leftarrow & X_m \\cup \\lbrace \\langle \\varepsilon , \\texttt {:\\!: =num} \\ \\texttt {=num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda y.\\lambda x.+(y)(x) \\rangle \\rbrace \\\\X_m &\\leftarrow & X_m \\cup \\lbrace \\langle \\texttt {teen}, \\texttt {:\\!: num}, \\times (10^1)(1) \\rangle \\rbrace \\:,$ such that 
(REF ) removes the original $\\texttt {teen}$ from the lexicon, which is subsequently replaced by the phonetically void addition operator $\\langle \\varepsilon , \\texttt {:\\!: =num} \\ \\texttt {=num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda y.\\lambda x.+(y)(x) \\rangle $ and a new representative $\\langle \\texttt {teen}, \\texttt {:\\!: num}, \\times (10^1)(1) \\rangle $ .", "Table REF shows the updated lexicon at some even later time $t = m$ .", "Table: Content of the minimalist lexicon $X_m$ of language learner $L$ after semantic reorganization at time $m$ .", "Now, the correct derivation of $\\texttt {thirteen}$ reads $&&\\dfrac{\\langle \\varepsilon , \\texttt {:\\!: =num} \\ \\texttt {=num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda y.\\lambda x.+(y)(x) \\rangle \\qquad \\langle \\texttt {teen}, \\texttt {:\\!: num}, \\times (10^1)(1) \\rangle }{\\langle \\texttt {teen}, \\texttt {:} \\texttt {=num} \\ \\texttt {+k} \\ \\texttt {num}, (\\lambda y.\\lambda x.+(y)(x))(\\times (10^1)(1)) \\rangle } \\: \\text{merge-1} \\nonumber \\\\&&\\dfrac{\\langle \\texttt {teen}, \\texttt {:} \\texttt {=num} \\ \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\qquad \\langle \\texttt {thir}, \\texttt {:\\!: num} \\ \\texttt {-k}, 3 \\rangle }{\\langle \\texttt {teen}, \\texttt {:} \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\langle \\texttt {thir}, \\texttt {: -k}, 3 \\rangle } \\: \\text{merge-3} \\nonumber \\\\&&\\dfrac{\\langle \\texttt {teen}, \\texttt {:} \\texttt {+k} \\ \\texttt {num}, \\lambda x.+(\\times (10^1)(1))(x) \\rangle \\langle \\texttt {thir}, \\texttt {: -k}, 3 \\rangle }{\\langle \\texttt {thirteen}, \\texttt {:} \\texttt {num}, +(\\times (10^1)(1))(3) \\rangle \\:.} \\: \\text{move-1} \\:. \\nonumber \\\\$ By virtue of lexicon $X_m$ the learner is able to correctly reproduce numerals $\\texttt {one}, \\dots , \\texttt {nineteen}$ , employing its UMT [14].", 
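The compositional semantics used in these derivations can be checked mechanically. The following sketch is our own illustration (not code from the paper): the Schönfinkel operators $+(y)$ and $\times(y)$ are modelled as Python closures, and the lambda terms appearing above are evaluated directly.

```python
# Curried ("Schönfinkel") binary operators: +(y)(x) = x + y, ×(y)(x) = x × y.
def plus(y):
    return lambda x: x + y   # +(y): x ↦ x + y

def times(y):
    return lambda x: x * y   # ×(y): x ↦ x × y

# Semantics of the morpheme `teen`: ×(10^1)(1) = 10.
teen = times(10 ** 1)(1)

# The phonetically void addition operator: λy.λx.+(y)(x).
add = lambda y: lambda x: plus(y)(x)

# merge-1 followed by merge-3/move-1 amounts to the application
# (λy.λx.+(y)(x))(×(10^1)(1))(3), the semantics of `thirteen`.
thirteen = add(teen)(3)
print(thirteen)  # 13

# The Polish-notation term for fourtytwo: +(×(10^1)(4))(×(10^0)(2)).
fourtytwo = plus(times(10 ** 1)(4))(times(10 ** 0)(2))
print(fourtytwo)  # 42
```

The same two closures evaluate every numeral semantics occurring in the text, e.g. `add(times(10 ** 1)(2))(1)` gives the meaning 21 of `twentyone`.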
"This will be rewarded by the teacher.", "Later, the teacher utters the UMPs $u_{20} = \\langle \\texttt {twenty}, \\times (10^1)(2) \\rangle $ , $u_{21} = \\langle \\texttt {twentyone}, +(\\times (10^1)(2))(1) \\rangle $ , $u_{22} = \\langle \\texttt {twentytwo}, +(\\times (10^1)(2))(2) \\rangle $ etc.", "Again, the learner will first incorporate $\\langle \\texttt {twenty}, \\times (10^1)(2) \\rangle $ according to rule (REF ) into the lexicon.", "But upon perceiving $u_{21}$ , its pattern matching device produces a common morpheme $\\langle \\texttt {ty}, \\texttt {:\\!:} \\: \\texttt {=num} \\: \\texttt {num}, \\lambda x.\\times (10^1)(x) \\rangle $ through lambda abstraction.", "Then essentially the same processes of reinforcement learning are repeated as above until the complete numeral system of the language taught by the teacher has been acquired by the learner." ], [ "Discussion", "In this contribution we have outlined an algorithm for effectively learning the syntactic morphology and semantics of English numerals [11].", "Number words are presented to a cognitive agent by a teacher in the form of utterance-meaning pairs (UMPs) where the meanings are encoded as arithmetic terms from a suitable term algebra.", "This representation allows for the application of compositional semantics via lambda calculus.", "For the description of syntactic categories we use Stabler's minimalist grammar (MG) [19], [20], a powerful computational implementation of Chomsky's recent Minimalist Program for generative linguistics [23].", "Despite the controversy between Chomsky and Skinner [41], we exploit reinforcement learning [35], [36] as a training paradigm.", "Since MG encodes universal linguistic competence through the five inference rules (REF – ), thereby separating innate linguistic knowledge from the contingently acquired lexicon, our approach could potentially unify generative grammar and reinforcement learning, hence resolving the abovementioned dispute.", "Minimalist 
grammar can be learned from linguistic dependency structures [28], [29], [30], [31] by positive examples, which is supported by psycholinguistic findings on early human language acquisition [32], [33], [34].", "However, as Pinker [33] has emphasized, learning through positive examples alone could lead to undesired overgeneralization.", "Therefore, reinforcement learning, which might also play a role in children's language acquisition [37], [38], could effectively avoid such problems.", "The required dependency structures are directly provided by the semantics in the training UMPs.", "Thus, our approach is explicitly semantic-driven, in contrast to the algorithm in [31] that regards dependencies as latent variables for EM training.", "As a proof of concept, we suggested an algorithm for English numerals.", "However, we also have evidence that it works for the German and French number systems, and hopefully for other languages as well.", "Using attribute-value logics [42] and the associated term algebra, it should be possible to encode the semantics of arbitrary utterances in a compositional fashion.", "This will open up an entirely new avenue for the further development of speech-controlled cognitive user interfaces [8]." ] ]
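The update rule and the subsequent revision step of the learning algorithm can be sketched in a few lines. This is our own illustrative toy (the type strings and the semantics string below are simplified placeholders, not the paper's full MG feature sequences):

```python
# Lexicon entries are triples (exponent, type, semantics); the lexicon
# itself is a set, matching the update rule X_t <- X_{t-1} ∪ {entry}.
lexicon = set()

def update(entry):
    """Rule (REF): add a newly perceived UMP verbatim, with base type num."""
    lexicon.add(entry)

def revise(remove, add_entries):
    """Revision step: delete a composite entry, add its segmented morphemes."""
    lexicon.discard(remove)
    lexicon.update(add_entries)

# t = 1..13: simplex numerals (and, at first, `thirteen`) are added verbatim.
for exponent, sem in [("one", 1), ("two", 2), ("three", 3), ("thirteen", 13)]:
    update((exponent, ":: num", sem))

# t = 14: pattern matching segments `thirteen` into `thir` + `teen`
# and abstracts the common function λx.+(×(10^1)(1))(x).
revise(
    ("thirteen", ":: num", 13),
    [("thir", ":: num", 3),
     ("teen", ": =num num", "λx.+(×(10^1)(1))(x)")],
)

print(sorted(e for e, _, _ in lexicon))
```

After the revision, the composite entry is gone and the two morphemes are available for the merge/move derivations discussed in the text.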
1906.04447
[ [ "Twisted characters and holomorphic symmetries" ], [ "Abstract We consider holomorphic twists of arbitrary supersymmetric theories in four dimensions.", "Working in the BV formalism, we rederive classical results characterizing the holomorphic twist of chiral and vector supermultiplets, computing the twist explicitly as a family over the space of nilpotent supercharges in minimal supersymmetry.", "The BV formalism allows one to work with or without auxiliary fields, according to preference; for chiral superfields, we show that the result of the twist is an identical BV theory, the holomorphic $\\beta\\gamma$ system with superpotential, independent of whether or not auxiliary fields are included.", "We compute the character of local operators in this holomorphic theory, demonstrating agreement of the free local operators with the usual index of free fields.", "The local operators with superpotential are computed via a spectral sequence, and are shown to agree with functions on a formal mapping space into the derived critical locus of the superpotential.", "We consider the holomorphic theory on various geometries, including Hopf manifolds and products of arbitrary pairs of Riemann surfaces, and offer some general remarks on dimensional reductions of holomorphic theories along the $(n-1)$-sphere to topological quantum mechanics.", "We also study an infinite-dimensional enhancement of the flavor symmetry in this example, to a recently-studied central extension of the derived holomorphic functions with values in the original Lie algebra that generalizes the familiar Kac--Moody enhancement in two-dimensional chiral theories." 
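As a generic illustration of the character counting mentioned in the abstract, the multi-particle character of free local operators can be assembled as a plethystic exponential of a single-letter character. The sketch below is our own (the fugacity assignment is an assumption: a single bosonic letter $\gamma$ with fugacity $t$, acted on by two holomorphic derivatives with fugacities $q_1, q_2$; it is not the paper's precise index).

```python
# Plethystic exponential: PE[f](x) = exp( sum_{k>=1} f(x^k)/k ),
# truncated here to a fixed order in the letter fugacity t.
import sympy as sp

t, q1, q2 = sp.symbols("t q1 q2")

def plethystic_exp(f, variables, order):
    """Truncated PE[f], expanded as a series in t up to the given order."""
    acc = sp.Integer(0)
    for k in range(1, order + 1):
        acc += f.subs({v: v**k for v in variables}, simultaneous=True) / k
    return sp.expand(sp.series(sp.exp(acc), t, 0, order).removeO())

# Single-letter character: one boson gamma and its holomorphic derivatives.
f = t / ((1 - q1) * (1 - q2))
char = plethystic_exp(f, [t, q1, q2], 4)

# Coefficient of t^1 is the single-letter spectrum itself; t^2 gives its
# symmetric square, (f(q)^2 + f(q^2))/2.
print(sp.simplify(char.coeff(t, 1)))
```

This bookkeeping reproduces the standard "letters modulo symmetrization" counting of local operators in a free theory; the paper's twisted characters refine it with the appropriate grading data.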
], [ "Introduction", "Twists of supersymmetric theories have been the subject of intense study over the past thirty years.", "Such twists produce simpler quantum field theories, which are of mathematical interest as sources of, or organizing principles for, invariants of the spacetimes on which they live, and are often related to interesting gauge-theoretic moduli spaces associated to the spacetime.", "Perhaps one of the most familiar cases is the topological twist of $=2$ gauge theory considered in , for which the relevant moduli space is the space of anti-self-dual instantons.", "In such a topologically twisted theory, deformations of the metric act trivially up to homotopy, so that the theory depends only on the smooth structure of the spacetime.", "Starting on an affine spacetime, a necessary condition for the existence of a topological twist is a nilpotent supercharge whose image under the bracket contains all of the translation operators.", "More generally, when no such supercharge exists, one can still define a more general class of twists by passing to the cohomology of a chosen nilpotent supercharge, with the caveat that the resulting theory may only make sense on manifolds of restricted holonomy.In fact, such a caveat applies even for topological supercharges; see  for a general discussion.", "An example of such a construction is the holomorphic twist, which (due to general properties of Clifford multiplication) exists in any even dimension with any amount of supersymmetry.", "In these cases, the space of nullhomotopic translations is half-dimensional, and corresponds to a choice of complex structure on the spacetime.", "The theories which result from such a procedure depend only on the structure of the spacetime as a complex manifold, and can be defined generally on Kähler (or Calabi–Yau) manifolds; we will refer to such objects—whether or not they arise from the twist of a full $=1$ theory—as holomorphic field theories.", "Holomorphic twists have been 
previously considered in , , ; as objects in their own right, holomorphic field theories in various guises have been studied in the works of , , , , just for example.", "More recently, a program for studying twisted versions of supergravity and string theory in terms of Kodaira-Spencer theory and holomorphic Chern-Simons is currently being developed by Costello and Li in , and Costello in , .", "For related work, including supporting evidence of some conjectures of Costello and Li, see , .", "In addition, a foundational mathematical treatment of holomorphic field theory is given in .", "In this paper, we will most closely borrow notations and conventions from the cited works , .", "One advantage of holomorphic twists is that renormalization is significantly better behaved for holomorphic theories than for their untwisted parents.", "Indeed, regularization in supersymmetric theories, especially gauge theories, is notoriously difficult.", "A salient feature of holomorphic theories is the existence of a gauge in which analytic difficulties become much more tractable.", "Consequently, facets of these theories, such as their anomalies, can be cast in a more algebraic framework.", "While we don't delve deeply into a utilization of such features in this paper, we refer the reader to  for work on the renormalization of holomorphic field theories.", "Another appealing aspect of holomorphic theories is the theory of their observables.", "In complex dimension one, the local observables of a holomorphic theory are mathematically described by a vertex (or chiral) algebra .", "Likewise, on a global Riemann surface there is a rich theory of conformal blocks which describes how such vertex algebras are glued together.", "The partition functions of chiral theories have interesting modular properties, and a chiral theory can be obtained from a supersymmetric theory via a holomorphic twist; the partition function of the holomorphic twist is then the elliptic genus of the original 
theory.", "Moreover, global symmetries of two-dimensional theories enhance to infinite-dimensional Kac–Moody symmetries of chiral theories.", "One philosophical point that we wish to explore in this note is the idea that these phenomena are not as peculiar to the two-dimensional case as is usually assumed.", "Rather, most of the above can be interpreted in the setting of holomorphic twists of minimally supersymmetric theories in any even dimension.", "There are several good reasons the two-dimensional case is easier to see: firstly, due to the factorization of the two-dimensional wave equation, one does not need the notion of a holomorphic twist to arrive at chiral two-dimensional theories.", "In particular, supersymmetry is not an essential ingredient in dimension two.", "Secondly, the Dolbeault cohomology of punctured $^d$ is concentrated in a single degree only for $d=1$ .", "For larger values of $d$ , it is therefore essential to work in a derived setting.", "The relevant analogues of Kac–Moody algebras were studied recently in , .", "The theory of factorization algebras, as developed in , , is a general mathematical tool that describes the algebra of observables in any perturbative field theory.", "While the data of a factorization algebra can be quite unwieldy, in special situations—for example, in chiral CFTs or topological theories—factorization algebras admit efficient algebraic descriptions.", "For higher-dimensional holomorphic theories, it is tempting to speculate that this holomorphic factorization algebra should be analogous to a vertex algebra, and should reflect the structure of the operator product expansion in the twisted theory.", "While we don't develop such a theory in this paper, we gesture at the existence of such a structure through quantities reminiscent of ordinary vertex algebras, such as a higher dimensional $q$ -character.", "Throughout this paper, we work with $=1$ supersymmetric theories in four dimensions, focusing on the chiral 
multiplet with superpotential (the Wess–Zumino model).", "While we compute the twist of a general $=1$ theory in our language, and make use of the twisted vector multiplet to connect to the holomorphic flavor current multiplet, we do not provide explicit results for characters of local operators in gauge theory here, although the techniques to do so are largely developed.", "We hope to return to a more complete treatment in future work, with the goal of studying dualities, such as Seiberg duality, at the level of the holomorphic twist.", "The explicit description we give of the local operators in twisted $=1$ theories, coupled with an understanding of renormalization group flow in the holomorphic setting, should make it tractable to understand Seiberg dualities completely at the level of twisted local operators.", "Here is an outline of the remainder of the paper.", "In §, we recall basic facts about supersymmetry algebras and holomorphic twists, as well as setting up conventions for the remainder of the paper and giving some general discussion.", "The key perspective we'd like to emphasize here is that—even if some of the calculations we perform are for a fixed choice of nilpotent supercharge—the formalism is set up in such a way that all constructions exist as a family over the nilpotence variety.", "In addition to being of theoretical interest, this will have applications later in the paper, when dimensional reduction is studied in §.", "In §, we discuss theories of $=1$ chiral multiplets in the BV formalism, and demonstrate the equivalence of the BV complex obtained in the auxiliary-field formulation with the higher-order BV complex for on-shell supersymmetry constructed by Baulieu et al. 
.", "The main novelty of this alternative formulation is an off-shell description of $=1$ supersymmetry without the need to introduce auxiliary fields; the twist can be obtained equivalently from either description.", "The first part of § recalls general facts about holomorphic field theories in the BV formalism.", "Then, we go on to compute the holomorphic twist of a theory of chiral matter with $F$ -term interactions, both in the usual auxiliary field formulation and in the higher order BV setup.", "We show that both BV complexes lead—in a somewhat subtle fashion—to the same theory, the $\\beta \\gamma $ system with $F$ -term interactions.", "We perform the twist as a family over the nilpotence variety of the supersymmetry algebra; in fact, both the tangent and normal bundles to the nilpotence variety play a role, the first generating deformations of complex structure on $^4$ and the second being responsible for a holomorphic analogue of topological descent.", "In § we give a definition of the higher dimensional local character for holomorphic theories in arbitrary dimension, generalizing the usual $q$ -character definition for vertex algebras.", "We proceed to compute the character of the free $\\beta \\gamma $ system, demonstrating agreement with the $=1$ index as studied in .", "Furthermore, citing well-known results of supersymmetric localization in three and four dimensions, we show that the local character of the twisted theory agrees with its partition function on Hopf manifolds, which are complex surfaces topologically equivalent to $S^1 \\times S^3$ .", "This is strong evidence of a holomorphic state-operator correspondence, which we conjecture this class of holomorphic field theories satisfies.", "Note that, while Hopf manifolds are not Kähler, they nevertheless have $SO(3)$ holonomy.", "For a very thorough discussion of placing supersymmetric $=1$ field theories on Hopf manifolds, as well as other 4-manifolds, we refer the reader to , .", "Our 
results presented here are complementary to this work, but are different in the sense that we study theories directly at the level of the holomorphic twist, and place the $\\beta \\gamma $ system on the Hopf manifold by exhibiting it as a quotient of punctured $^2$ , which is Kähler.", "We further argue that compactification of a general holomorphic theory in $d$ complex dimensions along $S^{2d-1}$ gives rise to a topological quantum mechanics, which is just a single associative algebra: in fact, this always takes the form of a dg Weyl algebra, and a natural module is given by the local operators in the original theory.", "(The quantum mechanics contains additional local operators, arising from nonlocal operators in the higher-dimensional theory wrapping a holomorphic $(d-1)$ -cycle.)", "Proving an analogue of the Stone–von Neumann theorem for a broad enough class of such dg algebras should result in a general argument establishing the holomorphic state-operator correspondence.", "In §, we consider the spectral sequence induced by deforming the BV differential of the free theory by superpotential terms, and correspondingly compute the character in this case.", "Further, we introduce a chiral version of the Jacobi ring associated to a holomorphic potential, and show that it agrees with the holomorphic local operators of the theory.", "The fields on the $E_1$ page of the relevant spectral sequence consist of two copies of functions on the formal disk, tensored with dual vector spaces and placed in adjacent homological degrees; the differential arising from interactions, acting on operators, simply witnesses functions on this space as the Koszul complex for the partial derivatives of the superpotential.", "One can thus interpret our result as an identification of the holomorphically twisted Wess–Zumino model with the sigma model on the derived critical locus of the superpotential.", "The goal of § is to introduce an infinite-dimensional symmetry present in the twisted 
theory on $^2$ —by (derived) holomorphic functions on the spacetime, with values in the Lie algebra of the flavor symmetry.", "This type of symmetry was first discussed in  as a higher dimensional version of the Kac–Moody symmetry present in chiral CFT; the relevant algebras were also discussed in .", "This symmetry is quite different from other versions of infinite chiral symmetries in four dimensions, for example in , , in a few regards.", "Firstly, it is present even at the level of the twist of $=1$ supersymmetry, rather than the $=2$ required in .", "As such, this algebra will also act in a holomorphic twist of any $=2$ theory, from which any other twist (including that considered in ) can be obtained by a further deformation of the differential.", "Secondly, this symmetry does not pick out any preferred $$ -plane (or a more general Riemann surface) inside of the four-dimensional manifold $^2$ ; as such, we view it as a more intrinsic symmetry of the twisted theory.", "At the level of algebras, however, our previous caveat is of course still valid: one must work in a derived way in order to see anything nontrivial.", "Indeed, instead of the state space having a symmetry by a Lie algebra (as occurs for affine Kac–Moody in complex dimension one), there is an $L_\\infty $ algebra which acts: as previously mentioned, it arises as a central extension of derived sections of holomorphic functions on the complex manifold.", "Finally, in § we demonstrate the compatibility of our calculations with dimensional reduction to a theory in two dimensions along an arbitrary Riemann surface.", "In the case of a torus, this procedure produces the holomorphic twist of $=(2,2)$ ; more generally, one finds a twist of a theory with $=(0,2)$ supersymmetry.", "We also consider dimensional reduction along a plane which may not be a complex subspace of $^2$ , using our expression for the twist as a family over the space of complex structures on $^4$ ; this produces either the 
holomorphic or $B$ -type twist of the resulting $\mathcal{N}=(2,2)$ theory, and witnesses the spectral sequence between them as an instance of the Hodge-to-de-Rham spectral sequence."], [ "Acknowledgements", "We thank K. Costello, T. Dimofte, R. Eager, O. Gwilliam, V. Kac, D. Pei, M. Szczesny, and J. Walcher for conversations and helpful advice related to many aspects of this work.", "I.S. thanks the Kavli Institute for Theoretical Physics, the Erwin-Schrödinger-Institut für mathematische Physik, the Center for Quantum Geometry of Moduli Spaces, and the Mathematisches Forschungsinstitut Oberwolfach for hospitality during the preparation of this work.", "B.W. thanks Northeastern University, the Banff International Research Station, the Aspen Center for Physics, and the Simons Center for hospitality during the preparation of this work.", "The work of I.S. was supported in part by the Deutsche Forschungsgemeinschaft, within the framework of the Exzellenzstrategie des Bundes und der Länder.", "The work of B.W. was supported by Northeastern University and National Science Foundation Award DMS-1645877."], [ "$\mathcal{N}=1$ supersymmetry in four dimensions", "We begin by recalling some standard facts and conventions of supersymmetric field theory."
], [ "Conventions for operators in field theories", "The building blocks of a classical field theory are its fields, which arise from local data on the spacetime manifold.", "The fields of a theory without defects define a locally free sheaf on the spacetime manifold $M$ , and are typically given as the smooth sections of some (translation invariant) super vector bundle on $M$ .", "For instance, the chiral multiplet in a four-dimensional $\mathcal{N}=1$ theory contains as component fields one complex scalar and one Weyl fermion: $\Gamma (\mathbb{R}^4, \underline{\mathbb{C}}) = C^\infty (\mathbb{R}^4) , \qquad \Gamma (\mathbb{R}^4, \underline{S}_+) = C^\infty (\mathbb{R}^4) \otimes S_+ .$", "These are both sections (by convention, sections and smooth functions are always complex valued) of trivial bundles on $\mathbb{R}^4$ , and we refer to the trivial bundle $\underline{\mathbb{C}} \oplus \Pi \underline{S_+}$ simply as the chiral multiplet for the remainder of this section; here $\Pi (-)$ denotes parity shift with respect to the super grading.", "Suppose $E$ is a super vector bundle on a spacetime manifold $M$ , defining a field theory whose fields are its sheaf $$ of smooth sections.", "(In full generality, such as in gauge theories, $E$ will also carry a differential.)", "If $x \in M$ is any point, we can speak of the local operators of the theory supported at $x \in M$ .", "These are operators that depend algebraically (or formally algebraically) on the fields and their derivatives at the point $x$ .", "Mathematically, the definition is the following.", "Let $E$ be a super vector bundle on $M$ and $$ its sheaf of smooth sections.", "The space of local operators of $$ at $x \in M$ is the super vector space (here, $\hat{\operatorname{Sym}}(W) = \prod _{n \ge 0} \operatorname{Sym}^n(W)$ is the completed symmetric algebra) $_x = \hat{\operatorname{Sym}}(J^\infty E |_x)^\vee .$ Here, $J^\infty E$ denotes the super vector bundle of $\infty $ -jets of $E$ and $J^\infty E|_x$ is its fiber at $x \in M$ .", "In this work, we will mostly consider field theories defined on an affine space $$ , which will be $\mathbb{R}^n$ with a metric of either Euclidean or Lorentzian signature.", "(For most of
the paper, $$ will simply be Euclidean $\mathbb{R}^4$ .)", "In this setting, it is natural to suppose that the bundle $E$ is translation invariant.", "That is, we specify an isomorphism with the trivial bundle $E = \mathbb{R}^n \times E_0$ .", "(Note that translation invariance of a bundle is data, rather than a property!)", "$E_0$ denotes the fiber of $E$ over $0 \in \mathbb{R}^n$ .", "For translation-invariant bundles, the spaces of local functionals over any two points are identified, so that it makes sense to write $O_x$ for the operator at $x\in $ corresponding to $O_0 \in _0$ .", "Take $E = \underline{\mathbb{C}}$ to be the trivial complex line bundle on $\mathbb{R}^4$, so that $\phi \in C^\infty (\mathbb{R}^4)$ is a complex scalar field.", "An example of a local operator is given by [eq:exop] $O_y(\phi ) = \phi (y) \, \frac{\partial \phi }{\partial x_1} (y) .$", "Since the chiral multiplet in four-dimensional $\mathcal{N}=1$ theories contains a complex scalar field, this expression will also define an operator in any $\mathcal{N}=1$ theory with chiral matter.", "We note that it is standard in the physics literature to simply refer to an operator like~(\ref {eq:exop}) using the notation $$\phi \, \frac{\partial \phi }{\partial x_1} .$$ This standard notation has the potential to lead to confusion between fields and operators.", "The distinction is conceptually important, though, as the following remark makes clear: while the fields of a theory are naturally a sheaf over the spacetime, operators with specified support naturally form a sort of \emph {(pre)cosheaf}.", "As we have emphasized, the fields of the theory are the smooth sections $$ of the super vector bundle $E$ .", "Intrinsically, the fields satisfy a sort of locality: they form a sheaf on spacetime, which for us is just $$ .", "One can define a more general class of operators (sometimes thought of as “smeared” operators in physics) by restricting the support not to be pointlike, but to lie in a more general open set.", "That is, we could consider all functions on the sheaf $$ : $() = \hat{\operatorname{Sym}}(^\vee ) .$ Since $$ is a sheaf, and we are taking an appropriate
topological linear dual, this object behaves like a cosheaf: it makes sense to evaluate $()$ on any open set $U \subset $ ; and if $U \hookrightarrow V$ is an embedding of open sets, then there is a natural map $()(U) \rightarrow ()(V)$ .", "In fact, a more general structure is present: the object $()$ has the structure of a factorization algebra on $$  , .", "This more general notion of an observable is related to the local operators we have just defined.", "We can evaluate the factorization algebra on a disk $D(x,r)$ centered at $x \in $ to obtain the super vector space $()(D(x,r))$ .", "The local operators (with pointlike support) embed inside this space: $_x \hookrightarrow ()(D(x,r))$ .", "In fact, there is a more precise relationship in the context of holomorphic theories, as we will point out in §REF .", "The space of local operators is not quite the home for Lagrangians in a (supersymmetric) field theory.", "The difference is that we want to consider local operators that are only defined up to total derivatives.", "The way to say this invariantly is the following.", "Notice that the bundle of jets $J^\infty E$ is not just a super vector bundle; in fact, it is a super $D$ -module (in the appropriate super sense).", "In other words, it comes equipped with a canonical flat connection.", "Thus, as we vary the point $x \in \mathbb{R}^n$ , the local operators $_x$ also carry the structure of a $D$ -module.", "Next, we recall the axiomatization of action functionals as integrals over Lagrangian densities.", "We borrow conventions from , .", "The space of local functionals of a super vector bundle $E$ is $() = {\rm Dens}_{M} \otimes _{D_{M}} _\text{red} (J^\infty E) .$ Here, $D_M$ is the algebra of differential operators on $M$ , and $_\text{red}(J^\infty E) = \prod _{n > 0} \operatorname{Sym}^n_{C^\infty _M} \left(J^\infty (E)^\vee \right)$ is the space of reduced functionals on jets.", "A local functional encapsulates the data of a Lagrangian density defining a theory.", "We will often write local
functionals as operators, with the caveat that we are modding out by those functionals that are a total derivative.", "For a field theory on an affine spacetime $M \cong \mathbb{R}^n$ , there is an action by the abelian (complex) Lie algebra of translations, $\mathbb{C}^n = \mathbb{C}\lbrace \partial _{x_1}, \ldots , \partial _{x_n}\rbrace ,$ on the space of local functionals.", "(This is just $\mathbb{C}^n$ , regarded as an abelian Lie algebra; we will use $n$ for the real dimension of the spacetime, and later on $d = n/2$ for the complex dimension after the holomorphic twist.)", "We will mostly be interested in those local functionals that are invariant with respect to this action.", "Further, if $E$ is a translation invariant vector bundle on $\mathbb{R}^n$ , there is an isomorphism $$()^{\mathbb{C}^n} \cong \mathbb{C}\cdot \mathrm{d}^n x \otimes ^{\mathbb{L}}_{U(\mathbb{C}^n)} _\text{red} (J^\infty E|_0) ,$$ where we note that the algebra $U(\mathbb{C}^n) = \mathbb{C}[\partial _{x_1}, \ldots , \partial _{x_n}]$ is precisely the (commutative) algebra of translation invariant differential operators.", "As an example of a local functional, consider the free supersymmetric Lagrangian for the $\mathcal{N}=1$ chiral multiplet on $\mathbb{R}^4$ .", "It consists of the standard kinetic terms: $L = \left( - \partial \bar{\phi }\cdot \partial \phi + i \bar{\psi }\slashed{\partial }\psi \right) \, \mathrm{d}^4x .$", "Note that this functional is manifestly translation invariant.
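To keep the two parallel notions of this section straight, they can be collected in one display. This summary is ours: we write $\mathrm{Obs}_x$ for the local operators and $\mathcal{F}(\mathcal{E})$ for the local functionals, and make the completed symmetric algebra explicit.

```latex
% Local operators at x: functions on the infinity-jets of fields at x.
\mathrm{Obs}_x \;=\; \widehat{\operatorname{Sym}}\bigl( J^\infty E\big|_x \bigr)^{\vee},
\qquad
\widehat{\operatorname{Sym}}(W) \;=\; \prod_{n \ge 0} \operatorname{Sym}^n(W) .
% Local functionals: Lagrangian densities modulo total derivatives;
% the tensor product over D_M builds integration by parts into the definition.
\mathcal{F}(\mathcal{E}) \;=\;
\operatorname{Dens}_M \,\otimes_{D_M}\,
\operatorname{Sym}^{>0}_{C^\infty_M}\!\bigl( (J^\infty E)^{\vee} \bigr) .
```

The only essential difference between the two is the $D_M$-module structure: a Lagrangian is a local operator considered only up to total derivatives.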
], [ "Supersymmetry algebras and spinors in four dimensions", "We now specialize from general field theories to supersymmetric field theories.", "By definition, these are theories in which the action of the affine transformations of the spacetime $$ are extended to a super Lie algebra.", "We also consider only four-dimensional theories here; as such, let $= ^{1,3}$ or $^4$ , corresponding to Lorentzian or Euclidean signature.", "(We will work with complexified algebras in any case, so that $$ will usually denote 4; however, the signature will be relevant in a couple of places, which we will point out explicitly when they occur.)", "In order to discuss the supersymmetry algebra, we first fix a couple of conventions related to spinors in four dimensions.", "In any number of dimensions, the spinors of $()$ can be constructed by choosing a maximal isotropic subspace $L\\subset V_; the exterior algebra{D = \\Lambda ^* (L)}then carries the structure of a Clifford module.", "To give this structure, we just need to specify the action of~$ V$ on~$ D$ by Clifford multiplication; we recall that, in even dimensions,{V_L \\oplus L^\\vee .", "}$ L$ is then taken to act by multiplication and $ L$ by contraction.", "The commutator generates the pairing between~$ L$ and~$ L$, which is precisely the inner product on~$$.", "Since the spin group sits inside of the even Clifford algebra, $ D$ acquires a representation of~$ ()$.", "This representation is reducible, since the splitting{D = \\Lambda ^\\text{even}(L) \\oplus \\Lambda ^\\text{odd}(L)}is preserved by the action of~$ 2 V ()$.", "These irreducible spinor representations are called \\emph {Weyl spinors} in the physics literature.$ We will always work with Weyl spinors, represented by symbols like $\\psi $ or $\\chi $ ; these transform in the complex two-dimensional chiral spinor representation $S_+ \\cong \\Lambda ^\\text{even}(L)$ of $(4)$ , constructed above.", "After using the exceptional isomorphism (4) (2) (2), the 
representation $S_+$ is the defining representation of the left $(2)$ , tensored with the trivial representation of the right.", "The antisymmetric square of this representation is the trivial representation; we will write this pairing simply as $\\psi \\chi $ , which (since spinor fields will have odd parity by spin and statistics) is meaningful independent of the order in which the symbols appear.", "An element of the $S_-$ representation will carry a bar, reflecting the fact that—in Lorentzian signature—the $S_-$ is the complex conjugate representation of the $S_+$ .", "For example, $\\bar{\\psi }$ denotes the conjugate of $\\psi $ .", "In Euclidean signature, one must also apply the automorphism of the algebra which interchanges the two $(2)$ factors; this is often denoted $\\gamma ^0$ in the physics literature.", "(In general, we will work in a complexified setting, and will not need to consider real structures.)", "It is thus immediate that $\\overline{(\\chi \\psi )} = \\bar{\\chi }\\bar{\\psi }$ , where the overline denotes complex conjugation, and, just as the case of $S_+$ , we have identified the anti-symmetric square of $S_-$ with the trivial representation.", "We will also make frequent use of the isomorphism $\\Gamma : S_+ \\otimes S_- {\\cong } .$ We will also use the Feynman “slash” notation for the inverse inclusion $\\hookrightarrow S_+ \\otimes S_-$ , so that (for example) there is a linear differential operator : (4,S+) (4,S-).", "Having settled these conventions, let us now return to the topic of supersymmetry.", "The four-dimensional $=1$ supersymmetry algebra is a super-Lie algebra with underlying super-vector space $^= ^^0 \\oplus ^^1.$ The superscripts denote the grading by $/2$ corresponding to fermion parity; $^^1$ is therefore odd.", "Here, the bosonic part is of the form $^^0 = \\left[ \\rtimes {so}() \\right] \\oplus {r},$ where $\\rtimes {so}()$ is the Poincaré algebra, which generates affine transformations of $$ , and ${r}$ , the 
$R$ -symmetry, is in this case a one-dimensional (abelian) Lie algebra, which one may or may not choose to include.", "As an $^0$ -module, the fermionic part is $^^1 = \\Pi (S_+ \\oplus S_-),$ where $S_\\pm $ has charge $\\pm 1$ under ${r}$ .", "Here $\\Pi $ denotes parity shift, with respect to the $/2$ grading, and the anticommutator map is just the isomorphism $\\Gamma : S_+ \\otimes S_- \\rightarrow ,$ extended by zero in the obvious way to a map from $^2(S_+ \\oplus S_-)$ .", "As with any super-Poincaré algebra, $$ has a normal $/2$ -graded subalgebra $$ of supertranslations, of the form $= \\oplus ^^1 \\subset ^.$ That is, $^1 \\cong ^^1$ .", "We can think of $$ as arising from $^$ by forgetting the ${so}()\\times {r}$ part of the algebra.", "However, we can remember the ${so}()\\times {r}$ -module structure; $A$ is then the extension of ${so}()\\times {r}$ by the module $$ .", "The algebra $^$ also sits inside a larger algebra, the $=1$ superconformal algebra in four dimensions, which is a simple super-Lie algebra: $^\\subset \\sc = (2,2|1).$ Here $SO() \\cong (2) \\oplus (2)$ sits block-diagonally inside of $(2,2)$ , and $$ is one of the off-diagonal blocks.", "Notice that, while there is no requirement for the $R$ -symmetry to be represented on a super-Poincaré-invariant theory—and in fact, it is often anomalous—it forms part of the simple algebra $\\sc $ , and therefore must be present in superconformal field theories." 
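In component notation, the structure of the algebra just described is captured by a single nontrivial bracket. The normalization below is a common convention, not fixed by the discussion above:

```latex
\{ Q_\alpha , \bar{Q}_{\dot{\beta }} \}
 \;=\; \Gamma \bigl( Q_\alpha \otimes \bar{Q}_{\dot{\beta }} \bigr)
 \;=\; 2\, \sigma ^{\mu }_{\alpha \dot{\beta }} \, P_\mu ,
\qquad
\{ Q_\alpha , Q_\beta \} \;=\; \{ \bar{Q}_{\dot{\alpha }} , \bar{Q}_{\dot{\beta }} \} \;=\; 0 ,
\qquad
[R, Q_\alpha ] = Q_\alpha , \quad [R, \bar{Q}_{\dot{\alpha }}] = -\bar{Q}_{\dot{\alpha }} .
```

Here $P_\mu$ spans the translations and $R$ generates the $R$-symmetry; the extension of $\Gamma$ by zero to the symmetric square of $S_+ \oplus S_-$ is precisely the statement that the $QQ$ and $\bar{Q}\bar{Q}$ brackets vanish.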
], [ "Supersymmetric field theories", "By definition, a supersymmetric theory admits an action of the super-Poincaré algebra, extending the action of affine transformations of the spacetime on the fields.", "Ideally, this means we have a (strict) action of the Lie algebra $^$ on the fields of the theory $$ in such a way that the classical Lagrangian is invariant.", "In practice, this is rarely the case, even for the free $\mathcal{N}=1$ chiral multiplet in four dimensions.", "What one actually finds is an action of the supersymmetry algebra on the critical locus of the action functional.", "It is natural to require that this action be “local\" in the sense that it is through differential operators.", "A way to cast this is to require that the supersymmetry action determine a representation on $()$ , : (()), in such a way that the Lagrangian $L \in ()$ defining the classical theory is fixed.", "(We have seen that $()$ is actually a sheaf on $\mathbb{R}^n$ , but for now we consider just its global sections.)", "We will find it convenient to repackage the data of this algebra action using standard manipulations of Koszul duality.", "For physicist readers, this is analogous to the way in which the BRST formalism repackages the procedure of taking gauge invariants: the BRST differential encodes the gauge algebra and its action on the fields.", "We are free to repackage any symmetry in this fashion; if we do not wish to take (co)invariants, we can just remember the differential without passing to (co)homology.", "Recall that the Chevalley–Eilenberg complex of a Lie algebra is defined by $C^*({g}) = \bigl[ \operatorname{Sym}\bigl( {g}^\vee [-1] \bigr) , \, d \bigr] ,$ where $d$ is the differential.", "It is a degree $+1$ operator, obtained by extending the dual of the Lie bracket on ${g}$ according to the Leibniz rule; the data of such a degree-one nilpotent differential is precisely equivalent to a Lie algebra structure on ${g}$ , provided $\fg $ is concentrated in degree zero.", "If we relax the condition that $\fg $ is concentrated in
degree zero, then we obtain the structure of an $L_\\infty $ algebra on ${g}$ .", "Furthermore, if ${g}$ is a super-Lie algebra, the same definition applies (with the condition that overall parity is determined by the sum of the homological degree and intrinsic parity).", "We also note that a Lie algebra structure on ${g}$ and a ${g}$ -module structure on $M$ are together precisely equivalent to a degree-one nilpotent differential on ( g[-1] M).", "(The physicist reader may think of the collection of all ordinary and ghost fields in the BRST formalism.)", "It is further standard that the data of a map (or more generally, an $L_\\infty $ map) of Lie algebras $\\rho : \\fg \\rightarrow $ is equivalent to the data of a Maurer–Cartan element in the dg Lie algebra $\\theta _{\\rho } \\in ^*(\\fg ) .$ Here, we use the commutative dg algebra structure on $^*(\\fg )$ together with the Lie bracket on $$ .", "The Maurer–Cartan equation for $\\theta _{\\rho }$ is $_\\fg \\theta _{\\rho } + \\frac{1}{2} [\\theta _{\\rho } , \\theta _{\\rho }]_{} = 0 .$ where $_\\fg $ is the differential for $\\fg $ and $[-,-]_{}$ is the Lie bracket on $$ .", "(Further terms would appear if ${h}$ were an $L_\\infty $ algebra rather than simply Lie.)", "Thus, another way to encode $=1$ supersymmetry is to prescribe a Maurer–Cartan element in $\\theta _{\\rho } \\in ^*(^) (()) ,$ or equivalently a BRST-type differential acting in the space *() () One can think of this as adding ghosts that do not depend on the spacetime; thus, the sheaf is the constant sheaf with value $^$ , rather than the sheaf of sections of the bundle $\\underline{^}$ as is typical in gauge theories.", "See  for an example of this technique in the physics literature.", "Restricting to just the supertranslation algebra, we can decompose such a Maurer–Cartan element as $\\theta _{\\rho } = \\sum _{i=1}^4 \\delta _{x_i} + \\delta _{Q} + \\delta _{\\smash{\\bar{Q}}} .$" ], [ "Twisting, the nilpotence variety, and $B$", "We will 
be interested in holomorphic twists of supersymmetric field theories.", "At root, this means that one passes to the cohomology of a chosen nilpotent element in the supersymmetry algebra.", "One can also think of this as giving a vacuum expectation value to the corresponding ghost; although the ghosts in our setting are spacetime-independent and nondynamical, this description also makes sense for theories with local supersymmetry, where it recovers the proposal of Costello and Li for defining twists of supergravity theories .", "For recent reviews of the twisting procedure and classifications of possible twists in different dimensions, see for example , .", "The moduli space of allowed “vacuum expectation values” for the ghosts of supertranslations is nothing other than the space of nilpotent elements in $^1$ , which is an algebraic variety $Y$ defined by homogeneous quadratic equations: Y = {Q 1 : Q2 = 0 }.", "For four-dimensional minimal supersymmetry, it is easy to see that Y = S+ {0} S- S+ S- = 1.", "Thus, if $Ø(Y)$ denotes the algebra of functions on $Y$ , one finds [eq:OY] Ø(Y) = Ø(1[-1]) / S+S-.", "Note that $Y$ is an ordinary (not super) affine variety and $Ø(Y)$ is its ordinary algebra of functions—the homogeneous coordinate ring of the corresponding projective variety, which is here the disjoint union of two projective lines in $P^3$ .", "On general grounds, twists of a particular supersymmetric theory can therefore be thought of as a family over the corresponding nilpotence variety $Y$ .", "We will see this explicitly for minimal supersymmetry in four dimensions, where the result will be a family of holomorphic theories over the space of complex structures on $^4$ ; in general, nilpotence varieties that do not arise from dimensional reduction are closely related to spaces of complex structures, since only holomorphic twists are present.", "The nilpotence variety is also closely related to the classifying space of the super-Lie algebra $$  ; this makes 
sense, since the twist construction can be phrased as producing, from a module over $$ , a family over $B$ .", "Recall that the classifying space $B{g}$ of a Lie algebra ${g}$ is the formal derived space whose space of functions is modeled by the Chevalley–Eilenberg complex: by definition, $Ø(B{g}) \simeq C^*({g}) .$", "For ${g}$ semisimple, $B{g}$ is the de Rham stack of (a compact real form of) the corresponding group $G$ .", "For super-Lie algebras, as we recalled above, the complex is graded by both homological degree and intrinsic parity, $\times /2$ .", "In the case of the supertranslation algebra $$ , this grading lifts to a bigrading by $\times $ , since $^0$ is abelian and $^1$ is trivial as a $^0$ -module.", "The result can be given in the form [eq:BST] $Ø(B) = \bigl[ Ø([1]) , \, d \bigr] = \bigl[ \Lambda ^*(^1)^\vee \otimes \operatorname{Sym}^* (^0)^\vee , \, d \bigr] ,$ where $d$ carries degree $(2,-1)$ with respect to the two indicated gradings, and the original homological grading is their sum.", "Of course, everything here is equivariant with respect to the $() \times {r}$ -module structure.", "Moreover, the differential is in fact $Ø(^1[-1])$ -linear, so that () becomes the Koszul complex (with respect to the second grading) of the defining ideal of the nilpotence variety, in free $Ø(^1[-1])$ -modules; its zeroth homology is then just the homogeneous coordinate ring ().
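The description of $Y$ as two disjoint components can be checked directly in coordinates. Below is a minimal numerical sketch (not from the text: it assumes the standard realization of the pairing $\Gamma$ by $\sigma$-matrices), verifying that $Q = \lambda^\alpha Q_\alpha + \bar\lambda^{\dot\alpha} \bar{Q}_{\dot\alpha}$ squares to zero exactly when $\lambda = 0$ or $\bar\lambda = 0$:

```python
import numpy as np

# sigma^mu = (1, sigma^x, sigma^y, sigma^z): a basis realizing the
# isomorphism Gamma : S_+ (x) S_- -> V_C in four dimensions.
SIGMA = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def q_squared(lam, lam_bar):
    """Translation part of Q^2 for Q = lam.Q + lam_bar.Qbar:
    the complex vector v^mu = lam^a (sigma^mu)_{a b'} lam_bar^{b'}."""
    return np.array([lam @ s @ lam_bar for s in SIGMA])

def is_nilpotent(lam, lam_bar, tol=1e-12):
    """Membership test for the nilpotence variety Y = {Q : Q^2 = 0}."""
    return bool(np.linalg.norm(q_squared(lam, lam_bar)) < tol)
```

Because the four matrices $\sigma^\mu$ span all $2\times 2$ matrices, $\lambda \sigma^\mu \bar\lambda = 0$ for all $\mu$ forces the rank-one matrix $\bar\lambda\, \lambda^{T}$ to vanish, recovering $Y = (S_+ \times \{0\}) \cup (\{0\} \times S_-)$.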
], [ "Pure spinor superfields", "In addition to classifying the possible twists of a supersymmetric field theory, the nilpotence variety can also be used to construct field representations of the corresponding super-Poincaré algebra.", "We briefly recall this technique here; later, we will show how this formalism simplifies the computation of the holomorphic twist in four-dimensional theories.", "For more detail, we refer the reader to , .", "The construction begins by observing that there is a canonical scalar element (1) 1 (1).", "We can then push this element forward to $\\otimes _(Y)$ , using the natural inclusion maps on either side.", "The result is a scalar nilpotent operator, the Berkovits differential, which we will denote $$ .", "It acts in the tensor product of any $$ -module $M$ with any $Ø(Y)$ -module $\\Gamma $ ; in the case where the latter is just $Ø(Y)$ itself, one can think of this as forming the trivial bundle over $Y$ with fiber $M$ , and then acting by the obvious tautological bundle whose fiber over $Q\\in Y$ is spanned by the operator $Q$ itself.", "Moreover, the whole construction is Lorentz invariant if we insist that $M$ and $\\Gamma $ are equivariant modules (so that $M$ is in fact an $^$ -module).", "$\\Gamma $ can be thought of geometrically as the space of global sections of an equivariant bundle or coherent sheaf.", "In general, $H^*(M \\otimes \\Gamma , )$ does not admit an action of the full $^$ ; $^^0$ is guaranteed to commute with $$ (since it is a scalar), but $^^1$ may not.", "However, we can make a specific choice for $M$ such that the homology becomes a field representation of $^$ for any choice of $\\Gamma $ .", "Namely, we can take $M$ to be the space of free superfields, i.e., the algebra of functions $Ø()$ .", "This admits two commuting actions of $$ , by left and right translations.", "The action of $SO()$ comes from pullback under the (adjoint) $SO()$ action on $$ .", "We are therefore free to apply the Berkovits 
differential using the right action, and are guaranteed that $^$ will still act on the left on its homology.", "It follows that H* ( Ø() , ) is a supermultiplet corresponding to the equivariant sheaf $\\Gamma $ on $Y$ .", "Furthermore, $Ø()$ is naturally bigraded, and $$ decomposes into the sum of two bigraded pieces; one is independent of $^*(^0)^\\vee $ , whereas the other involves even translations (derivatives).", "There is thus a spectral sequence beginning at $Ø() \\otimes \\Gamma $ and abutting to the homology of $$ ; the $E_1$ page contains the component fields of the resulting supermultiplet, and the differentials on following pages correspond to a BV differential.", "In the context of $=1$ supersymmetry in four dimensions, the vector multiplet arises from the structure sheaf of $Y$ , whereas the chiral multiplet comes from = Ø(S+), considered as an $Ø(Y)$ -module in the obvious way as a quotient of $Ø(Y)$ with respect to the ideal generated by $S_-^\\vee $ .", "The complex $Ø() \\otimes \\Gamma $ thus reduces to [ ( Ø() * S+* S-) * (S+), ], where $$ acts on the $E_0$ page by the identity on the two copies of $S_+$ , so that its homology is simply H* ( Ø() , ) Ø( ) * S-.", "The reader will recognize this as the chiral multiplet of four-dimensional minimal supersymmetry, which we review in more detail in the component formalism in the following section.", "For degree reasons, no BV differential can appear here." 
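For orientation, the Berkovits differential for four-dimensional minimal supersymmetry can be written schematically in superspace coordinates; this explicit form is our paraphrase of standard conventions, and signs and factors vary:

```latex
\mathcal{D} \;=\;
\underbrace{\lambda ^{\alpha } \frac{\partial }{\partial \theta ^{\alpha }}
 \;+\; \bar{\lambda }^{\dot{\alpha }} \frac{\partial }{\partial \bar{\theta }^{\dot{\alpha }}}}_{\text{no derivatives}}
\;+\;
\underbrace{\bigl( \lambda ^{\alpha } \sigma ^{\mu }_{\alpha \dot{\alpha }} \bar{\theta }^{\dot{\alpha }}
 \;+\; \theta ^{\alpha } \sigma ^{\mu }_{\alpha \dot{\alpha }} \bar{\lambda }^{\dot{\alpha }} \bigr)
 \frac{\partial }{\partial x^{\mu }}}_{\text{even translations}} .
```

The two indicated pieces are precisely the bigraded components discussed above: filtering by the second produces the spectral sequence whose $E_1$ page carries the component fields, with differentials on later pages supplying the BV differential.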
], [ "Component formalism", "The reader will recall that the chiral multiplet in a four-dimensional $=1$ theory contains as component fields one complex scalar and one Weyl fermion: (, ) (4, S+) = Ø(4) (S+) together with (, ) (4, S-) = Ø(4) (S-) where $\\psi , \\bar{\\psi }$ have opposite chirality.", "The free Lagrangian for this multiplet just consists of the standard kinetic terms for each field: Lfree = - + i .", "If $E = {} \\oplus \\Pi \\smash{{S}}_+$ is the underlying super vector bundle of the chiral multiplet, this Lagrangian is a local functional in $L_0 \\in ()$ .", "The action $L_{\\rm free}$ is invariant (up to a total derivative) under the transformations $\\begin{aligned}[c]\\delta \\phi &= \\psi + a^\\mu \\partial _\\mu \\phi , \\\\\\delta \\psi &= i \\bar{}(\\phi ) + a^\\mu \\partial _\\mu \\psi .\\end{aligned}$ Here $\\in (^1)^\\vee [-1]$ and $a^\\mu \\in (^0)^\\vee [-1]$ are generators of the Chevalley–Eilenberg complex, and the notation $\\psi $ represents spinor contraction as defined above in §REF .", "Of course, the complex conjugates of these also hold: $\\begin{aligned}[c]\\delta \\bar{\\phi }&= \\bar{}\\bar{\\psi }+ a^\\mu \\partial _\\mu \\bar{\\phi }, \\\\\\delta \\bar{\\psi }&= -i (\\bar{\\phi }) + a^\\mu \\partial _\\mu \\bar{\\psi }.\\end{aligned}$ Here, we are encoding the action of the supersymmetry algebra using the techniques of , as reviewed above in §REF .", "Global (i.e.", "spacetime-independent), nondynamical ghosts are included, and the differential $\\delta $ of type encodes the structure of the supertranslation algebra as well as the module structure on the physical fields.", "While we do not pass to cohomology of this differential, we can conveniently recover the differential arising in any twist of the theory by setting the global supersymmetry ghosts $$ to appropriate nonzero values—i.e., to any point on the nilpotence variety.", "The ghosts are bosonic spinor variables $$ and $\\bar{}$ , along with a fermionic vector 
$a^\\mu $ , all carrying ghost number one.", "The differential acts on fields according to (REF ), and the structure of the supersymmetry algebra is encoded in the action of the differential on the ghosts themselves: $\\begin{aligned}[c]\\delta a^\\mu &= \\bar{}\\gamma ^\\mu , \\\\\\delta &= 0, \\\\\\delta \\bar{}&= 0.\\end{aligned}$ Now, in order for us to have an action of $^$ , the Maurer–Cartan condition $\\delta ^2 = 0$ must hold.", "However, this is not true using the naive action above, since $\\delta ^2 \\psi = \\delta (\\delta \\psi )$ is in fact nonzero; in other words, the naive definition of $\\delta $ does not define a map of Lie algebras $^\\rightarrow (())$ .", "However, $\\delta ^2$ lies in the ideal generated by the equations of motion, so that (REF ) does define an action of the supersymmetry algebra on-shell (i.e., on the sheaf of solutions to equations of motion).", "This is a general feature of supersymmetry multiplets; when only physical fields are included in the multiplet, closure of the algebra requires that the equations of motion for the fermion be imposed.", "In our case, the well-known work-around for this issue is to introduce an auxiliary field.", "This has the affect of modifying the space of fields $$ to a larger space $\\tilde{}$ that admits an off-shell action of $^$ , without changing the sheaf of solutions to equations of motion or its representation of $^$ .", "An auxiliary-field formalism is not always available; we will see a general technique for avoiding the introduction of auxiliary fields below in §REF .", "For the chiral multiplet, the auxiliary field is an element F C(4) , that appears algebraically in the action functional.", "In particular, the free Lagrangian is extended to [freeF] Lfree = - + i + F F, so that $F$ 's equation of motion in the free theory simply sets $F = \\bar{F} = 0$ .", "If the bundle associated to this larger space of fields is E = E = S+ , then this Lagrangian is a local functional $\\tilde{L}_0 
\\in (\\tilde{})$ .", "The reader will now recognize that = Ø() * (S+), matching the pure spinor superfield construction above.", "The modified supersymmetry transformations now read $\\delta \\phi &= \\psi , \\nonumber \\\\\\delta \\psi &= i \\bar{}(\\phi ) + F, \\\\\\delta F &= - i \\bar{}\\psi .", "\\nonumber $ (The complex conjugates of these are also valid.)", "After restoring the obvious action by ordinary bosonic translations, which is suppressed above, the differential defined by (REF ) is now nilpotent, signaling closure of the algebra off-shell.", "Indeed, $\\delta $ now defines a map of Lie algebras (()) in such a way that $\\tilde{L}_0$ is preserved." ], [ "Superpotential interactions", "In this section, we review supersymmetry-preserving interactions of $=1$ chiral multiplets, in the formalism with auxiliary fields; these are parameterized by a holomorphic function known as the superpotential.", "We consider a theory with $n$ chiral superfields, labeled with an index $i$ .", "In the formalism with auxiliary fields, the most general supersymmetry-preserving interaction term that can be added to the free Lagrangian—$n$ copies of ()—is Lint = - 12 Wij i j + Wi Fj + c.c., where $W^i$ denotes the corresponding derivative of a holomorphic function of the bosonic chiral fields $\\phi _i$ .", "For example, Wij = d2di   dj W. 
While $W$ is an arbitrary holomorphic function, only the quadratic and cubic terms in $W$ will provide relevant interaction terms; these are (respectively) the mass matrix of the $\\phi $ fields and the scalar self-couplings for the theory, and supersymmetry invariance fixes all other relevant couplings (i.e.", "fermion masses and cubic and quartic scalar vertices) as functions of these.", "The terms at linear order only shift the action functional by a constant; we will consider only $W$ of quadratic and higher order.", "When superpotential interactions are included, the auxiliary field $F$ still appears algebraically in the Lagrangian, but new terms appear: L = Fi Fi + Wi Fi + Wi Fi + $F$ -independent terms.", "Thus, the equations of motion for $F$ are deformed to [F-EOM] Fi = Wi,       Fi = Wi, and the supersymmetry transformations of the fermionic component fields $\\psi _i$ are correspondingly deformed after $F$ is eliminated, as can be read off by substituting () into (REF ): [susy-deformed] i = i (i ) + Wi." 
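Integrating out the auxiliary fields makes the effect of the superpotential on the bosonic potential explicit. With one common choice of sign conventions (an assumption on our part; normalizations of the $F$-terms vary in the literature):

```latex
L \;\supset\; \bar{F}^i F_i + W^i F_i + \bar{W}_i \bar{F}^i
\;\;\xrightarrow{\;\bar{F}^i = -W^i ,\;\; F_i = -\bar{W}_i\;}\;\;
-\, \bar{W}_i W^i
\;=\; -\sum_i \left| \frac{\partial W}{\partial \phi _i} \right|^2 .
```

This is the familiar F-term potential: its critical points are exactly the critical points of $W$, consistent with the identification of the twisted theory with a sigma model on the derived critical locus of the superpotential.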
], [ "The BV formalism", "We quickly remind the reader of the basics of the BV formalism, mostly to fix conventions.", "For more detailed discussion, we refer to , , the article , or the review in .", "In the BV formalism for classical field theories, one is interested in studying the sheaf of solutions to the classical equations of motion.", "These are just critical points for the action functional $S = \\int L$ : (S) .", "One now resolves functions on the critical locus (i.e., classical observables) freely in functions on $$ , using the Koszul complex of the equations of motion.", "This amounts to constructing the BV fields of the theory, = T*[-1] , which is a sheaf of shifted symplectic vector spaces, obtained (just as before) as the sections of a dg vector bundle $B \\rightarrow $ whose fibers are the shifted cotangent bundles to the fibers of the original bundle $E$ .", "Since $$ is a shifted symplectic space, its algebra of functions $Ø()$ carries a shifted Poisson structure; this bracket structure is usually called the antibracket in the physics literature.", "In order to resolve the critical locus, the classical BV observables $Ø()$ are equipped with the differential $_S = \\lbrace S,-\\rbrace $ generated by the action functional under the antibracket.", "In the simplest case, when $$ carries no differential, this makes $Ø()$ into the Koszul complex for the equations of motion of the original theory, and no further modification is needed.", "More generally, though, the action must be modified so as to generate the differential on $$ while maintaining nilpotence; the latter requirement amounts to imposing the classical master equation, $\\lbrace \\mathfrak {S},\\mathfrak {S}\\rbrace = 0$ .", "(Here $\\mathfrak {S} = \\int \\mathfrak {L}$ is the BV action functional.)", "One can think of this as finding an appropriate lift of $L \\in ()$ to $\\mathfrak {L} \\in ()$ , such that its restriction to the zero section returns $L$ , and the first-order terms in 
antifields generate the internal differential on $\\mathcal{E}$ ; in the end, the BV differential will do the job of passing to the quotient of $\\mathrm{Crit}(S)$ by the action of the gauge group.", "These two requirements fix the terms of the BV Lagrangian $\\mathfrak {L}$ that are of order zero and one in antifields; higher-order terms, if any, are generated by requiring the classical master equation.", "To quantize the theory, one tries to deform the BV action to a solution of the quantum master equation, which is a deformation of the classical master equation by the BV Laplacian; see §REF below.", "In general, for an honestly nilpotent internal differential on $\\mathcal{E}$ , a BV action that is linear in antifields will suffice, and the master equation will already be satisfied.", "However, one can even extend the construction of the BV action to cases where the internal differential on $\\mathcal{E}$ is only nilpotent modulo the defining ideal of $\\mathrm{Crit}(S)$  .", "We will see an example of a BV action that is higher order in antifields below." 
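As a minimal illustration of the preceding discussion (a sketch, with $\mathcal{E}$ denoting the space of fields and all signs schematic), consider a single scalar with no gauge symmetry:

```latex
S[\phi] = \int \left( -\tfrac{1}{2}\,\phi\,\Box\phi - V(\phi) \right),
\qquad
\mathcal{E}_{\rm BV} = T^*[-1]\,\mathcal{E} \ni (\phi, \phi^*) .
% The antibracket pairs \phi with its antifield \phi^*, and the Koszul
% differential acts by
\delta_S\,\phi^* = \lbrace S, \phi^* \rbrace = -\Box\phi - V'(\phi),
\qquad
\delta_S\,\phi = 0 .
% The degree-zero cohomology is functions on Crit(S); since S contains no
% antifields, \{S, S\} = 0 holds automatically, and \mathfrak{L} = L.
```

With gauge symmetry or an internal differential present, the lift $\mathfrak{L}$ acquires the antifield-linear (and possibly higher) terms described above.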
], [ "BV actions for $\\mathcal{N}=1$ chiral multiplets", "Using the above procedure, we can now construct the BV Lagrangian at linear order in antifields, beginning with the chiral multiplet in the auxiliary-field formalism: $\\widetilde{\\mathfrak{L}} = L + \\phi ^* \\delta \\phi + \\psi ^* \\delta \\psi + F^* \\delta F + \\text{c.c.}$", "Note that we are here treating $\\delta $ , which represents the action of the super-Poincaré algebra, as the internal differential on $\\widetilde{\\mathcal{E}}$ !", "This amounts to constructing the BV action equivariantly with respect to the supersymmetry algebra.", "(The physical chiral multiplet, of course, has no internal differential.)", "Since the supersymmetry transformations close on-shell, we are assured that the BV differential (i.e., the adjoint action of $\\mathfrak {S}$ under the antibracket) is nilpotent.", "Indeed, for our purposes, it is sufficient to note that this continues to hold when the $a$ -ghosts are set to zero—as long as we will eventually choose values for the bosonic $\\epsilon $ ghosts that lie in the nilpotence variety of the supersymmetry algebra.", "And taking this choice as internal differential on $\\widetilde{\\mathcal{E}}$ is precisely passing to the holomorphic twist of the theory.", "Since this is our aim, we discard the $a$ -ghosts now, obtaining [BVaction-F-sum] $\\widetilde{\\mathfrak{L}} = \\widetilde{L}_\\text{free} + \\widetilde{L}_\\text{int} + \\widetilde{L}_\\text{a.f.},$ where $\\begin{aligned}[c]\\widetilde{L}_\\text{free} &= - \\partial \\bar{\\phi }\\cdot \\partial \\phi + i \\bar{\\psi }\\slashed{\\partial }\\psi + \\bar{F} F, \\\\\\widetilde{L}_\\text{int} &= - \\frac{1}{2} W^{ij} \\psi _i \\psi _j + W^i F_i + \\text{c.c.}, \\\\\\widetilde{L}_\\text{a.f.} &= \\phi ^* \\epsilon \\psi + \\psi ^* \\epsilon F + i \\psi ^* \\slashed{\\partial }\\phi \\,\\bar{\\epsilon } - i F^* \\bar{\\epsilon }\\,\\slashed{\\partial }\\psi + \\text{c.c.}\\end{aligned}$ An implicit summation over flavors is understood in the first and third terms." 
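To spell out how the antifield terms encode the supersymmetry action (a sketch in schematic notation; the delta function and integral signs are suppressed elsewhere in the text):

```latex
% Since \{\phi^*(x), \phi(y)\} = \delta(x - y), the term \int \phi^*\,\epsilon\psi
% in \widetilde{\mathfrak{S}} generates
\lbrace \widetilde{\mathfrak{S}}, \phi \rbrace \;\supset\; \epsilon\,\psi = \delta\phi ,
% and likewise \{\widetilde{\mathfrak{S}}, \psi\} \supset \delta\psi and
% \{\widetilde{\mathfrak{S}}, F\} \supset \delta F, while on antifields the
% antifield-independent part of \widetilde{\mathfrak{S}} returns the equations
% of motion, as in the Koszul complex of the previous section.
```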
], [ "Eliminating the auxiliary field", "We will now integrate out the auxiliary field, reducing the BV fields from $T^*[-1]\\widetilde{\\mathcal{E}}$ to $T^*[-1]\\mathcal{E}$ and producing a BV action without auxiliary fields.", "(Since the procedure is analogous to symplectic reduction, we will have to first restrict to the observables that have zero antibracket with the auxiliary field; this amounts to throwing out its antifield.)", "Notice that in (REF ), the equation of motion for the auxiliary field $F$ is deformed to $\\bar{F}^i + W^i + \\epsilon \\psi ^{*i} = 0.$", "(The equation of motion for $\\bar{F}$ is just the complex conjugate of the above.)", "Substituting these back into (), and setting the antifields of auxiliary fields to zero, yields the BV action for the chiral multiplet in component formalism: $\\mathfrak {L} = - \\partial \\bar{\\phi }\\cdot \\partial \\phi + i \\bar{\\psi }\\slashed{\\partial }\\psi + (W^i + \\epsilon \\psi ^{*i})(\\bar{W}_i + \\bar{\\epsilon }\\bar{\\psi }^*_i) \\\\- \\frac{1}{2} W^{ij} \\psi _i \\psi _j - W^i (\\bar{W}_i + \\bar{\\epsilon }\\bar{\\psi }^*_i )+ \\phi ^* \\epsilon \\psi + i \\psi ^* \\slashed{\\partial }\\phi \\,\\bar{\\epsilon } - \\epsilon \\psi ^{*i} ( \\bar{W}_i + \\bar{\\epsilon }\\bar{\\psi }^*_i ) + \\text{c.c.", "}$ After expanding terms and cleaning this up, one finds $\\mathfrak{L} = L_0 + L_1 + L_2,$ where $\\begin{aligned}[c]L_0 &= - \\partial \\bar{\\phi }\\cdot \\partial \\phi + i \\bar{\\psi }\\slashed{\\partial }\\psi - \\frac{1}{2} W^{ij} \\psi _i \\psi _j - \\frac{1}{2} \\bar{W}_{ij} \\bar{\\psi }^i \\bar{\\psi }^j - |W|^2, \\\\L_1 &= \\phi ^* \\epsilon \\psi + i \\psi ^* \\slashed{\\partial }\\phi \\,\\bar{\\epsilon } - \\epsilon \\psi ^{*i} \\bar{W}_i + \\text{c.c.", "}, \\\\L_2 &= - \\epsilon \\psi ^{*i}\\, \\bar{\\epsilon }\\bar{\\psi }^*_i.\\end{aligned}$ A few words of explanation are warranted here.", "Firstly, $L_0$ consists of terms that are independent of antifields; it reproduces the standard component action of the theory with superpotential interactions, without auxiliary fields.", "The antifield-dependent terms reflect the supersymmetry transformations of the fields; recall that, because we have already set the ghost parameters for translations to zero, our 
differential will be nilpotent only for ghost parameters $\\epsilon $ corresponding to nilpotent supercharges.", "Nonetheless, the transformations in (REF ) neatly reproduce those in (REF ), with an additional term that depends on the superpotential and gives rise to the interaction spectral sequence in this formalism (arising from the terms $\\psi ^{*i} \\bar{W}_i$ ).", "This additional term is the one appearing in (); for this reason, the interaction no longer affects only the antifield-independent portion of the action, and we do not separate the free and the interaction terms explicitly.", "Furthermore, the action (REF ) is no longer linear in antifields.", "A quadratic term $L_2$ has appeared upon elimination of the auxiliary fields, corresponding to the fact that the supersymmetry transformations no longer define a module structure on the space of off-shell fields (although they do define a module structure modulo the equation of motion for $\\psi $ , as we remarked previously).", "Had we started just with the component action and the transformations (REF ), we could have used the techniques of  to produce this BV action directly, without any reference to an auxiliary-field formalism.", "We would have written down $L_0$ and $L_1$ based on that information; we would then have found that, rather than being zero, $\\lbrace S,S\\rbrace $ would have been proportional to the Dirac equation for $\\psi $ .", "Choosing $L_2$ so as to cancel that term would have produced the solution (REF ), and the process of solving order by order in antifields would then be complete.", "These techniques apply even in circumstances where no auxiliary-field formalism is available, and produce BV complexes with actions quadratic in antifields that are analogous to (REF )." 
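Schematically (with signs and numerical factors suppressed), the order-by-order procedure just described looks as follows:

```latex
% With only S_0 + S_1, the master equation fails by a term proportional to the
% Dirac equation, reflecting on-shell closure of the supersymmetry algebra:
\lbrace S_0 + S_1,\; S_0 + S_1 \rbrace
\;\sim\; \int \epsilon\,\bar{\epsilon}\;\psi^{*i}\,
\big(\text{Dirac equation for } \psi_i\big) \;\neq\; 0 .
% Adding the antifield-quadratic piece
S_2 = \int L_2 = -\int \big(\epsilon\,\psi^{*i}\big)\big(\bar{\epsilon}\,\bar{\psi}^*_i\big)
% contributes cross-terms \{S_1, S_2\} that cancel this obstruction, while
% \{S_2, S_2\} = 0, so the recursion terminates at quadratic order in antifields.
```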
], [ "Twisted chiral matter: the $\\beta \\gamma $ system", "The twists of supersymmetric theories we study in this paper are not of the familiar topological flavor, but are holomorphic.", "Like a topological twist, a holomorphic twist does not depend on the underlying metric data as the starting supersymmetric theory does; unlike a topological twist, it does depend on the complex structure.", "The starting point for a holomorphic theory is a holomorphic vector bundle equipped with a holomorphic version of the BRST operator.", "All of the theories we consider in this section will be in the BV formalism, and there is a suitable holomorphic version of the BV bracket.", "Given a holomorphic theory there is a natural way to construct a BV theory, which we will refer to as its BV-ification.", "Note: Unless otherwise stated, we will work in the BV formalism for the remainder of the paper.", "Thus, when we refer to fields we mean the full space of BV fields, and the action functional is the full BV action." 
], [ "Holomorphic field theory", "In this section we set up notations and conventions for holomorphic field theories.", "We mostly follow the approach to this subject presented in .", "First, we define the appropriate notion of a “free\" theory.", "A free holomorphic theory on $\\mathbb{C}^d$ consists of the following data: a $\\mathbb{Z}$ -graded complex vector space $Z^\\bullet $ ; a non-degenerate pairing of cohomological degree $(d-1)$ $\\omega ^\\text{hol} : Z^\\bullet \\times Z^\\bullet \\rightarrow \\mathbb{C}\\cdot d z [d-1] ;$ a holomorphic differential operator of cohomological degree $+1$ : $Q^\\text{hol} : \\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes Z^\\bullet \\rightarrow \\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes Z^\\bullet ;$ such that: $Q^\\text{hol}$ is graded skew self-adjoint with respect to $\\omega ^\\text{hol}$ , and $(Q^\\text{hol})^2 = Q^\\text{hol} \\circ Q^\\text{hol} = 0$ .", "We use the notation $\\mathbb{C}\\cdot d z$ to indicate the fiber of the holomorphic canonical bundle at the origin in $\\mathbb{C}^d$ .", "In the definition, we extend $\\omega ^\\text{hol}$ to a pairing $\\omega ^\\text{hol} : \\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes Z^\\bullet \\times \\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes Z^\\bullet \\rightarrow \\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes \\mathbb{C}\\cdot d z [d-1] = \\Omega ^{d,\\text{hol}}(\\mathbb{C}^d) [d-1]$ by $\\mathcal{O}^\\text{hol}(\\mathbb{C}^d)$ -linearity.", "There is a weakening of this definition that is relevant for us.", "Instead of starting with a $\\mathbb{Z}$ -graded vector bundle, we can consider a $\\mathbb{Z}/2$ -graded vector bundle $Z^\\bullet = Z_\\text{even} \\oplus Z_\\text{odd}$ .", "We then require that $Q^\\text{hol}$ is an odd holomorphic differential operator and that $\\omega ^\\text{hol}$ be odd when $d$ is even, and even when $d$ is odd.", "The reader may observe that there is no “space of fields\" in the definition of a free holomorphic theory.", "One obtains the fields of the resulting free field theory, in the BV formalism, by taking the Dolbeault complex with values in the trivial holomorphic vector bundle with fiber $Z^\\bullet $ .", "That is, the space of BV fields (including ghosts, fields, anti-fields, etc.)", "with its linear BV differential, 
is the complex $\\mathcal{E}_Z = \\left(\\Omega ^{0,*}(\\mathbb{C}^d, Z^\\bullet ), \\bar{\\partial } + Q^\\text{hol}\\right) .$ If $\\alpha \\in \\mathcal{E}_Z$ denotes a section, the free Lagrangian is $L_{\\rm free} = \\omega ^\\text{hol} (\\alpha , (\\bar{\\partial } + Q^\\text{hol}) \\alpha ) d z$ where $d z$ is the standard holomorphic volume form on $\\mathbb{C}^d$ .", "We think of the passage from $Z^\\bullet $ to $\\mathcal{E}_Z = \\Omega ^{0,*}(\\mathbb{C}^d , Z^\\bullet )$ as an assignment that takes a holomorphic theory (as we've defined it) to the data of a BV theory.", "We remark on a few essential points: (1) There is a $\\mathbb{Z}$ -grading on $\\mathcal{E}_Z$ given by the totalization of the internal grading of $Z^\\bullet $ and the natural grading on Dolbeault forms.", "This is the ghost grading of the BV theory.", "(2) The non-degenerate pairing $\\omega ^\\text{hol}$ extends to the space of fields by $\\Omega ^{0,*}(\\mathbb{C}^d)$ -linearity.", "The operator $Q^\\text{hol}$ extends to the Dolbeault complex since $\\Omega ^{0,*}(\\mathbb{C}^d)$ is a resolution for holomorphic functions.", "(3) Since $Q^\\text{hol}$ is holomorphic and $(Q^\\text{hol})^2 = 0$ , the total linear BV differential satisfies $(\\bar{\\partial } + Q^\\text{hol})^2 = 0$ .", "Thus, we have the following complex of BV fields $(\\mathcal{E}_Z, \\bar{\\partial } + Q^\\text{hol})$ which resolves $(\\mathcal{O}^\\text{hol}(\\mathbb{C}^d) \\otimes Z^\\bullet , Q^\\text{hol})$ .", "Here is the most basic, and perhaps most important for this paper, example of a free holomorphic theory.", "The $\\beta \\gamma $ system on $\\mathbb{C}^d$ .", "Suppose $V$ is any $\\mathbb{Z}$ -graded vector space.", "Given this data, there is a natural holomorphic theory defined on $\\mathbb{C}^d$ , for any $d$ .", "The graded vector space underlying the fields of the $\\beta \\gamma $ system with values in $V$ is $Z^\\bullet = V \\oplus d z \\cdot V^\\vee [d-1] ,$ equipped with $Q^\\text{hol} = 0$ .", "Here, $d z\\cdot V^\\vee $ is meant to indicate that we are looking at the fiber of the vector bundle $K_{\\mathbb{C}^d} \\otimes V^\\vee $ at $0 \\in \\mathbb{C}^d$ .", "The pairing $\\omega ^\\text{hol}$ is given by the obvious evaluation pairing $\\langle -,-\\rangle $ between $V$ and its linear dual 
$V^\\vee $ .", "The resulting fields in the BV formalism are given by the Dolbeault complex $(\\gamma , \\beta ) \\in \\Omega ^{0,*}(\\mathbb{C}^d) \\otimes V \\oplus \\Omega ^{d,*}(\\mathbb{C}^d) \\otimes V^\\vee [d-1]$ and the free action functional is simply $L_{\\rm free} (\\beta ,\\gamma ) = \\langle \\beta , \\bar{\\partial }\\gamma \\rangle .$ Of course, for the holomorphic twist of $4d$ $\\mathcal{N}= 1$ we will be most interested in the case $d = 2$ .", "Simply put, holomorphic Lagrangians are the ones that are natural from the point of view of the complex structure on the spacetime manifold we are putting the holomorphic theory on.", "That is, they are Lagrangians that are built from the fields of the BV theory which only involve holomorphic derivatives.", "Our main examples of holomorphic Lagrangians will result from twists of supersymmetric interaction terms.", "The precise definition is the following.", "The space of holomorphic local functionals of a free holomorphic theory $(Z^\\bullet , \\omega ^\\text{hol}, Q^\\text{hol})$ on $\\mathbb{C}^d$ is the $\\mathbb{Z}$ -graded sheaf $\\mathcal{O}^\\text{hol}(Z) = \\Omega ^{d,\\text{hol}}_{\\mathbb{C}^d} \\otimes _{D_{\\mathbb{C}^d}^\\text{hol}} \\bigoplus _{n > 0} \\mathrm{Hom}_{\\mathcal{O}^\\text{hol}_{\\mathbb{C}^d}} \\left( (J^\\text{hol}_{\\mathbb{C}^d} Z)^{\\otimes n}, \\mathcal{O}^\\text{hol}_{\\mathbb{C}^d} \\right).$ Here $\\Omega ^{d, \\text{hol}}_{\\mathbb{C}^d}$ is the space of holomorphic top forms on $\\mathbb{C}^d$ , $D_{\\mathbb{C}^d}^\\text{hol}$ is the algebra of holomorphic differential operators, and $J^\\text{hol}_{\\mathbb{C}^d}$ is the vector bundle of holomorphic $\\infty $ -jets of the trivial bundle on $\\mathbb{C}^d$ .", "We note that $Q^\\text{hol}$ determines a differential on the graded sheaf $\\mathcal{O}^\\text{hol}(Z^\\bullet )$ , giving it the structure of a sheaf of cochain complexes.", "We have seen how every free holomorphic theory based on $Z^\\bullet $ gives rise to a free BV theory $\\mathcal{E}_Z$ .", "We can also consider the complex of local functionals on $\\mathcal{E}_Z$ as in Definition REF equipped with the linear BV differential $(\\mathcal{O}_{\\rm loc}(\\mathcal{E}_Z), \\bar{\\partial } + Q^\\text{hol})$ .", "There is a quasi-isomorphism of sheaves on $\\mathbb{C}^d$ $\\left(\\mathcal{O}^\\text{hol}(Z)[d], Q^\\text{hol} \\right) \\simeq \\left(\\mathcal{O}_{\\rm loc}(\\mathcal{E}_Z), \\bar{\\partial } + Q^\\text{hol}\\right)$ which is compatible with the BV brackets induced by $\\omega ^\\text{hol}$ on both sides.", "This 
quasi-isomorphism tells us that holomorphic Lagrangians are precisely ordinary local functionals that are closed for the $\\bar{\\partial }$ operator.", "We can use the ordinary classical master equation for local functionals of a BV theory to make the following natural definition.", "A classical holomorphic theory on a complex manifold $Y$ is the data of a free holomorphic theory $(Z^\\bullet , Q^\\text{hol}, (-,-)_Z)$ plus a holomorphic local functional $I^\\text{hol} \\in \\mathcal{O}^\\text{hol}(Z^\\bullet )$ of cohomological degree $d$ such that the resulting local functional satisfies the classical master equation.", "This is an extension of Remark REF .", "In the case that the holomorphic theory is just $\\mathbb{Z}/2$ -graded, we require that $I^\\text{hol}$ be even when $d$ is even, and $I^\\text{hol}$ be odd when $d$ is odd.", "Before discussing how holomorphic Lagrangians arise from twists, we give an intrinsic holomorphic description of certain interactions one can add to the free $\\beta \\gamma $ system on $\\mathbb{C}^2$ , or more generally on any Calabi–Yau manifold.", "Let $W \\in \\mathrm{Sym}^{\\ge 2} (V^\\vee ) = \\bigoplus _{n \\ge 2} \\mathrm{Sym}^{n} (V^\\vee )$ be a polynomial on the complex vector space $V$ that is at least quadratic.", "This determines a holomorphic Lagrangian on the $\\beta \\gamma $ system via the formula $L_W (\\gamma , \\beta ) = W(\\gamma ) \\, d^2 z.$ Note that in order for this Lagrangian to make sense we have used the obvious holomorphic volume form on $\\mathbb{C}^2$ .", "Explicitly, this Lagrangian has the following interpretation.", "Choose a basis $\\lbrace e^i\\rbrace $ of $V$ and identify $W \\in \\mathbb{C}[e_i]$ .", "Then, we can expand the $\\gamma $ field $\\gamma = \\gamma ^{0} + \\gamma ^{1} + \\gamma ^{2} \\in \\Omega ^{0,*}(\\mathbb{C}^2) \\otimes V$ as $\\gamma ^0 = \\gamma ^0_i e^i \\;\\; , \\;\\; \\gamma ^{1} = \\gamma ^{1}_i e^i \\;\\; , \\;\\; \\gamma ^{2} = \\gamma ^{2}_i e^i$ where $\\gamma ^{a}_i$ is a form of type $(0,a)$ .", "The Lagrangian, which only remembers the top component, is of the form 
$L_W(\\gamma , \\beta ) = \\left(\\partial _i W(\\gamma ^0)\\, \\gamma ^{2}_i + \\frac{1}{2}\\, \\partial _i \\partial _j W(\\gamma ^0)\\, \\gamma ^{1}_i \\wedge \\gamma ^{1}_j \\right) \\, d^2 z .$ For example, if $V = \\mathbb{C}$ and $W = \\frac{1}{3!} x^3 \\in \\mathrm{Sym}(V^\\vee ) = \\mathbb{C}[x]$ , then the Lagrangian would read $L_W(\\gamma , \\beta ) = \\frac{1}{2} \\left((\\gamma ^0)^2 \\gamma ^{2} + \\gamma ^0 (\\gamma ^{1})^2 \\right) d^2 z .$ We remark that while the free $\\beta \\gamma $ system is BRST $\\mathbb{Z}$ -graded, the theory deformed by a nonzero potential $W$ is only $\\mathbb{Z}/2$ -graded; see Remarks REF and REF .", "This is because $L_W$ is not homogeneous with respect to the $\\mathbb{Z}$ -grading.", "It is, however, even with respect to the obvious forgetful map $\\mathbb{Z}\\rightarrow \\mathbb{Z}/2$ .", "In particular, the fields $\\gamma ^0$ , $\\gamma ^2$ , and $\\beta ^1$ are of even parity and $\\gamma ^1$ , $\\beta ^0$ , and $\\beta ^2$ are of odd parity.", "To match with the terminology in supersymmetry, we refer to the $\\beta \\gamma $ system on $\\mathbb{C}^2$ with values in $V$ equipped with the interaction $L_W (\\gamma ) = W(\\gamma ) \\, d^2 z$ , as the $\\beta \\gamma $ system deformed by the superpotential $W$ ." 
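To see where this component form comes from (a sketch for a single field, in our normalization):

```latex
% Taylor-expand W around the (0,0)-component \gamma^0:
W(\gamma^0 + \gamma^1 + \gamma^2)
  = W(\gamma^0) + W'(\gamma^0)\,(\gamma^1 + \gamma^2)
  + \tfrac{1}{2}\,W''(\gamma^0)\,(\gamma^1 + \gamma^2)^2 + \cdots
% Keeping only the (0,2)-component gives
[W(\gamma)]_{\rm top} = W'(\gamma^0)\,\gamma^2
  + \tfrac{1}{2}\,W''(\gamma^0)\,\gamma^1 \wedge \gamma^1 .
% No higher Taylor terms contribute: any product of three or more factors of
% \gamma^{\geq 1} would exceed antiholomorphic degree two on \mathbb{C}^2.
```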
], [ "Holomorphic operators", "Local operators supported at a point in an arbitrary field theory can be identified with polynomials of derivatives of fields at that point.", "In a topological twist, since all derivatives are made exact by the supercharge, the only operators left over are given by evaluating the fields at a particular point.", "In a holomorphic theory, not all derivatives are made exact, but all the anti-holomorphic ones are.", "Thus, the operators left over are given by (formal) polynomials in holomorphic derivatives of the fields.", "The precise definition is the following.", "Let $(Z^\\bullet , \\omega ^\\text{hol}, Q^\\text{hol}, I^\\text{hol})$ be a holomorphic theory on $\\mathbb{C}^d$ .", "The $\\mathbb{Z}$ -graded vector space of classical holomorphic local operators at $w \\in \\mathbb{C}^d$ is defined by $\\mathcal{O}_{w}^\\text{hol} = \\mathcal{O}(Z^\\bullet [\\![z_1 - w_1,\\ldots , z_d-w_d]\\!]) = \\widehat{\\mathrm{Sym}} \\left(Z^\\bullet [\\![z_1 - w_1,\\ldots , z_d-w_d]\\!]\\right)^\\vee .$ Here, as always, $(-)^\\vee $ denotes the continuous linear dual.", "Let $\\widehat{D}^d$ be the formal (complex) $d$ -disk, whose ring of functions is $\\mathcal{O}(\\widehat{D}^d) = \\mathbb{C}[\\![z_1,\\ldots , z_d]\\!]$ .", "The holomorphic operators are just functionals on the space $\\mathcal{O}(\\widehat{D}^d) \\otimes Z^\\bullet $ .", "If one thinks of $\\mathcal{O}(\\widehat{D}^d) \\otimes Z^\\bullet $ as a formal $\\sigma $ -model of maps $\\widehat{D}^d \\rightarrow Z^\\bullet $ , then we are simply considering the corresponding operators of the $\\sigma $ -model.", "In other words, the holomorphic operators are the operators on the formal completion of the space of maps $\\mathbb{C}^d \\rightarrow Z^\\bullet $ at the point $w \\in \\mathbb{C}^d$ .", "We have already mentioned that through the program of Costello–Gwilliam , one attaches a factorization algebra to any perturbative QFT.", "The local operators, as we've defined them, see only a small piece of this factorization algebra.", "Indeed, $\\mathcal{O}_w^{\\rm hol}$ is the value of the factorization algebra on a disk in $\\mathbb{C}^d$ centered at $w$ as the radius gets infinitesimally 
small.", "When $d=1$ this is the familiar relationship between holomorphic factorization algebras and vertex algebras, whereby the state space of the vertex algebra is given by the holomorphic local operators.", "For general $d$ , the factorization algebra should endow the space of holomorphic local operators with a higher-dimensional analog of the OPE.", "Every holomorphic local operator determines a functional on the solutions to the equations of motion of the holomorphic theory.", "Explicitly, the linear element $(z_1 - w_1)^{-k_1-1} \\cdots (z_d - w_d)^{-k_d - 1}\\, v^\\vee \\in \\mathcal{O}_{w}^\\text{hol}$ can be understood as the operator $\\varphi \\in \\mathcal{O}^\\text{hol} (\\mathbb{C}^d) \\otimes V \\mapsto \\left\\langle v^\\vee , \\frac{\\partial ^{k_1}}{\\partial z_1^{k_1}} \\cdots \\frac{\\partial ^{k_d}}{\\partial z_d^{k_d}} \\varphi \\right\\rangle (z = w).$ Here, the brackets denote the contraction between $V$ and its dual $V^\\vee $ .", "Non-linear operators can be understood similarly.", "The piece of the BV differential $Q^\\text{hol} + \\lbrace I^\\text{hol},-\\rbrace $ acts on the space $\\mathcal{O}_{w}^\\text{hol}$ .", "By construction, this operator is square zero, so we find that $\\left(\\mathcal{O}_{w}^\\text{hol} , Q^\\text{hol} + \\lbrace I^\\text{hol},-\\rbrace \\right)$ is a cochain complex.", "We will refer to this as the cochain complex of holomorphic local operators.", "We could have started with the full classical BV description of a holomorphic theory with fields $\\mathcal{E}_Z = \\left(\\Omega ^{0,*}(\\mathbb{C}^d, Z^\\bullet ), \\bar{\\partial } + Q^\\text{hol}\\right) .$ The space of local operators $\\mathcal{O}_{w}$ of $\\mathcal{E}_Z$ at $w \\in \\mathbb{C}^d$ , as defined in Definition REF , is much bigger than the space of holomorphic local operators.", "Indeed, such operators could involve anti-holomorphic derivatives ${\\partial }/{\\partial \\bar{z}_i}$ .", "However, $\\mathcal{O}_w$ is a cochain complex equipped with the classical BV differential $\\bar{\\partial } + Q^\\text{hol} + \\lbrace I,-\\rbrace $ .", "This results in a filtration on the observables by antiholomorphic form degree, and therefore a corresponding 
spectral sequence, on whose $E_0$ page we take the cohomology of the classical observables with respect to the $\\bar{\\partial }$ operator.", "Since we are dealing with local operators, there is no higher $\\bar{\\partial }$ -cohomology and the $E_1$ -page can be identified with the cochain complex of holomorphic local operators $\\left(\\mathcal{O}_{w}^\\text{hol}, Q^\\text{hol} + \\lbrace I^\\text{hol},-\\rbrace \\right)$ .", "Since the fields of the BV theory associated to a holomorphic theory are built from the Dolbeault complex, there is a natural action by the unitary group $U(d)$ on the space of fields.", "In fact, the group of biholomorphisms of $\\mathbb{C}^d$ acts on the Dolbeault complex of $\\mathbb{C}^d$ simply by pullback of differential forms.", "Given any biholomorphism $\\varphi : \\mathbb{C}^d \\rightarrow \\mathbb{C}^d$ there is an automorphism of complexes $\\varphi ^* : \\Omega ^{0,*}(\\mathbb{C}^d) \\rightarrow \\Omega ^{0,*}(\\mathbb{C}^d)$ sending $\\alpha \\mapsto \\varphi ^* \\alpha $ , which is compatible with $\\bar{\\partial }$ by holomorphicity.", "Moreover, if we additionally assume that $I^\\text{hol}$ is $U(d)$ -invariant, the resulting BV action has a symmetry by the group $U(d)$ .", "In this case, this symmetry determines an action of $U(d)$ on the classical holomorphic local operators $\\mathcal{O}^\\text{hol}_w$ ." 
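As a simple illustration (the $\beta\gamma$ system on $\mathbb{C}^2$ with $V = \mathbb{C}$ and $w = 0$; the "mode" terminology is ours), the linear holomorphic local operators are spanned by evaluation-of-derivative functionals:

```latex
% Modes at the origin, for k_1, k_2 \geq 0:
\gamma \;\longmapsto\; \big(\partial_{z_1}^{k_1}\partial_{z_2}^{k_2}\,\gamma\big)(0),
\qquad
\beta \;\longmapsto\; \big(\partial_{z_1}^{k_1}\partial_{z_2}^{k_2}\,\beta\big)(0) .
% A general holomorphic local operator is a (formal) polynomial in these.
% Under U(2), an operator built from k holomorphic derivatives transforms in
% Sym^k of the fundamental, tensored with the U(2)-representation carried by
% the field itself.
```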
], [ "$U(2)$ -equivariant description and holomorphic twist", "The next two sections involve the proof of the following proposition, which completely characterizes the holomorphic twist of the chiral multiplet.", "Consider the $\\mathcal{N}= 1$ chiral multiplet on $\\mathbb{R}^4$ with holomorphic superpotential $W$ .", "Let $Q \\in \\Pi S_+$ be a chiral element in the Lie algebra of supertranslations.", "The twist of the $\\mathcal{N}=1$ chiral multiplet with respect to $Q$ is equivalent to the deformation of the free $\\beta \\gamma $ system on $\\mathbb{C}^2$ by the interaction $L_W(\\beta ,\\gamma ) = W(\\gamma ) \\, d^2 z.$ Here we are extracting the top component of the mixed form $W(\\gamma ) \\in \\Omega ^{0,*}(\\mathbb{C}^2)$ .", "The $\\beta \\gamma $ system was our typical example of a free holomorphic theory.", "The deformation we are seeing in the twist of the theory with superpotential is a deformation of this theory by a Lagrangian that is holomorphic by assumption.", "Therefore, the twist of the chiral multiplet in the presence of a superpotential arises as the BV-ification of a holomorphic theory on $\\mathbb{C}^2$ .", "The next two sections provide two separate proofs of this proposition.", "The first uses the standard description of off-shell supersymmetry via the introduction of auxiliary fields and closely follows §REF above.", "The second uses the off-shell BV description which does not involve auxiliary fields, at the expense of introducing some higher interaction terms in the Lagrangian.", "We then further show that the computation of the twist can be packaged neatly in terms of the pure spinor superfield formalism." 
], [ "The twist in the presence of auxiliary fields", "In this section we compute the holomorphic twist, and prove Proposition REF , using the standard auxiliary fields.", "It is a straightforward exercise to compute the decomposition of the spin representation under the subgroup $U(2) \\subset SO(4)$ .", "As we have recalled above, the complex Dirac spinor (with its decomposition into Weyl spinors) is constructed as $\\mathbb{D} = \\Lambda ^* L;\\qquad S_+ = \\Lambda ^0 L \\oplus \\Lambda ^2 L,\\quad S_- = \\Lambda ^1 L \\cong L.$ The $U(2)$ subgroup is nothing other than the stabilizer of the subspace $L \\subset V_{\\mathbb{C}}$ , which corresponds to a complex structure on $V$ ; its complexification is just $\\mathfrak{gl}(L)$ .", "As such, the spinors and the vector decompose into $U(2)$ representations as [spindecomp] $S_+ \\rightarrow \\mathbf{1}^{1} \\oplus \\mathbf{1}^{-1}, \\quad S_- \\rightarrow \\mathbf{2}^0, \\quad V \\rightarrow \\mathbf{2}^1 \\oplus \\mathbf{2}^{-1}.$ The dual of the vector transforms identically, but of course the pairing connects opposite $U(1)$ charges.", "Here boldface integers label $SU(2)$ representations by their dimensions, and the exponent represents the charge under the center of $U(2)$ , which is $U(1)$ .", "The isomorphism between the vector and the bispinor decomposes into the two obvious maps $\\mathbf{1}^1 \\otimes \\mathbf{2}^0 \\cong \\mathbf{2}^1, \\quad \\mathbf{1}^{-1} \\otimes \\mathbf{2}^0 \\cong \\mathbf{2}^{-1}.$ In what follows, we will write the components of the $\\psi $ field under the decompositions (REF ) as $\\psi _+$ , $\\psi _-$ , and $\\bar{\\psi }$ , and similarly for $\\epsilon $ and $Q$ .", "The supersymmetry transformations (REF ) then become $\\begin{aligned}\\delta \\phi &= \\epsilon _+ \\psi _- - \\epsilon _- \\psi _+, \\\\\\delta \\psi _+ &= \\epsilon _+ F + i \\bar{\\epsilon }\\left( \\partial \\phi \\right) , \\\\\\delta \\psi _- &= \\epsilon _- F + i \\bar{\\epsilon }\\left( \\bar{\\partial }\\phi \\right), \\\\\\delta F &= - i \\bar{\\epsilon }\\left( \\partial \\psi _- - \\bar{\\partial }\\psi _+ \\right), \\end{aligned}$ and their complex conjugates reduce to $\\begin{aligned}\\delta \\bar{\\phi }&= \\bar{\\epsilon }\\bar{\\psi }, 
\\\\\\delta \\bar{\\psi }&= i \\epsilon _+ \\bar{\\partial }\\bar{\\phi }+ i \\epsilon _- \\partial \\bar{\\phi }+ \\bar{\\epsilon }\\bar{F}, \\\\\\delta \\bar{F} &= - i \\epsilon _+ \\bar{\\partial }\\wedge \\bar{\\psi }- i \\epsilon _- \\partial \\wedge \\bar{\\psi }. \\end{aligned}$ (Here, we are again writing a differential of Chevalley–Eilenberg type, acting on linear operators rather than fields.)", "Using these transformations, it is trivial to read off the twisting differential $\\delta = \\lbrace Q, -\\rbrace $ by setting $\\epsilon _+ = 1$ and all other ghosts to zero (of course, choosing $\\epsilon _-$ instead would give an equivalent result with respect to a different complex structure): $\\begin{aligned}[c]\\delta \\phi &= \\psi _-, \\\\\\delta \\psi _- &= 0, \\\\\\delta \\psi _+ &= F, \\\\\\delta F &= 0,\\end{aligned}\\qquad \\qquad \\begin{aligned}[c]\\delta \\bar{\\phi }&= 0, \\\\\\delta \\bar{\\psi }&= i \\bar{\\partial }\\bar{\\phi }, \\\\\\delta \\bar{F} &= - i \\bar{\\partial }\\wedge \\bar{\\psi }.\\end{aligned}$ We thus arrive at a cochain complex which is just the Dolbeault resolution of holomorphic functions, $\\Omega ^{0,*}(\\mathbb{C}^2)$ .", "The twist of the chiral multiplet therefore becomes the field $\\gamma $ of the $\\beta \\gamma $ system; the BV antifields will provide the corresponding $\\beta $ field.", "This is the first half of the proof of Prop. REF , in auxiliary-field language.", "In fact, the full structure of () showcases the structure of the twist as a family over the nilpotence variety.", "It is immediate to see that, for a generic point $(\\epsilon _+,\\epsilon _-)$ , one obtains the Dolbeault complex, with a differential corresponding to a deformation of complex structure: $\\bar{\\partial }_\\epsilon = d\\bar{z}_i \\, \\left( \\epsilon _+ \\frac{\\partial }{\\partial \\bar{z}_i} + \\epsilon _- \\epsilon _{ij} \\frac{\\partial }{\\partial z_j} \\right) .$ The connected component of the nilpotence variety corresponding to $S_+$ , which is just a copy of $\\mathbb{CP}^1$ , is thus explicitly identified with the space of complex structures on $\\mathbb{R}^4$ , as must happen on general grounds .", "At the point $(\\epsilon _+,\\epsilon _-) = (1,0)$ , the $\\epsilon _-$ deformation is just the tangent space 
to the nilpotence variety.", "Further, the $\\bar{\\epsilon }$ deformations represent the normal bundle to the nilpotence variety.", "This must always contain a copy of the defining representation of $SU(n)$ , witnessing the fact that antiholomorphic translations are nullhomotopic in the twisted theory.", "In a topologically twisted theory, the supercharges providing a nullhomotopy for the translations are responsible for the phenomenon of topological descent; here, a holomorphic analogue is present, which allows us to construct nonlocal holomorphic operators of ghost number zero out of local operators of nonzero ghost number.", "While the story of topological descent is classical, such higher structures were considered again recently in .", "In our case, the relevant operad is a holomorphic analogue of the little discs operad that appears in topological field theories; in complex dimension $d$ , there is a class in the cohomology of the binary part of the operad in degree $d-1$ , corresponding to the Dolbeault cohomology of punctured $\\mathbb{C}^d$ ; this is nothing other than the pairing on the fields of the $\\beta \\gamma $ system.", "We plan to give a detailed discussion of this operad in future work.", "Let's return now to the BV action (REF ), decompose it in $U(2)$ -equivariant language including antifields, and then simply set $\\epsilon _+$ to one and all other ghosts and ghost antifields to zero.", "This will give the deformation of the action that corresponds to the deformation of the original BV differential by the twisting supercharge.", "The $U(2)$ -equivariant decomposition is straightforward, and the result for the first two terms is $\\begin{aligned}[c]L_\\text{free} &= - \\partial \\bar{\\phi }\\wedge \\bar{\\partial }\\phi - \\bar{\\partial }\\bar{\\phi }\\wedge \\partial \\phi - i \\bar{\\psi }\\left( \\partial \\psi _- + \\bar{\\partial }\\psi _+ \\right) + \\bar{F} F, \\\\L_\\text{int} &= - \\frac{1}{2} W^{ij} \\left( \\psi _{i+} \\psi _{j-} - \\psi _{i-} \\psi _{j+} \\right) - 
\\frac{1}{2} \\bar{W}_{ij} \\bar{\\psi }^i \\wedge \\bar{\\psi }^j + W^i F_i + \\bar{W}_i \\bar{F}^i.\\end{aligned}$ For the antifield portion of the action, we have $L_\\text{a.f.} = \\phi ^* ( \\epsilon _+ \\psi _- - \\epsilon _- \\psi _+ ) + F (\\psi ^*_+ \\epsilon _- - \\psi ^*_- \\epsilon _+ ) - i \\bar{\\epsilon }(\\psi ^*_+ \\bar{\\partial }\\phi + \\psi ^*_- \\partial \\phi ) \\\\- i F^* \\bar{\\epsilon }(\\partial \\psi _- + \\bar{\\partial }\\psi _+ )+ \\bar{\\phi }^* \\bar{\\epsilon }\\bar{\\psi }+ \\bar{F} \\bar{\\psi }^* \\bar{\\epsilon }- i \\bar{\\psi }^* (\\epsilon _+ \\bar{\\partial }\\bar{\\phi }+ \\epsilon _- \\partial \\bar{\\phi })+ i \\bar{F}^* (\\epsilon _+ [\\bar{\\partial }\\bar{\\psi }] + \\epsilon _- [\\partial \\bar{\\psi }] ) .$ As such, the BV action for the holomorphic twist is easy to write down: [BVaction-holotwist] $L^{\\rm tw} = L_\\text{free} + L_\\text{int} + \\phi ^* \\psi _- - F \\psi ^*_- - i \\bar{\\psi }^* \\bar{\\partial }\\bar{\\phi }+ i \\bar{F}^* [\\bar{\\partial }\\bar{\\psi }],$ reproducing the differential () above.", "After discarding the contractible part of this complex, the rest of the action takes a much simpler form: $L_\\text{free}$ disappears entirely, reflecting the fact that it arises from a pairing between the chiral field and its $Q$ -trivial complex conjugate.", "This is a reflection of the standard piece of dogma that $D$ -terms are not relevant for index computations.", "On the other hand, $L_\\text{int}$ (which comes from the $F$ -term of the original action) does play a significant role: it contributes the terms [twisted-F-terms] $L_\\text{int} \\rightarrow - \\frac{1}{2} \\bar{W}_{ij} \\bar{\\psi }^i \\wedge \\bar{\\psi }^j + \\bar{W}_i \\bar{F}^i.$ Identifying the fields in the twisted theory with a total Dolbeault form $\\gamma $ , this reduces to the simple expression $L_\\text{int} = [W(\\gamma )]_{\\rm top}.$ Of course, the term in the action pairs this $(0,2)$ form with the Calabi–Yau form on spacetime.", "It is also interesting to note that the kinetic terms of the twisted theory do not arise from the kinetic terms of the full theory; rather, they come from the terms pairing antifields with fields, and therefore from the supersymmetry transformations of the chiral multiplet (which become the internal differential in the twisted theory)." 
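The reduction above can be checked against the βγ description (a sketch, under our reading that the surviving fields assemble as $\gamma = (\bar\phi, \bar\psi, \bar F)$, with signs from fermion reordering suppressed):

```latex
% Expanding W(\gamma) as before and keeping the (0,2)-component:
[W(\gamma)]_{\rm top}
  = \bar{W}_i(\bar{\phi})\,\bar{F}^i
  + \tfrac{1}{2}\,\bar{W}_{ij}(\bar{\phi})\,\bar{\psi}^i \wedge \bar{\psi}^j ,
% which reproduces the surviving F-term interactions term by term.  (Here the
% polynomial W applied to \gamma is the holomorphic function \bar{W} of the
% surviving field \bar{\phi}.)
```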
], [ "The twist without auxiliary fields", "Armed with the results of the above calculations, it is straightforward to compute the twist of the theory in the BV formalism without auxiliary fields.", "Note that this is explicitly an “$L_\\infty $ ” twist, in the sense that we are deforming a Maurer–Cartan element that is not simply a BRST differential.", "As above, we will first reduce to holomorphic language, following (REF ) and (REF ) above.", "Then we will integrate out the auxiliary field; in holomorphic language, its equations of motion are [holo-Fterm] $\\begin{aligned}[c]\\bar{F}^i &= -W^i - \\epsilon _+ \\psi ^*_- + \\epsilon _- \\psi ^*_+, \\\\F_i &= - \\bar{W}_i - \\bar{\\epsilon }\\bar{\\psi }^*_i.\\end{aligned}$ Having done this, we can set the supersymmetry ghosts to the background values corresponding to the holomorphically twisted theory: $\\epsilon _+ = 1$ , and $\\epsilon _- = \\bar{\\epsilon }= 0$ .", "The analogue of (REF ), imposing (), is then [eqs:holoL-noF] $\\begin{aligned}[c]L_\\text{free} &\\rightarrow - \\partial \\bar{\\phi }\\wedge \\bar{\\partial }\\phi - \\bar{\\partial }\\bar{\\phi }\\wedge \\partial \\phi - i \\bar{\\psi }\\left( \\partial \\psi _- + \\bar{\\partial }\\psi _+ \\right) + (W^i + \\psi _-^{*i})\\bar{W}_i, \\\\L_\\text{int} &\\rightarrow - \\frac{1}{2} W^{ij} \\left( \\psi _{i+} \\psi _{j-} - \\psi _{i-} \\psi _{j+} \\right) - \\frac{1}{2} \\bar{W}_{ij} \\bar{\\psi }^i \\wedge \\bar{\\psi }^j - W^i \\bar{W}_i - \\bar{W}_i (W^i + \\psi ^{*i}_-).\\end{aligned}$ Notice that the terms containing the antifield $\\psi ^{*i}_-$ cancel in the sum of these two terms!", "The BV portion of the action correspondingly becomes [eqs:holoLaf-noF] $L_\\text{a.f.} \\rightarrow \\phi ^{*i} \\psi _{-i} + \\bar{W}_i \\psi ^{*i}_- - i \\bar{\\psi }^*_i \\bar{\\partial }\\bar{\\phi }^i.$ Here, we have simply set the antifields of $F$ to zero, as we did above in the full theory.", "Note that the term in the BV action quadratic in antifields plays no role in the twisted theory in this dimension, since it contains both $\\epsilon $ and $\\bar{\\epsilon }$ , and is therefore set to zero on any background corresponding to a twist.", "In this sense, the $L_\\infty $ structure is lost 
in the holomorphic theory.", "It will, of course, play an important role in theories where the entire action of supertranslations is gauged—i.e., in the coupling of chiral matter to supergravity.", "Furthermore, the portion of the complex that was contractible before is not completely contractible now!", "The pair $\phi , \psi _-$ still form an acyclic boson–fermion pair, and can be removed; the resulting total BV action is [BVaction-tot] L = - i i +i - 12 Wij i j + Wi *i- - i *i i.", "The quadratic term $|W_i|^2$ disappears because $\phi $ still belongs to an acyclic pair.", "As above, the fields that survive to the twist can still be identified with two copies of the Dolbeault complex, but the components of the antifield that cannot be integrated out now play the roles of the auxiliary fields.", "(For the assignments of $U(1)$ charges, see below in Table REF .)", "We identify = (, , -*),       = (+, *, *).", "As above, the antifield of $\gamma ^i$ is $\beta ^{2-i}$ .", "Making these identifications and integrating once by parts, we can rewrite () as [BVaction-ident] $L = i \langle \beta , \bar{\partial }\gamma \rangle + [W(\gamma )]^{\text{top}}$ , in perfect agreement with the result above."
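Since the fields that survive the twist assemble into copies of the Dolbeault complex, everything downstream rests on the nilpotency $\bar{\partial }^2 = 0$ of the Dolbeault differential. As a concrete aside, this is easy to check symbolically for $(0,*)$-forms on $^2$ with polynomial coefficients; the dictionary encoding below is our own minimal toy, not machinery from the text:

```python
import sympy as sp

zb1, zb2 = sp.symbols('zb1 zb2')  # stand-ins for the antiholomorphic coordinates

# A (0,*)-form on C^2 (holomorphic dependence suppressed) is encoded as a dict
# keyed by the subset of {1, 2} of dzbar's it contains:
#   {(): f0, (1,): f1, (2,): f2, (1, 2): f12}
# representing f0 + f1 dzb1 + f2 dzb2 + f12 dzb1 ^ dzb2.

def dbar(form):
    """Dolbeault differential: wedge with dzbar_i and differentiate, with signs."""
    f0 = form.get((), sp.Integer(0))
    f1 = form.get((1,), sp.Integer(0))
    f2 = form.get((2,), sp.Integer(0))
    return {
        (1,): sp.diff(f0, zb1),
        (2,): sp.diff(f0, zb2),
        # the dzb1 ^ dzb2 component picks up a relative sign from reordering
        (1, 2): sp.diff(f2, zb1) - sp.diff(f1, zb2),
    }

# Example (0,*)-form with polynomial coefficients:
omega = {(): zb1**3 * zb2**2, (1,): zb1 * zb2, (2,): zb2**4}
ddbar = dbar(dbar(omega))  # every component vanishes identically
```

The vanishing of `ddbar` is just the equality of mixed partials, which is the sign-free shadow of $\bar{\partial }^2 = 0$ in complex dimension two.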
], [ "Twisting in the pure spinor superfield formalism", "In this section, we arrive at the twist of the chiral multiplet in yet a third way, using the pure spinor superfield formalism. Recall that superfields are elements of the coordinate ring of super-Minkowski space, which (by definition) is a free graded commutative algebra: Ø() = Ø(V) \otimes \Lambda [\theta , \bar{\theta }].", "Here the $\theta \in S_+$ are odd coordinates; we will denote even coordinates by $x^\mu \in V$ , $\mu = 1,\ldots ,4$ .", "The space of superfields carries the obvious representation of $(4)$ .", "However, we will find it convenient to reduce immediately to ${u}(2)$ -equivariant language, as we are interested in considering the holomorphic twist.", "The space of superfields then looks like $[z_i , \bar{z}_i ;\, \theta ^+ , \theta ^- , \bar{\theta }\,]$ ,    $i = 1,2$.", "The ${u}(2)$ representations are the obvious ones: $\mathbf{2}_{-1}, \mathbf{2}_{1};\; \mathbf{1}_{1}, \mathbf{1}_{-1}, \mathbf{2}_{0}$.", "Note that we take the momentum operator $\partial /\partial z$ , rather than $z$ itself, to transform in the fundamental of $U(2)$ .", "The algebra of supertranslations acts on the space of superfields by the left and right regular representations, which mutually commute: [QD-SO(4)] $[c]Q_\alpha &= i \frac{\partial }{\partial \theta ^\alpha } - \sigma ^\mu _{\alpha \dot{\beta }} \bar{\theta }^{\dot{\beta }} \frac{\partial }{\partial x^\mu },\\\bar{Q}_{\dot{\beta }} &=- i \frac{\partial }{\partial \bar{\theta }^{\dot{\beta }}} + \sigma ^\mu _{\alpha \dot{\beta }} \theta ^{\alpha } \frac{\partial }{\partial x^\mu },$        $[c]D_\alpha &= \frac{\partial }{\partial \theta ^\alpha } - i \sigma ^\mu _{\alpha \dot{\beta }} \bar{\theta }^{\dot{\beta }} \frac{\partial }{\partial x^\mu },\\\bar{D}_{\dot{\beta }} &= - \frac{\partial }{\partial \bar{\theta }^{\dot{\beta }}} + i \sigma ^\mu _{\alpha \dot{\beta }} \theta ^{\alpha } \frac{\partial }{\partial x^\mu }.$ It is easy to reduce these expressions to holomorphic language, and the results look like [QD-U(2)] $[c]&Q_+ = i \frac{\partial }{\partial \theta ^-} - \bar{\theta }\wedge \frac{\partial }{\partial z},\\&Q_- = i \frac{\partial }{\partial \theta ^+} - \bar{\theta }\wedge \frac{\partial }{\partial \bar{z}},\\&\bar{Q} =- i \frac{\partial }{\partial \bar{\theta }} + \theta ^- \frac{\partial }{\partial z} + \theta ^+ \frac{\partial }{\partial \bar{z}},$        $[c]&D_+ = 
\frac{\partial }{\partial \theta ^-} - i \bar{\theta }\wedge \frac{\partial }{\partial z},\\&D_- = \frac{\partial }{\partial \theta ^+} - i \bar{\theta }\wedge \frac{\partial }{\partial \bar{z}},\\&\bar{D} = - \frac{\partial }{\partial \bar{\theta }} + i \theta ^- \frac{\partial }{\partial z} + i \theta ^+ \frac{\partial }{\partial \bar{z}}.$ As we reviewed above in §REF , the chiral superfield in the pure spinor superfield formalism can be obtained from the $Ø(Y)$ -module $\Gamma = [u_+,u_-]$ , and is then isomorphic to H*(Ø() , ) Ø(S-).", "The action of the supersymmetry generators can be obtained by throwing out nullhomotopic terms from () above: $Q_+ = - \bar{\theta }\wedge \frac{\partial }{\partial z}, \quad Q_- = - \bar{\theta }\wedge \frac{\partial }{\partial \bar{z}}, \quad \bar{Q} = -i \frac{\partial }{\partial \bar{\theta }}.$", "This makes it apparent that $Q_-$ is nothing other than the Dolbeault differential, and in fact that the differential obtained from generic values $(\epsilon _+,\epsilon _-)$ is just the Dolbeault differential for a deformation of the complex structure on $$ .", "We will return to this point later on.", "The $\bar{Q}$ differential, on the other hand, is the same cancelling differential we saw above." ], [ "Gauge interactions and general $=1$ theories", "In this section, we compute the twist of the four-dimensional $=1$ vector multiplet.", "Having done this, we will give a general description (Proposition REF ) for the holomorphic twist of any four-dimensional supersymmetric theory; this applies just as well to the minimal (holomorphic) twist of theories with more supersymmetry, since they are special examples of $=1$ theories.", "While we will not explicitly compute characters for gauge theories in this work, we will use our description of the gauge multiplet in the discussion of the twisted flavor current multiplet below.", "The vector multiplet consists of fields $(A, \lambda , \bar{\lambda }, D)$ of ghost degree zero, where $A \in \Omega ^1(^4) \fg $ is a connection one-form with values in the Lie algebra $\fg $ ; $(\lambda ,\bar{\lambda }) \in C^\infty (^4) \fg \Pi \left( S_+ \oplus S_- \right)$ are a pair of $\fg $ -valued spinors of opposite chirality; $D \in C^\infty (^4) \fg $ is an auxiliary field.", "In 
addition, there is a ghost field $c \in C^\infty (^4) \fg $ of ghost degree $-1$ .", "This data compiles together to define a super dg Lie algebra where the differential is of the form $\begin{tikzcd}[row sep = tiny]{0} & & {1} \\C^\infty (^4) \fg [rr] & & \Omega ^{1} (^4) \fg \\& & C^\infty (^4) \fg \Pi \left( S_+ + S_- \right) \\& & C^\infty (^4) \fg ,\end{tikzcd}$ and the Lie bracket is extended from the matrix commutator.", "Note that it is the shift by one of the fields, rather than simply the fields, that carry a dg Lie structure; as such, the grading in (REF ) above is shifted by one from the natural grading on fields.", "The Chevalley–Eilenberg complex of this dg Lie algebra, which incorporates the shift by one, returns the operators of the theory, together with their BRST differential.", "In this diagram, the outer two lines of fields are of even parity and the middle line is of odd parity (although note that the homological degree means that $c$ has odd parity overall).", "The horizontal degree is the BRST degree.", "Let us now pass to the dual description, in which gauge transformations on fields are represented by a Chevalley–Eilenberg differential on operators.", "The internal differential of degree $+1$ then acts by [eq:internal-diff] $[c]\kappa c &= [c, c], \\\kappa A &= d c + [A, c],\\\kappa \lambda &= [\lambda , c], \\\kappa D &= [D,c].$ (Note that we are abusing notation slightly, by using the same letters in () for operators as we used for the corresponding fields.)", "The cohomology of this differential accomplishes the task of passing to the gauge-invariant sector of the theory.", "The action of $=1$ supertranslations is the familiar one for the $=1$ vector multiplet.", "This action can be represented by a differential, as we did for the chiral multiplet above, as follows: $\begin{aligned}[c]\delta c &= 0 , \\\delta A &= \bar{\epsilon }\lambda + \epsilon \bar{\lambda }, \\\delta \lambda &= i \epsilon F^+ + \epsilon D, \\\delta D &= i 
\\left( -\\bar{}\\lambda + \\bar{\\lambda }\\right)\\end{aligned}$ As above, we need to either restore the obvious $a^\\mu \\partial _\\mu $ terms, or set one of $$ or $\\bar{}$ to zero (i.e., restrict to the nilpotence variety).", "The notation deserves a word of explanation: When we here write the product $S_+ \\otimes S_-$ , as in $\\bar{}\\lambda $ , we are implicitly identifying it with the vector representation.", "Furthermore, in the variation of $\\lambda $ , we have identified (S+,S+) S+ S+S+ S+, using the antisymmetric pairing on the Weyl spinor; this representation contains the self-dual two-form, denoted by $F_+$ .", "Finally, in the last line, the vector, spinor, and conjugate spinor are contracted to form a scalar.", "To twist this multiplet, we proceed as in §REF : first, we choose a complex structure on $M$ , and perform the $U(2)$ -equivariant decomposition of the fields; then, we set $$ to a value corresponding to an appropriate point on the nilpotence variety.", "The $U(2)$ decomposition of the transformations (REF ) is $\\begin{aligned}[c]\\delta c &= 0, \\\\\\delta A &= \\bar{}\\lambda _+ + _+ \\bar{\\lambda }, \\\\\\delta \\bar{A} &= \\bar{}\\lambda _- + _- \\bar{\\lambda }, \\\\\\delta \\lambda _+ &= _+ (D + i F_0) + i_- F_2, \\\\\\delta \\lambda _- &= _- (D + i F_0) + i_+ F_{-2}, \\\\\\delta \\bar{\\lambda }&= \\bar{}D + F^- \\bar{}, \\\\\\delta D &= i \\left( - \\bar{}\\wedge \\left( \\lambda _- + \\lambda _+ \\right)+ _+ \\wedge \\bar{\\lambda }+ _- \\wedge \\bar{\\lambda }\\right).\\end{aligned}$ Here $F$ with a subscript labels the three scalar components of the self-dual two-form, under the $U(2)$ -equivariant decomposition, with their $U(1)$ charges.", "As above, to read off the twisting differential, we can just set $_+$ to one and all other parameters to zero.", "When we have done this, it is immediate that $(A,\\bar{\\lambda })$ form an acyclic pair and can be discarded.", "Furthermore, $D$ is closed under the differential, but it 
is set equal to a component $\partial \wedge \bar{A}$ of the field strength by the image of $\lambda _+$ , so that these two generators can also be discarded.", "To obtain the full differential of the twisted theory, we will now have to add back in the original internal differential ().", "The result is $\begin{aligned}[c]\delta c &= [c,c], \\\delta \bar{A} &= \bar{\partial }c + [\bar{A},c], \\\delta \lambda _- &= \left( \bar{\partial }+ \bar{A} \right) \wedge \bar{A} + [\lambda _-,c].\end{aligned}$ To sum up, the fields of the twisted vector multiplet assemble into a single copy of the Dolbeault complex, $\mathcal{A} \in \Omega ^{0,*}(M,{g})[1]$ , but shifted so that the $(0,1)$ form—which arises from the physical gauge field—appears in ghost number zero.", "The differential, however, is slightly different: it reads $\delta \mathcal{A} = d \mathcal{A} + [\mathcal{A},\mathcal{A}]$ , where $d$ is the de Rham differential.", "This arises from a BV action of the type $L = b \wedge \left( d \mathcal{A} + [\mathcal{A},\mathcal{A}] \right) = b \wedge F$ , where $b \in \Omega ^{2,*}(M,{g}^\vee )$ is the antifield multiplet.", "The physical values of unbroken $U(1)$ symmetries are given in Table REF ; a similar table for the chiral multiplet is displayed below (Table REF ).", "Note that in our conventions, the $R$ -charges of the fields $(c, A, \lambda , \bar{\lambda }, D)$ read $(0, 0, -1, 1, 0)$ for the untwisted multiplet.", "Table: Gradings on the twist of the vector multiplet.", "The differential has twisted bidegree $(1,0)$.", "(Compare Table  for the chiral multiplet.", ")We can summarize the results of this section with the following proposition, giving a tidy description of the holomorphic twist of any four-dimensional supersymmetric theory in complete generality: [Compare ] The holomorphic twist of a general $=1$ theory in four dimensions on a Kähler manifold $M$ , with gauge Lie algebra ${g}$ and chiral matter transforming in a representation $V$ of ${g}$ , produces the holomorphic BV theory whose fields are $\begin{aligned}[c]\mathcal{A} &\in \Omega 
^{0,*}(M,{g})[1], \\b &\in \Omega ^{2,*}(M,{g}^\vee ),\end{aligned}\qquad \begin{aligned}[c]\gamma &\in \Omega ^{0,*}(M,V), \\\beta &\in \Omega ^{2,*}(M,V^\vee )[1].\end{aligned}$ The dynamics of the theory are specified by the BV action $\mathfrak {L} = \langle b, \bar{\partial }\mathcal{A} + [\mathcal{A},\mathcal{A}] \rangle + \langle \beta , (\bar{\partial }+ \mathcal{A}) \gamma \rangle ,$ encoding the minimal coupling of the $\beta \gamma $ system to holomorphic gauge theory.", "On a Calabi–Yau manifold, we can also add gauge-invariant superpotential interactions, in the form discussed previously: $\mathfrak {L}_W = W(\gamma ) \wedge \Omega ,$ where $\Omega $ is the Calabi–Yau form.", "In , the twist of pure gauge theory is computed using a presentation of the theory in the first-order formalism of super Yang–Mills.", "We note that, precisely in complex dimension two, the holomorphic twist of the vector multiplet is closely related to holomorphically twisted matter.", "In general, the $\beta \gamma $ system on an (ungraded) vector space has operators supported in degrees zero and $(d-1)$ , while one expects the vector multiplet to have operators supported in degree one, with antifields in degree $(d-2)$ .", "Precisely in complex dimension two, these pairs coincide.", "Note, however, that it is probably more accurate to think of the twisted vector multiplet as related to the $\beta \gamma $ system on ${g}[1]$ , rather than as an ungraded $\beta \gamma $ system with the roles of fields and antifields reversed.", "It is a pleasant exercise, performed in , , to check that the dimensional reduction of the holomorphic twist of ten-dimensional supersymmetric Yang–Mills theory (which is the holomorphic Chern–Simons theory with fields $\Omega ^{0,*}(^5,{g})$ ) is the twist of maximally supersymmetric Yang–Mills in four dimensions.", "From that perspective, the $$ -grading is broken to $/2$ due to the presence of a cubic superpotential, which originates from a component of 
the ten-dimensional cubic Chern–Simons interaction." ], [ "BV quantization of the holomorphic twist", "In this section, we turn to the quantization of the $\beta \gamma $ system on $^2$ .", "One of the advantages of formulating the holomorphic twist of the supersymmetric theory in the BV formalism is that there is a natural BV quantization.", "In fact, for every free BV theory there is a unique quantization obtained by deforming the classical BV differential by the BV Laplacian $\hbar \Delta $ given by the contraction with the $(-1)$ -shifted symplectic form defining the classical theory.", "Even in the case of a superpotential, we will see that no quantum corrections arise.", "Indeed, the quantization still exists uniquely.", "We will see how this works at the level of holomorphic local operators, as we introduced in §REF ." ], [ "A recollection of the quantum BV formalism", "Classically, we have recalled that in the BV formalism a theory is given by the data of a complex of BV fields $= T^*[-1] $ together with a BV action $\mathfrak {S} = \int \mathfrak {L}$ satisfying the classical master equation.", "We study the BV quantization of the system through the quantization of its observables.", "As we have already mentioned, the full theory of quantization of the observables of a BV theory has been developed in , , using the language of factorization algebras.", "While we do not use the full theory here, we remark that the quantization takes place locally on the spacetime manifold; that is, for each open set.", "The main result of is that these quantizations glue together according to the axioms of a factorization algebra.", "Schematically, the BV formalism suggests that a quantization of a classical theory is constructed in two steps: tensoring the underlying graded vector space of observables $$ with $[\!", "[\hbar ]\!", "]$ and modifying the differential to $\lbrace \mathfrak {S}^q,-\rbrace +\hbar \Delta $ where $\Delta $ is the BV Laplacian and 
$\mathfrak {S}^q = \mathfrak {S} + O(\hbar )$ is a quantum action satisfying the quantum master equation $\lbrace \mathfrak {S}^q, \mathfrak {S}^q\rbrace + \hbar \Delta \mathfrak {S}^q = 0 .$ Naively, this prescription is incomplete for several reasons.", "First, $\Delta $ is not defined on all of the observables; the naive formula involves an ill-defined pairing of distributions.", "There is a natural way to circumvent this difficulty by introducing a mollification of $\Delta $ instead.", "This approach is developed in a very broad context in Chapter 9 of , where one introduces a scale-dependent BV Laplacian $\Delta _L$ . More generally, one can associate a BV Laplacian to every parametrix.", "Secondly, the same infinities that plagued the naive BV operator persist in defining the quantum action $\mathfrak {S}^q$ .", "For a generic theory, when introducing an interaction term one must properly regularize the action functional by introducing counterterms.", "Even if one can introduce counterterms to get a well-defined loop expansion of the quantum action, it may still fail to satisfy the quantum master equation."
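The interplay between the bracket and the BV Laplacian in the quantum master equation can be seen in a zero-dimensional toy model: one even field $x$ with one odd antifield $x^*$, so that every functional is $f(x) + g(x)\,x^*$. This sketch is our own hypothetical illustration of the formulas above, not the field-theoretic setting of the text:

```python
import sympy as sp

x = sp.symbols('x')

# A functional F = f(x) + g(x)*xs (with xs odd, xs**2 = 0) is encoded as the
# pair (f, g). In this toy model the BV structure reduces to:
#   {F, G}   -> f' k + g h'   (the xs-degree-zero part of the bracket)
#   Delta F  -> g'            (the second derivative d^2 F / dx dxs)

def bracket(F, G):
    f, g = F
    h, k = G
    return sp.expand(sp.diff(f, x) * k + g * sp.diff(h, x))

def laplacian(F):
    f, g = F
    return sp.diff(g, x)

# A pure "superpotential" term solves both the classical and the quantum
# master equation on the nose: no counterterms, no hbar-corrections.
S_free = (x**4, sp.Integer(0))

# By contrast, an antifield coupling xs*V(x) can produce an hbar-obstruction
# Delta S = V'(x), the toy analogue of a quantum anomaly in the QME.
S_def = (sp.Integer(0), x**2)
```

For `S_free` both obstructions vanish identically, loosely mirroring the claim that superpotential terms require no $\hbar $-corrections, while `S_def` shows how a genuine antifield coupling can fail the quantum master equation at order $\hbar $.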
], [ "Exactness in the holomorphic theory", "For the holomorphic theory we consider, there is a simple combinatorial reason why no such counterterms arise.", "No loop diagrams can ever contribute to the quantum action; therefore, the classical BV action will also automatically satisfy the quantum master equation!", "The $\beta \gamma $ system on any complex surface $X$ in the presence of a holomorphic potential $W \in (V^\vee )$ is exact at tree level.", "In particular, a quantization of the theory exists locally on $^2$ and on Hopf surfaces $X_{q_1,q_2} = \left(^2 \setminus 0\right) / (z_1, z_2) \sim (q_1 z_1, q_2 z_2)$ .", "The quantum theory is constructed out of weights of diagrams, which are built from vertices, labeled by $I_W$ , and edges, labeled by the propagator.", "The weights are filtered with respect to the parameter $\hbar $ , which counts the genus of the graph.", "Since the propagator arises from the BV pairing on the space of fields, it is symbolically of the form $\beta \xrightarrow {\ P\ } \gamma $ Here, we are ignoring any regularization, since it will play no role for us.", "Since the vertices of the diagrams, labeled by $I_W$ , are functions only of the fields $\gamma $ , we see that only trees will appear in the expansion.", "This proposition implies that the quantization of the holomorphic theory is easy to understand.", "We are most interested in its implication at the level of local observables.", "Just as in the non-BV case, there is a notion of local observables in the classical BV formalism.", "We define the cochain complex of local BV observables at $x \in M$ as $_x := \left( \hat{}(J^\infty B |_x)^\vee , \lbrace \mathfrak {S},-\rbrace \right)$ where $B$ is the vector bundle on $M$ underlying the sheaf $$ .", "To quantize, we adjoin the parameter $\hbar $ , and begin the construction of the quantum action by adding the BV 
Laplacian $\\hbar \\Delta $ .", "For the holomorphic theory, the quantum action receives no quantum corrections and we can disregard terms of order $\\hbar ^n$ for $n \\ge 1$ .", "Also, it is easy to see that the BV Laplacian vanishes identically on the holomorphic local operators.", "Thus, the quantum holomorphic operators for the $\\beta \\gamma $ system in the presence of a superpotential is of the form $\\left(_0^\\text{hol}[\\!", "[\\hbar ]\\!]", ", \\lbrace L_W,-\\rbrace \\right),$ where $_0^\\text{hol}$ is the space of classical holomorphic operators defined in §REF .", "When one couples the $=1$ chiral multiplet to a gauge field, where the matter takes values in the adjoint representation, it is known that there are nonzero quantum corrections to the superpotential in perturbation theory .", "We hope to return to seeing these perturbative corrections at the level of the holomorphic twist in future work." ], [ "Holomorphic characters", "Consider a quantum field theory defined on affine space $^n$ .", "For simplicity, we assume this theory is translation invariant and we denote by $_0$ the local operators of the theory supported at the origin $0 \\in ^n$ .", "In the case that the theory is conformal, there is the state-operator correspondence, which relates local operators to states in the Hilbert space on $S^{n-1}$ .", "In the holomorphic case on $^d$ , we will find evidence (at least for $d=2$ ) for such a correspondence by relating a certain $q$ -character of the local operators to the partition function over a class of complex manifolds diffeomorphic to $S^{2d-1} \\times S^1$ .", "This is a generalization to the situation in chiral CFT whereby the $q$ -character of a vertex algebra is related to the partition function along an elliptic curve.", "There is a certain class of non-local operators that will play an essential role for us.", "To see them, note that we can restrict the theory to the submanifold $^n \\setminus 0 \\subset ^n .$ The radius of a 
point in punctured affine space gives a natural projection $r : ^n \setminus 0 \rightarrow _{>0}$ .", "We can then reduce the theory along the map $r$ to get a theory of quantum mechanics defined on the positive line $_{>0}$ .", "In other words, since $^n \setminus 0 \cong S^{n-1} \times _{>0}$ , we can understand this as compactifying the theory along the $(n-1)$ -sphere.", "We assume that compactification along $S^{n-1}$ results in a topological theory.", "For topological or holomorphic theories, this is certainly true, since the translations that survive the holomorphic twist cannot intersect nontrivially with the translations that survive compactification. In fact, there is the small caveat that we must actually consider a dense algebraic subspace of operators of the resulting quantum mechanics that is actually topological.", "Denote by $$ the local operators of the compactified theory on $_{>0}$ .", "A topological quantum mechanics is nothing other than a single associative algebra; in our setting, the one-dimensional OPE endows $$ with the natural structure of a (homotopy) associative algebra.", "From the perspective of the full theory on $^n$ , the algebra $$ actually contains non-local operators: it consists precisely of the operators supported on spheres $S^{n-1}$ , which can originate either from local operators in the full theory or from nonlocal operators wrapped on a nontrivial cycle.", "The operator product of these sphere operators induced by radial ordering endows $$ with the aforementioned associative product.", "Moreover, the operator product of $S^{n-1}$ -operators with local operators implies that $_0$ is a module for the algebra $$ .", "Given any algebra $A$ and a module $M$ , finite dimensional over $$ , one defines the character by $a \mapsto \operatorname{tr}_M(\exp (a))$ thought of as a map $HH_0(A) \rightarrow $ , where $HH_0(A)$ is the zeroth Hochschild homology.", "In our situation, for observables we define the local character as the 
character of the $$ -module $_0$ $_{} (_0) : HH_0() \\rightarrow [\\!", "[\\hbar ]\\!]", ".$ Of course, the space of local operators $_0$ is very rarely finite dimensional, so the above definition needs to be properly interpreted.", "In practice, there are additional gradings, or symmetries, present in a QFT which allow one to define the graded dimension of $_0$ .", "For instance, for a chiral conformal field theory, the conformal structure allows one to define the $q$ -character of local operators.", "For general holomorphic theories, there is a natural generalization, see Definition REF .", "There is another interpretation of this character from the point of view of quantum mechanics.", "Since the original theory is defined on all of $^n$ , the compactified theory on $_{>0}$ admits a natural boundary condition extending it to a theory on $_{\\ge 0}$ .", "One can describe this boundary condition by saying that the boundary operators supported at $0 \\in _{\\ge 0}$ —i.e., functions on field configurations compatible with the boundary conditions—are isomorphic to the local operators $_0$ of the original theory.", "In fact, the algebra of local operators in the quantum mechanics is essentially the (differential graded) Weyl algebra, formed from the symplectic vector space which is the cotangent bundle to (the spectrum of) holomorphic local operators in the upstairs theory.", "One can see this by considering the Dolbeault cohomology of punctured $d$ , which has classes in degree zero and $d-1$ that are paired by integration.", "(We discuss this further below in §.)", "The fields of the $\\beta \\gamma $ system on this geometry, after passing to the cohomology of $\\bar{\\partial }$ , are $Z^\\bullet \\otimes H^*_{\\bar{\\partial }}(d \\setminus 0)$ , which has a symplectic pairing in degree zero, and can be thought of as the cotangent bundle to $Z^\\bullet \\otimes H^*_{\\bar{\\partial }}(d)$ .", "The operator product discussed above on local operators is precisely 
the quantization of classical local operators with respect to the degree-zero Poisson bracket structure.", "As is familiar from elementary quantum mechanics, such algebras usually admit unique irreducible unitary representations, which are constructed by taking functions on a Lagrangian subspace of the relevant symplectic vector space.", "It is immediate to see that the operators that are local upstairs define a canonical choice of such a Lagrangian, akin to the zero section of a cotangent bundle.", "Hilbert spaces are associated to boundaries, and this choice of Lagrangian is to be interpreted as a choice of boundary condition in the manner discussed above.", "The resulting Hilbert space, over which the trace is taken, then depends on a choice of boundary condition, i.e.", "Lagrangian, analogous to polarization data in geometric quantization.", "Finally, the partition function of the original theory on $S^{n-1} \\times S^1$ can be thought of as a trace over the Hilbert space of this quantum-mechanical system, which is the corresponding module of $$ ." 
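To make the Weyl-algebra picture tangible, one can truncate the simplest one-variable model: take $[z]$ as the module, with multiplication by $z$ and $d/dz$ as the paired modes. The cutoff and the numerical fugacity below are our own illustrative choices, not data from the text:

```python
import numpy as np

N = 8  # keep polynomial degrees 0, ..., N-1 in z

# Matrices of d/dz (lowering) and multiplication by z (raising) on the
# truncated module C[z]/(z^N), in the monomial basis 1, z, ..., z^{N-1}.
a = np.diag(np.arange(1.0, N), k=1)      # (d/dz) z^n = n z^{n-1}
adag = np.diag(np.ones(N - 1), k=-1)     # z * z^n = z^{n+1} (cut off at the top)

# Canonical commutator [d/dz, z] = 1, up to the unavoidable truncation
# artifact in the top-degree corner.
comm = a @ adag - adag @ a

# Character tr q^{L_0} with L_0 = z d/dz, evaluated as a trace over the
# truncation; it reproduces the geometric series 1/(1-q) up to order q^N.
q = 0.5
char_trace = float(sum(q**n for n in range(N)))
char_geometric = (1 - q**N) / (1 - q)
```

The defect in the last diagonal entry of `comm` is exactly the price of replacing the infinite-dimensional module by a finite truncation; away from the corner the Weyl relation holds on the nose.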
], [ "The local character", "We now turn to the holomorphic situation.", "For a holomorphic theory (like the ones coming from twists of $=1$ in dimension four) on $^d$ , we have defined the holomorphic local operators $^\\text{hol}_w$ at $w \\in ^d$ .", "These are simply on-shell local operators of the underlying free BV theory, but as a cochain complex are equipped with the differential $Q^\\text{hol} + \\lbrace I^\\text{hol}, -\\rbrace $ where $Q^\\text{hol}$ is a linear holomorphic differential operator and $I^\\text{hol}$ is the holomorphic interaction.", "Let $^\\text{hol}_0$ be the holomorphic local operators of a holomorphic theory on $^d$ as defined in Definition REF .", "The bare ${\\bf q}$ -character is defined by the formal series $\\chi ({\\bf q}) = \\sum _{j_1,\\ldots ,j_d \\in } q_{1}^{j_1} \\cdots q_{d}^{j_d} \\dim (^{(j_1,\\ldots ,j_d)}_0) \\in [\\!", "[q_1^{\\pm }, \\ldots , q_d^{\\pm }]\\!]", ".$ Here, $^{(j_1,\\ldots ,j_d)}$ labels the $(j_1,\\ldots ,j_d)$ -eigenspace corresponding to the action of the maximal torus $T^d \\subset U(d)$ .", "There is the following algebraic way to think about this ${\\bf q}$ -character.", "Because the space of local operators is a $U(d)$ -representation, there is a map of algebras $U((d)) \\rightarrow (_0) .$ The character only depends on the Cartan Lie subalgebra $^d \\subset (d)$ which we think of as being generated by the scaling operators $L_0^{i} = z^i \\frac{\\partial }{\\partial z_i} \\;\\;\\; , \\;\\;\\; 1 \\le i \\le d$ where no summation convention is used.", "Restricting to the Cartan, we obtain a map of algebras $\\rho : U(^d) \\rightarrow (_0)$ .", "By definition, this map factors through endomorphisms of the $T^d$ -eigenspaces $U(^d) \\rightarrow \\bigoplus _{(j_1,\\ldots ,j_d)} (^{(j_1,\\ldots ,j_d)}_0) .$ The character is obtained from the induced map at the level of Hochschild homology: $HH_*(\\rho ) : HH_*(U(^d)) \\rightarrow \\bigoplus _{(j_1,\\ldots ,j_d)} HH_*((^{(j_1,\\ldots ,j_d)}_0)) .$ 
Indeed, if we assume that each $^{(j_1,\ldots ,j_d)}_0$ is finite dimensional, Morita invariance implies that this map of graded vector spaces is given by a single linear map $HH_*(\rho ) : HH_*(U(^d)) \rightarrow \bigoplus _{(j_1,\ldots ,j_d)} HH_0((^{(j_1,\ldots ,j_d)}_0)) = \bigoplus _{(j_1,\ldots ,j_d)} .$ Choosing an isomorphism $\oplus _{(j_1,\ldots ,j_d)} = [q_1^\pm ,\ldots , q_d^{\pm }]$ we witness the ${\bf q}$ -character above as the image of $1 \in HH_*(U(^d))$ under this map $\chi _{\bf q} (_0) = HH_*(\rho ) (1) .$ More concretely, we can express the character as $\chi _{\bf q} (_0) = \operatorname{tr}_{_0} (q_1^{L_0^1} \cdots q_d^{L_0^d})$ .", "When a holomorphic theory possesses extra symmetries, there are equivariant versions of the $q$ -character.", "For instance, if the theory has an additional $U(1)$ -symmetry we can define the multi-variable character $\chi ({\bf q} , u) = \sum _{j_1,\ldots ,j_d \in } \sum _{k \in } q_{1}^{j_1} \cdots q_{d}^{j_d} u^k \dim (^{(j_1,\ldots ,j_d), k}_0) \in [\!", "[q_1^{\pm }, \ldots , q_d^{\pm }, u]\!]", ".$ where $^{(j_1,\ldots ,j_d), k}_0$ is the $(j_1, \ldots , j_d), k$ -eigenspace of the holomorphic local operators with respect to $T^d \times U(1) \subset U(d) \times U(1)$ .
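The trace formula for the character can be exercised on the simplest possible example: the monomial basis of holomorphic functions on $^2$, on which the maximal torus acts diagonally. The truncation below is ours, chosen only to keep the computation finite:

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
N = 6  # keep monomials z1^a z2^b with a, b < N

# L_0^i = z_i d/dz_i is diagonal on monomials, so the operator
# q1^{L_0^1} q2^{L_0^2} is diagonal with entries q1^a q2^b, and its trace
# is just a sum over the basis.
basis = [(a, b) for a in range(N) for b in range(N)]
character = sp.expand(sum(q1**a * q2**b for a, b in basis))

# The same trace, organized as a product of truncated geometric series,
# i.e. the finite version of 1 / ((1-q1)(1-q2)).
product_form = sp.expand(sum(q1**n for n in range(N)) * sum(q2**n for n in range(N)))
```

Each weight space here is one-dimensional, so the character simply records the lattice of monomials; tensoring with a vector space $V$ would multiply every coefficient by $\dim V$.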
], [ "Local operators of the free theory on $^2$", "We now turn our focus to a particular holomorphic theory: the free $\beta \gamma $ system on $^2$ .", "This will be our first calculation of a holomorphic character.", "Before we proceed, we present a description of the local operators of the theory on $^2$ .", "Recall that the BV fields of the $\beta \gamma $ system with values in a complex vector space $V$ are given by the Dolbeault complex on $^2$ with values in the vector space $V \oplus d^2z \cdot V^\vee [1] .$ When $V$ is ungraded, it is thus in cohomological degree zero, and $V^\vee $ in degree $(-1)$ .", "Fix a basis $\lbrace e_i\rbrace _{i=1}^{N = \dim (V)}$ for $V$ and let $\lbrace e^i\rbrace $ be the dual basis.", "Solutions to the classical equations of motion are parametrized by fields $\begin{aligned}[c]\gamma ^0_i &\in ^\text{hol}(^2) V , \\\beta ^{0;j}\, d^2z &\in \Omega ^{2,hol}(^2) V^\vee [1] = d^2z \cdot ^\text{hol}(^2) V^\vee [1].\end{aligned}$ We label the corresponding linear local holomorphic operators (supported at $w = 0 \in ^2$ ) with bold letters as $\begin{array}{ccclll}_{n_1,n_2; i} & : & \gamma ^0 & \mapsto & \frac{\partial ^{n_1}}{\partial z_1^{n_1}} \frac{\partial ^{n_2}}{\partial z_2^{n_2}} \gamma ^0_i (z=0) \\_{n_1+1, n_2+1}^j & : & \beta ^{0}\, d^2z & \mapsto & \frac{\partial ^{n_1}}{\partial z_1^{n_1}} \frac{\partial ^{n_2}}{\partial z_2^{n_2}} \beta ^{j} (z=0) ,\end{array}$ where $n_1,n_2 \ge 0$ and $i,j \in \lbrace 1,\ldots ,N\rbrace $ .", "We use the bold fonts $, $ to distinguish linear operators from their fields $\beta , \gamma $ .", "Note that the ghost degree of $_{n_1,n_2; i}$ is 0 and the ghost degree of $_{n_1+1, n_2+1}^j$ is $+1$ .", "Using this basis, it is immediate to verify the following description of local holomorphic operators.", "Let $_0^\text{hol}$ be the local holomorphic operators at $w=0$ of the free $\beta \gamma $ system on $^2$ .", "There is a graded isomorphism 
$_0^\\text{hol} \\cong \\left( ([\\![z_1,z_2]\\!]", "V)^\\vee \\oplus ([\\![z_1,z_2]\\!]", "V^\\vee )^\\vee [-1] \\right) [\\hbar ]$ which on linear generators sends $z_1^{-n_1}z_2^{-n_2} e_i + z_1^{-m_1} z_2^{-m_2} e^j \\mapsto _{n_1,n_2;i} + _{m_1+1,m_2+1}^j$ where $\\lbrace e^i\\rbrace $ is a basis for $V$ and $\\lbrace e_i\\rbrace $ is the dual basis.", "With this description of the local operators of the $\\beta \\gamma $ system on $^2$ in hand, we move on to present a formula for the character." ], [ "Symmetries in $4d$ {{formula:74eb5532-b956-4799-a345-feac98d3cabe}}", "We will present the character of the free $\\beta \\gamma $ system on $^2$ in two equivalent ways.", "The first is natural from the description of the theory as a holomorphic one.", "The other arises from most naturally from the description of the theory from the twist of $4d$ $=1$ supersymmetry." ], [ "The first description of the holomorphic character", "We can summarize the symmetries present in the free holomorphic theory on $^2$ as follows.", "For the various $U(1)$ symmetries, we use the notation $U(1)_{y}$ when we want to stress which variable for the Cartan, or fugacity, used in the expression of the character.", "The $U(2)$ symmetry, present in any holomorphic theory on $^2$ , whose character will decompose with respect to its Cartan $U(1)_{q_1} \\times U(1)_{q_2}$ that we label by $q_1,q_2$ ; The $U(1)_z$ -flavor symmetry.", "Here, $$ has weight $+1$ and $$ has weight $-1$ ; The $U(1)_u$ symmetry present on the BV complex corresponding to the ghost weight.", "Note that while the action has ghost degree zero, there are local operators of nontrivial ghost degree: The operator $$ has ghost degree $+1$ .", "For general free $\\beta \\gamma $ systems, these fugacities are the generalization to arbitrary complex dimension of the regraded fugacities used in the discussion of the elliptic genus in .", "Note that these are all symmetries of the classical BV theory.", "In fact, they all 
extend (uniquely) to symmetries of the quantum theory.", "The symmetry by $U(2) \times U(1)_z \times U(1)_u$ on the classical free $\beta \gamma $ system with values in the complex vector space $V$ lifts to a symmetry of the quantization.", "The differential on the quantum observables is of the form $\bar{\partial } + \hbar \Delta $ .", "The operator $\bar{\partial }$ is manifestly equivariant for the action of $U(2)$ .", "Since $U(1)_z \times U(1)_u$ does not act on spacetime, $\bar{\partial }$ trivially commutes with its action.", "Further, the action of $U(2)$ is through linear automorphisms, and since the BV Laplacian $\Delta $ is a second order differential operator, it certainly commutes with the action of $U(2)$ .", "Likewise, since $U(1)_z \times U(1)_u$ is compatible with the $(-1)$ -symplectic pairing, it automatically is compatible with $\Delta $ .", "In conclusion, each of the bulleted symmetries above extends by $\hbar $ -linearity to a symmetry of the quantum observables of the free theory.", "We now compute the local character with respect to the group $U(2)_{q_1,q_2} \times U(1)_z \times U(1)_u$ .", "The local character of the free $\beta \gamma $ system on $\mathbb{C}^2$ is equal to $\chi (q_1,q_2 ; z ; u) = \prod _{n_1, n_2 \ge 0} \frac{1 - z^{-1} u q_1^{n_1 + 1} q_2^{n_2 + 1}}{1 - z q_1^{n_1} q_2^{n_2}} \in \mathbb{C}[\![q_1^{\pm },q_2^{\pm }, z^{\pm } , u]\!] .$ The specialization $u = 1$ is well-defined and recovers the elliptic $\Gamma $ -function $\chi (q_1,q_2 ; z ; u) |_{u=1} = \Gamma ^\text{ell} (q_1,q_2 ; z) .$ For an introduction to the elliptic $\Gamma $ -function and other related hypergeometric series we refer to the textbook reference .", "For fixed $n_1,n_2 \ge 0$ , let $V^\vee _{n_1,n_2}$ denote the linear span of operators $\lbrace \boldsymbol{\gamma }_{n_1,n_2; i}\rbrace _{i=1}^N$ .", "As a vector space $V^\vee _{n_1,n_2} \cong V^*$ , but we want to remember the weights under $U(2)$ .", "Likewise, for $n_1 , n_2 > 0$ , let $V_{n_1,n_2} \cong V$ be the linear span of the 
operators $\\lbrace _{n_1,n_2}^j\\rbrace _{j=1}^N$ .", "The holomorphic local operators, then, decompose as $^\\text{hol}_{0} = \\left( \\left(\\bigoplus _{n_1,n_2 \\ge 0} V_{n_1,n_2}^*\\right) \\oplus \\left(\\bigoplus _{n_1,n_2 > 0} V_{n_1,n_2}[-1] \\right)\\right) [\\hbar ]$ The actions of the remaining symmetry groups are easy to read off.", "On $V_{n_1,n_2}^\\vee $ , the group $U(1)_{z} \\times U(1)_{u}$ acts by $(+1, 0)$ .", "On $V^\\vee _{n_1,n_2}$ , the group $U(1)_{z} \\times U(1)_{u}$ acts by $(-1, +1)$ .", "To compute the character of the local operators it suffices to compute it on the vector space $\\left( \\left(\\bigoplus _{n_1,n_2 \\ge 0} V_{n_1,n_2}^*\\right) \\oplus \\left(\\bigoplus _{n_1,n_2 > 0} \\oplus V_{n_1,n_2}[-1] \\right)\\right) \\cong \\left(\\bigoplus _{n_1,n_2 \\ge 0} V_{n_1,n_2}^*\\right) \\left(\\bigoplus _{n_1,n_2 > 0} V_{n_1,n_2} \\right) .$ We have used the convention that as (ungraded) vector spaces the symmetric algebra of a vector space in odd degree is the exterior algebra.For instance, if $W$ is an ordinary vector space, $(W[-1]) = \\Lambda (W)$ as ungraded vector spaces.", "We can further simplify the right-hand side as $\\bigotimes _{n_1, n_2 \\ge 0} \\left((V^*_{n_1,n_2})\\right) \\bigotimes \\bigotimes _{n_1,n_2 > 0} \\left((V_{n_1,n_2})\\right) .$ The character of the symmetric algebra $(V^\\vee _{n_1,n_2})$ contributes $\\frac{1}{1-z q_1^{n_1}q_2^{n_2}}$ and the character of $(V_{n_1,n_2})$ contributes $1- z^{-1} u q_1^{n_1+1}q_2^{n_2+1} .$ The formula for character in the statement of the proposition follows from the fact that the character of a tensor product is the product of the characters.", "Our expression for the character of the local operators of the $\\beta \\gamma $ system on $^2$ agrees with the partition function of the $= 1$ supersymmetric chiral multiplet on the manifold $S^3 \\times S^1$ , computed in , , , .", "For a direct calculation of the partition function in the holomorphically twisted theory, 
which agrees with our answer here, see below." ], [ "A physical expression of the character", "There is another useful way to decompose the character we have just computed.", "It will be useful later on once we introduce the superpotential.", "The variant amounts to decomposing the local operators with respect to the double cover $SU(2) \times U(1)$ of $U(2)$ .", "The symmetries are: The Lorentz $SU(2)_p$ symmetry whose Cartan we label by the coordinate $p$ .", "This action arises on operators through the fundamental action of $SU(2)$ on $\mathbb{C}^2$ ; The $U(1)_q$ symmetry arising from the grading ${\rm tw}_2$ in Table REF .", "For this symmetry, $\boldsymbol{\gamma }_{n_1,n_2 ; i}$ has weight $n_1+n_2$ and $\boldsymbol{\beta }_{n_1,n_2}^i$ has weight $n_1+n_2 + 2$ .", "The $U(1)_u$ symmetry whose corresponding grading we denoted ${\rm tw}_1$ in Table REF (note that this is precisely the ghost degree in the holomorphic twist); The $U(1)_z$ -flavor symmetry.", "Here, $\boldsymbol{\gamma }$ has weight $+1$ and $\boldsymbol{\beta }$ has weight $-1$ .", "Table: $U(2)$ -equivariant gradings of fields.", "Here the gradings preserved after the twist are the stabilizers of the line $Q_- + s_0$ : these are the combinations $\text{tw}_1 = R + \text{gh}$ and $\text{tw}_2 = R + L$ . 
Adding s int s_\\text{int} further breaks the grading to tw 1 -tw 2 \\text{tw}_1 - \\text{tw}_2.The local character of the holomorphic twist of the free $4d$ $=1$ multiplet with respect to the symmetries above is $\\chi (p,q; z;u) \\prod _{m \\ge 0} \\prod _{\\ell = 0}^m \\frac{1 - u z^{-1} q^{m+2} p^{2 \\ell - m}}{1- z q^m p^{2 \\ell -m}}$ We will first decompose the weights with respect to the grading given by $U(1)_q$ .", "On linear operators, the decomposition is $\\left(\\bigoplus _{m \\ge 0} V_m^\\vee \\right) \\oplus \\left(\\bigoplus _{m \\ge 0} V_{m+2} \\right) [-1]$ Let us first compute the contribution of the local operators $\\left(\\bigoplus _{m \\ge 0} V_m^\\vee \\right)$ to the local character.", "Since this space has ${\\rm tw}_1$ grading zero, it suffices to compute the $SU(2)_p \\times U(1)_q \\times U(1)_z$ character.", "For each $m$ , $V_m^\\vee $ is an irreducible representation of $SU(2)$ , and hence we have a decomposition $\\prod _{m \\ge 0} \\chi _{SU(2)_p \\times U(1)_q \\times U(1)_z} \\left( (V_m^\\vee ) \\right) = \\prod _{m \\ge 0} \\sum _{k \\ge 0} z^k q^{km} \\chi _{SU(2)_p}\\left(^k(V_m^\\vee )\\right)$ The sum on the right hand side is the standard generating function for the determinant, so we can rewrite this as $\\prod _{m \\ge 0} \\frac{1}{\\det (1 - z q^m A)}$ where the determinant is taken in the $V^\\vee _m$ representation and $A$ is the $2 \\times 2$ matrix ${\\rm diag}(p, p^{-1})$ .", "To compute this determinant, we choose the basis $\\lbrace z_1^{\\ell } z_2^{m-\\ell }\\rbrace _{\\ell = 0}^{m}$ for $V_m^\\vee $ .", "Since, $A (z_1^{\\ell } z_2^{m-\\ell }) = p^{2\\ell -m} z_1^{\\ell } z_2^{m-\\ell }$ the expression for the character reduces to $\\prod _{m \\ge 0} \\prod _{\\ell = 0}^m \\frac{1}{1-z q^m p^{2 \\ell -m}} .$ Similarly, we can compute the contribution of $\\left(\\oplus _{m \\ge 0} V_{m+2}\\right)[-1]$ to the character which gives $\\prod _{m \\ge 0} \\prod _{\\ell = 0}^m \\left(1 - u z^{-1} q^{m+2} p^{2 \\ell 
- m}\\right)$ Note that the change of variables $q \\rightarrow (q_1q_2)^{1/2}$ and $p \\rightarrow (q_1/q_2)^{1/2}$ returns the expression for the character in Proposition REF .", "This is consistent with the fact that $SU(2) \\times U(1)$ , whose Cartan we labeled by $q,p$ , is a double cover of the group $U(2)$ , whose Cartan we labeled by $q_1,q_2$ .", "So far, we have treated the entire target $V$ as weight $+1$ with respect to the flavor symmetry $U(1)_z$ .", "If $\\dim _(V) = N$ , then the flavor symmetry is in fact $U(N)$ , and we can introduce a flavor fugacity for the entire Cartan subalgebra, thus enhancing the free character.", "Labeling the $i$ th fugacity by $z_i$ , $i=1,\\ldots , N$ , this enhanced free character becomes $\\prod _{i=1}^N \\chi (q_1,q_2; z_i ; u)$ ." ], [ "Partition function on Hopf surfaces", "In this section we show how the local character we have computed above is identical to the partition function of the holomorphic twist of $4d$ $=1$ chiral multiplet on a particular complex surface called a Hopf surface.", "We choose to focus on a class of Hopf surfaces which are diagonal.", "These compact complex surfaces are defined for any two complex numbers $q_1,q_2$ satisfying $1 < |q_1| \\le |q_2|$ by the quotient $X = \\left.", "\\left(^2 \\setminus 0\\right) \\;\\; \\right\\bad.", "\\;\\; \\sim $ where the relation is $(z_1,z_2) \\sim (q^{n}_1 z_1, q^n_2 z_2)$ for $n \\in $ .", "As a smooth manifold $X_{q_1,q_2}$ is diffeomorphic to $S^3 \\times S^1$ , and the Dolbeault cohomology is $H^{0,0}(X_{q_1,q_2}) = H^{0,1} (X_{q_1,q_2}) = H^{2,1}(X_{q_1,q_2}) = H^{2,2}(X_{q_1,q_2}) = $ with all other Dolbeault cohomology groups zero.", "In particular, $X_{q_1,q_2}$ is not Kähler.", "Our goal is to compare the formula for the local character of the holomorphic theory computed in the last section to the partition function of the theory on Hopf manifolds.", "The relation between the two quantities is evidence for a higher dimensional 
state-operator correspondence.", "There are some key differences between the usual CFT picture that we wish to point out.", "Firstly, in CFT one uses Weyl transformations to transform $\mathbb{R}^n$ to $S^{n-1} \times \mathbb{R}$ and then traces out the remaining direction to obtain the partition function on $S^{n-1} \times S^1$ .", "For us, the holomorphic theory on $\mathbb{C}^2$ restricts to one on $\mathbb{C}^2 \setminus 0$ which we can then descend to one on the complex manifold $X_{q_1,q_2} \cong S^3 \times S^1$ .", "On the other hand, if we perform the reduction in two stages: $\mathbb{C}^2 \setminus 0 {\cong } S^3 \times \mathbb{R}_{>0} \rightarrow S^3 \times S^1 = X_{q_1,q_2}$ then we can think about the partition function as related to Hochschild homology of the algebra obtained from the theory on $\mathbb{C}^2 \setminus 0$ .", "Thus, in the holomorphic case, the relation between the partition function and the trace of local operators is manifest.", "The calculations in this section are an explicit test of this relationship.", "The variables involved in the local character consist of $q_1,q_2$ , which label the Cartan of the $U(2)$ symmetry group acting on $\mathbb{C}^2$ by rotations.", "At the level of the partition function, these variables label the complex structure moduli of the Hopf manifold.", "The other local symmetry which we wish to match up with the partition function is the $U(1)_z$ -flavor symmetry.", "Globally, we can encode this symmetry by working with a background $U(1)$ connection, which by holomorphicity we can take to be of type $(0,1)$ .", "That is, we consider the free $\beta \gamma $ system on $X_{q_1,q_2}$ in the presence of a background $(0,1)$ -gauge field $A_f \in \Omega ^{0,1}(X_{q_1,q_2})$ , encoding the $U(1)$ -flavor symmetry: $\int _{X_{q_1,q_2}} \beta \bar{\partial } \gamma + \int _{X_{q_1,q_2}} \beta A_f \gamma .$ Globally, $A_f$ corresponds to a generator of the cohomology group $H^{0,1}(X_{q_1,q_2}) = \mathbb{C} \cdot a_f$ , and to be consistent with the formulas above, we will label the holonomy 
of $A_f$ by the variable $z$ .", "In turn, the partition function will be a function of the variables $q_1,q_2,z$ , just as the local character is.", "The Hopf manifold can be viewed as the total space of a holomorphic fibration $\begin{tikzcd}T^2 [r] & X_{q_1,q_2} [d] \\& \mathbb{P}^1\end{tikzcd}$ which is topologically obtained from the Hopf fibration $S^1 \rightarrow S^3 \rightarrow S^2$ by taking the product with a circle $S^1$ .", "In particular, there is a natural (smooth, not holomorphic) map $\pi : X_{q_1, q_2} \rightarrow S^3$ .", "We will compute this partition function by first compactifying along $\pi $ to obtain a 3-dimensional theory on $S^3$ , with an infinite tower of fields corresponding to the winding modes around $S^1$ .", "Then, we use a formula for the partition function of the partially holomorphic theory on $S^3$ , which turns out to be equal to the reduction of our holomorphic theory on $X_{q_1,q_2}$ .", "The spirit of our calculation is very similar to the approach in at the level of the holomorphic twist.", "When we compactify, we must remember the higher Kaluza–Klein modes, but it is perhaps easier to imagine first the situation of dimensional reduction." 
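Both sets of fugacities introduced above reappear in the comparison with the partition function, so it is worth recording a mechanical check of the change of variables $q_1 = qp$, $q_2 = q/p$ relating the two presentations of the local character. The following is a minimal illustrative sketch in Python; the truncation order and sample values are arbitrary:

```python
# Check the change of variables q1 = q*p, q2 = q/p relating the U(2) and
# SU(2) x U(1) presentations of the local character, truncated to total
# degree <= M (M and the sample values of q, p are arbitrary).
M = 12
q, p = 0.3, 1.7

# The index sets {(l, m - l) : m >= 0, 0 <= l <= m} and {(n1, n2) : n1, n2 >= 0}
# coincide, so the two double products run over the same set of factors.
pairs_pq = {(l, m - l) for m in range(M + 1) for l in range(m + 1)}
pairs_q1q2 = {(n1, n2) for n1 in range(M + 1)
              for n2 in range(M + 1) if n1 + n2 <= M}
assert pairs_pq == pairs_q1q2

# Each factor matches weight by weight: q^m p^(2l - m) = q1^l q2^(m - l).
q1, q2 = q * p, q / p
for m in range(M + 1):
    for l in range(m + 1):
        assert abs(q**m * p**(2*l - m) - q1**l * q2**(m - l)) < 1e-9
print("change of variables verified")
```

The bijection $(n_1, n_2) = (\ell, m - \ell)$ is exactly what converts a factor $1 - z q^m p^{2\ell - m}$ into $1 - z q_1^{n_1} q_2^{n_2}$, and similarly for the numerators.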
], [ "Dimensional reduction", "The dimensional reduction of the four-dimensional $\mathcal{N}=1$ supersymmetry algebra to three dimensions is the three-dimensional $\mathcal{N}=2$ supersymmetry algebra, and the four-dimensional chiral multiplet reduces to the $\mathcal{N}=2$ chiral multiplet in three dimensions.", "Upon choosing a holomorphic twist $Q \in S_+^{4d}$ , two of the three translations remain exact upon reduction.", "There is then a natural description of the resulting theory, the minimal twist of the three-dimensional $\mathcal{N}=2$ chiral multiplet, as follows.", "Following , we will refer to the twist as “holomorphic/topological matter.” The twists and dimensional reductions fit into the following diagram of theories: $\begin{tikzcd}4d: & \text{$\mathcal{N}=1$ chirals} [rrr,"\text{holomorphic twist}"] [d, "\text{dimensional reduction}"^{\prime }] & & & \beta \gamma {\rm \; system} [d, "\text{dimensional reduction}"] \\3d: & \text{$\mathcal{N}= 2$ chirals} [rrr,"\text{minimal twist}"] & & & \text{ hol./top.\ matter}\end{tikzcd}$ For simplicity, we will give a local description of the three-dimensional theory, that we refer to as holomorphic/topological matter, on the 3-manifold $\mathbb{C} \times \mathbb{R}$ .The notation is to remind us that we are using the complex structure on $\mathbb{C}$ and just the real structure on $\mathbb{R}$ .", "In general, a choice of nilpotent supercharge in the three-dimensional $\mathcal{N}=2$ algebra corresponds to a transverse holomorphic foliation structure, which can be used to give a coordinate independent description of the theory.", "We refer the reader to  for more details.", "The fields of the twisted $3d$ theory (in the BV formalism) are given by $ \gamma ^{3d} & \in \Omega ^{0,*}(\mathbb{C}_z) \otimes \Omega ^*(\mathbb{R}_t) \otimes V \\ \beta ^{3d} & \in \Omega ^{1,*}(\mathbb{C}_z) \otimes \Omega ^*(\mathbb{R}_t) \otimes V^\vee [1]$ and the action functional is $S(\beta ^{3d}, \gamma ^{3d}) = \int _{\mathbb{C} \times \mathbb{R}} \beta ^{3d} \wedge \mathrm{d} \gamma ^{3d}$ where $\mathrm{d}$ is the total de Rham operator on $\mathbb{C} \times \mathbb{R} \cong \mathbb{R}^3$ .Notice that only the piece $\bar{\partial } + \mathrm{d} t \, \partial _t$ of the de Rham operator contributes to the action." ], [ "Compactification", "The compactification 
of the twist of $4d$ $\mathcal{N}=1$ is a bit more subtle.", "When we compactify along the map $\pi : X_{q_1,q_2} \rightarrow S^3$ , we want to remember not just the zero modes but all of the higher modes as well.", "The resulting 3-dimensional theory is of a similar form to the holomorphic/topological matter system, but there is an infinite tower of fields corresponding to the winding modes along $S^1$ .", "Locally, we can write the fields in the form $\gamma ^{3d}_\pi = \sum _{n\in \mathbb{Z}} \gamma _n^{3d}(z,\bar{z}, t) e^{in \theta } \\\beta _\pi ^{3d} = \sum _{n \in \mathbb{Z}} \beta _n^{3d} (z,\bar{z},t) e^{i n \theta } .$ where $\gamma _n^{3d}, \beta _n^{3d}$ are 3-dimensional fields as in (REF ) and () for each $n \in \mathbb{Z}$ , and $\theta $ is a coordinate on $S^1$ .", "The theory, upon compactification, is of the form $S_n(\beta _n^{3d}, \gamma _n^{3d}) = \int _{S^3} \beta _n^{3d} \mathrm{d} \gamma _n^{3d} + \int _{S^3} \beta _n^{3d} a_f \gamma _n^{3d} + \int _{S^3} \beta _n^{3d} A_{b, n} \gamma _n^{3d}$ Here, $a_f$ is the constant background $U(1)$ -flavor connection, and $A_{b,n}$ is a background connection that is proportional to the winding mode $A_{b,n} = i n \, \mathrm{d} t + \cdots .$ This background connection has the effect of shifting the energy of the modes by the integer $n$ .", "In particular, when $n=0$ this background field vanishes (up to a gauge transformation) and we are left with the dimensionally reduced theory from before.", "In §6.2 of , the partition function of the 3d holomorphic/topological matter theory on $S^3$ , equipped with a THF structure, is computed.", "For the free theory (REF ), with constant background connection $a_f$ , the answer is $Z_{3d}^{\rm chiral}(S^3_{q_1,q_2}) = \prod _{n_1,n_2 \ge 0} \frac{n_1 \tau _1 + n_2 \tau _2 - i (\tau _1 + \tau _2) + i a_f}{n_1 \tau _1 + n_2 \tau _2 - i a_f}$ where $q_i = e^{2\pi i \tau _i}$ .", "Now, the full partition function of the theory on $X_{q_1,q_2}$ is written as a product of the 3-dimensional partition functions over the winding modes $Z_{4d}^{\rm chiral}(X_{q_1,q_2}) = \prod _{n \in \mathbb{Z}} Z_{3d}^{{\rm chiral}, (n)}(S^3_{q_1,q_2})$ where $Z_{3d}^{{\rm chiral}, (n)}(S^3_{q_1,q_2})$ is the 
partition function of the $3d$ theory corresponding to the $n$ th mode.", "For each $n \in \mathbb{Z}$ , labeling the winding mode, we have the following similar-looking formula for the partition function of the theory (REF ): $Z_{3d}^{{\rm chiral}, (n)}(S^3_{q_1,q_2}) = \prod _{n_1,n_2 \ge 0} \frac{n_1 \tau _1 + n_2 \tau _2 + n - i (\tau _1 + \tau _2) + i a_f}{n_1 \tau _1 + n_2 \tau _2 + n - i a_f} .$ After simplifications involving a regularization scheme—see for instance—the product (REF ) reduces to the elliptic $\Gamma $ -function $Z_{4d}^{\rm chiral}(X_{q_1,q_2}) = \Gamma _{ell}(q_1,q_2 ; z)$ where $z = e^{2 \pi i a_f}$ is the holonomy of the $U(1)$ -flavor connection $a_f$ .", "The holomorphic twist of the $\mathcal{N}=(1,0)$ hypermultiplet in six dimensions is equivalent to the $\beta \gamma $ system on $\mathbb{C}^3$ ; see  and .", "One can compute the holomorphic local character in a way completely similar to the method here.", "It would be useful to compare the calculation of the partition function on the Hopf 3-fold diffeomorphic to $S^5 \times S^1$ to the local character as we've just done for Hopf surfaces." 
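The elliptic $\Gamma$-function obtained here satisfies the standard shift identity $\Gamma_{ell}(q_1 z; q_1, q_2) = \theta(z; q_2)\, \Gamma_{ell}(z; q_1, q_2)$, where $\theta(z; q) = \prod_{k \ge 0}(1 - z q^k)(1 - z^{-1} q^{k+1})$. A quick numerical check of the truncated defining products is an easy sanity test; the truncation order and sample values below are arbitrary, and the sketch is illustrative rather than tied to any particular regularization scheme:

```python
# Truncated elliptic Gamma-function and Jacobi-type theta factor; verify
# the shift identity Gamma(q1*z) = theta(z; q2) * Gamma(z) numerically.
N = 80  # truncation order; dropped factors differ from 1 by ~q^N here

def gamma_ell(q1, q2, z, n=N):
    val = 1.0
    for n1 in range(n):
        for n2 in range(n):
            val *= (1 - q1**(n1 + 1) * q2**(n2 + 1) / z) / (1 - z * q1**n1 * q2**n2)
    return val

def theta(z, q, n=N):
    val = 1.0
    for k in range(n):
        val *= (1 - z * q**k) * (1 - q**(k + 1) / z)
    return val

q1, q2, z = 0.2, 0.3, 0.45
lhs = gamma_ell(q1, q2, q1 * z)
rhs = theta(z, q2) * gamma_ell(q1, q2, z)
assert abs(lhs - rhs) < 1e-9
print("shift identity verified")
```

The identity follows from telescoping the double product in the shifted variable, which is also why the truncated versions agree up to boundary terms of order $q_1^N$.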
], [ "Turning on a superpotential", "In this section we perform the calculation of the local character of the $\beta \gamma $ system on $\mathbb{C}^2$ , valued in $V$ , in the presence of a holomorphic interaction defined by the holomorphic potential $W : V \rightarrow \mathbb{C}$ .", "Recall, we have seen how this theory arises as the twist of the theory of the $\mathcal{N}=1$ chiral multiplet deformed by the $F$ -term superpotential determined by $W$ .", "Our method to compute the local character is direct.", "In the BV formalism, the differential on observables can always be split up as $s = s_{\rm free} + s_{\rm int}$ where $s_{\rm free}$ comes from the free part of the action and $s_{\rm int}$ from the interacting part.", "In turn, this induces a spectral sequence whose first page computes the cohomology with respect to $s_{\rm free}$ and the differentials on the higher pages are determined by $s_{\rm int}$ .", "For holomorphic local operators, when $Q^\text{hol} = 0$ , we have implicitly already taken the cohomology with respect to the free part, which is always equal to the Dolbeault operator of some holomorphic vector bundle.", "Thus, all that remains to compute is the cohomology with respect to $s_{\rm int}$ ." 
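Before carrying this out in general, it may help to see the second step in the simplest toy case: take $V = \mathbb{C}$, $W(x) = x^3/3$, and keep only the constant jet of the fields, so that $s_{\rm int}$ sends the single antifield generator to $\partial W = x^2$. The cohomology is then computed by a two-term Koszul-type complex, which the following illustrative Python sketch checks degree by degree:

```python
# Toy version of the interacting differential for V = C, W = x^3/3, keeping
# only the constant jet: the two-term complex  C[x]*xi --(mult. by x^2)--> C[x].
# Truncate to polynomial degree < D and read off the cokernel.
D = 10
# s_int(xi * x^k) = x^(k + 2); these are the degrees hit by the image:
image_degrees = {k + 2 for k in range(D - 2)}
cohomology_degrees = [k for k in range(D) if k not in image_degrees]
# H^0 is spanned by {1, x}, i.e. Jac(W) = C[x]/(x^2); multiplication by x^2
# is injective, so there is no cohomology in the antifield degree.
assert cohomology_degrees == [0, 1]
print("toy cohomology in degrees:", cohomology_degrees)
```

The full computation keeps every jet of the fields rather than just the constant one, but the pattern is the same: the image of $s_{\rm int}$ carves out an ideal, and the cohomology is the resulting quotient ring.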
], [ "The holomorphic $\\sigma $ -model", "The theory in the presence of superpotential interactions can be understood geometrically.", "Our results can be interpreted as saying, analogous to Remark REF , that the theory is equivalent to a holomorphic sigma model whose target space is the derived critical locus of $W$ .", "Let us unpack this a bit.", "As we've already remarked, the free $\\beta \\gamma $ system valued in $V$ makes sense in any complex dimension $d$ as the cotangent theory to the moduli space of holomorphic maps ${\\rm Map}^\\text{hol}(^d, V)$ .", "One way of writing this is as $T^*[-1] \\; {\\rm Map}(^d_{} , V)$ where $^d_{}$ indicates the derived manifold whose dg ring of functions is $\\Omega ^{0,*}(^d)$ , which resolves holomorphic functions on $^d$ .", "Thus, the $\\beta \\gamma $ system is the cotangent theory of a very natural holomorphic $\\sigma $ -model into a vector space target.", "In particular, the $\\beta \\gamma $ system on $^2$ on with values in the complex vector space $V$ is the BV theory whose fields are $T^*[-1] \\; {\\rm Map}(^2_{}, V) & = T^*[-1] \\left(\\Omega ^{0,*}(^2) V \\right) \\\\ & = \\Omega ^{0,*}(^2) V \\oplus \\Omega ^{2,*}(^2) V^\\vee [1] .$ We can rewrite the fields once we choose a Calabi–Yau form on $^2$ , which we always assume is simply $2 z$ .", "Indeed, up to issues of compact support, we have $T^*[-1] \\; {\\rm Map}(^2_{}, V) = {\\rm Map}\\left((^2_{}, 2 z) , T^*[1] V\\right)$ where on the right hand side the $(+1)$ -shifted cotangent bundle to $V$ has appeared.", "This gives us an AKSZ description of the $\\beta \\gamma $ system.", "If we choose boundary conditions at $\\infty $ in $^2$ , the Calabi–Yau form endows $^2$ with the structure of a 2-oriented dg manifold.", "The standard pairing on $T^*[1] V$ endows it with the structure of a 1-shifted symplectic structure, and the free $\\beta \\gamma $ system is equivalent to the resulting AKSZ theory.", "Thus, once we choose a holomorphic volume form, the 
free $\\beta \\gamma $ system is the holomorphic $\\sigma $ -model with target the derived manifold $T^*[1] V$ equipped with the dg ring of functions ((T*[1] V), Q = 0) = (* (V) *(V), 0) with zero differential.", "Here, $^k V$ is placed in degree $+k$ .", "Using this description, it is easy to see what happens when we turn on the interaction $L_W = \\int 2 z \\,W(\\gamma )$ determined by the holomorphic potential $W \\in (V)$ .", "Recall, the Koszul resolution is a cochain complex which computes the derived critical locus of $W$ .", "It has the form $\\left((T^*[-1] V), \\lbrace W, -\\rbrace \\right)$ where $\\lbrace -,-\\rbrace $ is the $(-1)$ -shifted Poisson bracket associated to the standard $(-1)$ -shifted symplectic structure on $T^*[-1] V$ .", "Notice that the graded ring $(T^*[1] V)$ in (REF ) differs from the graded ring of the standard Koszul resolution, but they agree if we take the gradings modulo 2.", "Thus, we can identify the $/2$ graded ring of functions on the target of the holomorphic $\\sigma $ -model with the underlying $/2$ -graded ring of the Koszul resolution of $W$ .", "The effect of turning on $W$ deforms the $/2$ -graded dg ring (REF ) to ((T*[1] V), {W, -}) which we can, in turn, identify with the underlying $/2$ -graded dg ring of the standard Koszul resolution.", "In conclusion, the $\\beta \\gamma $ system deformed by $L_W$ is equal to the $/2$ -graded theory given by the AKSZ $\\sigma $ -model ${\\rm Map}(^2_{}, T_W^*[1] V)$ where $T_W^*[1] V$ denotes the odd symplectic $/2$ -graded dg manifold whose ring of functions is (REF ).", "Thus, we can interpret the theory in the presence of a superpotential $W$ as the $\\sigma $ -model of $^2$ mapping into the derived critical locus of $W$ .", "Here, we forget the $$ -graded dg manifold $^2_{}$ down to a $/2$ dg manifold in the obvious way.", "This has the effect of only remembering the Dolbeault form degree modulo 2." 
], [ "The chiral Jacobi ring", "To perform the calculation of the character, we first collect some facts about the $\beta \gamma $ system in the presence of a superpotential.", "As above, denote the fields of the $\beta \gamma $ system on $\mathbb{C}^2$ by $\gamma _i = \gamma ^0_i + \gamma ^1_i + \gamma ^2_i \in \Omega ^{0,*}(\mathbb{C}^2), \qquad \beta ^i = \beta ^{0 ; i} + \beta ^{1 ; i} + \beta ^{2 ; i} \in \Omega ^{2,*}(\mathbb{C}^2) [1]$ where $i = 1, \ldots , N = \dim V$ .", "The BV pairing between fields is the evaluation pairing on $V$ together with the wedge product of forms $(\beta , \gamma ) \mapsto [\langle \beta \wedge \gamma \rangle _V]_\text{top}$ , so that the antifield to $\gamma ^{a}_i$ is $\beta ^{0, 2-a ; i}$ .", "The classical BV action of the twisted theory is $L = \sum _i \langle \beta ^i , \bar{\partial } \gamma _i \rangle + L_\text{int}(\gamma ),$ where $L_\text{int}(\gamma ) = \frac{1}{2} \left(\partial _i \partial _j W(\gamma ^0) \, \gamma _i^{0,1}\gamma _j^{0,1} + \partial _i W(\gamma ^0) \, \gamma _i^{0,2}\right) \mathrm{d}^2 z .$ The local operators, as in §REF , only depend on the lowest component of the $\beta ,\gamma $ fields and are denoted by $\boldsymbol{\gamma }_{n_1,n_2 ; i}, \boldsymbol{\beta }_{n_1+1,n_2+1}^j$ .", "To see the interaction spectral sequence for the local operators supported at $w=0 \in \mathbb{C}^2$ , we will write the bicomplex arising from separating the BV differential $s = \lbrace S,-\rbrace $ into two parts, corresponding to the decomposition $S= S_{\rm free} + S_\text{int}$ .", "The cochain complex of local operators at 0 is of the form $\left(\mathcal{O}_0 , s = s_{\rm free} + s_{\rm int}\right)$ where $s_{\rm free}$ is the piece of the BV differential coming from the free part of the action, and $s_{\rm int}$ comes from the interactions.", "It is clear that $s_{\rm free} = \bar{\partial }$ , acting with opposite signs on fields and antifields, so that the free BV complex is a copy of the Dolbeault complex ($\gamma $ ) in ghost number zero, together with another copy of the Dolbeault complex ($\beta $ ), shifted by the Calabi–Yau form, in ghost number $-1$ .", "Thus, the $E_1$ page of the spectral sequence is precisely the holomorphic local operators $\mathcal{O}_0^\text{hol}$ of the free $\beta \gamma $ system, see Remark REF .", "Thus, we can identify the first page of the spectral sequence as $(\mathcal{O}_0^\text{hol} , s_{\rm int}) .$", "Recall 
that the holomorphic local operators of the free system have been described in §REF .", "The cohomology of this complex is the next page of the spectral sequence, and is governed solely by the superpotential $W$ .", "Since $L_\text{int}$ only depends on the fields $\gamma $ , $s_\text{int}$ can only act nontrivially on antifields.", "From the form of the antibracket, the differential on local operators is as follows: $s_\text{int} : \boldsymbol{\beta }_{n_1,n_2}^i \longmapsto \left(\frac{\partial L_\text{int}}{\partial \gamma _i}\right)_{n_1,n_2} .$ To write down an explicit formula for this differential, we set up some notation." ], [ "Suppose $B$ is any commutative algebra and let $\xi : B \rightarrow \mathbb{C}$ be an algebra map.", "Given an element $F \in \mathcal{O}(V)$ , we can extend it to one $\tilde{F}(\xi ) \in \mathcal{O}(B \otimes V)$ as follows.", "Let $F_n : V^{\otimes n} \rightarrow \mathbb{C}$ be the $n$ th homogeneous component of $F$ .", "Consider the composition $(V \otimes B)^{\otimes n} \xrightarrow{F_n \otimes \mathrm{id}_B^{\otimes n}} B^{\otimes n} \xrightarrow{m} B \xrightarrow{\xi } \mathbb{C}$ where $m$ is the multiplication on $B$ .", "Symmetrizing, we obtain an element $\tilde{F}_{n}(\xi ) \in \mathrm{Sym}^n(B \otimes V)^\vee $ .", "Then, the extension is defined as the sum $\tilde{F}(\xi ) = \sum _n \tilde{F}_{n}(\xi )$ .", "This construction $F \in \mathcal{O}(V) \rightsquigarrow \tilde{F}(\xi ) \in \mathcal{O}(B \otimes V)$ has the following geometric interpretation.", "Think of $B$ as defining the affine scheme $\mathrm{Spec}(B)$ .", "Then, $\mathcal{O}(B \otimes V)$ can be thought of as observables on the space of maps $\mathrm{Spec}(B) \rightarrow V$ (here, $V$ is just a linear space, but this construction can be easily modified for general varieties or even stacks).", "The map of algebras $\xi : B \rightarrow \mathbb{C}$ defines a linear map $\xi \otimes \mathrm{id}_V : B \otimes V \rightarrow V$ .", "Equivalently, $\xi $ can be thought of as an element in $\mathrm{Spec}(B)$ and hence determines a map ${\rm ev}_\xi : \mathrm{Map}(\mathrm{Spec}(B) , V) \rightarrow V$ by evaluation.", "Then, given $F \in \mathcal{O}(V)$ , we can pull-back along ${\rm ev}_\xi $ to obtain an element ${\rm ev}_\xi ^*(F) = \tilde{F}(\xi ) \in \mathcal{O}(\mathrm{Map}(\mathrm{Spec}(B) , V)) = \mathcal{O}(B \otimes V)$ .", "Now, specialize to the case $B = \mathbb{C}[\![z_1, z_2]\!]$ .", 
"Geometrically, we are looking at the space of maps from the formal disk to $V$ , $\hat{D}^2 \rightarrow V .$ We use the notation $\tilde{F}_{n_1,n_2} := \tilde{F}(\xi )$ where $\xi : \mathbb{C}[\![z_1,z_2]\!] \rightarrow \mathbb{C}$ is the linear functional $f \mapsto \partial _{z_1}^{n_1} \partial _{z_2}^{n_2} f (0)$ .", "Via the above construction, the potential $W \in \mathcal{O}(V)$ defines an observable $\tilde{\partial _i W}_{n_1,n_2} \in \mathcal{O}(\mathbb{C}[\![z_1,z_2]\!] \otimes V) .$ Note that this composite observable is a polynomial in the $\boldsymbol{\gamma }_{n_1,n_2 ; j}$ observables we've defined above.", "Using this notation, the differential $s_{\rm int}$ can be read off as $s_{\rm int} : \boldsymbol{\beta }^{i}_{n_1 + 1, n_2+1} \mapsto \tilde{\partial _i W}_{n_1, n_2} (\boldsymbol{\gamma }) .$ On the right-hand side, we use the notation $(\boldsymbol{\gamma })$ to stress that the resulting operator only depends on the local field $\gamma $ ." ], [ "A remark on the Jacobian ring", "Classically, the Jacobian ring of a polynomial $W \in \mathbb{C}[x_1,\ldots ,x_N] = \mathcal{O}(V)$ is ${\rm Jac}(W) = \mathbb{C}[x_1,\ldots , x_N] / \left\langle \partial _1 W, \ldots , \partial _N W\right\rangle .$ This ring appears as the $B$ -model chiral ring of a two-dimensional $\mathcal{N}=(2,2)$ Landau–Ginzburg theory .", "We introduce a slight holomorphic variant of the Jacobi ring that one might think of as the Jacobian ring for the mapping space $\hat{D}^2 \rightarrow V$ .", "We have already seen that a piece of the space of local operators of the $\beta \gamma $ system on $\mathbb{C}^2$ is of the form $\mathcal{O}(\mathbb{C}[\![z_1,z_2]\!] \otimes V)$ .", "The ring we consider is a quotient of this by some ideal we now describe.", "As above, let $W \in \mathcal{O}(V)$ be any polynomial.", "Given an algebra map $\xi : B \rightarrow \mathbb{C}$ , we have seen in §REF how to extend a polynomial $F \in \mathcal{O}(V)$ to one $\tilde{F}(\xi ) \in \mathcal{O}(B \otimes V)$ .", "In the case $B = \mathbb{C}[\![z_1,z_2]\!]$ , consider the polynomials $\partial _1 W ,\ldots , \partial _N W \in \mathcal{O}(V)$ and the associated observables $\tilde{\partial _1 W}_{n_1, n_2} (\boldsymbol{\gamma }), \ldots , \tilde{\partial _N W}_{n_1, n_2} (\boldsymbol{\gamma }) \in 
([\\![z_1,z_2]\\!]", "V) \\subset ^\\text{hol}_0 .$ Define the two-dimensional chiral Jacobi ring of $W \\in (V)$ to be $^\\text{hol} (W) = \\left.", "([\\![z_1,z_2]\\!]", "V) \\; / \\; \\left.", "{\\partial _1 W}_{n_1, n_2} (), \\ldots , \\tilde{\\partial _N W}_{n_1, n_2} () \\; | \\; n_1,n_2 \\ge 0 \\right.", "\\right.", ".$ Note that the usual Jacobian ring sits inside the chiral one as the $U(2)$ -weight $(0,0)$ subspace and is of the form $(V) / \\partial _1 W , \\ldots , \\partial _N W\\:$ .", "With this definition in hand, the following lemma is easy to prove.", "The cohomology of the local holomorphic observables of the $\\beta \\gamma $ system on $^2$ in the presence of a potential $W$ , see (REF ), is isomorphic to the chiral Jacobi ring: $H^*(^\\text{hol}_0 , s_{\\rm int}) \\cong ^\\text{hol} (W) .$ Indeed, the subspace which is killed by $s_{\\rm int}$ is $([z_1,z_2] V) \\subset ^\\text{hol}_0$ .", "These are the local operators generated by $_{n_1,n_2;i}$ .", "By definition, the image of $s_{\\rm int}$ is the subspace $\\tilde{\\partial W}\\:$ .", "In particular, if $W$ is a non-degenerate quadratic polynomial then we see that the cohomology vanishes so that there are no non-trivial local operators in this case.", "This is familiar to the case of the $B$ -model and the ordinary Jacobi ring.", "One can make a similar definition in the complex dimension one case.", "We define the one-dimensional chiral Jacobi ring to be $\\left.", "([\\![z]\\!]", "V) \\; / \\; \\left.", "{\\partial _1 W}_{n}, \\ldots , \\tilde{\\partial _N W}_{n} \\; | \\; n \\ge 0 \\right.", "\\right.$ This ring appears as the cohomology of the observables of the holomorphic twist of the $(2,2)$ supersymmetric $\\sigma $ -model into $V$ .", "Note that the further quotient of this ring by the ideal $(z [\\![z]\\!]", "V)$ is precisely the ordinary Jacobi ring of $W$ , which is the chiral ring of the Landau–Ginzburg model of $W$ .", "The elliptic genus of Landau–Ginzburg models was first 
computed in , while gave an analysis of the chiral algebra appearing in the holomorphic twist." ], [ "The homogeneous character", "We now turn to compute the character of the holomorphic theory in the presence of the superpotential.", "We rely on Lemma REF .", "Note that since the theory is no longer $\mathbb{Z}$ -graded, we no longer have the $U(1)_{u}$ -symmetry coming from ghost number.", "Thus, the character is only a function of the fugacities $(p,q;z)$ , where we use the notation of §REF .", "Let $V = \mathbb{C}$ and suppose $W$ is homogeneous of degree $N+1$ .", "Then, the $SU(2) \times U(1) \times U(1)$ character of the holomorphic local operators is given by $\chi _W (p,q;z) = \prod _{m \ge 0} \prod _{\ell = 0}^m \frac{1 - z^N q^m p^{2\ell -m}}{1-zq^m p^{2\ell -m}} .$ We compute this using the description of the cohomology in terms of the chiral Jacobi ring in the lemma above.", "We have written the cohomology as a quotient $\mathcal{O}(\mathbb{C}[\![z_1,z_2]\!] \otimes V) / \langle \tilde{\partial W}\rangle $ .", "Before taking the quotient, the character for $\mathcal{O}(\mathbb{C}[\![z_1,z_2]\!] \otimes V)$ contributes $\prod _{m \ge 0} \prod _{\ell = 0}^m \frac{1}{1-zq^m p^{2\ell -m}} ,$ see the proof of Proposition REF .", "It suffices to compute the character of the subspace $\langle \tilde{\partial W}\rangle $ .", "Note that as a $\mathcal{O}(\mathbb{C}[\![z_1,z_2]\!] \otimes V)$ -module this subspace is generated by the image of $\mathcal{O}(\mathbb{C}[z_1,z_2] \otimes V^\vee [1])$ under the differential $s_{\rm int}$ .", "Since $s_{\rm int}$ has flavor $U(1)_{z}$ -weight $N$ , we see that the character of $\langle \tilde{\partial W}\rangle $ is $\prod _{m \ge 0} \prod _{\ell = 0}^m \frac{1}{1-zq^m p^{2\ell -m}} \, z^N q^m p^{2\ell -m} .$ Taking the difference of (REF ) and (REF ) yields the result.", "The reader will recognize the method above as a close analogue of the computation of the Hilbert series of a complete intersection.", "Indeed, nondegenerate superpotentials are defined by the criterion that the critical points of $W$ be isolated—in other words, that the critical locus is a complete intersection.", "Based on our earlier results, however, it should be clear that neither our method nor our 
identification with the holomorphic $\sigma $ -model on the derived critical locus depends on the nondegeneracy of $W$ .", "This should lead to interesting relations with index computations for four-dimensional $\mathcal{N}=1$ $\sigma $ -models, as well as possibly to theorems relating elliptic genera for $\sigma $ -models and Landau–Ginzburg theories in two dimensions; we look forward to exploring this in future work.", "There is another, efficient way of arriving at the formula for the character in the presence of a superpotential.", "In the free theory, we computed the character $\chi (q_1,q_2 ; z ;u)$ of the graded vector space of holomorphic local operators with respect to the action of the group $U(2)_{q_1,q_2} \times U(1)_z \times U(1)_{u}$ where the first factor comes from the natural action on $\mathbb{C}^2$ by rotations, the second factor is the flavor symmetry, and the last factor encodes the ghost number grading.", "Alternatively, we can replace $U(2)$ by its 2-fold cover $SU(2)_p \times U(1)_q$ as in §REF .", "Notice that the differential $s_{\rm int}$ does not preserve the full symmetry group $SU(2)_p \times U(1)_q \times U(1)_u$ .", "First off, $s_{\rm int}$ depends on the choice of a volume form, which we take to be the standard one.", "Though this preserves $SU(2)_p$ , the operator $s_{\rm int}$ has $U(1)_q$ weight $-2$ .", "Secondly, in the case that $W$ is a homogeneous polynomial of degree $N+1$ , we see that $s_{\rm int}$ has $U(1)_z$ weight $N+1$ .", "Finally, $s_{\rm int}$ has ghost number $-1$ , so it has weight $-1$ under $U(1)_u$ .", "Putting all this together, we see that only a total $SU(2)_p \times U(1) \times U(1)$ symmetry survives in the case that $W$ is homogeneous of degree $N+1$ , and in terms of the variables of the free theory we can obtain the interacting character by substituting $\chi (p_{\rm free} ,q_{\rm free} ; z_{\rm free} ; u_{\rm free}) \rightarrow \chi _W(p_{\rm int} , q_{\rm int} ; z_{\rm int})$ $p_{\rm free} 
= p_{\\rm int} \\; \\; , \\;\\; q_{\\rm free} = q_{\\rm int} \\; \\; , \\;\\; z_{\\rm free} = z_{\\rm int} \\;\\; , \\;\\; u_{\\rm free} = q_{\\rm int}^{-2} p_{\\rm int}^{N+1} .$ One can check immediately that this is compatible with the calculation above.", "This is the analogue, in two complex dimensions, of the method used for elliptic genera of Landau–Ginzburg models in  and (in closer notation) in ; the corresponding spectral sequence was studied in .", "As in Remark REF , in the case that $V = ^n$ , there is an enhancement of the character where we weight each of the flavor directions with its own fugacity $z_i$ , $i=1,\\ldots ,n$ .", "The free character is given as a product $\\prod _{i=1}^n \\chi (p,q ; z_i ; u)$ .", "In the case of a potential which is non-degenerate and quasi-homogenous, one has the relation $W(\\lambda ^{w_1} x_1, \\ldots , \\lambda ^{w_n} x_n) = \\lambda ^{N} W(x_1,\\ldots x_n);$ we then arrive at the character by substituting $u = z_i^{(N+1)/w_i} q^{-2}$ in the $i$ -th term of the product, obtaining $\\prod _{i=1}^N \\chi _W(p,q ; z_i, u = z_i^{(N+1)/w_i}q^{-2})$ ." 
], [ "Holomorphic flavor symmetry", "Consider the $=1$ chiral multiplet with matter fields valued in the vector space $V$ .", "There is a natural flavor symmetry on the theory by the Lie algebra $\\fg = {gl}(V)$ that acts globally.", "In fact, this symmetry becomes local in the holomorphic twist: it is clear that the action is invariant under any local transformation that depends holomorphically on the spacetime.", "We therefore are interested in the infinite-dimensional symmetry algebra holX , where $X$ is the complex manifold on which the $\\beta \\gamma $ system has been placed.", "For the most part, in this section we will take $X = d \\setminus 0$ , and of course will mostly be interested in the case $d = 2$ .", "In the case of a general field theory on $^n$ , we have explained how the operators restricted to spheres in $^n \\setminus 0$ are endowed with an algebra structure via the OPE in the radial direction.", "A similar result holds true for the OPE of the current algebra $^\\text{hol}_X \\otimes _$ , or really its derived replacement $\\Omega ^{0,*}_X _\\fg ,$ where the differential is the $\\bar{\\partial }$ operator on the Dolbeault complex—making it into a resolution of the sheaf of holomorphic functions—and the bracket comes from the bracket on $\\fg $ together with the product structure on the Dolbeault complex.", "One can think of this as a free resolution of the above sheaf of Lie algebras, where “free” refers to free modules over functions on spacetime.", "In , a factorization algebra is associated to this current algebra, which we call $(\\fg )$ , on any complex manifold $X$ .", "In particular, it exists on the complex manifold $X = ^d \\setminus 0$ .", "The factorization product in the radial direction produces a dg associative algebra from $(\\fg )$ , which in the case $d = 1$ is isomorphic to the enveloping algebra of the ordinary Kac–Moody algebra $(^\\times ) \\fg $ .", "For $d > 1$ , we thus obtain higher dimensional versions of the 
Kac–Moody algebra , .", "Of course, when acting on a field theory, corrections to the current algebra may arise when the symmetry is quantized.", "For the radial operators, this manifests as a central extension of the classical current algebra.", "When $d=1$ , this is the usual central extension of the Kac–Moody vertex algebra, but for general $d$ central extensions are labeled by elements of the space $\mathrm{Sym}^{d+1} ({gl}(n)^\vee )^{{\rm GL}(n)} .$ As described in , , such an element defines an $L_\infty $ central extension $0 \rightarrow \mathbb{C} \rightarrow \tilde{\fg }^*_\theta \rightarrow A^*_d \otimes \fg \rightarrow 0$ where $A_d$ is a certain algebraic dg model for the punctured disk $D^d \setminus 0$ .", "If $\theta $ is such an element, we will denote the centrally extended current algebra by $_\theta (\fg )$ which is explicitly given by the $A_\infty $ -enveloping algebra of $\tilde{\fg }_\theta $ ; see .", "Note that when $d=1$ , the ordinary Kac–Moody extension simply arises as a central extension of a Lie algebra.", "In higher dimensions, the term $\theta $ deforms the classical Lie algebra of symmetries to an $L_\infty $ algebra with a nontrivial higher operation of arity $(d+1)$ .", "This central extension is controlled by a holomorphic analog of the Konishi anomaly.", "We will see the explicit instance of this in the case $d=2$ below.", "Finally, for a general field theory we have described how the radial operators act on the local operators.", "This means that for a theory with a classical current symmetry by $(\fg )$ , like the $\beta \gamma $ system, the quantized local observables will be a module for the deformed current algebra $_\theta (\fg )$ .", "This phenomenon is familiar in two-dimensional conformal field theories: the naive action by a finite-dimensional Lie algebra on the local observables is promoted to an action by an infinite-dimensional current algebra.", "We wish to emphasize that this is a general feature of holomorphic theories in any 
complex dimension; essentially, this is because derivatives in the action are the obstruction to global symmetries being local, and a subset of the derivative operators become nullhomotopic and disappear from the action in the holomorphic twist." ], [ "Free field realization", "Based on the work , we spell out how the current algebra outlined above witnesses a local enhancement of the flavor symmetry found in the holomorphic twist of the $4d$ $\mathcal{N}=1$ chiral multiplet.", "Following this, we specialize to the complex surface $\mathbb{C}^2 \setminus 0$ and extract the algebra of $S^3$ operators and its action on the local holomorphic operators of the $\beta \gamma $ system on $\mathbb{C}^2$ .", "On any complex manifold $X$ , one has the sheaf of commutative dg algebras $\Omega ^{0,*}(X)$ which is a fine resolution of the sheaf of holomorphic functions on $X$ .", "Further, we can tensor with ${gl}(n)$ to obtain a sheaf of dg Lie algebras $\Omega ^{0,*}(X) \otimes {gl}(n)$ .", "This sheaf comprises the linear generators of the current algebra described above.", "Let's specialize to the case $\dim _{\mathbb{C}} X = 2$ .", "If $V$ is a complex $n$ -dimensional vector space, we can explicitly describe the symmetry of the current algebra by coupling the fields $\Omega ^{0,*}(X) \otimes {gl}(n)$ as background gauge fields of the free $\beta \gamma $ system with values in $V$ .", "Indeed, if $\alpha \in \Omega ^{0,*}(X) \otimes {gl}(n)$ , then the action functional of the $\beta \gamma $ system is deformed by the term $\int _X \langle \beta , \alpha \cdot \gamma \rangle _V .$ Here, $\alpha \cdot \gamma $ extends the natural action of ${gl}(n)$ on $V$ together with the wedge product of Dolbeault forms.", "Also, as usual, $\langle -,- \rangle $ denotes the linear pairing between $V$ and its dual.", "Writing out the full action, we see that this is nothing but the $\beta \gamma $ system where we have deformed the $\bar{\partial }$ operator to $\bar{\partial } + \alpha $ .", "We can interpret this as the induced deformation on the associated bundle obtained from deforming the trivial 
principal holomorphic $G$ -bundle on $X$ .", "Without much more work, one can study deformations of non-trivial holomorphic bundles as well, but it will play no role for us.", "The machinery of associates a factorization algebra to any QFT.", "There is also a factorization algebra associated to the symmetries of a QFT, which in this example is the current factorization algebra $(\\fg )$ .", "To an open set $U \\subset X$ it assigns the cochain complex $_*\\left(\\Omega ^{0,*}_c (U , \\fg )\\right) = \\left(\\left(\\Omega ^{0,*}_c(U, \\fg )[1]\\right) , \\kappa \\right)$ where $\\kappa $ is the Chevalley-Eilenberg differential for the dg Lie algebra $\\Omega ^{0,*}_c(U, \\fg )$ .", "In a version of Noether's theorem for factorization algebras is formulated, which from the classical setup above produces a map of factorization algebras from $(\\fg )$ to the factorization algebra of observables of the $\\beta \\gamma $ system.", "Below, we focus on the value of the factorization algebra on punctured affine space $^d \\setminus 0$ , and what this map of factorization algebras tells us about the symmetries of the holomorphic local operators." 
], [ "A model for $\mathbb{C}^2 \setminus 0$", "We now specialize to the case $X = \mathbb{C}^2 \setminus 0$ .", "It's possible to choose a model for the resolution of holomorphic functions on $\mathbb{C}^d \setminus 0$ that turns out to be convenient for formulating the above structures algebraically.", "In the case $X = \mathbb{C}^d \setminus 0$ , a good one was considered in .", "It is constructed as follows: First off, let $R = \mathbb{C}[z_1,\bar{z}_1,\ldots ,z_d,\bar{z}_d][\tfrac{1}{z\bar{z}}]$ be functions on the punctured affine space, and consider $\tilde{R}^*$ the free graded-commutative $R$ -algebra generated by $d\bar{z}_i$ in degree 1.", "We take $A^*_d$ to be the graded subalgebra of $\tilde{R}^*$ consisting of elements satisfying the following two conditions: (a) The overall $\bar{z}$ -degree of all elements is zero, and (b) The contraction with the Euler vector field $\eta = \bar{z}_i \, \partial _{\bar{z}_i}$ vanishes.", "Letting $\xi = z_i \bar{z}_i$ be the squared radius, the result after the first step consists of elements in degree $k$ of the form $f_K \, d\bar{z}_K , \qquad f_K \in \frac{1}{\xi ^k}\, \mathbb{C}[z_1,\ldots ,z_d] \left[ \tfrac{\bar{z}_1}{\xi }, \ldots , \tfrac{\bar{z}_d}{\xi } \right] .$ ", "Here $K \subseteq \lbrace 1,\ldots , d\rbrace $ is a multi-index, with $k = \# K$ .", "The Euler vector field condition means that $\sum _K \sum _{i \in K} \pm \, f_K \bar{z}_i \, d\bar{z}_{K \setminus i} = 0 ,$ where $\pm $ is the parity of the number of elements of $K$ preceding $i$ .", "In particular, this means that our algebra is only nonzero in degrees between zero and $(d-1)$ .", "For the sake of brevity, we will define the subalgebra $R_d = \mathbb{C}[z_1,\ldots ,z_d] \left[ \tfrac{\bar{z}_1}{\xi }, \ldots , \tfrac{\bar{z}_d}{\xi } \right] \subset R$ of elements which satisfy condition (a) above.", "Note that $R_d$ is generically not a polynomial algebra, since its generators satisfy the relation $z_1 \tfrac{\bar{z}_1}{\xi } + \cdots + z_d \tfrac{\bar{z}_d}{\xi } = 1 .$ ", "For instance, when $d=2$ , it is isomorphic to the quotient algebra $\mathbb{C}[a,b,c,d] / \left( ac + bd - 1 \right) ,$ which we can think of as a quadric in a weighted projective space.", "Let $d=1$ .", "Then $\bar{z}/\xi = z^{-1}$ , so that the algebra reduces to $R_1 = \mathbb{C}[z,z^{-1}]$ concentrated in degree zero with zero differential.", "This recovers the usual story of Kac–Moody symmetry for theories on the punctured 
complex plane.", "Let's now consider the case $d=2$ in detail.", "The algebra $A_2^*$ is supported only in degrees zero and one; in degree one, it consists of elements of the form $f_1 \, d\bar{z}_1 + f_2 \, d\bar{z}_2 ,$ subject to the condition that [eq:EVF2] $f_1 \bar{z}_1 + f_2 \bar{z}_2 = 0 .$ ", "The differential then maps such an element to [eq:dbar2] $\left( - \frac{\partial f_1}{\partial \bar{z}_2} + \frac{\partial f_2}{\partial \bar{z}_1} \right) d\bar{z}_1 \, d\bar{z}_2 ,$ which must be zero for consistency after the Euler vector field condition is imposed.", "But that condition () just means that each monomial term in $f_1$ corresponds to another monomial term in $f_2$ , of the form $f_1 \ni \frac{\bar{z}_1^a \bar{z}_2^{b+1}}{\xi ^{a+b+2}} \qquad \leftrightarrow \qquad f_2 \ni - \frac{\bar{z}_1^{a+1} \bar{z}_2^{b}}{\xi ^{a+b+2}} .$ ", "More briefly, we can write $A_2^1 = R_2 \, \omega $ , where $\omega = \frac{\bar{z}_2 \, d\bar{z}_1 - \bar{z}_1 \, d\bar{z}_2}{\xi ^2}$ is the Bochner–Martinelli kernel in complex dimension two.", "It is then easy to verify by direct computation that $\frac{\partial f_2}{\partial \bar{z}_1} = \frac{\partial f_1}{\partial \bar{z}_2} = \frac{\bar{z}_1^a \bar{z}_2^b}{\xi ^{a+b+3}} \left[ (b+1) z_1 \bar{z}_1 - (a+1) z_2 \bar{z}_2 \right] ,$ so that () vanishes.", "It remains to compute the image of the $\bar{\partial }$ differential inside of the degree-one piece of the algebra.", "Similar to above, we can consider the action of the differential on an allowed monomial in degree zero; this is mapped to $\frac{\bar{z}_1^a \bar{z}_2^b}{\xi ^{a+b}} \longmapsto d\bar{z}_1 \, \frac{\bar{z}_1^{a-1} \bar{z}_2^b}{\xi ^{a+b+1}} \left[ a z_2 \bar{z}_2 - b z_1 \bar{z}_1 \right] + d\bar{z}_2 \, \frac{\bar{z}_1^a \bar{z}_2^{b-1}}{\xi ^{a+b+1}} \left[ b z_1 \bar{z}_1 - a z_2 \bar{z}_2 \right] .$ ", "This just has the effect of setting the generators $z_1$ and $z_2$ to zero in the cohomology of $\bar{\partial }$ , so that we can identify the cohomology $H^1(A_2^*)$ with the space of elements $h \, \omega , \qquad h \in \mathbb{C}\left[ \tfrac{\bar{z}_1}{\xi }, \tfrac{\bar{z}_2}{\xi } \right] .$ ", "Note that $L_{\partial / \partial z_i} \omega = - \frac{2 \bar{z}_i}{\xi } \omega $ where $L_{(-)}$ is the Lie derivative.", "Thus, we can equivalently write the first cohomology as the free $\mathbb{C}[\partial _{z_1}, \partial _{z_2}]$ -module generated by $\omega $ , which we can further identify with the dual of power series in two variables $\mathbb{C}[\![z_1,z_2]\!]^\vee = \mathbb{C}[\partial _{z_1}, \partial _{z_2}] \cdot \omega .$ This also makes the computation in degree zero easy.", "In that degree, the Euler vector field condition becomes vacuous, so that the degree-zero piece $A_2^0$ just 
consists of elements $f \in R_2$ .", "The computation above further shows that the second set of generators fails to be closed, so that the kernel is precisely polynomials in $z_1$ and $z_2$ ." ], [ "Higher central extensions", "With the dg algebra $A_2^*$ understood, we can now recall the definition of the $d=2$ higher Kac–Moody algebra as in , .", "We start with the dg Lie algebra obtained from tensoring the ordinary Lie algebra $\fg $ with $A_2^*$ .", "We can think of this as an $L_\infty $ algebra with operations: $\ell _1 (a \otimes X) = (\bar{\partial } a) \otimes X$ and $\ell _2 (a \otimes X, b \otimes Y) = ab \otimes [X,Y]$ .", "For any $\theta \in \mathrm{Sym}^3(\fg ^\vee )^\fg $ the dg Lie algebra $A_2^* \otimes \fg $ has an $L_\infty $ algebra central extension $0 \rightarrow \mathbb{C} \cdot K \rightarrow \tilde{\fg }^*_\theta \rightarrow A_2^* \otimes \fg \rightarrow 0$ where the 1-ary and 2-ary operations are $\ell _1 (a \otimes X) = \bar{\partial }(a \otimes X) \;\; , \;\; \ell _2 (a \otimes X, b \otimes Y) = ab \otimes [X,Y] \;\; , \;\; \ell _1(K) = \ell _2 (K, a \otimes X) = 0$ and the 3-ary operation is $\ell _3 (a \otimes X, b \otimes Y, c \otimes Z) = \theta (X,Y,Z) \oint _{S^3} a \, \partial b \, \partial c$ where $a,b,c \in A_2^*$ and $X,Y, Z \in \fg $ .", "Here $\oint _{S^3}$ denotes the higher residue pairing, which agrees with the contour integration of a $(2,1)$ differential form along the 3-sphere." 
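For comparison, and as our own aside rather than a statement from the text: in the $d=1$ case, where $A_1^* = R_1 = \mathbb{C}[z,z^{-1}]$ sits in degree zero, the analogous extension involves no higher operations at all, which is why the text can say that for $d=1$ one recovers an honest central extension of a Lie algebra.

```latex
% Our d = 1 comparison (standard Kac--Moody cocycle, not quoted from the source):
% a choice of invariant form kappa in Sym^2(g^vee)^g deforms only the bracket,
\ell_2(a \otimes X, \, b \otimes Y)
  \;=\; ab \otimes [X, Y] \;+\; \kappa(X, Y) \Big( \oint_{S^1} a \, \partial b \Big) K ,
% with no ell_3 or higher, so the extension is an ordinary Lie algebra;
% for d = 2 the cubic theta instead produces the ternary operation above.
```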
], [ "The symmetry multiplet", "Just as the $\beta \gamma $ system on $\mathbb{C}^2$ arises as the minimal twist of the $4d$ $\mathcal{N}=1$ chiral multiplet, we can also realize the aforementioned current algebra from a twist of a certain $\mathcal{N}=1$ multiplet.", "In §REF , we argued that the holomorphic twist of the $\mathcal{N}=1$ vector multiplet returns the Dolbeault complex on $\mathbb{C}^2$ with values in the gauge Lie algebra.", "At the level of the BV theory, we have seen that the holomorphic twist of the $\mathcal{N}=1$ supersymmetric gauge theory is equivalent to holomorphic BF theory with fields $A \in \Omega ^{0,*}(\mathbb{C}^2, \fg )[1] \;\; , \;\; B \in \Omega ^{2,*}(\mathbb{C}^2, \fg ^\vee ) .$ Using the BV formalism and a bit of trickery, we can use this computation to arrive at an understanding of the holomorphically twisted current multiplet.", "In general, in the BV formalism, antifields generate equations of motion for fields under the action of the BV differential: $s \phi ^* = \lbrace S, \phi ^* \rbrace = \frac{\delta S}{\delta \phi }.$ Since one knows that the coupling between the gauge field and the corresponding symmetry current takes the form $A_\mu J^\mu $ in the untwisted theory, it is apparent that one should identify the twist of the current multiplet with the twist of the antifield multiplet, up to a shift by one originating with the homological degree of the bracket.", "That is, we should take $j \in \Omega ^{2,*}(\mathbb{C}^2, \fg ^\vee )[1]$ as the definition of the twisted current multiplet.", "It is easy to see that a pairing of the form $\langle j, A \rangle $ is well-defined and can appear in the action.", "We then find that operators in the fields $j$ , so functions on $\Omega ^{2,*}(\mathbb{C}^2, \fg ^\vee )[1]$ , are precisely the current algebra $(\fg )$ that we consider in this section, as defined in (REF ).", "In summary, the (non-centrally extended version of the) current algebra $(\fg )$ arises as the holomorphic twist of a shift of the anti-field piece of the $\mathcal{N}=1$ vector multiplet which describes flavor symmetries of the 
untwisted theory.", "The same analysis can be done to compute the holomorphic and topological twists of symmetry multiplets in $\mathcal{N}=2$ and $\mathcal{N}=4$ supersymmetry.", "In general they are given by deformations of the holomorphic current algebra we have just introduced.", "For example, in the holomorphic twist, the flavor multiplet of $\mathcal{N}=2$ supersymmetry is of the form $\Omega ^{0,*}(\mathbb{C}^2) \otimes \fg [\epsilon ]$ where $\epsilon $ is a formal variable of cohomological degree $+1$ ." ], [ "Local module", "Finally, we argue why the local holomorphic operators of the twist of $\mathcal{N}=1$ form a representation for the current algebra we have just introduced.", "As above, we look at the $\beta \gamma $ system with values in the $\fg $ -representation $V$ .", "This is not a representation in the ordinary sense; we have already seen that $\fg _{\theta }^*$ is most naturally exhibited as an $L_\infty $ algebra.", "Correspondingly, the local operators form an $L_\infty $ -module for this $L_\infty $ algebra.", "By a $\fg $ -$L_\infty $ -module $V$ , we mean a map of $L_\infty $ algebras $\rho _V : \fg \rightsquigarrow (V)$.", "The action of the higher Kac–Moody current is through the higher “modes\" algebra of the $\beta \gamma $ system.", "This algebra is obtained from placing the $\beta \gamma $ system on $\mathbb{C}^2 \setminus 0$ and projecting out the radial direction.", "So, as a vector space, it consists of operators supported on the 3-sphere $S^3$ .", "As we've already discussed, the algebra arising from the OPE of sphere operators is a dg analog of the Weyl algebra defined as follows.", "Again, take the algebra $A_2^*$ and consider the abelian dg Lie algebra $A^*_2 \otimes V^\vee [1] \oplus A_2^* \otimes V .$ The pairing between $V$ and $V^\vee $ together with the residue pairing between $A_2^*$ and itself defines a central extension of dg Lie algebras $0 \rightarrow \mathbb{C} \cdot K \rightarrow {}_V \rightarrow A^*_2 \otimes V^\vee [1] \oplus A_2^* \otimes V \rightarrow 0 .$ ", "The differential on $_V$ comes from the differential on $A_2^*$ and the bracket is $[a \otimes v^\vee , b \otimes v] = \hbar v, 
v^\\vee \\: \\oint _{S^3} a \\wedge b d z$ for $a,b \\in A_2^*$ and $v \\in V, v^\\vee \\in V^\\vee $ .", "The dg Weyl algebra is the enveloping algebra $U(_V)$ , which we refer to as the $S^3$ -modes algebra.", "Consider the dg algebra $A^*$ modeling derived algebraic functions on $^2 \\setminus 0$ that we introduced above.", "Define the $A^*$ -module of negative modes $A_{2,-} = H^{1}(A^*) .$ Since the cohomology of $A_2^*$ is concentrated in degrees $0,1$ , there is a quotient map of dg $A^*$ -modules $A_2^* \\rightarrow A_{2,-}[-1]$ .", "Define the dg ideal of positive modes $A^*_{+} = \\ker \\left(A_2^* \\rightarrow A_{2,-}[-1]\\right)$ , so that there is a short exact sequence of dg vector spaces $A_{2,+}^* \\rightarrow A_2^* \\rightarrow A_{2,-}[-1] .$ The holomorphic local operators arise as the vacuum Verma module associated to the short exact sequence above.", "Indeed, if we replace $A_2^*$ by $A_{2,+}$ in (REF ) we obtain an abelian dg Lie algebra $_{V,+}$ .", "It's abelian since the residue pairing vanishes when restricted to the positive modes.", "There is an isomorphism of vector spaces $^\\text{hol}_{0} \\cong U(_V) _{U(_{V,+})} _{K=1}$ thus endowing the holomorphic local operators with the structure of a dg module for $_V$ .", "The action of the higher Kac–Moody algebra factors through the $S^3$ -modes algebra.", "At the quantum level, the current algebra is built from the $L_\\infty $ algebra $\\tilde{\\fg }^*_{\\theta _V}$ as in $(\\ref {ext})$ where $\\theta _V$ is determined by a certain 1-loop anomaly analogous to the Konishi anomaly for $=1$ SUSY.", "The anomaly arises from trying to lift the free field realization of §REF to the quantum level.", "For the $\\beta \\gamma $ system with values in $V$ , it is shown in  that $\\theta _V$ is a multiple of the $\\fg $ -invariant cubic functional $X, Y, Z \\in ^3(\\fg ) \\rightarrow _V(XYZ) .$ For this $\\theta _V$ one can construct an $L_\\infty $ -morphism from $\\tilde{\\fg }^*_{\\theta _V}$ to 
the algebra of $S^3$ -modes of the $\\beta \\gamma $ system $\\tilde{\\fg }^*_{\\theta _V} \\rightsquigarrow U(_V)$ which we interpret as a higher dimensional analog of the free field realization in CFT.", "For an explicit formula we refer to .", "Since $_V$ acts on $^\\text{hol}_0$ by definition, there is an induced representation of the $L_\\infty $ algebra $\\tilde{\\fg }_{\\theta _V}^*$ on $_0^\\text{hol}$ through the cited $L_\\infty $ map.", "One way to understand this free field realization more explicitly is to construct a version of holomorphic local operators in the current algebra itself.", "Just as in the $\\beta \\gamma $ case, we can define the $L_\\infty $ vacuum module ${\\rm Vac}_{\\theta _V} (\\fg ) = U\\left(\\tilde{\\fg }^*_\\theta \\right) _{U(A_+ \\fg \\oplus \\cdot K)} _{K = 1}$ where $_{K = 1}$ is the module for which $K$ acts by 1.", "We call this $\\tilde{\\fg }_{\\theta _V}$ -module the vacuum module at level 1.", "There is an embedding of the higher Kac–Moody vacuum module inside the holomorphic operators of the $\\beta \\gamma $ system described as follows.", "First off, as a vector space, we can identify ${\\rm Vac}_{\\theta _V} (\\fg )$ with $([z_1,z_2]^\\vee \\fg [-1]) .$ If $X \\in \\fg $ we write $X_{n_1, n_2}$ for the linear element $(z_1^{n_1} z_2^{n_2})^\\vee X \\in {\\rm Vac}_{\\theta _V}(\\fg )$ .", "The embedding ${\\rm Vac}_{\\theta _V} (\\fg ) \\rightarrow _0^\\text{hol}$ is defined on linear elements by $X_{n_1,n_2} \\mapsto \\sum _{k_1, k_2 \\ge 0} \\sum _{i,j} _{k_1, k_2}^j \\rho (X)^i_j _{n_1-k_1, n_2-k_2 ; i}$ where $\\rho : \\fg \\rightarrow (V)$ denotes the representation.", "This map is compatible with the free field realization above in the sense that it is a map of $\\tilde{\\fg }_{\\theta _V}^*$ -modules.", "In future work, we aim to use the description of the holomorphic local operators as a module for the higher current algebra to decompose the local character into characters of the current algebra." 
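To unpack the embedding of the vacuum module, it may help to spell out its lowest mode; this is our own illustration, and we write $b$ and $c$ for the $\beta $- and $\gamma $-modes whose symbols were lost in the extracted formula above.

```latex
% Our illustration: the weight-(0,0) case of the embedding Vac -> Obs.
% The gamma-modes of local operators carry labels n_1, n_2 >= 0, so for
% n_1 = n_2 = 0 only the k_1 = k_2 = 0 term can act, giving
X_{0,0} \;\longmapsto\; \sum_{i,j} b^{\,j}_{0,0} \, \rho(X)^{i}_{\;j} \, c_{0,0;\,i} \, ,
% which is the familiar classical Noether current of the flavor symmetry,
%   J^X = \langle \beta , \rho(X)\,\gamma \rangle ,
% sitting inside the local operators as its zero mode.
```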
], [ "Dimensional reduction", "In this section, we consider the dimensional reduction of the $\\beta \\gamma $ system to two dimensions.", "Upon reduction, four-dimensional minimal supersymmetry becomes $=(2,2)$ supersymmetry in two dimensions; one can therefore obtain the holomorphic twist of an $=(2,2)$ theory with $F$ -term interactions (i.e., a Landau–Ginzburg theory) by dimensionally reducing the $\\beta \\gamma $ system on 2 along $, or considering the $$ system on a complex four-manifold which is a product of two Riemann surfaces.$ More generally, we can consider the dimensional reduction of the $\\beta \\gamma $ system on flat space, along a plane that may not coincide with a complex subspace of 2.", "Under such a reduction, there is an inclusion map of nilpotence varieties between that of the higher-dimensional and that of the lower-dimensional theory.", "However, this map is not a map of stratified spaces; the lower-dimensional nilpotence variety will, in general, be stratified more finely than the higher-dimensional one, so that (for example) holomorphic twists in four-dimensional minimal supersymmetry may reduce to either holomorphic or $B$ -type topological twists of two-dimensional $=(2,2)$ supersymmetry.", "In terms of the complexification $V_4$ , a complex structure on $V$ corresponds to a maximal isotropic subspace with respect to the standard bilinear (complex) inner product on 4, which can be thought of as the space of homotopically trivial translation operators in the holomorphic twist.", "A dimensional reduction, on the other hand, corresponds to the choice of a two-dimensional subspace of $V$ , corresponding to the translations that are set to zero.", "The inner product on the complexification of this space will always be nondegenerate.", "As such, the intersection of these spaces may have dimension either zero (generically) or one, so that their span (the space of all translations which act trivially in the twisted, dimensionally reduced 
theory) has dimension either four (generically) or three.", "The former case is a topological twist of $B$ -type, whereas the latter is the two-dimensional holomorphic twist.", "In the following subsections, we address each of these constructions in turn, beginning with a spacetime that is the product of two complex curves." ], [ "Product of Riemann surfaces", "Let $\\Sigma _1$ and $\\Sigma _2$ be two Riemann surfaces, and consider the $\\beta \\gamma $ system in two complex dimensions on the product $\\Sigma _1 \\times \\Sigma _2$ .", "There is a slight generalization of the $\\beta \\gamma $ system, as we've introduced it, that will be relevant in this section: we can replace functions on the complex surface by sections of an arbitrary holomorphic vector bundle, and require that the fields live in the Dolbeault resolution of holomorphic sections of that bundle.", "For the case at hand, we take this bundle to be the pullback of a holomorphic bundle $$ on $\\Sigma _2$ along the obvious projection $\\pi _2 : \\Sigma _1 \\times \\Sigma _2 \\rightarrow \\Sigma _2$ .", "The BV fields of the $\\beta \\gamma $ system on $\\Sigma _1 \\times \\Sigma _2$ with values in the bundle $\\pi _2^* $ are then of the form (, ) 0,* (1 2, 2* ) 2,*(1 2 , 2* )[1] with free action functional $\\int \\beta , \\gamma \\:_{}$ .The pairing $-,-\\:_{}$ is the fiberwise linear pairing between $$ and $^\\vee $ .", "The compactification along $\\Sigma _2$ of the two-dimensional $\\beta \\gamma $ system on the complex surface $\\Sigma _1 \\times \\Sigma _2$ with values in $\\pi _2^* $ is equivalent to the one-dimensional $\\beta \\gamma $ system on $\\Sigma _1$ , with values in the graded vector space H*(2,).", "As a word of caution, by the “$\\beta \\gamma $ system\" with values in a graded vector space we allow for the possibility for fields of both even and odd parity.", "In one complex dimension, the sectors of odd parity are commonly referred to as “$bc$ systems\" in the literature.", "We 
recognize the $\beta \gamma $ system in the proposition as the holomorphic twist of the $(0,2)$ -supersymmetric sigma model with values in the graded vector space $H_{}^*(\Sigma _2,)$ .", "It can be checked directly that the twist of the $(0,2)$ sigma model is given by such a $\beta \gamma $ system; see for instance .", "This is completely direct.", "By Dolbeault formality of Riemann surfaces, we can replace the Dolbeault cochain complex $\Omega ^{0,*}(\Sigma _2)$ with its cohomology.", "Thus, the complex fields of the $\beta \gamma $ system (REF ) are quasi-isomorphic to $\Omega ^{0,*}(\Sigma _1) \otimes H_{}^*(\Sigma _2 , ) \oplus \Omega ^{1,*}(\Sigma _1) \otimes H_{}^*(\Sigma _2, K_{\Sigma _2} ^\vee )[1],$ where we have used the fact that $\Omega ^{2,*}(\Sigma _1 \times \Sigma _2) \cong \Omega ^{1,*}(\Sigma _1) \, \hat{\otimes } \, \Omega ^{1,*}(\Sigma _2)$ . $\hat{\otimes }$ denotes the completed tensor product, which agrees with the ordinary tensor product for finite dimensional vector spaces.", "By Serre duality, $H_{}^*(\Sigma _2, K_{\Sigma _2} ^\vee ) \cong \left(H^*_{}(\Sigma _2, )\right)^\vee [-1]$ , hence the fields can be written as $\Omega ^{0,*}(\Sigma _1) \otimes H_{}^*(\Sigma _2 , ) \oplus \Omega ^{1,*}(\Sigma _1) \otimes \left(H^*_{}(\Sigma _2, )\right)^\vee .$ These are the fields of the (ordinary) $\beta \gamma $ system on $\Sigma _1$ with values in $H_{}^*(\Sigma _2 , )$ .", "To check that the action functional is the correct one amounts to observing that the induced BV pairing on this space of fields comes from integration along $\Sigma _1$ together with the linear pairing on $H_{}^*(\Sigma _2 , )$ , which is obvious."
], [ "The case $\\Sigma _2 = T^2$ and dimensional reduction", "Consider the specific case that $\\Sigma _2 = T^2$ , and $$ is the trivial bundle with fiber $V$ .", "The compactified theory is equivalent to the $\\beta \\gamma b c$ system on $\\Sigma _1$ , whose fields are $(\\gamma _{1}, \\beta _{1}) \\in \\Gamma (\\Sigma _1, _{\\Sigma _1} V \\oplus K_\\Sigma V^\\vee )$ and $(c, b) \\in \\Gamma (\\Sigma , _{\\Sigma _1} V [1] \\oplus K_{\\Sigma _1} V^\\vee [-1])$ with action functional S(1, 1, c, b) = 1 1, 1V + 1b, cV .", "We can also obtain this by naive dimensional reduction of the $\\beta \\gamma $ system on 2: we simply take all Dolbeault forms to be independent of the $z_2$ coordinate, obtaining $\\Omega ^{0,*}( V)[d \\bar{z}_2] \\oplus dz_2\\cdot \\Omega ^{1,*}( V^\\vee ) [d\\bar{z}_2].$ Identifying the antifield of $\\gamma _1 \\in \\Omega ^{0,*}($ with $\\beta _1 \\in dz_2 \\, d\\bar{z}_2 \\cdot \\Omega ^{1,*}($ , and similarly the antifield of $c \\in d\\bar{z}_2\\cdot \\Omega ^{0,*}($ with $b \\in dz_2 \\cdot \\Omega ^{1,*}(,$ , we recover precisely the $\\beta \\gamma b c$ system above, with $d\\bar{z}_2]$ playing the role of $H^*_{\\bar{\\partial }}(T^2)$ .", "This $\\beta \\gamma bc$ system is the holomorphic twist of the $(2,2)$ -supersymmetric $\\sigma $ -model in two dimensions.", "In other words, compactification of the holomorphic theory along $T^2$ in $\\Sigma _1 \\times T^2$ is equivalent to dimensional reduction.", "Furthermore, dimensional reduction commutes with the holomorphic twist." 
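The torus cohomology entering the identification above can be spelled out explicitly; this is a standard fact which we add for clarity rather than a quotation from the text.

```latex
% Standard Dolbeault cohomology of the elliptic curve T^2 (our addition):
H^{*}_{\bar\partial}(T^2) \;\cong\; H^{0,0} \oplus H^{0,1}
  \;=\; \mathbb{C}\cdot 1 \;\oplus\; \mathbb{C}\cdot d\bar z_2
  \;\cong\; \mathbb{C}[d\bar z_2] \, ,
% so compactifying along T^2 yields a beta-gamma system valued in V
% (from H^{0,0}) together with a bc system valued in V[1] (from H^{0,1}),
% matching the (gamma_1, beta_1, c, b) field content listed above.
```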
], [ "The case $\\Sigma _2 = ^1$", "Let $$ be a fixed line bundle on $^1$ and consider the following version of the higher dimensional $\\beta \\gamma $ system on $_z \\times ^1$ whose fields are $(\\gamma , \\beta ) \\in \\Omega ^{0,*}(_z \\times ^1 , \\pi ^* {V}) \\oplus \\Omega ^{2, *}(_z \\times ^1 , \\pi ^* ^\\vee {V}^\\vee )[1]$ where $\\pi : _z \\times ^1 \\rightarrow ^1$ is the projection, and ${V}$ denotes the trivial bundle with fiber $V$ .", "Just as above, it is easy to read off the compactification of this theory along $^1$ using Dolbeault formality (or by specializing Proposition REF ).", "The fields are $\\begin{aligned}[c]\\gamma _{1} &\\in \\Omega ^{0,*}(_z) H^{*}(^1, ) V , \\\\\\beta _{1} &\\in \\Omega ^{1,*}(_z) H^*(^1, K_{^1} ^\\vee ) V^\\vee [1].\\end{aligned}$ By Serre duality, this is precisely the one-dimensional $\\beta \\gamma $ system on $_z$ with values in the vector space $H_{\\bar{\\partial }}^*(^1 , ) V$ .", "This confirms the results of  at the level of the twist, which is a special case of Proposition REF ." 
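As a concrete instance of this $\mathbb{P}^1$ compactification, one can take the line bundle to be $\mathcal{O}(n)$; the cohomology dimensions below are standard facts, added by us as a worked example rather than taken from the text.

```latex
% Standard cohomology of O(n) on P^1 (our worked example):
H^{0}\big(\mathbb{P}^1, \mathcal{O}(n)\big) \cong
  \begin{cases} \mathbb{C}^{\,n+1} & n \ge 0 \\ 0 & n < 0 \end{cases}
\qquad
H^{1}\big(\mathbb{P}^1, \mathcal{O}(n)\big) \cong
  \begin{cases} \mathbb{C}^{\,-n-1} & n \le -2 \\ 0 & n > -2 \end{cases}
% Hence the compactified theory on C_z is an even beta-gamma system valued in
% C^{n+1} (x) V for n >= 0, the trivial theory for n = -1, and an odd bc-type
% system valued in C^{-n-1} (x) V[1] for n <= -2: the choice of bundle dials
% the rank and parity of the one-dimensional system.
```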
], [ "General reduction", "In this section, we return to considering dimensional reduction of the flat $\\beta \\gamma $ system along a plane that does not necessarily define a complex subspace of 2.", "The dimensional reduction of the $\\beta \\gamma $ system on $^4$ , with respect to a fixed real subspace $^2 \\subseteq ^4$ , is a family over one component of the nilpotence variety (which is equivalently the space of complex structures on $^4$ ).", "Its fields are [ *,*(, + + - ] , so that the spectral sequence from the two-dimensional holomorphic twist to the B-model is nothing other than the Hodge-to-de-Rham spectral sequence.", "We consider the fields of the $\\beta \\gamma $ system, at the point on the nilpotence variety with coordinates $_+,_-$ .", "As we have computed above, the differential of the free theory is then determined by a BV action of the form L = , t + Lint,       = dzi ( + zi + - ij zj ), so that the complex before dimensional reduction is just the sum of Dolbeault complexes with respect to a deformed complex structure: [eq:t-deformed] 0,*(2, ) 2,*(2, )[1].", "Upon dimensional reduction, we simply take the fields that appear to be independent of $z_2$ and $\\bar{z}_2$ , and replace corresponding derivatives by zero, so that the differential reduces to +  dz1 z1 - -   dz2 z1.", "This means that we can rewrite the fields () after dimensional reduction as a sum of total de Rham complexes of $, after reinterpreting the odd generator $ dz2$ as~$ dz1$:\\begin{equation}\\begin{aligned}[c]\\Omega ^{0,*}(2, \\bar{\\partial }_) &\\rightarrow \\left[ \\Omega ^{0,*}([d\\bar{z}_2], \\bar{\\partial }_\\right] \\\\& \\cong \\left[ \\Omega ^{*,*}(, _+ \\bar{\\partial }- _- \\partial \\right].\\end{aligned}\\end{equation}In the free theory, there is therefore a spectral sequence from the local operators in the holomorphic twist (contributing to the elliptic genus) to those contributing to the $ B$-model chiral ring, which naturally appears from the family of 
complexes computing the local operators over the four-dimensional nilpotence variety.", "It is nothing other than the Hodge-to-de-Rham spectral sequence on~$ .", "A similar spectral sequence passes from the holomorphic twist to the $A$ -model chiral ring, but cannot be seen by dimensional reduction from four dimensions; for theories of chiral superfields, this is simply a cancelling differential." ] ]
1906.04221
[ [ "Radiation pressure clear-out of dusty photoevaporating discs" ], [ "Abstract Theoretical models of protoplanetary disc dispersal predict a phase where photoevaporation has truncated the disc at several AU, creating a pressure trap which is dust-rich.", "Previous models predicted this phase could be long-lived (~Myr), contrary to the observational constraints.", "We show that dust in the pressure trap can be removed from the disc by radiation pressure exerting a significant acceleration, and hence radial velocity, on small dust particles that reside in the surface layers of the disc.", "The dust in the pressure trap is not subject to radial drift, so it can grow to reach sizes large enough to fragment.", "Hence small particles removed from the surface layers are replaced by the fragments of larger particles.", "This link means radiation pressure can deplete the dust at all particle sizes.", "Through a combination of 1D and 2D models, along with secular models that follow the disc's long-term evolution, we show that radiation pressure can deplete dust from pressure traps created by photoevaporation in ~1e5 years, while the photoevaporation-created cavity still resides at 10s of AU.", "After this phase of radiation pressure removal of dust, the disc is gas-rich and dust depleted and radially optically thin to stellar light, having observational signatures similar to a gas-rich, young debris disc.", "Indeed, many of the young stars (~<10 Myr old) classified as hosting a debris disc may instead host discs that have undergone this process." 
], [ "Introduction", "Protoplanetary discs are the environments in which planets form and migrate.", "These discs contain both gas and dust particles and are observed to live for up to $\\sim 10$  Myr, before being dispersed [29], [33].", "Since many observed exoplanets (including the ubiquitous, close-in super-Earths and mini-Neptunes) contain voluminous hydrogen/helium-rich envelopes [70], it is hypothesised they formed before the gas disc dispersed [53], [35].", "Therefore, understanding the disc dispersal timescale is crucial to our understanding of planet formation.", "Protoplanetary discs deplete their solid and gas reservoirs through a variety of processes.", "These discs are accretion discs, so gas and dust particles are accreted onto the central proto-star.", "Gas and dust can also be sequestered into forming planets.", "However, perhaps the most crucial process for disc dispersal is the loss of gas (and tiny dust particles) through a photoevaporative wind.", "A photoevaporative wind occurs because high-energy photons (UV and X-rays) heat up the surface layers of the disc to temperatures of order the escape temperature, causing the gas to escape in a thermally driven wind [34], [26].", "The mass-loss rates depend on which radiation bands dominate the heating, with rates in the range $10^{-10}-10^{-8}$  M$_\\odot $  yr$^{-1}$ [24], [68], [41]; however, the general evolutionary pathway that photoevaporating discs follow is the same.", "At early times the accretion rate vastly exceeds the photoevaporative mass-loss rate, and the disc evolves as a standard accretion disc.", "Once the photoevaporation rate and accretion rate become comparable, gas that would have accreted onto the star is removed from the disc in the wind, starving the inner regions.", "Ultimately, photoevaporation completely cuts off the supply of gas to the inner disc, opening a gap at the radius where the disc's X-ray and UV heated atmosphere is unbound enough to escape, which is typically 
around 1 AU $(M_*/M_\\odot )$ [2], [27], [45].", "Since the inner disc is cut off from resupply, it drains onto the central star rapidly, on its local viscous time [16], [59], leaving behind a large hole extending out from the star to a radius of order 1-10 AU; the remaining gas- and dust-rich disc is then photoevaporated to large radii.", "The photoevaporation model successfully explains the “two-timescale” nature of protoplanetary disc evolution, where the inner regions of protoplanetary discs appear to evolve slowly on Myr timescales, before dispersing on a much more rapid timescale [37], [20], [38], [21].", "Furthermore, slow-moving ($\\sim 5-10$  km s$^{-1}$ ) ionized winds are observed to be occurring around many nearby young stars hosting discs [31], [54], [57] and are consistent with the photoevaporation model [4], [19], [55], [48], [23].", "The photoevaporation model can also explain a large fraction of observed “transition discs” [46], [18], specifically those with holes $\\lesssim 10$  AU and accretion rates $\\lesssim 10^{-9}$  M$_\\odot $  yr$^{-1}$ [45] and even those with larger holes and higher accretion rates in more recent models that incorporate CO depletion in the outer disc [25].", "Transition discs are protoplanetary discs with evidence for a large hole or cavity in their discs [18], but they are known to be a heterogeneous class of objects [46], [65] and their origins are not always clear.", "However, a specific prediction of the standard photoevaporation scenario is that there should be a large number of transition discs with hole sizes $\\gtrsim 10$  AU but that are no longer accreting.", "This final long-lived stage of disc dispersal gives rise to transition discs which have lifetimes between $10^5$ and $10^6$  years, but remain optically thick – “relic discs” [45].", "The long disc lifetimes emerge from the simple fact that discs store most of their mass at large radii, but photoevaporative clearing proceeds from the inside out, so it will 
always take longer to remove the larger disc mass that resides at larger distance.", "While several discs satisfy this criterion [17], the number of observed non-accreting transition discs with large holes falls far below the theoretical expectations [45], [46].", "Studies by [15] and [30] showed that optically thick relic discs are rare and many non-accreting stars that show evidence for a circumstellar disc are more consistent with young, radially optically thin, debris discs.", "Since the relic disc phase emerges from simple arguments, it implies that some other mechanism operates to clear these discs more rapidly than originally envisioned.", "[47] and [49] suggested a dynamical instability (thermal sweeping) caused by X-ray heating of the inner edge of transition discs could lead to rapid dispersal of relic discs.", "However, recent numerical simulations by [32] have indicated that thermal sweeping is inefficient for the majority of observed discs.", "Therefore, the conundrum remains.", "The photoevaporation scenario overproduces non-accreting transition discs relative to observations.", "Here we revisit this problem on a new tack.", "Many disc dispersal models have focused on the removal of the gas component; however, it is the considerably lower-mass dust component of the disc that dominates the majority of the observable tracers.", "Here we focus on the removal of the dust from transition discs created by photoevaporation and show that radiation pressure from the central star can deplete dust discs on timescales $\\sim 10^5$  yrs.", "In Section  we provide an overview of this mechanism.", "In Section  we derive the expected dust mass-loss rates for this mechanism.", "In Section  we incorporate the numerically derived dust-mass loss rates in our disc evolution code to evaluate the secular behavior of discs in this phase and show our main results, including a comparison of the SEDs expected from our model with observations." 
], [ "Overview of the Scenario", "Our new mechanism is grounded within the photoevaporative disc clearing scenario, where a combination of viscous accretion and photoevaporative mass-loss triggers inside-out disc clearing and gap opening when the accretion rate drops below the photoevaporation rate [16].", "In particular, all our quantitative calculations are done within the framework of the X-ray photoevaporation model [43], [45], [47].", "We emphasize that our model will work in the context of any photoevaporation model (e.g.", "EUV/FUV), or indeed any scenario where the disc contains a significant pressure trap (see Section REF ).", "Therefore, our scenario could be applied to models where the accretion is not driven by viscous stresses, but by magnetised winds [6], [7], [28], [60], [8], [9].", "In models which attempt to describe wind-driven disc evolution, a pressure trap is typically formed after inner disc clearing [69]; however, it is not clear how important our new mechanism will be without performing specific calculations, because our mechanism requires vertical diffusion of particles, which may be less efficient in the wind-driven accretion scenario. 
After photoevaporation has opened a gap in the protoplanetary disc and the inner disc has drained onto the star, the gas surface density is maximized at a slightly larger radius than the cavity radius.", "This surface density maximum is important as it produces a pressure maximum in the disc.", "Pressure maxima are regions where there is no radial pressure support in the gas, and it rotates at the Keplerian velocity ($v_K$ ).", "Dust particles also have no pressure support, thus at the gas pressure maxima there is no drag, and dust particles tend to become trapped.", "In Figure REF , we show the gas and dust profiles for a photoevaporating disc.", "This snapshot is shown after gap-opening, and the inner disc has drained onto the star.", "The model shown is the “median” model of [45] that is evolving under the X-ray photoevaporation model; see [45] and Section  for a detailed description of this calculation.", "We note that the gas column in the pressure trap is below that required for ionization from the X-rays (which has a typical absorption column of $\\sim $ 8 g cm$^{-2}$ , e.g.", "[64]), implying MRI-driven accretion can extend all the way to the mid-plane.", "Figure: The gas (solid) and dust (dashed) surface density distribution in an evolving and photoevaporating disc after $\\sim 3.55$  Myr of evolution.", "The disc shown is losing mass through the X-ray photoevaporation prescription; the calculation shown is the “median” model described by .", "Figure REF shows that the gas surface density profile created by photoevaporation traps a significant amount of dust in its pressure trap (located at $\\sim 5.5$  AU).", "The pressure trap does not occur at the maximum gas surface density as there is a temperature gradient in the disc.", "The pressure trap has a dust surface density nearly two orders of magnitude larger than the surrounding regions and is radially optically thick to stellar radiation.", "The spectral energy distributions (SEDs) of these 
photoevaporating discs are inconsistent with the observational constraints that show most non-accreting stars (Weak T Tauri Stars – WTTs) either have no disc emission or show radially optically thin disc emission.", "Only by the time the cavity has reached radii $\\sim 100$  AU, after a subsequent $\\sim 1$  Myr of evolution, are these photoevaporating discs compatible with the observations [47].", "Dust traps have another consequence: the lack of radial drift and high dust densities mean dust-particles can rapidly grow by coagulation [56].", "Ultimately it is collision-induced fragmentation that limits the growth, where turbulence-induced relative velocities between dust particles exceed the fragmentation velocity.", "This growth to the “fragmentation barrier” causes large particles to collide and produce smaller grains which are then able to coagulate into larger grains, so a coagulation-fragmentation equilibrium cascade is quickly established [11].", "The maximum grain size is obtained by equating the turbulent relative velocities to the fragmentation velocity ($\\sim $ 10-50 m s$^{-1}$ for ice-rich grains – [67], [66]) to approximately obtain [10]: $a_{\\rm frag} &=&\\frac{2\\Sigma _g}{\\pi \\alpha \\rho _i}\\left(\\frac{u_f}{c_s}\\right)^2=1\\,{\\rm mm} \\left(\\frac{\\Sigma _g}{1~{\\rm g~cm^{-2}}}\\right)\\left(\\frac{\\alpha }{10^{-3}}\\right)^{-1}\\nonumber \\\\ &\\times &\\left(\\frac{\\rho _i}{1.25\\,{\\rm g~cm^{-3}}}\\right)^{-1}\\left(\\frac{u_f}{10~{\\rm m~s^{-1}}}\\right)^2\\left(\\frac{T}{150~{\\rm K}}\\right)^{-1}$ where $\\Sigma _g$ is the gas surface density, $\\alpha $ is the Shakura-Sunyaev turbulent parameter, $\\rho _i$ is the internal density of the dust particles, $u_f$ is the fragmentation velocity, $c_s$ is the sound-speed and $T$ the gas temperature.", "We have evaluated Equation REF for parameters we find in the pressure maxima of photoevaporating discs resulting from the X-ray photoevaporation model (e.g.", "Figure REF , [45]).", "This 
growth-fragmentation cascade is key to our scenario; if small dust particles are removed through an external process, then they will be replaced by fragmentation of large particles and vice-versa.", "Therefore, depleting any range of particle sizes depletes all particle sizes from the pressure trap.", "The vertical distribution of the dust particles is size-dependent.", "Turbulence easily lofts small dust particles, so they settle towards the mid-plane more slowly than large particles.", "Therefore the large particles (e.g.", "mm-sized) are typically settled towards the mid-plane, and the small particles (e.g.", "micron-sized) are lofted to several scale heights.", "For small particles with dimensionless stopping times ($\\tau _s=\\pi \\rho _i a\\Omega _K/\\rho _g v_t$ , with $\\Omega _K$ the Keplerian angular velocity, $\\rho _g$ the gas density and $v_t$ the mean thermal speed of gas particles) smaller than the viscous $\\alpha $ , the dust scale height ($H_d$ ) relative to the gas scale height ($H$ ) is $H_d=H\\sqrt{\\alpha /\\tau _s}$ [71].", "Therefore, the small dust particles intercept the stellar radiation at several scale heights above the mid-plane, and it is the small dust particles that lie above the disc's photosphere." 
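The size-dependent settling described above can be illustrated numerically. The sketch below is our own, not from the paper: it evaluates the quoted stopping time and dust scale height for a range of grain sizes under assumed nominal trap conditions ($\Sigma_g \approx 1$ g cm$^{-2}$, $T \approx 150$ K, $\alpha = 2.5\times 10^{-3}$, a 0.7 M$_\odot$ star with the trap at 5.5 AU); the cap of $H_d$ at the gas scale height $H$ is the standard well-mixed limit for very small grains.

```python
import math

# Illustrative numbers only: assumed trap conditions (not the paper's exact values).
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24  # cgs constants
M_star = 0.7 * 1.989e33          # 0.7 Msun, g
R = 5.5 * 1.496e13               # trap radius, cm
Sigma_g, T, alpha = 1.0, 150.0, 2.5e-3
rho_i, mu = 1.25, 2.3            # icy-grain internal density; mean molecular weight

Omega_K = math.sqrt(G * M_star / R**3)
c_s = math.sqrt(k_B * T / (mu * m_H))
H = c_s / Omega_K                               # gas scale height
rho_g = Sigma_g / (math.sqrt(2.0 * math.pi) * H)  # mid-plane gas density
v_t = math.sqrt(8.0 / math.pi) * c_s              # mean thermal speed

def tau_s(a):
    """Dimensionless Epstein stopping time for a grain of size a [cm]."""
    return math.pi * rho_i * a * Omega_K / (rho_g * v_t)

def dust_scale_height(a):
    """H_d = H sqrt(alpha/tau_s), capped at H in the well-mixed limit."""
    return H * min(1.0, math.sqrt(alpha / tau_s(a)))

for a in (1e-4, 1e-3, 1e-2, 1e-1):  # 1 micron ... 1 mm
    print(f"a = {a*1e4:7.1f} um : tau_s = {tau_s(a):.2e}, H_d/H = {dust_scale_height(a)/H:.3f}")
```

With these numbers, mm-sized grains settle to a few per cent of the gas scale height while micron-sized grains remain essentially well mixed, consistent with the picture above.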
], [ "The role of radiation pressure", "The dust particles that lie above the photosphere experience an additional acceleration due to radiation pressure arising from stellar photons.", "The strength of the radiation pressure relative to gravity can be characterized by the ratio of the radiative acceleration to the gravitational acceleration ($\\beta $ ), where: $\\beta _* = \\frac{\\kappa L_*}{4\\pi G c M_*}$ Here $\\kappa $ is the opacity of an individual dust grain, $L_*$ is the stellar luminosity, $G$ is the gravitational constant, $c$ the speed of light and $M_*$ the stellar mass.", "In the absence of gas, particles with $\\beta >0.5$ are on unbound orbits and can escape the star.", "However, the situation is different in the presence of gas.", "Small particles with $\\tau _s<1$ experience gas drag that causes them to follow circular orbits; however, the reduced effective gravity (due to the radiation pressure support) they experience means the dust particles orbit the star more slowly than the gas.", "The gas-drag that arises causes the dust-particles to gain angular momentum from the gas and flow outwards with a velocity given by [63], [50]: $v_d^R\\approx u^R_d + \\beta \\tau _s v_K$ where $u_d^R$ is the radial gas velocity and $v_K$ is the Keplerian orbital velocity.", "The physics is identical to the case of gas-drag induced by pressure gradients in the gas, but in this case, it is the radiation pressure that modifies the orbital velocity of the dust, rather than the pressure gradient modifying the orbital velocity of the gas.", "Therefore, in the presence of gas, even particles with $\\beta \\ll 0.5$ can be driven away from the star.", "[63] studied the radiation pressure driven outflows in the case of optically thick primordial disc structures.", "They showed that in almost all cases the radiation pressure driven dust mass-flux through the optically thin surface layers was lower than the inward mass-flux of dust through the rest of the optically thick disc.", 
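As a rough illustration of the outflow speed implied by $v_d^R \approx u_d^R + \beta\tau_s v_K$, the following sketch (our own, with assumed values: a grain opacity $\kappa \sim 6\times 10^3$ cm$^2$ g$^{-1}$ appropriate for a micron-sized grain, a local $\tau_s \sim 10^{-2}$ near the photosphere, a 0.7 M$_\odot$, roughly solar-luminosity star, and a trap at 5.5 AU) evaluates $\beta$ and the radiation-pressure term of the velocity:

```python
import math

# Rough estimate with assumed parameters (illustrative, not the paper's numbers).
G, c = 6.674e-8, 2.998e10        # cgs
L_star = 3.828e33                # ~1 Lsun, erg/s
M_star = 0.7 * 1.989e33          # g
R_trap = 5.5 * 1.496e13          # cm
kappa, tau_s = 6.0e3, 1.0e-2     # assumed grain opacity and local stopping time

beta = kappa * L_star / (4.0 * math.pi * G * c * M_star)  # beta_* expression
v_K = math.sqrt(G * M_star / R_trap)
v_out = beta * tau_s * v_K       # radiation-pressure-driven term, beta*tau_s*v_K

print(f"beta  = {beta:.2f}")           # exceeds 0.5: unbound in the gas-free limit
print(f"v_out = {v_out/100:.0f} m/s")  # tens of m/s above the photosphere
```

Even though such a grain would only be marginally unbound in the gas-free limit, the drag-mediated outflow reaches tens of m s$^{-1}$, far exceeding typical viscous radial speeds.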
"Furthermore, in the case studied by [63], the flaring photosphere means that dust particles driven radially outwards above the photosphere at one radius will drop below the photosphere at a slightly larger radius.", "In the case of a pressure trap, there will be no mass-flux through the optically thick portions of the disc.", "Since the pressure trap completely dominates the local dust mass (see Figure REF ), the photosphere will not flare.", "Instead, the photosphere will follow lines of constant co-latitude (i.e.", "it will just be radially outwards from the star).", "Therefore, unlike the case in a primordial disc, dust particles that are driven radially outwards above the photosphere will always remain above the photosphere.", "The density above the photosphere is approximately $\\sim 1/\\kappa \\ell $ (where $\\ell $ is the length scale on which the density is varying).", "In the case of the dust trap, the length scale on which the dust density is varying is significantly smaller than the primordial disc case (which is of order $R$ ).", "The narrow dust trap results in much higher dust densities above the photosphere, and consequently, the mass-fluxes can be much larger than in the primordial disc case studied by [63]." 
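A quick order-of-magnitude check of the $\rho \sim 1/\kappa\ell$ argument, using an assumed small-grain opacity of $10^4$ cm$^2$ g$^{-1}$ and a trap width of a few per cent of the trap radius (illustrative values only):

```python
# Compare the dust density above the photosphere, rho ~ 1/(kappa * l),
# for a narrow dust trap (l ~ 0.05 R) versus a primordial disc (l ~ R).
kappa = 1.0e4            # cm^2/g, assumed individual small-grain opacity
R = 5.5 * 1.496e13       # assumed trap radius, cm

rho_primordial = 1.0 / (kappa * R)
rho_trap = 1.0 / (kappa * 0.05 * R)

print(f"rho above photosphere, primordial: {rho_primordial:.1e} g/cm^3")
print(f"rho above photosphere, trap:       {rho_trap:.1e} g/cm^3")
print(f"enhancement factor: {rho_trap/rho_primordial:.0f}")
```

The narrower length scale directly translates into a factor of ~20 higher dust density above the photosphere, which is what allows the trap to sustain much larger radiation-pressure-driven mass-fluxes.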
], [ "Disc dispersal scenario", "Our new disc dispersal scenario is an extension of the previous arguments about dust trapping and the vertical distribution of particles of different sizes, but now we explicitly focus on the role of radiation pressure.", "Dust particles are trapped in the pressure trap and grow to the fragmentation limit ($a_{\\rm frag}$ ).", "Fragmentation creates small particles which are lofted to several scale heights by turbulence.", "The small particles dominate the opacity and set the photosphere's position.", "The small particles above the photosphere experience radiation pressure and are removed from the dust trap and disc due to the large radial velocities they attain.", "The removed small particles are replaced by new small particles that originate from the fragmentation of larger particles, consequently reducing the mass in the large particles.", "Thus, the removal of small dust particles results in the depletion of dust particles of all sizes.", "This connection between the small particles and large particles only holds if the fragmentation timescale is short enough to replenish the lost small particles.", "We explore this requirement further in Section  in light of our results.", "Our scenario is summarised in Figure REF .", "Radiation pressure clearing can also lead to run-away emptying of the dust-trap: as the dust-mass in the trap is depleted, the photosphere moves to smaller heights, so a more substantial fraction of the dust-mass lies above the photosphere, which results in faster clearing.", "Furthermore, as the photosphere moves to lower heights, larger particles (which contain a more substantial fraction of the mass) also sit above the photosphere.", "For particles with sizes $a\\gtrsim \\lambda _*$ (where $\\lambda _*$ is a representative wavelength of stellar photons, $\\sim 0.5~\\mu $ m) that follow the Epstein drag law, the dust velocity ($v_d^R\\propto \\beta \\tau _s v_K$ ) is roughly independent of particle size, 
so the mass-loss rate accelerates as a more significant fraction of the dust mass is above the photosphere.", "Ultimately, the dust-trap will be depleted until the drift of dust particles from the outer portions of the disc balances the rate at which radiation pressure removes it.", "Figure REF indicates that this will happen when the dust surface density in the trap reaches a level of order $\\Sigma \\sim 10^{-3}$  g cm$^{-2}$ , at which point the pressure trap, and hence the disc, is radially optically thin.", "We can get a rough estimate of the dust mass-loss rate by writing it as: $\\dot{M}_{\\rm dust}\\approx 2 \\pi R_{\\rm trap} H_d \\rho _{\\rm phot} v_d $ where $\\rho _{\\rm phot}\\sim 1/\\kappa H_{w}$ is the dust density above the photosphere, $H_{w}$ is the radial scale length of the dust distribution above the photosphere, $R_{\\rm trap}$ is the orbital separation of the trap and $H_d$ is the dust scale height above the photosphere.", "Evaluating this expression for nominal parameters we find: $\\dot{M}_{\\rm dust} &=& 3.2\\times 10^{-6}{\\,{\\rm M}_\\oplus }~{\\rm yr}^{-1}\\,\\beta \\left(\\frac{M_*}{1~{\\rm M}_\\odot }\\right)^{1/2}\\left(\\frac{R_{\\rm trap}}{5~{\\rm AU}}\\right)^{1/2}\\nonumber \\\\&\\times &\\left(\\frac{H_d}{H_w}\\right)\\left(\\frac{\\tau _s}{10^{-2}}\\right)\\left(\\frac{\\kappa }{10^4~{\\rm cm}^2~{\\rm g}^{-1}}\\right)^{-1}$ Note in the above, the reference opacity value is for an individual micron-sized dust grain, rather than the opacity for some gas and dust mixture.", "Comparing this mass-loss rate to the mass contained in the pressure-trap in Figure REF which is $\\sim 0.1$  M$_\\oplus $ , indicates that radiation pressure is certainly capable of depleting the dust trap on a timescale $\\sim 10^5$  years.", "However, equations REF  & REF belie the complexity of this calculation.", "The parameters $H_d$ , $H_w$ are controlled by the radiative transfer problem, which in turn are controlled by the dust dynamics.", "Furthermore, the stopping 
time $\\tau _s$ is a local stopping time (not the mid-plane value) and depends on the height of the photosphere in the disc, and the opacity ($\\kappa $ ) also depends on the particle size distribution, which in turn depends on the height of the photosphere.", "Finally, the height of the photosphere also obviously depends on the radiative transfer problem.", "Therefore, while the potential for rapid dust clearing due to radiation pressure clearly exists, we must ultimately appeal to numerical calculations to accurately determine the mass-fluxes.", "This is because several of the parameters can vary significantly; in particular, the opacity is strongly sensitive to the particle size distribution and the stopping time depends on the gas density, which varies rapidly with height $(\\rho _g \\propto \\exp (-Z^2/2H^2))$ ." ], [ "Radiation pressure driven outflows", "The scenario we wish to numerically study is essentially shown in Figure REF .", "The goal of this section is to calculate the dust mass-loss rates so that they can be included in evolutionary calculations of viscously evolving and photoevaporating discs (Section ).", "Since the timescale to reach steady-state in the dust is likely to be much shorter than the evolutionary timescale for the gas, in all our calculations we fix the gas density profile and evolve only the dust density.", "We use two types of calculation: firstly, 2D axisymmetric calculations, and secondly, a reduced one-dimensional problem that solves for the vertical dust distribution in the dust-trap.", "The reason for this is that while 2D calculations are informative, we are unable to perform a large enough parameter study to determine the rate of dust removal for inclusion in long-term evolutionary calculations.", "Therefore, we use our 2D calculations to benchmark our basic picture of the outflows and to estimate the width of the dust-trap above the photosphere ($H_{w}$ ), which we need to know a priori in the 1D models.", "The basic 
evolutionary equation for the dust is given by the following 2D advection-diffusion equation [62]: $\\frac{\\partial \\rho _d^i}{\\partial t} + \\nabla \\cdot \\left[\\rho _d^i \\mathbf {v}_d^i-\\frac{\\rho _g\\nu }{{\\rm Sc}}\\nabla \\left(\\frac{\\rho _d^i}{\\rho _g}\\right) \\right]= 0$ where $\\rho _d^i$ is the dust density for particles of size $a^i$ , $\\nu $ is the kinematic viscosity in the gas and ${\\rm Sc}$ is the Schmidt number which measures the ratio of kinematic viscosity of the gas to the diffusivity of the dust.", "Unless otherwise explicitly stated we adopt ${\\rm Sc}=1$ and work in cylindrical co-ordinates.", "In the above equation we have written the Schmidt number as a scalar; however, the Schmidt number need not be the same in the vertical and radial direction.", "We follow [63], [50] and adopt the terminal velocity approximation, essentially assuming that the dust particles spiral inwards or outwards through circular Keplerian orbits – an approximation that is valid when $\\tau _s<1$ .", "In this case the radial dust velocity becomes: $v^R_d=\\frac{\\tau _s^{-1}u^R_d+(\\beta -\\eta )v_K}{\\tau _s+\\tau _s^{-1}}$ where $\\eta $ is a dimensionless measure of the pressure gradient given by: $\\eta =-\\frac{1}{R\\Omega _K^2\\rho _g}\\frac{\\partial P}{\\partial R}$ Again adopting the terminal velocity approximation the vertical dust velocity becomes: $v^Z_d=-(1-\\beta )\\Omega _K\\tau _sZ$ Finally, for the small particles we are interested in here, the Epstein drag law gives a dimensionless stopping time of the form: $\\tau _s=\\frac{\\rho _ia^i\\Omega _K}{\\rho _gv_t}$ We can reduce our 2D calculation to an approximate problem in 1D inside the dust-trap.", "To do this, we note that inside the dust-trap $\\partial /\\partial R(\\rho _d^i/\\rho _g)\\approx 0$ as the dust-density reaches a maximum inside the trap.", "In the optically thick regions of the disc the radial dust velocity is approximately zero inside the pressure trap.", "However, 
in the optically thin regions the radial dust velocity is approximately $v_d^R\\approx \\tau _s\\beta v_K$ .", "Therefore, we approximate the radial advection term as: $\\frac{1}{R}\\frac{\\partial }{\\partial R}\\left(R\\rho _d^iv_d^R\\right)\\approx \\rho _d^i\\tau _s\\beta v_K/H_{\\rm w}$ Therefore, the reduced 1D problem becomes: $\\frac{\\partial \\rho _d^i}{\\partial t}+\\frac{\\partial }{\\partial Z}\\left[\\rho _d^iv_d^Z-\\frac{\\rho _g\\nu }{{\\rm Sc}}\\frac{\\partial }{\\partial Z}\\left(\\frac{\\rho _d^i}{\\rho _g}\\right)\\right]=-\\frac{\\rho _d^i\\tau _s\\beta v_K}{H_w}$ where the loss of dust due to radiation pressure now appears as a sink term on the RHS of the vertical advection-diffusion equation.", "With knowledge of $H_w$ , Equation REF can be used to estimate the dust mass-loss rates from the pressure trap.", "We note that $H_w$ is not the width of the pressure trap in the mid-plane, but the radial scale length of the dust distribution above the photosphere and as such depends on the radiative transfer problem.", "We have found the value of $H_w$ cannot be estimated from the gas distribution alone, and we use our full 2D calculations to calibrate an appropriate value.", "In all models, we need to perform a radiative transfer calculation which requires that we know the opacity of dust particles as a function of frequency and particle size.", "The opacity of an individual, spherical dust grain is given by: $\\kappa =\\frac{3Q(a,\\lambda )}{4\\rho a}$ where $Q(a,\\lambda )$ is the radiative efficiency.", "In our calculations we use a simplified model for the radiative efficiency of $Q=1$ for $2\\pi a > \\lambda $ and $Q=(2\\pi a/\\lambda )^{1.5}$ for $2\\pi a < \\lambda $ , where our chosen emissivity index of 1.5 in the Rayleigh limit is close to that found for water-ice covered silicate grains [14].", "The use of a simplified opacity model does not account for any resonances that may exist; however, its simplicity allows us to isolate the 
dominant physics in our simulations.", "In Figure REF we show the resulting $\\beta $ parameter for a solar luminosity, $0.7$  M$_\\odot $ star with an effective temperature of 4500 K. We calculate the characteristic wavelength of the star using Wien's displacement law, giving $\\lambda _*=0.65$  $\\mu $ m. The internal density of the dust particles is set to 1.25 g cm$^{-3}$ , appropriate for icy grains.", "Figure: The value of the ratio of acceleration due to radiation pressure compared to acceleration due to gravity ($\\beta _*$ ) assuming the extinction to the star is negligible.", "The thick line is for a 0.7 M$_\\odot $ star with a radius of 1.7 R$_\\odot $ and effective temperature of 4500 K. The thin line is for a 0.5 M$_\\odot $ star with a radius of 1.2 R$_\\odot $ and effective temperature of 3500 K.", "This figure indicates that small sub-micron sized dust grains are below the blow-out size; however, larger particles, e.g.", "10$\\mu $ m sized particles still have sufficiently large $\\beta $ parameters that they can reach significant radial velocities in the presence of gas." ], [ "Two-dimensional calculations", "We solve the two-dimensional problem on a cylindrical grid with 220 evenly spaced cells in the radial direction and 300 evenly spaced cells in the vertical direction.", "The parameters of the simulation setup are chosen to closely match those for the evolutionary photoevaporation calculations performed by [45].", "Namely, a star with mass 0.7 M$_\\odot $ , radius 1.7 R$_\\odot $ and effective temperature of 4500 K. 
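For concreteness, the simplified opacity law and the resulting $\beta_*$ can be sketched as follows (our own evaluation, not the paper's code; we take $Q=1$ in the geometric limit $2\pi a > \lambda$ and $Q=(2\pi a/\lambda)^{1.5}$ in the Rayleigh limit $2\pi a < \lambda$, for the 0.7 M$_\odot$, 1.7 R$_\odot$, 4500 K star):

```python
import math

# Simplified grain opacity and beta_* (assumed stellar parameters from the text).
G, c, sigma_SB = 6.674e-8, 2.998e10, 5.670e-5   # cgs
M_star = 0.7 * 1.989e33
R_star, T_eff = 1.7 * 6.957e10, 4500.0
L_star = 4.0 * math.pi * R_star**2 * sigma_SB * T_eff**4   # ~1 Lsun
lam_star = 0.2898 / T_eff       # Wien's displacement law, cm (~0.65 micron)
rho_i = 1.25                    # icy-grain internal density, g/cm^3

def Q(a, lam):
    """Radiative efficiency: geometric limit vs Rayleigh limit (index 1.5)."""
    x = 2.0 * math.pi * a / lam
    return 1.0 if x > 1.0 else x**1.5

def kappa(a, lam):
    """Opacity of an individual spherical grain, kappa = 3Q/(4 rho_i a)."""
    return 3.0 * Q(a, lam) / (4.0 * rho_i * a)

def beta_star(a):
    return kappa(a, lam_star) * L_star / (4.0 * math.pi * G * c * M_star)

for a_um in (0.1, 1.0, 10.0):
    print(f"a = {a_um:5.1f} um : beta_* = {beta_star(a_um * 1e-4):.2f}")
```

This reproduces the qualitative behaviour of the figure: micron and sub-micron grains sit above the gas-free blow-out threshold ($\beta_* > 0.5$), while 10 $\mu$m grains have $\beta_* \ll 0.5$ yet can still attain significant drag-mediated radial velocities.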
The gas disc has a mean molecular weight of 2.3 times the mass of hydrogen and is vertically isothermal, with a radial, $R^{-1/2}$ , temperature profile.", "The kinematic viscosity is parameterized using the standard $\\alpha $ prescription with $\\nu =\\alpha c_s H$ and $\\alpha =2.5\\times 10^{-3}$ .", "This value of $\\alpha $ is adopted as it is the value identified by [45] as providing agreement between the X-ray photoevaporation model and the observed protoplanetary disc population.", "The gas surface density profile is taken from the viscous evolutionary calculations of [45] (i.e.", "the black line in Figure REF ) and is distributed in the vertical direction assuming hydrostatic equilibrium; in this model the pressure trap lies at approximately $5.5$  AU in the mid-plane.", "The inner radial boundary is set to $4\\times 10^{13}$  cm, and the outer boundary is at $1.75\\times 10^{14}$  cm; the lower vertical boundary is at the mid-plane while the higher vertical boundary is at $4.5\\times 10^{13}$  cm.", "We use reflection boundary conditions in the mid-plane and outflow boundaries on all the others.", "Equation REF is solved using an explicit integration that is first-order in time and second-order in space.", "The advection term is treated using a second-order upwind method that employs a van Leer limiter.", "To avoid unnecessarily short time-steps that arise due to super-Keplerian radial velocities that can occur high in the disc's atmosphere, we employ both a density floor of $10^{-27}$  g cm$^{-3}$ and a maximum radial dust speed that is equal to the Keplerian velocity.", "The dust density is so low at this point that these choices do not affect our results.", "Radiation pressure is included using a short-characteristics ray-tracing scheme where the extinction coefficient in each cell is assumed to be constant.", "The $\\beta $ parameter is then calculated via $\\beta =\\beta _*\\exp (-\\tau _*)$ where $\\tau _*$ is the optical depth to the stellar 
irradiation at a wavelength $\\lambda _*$ .", "The dust is initially contained within the pressure trap, such that its mid-plane density falls off as $R^{\\pm 8}$ either side of the pressure trap and is vertically well mixed with the gas.", "The dust density in the mid-plane of the pressure trap is initialised at $10^{-12}$  g cm$^{-3}$ .", "We integrate the problem until a quasi-steady state is reached (when the dust mass-loss rate has stabilised but is slowly evolving as dust is lost from the grid).", "As we do not have a dust coagulation and fragmentation routine to include in our 2D calculations, we perform these calculations with a single particle size at a time and use them to calibrate our 1D calculations.", "The results for 1 micron-sized particles are shown in Figure REF .", "Figure: The top panel shows the dust density and direction of the dust flux (including radiation pressure, advection, radial drift and diffusion); the bottom panel shows the magnitude of the dust velocity (in the poloidal direction) and the solid, dashed and dot-dashed lines show the positions of the $\\tau _*=0.1$ , 1.0 and 10 surfaces respectively.", "This snapshot is after $\\sim 2340$  years of evolution.", "This figure shows that our schematic picture is accurate and there is a rapid radial outflow above the photosphere.", "Furthermore, over a range of cavity radii, dust masses and particle sizes we find that a value of $H_w\\approx 0.05 R_{\\rm trap}$ represents a reasonable approximation of the radial scale of the dust density above the photosphere.", "Therefore, in the 1D calculations detailed in Section REF we adopt this value." 
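The advection update described above (explicit, first-order in time, second-order upwind in space with a van Leer limiter) can be sketched in one dimension as follows; this is an illustrative stand-in for the idea, not the paper's actual 2D solver, and omits the diffusion term and source terms:

```python
import numpy as np

def van_leer(dl, dr):
    """van Leer slope limiter built from left/right undivided differences."""
    prod = dl * dr
    denom = np.where(dl + dr == 0.0, 1.0, dl + dr)
    return np.where(prod > 0.0, 2.0 * prod / denom, 0.0)

def advect_step(rho, v, dx, dt):
    """One explicit step of d(rho)/dt + d(rho v)/dx = 0 for constant v > 0."""
    dl = rho - np.roll(rho, 1)
    dr = np.roll(rho, -1) - rho
    slope = van_leer(dl, dr)
    # upwind (left) face state, second order in space
    rho_face = rho + 0.5 * slope * (1.0 - v * dt / dx)
    flux = v * rho_face                        # flux through each cell's right face
    return rho - dt / dx * (flux - np.roll(flux, 1))

# Advect a Gaussian pulse on a periodic grid; mass should be conserved.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho = np.exp(-((x - 0.3) / 0.05) ** 2)
dx, v = x[1] - x[0], 1.0
dt = 0.4 * dx / v                              # CFL number 0.4
mass0 = rho.sum() * dx
for _ in range(100):
    rho = advect_step(rho, v, dx, dt)
print(f"relative mass error: {abs(rho.sum()*dx - mass0)/mass0:.1e}")
```

The flux-difference form conserves mass to round-off, while the limiter suppresses the spurious oscillations a plain second-order scheme would introduce near sharp gradients.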
], [ "One-dimensional calculations", "To solve the reduced 1D problem (Equation REF ) we also need to know the optical depth as a function of height.", "To estimate this, we assume that the dust distribution has a radial distribution given by: $\\rho _d^i(R,Z)=\\rho _d^i(R_{\\rm trap},Z)\\exp \\left(-\\frac{(R-R_{\\rm trap})^2}{2H_w^2}\\right)$ We choose this form of the density profile as it is smooth, analytic and of the correct form in the neighbourhood of the pressure trap.", "This density profile is then prescribed onto a spherical polar grid, and a ray-tracing calculation is performed to calculate the optical depth throughout this density structure.", "We then use cubic interpolation to calculate the values of the optical depth on the 1D vertical grid.", "We compare our 1D method to our 2D calculations for single particle sizes in Figure REF , where we show the mass-flux as a function of height above the pressure trap in both the 1D (solid lines) and 2D (dashed lines) calculations.", "Figure: Comparison of the mass-fluxes as a function of height taken from 1D and 2D models for particle sizes of 1 (1D - thin line, 2D - dashed black line), 3 (1D - medium thickness, 2D - dashed cyan line), and 10 microns (1D - thick line, 2D - dashed green line).", "We find that the agreement between the two simulations is good, with the general form of the mass-flux and its variation with particle size being accurately described.", "We find that in general, the total integrated dust mass-loss rates emanating from the dust traps agree to within $\\sim 25$ % over a range of relevant particle sizes, dust-trap radii and dust densities.", "The rapid fall-off in the dust-mass flux for the 2D case is due to inward radial drift becoming important.", "This feature arises from the fact that in our 2D vertically isothermal gas structures the pressure trap does not lie exactly along a line of fixed cylindrical radius.", "Thus as the height is increased above the pressure trap inward radial drift 
results in inward transport of dust (at very low velocities) until the optical depth is low enough for radiation pressure to dominate.", "As the first step, here we treat the coagulation-fragmentation cascade by enforcing a steady-state particle size distribution on the dust surface density.", "Note this does not mean we locally enforce a particle size distribution.", "The vertical distribution of particles of a fixed size is free to take whatever form the balance of turbulent stirring and gravitational settling requires.", "Our procedure is as follows: for each 1D simulation, we pick a minimum and maximum particle size and a power-law size distribution.", "Then at the end of each time-step we adjust the density profiles in the following manner: we require the dust surface density distribution to follow a power-law profile in size such that $\\Sigma _d(a){\\rm d}a\\propto m(a) a^{-p}{\\rm d}a$ , where $m(a)$ is the mass of a particle of size $a$ .", "Comparing this required surface-density profile to the current surface-density profile, one can determine a correction factor $f(a)$ needed to make the current dust surface density profile become the required one.", "We then update the current density profile to the required one via the expression $\\rho _d^{\\rm wanted}(a,Z)=f(a)\\rho _d^{\\rm current}(a,Z)$ .", "This procedure allows us to maintain the correct vertical distribution for individual particle sizes, while still adjusting the dust particle size distribution to match those resulting from steady-state coagulation-fragmentation simulations.", "While this procedure is no substitute for a fully consistent particle size calculation, it does provide valuable insight into the importance of various parameters, as we can vary the minimum and maximum grain sizes as well as the steepness of the power-law distribution.", "To determine the dust mass-loss rates relevant for incorporation into the evolutionary calculations in Section  we need to know the dust 
mass-loss rates as a function of dust surface density, orbital radius, particle size distribution parameters, gas temperature and gas surface density.", "Such a complete parameter study is unfeasible even with the reduced 1D problem.", "However, since the evolution of the dust does not affect the long-term evolution of the gas (in the “standard” photoevaporation scenario described here), many of these parameters are known as a function of orbital radius from the previously computed gas evolutionary calculations taken from [45].", "For example, the gas surface density in the pressure trap is known as a function of orbital radius; furthermore, if we assume that turbulent fragmentation sets the maximum particle size, then Equation REF allows us to calculate the maximum particle size as a function of orbital separation.", "The gas temperature in the pressure trap can also be computed as a function of trap location.", "The models of [45] adopted an $R^{-1/2}$ temperature profile arising from a passively heated flared disc structure [36] where the temperature at 1 AU is set to 100 K. 
Such a temperature profile is applicable for the case where the dust surface density distribution is smoothly varying with radius, such that the flaring angle of the disc's photosphere with respect to the incident stellar photons is small and also slowly varying with radius [13].", "This is not the case in the pressure trap, where the rapidly varying dust density means the flaring angle with respect to the star will be much larger (as indicated by the small value of $H_w=0.05R_{\\rm trap}$ ); consequently, the temperature in the pressure trap will be higher than in the temperature profile adopted by [45].", "Therefore, to account for the enhanced temperature in the pressure trap, we make use of the fact that the flaring angle depends on the radial scale on which the dust density at the photosphere is varying.", "In the case of a smoothly varying disc, the length scale on which the density at the photosphere is varying is $\\sim R$ ; however, for our dust trap, it is $\\sim H_w$ .", "Thus, as the disc temperature is proportional to the one-quarter power of the flaring angle [14], we can modify the [45] temperature profile by a factor of $(R/H_w)^{1/4}$ to find the gas temperature in the dust trap as: $T_{\\rm trap}=100\\,{\\rm K}\\left(\\frac{R}{H_w}\\right)^{1/4}\\left(\\frac{R_{\\rm trap}}{1~{\\rm AU}}\\right)^{-1/2}$ This results in an approximately factor-two increase in the temperature compared to the smooth primordial disc and is consistent with the enhanced MIR emission from the inner edges of transition discs [22].", "With the gas surface density, maximum particle size and gas temperature known as a function of trap location, we then use our 1D models to compute the dust mass-loss rates as a function of trap location and dust surface density.", "In our nominal model we further adopt a minimum particle size of 0.1 $\\mu $ m and an MRN dust-mass power-law index of $p=3.5$ [39]; however, we vary these parameters (and others) 
below.", "The resulting mass-loss rates for this nominal case are shown in Figure REF .", "Figure: The colour map shows the dust mass-loss rates as a function of pressure trap radius and dust surface density.", "The thick contours show the contours of fixed clearing time for 10 4 10^4 (solid), 10 5 10^5 (dashed) and 10 6 10^6 years (dot-dashed).", "The dotted contour shows the trace of dust surface density as a function of trap radius for the case with no radiation pressure driven dust mass-loss.", "In regions where the dust surface density curve (dotted line) lies below the 10 5 10^5 year line rapid clearing, consistent with the observations, is possible.", "Therefore, for this model rapid clearing is possible soon after gap opening when the trap radius is ≲5\\lesssim 5~AU.", "The trap becomes radially optically thin around a surface density of 10 -3 10^{-3} g cm -2 ^{-2}.", "This Figure also includes lines showing the evolution of the dust surface-density in the trap with orbital separation and contours of constant clearing time (e.g.", "$M_{\\rm dust}/\\dot{M}$ ).", "If the dust surface density curve lies below a clearing-time curve, this indicates that clearing of the dust-trap on this timescale is possible (note it is only “possible” as the trap could be replenished with dust by radial drift, as discussed in Section ). 
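The trap temperature expression above is straightforward to evaluate; a short sketch follows. Note that with $H_w = 0.05\,R_{\rm trap}$ the $(R/H_w)^{1/4}$ boost becomes a constant factor of $20^{1/4} \approx 2.1$, which is the "approximately factor two" increase quoted in the text (the function name and default argument are ours).

```python
def trap_temperature(R_trap_au, H_w_over_R=0.05):
    """Gas temperature at the pressure trap: the smooth-disc
    100 K (R / 1 AU)^(-1/2) profile boosted by (R / H_w)^(1/4) to account
    for the steeper flaring angle at the trap."""
    return 100.0 * (1.0 / H_w_over_R) ** 0.25 * R_trap_au ** -0.5
```

The boost factor cancels the explicit $R$ dependence inside $(R/H_w)^{1/4}$, so the trap temperature retains the $R^{-1/2}$ scaling of the underlying profile, just with a higher normalisation.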
To be consistent with the observations, the dust surface-density curve should lie below the $10^5$  year clearing timescale contour.", "Figure REF indicates that radiation pressure driven mass-loss is certainly capable of depleting the photoevaporative dust trap within of order $10^5$  years, and certainly before the cavity reaches large radii, as it takes $\\sim $ 1 Myr for the disc to be photoevaporated out to $\\sim 100$  AU.", "The fact that the dust surface-density evolution as a function of cavity radius crosses lines of constant clear-out time multiple times indicates that the mass-loss that occurs in the early stage is important.", "This evolutionary pathway arises because after gap opening and before inner disc draining the photoevaporation profile is smooth (as XUV photons cannot directly impinge on the mid-plane of the outer disc).", "However, after the inner disc drains onto the star and direct irradiation of the outer disc's mid-plane is possible, the photoevaporation profile is much more concentrated around the disc's inner edge [2], [43].", "This switch in photoevaporation profile sharpens the pressure gradient in the vicinity of the pressure trap, causing the evolution of the dust surface-density to peak around the point where the inner disc has finished draining onto the star.", "So if enough dust-mass can be removed during the earliest phase of disc clearing, then the clearing of the pressure trap may be very rapid.", "Alternatively, the trap could become slightly longer lived.", "We note that the dust mass-loss rates drop below surface densities of $\\sim 10^{-3}$  g cm$^{-2}$ as the trap is radially optically thin and the mass-loss rates then depend on the total amount of mass currently in the trap.", "We can explore the role of various other parameters in Figure REF .", "Figure: The variation in the 10 5 10^5 year clearing time, with auxiliary parameters.", "The thick solid line is the 10 5 10^5 year clearing line from Figure .", "The thin solid line is 
the evolution of the dust surface density in the trap without mass-loss.", "As in Figure , in regions where the dust surface density curve (thin solid line) lies below the 10 5 10^5 year line rapid clearing, consistent with the observations, is possible.", "The left panel considers variations in the minimum and maximum particle size: the dashed line shows a model with the fragmentation velocity increased to 50 m s -1 ^{-1} which increases the maximum particle size in the dust trap; the dash-dotted line shows a model with a minimum grain size of 1 μ\\mu m. The middle panel shows variations in the power-law size index pp, with the dashed line showing a more top-heavy distribution with p=3p=3 and the dot-dashed line a more bottom-heavy distribution with p=4p=4.", "The right-hand panel shows the effect of varying the Schmidt number with the dashed line showing Sc =0.3{\\rm Sc}=0.3 and the dot-dashed line showing Sc =0.1{\\rm Sc}=0.1.", "In the left panel we vary $a_{\\rm min}$ and $a_{\\rm max}$ .", "The dashed line shows a model with $a_{\\rm min}$ increased to 1 $\\mu $ m; this is a realistic possibility as Figure REF shows that particles with sizes smaller than $\\sim 1~\\mu $ m are below the blow-out size.", "Therefore, in a realistic dust coagulation-fragmentation calculation, it is possible that particles smaller than $\\sim 1~\\mu $ m are rapidly removed before fragmentation of larger particles can replace them; this process typically sets the minimum particle size in debris discs [40].", "The dashed-dotted line shows a model where we increase the fragmentation speed to 50 m s$^{-1}$ , hence increasing the maximum particle size; such a fragmentation speed has been found from numerical studies of icy grains [66].", "Both these changes to the nominal model promote even more rapid clearing.", "This result arises because the opacity is dominated by the mass fraction in the smallest grains.", "Both these changes, increasing $a_{\\rm min}$ and increasing $a_{\\rm 
max}$ , lower the opacity; hence at fixed dust surface density the total dust-mass above the photosphere and susceptible to radiation pressure is larger.", "In this case, both of these models would lead to very rapid clear-out of the dust trap.", "The middle panel shows variations in the dust-size power-law distribution, where we show values of $p=3$ (dashed), $p=3.5$ (solid, nominal model) and $p=4$ (dot-dashed).", "As in the previous case, as one decreases the dust mass-fraction in the smallest grains (by decreasing $p$ ), a larger fraction of the dust mass lies above the photosphere due to the lower opacity, leading to more rapid clearing.", "Finally, the right panel shows the effect of changing the dust-diffusion coefficient, by decreasing the value of the Schmidt number (higher vertical diffusivity).", "In the outer disc non-ideal MHD effects, such as ambipolar diffusion, may be important.", "Simulations that included ambipolar diffusion indicated enhanced diffusivity in the vertical direction [72].", "Furthermore, pure hydrodynamic turbulence driven by the vertical shear instability (which can arise from a varying angular velocity with height e.g.", "[42]), results in higher diffusivities in the vertical compared to the radial direction [61].", "At small radii smaller Schmidt numbers result in more rapid clearing, as the higher diffusivity negates dust settling, resulting in a higher fraction of the dust-mass above the photosphere.", "In summary, while our nominal model suggests rapid $\\sim 10^5$  year clearing, most physical corrections to the chosen parameters, except for a bottom-heavy particle size distribution, result in even more rapid clearing." 
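The end-of-step surface-density renormalisation described in this section can be sketched as follows. This is a schematic illustration only: the variable names and grid layout are ours, not taken from the authors' code.

```python
import numpy as np

def renormalise_sizes(rho, a, p, dz):
    """Force the dust *surface* density to follow Sigma_d(a) da ~ m(a) a^(-p) da
    (with m(a) ~ a^3) while leaving each size's vertical profile untouched.

    rho : dust density, shape (n_sizes, n_z);  a : particle sizes (n_sizes,)
    """
    sigma_current = (rho * dz).sum(axis=1)        # current Sigma_d per size bin
    target = a ** (3.0 - p)                       # required power-law shape
    target *= sigma_current.sum() / target.sum()  # conserve the total dust mass
    f = target / sigma_current                    # correction factor f(a)
    return f[:, None] * rho                       # rho_wanted = f(a) rho_current
```

Because each size bin is scaled by a single factor, the vertical profile set by the balance of settling and stirring is preserved, while the column-integrated size distribution is pinned to the steady-state power law.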
], [ "Secular Evolution", "To assess whether radiation pressure can clear out dust from a pressure trap created by photoevaporation, we include our dust mass-loss rates in long-term secular evolution calculations of the gas, dust and particle size distributions.", "We use the gas and dust evolutionary code detailed by [50], which has been modified by [35] to include the [12] particle size evolutionary model.", "Those previous works include detailed discussions of the numerical scheme and algorithms, which we do not repeat here.", "Essentially we solve the evolution equation for the gas surface density: $\\frac{\\partial \\Sigma _g}{\\partial t}=\\frac{3}{R}\\frac{\\partial }{\\partial R}\\left[R^{1/2}\\frac{\\partial }{\\partial R}\\left(\\Sigma _g\\nu R^{1/2}\\right)\\right]-\\dot{\\Sigma }_w $ where $\\dot{\\Sigma }_w$ represents the gas lost due to photoevaporation.", "The form of $\\dot{\\Sigma }_w$ is taken from the fits to numerical simulations for the X-ray photoevaporation model by [47].", "The setup is identical to the “median” model described by [45].", "The surface density is initially a zero-time Lynden-Bell & Pringle similarity solution with an initial disc mass of $0.07$  M$_\\odot $ and scale radius of 18 AU.", "We adopt a power-law viscosity of the form $\\nu \\propto R$ which is normalised such that the viscous $\\alpha $ parameter is $2.5\\times 10^{-3}$ at 1 AU.", "Coupled with the gas equation, we also solve for the evolution of the dust surface density as: $\\frac{\\partial \\Sigma _d}{\\partial t}+\\frac{1}{R}\\frac{\\partial }{\\partial R}\\left[R\\Sigma _d v_d^R-\\nu R \\Sigma _g\\frac{\\partial }{\\partial R}\\left(\\frac{\\Sigma _d}{\\Sigma _g}\\right)\\right]=-\\dot{\\Sigma }_{\\rm d} $ here $v_d^R$ is the standard vertically and particle-size averaged form of Equation REF detailed by [12]; note we do not include the radiation pressure term in this equation as it describes the optically thick mid-plane.", "The term $\\dot{\\Sigma 
}_{\\rm d}$ represents the mass-loss from the pressure trap due to radiation pressure; we neglect dust mass-loss due to particles entrained in the photoevaporative flow, as it is negligible [44], and start with an initial local dust-to-gas ratio of 0.01 everywhere.", "To convert the total dust mass-loss rates obtained in the previous Section into a surface mass-loss profile, we adopt a Gaussian radial profile for the dust mass-loss centred on the pressure trap with a scale size of $H_w$ .", "We ensure that the integrated mass-loss profile matches the required dust mass-loss rate for the location of the pressure trap and the current dust surface density in the pressure trap.", "The mass-loss of dust due to radiation pressure only begins once photoevaporation has opened a gap and created a pressure trap.", "The [12] model evolves the dust size distribution using a simple two-population model: firstly a monomer size, which we set to 0.1 $\\mu $ m, and secondly a representative maximum grain size.", "The maximum grain size is set by whichever of drift-limited growth, turbulent fragmentation or fragmentation arising from radial drift gives the smallest particle size.", "Furthermore, an initial and transitory growth phase is included, as suggested by [12], where particles grow from the monomer size up to one of the dust size limits.", "Equations REF and REF are integrated on a grid that is uniformly spaced in $R^{1/3}$ .", "The grid has an inner boundary at $3.75\\times 10^{11}$  cm and outer boundary at $3.75\\times 10^{16}$  cm and contains 1000 cells.", "We adopt a zero-torque boundary condition on the gas at both the inner and outer boundaries and outflow boundary conditions on the dust." 
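The $R^{1/3}$-uniform grid used for these integrations can be constructed directly from the boundary values quoted above; a minimal sketch (the function name is ours):

```python
import numpy as np

def radial_grid(r_in=3.75e11, r_out=3.75e16, n_cells=1000):
    # Cell edges uniformly spaced in R^(1/3) between the quoted boundaries;
    # returns edges (n_cells+1,) and cell centres (n_cells,) in cm.
    edges = np.linspace(r_in ** (1.0 / 3.0), r_out ** (1.0 / 3.0), n_cells + 1) ** 3
    centres = 0.5 * (edges[:-1] + edges[1:])
    return edges, centres
```

Spacing uniformly in $R^{1/3}$ concentrates cells at small radii, where the viscous and drift timescales are shortest, while still covering five decades in radius with 1000 cells.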
], [ "Results", "In Figure REF we show the evolution of the gas and dust surface densities for a model which includes no mass-loss due to radiation pressure (top panel) compared to our nominal model (bottom panel).", "This Figure shows, as expected from previous considerations [3], [47], that without an extra dust mass-loss prescription photoevaporation generates dust traps that are optically thick out to large radius (the dust trap is optically thick if it has a surface density $\\gtrsim 10^{-3}$  g cm$^{-2}$ , for standard opacity choices).", "However, once the nominal mass-loss prescription is included, when the cavity exceeds approximately $\\sim 30$  AU very rapid, run-away dust clearing occurs, making the cavity optically thin when the cavity radius is about $\\sim 40$  AU, approximately $5\\times 10^5$  years after gas accretion ceases.", "Figure: The evolution of the dust (black, thick) and gas (blue, thin) surface densities in a photoevaporating disc.", "The top panel shows a model with no radiation pressure clear-out and is the previous generation of X-ray photoevaporation model.", "The bottom panel is the same model, but including dust mass-loss for the “nominal” case.", "The model is initially shown just after gap opening (after ∼3.4\\sim 3.4 Myr of evolution) and then every 0.15 Myr; gas accretion onto the star ceases after ∼3.55\\sim 3.55 Myr.", "The darkness of the dust lines increases with time.", "We can see the dependence on the dust parameters explored previously in Figure REF , where we show the dust surface density in the pressure trap as a function of pressure trap radius.", "Figure: The evolution of the dust surface density in the pressure trap as a function of radius, for the case of no dust mass-loss, our nominal model and models where the dust parameters are varied.", "This Figure shows that in all cases except that with a bottom-heavy dust size distribution ($p=4$ ), rapid clearing of the pressure trap can proceed before the dust trap radius becomes large.", "Of 
particular note are the cases where we modify the dust populations away from the nominal case, by increasing $a_{\\rm min}$ or $a_{\\rm max}$ or by giving the dust size population a top-heavy distribution.", "As hinted at by Figure REF , we end up with very rapid clearing.", "The timescale for clearing is, however, not $\\ll 10^5$  years as Figure REF might suggest.", "Radial drift resupplies lost dust-mass back into the trap; however, in these cases the dust trap becomes radially optically thin by the time the trap radius has reached $\\sim 10$  AU (note that since these models have lower opacity overall than the nominal case, the trap becomes optically thin at surface densities around $3\\times 10^{-3}$  g cm$^{-2}$ rather than the $\\sim 10^{-3}$  g cm$^{-2}$ value for the nominal model).", "Finally, as the models of [45] identified those young stars with lower than average photoevaporation rates (due to low stellar X-ray luminosity) as the most prevalent cause of long-lived relic discs, we run one final model using the nominal dust parameters for a disc with an X-ray luminosity of $3\\times 10^{29}$  erg s$^{-1}$ .", "Figure: The evolution of the dust (black, thick) and gas (blue, thin) surface densities in a photoevaporating disc, with a low X-ray luminosity of 3×10 29 3\\times 10^{29} erg s -1 ^{-1}.", "The top panel shows a model with no radiation pressure clear-out.", "The bottom panel is the same model, but including dust mass-loss for the “nominal” dust parameters.", "The model is initially shown just after gap opening (after ∼8.27\\sim 8.27 Myr of evolution) and then every 0.2 Myr.", "The darkness of the dust lines increases with time.", "For this low X-ray luminosity case, the dust cavity is cleared when the cavity radius reaches $\\sim 20$  AU, even with the nominal dust parameters.", "The reason for the relatively more rapid clearing is that the dust surface densities initially present in the trap are lower than in our standard case.", "This lower surface density is 
because lower X-ray luminosities result in lower photoevaporation rates, which means more disc material has accreted onto the star by the time photoevaporation has cleared the inner disc.", "Therefore, radiation pressure driven dust clearing can proceed efficiently around low X-ray stars as well; the modifications to the dust population discussed above, such as higher minimum grain sizes or larger fragmentation velocities, will only make clearing even more efficient." ], [ "Observational signatures", "This work was motivated by the fact that the standard photoevaporation model (e.g.", "top panels of Figures REF & REF ) predicts transition discs which are not accreting but have radially optically thick walls near the cavity edge, which give rise to large MIR/FIR excesses above the photosphere [16], [1], [45].", "Transition discs with these characteristics are not observed [46], [15], [30], [51].", "Therefore, in this section, we compute spectral energy distributions of our discs at the point at which the pressure trap has become radially optically thin.", "Since the goal of this section is to compute representative SEDs, we do not use a full numerical radiative transfer approach; rather, we estimate the disc's radial temperature profile and then calculate the spectral energy distribution as: $\\lambda F_{\\lambda }= \\frac{\\lambda }{d^2}\\int _0^\\infty 2\\pi R B_\\lambda (T(R))\\left[1-\\exp \\left(-\\tau _\\lambda \\right)\\right]{\\rm d}R$ where $d$ is the distance to the source, which we set to 150 pc (corresponding to a typical distance to a young star in the Gould Belt), and $B_\\lambda $ is the Planck function.", "To estimate the radial temperature profile we smoothly match an optically thin temperature profile to an optically thick temperature profile [50], adopting the black-body temperature for optically thin radiation and the optically thick temperature profile used by [45].", "In this case, our temperature profile becomes: $T(R) &=& T_{\\rm 
BB}\\left(\\frac{R}{1~{\\rm AU}}\\right)^{-1/2}\\exp \\left(-\\tau _*\\right)\\nonumber \\\\&& +100\\,{\\rm K}\\left(\\frac{R}{1~{\\rm AU}}\\right)^{-1/2}\\left[1-\\exp \\left(-\\tau _*\\right)\\right]$ where $T_{\\rm BB}$ is the black-body temperature, and $\\tau _*$ is the mid-plane optical depth to the stellar irradiation.", "If there is a radially optically thick wall at a gap edge, we include an additional blackbody component for this wall at its local temperature.", "The opacities are calculated as a function of radius using our maximum grain size determined from the dust evolution algorithm and a power-law dust size distribution with $p=3.5$ and a minimum grain size of 0.1$\\mu $ m (irrespective of the actual dust parameters chosen for the trap).", "The absorption efficiencies are calculated identically to those described in Section .", "The resultant SEDs are computed for the nominal model, the high $a_{\\rm min}$ model (the high $a_{\\rm max}$ SED looks similar to this model) and the model with dust size distribution parameter $p=3$ .", "They are plotted in Figure REF at the point when the dust in the pressure trap becomes radially optically thin.", "This point occurs at a dust trap radius of $\\sim 40$  AU and time of 4.25 Myr for the nominal model; $\\sim 15$  AU and 3.76 Myr for the high $a_{\\rm min}$ case and $\\sim 8$  AU and 3.59 Myr for the $p=3$ model.", "Additionally, we also include an SED for a model which does not include any dust mass-loss due to radiation pressure (i.e.", "the original photoevaporation model of [45]); this is shown when the hole radius is at 10 AU and is clearly inconsistent with the observations.", "This disc has a radially optically thick dust wall at the gap edge, and the disc's SED is dominated by MIR emission from this wall, which is reprocessing a large fraction of the star's luminosity [22].", "Figure: The spectral energy distributions of the young star and disc for disc evolution scenarios without dust-mass loss due to radiation 
pressure (thin line) and for various models including dust mass-loss due to radiation pressure (thick lines).", "The stellar SED is shown as the dotted line.", "The square points show the Spitzer MIPS and ALMA fluxes of the typical non-accreting transition disc SZ112 with fluxes taken from , .", "These SEDs compare favourably to the photometry reported by [58], [15] and [30] for non-accreting stars that also showed an IR excess: MIR/FIR fluxes in the range $\\sim 10^{-12}$ erg s$^{-1}$  cm$^{-2}$ and ALMA 1.3 mm fluxes of order $10^{-15}$ erg s$^{-1}$  cm$^{-2}$ for a sample of nearby young stars at a distance of 150 pc.", "In the Figure, we explicitly show Spitzer MIPS and ALMA photometry for the non-accreting transition disc SZ112, which is consistent with the models which include dust mass-loss due to radiation pressure.", "Therefore, our model produces SEDs consistent with what has been previously classified as young debris discs.", "This classification is not surprising, as young stars are identified as hosting young debris discs precisely because they show an optically thin SED." 
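The SED estimate above amounts to summing Planck emission from annuli weighted by $1-\exp(-\tau_\lambda)$; a minimal cgs sketch follows. The constants, function names and trapezoidal quadrature are ours, not the authors' code.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck, light speed, Boltzmann (cgs)

def planck_lambda(lam, T):
    # B_lambda in erg s^-1 cm^-2 cm^-1 sr^-1; expm1 keeps the exponential accurate
    return 2.0 * H * C ** 2 / lam ** 5 / np.expm1(H * C / (lam * KB * T))

def lam_f_lam(lam, R, T, tau, d=150.0 * 3.086e18):
    # lambda * F_lambda from annuli of temperature T(R) and optical depth
    # tau_lambda(R), at a distance d (default 150 pc, as in the text).
    ring = 2.0 * np.pi * R * planck_lambda(lam, T) * -np.expm1(-tau)
    return lam / d ** 2 * np.sum(0.5 * (ring[:-1] + ring[1:]) * np.diff(R))
```

Note that $1-\exp(-\tau)$ is written as `-np.expm1(-tau)` so that optically thin annuli ($\tau \ll 1$) contribute $\approx \tau$ without loss of precision.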
], [ "Discussion", "We have studied the impact of radiation pressure on clearing dust traps created by photoevaporation.", "Small dust particles are removed from the pressure trap in the upper layers of the disc, and are then replaced by fragmentation of larger particles in the mid-plane.", "This coupling of radiation pressure removal to fragmentation means that radiation pressure is effectively able to remove dust particles of all sizes.", "This coupling will cease to be effective when the collision timescale for the largest grains becomes longer than the vertical diffusion timescale.", "The collision timescale for the largest grains is $\\sim \\Omega _K^{-1}(\\Sigma _g/\\Sigma _d)$ , while the vertical diffusion timescale is $\\sim \\Omega _K^{-1}(Z_{\\rm phot}/H)^2/\\alpha $ .", "Therefore, provided the dust-to-gas ratio is $\\gtrsim 10^{-3}$ , which is generally true for optically thick dust traps, the collision frequency of the largest grains is high enough that we do not need to worry about the resupply of smaller grains to photospheric heights (typically 2-3 $H$ , see Figure REF ).", "Therefore, the clearing of discs is either limited by the diffusion timescale of small particles from the mid-plane to above the photosphere or by the clearing timescale of small particles from above the photosphere themselves.", "Our calculation indicates that unless the parameters of the system are such that the opacity of the dust trap to incoming stellar photons is higher than we have estimated, radiation pressure clearing is an essential ingredient of disc dispersal, potentially solving the issue of long-lived “relic” discs.", "Clarity on whether this model does indeed satisfy all existing observational constraints must, unfortunately, wait until more sophisticated calculations are completed (see Section REF ).", "However, we can sketch out some of the basic concepts below.", "Once photoevaporation opens a gap and causes a pressure trap, dust will rapidly 
grow and become trapped in said pressure trap.", "Unlike the primordial disc case [63], once the pressure trap forms, the photosphere will no longer intercept the disc again at large radius, and radiation pressure begins clearing small dust particles from the disc.", "This process leads to a run-away reduction in the surface density in the disc until radial drift of new dust particles can refill the dust trap, by which point the dust-to-gas ratio is low, and the disc's mid-plane is radially optically thin out to large radius.", "Hence the final stages of the disc's lifetime will be characterised by a dust-depleted, gas-rich disc which contains somewhere in the region of $\\sim 0.1$  M$_{\\rm jup}$ of gas.", "Photoevaporation will then continue to erode the disc out to large radius, although the thermodynamics of this new dust-depleted disc needs to be studied to determine if thermal sweeping is relevant.", "Such a disc would be observationally classified as a gas-rich, cold debris disc, where the origin of the dust is primordial, rather than secondary as often assumed." 
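The order-of-magnitude comparison behind the resupply argument in the discussion can be written down directly; a sketch (function names are ours; $\alpha = 2.5\times10^{-3}$ as adopted earlier in the paper):

```python
def collision_time(omega_k, dust_to_gas):
    # t_coll ~ Omega_K^-1 (Sigma_g / Sigma_d) for the largest grains
    return 1.0 / (omega_k * dust_to_gas)

def vertical_diffusion_time(omega_k, z_phot_in_h, alpha=2.5e-3):
    # t_diff ~ Omega_K^-1 (Z_phot / H)^2 / alpha
    return z_phot_in_h ** 2 / (alpha * omega_k)
```

For a photosphere at 2-3 scale heights and a dust-to-gas ratio of $10^{-3}$, the collision time is shorter than the vertical diffusion time, so fragmentation keeps pace with the removal of small grains; for much lower dust-to-gas ratios the ordering reverses and the coupling breaks down.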
], [ "Millimetre-bright transition discs", "We must comment on how this process may operate in mm-bright transition discs.", "Millimetre-bright transition discs are those discs which are also thought to contain a pressure trap that is created through some unknown mechanism, potentially planet-disc interactions [18], [51].", "The observed dust surface densities in these pressure traps are more of the order of 1 g cm$^{-2}$ [5], [52].", "Since these objects are observed to be long-lived, radiation pressure clear-out cannot be fast in them.", "The very high surface densities imply the photosphere sits at a large height in the disc, indicating the dust mass-fraction above the photosphere is small, and hence the dust clearing timescales are long.", "A simple 1D calculation for parameters typical of mm-bright transition discs indicates they are stable on Myr timescales.", "These long depletion times do not mean that radiation pressure clear-out is not interesting; in fact, one of the mysteries of mm-bright transition discs is the lack of small grains close to their star.", "We speculate that if radiation pressure can remove small dust particles from the surface layers of their dust traps before they can drift into the inner disc, this could provide a solution to this outstanding problem." 
], [ "Future direction", "While our results are promising and indicate that radiation pressure driven mass-loss is vital in disc dispersal, it is not entirely clear what the exact dust mass-loss rates are.", "This uncertainty is because the mass-loss rates are sensitive to parameters we have merely assumed rather than calculated self-consistently.", "The minimum and maximum grain sizes, as well as the dust particle size distribution, are assumed in all our above calculations.", "To model radiation pressure clear-out, all these parameters need to be calculated explicitly, because the mass-loss essentially depends on the dust mass-fraction above the photosphere, but the photospheric position depends on the 2D particle size distribution.", "Therefore, to accurately calculate the clearing timescales and observable properties we need to incorporate a dust evolution algorithm and thermal structure solver into the 2D code presented in this paper." ], [ "Summary", "We have introduced a new disc dispersal mechanism that takes place in the final stages of a disc's lifetime.", "Our new mechanism focuses on the removal of dust from the pressure trap that is created when photoevaporation opens a cavity in an evolved protoplanetary disc.", "We find that radiation pressure can efficiently remove small particles from the surface layers of the disc in the vicinity of the pressure trap; these small particles are then replaced by the collisional fragmentation of larger particles in the mid-plane.", "With this new dust mass-loss process the photoevaporation model (in particular the X-ray photoevaporation model) does not suffer from a severe “relic” disc problem, as optically thick discs with hole sizes of $\\gtrsim 50$  AU are never created.", "The photoevaporation-created dust traps are depleted on timescales ranging from $\\lesssim 10^5$  years while the cavity radius is $\\lesssim 10$  AU to a few $10^5$  years by the time the cavity radius has receded to $\\sim 40$  AU.", 
"Clearing of dust from the pressure traps proceeds until the point where radial drift from the outer regions of the disc balances radiation pressure driven loss, but by this point the disc's mid-plane is radially optically thin out to a large radius.", "The clearing time is sensitive to the opacity structure, and therefore to the particle size distribution in the vicinity of the pressure trap, where lower opacities result in faster clearing timescales.", "Ultimately, the combination of photoevaporation and radiation pressure driven mass-loss results in a disc that observationally appears as a young gas-rich debris disc.", "Indeed, many of the previously identified gas-rich debris discs could be discs in this stage of their clearing, where the dust is primordial in origin.", "We hypothesise that a sensitive CO survey of young Weak T Tauri stars may find that a significant fraction of them still host $\sim 0.1$  M$_{\rm jup}$ gas reservoirs at large radii." ], [ "Acknowledgements", "We thank the referee for a constructive report which improved the manuscript.", "We are grateful to Richard Booth, Cathie Clarke, Ruth Murray-Clay and Giovanni Rosotti for interesting discussions.", "JEO is supported by a Royal Society University Research Fellowship." ] ]
1906.04265
[ [ "The Regression Discontinuity Design" ], [ "Abstract This handbook chapter gives an introduction to the sharp regression discontinuity design, covering identification, estimation, inference, and falsification methods." ], [ "Introduction", "The Regression Discontinuity (RD) design has emerged in recent decades as one of the most credible non-experimental research strategies to study causal treatment effects.", "The distinctive feature of the RD design is that all units receive a score, and a treatment is offered to all units whose score exceeds a known cutoff and withheld from all units whose score is below the cutoff.", "Under the assumption that the units' characteristics do not change abruptly at the cutoff, the change in treatment status induced by the discontinuous treatment assignment rule can be used to study different causal treatment effects on outcomes of interest.", "The RD design was originally proposed by [38] in the context of an education policy, where an honorary certificate was given to students with test scores above a threshold.", "Over time, the design has become common in areas beyond education, and is now routinely used by scholars and policy-makers across the social, behavioral, and biomedical sciences.", "In particular, the RD design is now part of the standard quantitative toolkit of political science research, and has been used to study the effect of many different interventions including party incumbency, foreign aid, and campaign persuasion.", "In this chapter, we provide an overview of the basic RD framework, discussing the main assumptions required for identification, estimation, and inference.", "We first discuss the most common approach for RD analysis, the continuity-based framework, which relies on assumptions of continuity of the conditional expectations of potential outcomes given the score, and defines the basic parameter of interest as an average treatment effect at the cutoff.", "We discuss how to estimate this 
effect using local polynomials, devoting special attention to the role of the bandwidth, which determines the neighborhood around the cutoff where the analysis is implemented.", "We consider the bias-variance trade-off inherent in the most common bandwidth selection method (which is based on mean-squared-error minimization), and how to make valid inferences with this bandwidth choice.", "We also discuss the local nature of the RD parameter, including recent developments in extrapolation methods that may enhance the external validity of RD-based results.", "In the second part of this chapter, we overview an alternative framework for RD analysis that, instead of relying on continuity of the potential outcome regression functions, makes the assumption that the treatment is as-if randomly assigned in a neighborhood around the cutoff.", "This interpretation was the intuition provided by [38] in their original contribution, though it now has become less common due to the stronger nature of the assumptions it requires.", "We discuss situations in which this local randomization framework for RD analysis may be relevant, focusing on cases where the running variable has mass points, which occurs very frequently in applications.", "To conclude, we discuss a battery of data-driven falsification tests that can provide empirical evidence about the validity of the design and the plausibility of its key identifying assumptions.", "These falsification tests are intuitive and easy to implement, and thus should be included as part of any RD analysis in order to enhance its credibility and replicability.", "Due to space limitations, we do not discuss variations and extensions of the canonical (sharp) RD designs, such as fuzzy, kink, geographic, multi-cutoff or multi-score RD designs.", "A practical introduction to those topics can be found in [16], [17], in the recent edited volume [14], and the references therein.", "For a recent review on program evaluation methods see [1]." 
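As a minimal numerical sketch of the assignment rule just described, consider a hypothetical 0-to-10 exam score with a known cutoff of 7 (the scale and cutoff mirror the scholarship example used later in the chapter; the simulated draws themselves are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores on a 0-10 scale; the cutoff of 7 mirrors the
# chapter's scholarship example, but the draws are purely illustrative.
score = rng.uniform(0, 10, size=1_000)
cutoff = 7.0

# Sharp RD assignment: treatment is a deterministic function of the
# score, offered exactly to units at or above the cutoff.
treated = (score >= cutoff).astype(int)

print("share treated:", treated.mean())
```

Because assignment is a deterministic step function of the score, treated and control units never share the same score value, which is why identification must rely on comparisons of units close to the cutoff.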
], [ "General Setup", "We start by introducing the basic notation and framework.", "We consider a study where there are multiple units from a population of interest (such as politicians, parties, students, households or firms), and each unit $i$ has a score or running variable, denoted by $X_i$ .", "This running variable could be, for example, a party's vote share in a congressional district, a student's score from a standardized test, a household's poverty index, or a firm's total revenues in a certain period of time.", "This running variable may be continuous, in which case no two units will have the same value of $X_i$ , or not, in which case the same value of $X_i$ might be shared by multiple units.", "The latter case is usually called “discrete”, but in many empirical applications the score variable has features of both cases, with a continuous range but repeated values.", "In the simplest RD design, each unit receives the treatment ($D_i=1$ ) when its score exceeds a fixed threshold $c$ , and does not receive it ($D_i=0$ ) otherwise.", "This type of RD design is commonly known as the sharp RD design, where the word sharp refers to the fact that the assignment of treatment coincides with the actual treatment taken; that is, compliance with treatment assignment is perfect.", "When treatment compliance is imperfect, the RD design becomes a fuzzy RD design and its analysis requires additional methods beyond the scope of this chapter (see the Introduction for references).", "The methods described here for analyzing sharp RD designs can be applied directly in the context of fuzzy RD designs when the parameter of interest is the intention-to-treat effect.", "The sharp RD treatment assignment rule can be formally written as $D_i=I(X_i\ge c)={\left\lbrace \begin{array}{ll}1 & \quad \text{if }X_i\ge c \\0 & \quad \text{if }X_i< c\end{array}\right.},$ where $I(\cdot )$ is the indicator function.", "For example, $D_i$ could be a scholarship for college students that is assigned to those with a score of 
7 or higher in an entry exam on a scale from 0 to 10.", "In this example, $X_i$ is the exam score, $c=7$ is the cutoff used for treatment assignment, and $D_i=I(X_i\ge 7)$ is the binary variable that indicates receipt of the scholarship.", "Our goal is to assess the effect of the binary treatment $D_i$ on a certain outcome variable.", "For instance, in the previous scholarship example, we may be interested in analyzing whether the scholarship increases academic performance during college or the probability of graduating.", "This problem can be formalized within the potential outcomes framework [32].", "In this framework, each unit $i$ from the population of interest has two potential outcomes, denoted $Y_i(1)$ and $Y_i(0)$ , which measure the outcome that would be observed for unit $i$ with and without treatment, respectively.", "For example, for a certain college student $i$ , $Y_i(1)$ could be the student's GPA at a certain stage had the student received the scholarship, and $Y_i(0)$ the student's GPA had she not received the scholarship.", "The individual-level treatment “effect” for unit $i$ is defined as the difference between the potential outcomes under treatment and control status, $\tau _i=Y_i(1)-Y_i(0)$ .", "Because the same unit can never be observed under both treated and control status (a student can either receive or not receive the scholarship, but not both), one of the potential outcomes is always unobservable.", "The observed outcome, denoted $Y_i$ , equals $Y_i(1)$ when $i$ is treated and $Y_i(0)$ if $i$ is untreated, that is, $Y_i=Y_i(1)\cdot D_i+Y_i(0)\cdot (1-D_i)={\left\lbrace \begin{array}{ll}Y_i(1) &\quad \text{if }D_i=1 \\Y_i(0) &\quad \text{if }D_i=0\end{array}\right.}.$ The observed outcome can never provide information on both potential outcomes.", "Hence, for each unit in the population, one of the potential outcomes is observed, and the other one is a counterfactual.", "This problem is known as the fundamental 
problem of causal inference [30].", "The RD design provides a way to address this problem by comparing treated units that are “slightly above” the cutoff to control units that are “slightly below” it.", "The rationale behind this comparison is that, under appropriate assumptions that will be made more precise in the upcoming sections, treated and control units in a small neighborhood or window around the cutoff are comparable in the sense of having similar observed and unobserved characteristics (with the only exception being treatment status).", "Thus, observing the outcomes of units just below the cutoff provides a valid measure of the average outcome that treated units just above the cutoff would have had if they had not received the treatment.", "In the remainder of this chapter, we describe two alternative approaches for analyzing RD designs.", "The first one, which we call the continuity-based framework, assumes that the observed sample is a random draw from an infinite population of interest, and invokes assumptions of continuity.", "In this framework, identification of the parameter of interest, defined precisely in the next section, relies on assuming that the average potential outcomes given the score are continuous as a function of the score.", "This assumption implies that the researcher can compare units marginally above the cutoff to units marginally below to identify (and estimate) the average treatment effect at the cutoff.", "The second approach for RD analysis, which we call the local randomization framework, assumes that the treatment of interest is as-if randomly assigned in a small region around the cutoff.", "This approach formalizes the interpretation of RD designs as local experiments, and allows researchers to use the standard tools from the classical analysis of experiments.", "In addition, if the researcher is willing to assume that potential outcomes are fixed (non-random) and that the $n$ units that are observed in the sample constitute 
the finite population of interest, this approach also allows the researcher to use finite-sample exact randomization inference tools, which are especially appealing in applications where the number of observations near the cutoff is small.", "For both frameworks, we discuss the parameters of interest, estimation, inference, and bandwidth or window selection methods.", "We then compare the two approaches and provide a series of falsification methods that are commonly employed to assess the validity of the RD design.", "See also [23] for an overview and practical comparisons between these RD approaches." ], [ "The Continuity-Based Framework", "Under the continuity-based framework, the observed data $\lbrace Y_i(1),Y_i(0),X_i,D_i\rbrace $ , for $i=1,2,\ldots ,n$ , is a random sample from an infinite population of interest (or data generating process).", "The main objects of interest under this framework are the conditional expectation functions of the potential outcomes, $\mu _1(x)=\mathbb {E}[Y_i(1)|X_i=x]\qquad \text{and}\qquad \mu _0(x)=\mathbb {E}[Y_i(0)|X_i=x],$ which capture the population average of the potential outcomes for each value of the score.", "In the sharp RD design, for each value of $x$ , only one of these functions is observed: $\mu _1(x)$ is observed for $x$ at or above the cutoff, and $\mu _0(x)$ is observed for values of $x$ below the cutoff.", "The observed conditional expectation function is $\mu (x)=\mathbb {E}[Y_i|X_i=x]={\left\lbrace \begin{array}{ll}\mu _1(x) &\quad \text{if }x \ge c \\\mu _0(x) &\quad \text{if }x<c \text{.}\end{array}\right.}$ We start by defining the function $\tau (x)$ , which gives the average treatment effect conditional on $X_i=x$ : $\tau (x)=\mathbb {E}[Y_i(1)-Y_i(0)|X_i=x]=\mu _1(x)-\mu _0(x)\text{.}$ The first step is to establish conditions for identification, that is, conditions under which we can write the parameter of interest, which depends on unobservable quantities 
due to the fundamental problem of causal inference, in terms of observable (i.e., identifiable) and thus estimable quantities.", "In the continuity-based framework, the key assumption for identification is that $\mu _1(x)$ and $\mu _0(x)$ are continuous functions of the score at the cutoff point $x=c$ .", "Intuitively and informally, this condition states that the observable and unobservable characteristics that determine the average potential outcomes do not jump abruptly at the cutoff.", "When this assumption holds, the only difference between units on opposite sides of the cutoff whose scores are “very close” to the cutoff is their treatment status.", "Intuitively, treated and control units with very different score values will generally differ in important observable and unobservable characteristics affecting the outcome of interest; as their scores approach the cutoff and become similar in that dimension, however, the only remaining difference between them is their treatment status, ensuring comparability between units just above and just below the cutoff, at least in terms of their potential outcome mean regression functions.", "More formally, [29] showed that, when conditional expectation functions are continuous in $x$ at the cutoff level $x=c$ , $\tau (c) = \lim _{x\downarrow c}\mathbb {E}[Y_i|X_i=x]-\lim _{x\uparrow c}\mathbb {E}[Y_i|X_i=x],$ so that the difference between average observed outcomes for units just above and just below the cutoff is equal to the average treatment effect at the cutoff, $\tau (c)=\mathbb {E}[Y_i(1)-Y_i(0)|X_i=c]$ .", "Note that this identification result expresses the estimand $\tau (c)$ , which is unobservable, as a function of two limits that depend only on observable (i.e., identifiable) quantities that are estimable from the data.", "As a consequence, in a sharp RD design, a natural parameter of interest is $\tau (c)$ , the average treatment effect at the cutoff.",
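To see this identification result at work numerically, the following sketch simulates a sharp RD under an assumed data-generating process (the linear conditional means, the noise level, and the true effect of 2 are all invented for illustration) and compares average observed outcomes in shrinking windows around the cutoff; the difference approaches $\tau(c)$ as the window shrinks:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Assumed DGP (purely illustrative): mu0(x) = 1 + 0.5x and
# mu1(x) = mu0(x) + 2 are continuous at the cutoff c = 0, so the
# average treatment effect at the cutoff is tau(c) = 2.
x = rng.normal(0.0, 1.0, n)        # score
d = (x >= 0.0).astype(int)         # sharp assignment at c = 0
y = 1.0 + 0.5 * x + 2.0 * d + rng.normal(0.0, 0.5, n)

# Difference of mean observed outcomes just above vs. just below the
# cutoff: biased for any finite window (the means are not flat in x),
# but the bias shrinks with the window, illustrating the limit result.
estimates = {}
for h in (1.0, 0.5, 0.1):
    above = y[(x >= 0.0) & (x < h)].mean()
    below = y[(x < 0.0) & (x > -h)].mean()
    estimates[h] = above - below
    print(f"window h = {h:4.2f}: estimate = {estimates[h]:.3f}")
```

Shrinking the window reduces this misspecification bias at the cost of using fewer observations, which is exactly the bias-variance tension formalized by the bandwidth selection methods discussed later in the chapter.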
"This parameter captures the average effect of the treatment on the outcome of interest, given that the value of the score is equal to the cutoff.", "It is useful to compare this parameter to the average treatment effect, $\\mathtt {ATE} = \\mathbb {E}[Y_i(1)-Y_i(0)]$ , which is the difference that we would see in average outcomes if all units were switched from control to treatment.", "In contrast to $\\mathtt {ATE}$ , which is a weighted average of $\\tau (x)$ over $x$ because $\\mathtt {ATE}=\\mathbb {E}[\\tau (X_i)]$ , $\\tau (c)$ is only the average effect of the treatment at a particular value of the score, $x=c$ .", "For this reason, the RD parameter of interest $\\tau (c)$ is often referred to as a local average treatment effect, because it is only informative of the effect of the treatment for units whose value of the score is at (or, loosely speaking, in a local neighborhood of) the cutoff.", "This limits the external validity of the RD parameter $\\tau (c)$ .", "A recent and growing literature studies how to extrapolate treatment effects in RD designs [2], [25], [20], [4], [21].", "The main advantage of the identification result in (REF ) is that it relies on continuity conditions of $\\mu _1(x)$ and $\\mu _0(x)$ at $x=c$ , which are nonparametric in nature and reasonable in a wide array of empirical applications.", "Section describes several falsification strategies to provide indirect empirical evidence to assess the plausibility of this assumption.", "Assuming continuity holds, the estimation of the RD parameter $\\tau (c)$ can proceed without making parametric assumptions about the particular form of $\\mathbb {E}[Y_i|X_i=x]$ .", "Instead, estimation can proceed by using nonparametric methods to approximate the regression function $\\mathbb {E}[Y_i|X_i=x]$ , separately for values of $x$ above and below the cutoff.", "However, estimation and inference via nonparametric local approximations near the cutoff is not without challenges.", "When the score 
is continuous, there are in general no units with value of the score exactly equal to the cutoff.", "Thus, estimation of the limits of $\\mathbb {E}[Y_i|X_i=x]$ as $x$ tends to the cutoff from above or below will necessarily require extrapolation.", "To this end, estimation in RD designs requires specifying a neighborhood or bandwidth around the cutoff in which to approximate the regression function $\\mathbb {E}[Y_i|X_i=x]$ , and then, based on that approximation, calculate the value that the function has exactly at $x=c$ .", "In what follows, we describe different methods for estimation and bandwidth selection under the continuity-based framework." ], [ "Bandwidth Selection", "Selecting the bandwidth around the cutoff in which to estimate the effect is a crucial step in RD analysis, as the results and conclusions are typically sensitive to this choice.", "We now briefly outline some common methods for bandwidth selection in RD designs.", "See also [24] for an overview of neighborhood selection methods in RD designs.", "The approach for bandwidth selection used in early RD studies is what we call ad-hoc bandwidth selection, in which the researcher chooses a bandwidth without a systematic data-driven criterion, perhaps relying on intuition or prior knowledge about the particular context.", "This approach is not recommended since it lacks objectivity, does not have a rigorous justification and, by leaving bandwidth selection to the discretion of the researcher, opens the door for specification searches.", "For these reasons, the ad-hoc approach to bandwidth selection has been replaced by systematic, data-driven criteria.", "In the RD continuity-based framework, the most widely used bandwidth selection criterion in empirical practice is the mean squared error (MSE) criterion [33], [11], [3], [9], which relies on a tradeoff between the bias and variance of the RD point estimator.", "The bandwidth determines the neighborhood of observations around the cutoff that will 
be used to approximate the unknown function $\mathbb {E}[Y_i|X_i=x]$ above and below the cutoff.", "Intuitively, choosing a very small bandwidth around the cutoff will tend to reduce the misspecification error in the approximation, thus reducing bias.", "A very small bandwidth, however, requires discarding a large fraction of the observations and hence reduces the sample, leading to estimators with larger variance.", "Conversely, choosing a very large bandwidth allows the researcher to gain precision using more observations for estimation and inference, but at the expense of a larger misspecification error, since the function $\mathbb {E}[Y_i|X_i=x]$ now has to be approximated over a larger range.", "The goal of bandwidth selection methods based on this tradeoff is therefore to find the bandwidth that optimally balances bias and variance.", "We let $\hat{\tau }$ denote a local polynomial estimator of the RD treatment effect $\tau (c)$ —we explain how to construct this estimator in the next section.", "For a given bandwidth $h$ and a total sample size $n$ , the MSE of $\hat{\tau }$ is $\mathsf {MSE}(\hat{\tau })=\mathsf {Bias}^2(\hat{\tau })+\mathsf {Variance}(\hat{\tau }) = {B}^2 + {V}\text{,}$ which is the sum of the squared bias and the variance of the estimator.", "The MSE-optimal bandwidth, $h_{\mathsf {MSE}}$ , is the value of $h$ that balances bias and variance by minimizing the MSE of $\hat{\tau }$ , $h_{\mathsf {MSE}}=\operatornamewithlimits{arg\,min}_{h>0} \mathsf {MSE}(\hat{\tau }) \text{.}$ The shape of the MSE depends on the specific estimator chosen.", "For example, when $\hat{\tau }$ is obtained using local linear regression (LLR), which will be discussed in the next section, the MSE can be approximated by $\mathsf {MSE}(\hat{\tau })\approx h^4\mathsf {B}^2+\frac{1}{nh}\mathsf {V}$ where $\mathsf {B}$ and $\mathsf {V}$ are constants that depend on the data generating process and specific features of the 
estimator used.", "This expression clearly highlights how a smaller bandwidth reduces the bias term while increasing the variance and vice versa.", "In this case, the optimal bandwidth, simply obtained by setting the derivative of the above expression with respect to $h$ equal to zero, is $h_{\mathsf {MSE}}^{\mathsf {LLR}}=\mathsf {C}_\mathsf {MSE} \cdot n^{-1/5},$ where the constant $\mathsf {C}_\mathsf {MSE}=(\mathsf {V}/(4\mathsf {B}^2))^{1/5}$ is unknown but estimable.", "This shows that the MSE-optimal bandwidth for a local linear estimator is proportional to $n^{-1/5}$ .", "While $h_{\mathsf {MSE}}$ is optimal for point estimation, it is generally not optimal for conducting inference.", "[5], [6], [7] show how to choose the bandwidth to obtain confidence intervals minimizing the coverage error probability (CER).", "More precisely, let $\mathsf {IC}(\hat{\tau })$ be a $(1-\alpha )$ -level confidence interval for the RD parameter $\tau (c)$ based on the estimator $\hat{\tau }$ .", "A CER-optimal bandwidth makes the coverage probability as close as possible to the desired level $1-\alpha $ : $h_{\mathsf {CER}}=\operatornamewithlimits{arg\,min}_{h>0} |\mathbb {P}[\tau (c)\in \mathsf {IC}(\hat{\tau })]-(1-\alpha )| \text{.}$ For the case of local linear regression, the CER-optimal $h$ is $h_\mathsf {CER}^{\mathsf {LLR}}=\mathsf {C}_\mathsf {CER}\cdot n^{-1/4},$ where again the constant $\mathsf {C}_\mathsf {CER}$ is unknown, because it depends in part on the data generating process, but it is estimable.", "Hence, the CER-optimal bandwidth is smaller than the MSE-optimal bandwidth, at least in large samples.", "Based on the ideas above, several variations of optimal bandwidth selectors exist, including one-sided CER-optimal and MSE-optimal bandwidths with and without accounting for covariate adjustment, clustering, or other specific features.", "In all cases, these bandwidth selectors are implemented in two steps: first the constant (e.g., 
$\\mathsf {C}_\\mathsf {MSE}$ or $\\mathsf {C}_\\mathsf {CER}$ ) is estimated, and then the bandwidth is chosen using that preliminary estimate and the appropriate rate formula (e.g., $n^{-1/5}$ or $n^{-1/4}$ )." ], [ "Estimation and Inference", "Given a bandwidth $h$ , continuity-based estimation in RD designs consists on estimating the outcome regression functions, given the score, separately for treated and control units whose scores are within the bandwidth.", "Recall from Equation (REF ) that we need to estimate the limits of the conditional expectation function of the observed outcome from the right and from the left.", "One possible approach would be to simply estimate the difference in average outcomes between treated and controls within $h$ .", "This strategy is equivalent to fitting a regression including only an intercept at each side of the cutoff.", "However, since the goal is to estimate two boundary points, this local constant approach will have a bias that can be reduced by including a slope term in the regression.", "More generally, the most common approach for point estimation in the continuity-based RD framework is to employ local polynomial methods [26], which involve fitting a polynomial of order $p$ separately on each side of the cutoff, only for observations inside the bandwidth.", "Local polynomial approximations usually include a weighting scheme that places more weight on observations that are closer to the cutoff; this weighting scheme is based on a kernel function, which we denote by $K(\\cdot )$ .", "More formally, the treatment effect is estimated as: $\\hat{\\tau }=\\hat{\\alpha }_{+}-\\hat{\\alpha }_{-}$ where $\\hat{\\alpha }_+$ is obtained as the intercept from the (possibly misspecified) regression model: $Y_i=\\alpha _++\\beta _{1+}(X_i-c)+\\ldots +\\beta _{p+}(X_i-c)^p+u_i$ on the treated observations using weights $K((X_i-c)/h)$ , and similarly $\\hat{\\alpha }_-$ is obtained as the intercept from an analogous regression fit 
employing only the control observations.", "Although theoretically a large value of $p$ can capture more features of the unobserved regression functions, $\\mu _1(x)$ and $\\mu _0(x)$ , in practice high-order polynomials can have erratic behavior, especially when estimating boundary points, a fact usually known as Runge's phenomenon [12].", "In addition, global polynomials can lead to counter-intuitive weighting schemes, as discussed by [28].", "Common choices for $p$ are $p=1$ or $p=2$ .", "As we can see, once the bandwidth has been appropriately chosen, the implementation of local polynomial regression reduces to simply fitting two linear or quadratic regressions via weighted least-squares—see [16] for an extended discussion and practical introduction.", "Despite the implementation and algebraic similarities between ordinary least squares (OLS) methods and local polynomial methods, there is a crucial difference: OLS methods assume that the polynomial used for estimation is the true form of the function, while local polynomial methods see it as just an approximation to an unknown regression function.", "Thus, inherent in the use of local polynomial methods is the idea that the resulting estimate will contain a certain error of approximation or misspecification bias.", "This difference between OLS and local polynomial methods turns out to be very consequential for inference purposes—that is, for testing statistical hypotheses and constructing confidence intervals.", "The conventional OLS inference procedure to test the null hypothesis of no treatment effect at the cutoff, $\\mathsf {H_0}: \\tau (c)=0$ , relies on the assumption that the distribution of the t-statistic is approximately standard normal in large samples: $\\frac{\\hat{\\tau }}{\\sqrt{{V}}}\\overset{a}{\\sim } \\mathcal {N}(0,1),$ where ${V}$ is the (conditional) variance of $\\hat{\\tau }$ , that is, the square of the standard error.", "However, this will only occur in cases where the misspecification 
bias or approximation error of the estimator $\\hat{\\tau }$ for $\\tau (c)$ becomes sufficiently small in large samples, so that the distribution of the t-statistic is correctly centered at zero.", "In general, this will not occur in RD analysis, where the local polynomials are used as a nonparametric approximation device, and do not make any specific functional form assumptions about the regression functions $\\mu _1(x)$ and $\\mu _0(x)$ , which will be generally misspecified.", "The general approximation to the t-statistic in the presence of misspecification error is $\\frac{\\hat{\\tau } - {B}}{\\sqrt{{V}}}\\overset{a}{\\sim } \\mathcal {N}(0,1),$ where ${B}$ is the (conditional) bias of $\\hat{\\tau }$ for $\\tau (c)$ .", "This approximation will be equivalent to the one in (REF ) only when ${B}/\\sqrt{{V}}$ is small, at least in large samples.", "More generally, it is crucial to account for the bias ${B}$ when conducting inference.", "The magnitude of the bias depends on the shape of the true regression functions and on the length of the bandwidth.", "As discussed before, the smaller the bandwidth, the smaller the bias.", "Although the conventional asymptotic approximation in (REF ) will be valid in some special cases, such as when the bandwidth is small enough, it is not valid in general.", "In particular, if the bandwidth chosen for implementation is the MSE-optimal bandwidth discussed in the prior section, the bias will remain even in large samples, making inferences based on (REF ) invalid.", "In other words, the MSE-optimal bandwidth, which is optimal for point estimation, is too large when conducting inference according to the usual OLS approximations.", "Generally valid inferences thus require researchers to use the asymptotic approximation in (REF ), which contains the bias.", "In particular, [11] propose a way to construct a t-statistic that corrects the bias of the estimator (thus making the approximation valid for more bandwidth choices, including 
the MSE-optimal choice) and simultaneously adjusts the standard errors to account for the variability that is introduced in the bias correction step—this additional variability is introduced because the bias is unknown and thus must be estimated.", "This approach is known as robust bias-corrected inference.", "Based on the approximation (REF ), [11] propose robust bias-corrected confidence intervals $\mathtt {CI}_{\mathtt {rbc}}= \left[ ~ \big (\hat{\tau }-\hat{{B}}\big ) \pm 1.96 \cdot \sqrt{{V}_{\mathtt {bc}}} ~ \right], $ where, in general, ${V}_{\mathtt {bc}} > {V}$ because ${V}_{\mathtt {bc}}$ includes the variability of estimating ${B}$ with $\hat{{B}}$ .", "In terms of implementation, the infeasible variance ${V}_{\mathtt {bc}}$ can be replaced by a consistent estimator $\hat{{V}}_{\mathtt {bc}}$ , which can account for heteroskedasticity and clustering as appropriate.", "Robust bias correction methods for RD designs have been further developed in recent years.", "For example, see [9] for robust bias correction inference in the context of RD designs with covariate adjustments, clustered data, and other empirically relevant features.", "In addition, see [5], [6], [7] for theoretical results justifying some of the features of robust bias correction inference.", "Finally, see [27] and [31] for two recent applications and empirical comparisons of robust bias correction methods.", "Continuity-based framework, in summary. Key assumptions: random potential outcomes drawn from an infinite population, and regression functions that are continuous at the cutoff. Bandwidth selection: systematic, data-driven selection based on nonparametric methods, with MSE or coverage error as the optimality criterion. Estimation: nonparametric local polynomial regression within the bandwidth, with the polynomial order and the weighting method (kernel) as choice parameters. Inference: large-sample normal approximation with robust bias correction." ], [ 
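As a hedged sketch of the point-estimation step described in this section (a local polynomial of order $p=1$ with a triangular kernel), the estimator $\hat{\tau}=\hat{\alpha}_+-\hat{\alpha}_-$ can be computed with two weighted least-squares fits. The simulated data-generating process, the true effect of 1.5, and the fixed bandwidth $h=0.2$ below are assumptions made purely for illustration, and the sketch omits the robust bias-corrected inference that a full analysis requires:

```python
import numpy as np

def local_linear_rd(y, x, c, h):
    """Sharp-RD local linear point estimate of tau(c).

    Fits y ~ intercept + slope * (x - c) separately on each side of the
    cutoff by weighted least squares, with triangular kernel weights
    K(u) = max(0, 1 - |u|) at u = (x - c) / h, and returns the
    difference of the two fitted intercepts (the boundary values).
    """
    intercepts = []
    for side in (x >= c, x < c):
        xs, ys = x[side] - c, y[side]
        w = np.clip(1.0 - np.abs(xs) / h, 0.0, None)  # triangular kernel
        keep = w > 0                                  # observations inside h
        xs, ys, w = xs[keep], ys[keep], w[keep]
        X = np.column_stack([np.ones_like(xs), xs])   # [1, (x - c)]
        # Weighted least squares: solve (X'WX) beta = X'Wy.
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * ys))
        intercepts.append(beta[0])                    # fitted value at x = c
    alpha_plus, alpha_minus = intercepts
    return alpha_plus - alpha_minus

# Illustrative use on simulated data (DGP and tau(c) = 1.5 are assumed).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 50_000)
y = 0.3 + 0.8 * x - 0.4 * x**2 + 1.5 * (x >= 0) + rng.normal(0, 0.3, 50_000)
print(local_linear_rd(y, x, c=0.0, h=0.2))
```

In practice, one would replace the fixed bandwidth with the data-driven MSE- or CER-optimal choices and report robust bias-corrected confidence intervals, as described above.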
"The Local Randomization Framework", "The local randomization approach to RD analysis provides an alternative to the continuity-based framework.", "Instead of relying on assumptions about the continuity of regression functions and their approximation and extrapolation, this approach is based on the idea that, close enough to the cutoff, the treatment can be interpreted to be “as good as randomly assigned”.", "The intuition is that, if units either have no knowledge of the cutoff or have no ability to precisely manipulate their own score, units whose scores are close enough to the cutoff will have the same chance of being barely above the cutoff as barely below it.", "If this is true, close enough to the cutoff, the RD design may create experimental-like variation in treatment assignment.", "The idea that RD designs create conditions that resemble an experiment near the cutoff has been present since the origins of the method [38], and has been sometimes proposed as a heuristic interpretation of continuity-based RD results.", "[15] used this local randomization idea to develop a formal framework, and to derive alternative assumptions for the analysis of RD designs, which are stronger than the typical continuity conditions.", "The formal local randomization framework was further developed by [23].", "The central idea behind the local randomization approach is to assume the existence of a neighborhood or window around the cutoff where the assignment to being above or below the cutoff behaves as it would have behaved in an actual experiment.", "In other words, the local randomization RD approach makes the assumption that there is a window around the cutoff where assignment to treatment is as-if experimental.", "The formalization of these assumptions requires a more general notation.", "In prior sections, we used $Y_i(D_i)$ to denote the potential outcome under treatment $D_i$ , which could be equal to one (treatment) or zero (control).", "Since $D_i = I(X_i \\ge c)$ , 
this also allowed the score $X_i$ to indirectly affect the potential outcomes; moreover, this notation did not prevent $Y_i(\\cdot )$ from being a function of $X_i$ , but this was not explicitly noted.", "We now generalize the notation to explicitly note that the potential outcomes may be a direct function of $X_i$ , so we write $Y_i(D_i,X_i)$ .", "In addition, note that here and in all prior sections we are implicitly assuming that potential outcomes only depend on unit $i$ 's own treatment assignment and running variable, an assumption known as SUTVA (stable unit treatment value assumption).", "While some of the methods described in this section are robust to some violations of SUTVA, we impose this assumption to ease exposition.", "See [23] for more discussion.", "To formalize the local randomization RD approach, we assume that there exists a window $W_0$ around the cutoff where the following two conditions hold: Unconfounded Assignment.", "The distribution function of the score inside the window, $F_{X_i | X_i \\in W_0}(x)$ , does not depend on the potential outcomes, is the same for all units, and is known: $F_{X_i | X_i \\in W_0}(x) = F_0(x),$ where $F_0(x)$ is a known distribution function.", "Exclusion Restriction.", "The potential outcomes do not depend on the value of the running variable inside the window, except via the treatment assignment indicator: $Y_i(d,x)=Y_i(d)\\quad \\forall \\, i \\text{ such that } X_i \\in W_0.$", "This condition requires the potential outcomes to be unrelated to the score inside the window.", "Importantly, these two assumptions would not be guaranteed by randomly assigning the treatment inside $W_0$ , because the random assignment of $D_i$ inside $W_0$ does not by itself guarantee that the score and the potential outcomes are unrelated (the exclusion restriction).", "For example, imagine an RD design based on elections, where the treatment is the electoral victory of a political party, the score is the vote share, 
and the party wins the election if the vote share is above 50%.", "Even if, in very close races, election winners were chosen randomly instead of based on their actual vote share, donors might still believe that districts where the party obtained a bare majority are more likely to support the party again, and thus they may donate more money to the races where the party's vote share was just above 50% than to races where the party was just below 50%.", "If donations are effective in boosting the party, this would induce a positive relationship near the cutoff between the running variable (vote share) and the outcome of interest (victory in the future election).", "The discussion above illustrates why the unconfounded assignment assumption in equation (REF ) is not enough for a local randomization approach to RD analysis.", "We must explicitly assume that the score and the potential outcomes are unrelated inside $W_0$ , which is not implied by (REF ).", "This issue is discussed in detail by [37], who use several examples to show that the exclusion restriction in (REF ) is neither implied by assuming statistical independence between the potential outcomes and the treatment in $W_0$ , nor by assuming that the running variable is randomly assigned in $W_0$ .", "In addition, see [36] for a discussion of the status of RD designs among observational studies, and [39] for a discussion of the connection between RD designs and natural experiments." 
], [ "Estimation and Inference within a Known Window", "The local randomization conditions (REF ) and (REF ) open new possibilities for RD estimation and inference.", "Of course, these conditions are strong and, just like the continuity conditions in Section , they are not implied by the RD treatment assignment rule but rather must be assumed in addition to it [36].", "Because these assumptions are strong and are inherently untestable, it is crucial for researchers to provide as much information as possible regarding their plausibility.", "We discuss this issue in Section , where we present several strategies for empirical falsification of the RD assumptions.", "The key assumption of the local randomization approach is that there exists a neighborhood around the cutoff in which (REF ) and (REF ) hold—implying that we can treat the RD design as a randomized experiment near the cutoff.", "We denote this neighborhood by $W_0=[c-w,c+w]$ , where $c$ continues to be the RD cutoff, but we now use the notation $w$ as opposed to $h$ to emphasize that $w$ will be chosen and interpreted differently from the previous section.", "Furthermore, to ease the exposition, we start by assuming that $W_0$ is known, and then discuss how to select $W_0$ based on observable information.", "This data-driven window selection step will be crucial in applications, as in most empirical examples $W_0$ is fundamentally unknown, if it exists at all—but see [31] for an exception.", "Given a window $W_0$ , the local randomization framework summarized by assumptions (REF ) and (REF ) allows us to analyze the RD design employing the standard tools of the classical analysis of experiments.", "Depending on the available number of observations inside the window, the experimental analysis can follow two different approaches.", "In the Fisherian approach, also known as a randomization inference approach, potential outcomes are considered non-random, the assignment mechanism is assumed to be known, and 
this assignment is used to calculate the exact finite-sample distribution of a test statistic of interest under the null hypothesis that the treatment effect is zero for every unit.", "On the other hand, in the large-sample approach, the potential outcomes may be fixed or random, the assignment mechanism need not be known, and the finite-sample distribution of the test statistic is approximated under the assumption that the number of observations is large.", "Thus, in contrast to the Fisherian approach, in the large-sample approach inferences are based on test statistics whose finite-sample properties are unknown, but whose null distribution can be approximated by a Normal distribution under the assumption that the sample size is large enough.", "Below we briefly review both Fisherian and large-sample methods for analysis of RD designs under a local randomization framework.", "Fisherian methods will be most useful when the number of observations near the cutoff is small, which may render large-sample methods invalid.", "In contrast, in applications with many observations, large-sample methods will be the most natural approach, and Fisherian methods can be used as a robustness check." 
], [ "Fisherian approach", "In the Fisherian framework, the potential outcomes are seen as fixed, non-random magnitudes from a finite population of $n$ units.", "The information on the observed sample of units $i=1,\\ldots ,n$ is not seen as a random draw from an infinite population, but as the population of interest.", "This feature allows for the derivation of the finite-sample-exact distribution of test statistics without relying on approximations.", "We follow [23], slightly adapting our previous notation.", "Let $\\mathbf {X}=(X_1,\\ldots ,X_n)^{\\prime }$ denote the $n\\times 1$ column vector collecting the observed running variable of all units in the sample, and $\\mathbf {D}=(D_1,\\ldots ,D_n)^{\\prime }$ be the vector collecting treatment assignments.", "The non-random potential outcomes for each unit $i$ are denoted by $y_i(d,x)$ , where $d$ and $x$ are possible values for $D_i$ and $X_i$ .", "All the potential outcomes are collected in the vector $\\mathbf {y}(\\mathbf {d},\\mathbf {x})$ .", "The vector of observed outcomes is simply the vector of potential outcomes, evaluated at the observed values of the treatment and running variable, $\\mathbf {Y}=\\mathbf {y}(\\mathbf {D},\\mathbf {X})$ .", "Because potential outcomes are assumed non-random, all the randomness in the model enters through the running variable vector $\\mathbf {X}$ and the treatment assignment $\\mathbf {D}$ , which is a function of it.", "In what follows, we let the subscript “0” indicate the subvector inside the neighborhood $W_0$ , so that $\\mathbf {X}_0$ , $\\mathbf {D}_0$ and $\\mathbf {Y}_0$ denote the vectors of running variables, treatment assignments and observed outcomes inside $W_0$ .", "Finally, $N_0^+$ will denote the number of observations inside the neighborhood and above the cutoff (treated units inside $W_0$ ), and $N_0^-$ the number of units in the neighborhood below the cutoff (control units in $W_0$ ), with $N_0=N_0^++N_0^-$ .", "Note that using the 
fixed-potential outcomes notation, the exclusion restriction becomes $y_i(d,x)=y_i(d),\\; \\forall \\, i \\text{ in } W_0$ [15].", "In this Fisherian framework, a natural null hypothesis to test for the presence of a treatment effect is the sharp null of no effect: $\\mathsf {H}^s_0:\\quad y_i(1)=y_i(0), \\quad \\forall i \\text{ in } W_0.$", "This sharp null hypothesis states that switching treatment status does not affect potential outcomes, implying that the treatment does not have an effect on any unit inside the window.", "In this context, a hypothesis is sharp when it allows the researcher to impute all the missing potential outcomes.", "Thus, $\\mathsf {H}^s_0$ is sharp because when there is no effect, all the missing potential outcomes are equal to the observed ones.", "Under $\\mathsf {H}^s_0$ , the researcher can impute all the missing potential outcomes and, since the assignment mechanism is assumed to be known, it is possible to calculate the distribution of any test statistic $T(\\mathbf {D}_0,\\mathbf {Y}_0)$ to assess how far in the tails the observed statistic falls.", "This reasoning provides a way to calculate a p-value for $\\mathsf {H}^s_0$ that is finite-sample exact and does not require any distributional approximation.", "This randomization inference p-value is obtained by calculating the value of $T(\\mathbf {D}_0,\\mathbf {Y}_0)$ for all possible values of the treatment vector inside the window $\\mathbf {D}_0$ , and calculating the probability of $T(\\mathbf {D}_0,\\mathbf {Y}_0)$ being larger than the observed value $T_\\mathsf {obs}$ .", "See [15], [23] and [22] for further details and implementation issues.", "See also [17] for a practical introduction to local randomization methods.", "In addition to testing the null hypothesis of no treatment effect, the researcher may be interested in obtaining a point estimate for the effect.", "When condition (REF ) holds, a difference in means between treated and controls inside the 
window, $\\Delta =\\frac{1}{N^+_0}\\sum _{i=1}^n Y_iD_i - \\frac{1}{N^-_0}\\sum _{i=1}^n Y_i(1-D_i),$ where the sums run over all observations inside $W_0$ , is unbiased for the sample average treatment effect in $W_0$ , $\\tau _0=\\frac{1}{N_0}\\sum _{i=1}^n (y_i(1)-y_i(0)).$ However, it is important to emphasize that the randomization inference method described above cannot test hypotheses on $\\tau _0$ , because the null hypothesis $\\tau _0=0$ is not sharp: without further restrictive assumptions, it does not allow the researcher to unequivocally impute all the missing potential outcomes, which is a necessary condition for using Fisherian methods.", "Hence, under the assumptions imposed so far, hypothesis testing on $\\tau _0$ has to be based on asymptotic approximations, as described in Section REF .", "The assumption that the potential outcomes do not depend on the running variable, stated in Equation (REF ), can be relaxed by assuming a local parametric model for the relationship between $\\mathbf {Y}_0$ and $\\mathbf {X}_0$ .", "Specifically, [23] assume there exists a transformation $\\phi (\\cdot )$ such that the transformed outcomes do not depend on $\\mathbf {X}_0$ .", "This transformation could be, for instance, a linear adjustment that removes the slope whenever the relationship between outcomes and the running variable is assumed to be linear.", "The case where potential outcomes do not depend on the running variable is a particular case in which $\\phi (\\cdot )$ is the identity function.", "Both inference and estimation can therefore be conducted using the transformed outcomes, either when the assumption that potential outcomes are unrelated to the score is not reasonable or as a robustness check." 
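The Fisherian procedure described in this section can be sketched in a few lines. The example below is a simplified illustration, not the implementation in the software cited above: it uses the difference in means as the test statistic and approximates the exact p-value by sampling fixed-margins assignments (permutations of the observed treatment vector) rather than enumerating all of them, and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_in_means(d, y):
    """The statistic Delta from the text: mean of treated minus mean of controls."""
    return y[d == 1].mean() - y[d == 0].mean()

def randomization_pvalue(d, y, statistic=diff_in_means, reps=10_000):
    """Fisherian p-value for the sharp null of no effect inside W0.
    Under the sharp null, observed outcomes equal both potential
    outcomes, so only the treatment vector is re-randomized; permuting
    d keeps the number of treated units fixed (fixed margins)."""
    t_obs = abs(statistic(d, y))
    hits = sum(abs(statistic(rng.permutation(d), y)) >= t_obs
               for _ in range(reps))
    return hits / reps

# Hypothetical toy data standing in for (D0, Y0) inside the window.
d0 = np.array([1] * 10 + [0] * 10)
y0 = np.concatenate([rng.normal(1.0, 1.0, 10), rng.normal(0.0, 1.0, 10)])
p_value = randomization_pvalue(d0, y0)
```

With only 20 observations, full enumeration of all $\binom{20}{10}$ fixed-margins assignments would also be feasible and would deliver the exact p-value; sampling assignments is the standard shortcut when the window contains more units.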
], [ "Large-Sample approach", "In the most common large-sample approach, we treat potential outcomes as random variables, and often see the units in the study as a random sample from a larger population.", "(Though in the Neyman large-sample approach, potential outcomes are fixed; see [32] for more discussion.)", "In addition to the randomness of the potential outcomes, this approach differs from the Fisherian approach in its null hypothesis of interest.", "Given the randomness of the potential outcomes, the focus is no longer on the sharp null but rather typically on the hypothesis that the average treatment effect is zero.", "In our RD context, this null hypothesis can be written as $\\mathsf {H}_0:\\quad \\mathbb {E}[Y_i(1)]=\\mathbb {E}[Y_i(0)], \\quad \\forall i \\text{ in } W_0.$", "Inference in this case is based on the usual large-sample methods for the analysis of experiments, relying on difference-in-means tests and Normal-based confidence intervals.", "See [32] and [17] for details." 
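With enough observations inside the window, this large-sample analysis reduces to a standard two-sample comparison. A minimal sketch, using a Welch-style two-sample variance estimator and hypothetical toy data (one common choice, not the only one):

```python
import numpy as np
from scipy.stats import norm

def large_sample_test(d, y):
    """Difference in means inside W0 with a Normal approximation:
    tests H0: E[Y(1)] = E[Y(0)] using the standard two-sample
    variance estimator and a Normal-based 95% confidence interval."""
    y1, y0 = y[d == 1], y[d == 0]
    delta = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size)
    z = delta / se
    p_value = 2 * norm.sf(abs(z))            # two-sided p-value
    ci = (delta - 1.96 * se, delta + 1.96 * se)
    return delta, p_value, ci

# Hypothetical toy data for illustration.
d0 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y0 = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 1.0, 2.0, 3.0])
delta, p_value, ci = large_sample_test(d0, y0)
```

Unlike the Fisherian calculation, nothing here is exact: the validity of the p-value and confidence interval rests on the Normal approximation being accurate for the window's sample size.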
], [ "Window Selection", "In practice, the window $W_0$ in which the RD design can be seen as a randomized experiment is not known and needs to be estimated.", "[15] propose a window selection mechanism based on the idea that, in a randomized experiment, the distribution of observed covariates has to be equal between treated and controls.", "Thus, if the local randomization assumption is plausible in any window, it should be in a window where we cannot reject that the pre-determined characteristics of treated and control units are on average identical.", "The idea of this procedure is to select a test statistic that summarizes differences in a vector of covariates between groups, such as the difference-in-means or the Kolmogorov-Smirnov statistic, and start with an initial “small” window.", "Inside this initial window, the researcher conducts a test of the null hypothesis that covariates are balanced between treated and control groups.", "This can be done, for example, by assessing whether the minimum p-value from the tests of differences-in-means for each covariate is larger than some specified level, or by conducting a joint test using, for instance, a Hotelling statistic.", "If the null hypothesis is not rejected, the window is enlarged and the process is repeated.", "The selected window will be the widest window in which the null hypothesis is not rejected.", "Common choices for the test statistic $T(\\mathbf {D}_0,\\mathbf {Y}_0)$ are the difference-in-means between treated and controls, the two-sample Kolmogorov-Smirnov statistic, or the rank-sum statistic.", "The minimum window to start the procedure should contain enough observations to ensure sufficient statistical power to reject the null hypothesis of covariate balance.", "The appropriate minimum number of observations will naturally depend on unknown, application-specific parameters, but based on standard power calculations we suggest using no fewer than approximately 10 observations in each group.", "See [15] and [23] for methodological 
details, [17] for a practical introduction, and [22] for software implementation.", "[colframe=blue!25, colback=blue!10, coltitle=blue!20!black, title=Local randomization framework: summary] Key assumptions: There exists a window $W_0$ in which the treatment assignment mechanism satisfies two conditions: Probability of receiving a particular score value in $W_0$ does not depend on the potential outcomes Exclusion restriction or parametric relationship between $\\mathbf {Y}$ and $\\mathbf {X}$ in $W_0$ Window selection: Goal: Find a window where the key assumptions are plausible Iterative procedure to balance observed covariates between groups Choice parameters: test statistic, stopping rule Estimation: Difference in means between treated and controls within neighborhood OR Flexible parametric modeling to account for the effect of $X_i$ Inference: Fisherian randomization-based inference or large-sample inference Conditional on sample and chosen window Choice parameter: test statistic, randomization mechanism in Fisherian" ], [ "Falsification Methods", "Every time researchers use an RD design, they must rely on identification assumptions that are fundamentally untestable, and that do not hold by construction.", "If we employ a continuity-based approach, we must assume that the regression functions are smooth functions of the score at the cutoff.", "If, on the other hand, we employ a local randomization approach, we must assume that there exists a window where the treatment behaves as if it had been randomly assigned.", "These assumptions may be violated for many reasons.", "Thus, it is crucial for researchers to provide as much empirical evidence as possible about their validity.", "Although testing the assumptions directly is not possible, there are several empirical regularities that we expect to hold in most cases where the assumptions are met.", "We discuss some of these tests below.", "Our discussion is brief, but we refer the reader to [16] for an extensive 
practical discussion of RD falsification methods, and additional references.", "Covariate Balance.", "If either the continuity or local randomization assumptions hold, the treatment should not have an effect on any predetermined covariates, that is, on covariates whose values are realized before the treatment is assigned.", "Since the treatment effect on predetermined covariates is zero by construction, consistent evidence of non-zero effects on covariates that are likely to be confounders would raise questions about the validity of the RD assumptions.", "For implementation, researchers should analyze each covariate as if it were an outcome.", "In the continuity-based approach, this requires choosing a bandwidth and performing local polynomial estimation and inference within that bandwidth.", "Note that the optimal bandwidth is naturally different for each covariate.", "In the local randomization approach, the null hypothesis of no effect should be tested for each covariate using the same choices as used for the outcome.", "If the window is chosen using the covariate balance procedure discussed above, the selected window will automatically be a region where no treatment effects on covariates are found.", "Density of Running Variable.", "Another common falsification test is to study the number of observations near the cutoff.", "If units cannot precisely manipulate the value of the score that they receive, we should expect as many observations just above the cutoff as just below it.", "In contrast, for example, if units had the power to affect their score and knew that the treatment was very beneficial, we would expect more units just above the cutoff (where the treatment is received) than below it.", "In the continuity-based framework, the procedure is to test the null hypothesis that the density of the running variable is continuous at the cutoff [35], which can be implemented in a more robust way via the novel density estimator proposed in [19].", "In the 
local randomization framework, [23] propose a novel implementation via a finite-sample-exact binomial test of the null hypothesis that the number of treated and control observations in the chosen window is compatible with a 50% probability of treatment assignment.", "Alternative cutoff values.", "Another falsification test estimates the treatment effect on the outcome at a cutoff value different from the actual cutoff used for the RD treatment assignment.", "The estimation uses the same procedures used to estimate the effect at the actual cutoff, but only uses observations that share the same treatment status (all treated observations if the artificial cutoff is above the real one, or all control observations if the artificial cutoff is below the real one).", "The idea is that no treatment effect should be found at the artificial cutoff, since the treatment status is not changing.", "Alternative bandwidth and window choices.", "Another approach is to study the robustness of the results to small changes in the size of the bandwidth or window.", "For implementation, the main analysis is typically repeated for values of the bandwidth or window that are slightly smaller and/or larger than the values used in the main analysis.", "If the effects completely change or disappear for small changes in the chosen neighborhood, researchers should be cautious in interpreting their results." 
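The iterative covariate-balance window selector described in the Window Selection section, which also delivers the covariate balance falsification check automatically, can be sketched as follows. This is a simplified illustration rather than the implementation in the accompanying software: it substitutes Welch t-tests for the randomization-based balance tests, and the starting window, increment, and p-value threshold are hypothetical defaults.

```python
import numpy as np
from scipy.stats import ttest_ind

def select_window(x, covariates, cutoff=0.0, w_start=0.05, step=0.01,
                  w_max=1.0, alpha=0.15):
    """Iterative covariate-balance window selector (sketch).
    Starting from a small symmetric window, test balance of every
    predetermined covariate between treated (x >= cutoff) and control
    (x < cutoff) units; keep enlarging the window while the minimum
    p-value stays above `alpha`, and return the last window that passed."""
    selected, w = None, w_start
    while w <= w_max:
        inside = np.abs(x - cutoff) <= w
        treated = x[inside] >= cutoff
        p_values = [ttest_ind(z[inside][treated], z[inside][~treated],
                              equal_var=False).pvalue
                    for z in covariates]
        if min(p_values) < alpha:   # imbalance detected: stop enlarging
            break
        selected = (cutoff - w, cutoff + w)
        w = round(w + step, 10)     # avoid floating-point drift
    return selected

# Hypothetical toy data: a covariate that jumps at the cutoff should
# trigger rejection in the very first window, so no window is selected.
x = np.linspace(-1.0, 1.0, 201)
z_unbalanced = (x >= 0).astype(float) + 0.001 * np.sin(np.arange(x.size))
w_sel = select_window(x, [z_unbalanced])
```

Note that the procedure returns the widest window that passed, consistent with the stopping rule in the text; if even the starting window fails the balance test, no window is recommended.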
], [ "Empirical Illustration", "To illustrate all the RD methods discussed so far, we partially re-analyze the study by [34].", "These authors study municipal mayor elections in Brazil between 1996 and 2012, examining the effect of a party's victory in the current election on the probability that the party wins a future election for mayor in the same municipality.", "The unit of analysis is the municipality, the score is the party's margin of victory at election $t$ , defined as the party's vote share minus the vote share of the party's strongest opponent, and the treatment is the party's victory at $t$ .", "Their original analysis focuses on the unconditional victory of the party at $t+1$ as the outcome of interest.", "In this illustration, our outcome of interest is instead the party's margin of victory at $t+1$ , which is only defined for those municipalities where the incumbent party runs for reelection at $t+1$ .", "We analyze this effect for the incumbent party (defined as the party that won election $t-1$ , whichever party that is) in the full sample.", "[34] discuss the interpretation and validity issues that arise when conditioning on the party's decision to re-run, but we ignore such issues here for the purposes of illustration.", "In addition to the outcome and score variables used for the main empirical analysis, our covariate-adjusted local polynomial methods, window selection procedure, and falsification approaches employ seven covariates at the municipality level: per-capita GDP, population, number of effective parties, and indicators for whether each of four parties (the Democratas, PSDB, PT and PMDB) won the prior ($t-1$ ) election.", "We implement the continuity-based analysis with the rdrobust software [10], [13], [8], the local randomization analysis using the rdlocrand software [22], and the density test falsification using the rddensity software [18].", "The packages can be obtained for R and Stata from 
https://sites.google.com/site/rdpackages/.", "We do not present the code to conserve space, but the full code employed is available on the packages' website.", "[16], [17] offer a detailed tutorial on how to use these packages, employing a different empirical illustration." ], [ "Falsification Analysis", "We start by presenting a falsification analysis.", "In order to falsify the continuity-based analysis, we analyze the density of the running variable, and also the effect of the RD treatment on several predetermined covariates.", "We start by reporting the result of a continuity-based density test, using the local polynomial density estimator developed by [19].", "The estimated difference in the density of the running variable at the cutoff is $-0.0753$ , and the p-value associated with the test of the null hypothesis that this difference is zero is $0.94$ .", "This test is illustrated in Figure REF , which shows the local-polynomial-estimated density of the incumbent party's margin of victory at $t$ at the cutoff, separately estimated from above and below the cutoff.", "These results indicate that the density of the running variable does not change abruptly at the cutoff, and are thus consistent with the assumption that parties do not precisely manipulate their margin of victory to ensure a win in close races.", "In addition, we also implemented the finite-sample-exact binomial tests proposed in [23], which confirmed the empirical results obtained via local polynomial density methods.", "We do not report these numerical results to conserve space, but they can be consulted using the accompanying replication files.", "Figure: Estimated density of the running variable", "We also present local polynomial point estimates of the effect of the incumbent party's victory on each of the seven predetermined covariates mentioned above, and we perform robust local-polynomial inference to obtain confidence intervals and p-values for these effects.", "Since these covariates are all 
determined before the outcome of the election at $t$ is known, the treatment effect on each of them is zero by construction.", "Our estimated effects and statistical inferences should therefore be consistent with these known null effects.", "We present the results graphically in Figures REF and REF using typical RD plots [12] where binned means of the outcome within intervals of the score are plotted against the midpoint of the score in each interval.", "A fourth-order polynomial, separately estimated above and below the cutoff, is superimposed to show the global shape of the regression functions.", "In these plots, we also report the formal local polynomial point estimate, $95\\%$ robust confidence interval, robust p-value, and number of observations within the bandwidth.", "The bandwidth (not reported) is chosen in each case to be MSE-optimal.", "As we can see, the incumbent party's bare victory at $t$ does not have an effect on any of the covariates.", "All 95% confidence intervals contain zero, most of these intervals are approximately symmetric around zero, and most point estimates are small.", "These results show that there are no obvious or notable covariate differences at the cutoff between municipalities where the incumbent party barely won at $t$ and municipalities where the incumbent party barely lost at $t$ .", "Figure: RD Effects on Predetermined Covariates", "Figure: RD Effects on Predetermined Covariates", "Since the evidence from our falsification analysis is consistent with the validity of our RD design, we now proceed to analyze the treatment effect on the main outcome of interest: the incumbent party's margin of victory at $t+1$ .", "This effect is illustrated in Figure REF .", "A stark jump can be seen at the cutoff, where the margin of victory of the incumbent party at $t+1$ abruptly decreases as the score crosses the cutoff.", "This indicates that municipalities where the incumbent party barely wins at $t$ obtain a lower margin of victory at election 
$t+1$ compared to municipalities where the incumbent party barely loses at $t$ , one of the main substantive findings in [34].", "Figure: Effect of Victory at $t$ on Vote Margin at $t+1$ . Incumbent Party, Brazilian Mayoral Elections, 1996-2012", "We now analyze this effect formally.", "We first analyze RD effects using the continuity-based framework, employing local polynomial methods with $p=1$ and an MSE-optimal bandwidth.", "For inference, we use robust bias-corrected 95% confidence intervals.", "As we can see in Table REF , the MSE-optimal bandwidth is estimated to be around 15.3 percentage points, and within this bandwidth, the RD local-polynomial point estimate is about -6.3.", "This shows that, at the cutoff, a victory at $t$ reduces the incumbent party's vote margin at $t+1$ by about 6 percentage points in those municipalities where the party seeks reelection.", "The 95% robust bias-corrected confidence interval ranges from -10.224 to -2.945, rejecting the null hypothesis of no effect with a robust p-value of about 0.0004.", "Including covariates leads to very similar results: the MSE-optimal bandwidth changes to 14.45, and the point estimate moves from -6.28 to -6.10, a very small change, as expected when the covariates are truly predetermined.", "Table: Continuity-based RD Analysis: Effect of Victory at $t$ on Vote Margin at $t+1$ . Incumbent Party, Brazilian Mayoral Elections, 1996-2012", "Second, we analyze the main outcome using a local randomization approach.", "For this, we must choose the window around the cutoff where the assumption of local randomization appears plausible (if such a window exists).", "We implement our window selection procedure using the list of covariates mentioned above, an increment of 0.01 percentage points, and a cutoff p-value of 0.15.", "We use Fisherian randomization-based inference with the difference-in-means as the test statistic and assuming a fixed-margins randomization procedure using the actual number of treated and controls in 
each window.", "As shown in Table REF , starting at the $[-0.05, 0.05]$ window and considering all symmetric windows in 0.01 increments, we see that all windows between $[-0.05, 0.05]$ and $[-0.15, 0.15]$ have a minimum p-value above 0.15.", "The window $[-0.16, 0.16]$ is the first window where the minimum p-value drops below 0.15; indeed, it drops all the way to 0.061.", "Thus, our selected window is $[-0.15, 0.15]$ , which has exactly 38 observations on each side of the cutoff.", "Table: Minimum p-value in first 20 symmetric windows around cutoff. Running variable is Vote Margin at $t$ of Incumbent Party, Brazilian Mayoral Elections, 1996-2012", "In order to further illustrate the results in Table REF , Figure REF shows the associated p-values for all symmetric windows in 0.01 increments between $[-0.05, 0.05]$ and $[-2.00, 2.00]$ .", "Figure: Window Selector Based on Covariates. Incumbent Party, Brazilian Mayoral Elections, 1996-2012. Running variable is Incumbent Party's Margin of Victory at $t$", "In Table REF , we present our inference results in the chosen window $[-0.15, 0.15]$ , reporting both Fisherian inference (using the same choices as those used in the window selection procedure) and large-sample p-values.", "The treated-control difference-in-means is $-9.992$ , with a Fisherian p-value of approximately $0.083$ and a large-sample p-value of about $0.070$ , rejecting both the sharp null hypothesis and the hypothesis of no average effect at the 10% level.", "The fact that the point estimate continues to be negative and that the p-values are below 10% suggests that the continuity-based results are broadly robust to a local-randomization assumption, as both approaches lead to similar conclusions.", "The local randomization p-value is much larger than the p-value from the continuity-based local polynomial analysis, but this is likely due, at least in part, to the loss of observations, as the sample size goes from a total of 3,412 (1,740+1,672) observations to just 39 
(19+20).", "(The discrepancy in the number of observations in $[-0.15,0.15]$ between the outcome analysis and the window-selector analysis stems from missing values in the covariates.)", "Table: Local Randomization RD Analysis: Effect of Victory at $t$ on Vote Margin at $t+1$ . Incumbent Party, Brazilian Mayoral Elections, 1996-2012" ], [ "Final Remarks", "We reviewed two alternative frameworks for analyzing sharp RD designs.", "First, the continuity-based approach, which is more common in empirical work, assumes that the unknown regression functions are continuous at the cutoff.", "Estimation is conducted nonparametrically using local polynomial methods, and bandwidth selection relies on minimizing a criterion such as the MSE or the coverage error probability.", "Inference under this framework relies on large sample distributional approximations, and requires robust bias correction to account for misspecification errors local to the cutoff.", "Second, the local randomization approach formalizes the intuition that RD designs can be interpreted as local experiments in a window around the cutoff.", "In this case, the window is chosen to ensure that treated and controls are comparable in terms of observed predetermined characteristics, as in a randomized experiment.", "Within this window, inference is conducted using randomization inference methods assuming that potential outcomes are non-random, or other canonical analysis of experiments methods based on large sample approximations.", "These two approaches rely on different assumptions, each with its own advantages and disadvantages, and thus we see them as complementary.", "On the one hand, the continuity-based approach is agnostic about the data generating process and does not require any modeling or distributional assumptions on the regression functions.", "This generality comes at the expense of basing inference on large-sample approximations, which may not be reliable when the sample size is small (a case that is 
common in RD designs, given their local nature).", "On the other hand, the Fisherian local randomization approach provides tools to conduct inference that is exact in finite samples and does not rely on distributional approximations.", "This type of inference is more reliable than large-sample-based inference when the sample size is small.", "And if the sample size near the cutoff is large, the analysis can also be conducted using standard large-sample methods for the analysis of experiments.", "However, the conclusions drawn under the local randomization approach (either Fisherian or large-sample) require stronger assumptions (unconfounded assignment, exclusion restriction) than the continuity-based approach, are conditional on a specific sample and window, and do not generalize to other samples or populations.", "In sum, as in [23], we recommend the continuity-based approach as the default approach for analysis, since it does not require parametric modeling assumptions and automatically accounts for misspecification bias in the regression functions when conducting estimation and inference.", "The local randomization approach can be used as a robustness check, especially when the sample size is small and the large-sample approximations may not be reliable.", "There is one particular case, however, in which the continuity-based approach is not applicable: when the running variable exhibits only a few distinct values or mass points (even if the sample size is large because of repeated values).", "In this case, the nonparametric methods for estimation, inference, and bandwidth selection described above do not apply, since they are developed under the assumption of local approximations and continuity of the score variable, which are violated by construction when the running variable is discrete with a small number of mass points.", "Thus, in settings where the running variable has few mass points, local randomization methods, possibly employing only the closest 
observations to the cutoff, are a more natural approach for analysis.", "We refer the reader to [17] for a more detailed discussion and practical illustration of this point." ] ]
1906.04242
[ [ "Multiple Exclusion Statistics" ], [ "Abstract A new distribution for systems of particles in equilibrium obeying exclusion of correlated states is presented following Haldane's state counting.", "It relies upon an ansatz to deal with the multiple exclusion that takes place when the states accessible to single particles are spatially correlated and can be simultaneously excluded by more than one particle.", "Haldane's statistics and Wu's distribution are recovered in the non-correlated-states limit of the multiple exclusion statistics.", "In addition, an exclusion spectrum function $\mathcal{G}(n)$ is introduced to account for the dependence of the state exclusion on the occupation-number $n$.", "Results of thermodynamics and state occupation are shown for ideal lattice gases of linear particles of size $k$ ($k$-mers) where multiple exclusion occurs.", "Remarkable agreement is found with Grand-Canonical Monte Carlo simulations from $k$=2 to 10, where multiple exclusion dominates as $k$ increases." ], [ "Multiple Exclusion Statistics Julian J. Riccardo Corresponding author.", "[email protected] Jose L. Riccardo, Antonio J. Ramirez-Pastor, Marcelo P. 
Pasinetti, Departamento de Física, Instituto de Física Aplicada, Universidad Nacional de San Luis-CONICET, Ejército de los Andes 950, D5700BWS, San Luis, Argentina.", "Quantum fractional statistics has drawn considerable interest in condensed matter physics since the early theoretical contributions [1], [2], [3], [4], [5], [6], [7] and because of its ability to describe physical phenomena such as the fractional quantum Hall effect [8], [9], [4], spinor excitations in quantum antiferromagnets [10], [11], high-temperature superconductivity [12], quantum systems in low dimensions [13], [14], [15], [16] and, more recently, its implications in the field of cosmology and dark matter.", "Concerning the quantum physics of strongly interacting many-particle systems, in a seminal work, Haldane [5] introduced Fractional Exclusion statistics (FE) and the definition of the statistical exclusion parameter $g$ , $0\le g\le 1$ , with the Bose-Einstein (BE) and Fermi-Dirac (FD) statistics as the boundary cases for $g=0$ and 
$g=1$ , respectively.", "Later Wu [17] derived the statistical distribution for an ideal gas of fractional-statistic particles.", "These papers were a major contribution to describing quantum systems in one and two dimensions, such as anyons in a strong magnetic field in the lowest Landau level [18] and excitations in pure Laughlin liquids [8], [19], [20].", "On the other hand, the classical statistical mechanics of interacting large particles of arbitrary size and shape is a relevant problem, since it is a major challenge to properly account for the generally complex entropic contribution to the free energy.", "Many physical systems, ranging from small polyatomics and alkanes to protein adlayers, share these characteristics.", "The multisite occupancy problem has long been addressed by the approximations of Flory-Huggins [21], [22], [23], [24] for binary solutions and lattice gases of particles of arbitrary size and shape made of a number $k$ of linked units ($k$ -mers) [25], and it has been referred to as the prototype of the lattice problem [26].", "Among the motivations we can also mention the modelling of Cooper and vortex pairs [27], [28], cluster diffusion on regular surfaces [29], [30] and the thermodynamics of polyatomic adlayers [31], [32], [33], which represents a current open problem in the statistical physics of gas-solid interfaces.", "The FE and Wu's distribution were already reinterpreted in the domain $g > 1$ to model the thermodynamics of linear $k$ -mers ideal lattice gases behaving statistically like "superfermions" [34] and resulting in the exact one-dimensional (1D) solution for $g=k$ [35].", "As shown later, effective correlations between states do not arise in 1D; however, they do in two or higher dimensions, as considered here.", "This work addresses the statistical mechanics of identical particles in equilibrium occupying a set of spatially correlated states and obeying statistical exclusion in a confined region of the space.", "We refer to as multiple 
exclusion the fact that, because of spatial correlations, the states accessible to single particles can be simultaneously excluded by more than one particle in the system; it is not related to mutual exclusion as clearly defined by Haldane and Wu [5], [17] to refer to exclusion statistics between different species within a space region.", "A classical realization of multiple exclusion phenomena is provided by the physical models of lattice gases of $k$ -mers.", "In what follows, we develop a statistics for systems of many particles with state exclusion between spatially correlated states, which reduces to Haldane-Wu's FE for statistically independent states (constant exclusion $g$ ) and, correspondingly, to the FD and BE ones.", "Let us consider a system of volume $V$ containing $N$ identical particles having $G$ states accessible to a single particle.", "The canonical partition function is $Q(N,T,V)= \sum _{i} e^{-\beta H_{i}(N)}$ where $H_{i}(N)$ denotes the Hamiltonian of the $i^{th}$ state and $\beta =1/k_{b}T$ ($k_b$ is the Boltzmann constant).", "For the sake of simplicity, we address a homogeneous system of $N$ non-interacting identical particles in the volume $V$ (other than the fact that the states they can occupy are not independent of each other).", "By defining $d_{N}$ as the number of states in $V$ accessible to the $N^{th}$ particle after $(N-1)$ have been added to $V$ , then $Q(N,T,V)= W(N) e^{-\beta N U_{o}} q_{i}^{N}$ with [5] $W(N)= \frac{(d_{N}+N-1)!}{N! \ (d_{N}-1)!}$ where $U_{o}$ and $q_{i}$ are the energy per particle and the internal partition function, respectively.", "In the limit $n=\lim _{N,G \rightarrow \infty } N/G$ , the thermodynamic functions are ${}\begin{aligned}\beta \tilde{F}(n,T)&=\lim _{N,G \rightarrow \infty }\frac{\beta F(N,T,V)}{G}=-\lim _{N,G \rightarrow \infty }\frac{\ln Q(N,T,V)}{G} \\&=\beta nU_{o}-[\tilde{d}(n)+n] \ln [\tilde{d}(n)+n] + \tilde{d}(n) \ln \tilde{d}(n)\\& \ \ + n \ln 
n\\end{aligned}$ ${}\\begin{aligned}\\frac{\\tilde{S}(n,T)}{k_{b}T}&=\\lim _{N,G \\rightarrow \\infty }\\frac{S(N,T,V)}{G} \\\\&=[\\tilde{d}(n)+n] \\ln [\\tilde{d}(n)+n] - \\tilde{d}(n) \\ln \\tilde{d}(n) - n \\ln n\\end{aligned}$ and the chemical potential, $\\mu =\\left(\\frac{\\partial \\tilde{F}}{\\partial n}\\right)_{T,V}$ , satisfies ${}K(T) \\ e^{\\beta \\mu }= \\frac{n \\ \\left[ \\tilde{d}(n) \\right]^{\\tilde{d}^{\\prime }(n)}}{\\left[ \\tilde{d}(n)+n\\right] ^{\\tilde{d}^{\\prime }(n)+1 }},$ where $\\tilde{d}(n)=\\lim _{N,G \\rightarrow \\infty } d_{N}/G$ , $\\tilde{d}^{\\prime }(n)= d[\\tilde{d}(n) ]/dn$ and $K(T)=e^{-\\beta U_{o}} \\ q_{i}$ .", "From Eq.", "(REF ), two related quantities are defined which will be later useful to fully interpret the state exclusion under spatial correlations.", "If the system of particles in $V$ is now assumed to exchange particles with a bath at chemical potential $\\mu $ and temperature $T$ , the time evolution of the state occupation $n$ is given by ${}\\frac{dn}{dt}= P_{o} \\ W_{o \\rightarrow \\bullet }- P_{\\bullet } \\ W_{\\bullet \\rightarrow o},$ where $P_{o}(P_{\\bullet })$ is the average fraction of empty (occupied) states in $V$ and $W_{o \\rightarrow \\bullet }(W_{\\bullet \\rightarrow o})$ the transition rate for an empty(occupied) state to get occupied (empty).", "In equilibrium, $dn/dt=0$ , $W_{o \\rightarrow \\bullet }/W_{\\bullet \\rightarrow o}=P_{\\bullet }/P_{o}=e^{\\beta (\\mu -U_{o})}$ , $P_{\\bullet }=n$ .", "From Eq.", "(REF ) and (REF ) ${}P_{o}(n)=P_{\\bullet }(n) \\ e^{-\\beta (\\mu -U_{o})}= \\frac{\\left[ \\tilde{d}(n)+n\\right] ^{\\tilde{d}^{\\prime }(n)+1 }}{\\left[ \\tilde{d}(n) \\right]^{\\tilde{d}^{\\prime }(n)} }.$ In addition, we introduce a new useful quantity, namely the exclusion spectrum function $\\mathcal {G}(n)$ , being the average number of excluded states per particle at occupation $n$ [36].", "Thus, $\\mathcal {G}(n)=\\left\\langle \\frac{1}{N} \\sum _{iº=1}^{G} e_{i} 
\\right\\rangle $ ${}\\begin{aligned}\\mathcal {G}(n)&=\\left\\langle \\frac{G}{N}\\frac{1}{G} \\sum _{i=1}^{G} e_{i} \\right\\rangle =\\frac{1}{n}\\left[ 1-P_{o}(n)\\right]=\\frac{1}{n}-\\frac{1}{e^{\\beta (\\mu -U_{o})}}\\end{aligned}$ where $e_{i}=1$ if the state $i$ out of $G$ is either occupied or excluded by any of the $N$ particles, or $e_{i}=0$ otherwise, and the average is assumed to be taken over the canonical ensemble.", "The identity $\\left\\langle \\frac{1}{G}\\sum _{i=1}^{G} e_{i} +P_{o}\\right\\rangle =1$ follows from the definition of $P_{o}$ .", "$\\mathcal {G}(n)$ characterizes the density dependence of the state exclusion for a spatially correlated many-particle system from zero-density to saturation.", "It is worth noticing that the rightmost side of Eq.", "(REF ) also provides an operational formula to infer the exclusion spectrum $\\mathcal {G}(n)$ from experiments.", "For instance, for adsorbed species under equilibrium conditions ($\\mu ,T$ ), $n$ is related to the surface coverage (so called adsorption isotherm) and $U_{o}$ is obtained from the low density regime of $n(\\mu ,T)$ .", "Spatially correlated states leading to multiple exclusion can be visualized, for instance, in the classical system of linear particles occupying sites on a square lattice (Fig.", "1).", "Given the set of states for a single particle containing all its possible configurations on the lattice, clearly an isolated dimer ($C_{1}$ ) occupies one state plus excluding six more states from being occupied by other particles.", "For a larger number of particles on the lattice there exist configurations in which some states are excluded simultaneously by neighboring particles ($C_{2}$ , $C_{3}$ and $C_{4}$ ).", "This is called here “multiple exclusion\" arising from spatial correlation between states, and it has significant effects on the thermodynamics of the system.", "Figure: Local configurations of dimers on a square lattice.", "C 1 C_{1} shows the states (dashed) 
excluded by an isolated particle.", "$C_{2}$ , $C_{3}$ and $C_{4}$ depict states (dashed) multiply excluded by neighboring dimers, 1, 2 and 6 for $C_{3}$ , $C_{2}$ and $C_{4}$ , respectively.", "It is known that the exact counting of configurations for an arbitrary number of particles on the lattice seems a hopeless task and it is still a relevant open problem in classical statistical mechanics.", "From here on, $d_{N}(\tilde{d}(n))$ is obtained through an approximation extending the Haldane-Wu state counting procedure to a system of correlated states, which determines the analytic multiple exclusion statistical distribution and the thermodynamics of the system.", "Given that the total number of states in $V$ is $G$ , as we add particles from the 1st to the $(N-1)^{th}$ , the recursion relations can be written: $d_{1}=G$ , $d_{2}=d_{1}-\mathcal {N}_{1},...,d_{N}=d_{N-1}-\mathcal {N}_{N-1}$ , where $\mathcal {N}_{j}$ is the number of states occupied plus excluded only by the $j^{th}$ particle.", "Considering that the $j^{th}$ particle added to $V$ occupies one state and in addition excludes a yet undetermined number of states out of $G$ , we write the relation $\mathcal {N}_{j}=1+\mathcal {G}_{cj}$ , where $\mathcal {G}_{cj}$ is the number of states excluded only by the $j^{th}$ particle [it does not account for the states excluded by $j$ which were already excluded by any of the particles $1,...,(j-1)$ because of the spatial correlations, or so-called multiple state exclusion].", "$\mathcal {G}_{cj}$ has to be rationalized as an average over all the configurations of particles $1,...,j$ on the $G$ states.", "For $j \rightarrow N$ and $N,G \rightarrow \infty $ with $N/G=n$ , it is straightforward that $\mathcal {G}_{cj}$ will converge to a value depending only on the ratio $N/G=n$ (as observed in simulation).", "Now we establish the following ansatz to determine $d_{N}$ [36] ${}\mathcal {N}_{j}=1+\mathcal {G}_{cj}=1+g_{c}\dfrac{d_{j}}{G},$ 
where $\\mathcal {G}_{cj}=g_{c}\\dfrac{d_{j}}{G}$ , i.e, a system-dependent exclusion constant $g_{c}$ times the fraction $\\dfrac{d_{j}}{G}$ of states that can be excluded by particle $j$ .", "It is worth mentioning that the second term in Eq.", "(REF ) resembles a sort of mean-field or effective-field approximation on the set of states which in the limit $N,G \\rightarrow \\infty $ will depend only on the mean occupation number $n=N/G$ .", "Based on Eq.", "(REF ) we can rewrite the recursion relations as: $ d_{1}=G, d_{2}=d_{1}-\\left[ 1+g_{c} \\frac{d_{1}}{G} \\right], d_{3}=d_{2}-\\left[1+g_{c} \\frac{d_{2}}{G} \\right]=G\\left[ 1-\\frac{g_{c}}{G}\\right]^{2}-\\left[ 1-\\frac{g_{c}}{G}\\right]-1,...,d_{N}=d_{N-1}-\\left[1+g_{c} \\frac{d_{N-1}}{G} \\right]=G \\left[ 1-\\frac{g_{c}}{G}\\right]^{N-1}-\\sum _{i=0}^{N-2} \\left[ 1-\\frac{g_{c}}{G}\\right]^{i}$ .", "By taking the limit $\\tilde{d}(n)=\\lim _{N,G \\rightarrow \\infty }d_{N}/G$ it yields $\\tilde{d}(n)=e^{-n g_{c}}- n$ .", "$\\tilde{d}(n)$ is defined except for two constants, say $\\tilde{d}(n)=C_{1} e^{-n g_{c}}-C_{2} n$ , provided that it must satisfy the boundary conditions $\\tilde{d}(0)=1$ and $\\tilde{d}(n_{m})=\\tilde{d}(1/g)=0$ , where the usual Haldane's exclusion constant $g$ is used here to denote the number of states excluded per particle at maximum occupation, $n_{m}=N_{m}/G=(G/g)/G=1/g$ .", "Thus, $C_{1}=1$ and $C_{2}=g e^{-\\frac{g_{c}}{g}}$ and finally ${}\\tilde{d}(n)=e^{-ng_{c}}-ge^{-\\frac{g_{c}}{g}}n.$ We may even think of $g_{c}$ in Eq.", "(REF ) as depending on $j$ , i.e., $g_{cj}$ .", "The recursion relations will lead to $d_{N}=d_{N-1}\\left[1-g_{c(N-1)}/G \\right]-1=G \\prod _{j=1}^{N-1}\\left[1-g_{cj}/G \\right] - \\sum _{i=2}^{N-1}\\prod _{j=i}^{N-1} \\left[1-g_{cj}/G \\right]-1$ .", "If $g_{cj}=g_{cN}+\\Delta _{j,N}$ , where $\\Delta _{j,N}$ is finite, then $d_{N}=G{\\left[1-g_{cN}/G \\right]^{N-1}-\\sum _{j=0}^{N-1}\\left[1-g_{cN}/G \\right]^{j}+\\mathcal {O}(1/G)}$ .", "In 
the $\\lim _{N,G \\rightarrow \\infty }d_{N}/G$ it yields $\\tilde{d}(n)=e^{-ng_{c}(n)}-n$ where $g_{c}(n)=\\lim _{N,G \\rightarrow \\infty }g_{cN}$ .", "From this, the ansatz (REF ) is the simplest assumption on $g_{c}(n)$ , $g_{c}(n)=g_{c}=$ constant, through which state exclusion is introduced in the state counting in presence of spatial correlations.", "This results in a fairly accurate approximation, as shown by comparing predicted observables and simulations for linear particle lattice gases.", "The exclusion constant $g_{c}$ is fully determined by the zero density limit of the mean number of states excluded particle, $\\mathcal {G}(n)$ .", "Accordingly, from Eqs.", "(REF ),(REF ) and (REF ) ${}\\begin{aligned}\\mathcal {G}_{o}=\\lim _{n\\rightarrow 0}\\mathcal {G}(n)=\\lim _{n\\rightarrow 0} \\left[1-P_{o}(n)\\right]/n=2g e^{-g_{c}/g}+2g_{c}-1\\end{aligned}$ $\\mathcal {G}_{o}$ being the state exclusion at zero density, i.e, number of states excluded by an isolated particle in the system.", "Moreover, $\\lim _{n\\rightarrow n_{m}}\\mathcal {G}(n)=\\lim _{n\\rightarrow n_{m}} \\left[1-P_{o}(n)\\right]/n=g$ .", "The two exclusion constants, $g_{c}$ and $g$ in Eq.", "(REF ), come from the infinite dilution and saturation limits of $\\mathcal {G}(n)$ , respectively.", "From here on, we analyze linear $k$ -mers ideal lattice gases under the proposed framework.", "We mean by linear $k$ -mers, linear rigid particles made of $k$ identical beads occupying $k$ consecutive sites (one bead per site) on a regular lattice.", "For instance, this is a simple model for small polyatomics/hydrocarbons adlayers.", "For $k$ -mers on a one-dimensional (1D) lattice, $g=k$ , $\\mathcal {G}_{o}=2k-1=2g-1$ , the solution of Eq.", "(REF ) is $g_{c}=0$ $\\forall k(\\forall g)$ and the case reduces to Haldane's FE and Wu's distribution with $g=k$ resulting in the exact density dependence of the chemical potential $\\mu \\equiv \\mu (n)_{T,V}$ from Eq.", "(REF ) (already derived in [34] 
for non-interacting $k$ -mers in 1D).", "In a $k$ -mer 1D lattice gas, each state of $N$ $k$ -mers on a lattice with $M=G$ sites and $n=N/M$ can be mapped onto one of $N$ monomers on an equivalent lattice with $M^{\prime }= M-(k-1)N$ sites and $n^{\prime }=N/M^{\prime }=n/[1-(g-1)n]$ .", "Thus, there is no effective spatial correlation between excluded states for $k$ -mers in 1D.", "On the other hand, for $k$ -mers on a square lattice of $M$ sites, $G=2M$ , $n_{m}=N_{m}/G=(M/k)/2M=1/(2k)=1/g$ , then $g=2k$ and $\mathcal {G}_{o}=k^{2}+2k-1=\frac{g^{2}}{4}+g-1$ .", "The solution of Eq.", "(REF ) is $g_{c}=\frac{g^{2}}{8}+\frac{g}{2}+g \mathcal {L}(z)$ for $g \ge 4$ , where $\mathcal {L}(z)$ is the positive solution of $z=\mathcal {W}(z) e^{\mathcal {W}(z)}$ , $ \mathcal {W}(z)$ being the Lambert function, namely, the inverse of $f(x)=x e^{x}$ , i.e., $x=\mathcal {W}(x e^{x})$ .", "Accordingly, $g_{c}=0$ for $k=2(g=4)$ , $g_{c}=4.807$ for $k=3(g=6)$ , $g_{c}=9.586$ for $k=4(g=8)$ , $g_{c}=15.344$ for $k=5(g=10)$ , $g_{c}=22.096$ for $k=6(g=12)$ , $g_{c}=29.838$ for $k=7(g=14)$ , $g_{c}=38.563$ for $k=8(g=16)$ , $g_{c}=48.267$ for $k=9(g=18)$ , $g_{c}=58.950$ for $k=10(g=20)$ .", "Furthermore, $\lim _{k\rightarrow \infty }{g}_{c}=\mathcal {G}_{o}/2$ .", "From Eq.", "(REF ), the occupation number, $n$ , in general satisfies the following relation, formally almost identical to the transcendental equation first derived by Wu [17] ${}\left[\tilde{d}(n)+n\right]^{\tilde{d^{\prime }}+1} \left[\tilde{d}(n)\right]^{-\tilde{d^{\prime }}}=n \ e^{\beta \left(U_{o}-\mu \right) }=n \ \xi ,$ where $\xi =e^{\beta \left(U_{o}-\mu \right) }$ .", "From the explicit form of $\tilde{d}(n)$ [Eq.", "(REF )], the distribution function can be symbolically written as ${}n=\frac{e^{-g_{c} n}}{w(\xi ) + g \ e^{-g_{c}/g}},$ similar to Wu's distribution where $n\equiv n(\xi )$ is the solution of the transcendental Eq.", "(REF ) and $w(\xi 
)=\tilde{d}(n)/n$ .", "For particles with exclusion parameter $g$ on spatially non-correlated states, $g_{c}=0$ , $\tilde{d}(n)=1-gn$ and Haldane's FE statistics is recovered, while Eq.", "(REF ) reduces to Wu's distribution [17].", "Furthermore, $\tilde{d^{\prime }}(n)=-g$ for $g_{c}=0$ , thus $w(\xi )=\xi -1$ for $g=0$ and $w(\xi )=\xi $ for $g=1$ , so that Eq.", "(REF ) reduces to the BE and FD statistics, respectively.", "Given that $w(\xi )=\tilde{d}(n)/n\ge 0$ , from Eq.", "(REF ) the occupation-number's range is $0\le n\le 1/g$ .", "At temperature $T=0$ (absolute scale), the distribution takes the step-like form $n=1/g$ for $U_{o}<\mu $ and $n=0$ for $U_{o}>\mu $ , as expected.", "Simulations of $k$ -mers lattice gases were carried out in the Grand Canonical Ensemble through the efficient algorithm introduced by Kundu et al.", "[37], [39] to overcome the sampling slowdown at high density due to the jamming effects.", "The temperature, chemical potential $\beta \mu $ and system's size are held fixed and the number of particles on the lattice is allowed to fluctuate through non-local changes, i.e., insertion and deletion of $k$ -mers at a time (in contrast to the standard Metropolis algorithm).", "Briefly, given a configuration of $k$ -mers on the lattice, one MC step is completed by removing all horizontal $k$ -mers and keeping the vertical ones.", "The probabilities corresponding to horizontal segments of unoccupied sites are exactly calculated and stored for all the segment sizes.", "Then segments are occupied by $k$ -mers with probabilities accordingly determined.", "An identical procedure is carried out in the vertical direction.", "A reproduction of these calculations is beyond the scope of this work.", "The detailed discussion is found in the original works, Refs.", "[37], [38], [39].", "The algorithm has proved to be ergodic, satisfies detailed balance, and equilibrium is reached after typically $10^{7}$ MC steps.", "$L \times L$ square 
lattices with periodic boundary conditions were used.", "The ratio $L/k$ was set to 120.", "With this value of $L/k$ , we verified that finite size effects are negligible.", "The observables $\mathcal {G}(n)$ [Eq.", "(REF )] and $n=\left\langle N \right\rangle /G= \left\langle N \right\rangle /(2L^{2})$ were calculated by averaging over $10^{7}$ configurations.", "The distribution function $n$ versus $\beta (\mu -U_{o})$ [Eq.", "(REF )] is represented in Fig.", "REF and compared with simulation for linear particles of size $k=2$ to $k=10$ .", "Figure: State occupation number $n$ versus $\beta (\mu -U_{o})$ for $k=2,4,5,6,7,8,10$ on a square lattice.", "Lines represent the analytical predictions from Eq.", "(); symbols come from simulations.", "Inset shows the case $k=10$ for a smaller $g_{c}=39$ so as to visualize the state exclusion effect of the nematic ordering.", "The analytical predictions are accurate for all the particle sizes, being much better as $k$ increases up to $k=7$ .", "The ansatz in Eq.", "(REF ) does not account explicitly for the system's dimensionality, particle size and shape, or lattice structure, but all the state correlations are embedded in the exclusion constant $g_{c}$ .", "For instance, the solid line in Fig.", "REF for $k=2$ represents approximately the simulation results for dimers on the square lattice, $k=2 \ (\mathcal {G}_{o}=7,g=4)$ , and it does exactly for tetramers on a 1D lattice, $k=4 \ (\mathcal {G}_{o}=7,g=4)$ .", "For both cases the solution of Eq.", "(REF ) is $g_{c}=0$ .", "For $k\ge 7$ , it is known that a nematic transition develops at intermediate lattice coverage with particles aligned along a lattice direction in compact clusters [40].", "Its effect is clearly seen in Fig. 2 for the case $k=10$ at intermediate occupation, where simulation and the analytical function do not match.", "However, because the nematic ordering increases the number of multiply excluded states per particle, $n$ can 
be very accurately represented by the multiple exclusion statistics for a smaller value of the constant $g_{c}$ [according to the meaning of the corresponding term in Eq.", "(REF )] as shown in the inset of Fig.", "REF .", "In addition, results for the exclusion spectra $\mathcal {G}(n)$ from Eq.", "(REF ) are shown in Fig.", "REF as a function of the lattice coverage $\theta =k<N>/M$ , where $<N>$ and $M$ represent the average number of particles on the lattice and the number of lattice sites, respectively.", "Given that $\theta =k<N>/M=k<N>/(G/2)=2k<N>/G=g n$ , all the quantities above can be expressed in the nomenclature of lattice coverage by the variable change $n=\theta /g$ with $0\le \theta \le 1$ .", "The adsorption isotherm ($\mu $ vs $\theta $ ) follows straightforwardly from Eq.", "(REF ) and (REF ), $ \beta \mu =\ln [\frac{\theta }{g}]+[g_{c} e^{(-\theta g_{c}/g)}+ge^{(-g_{c}/g)}-1] \ln [e^{(-\theta g_{c}/g)}-e^{(-g_{c}/g)} \theta +\theta /g]- [g_{c} e^{(-\theta g_{c}/g)}+ge^{(-g_{c}/g)}] \ln [e^{(-\theta g_{c}/g)}-e^{(-g_{c}/g)} \theta ]+\beta U_{o}$ .", "Figure: Exclusion spectrum $\mathcal {G}(\theta )$ for $k=2$ to $k=10$ (from bottom to top).", "Solid lines are analytical results from Eq.", "() with $n=\theta /g=\theta /(2k)$ .", "Symbols represent simulations.", "Concerning the new quantity we have introduced, $\mathcal {G}(\theta )$ , the predictions from this work [Eq.", "(REF ) along with (REF ) and (REF )] reproduce the exclusion per particle remarkably well for all $k$ as density varies.", "This appears as a very useful function in the presence of correlations, since it can be obtained directly either from the distribution $n(\mu )$ or from experiments, providing a relevant average measurement of the spatial configuration of particles in the system from thermodynamics.", "The limiting values are $\mathcal {G}(0)=\mathcal {G}_{o}$ and $\mathcal {G}(1)=g$ .", "Additionally, state exclusion can be 
observed through $\mathcal {G}(\theta )$ in the presence of particle interactions and order-disorder transitions, as will be presented in future work.", "Finally, an approach to the equilibrium statistics of many-particle systems with exclusion having spatially correlated states for single particles has been put forward, the statistical distribution has been obtained, a useful exclusion spectrum function has been defined and the results applied to 2D lattices for small to large linear particles, resulting in significant agreement for such complex statistical systems.", "The formalism can be straightforwardly applied to other particle/lattice geometries and higher dimensions.", "In addition, the analysis could be extended to more complex off-lattice systems in the presence of mutual exclusion (such as hard disks and spheres in the continuum).", "This work is in progress.", "This paper was supported in part by CONICET and Universidad Nacional de San Luis, Argentina." ] ]
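The closed-form ingredients above — $\tilde{d}(n)=e^{-ng_{c}}-ge^{-g_{c}/g}n$ and the transcendental relation $[\tilde{d}+n]^{\tilde{d}'+1}\tilde{d}^{-\tilde{d}'}=n\xi$ — lend themselves to a quick numerical check. The sketch below (function names and the bisection bracket are our own choices, and $q_{i}=1$ is assumed) verifies the boundary conditions $\tilde{d}(0)=1$ and $\tilde{d}(1/g)=0$, and that for $g_{c}=0$, $g=1$ the distribution collapses to Fermi-Dirac, $n=1/(\xi+1)$.

```python
import numpy as np

def d_tilde(n, g, gc):
    """Fraction of accessible states: d~(n) = exp(-n*gc) - g*exp(-gc/g)*n."""
    return np.exp(-gc * n) - g * np.exp(-gc / g) * n

def d_tilde_prime(n, g, gc):
    """Derivative d~'(n) = -gc*exp(-n*gc) - g*exp(-gc/g)."""
    return -gc * np.exp(-gc * n) - g * np.exp(-gc / g)

def occupation(xi, g, gc):
    """Solve [d~+n]^(d~'+1) * d~^(-d~') = n*xi for n in (0, 1/g) by bisection."""
    def f(n):
        d, dp = d_tilde(n, g, gc), d_tilde_prime(n, g, gc)
        return (d + n) ** (dp + 1.0) * d ** (-dp) - n * xi
    lo, hi = 1e-12, 1.0 / g - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Boundary conditions of d~(n), using the quoted k = 3 square-lattice values
g, gc = 6.0, 4.807
print(d_tilde(0.0, g, gc))           # 1.0: all states accessible at n = 0
print(abs(d_tilde(1.0 / g, g, gc)))  # ~0: no states left at saturation n = 1/g

# gc = 0, g = 1 must reduce to Fermi-Dirac, n = 1/(xi + 1)
n_fd = occupation(xi=3.0, g=1.0, gc=0.0)
print(n_fd)                          # ~0.25
```

The bisection works because the left-hand side of the transcendental equation decreases from a positive value at $n\to 0$ (where $\tilde{d}\to 1$) to below $n\xi$ at $n\to 1/g$ (where $\tilde{d}\to 0$), so a single sign change is bracketed.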
1906.04300
[ [ "Lyman-alpha in the GJ 1132 System: Stellar Emission and Planetary\n Atmospheric Evolution" ], [ "Abstract GJ 1132b, which orbits an M dwarf, is one of the few known Earth-sized planets, and at 12 pc away it is one of the closest known transiting planets.", "Receiving roughly 19x Earth's insolation, this planet is too hot to be habitable but can inform us about the volatile content of rocky planet atmospheres around cool stars.", "Using Hubble STIS spectra, we search for a transit in the Lyman-alpha line of neutral hydrogen (Ly-alpha).", "If we were to observe a deep Ly-alpha absorption signature, that would indicate the presence of a neutral hydrogen envelope flowing from GJ 1132b.", "On the other hand, ruling out deep absorption from neutral hydrogen may indicate that this planet does not have a detectable amount of hydrogen loss, is not losing hydrogen, or lost hydrogen and other volatiles early in the star's life.", "We do not detect a transit and determine a 2-sigma upper limit on the effective envelope radius of 0.36 R* in the red wing of the Ly-alpha line, which is the only portion of the spectrum we detect after absorption by the ISM.", "We analyze the Ly-alpha spectrum and stellar variability of GJ 1132, which is a slowly-rotating 0.18 solar mass M dwarf with previously uncharacterized UV activity.", "Our data show stellar variabilities of 5-22%, which are consistent with the M dwarf UV variabilities of up to 41% found by \citet{Loyd2014}.", "Understanding the role that UV variability plays in planetary atmospheres is crucial to assessing atmospheric evolution and the habitability of cooler rocky exoplanets." 
], [ "Introduction", "The recent discoveries of terrestrial planets orbiting nearby M dwarfs [20], [3], [16], [4], [36] provide us with the first opportunity to study small terrestrial planets outside our solar system, and observatories such as the Hubble Space Telescope allow us to analyze the atmospheres of these rocky exoplanets.", "Additionally, it is important that we learn as much as we can about these planets as we prepare for atmospheric characterization with the James Webb Space Telescope [14], [38].", "JWST will provide unique characterization advantages due to its collecting area, spectral range, and array of instruments that allow for both transmission and emission spectroscopy [2].", "M dwarfs have been preferred targets for studying Earth-like planets due to their size and temperature, which allow for easier detection and characterization of terrestrial exoplanets.", "However, the variability and high UV-to-bolometric flux ratio of these stars make habitability a point of contention [48], [54].", "It is currently unknown whether rocky planets around M dwarfs can retain atmospheres and liquid surface water or if UV irradiation and frequent flaring render these planets uninhabitable [47], [21], [34], [9].", "On the contrary, UV irradiation may boost the photochemical synthesis of the building blocks of life [44].", "We must study the UV irradiation environments of these planets, especially given that individual M stars with the same spectral type can exhibit very different UV properties [60], and a lifetime of UV flux from the host star can have profound impacts on the composition and evolution of their planetary atmospheres.", "Table: GJ 1132 system parameters.", "One aspect of terrestrial planet habitability is volatile retention, including that of water in the planet's atmosphere.", "One possible pathway of evolution for water on M dwarf terrestrial worlds is the evaporation of surface water and subsequent photolytic destruction of H$_2$O into H and O 
species [9], [24].", "The atmosphere then loses the neutral hydrogen while the oxygen is combined into O$_2$/O$_3$ and/or resorbed into surface sinks [58], [53], [34], [48], [23].", "In this way, large amounts of neutral H can be generated and subsequently lost from planetary atmospheres.", "Studies have shown O$_2$ and O$_3$ alone to be unreliable biosignatures for M dwarf planets because they possess abiotic formation mechanisms [52], though they are still important indicators when used with other biomarkers [35].", "Understanding atmospheric photochemistry for terrestrial worlds orbiting M dwarfs is critical to our search for life.", "[28] and [17] discovered that Gliese 436b, a warm Neptune orbiting an M dwarf, has a 56.3$\pm $3.5% transit depth in the blue-shifted wing of the stellar Ly$\alpha $ line.", "[30] further studied this system to solidify the previous results and verify the predictions for the structure of the outflowing gas made by [7].", "For planets of this size and insolation, atmospheric escape can happen as a result of the warming of the upper layers of the atmosphere, which expand and will evaporate if particles begin reaching escape velocity [55], [29], [39].", "Figure: Image of a STIS x2d spectrum.", "Geocoronal Ly$\alpha $ is shown as a long vertical line while the GJ 1132 Ly$\alpha $ emission is shown in the center.", "Figure: All 14 STIS Ly$\alpha $ spectra in visits 1 (a) and 2 (b) and the averaged stacked spectrum (c).", "The shape of the stellar Ly$\alpha $ line is a Voigt profile which has been reshaped by convolution with the STIS line spread function and ISM absorption by neutral atomic hydrogen and deuterium.", "The integration regions for summing up the total Ly$\alpha $ flux are the shaded blue and red areas in (b), with a region in the middle that we omit due to the geocoronal emission.", "It is apparent that the blue-shifted region of the spectrum is at the noise level, and therefore unlikely to give us any viable 
information.", "We set the reference velocity for the spectral profiles at 35 km s -1 35~\\rm {km~s^{-1}}, as this is the cited system velocity .", "[37] find that the source of this outflowing hydrogen is from the H$_2$ -dominated atmosphere of Gl 436b, with reactions fueled by OH$^-$ .", "Ly$\\alpha $ photons from the M dwarf host star dissociate atmospheric H$_2$ O into OH and H, which destroy H$_2$ .", "HI at high altitudes where escape is occurring is formed primarily through dissociation of H$_2$ with contributions from the photolyzed H$_2$ O.", "Modeling of Gl 436b [5], [7] demonstrates that the combination of low radiation pressure, low photo-ionization, and charge-exchange with the stellar wind can determine the structure of the outflowing hydrogen, which manifests as a difference in whether the light curve shows a transit in the blue-shifted region of Ly$\\alpha $ or the red-shifted region and imprints a specific spectro-temporal signature to the blue-shifted absorption.", "[30] used new observations to confirm the [7] predictive simulations that this exosphere is shaped by charge-exchange and radiative braking.", "As giant hydrogen clouds have thus been detected around warm Neptunes [10], it opens the possibility for the atmospheric characterization of smaller, terrestrial planets.", "[37] also find that photolysis of H$_2$ O also increases CO$_2$ concentrations.", "For Earth-like planets orbiting M dwarfs, understanding the photochemical interaction of Ly$\\alpha $ photons with water is very important for the evolution and habitability of a planet's atmosphere." 
], [ "GJ 1132b", "GJ 1132b is a small terrestrial planet discovered through the MEarth project [3].", "It orbits a 0.181 M$_$ M dwarf located 12 parsecs away with an orbital period of 1.6 days [16].", "Table REF summarizes its basic properties.", "This is one of the nearest known transiting rocky exoplanets and therefore provides us with a unique opportunity to study terrestrial atmospheric evolution and composition.", "While GJ 1132b is too hot to have liquid surface water, it is important to establish whether this planet and others like it retain substantial atmospheres under the intense UV irradiation of their M dwarf host stars.", "Knowing whether warm super-Earths such as GJ 1132b regularly retain volatiles such as water in their atmospheres constrains parameter space for our understanding of atmospheric survivability and habitability.", "[15] rule out a low mean-molecular weight atmosphere for this planet by analyzing ground-based transmission spectra at 700-1040 nm.", "By fitting transmission models for atmospheric pressures of 1-1000 mbar and varying atmospheric composition, they find that all low mean-molecular weight atmospheres are a poor fit to the data, which is better described as a flat transmission spectrum that could be due to a $>$ 10x solar metallicity or $>$ 10$\\%$ water abundance.", "Whether these results imply GJ 1132b has a high mean molecular weight atmosphere or no atmosphere at all remains to be seen.", "If we detect a Ly$\\alpha $ transit then this implies UV photolysis of H$_2$ O into neutral H and O, leading to outflowing neutral H. 
The oxygen could recombine into O$_2$ and O$_3$ , resulting in a high mean-molecular weight atmosphere, and wholesale oxidation of the surface.", "This work serves as the first characterization of whether there is a neutral hydrogen envelope outflowing from GJ 1132b as well as an opportunity to characterize the deepest (longest integration) Ly$\\alpha $ spectrum of any quiet M dwarf of this mass.", "Figure: Intrinsic Ly$\\alpha $ profile for GJ 1132b, with 200 random MCMC samples in gray.", "The absorption and intrinsic emission models were modeled with the Lyapy software which assumes a Voigt profile for the emission and parameterizes the ISM absorption into velocity, line width, and column density.", "Here, the line center is in the system's rest frame." ], [ "Solar System Analogs", "The atmospheric evolution and photochemistry we evaluate here is similar to what we have seen in Mars and Venus.", "Much of Mars' volatile history has been studied in the context of Ly$\\alpha $ observations of a neutral H corona that surrounds present-day Mars.", "[13] use Ly$\\alpha $ observations to constrain the structure of Mars' escaping neutral H corona, similar to what we attempt in this work.", "Indeed, Mars has historically lost H$_2$ O via photochemical destruction and escape of neutral H [40], [61], though the solar wind-driven escape mechanisms for Mars are not necessarily the same as what we propose for GJ 1132b in this work.", "Venus has long been the example for what happens when a terrestrial planet is irradiated beyond the point of habitability, as is more than likely the case with GJ 1132b.", "Venus experienced a runaway greenhouse effect which caused volatile loss and destruction of H$_2$ O.", "[25] study the effects of solar UV radiation on an early Venus atmosphere.", "They find that within a billion years, Venus could have lost most of a terrestrial ocean of water through hydrodynamic escape of neutral H, after photochemical destruction of H$_2$ O.
GJ 1132b has a higher surface gravity than Venus, which would extend this timescale of hydrogen loss, but it also has a much higher insolation, which would reduce the hydrogen loss timescale.", "Later in this work, we will estimate the expected maximum mass loss rate for GJ 1132b based on the stellar Ly$\\alpha $ profile.", "The rest of the paper is organized as follows.", "In §2 we describe the methods of analyzing the STIS data, reconstructing the stellar spectrum, and analyzing the light curves.", "In §3 we describe the transit fit and intrinsic spectrum results.", "We discuss the results and their implications in §4, including estimates of the mass loss rate from this planet's atmosphere.", "In §5 we describe what pictures of GJ 1132b's atmosphere we are left with.", "Figure: Corner plot showing the samples used in recreating the intrinsic emission profile.", "We omitted the stellar radial velocity samples because the prior was well constrained by independent radial velocity measurements.", "In this plot, log(A) is the log of the emission amplitude (which has units of erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$), FWHM is the emission full width at half maximum in km s$^{-1}$, logN(HI) is the log of the column density of neutral ISM hydrogen (which has units of cm$^{-2}$), b is the ISM Doppler parameter in km s$^{-1}$, and v$_{\\rm {HI}}$ is the ISM cloud velocity in km s$^{-1}$."
], [ "Hubble STIS Observations", "To study the potential existence of a neutral hydrogen envelope around this planet, we scheduled 2 transit observations of 7 orbits each (2 observations several hours from mid-transit for an out of transit measurement and 5 observations spanning the transit) with the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST)Cycle 24 GO proposal 14757, PI: Z Berta-Thompson.", "We used the G140M grating with the 52”x0.05” slit, collecting data in TIME-TAG mode with the FUV-MAMA photon-counting detector.", "This resulted in 14 spectra containing the Ly$\\alpha $ emission line (1216 Å), which show a broad profile that has been centrally absorbed by neutral ISM atomic hydrogen.", "We re-extracted the spectra and corrected for geocoronal emission using the calstis pipeline [22].", "The STIS spectrum extraction involved background subtraction which accounts for geocoronal emission (see Fig.", "REF ), leaving us only with the need to model the stellar emission and ISM absorption.", "We omit data points from both visits that fall within the geocoronal emission signal, wavelengths from both visits that overlapped with strong geocoronal emission and therefore had high photon noise.", "We thus define our blue-shifted region to be $<$ -60 km s$^{-1}$ and our red-shifted region to be $>$ 10 km s$^{-1}$ relative to the star.", "One potential source of variability is where the target star falls on the slit.", "If it fell directly on the slit, then the observed flux will be more than if the star was partially off the slit.", "To account for this, we scheduled ACQ/PEAK observations at the start of each HST orbit to center the star on the slit and minimize this variability.", "In order to analyze the light curves with higher temporal resolution, we used the STIS time-tag mode to split each of the 14 2 ks exposures into 4 separate 0.5 ks sub-exposures.", "This detector records the arrival time of every single photon, which is what 
allows us to create sub-exposures in time-tag mode.", "Each 2D spectrum sub-exposure was then converted into a 1D spectrum.", "To do this, we first defined an extraction window around the target spectrum (see Fig.", "REF ) and summed up all the flux in that window along the spatial axis.", "Extraction windows were also defined on either side of the target in order to estimate the background and subtract that from the target window.", "This results in a noisy line core but eliminates the geocoronal emission signature (Fig.", "REF a & REF b).", "These steps were all performed with calstis.", "Figure: Modeled light curves from both visits.", "In addition to the calibrated flux values, we display the flux in photons s$^{-1}$ because the SNR is very low at Ly$\\alpha $ and this motivated us to use a Poisson likelihood in our analysis of the light curves.", "Some data points fall to negative values, which can happen when the data point has effectively no flux and then data reduction processes (such as background subtraction) subtract a slightly higher amount of flux.", "The gray bars indicate what we calculate as a 15% \"stellar variability\" fudge factor, acquired by calculating what size of error bars would be necessary to result in a $\\chi ^2$ value of 1 for our best-fit models.", "The blue wing light curves don't provide much information due to their extremely low flux but we can see from the red wing fits that there is an upper limit on the transit depth."
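The box-extraction step described in this section (sum the flux in a window around the target trace, then subtract a background estimated from flanking windows) can be sketched in a few lines of NumPy. This is only a schematic illustration of the idea; the actual extraction is performed by calstis, and the function name, window boundaries, and synthetic frame below are all hypothetical:

```python
import numpy as np

def extract_1d(image, target_rows, bg_rows_below, bg_rows_above):
    """Sum flux over a spatial window and subtract the per-row background
    estimated from windows on either side of the target trace."""
    target = image[target_rows[0]:target_rows[1], :].sum(axis=0)
    n_target = target_rows[1] - target_rows[0]
    # Per-row, per-column background from the two flanking windows.
    bg = np.vstack([image[bg_rows_below[0]:bg_rows_below[1], :],
                    image[bg_rows_above[0]:bg_rows_above[1], :]]).mean(axis=0)
    return target - n_target * bg

# Synthetic 2D frame: uniform background of 2 counts plus a 3-row trace.
frame = np.full((20, 5), 2.0)
frame[9:12, :] += 10.0                      # target spectrum in rows 9-11
spec = extract_1d(frame, (9, 12), (2, 6), (14, 18))
```

On this synthetic frame the extraction recovers the injected trace flux (10 counts per row over the 3-row window) with the uniform background removed.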
], [ "Stellar Spectrum Reconstruction", "With the same spectra used for light curve analysis, we created a single weighted average spectrum, representing 29.3 ks (8.1 hrs) of integration at Ly$\\alpha $ across 14 exposures (Fig.", "REF c).", "This stacked spectrum was used with LyaPy modeling program [59] that uses a 9-dimensional MCMC to reconstruct the intrinsic stellar spectrum assuming a Voigt profile.", "Modeling observed Ly$\\alpha $ spectra is tricky because of the neutral ISM hydrogen found between us and GJ 1132.", "This ISM hydrogen has its own column density, velocity, and line width which creates a characteristic absorption profile within our Ly$\\alpha $ emission line.", "This model takes 3 ISM absorption parameters (column density, cloud velocity, Doppler parameter) and models the line core absorption while simultaneously modeling the intrinsic emission which would give us the resulting observations.", "Turbulent velocity of the ISM is assumed to be negligible, with the line width dominated by thermal broadening.", "A fixed deuterium-to-hydrogen ratio of $1.56\\times 10^{-5}$ [56] is also applied to account for the deuterium absorption and emission near Ly$\\alpha $ .", "Modeling the ISM parameters required us to approximate the local interstellar medium as a single cloud with uniform velocity, column density, and Doppler parameter.", "While the local ISM is more complex than this single component and contains two clouds (G, Cet) in the line of sight toward GJ 1132 [42], our MCMC results strongly favored the velocity of the G cloud, so we defined the ISM priors based on this cloud [42], [43].", "We use uniform priors for the emission amplitude and FWHM, and Gaussian priors for the HI column density, stellar velocity, HI Doppler width, and HI ISM velocity.", "The HI column density and Doppler width parameter spaces were both truncated in order to prevent the model from exploring physically unrealistic values.", "For N$_{\\rm {HI}}$ , we restrict the 
parameter space to 10$^{16}$ -10$^{20}$ cm$^{-2}$ , based on the stellar distance (12.04 pc) and typical n$_{\\rm {HI}}$ values of $0.01-0.1$  cm$^{-3}$ [42], [57].", "We limit the Doppler width to 6-18 km s$^{-1}$ , based on estimates of the Local Interstellar Cloud (LIC) ISM temperatures [42]." ], [ "Light Curve Analysis", "The extracted 1D spectra were then split into a blue-shifted regime and red-shifted regime, on either side of the Ly$\\alpha $ core (Fig.", "REF c) so that we could integrate the total blue-shifted and red-shifted flux and create 4 total light curves from the 2 visits (Fig.", "REF ).", "Each of these light curves was fitted with a BATMAN [27] light curve using a 2-parameter MCMC with the emcee package [18].", "The BATMAN models assume that the transiting object is an opaque disk, which is usually appropriate for modeling planetary sizes.", "However, we are modeling a possible hydrogen exosphere which may or may not be disk-like, and which would have varying opacity with radius.", "For this work, we use the BATMAN modeling software with the understanding that our results tell us the effective radius of a cartoon hydrogen exosphere, with an assumed spherical geometry.", "Figure: Joint posterior distribution for the R$_p$/R$_*$ distributions for both visits.", "Poisson likelihoods were used due to the low photon count regime of these spectra. We fit for R$_p$ /R$_*$ and the baseline flux using a Poisson likelihood for each visit.", "We use a Poisson distribution because at Ly$\\alpha $ , the STIS detector is receiving very few photons.", "Our log-likelihood function is: $\\ln (likelihood)=\\sum _i[d_i\\ln (m_i)-m_i-\\ln (d_i!)]$ where $d_i$ is the total (gross) number of photons detected and $m_i$ is the modeled number of photons detected.", "The photon model is acquired by taking a BATMAN model of in-transit photons and adding the sky photons, which is data provided through the calstis reduction pipeline.", "Uniform priors are assumed for
both R$_p$ /R$_*$ and the baseline flux.", "We restrict our parameter space to explore only effective cloud radii $>$  0, representing physically plausible clouds that block light during transit.", "By taking simple averages of the light curve fluxes, we find the ratio of in-transit to out-of-transit flux to be $1.01\\pm 0.16$ for the visit 1 red-wing flux and $0.97\\pm 0.13$ for the visit 2 red-wing.", "As both are consistent with no detectable transit, the constraints we obtain from the fitting procedure will represent upper limits on the effective size of any hypothetical cloud." ], [ "Spectrum Reconstruction", "Figure REF shows the best-fit emission model with 1-sigma models and a corner plot to display the most crucial modeling parameters, with MCMC results shown in Table REF and Figure REF .", "This result gives us the total Ly$\\alpha $ flux for this M dwarf.", "The results of the stellar spectrum reconstruction indicate that there is one component of Ly$\\alpha $ flux, though that is potentially a result of the low SNR regime of these observations.", "Additionally, our fit indicates that there is one dominant source of ISM absorption between us and GJ 1132: a single cloud with velocity $-3.1$  km s$^{-1}$ , HI column density $10^{17.9}$  cm$^{-2}$ and Doppler parameter $13.9$ km s$^{-1}$ .", "Our current understanding of the LIC [42], [43] indicates that there should be 2 clouds, G and Cet, in the line of sight of GJ 1132, but our derived v$_{HI}$ is consistent with the velocity of G, which is reported as $-2.73\\pm 0.94$  km s$^{-1}$ .", "We take this to mean that the G cloud is the dominant source of absorption and that we can subsequently reconstruct this spectrum under a single-cloud assumption.", "Figure: Comparison of F[Ly$\\alpha $ ]/F[bol] for GJ 1132 with stars in the MUSCLES Treasury Survey, TRAPPIST-1, HD 97658, GJ 436, GJ 3470, as well as the Sun.", "The stars shown here are all M and K dwarfs that are known
exoplanet hosts.", "The error bars on GJ 1132 are statistical errors based on our modeling, so we have included the flux ratios from both visits (9 months apart) to display the variability we see in the data, labeled V1 and V2. By integrating the reconstructed emission profile, we find a Ly$\\alpha $ flux of 2.88$^{+0.42}_{-0.31}$ $\\times $ 10$^{-14}$ erg s$^{-1}$  cm$^{-2}$ which gives f[Ly$\\alpha $ ]/f[bol] = 2.9$\\pm $ 0.4$\\times $ 10$^{-5}$ , where we have calculated the bolometric flux of GJ 1132 as: $f_{\\rm {bol}} = \\sigma T_{\\rm {eff}}^4\\left(\\frac{R_*}{\\rm {distance}}\\right)^2,$ where values for the T$_{\\rm {eff}}$ and R$_*$ were taken from [4] and the distance to the star is taken from [16].", "Compared with the Sun, which has f[Ly$\\alpha $ ]/f[bol] = 4.6$\\times $ 10$^{-6}$ [32], we can see that this M dwarf emits fractionally 6x more of its radiation in the ultraviolet.", "Given the intra-visit stellar variability, we also modeled the average Ly$\\alpha $ spectra for visits 1 and 2 separately.", "All modeled parameters (see Fig.", "REF ) were consistent between visits except the FWHM, which differed by 3-$\\sigma $ , and the total integrated fluxes, which differed by 2-$\\sigma $ (2.90$^{+0.47}_{-0.41}$ $\\times $ 10$^{-14}$ erg s$^{-1}$  cm$^{-2}$ for visit 1 and 4.30$^{+0.52}_{-0.43}$ $\\times $ 10$^{-14}$ erg s$^{-1}$  cm$^{-2}$ for visit 2).", "For the calculation of mass loss rates in §4.1, we use the integrated flux of the combined reconstructed spectrum (Fig.", "REF )."
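The f[Ly$\alpha$]/f[bol] value quoted above can be checked directly from the bolometric flux equation. The script below is a sanity check, not part of the paper's pipeline; the stellar parameters (T$_{\rm eff}$ ≈ 3270 K, R$_*$ ≈ 0.207 R$_\odot$, d = 12.04 pc) are assumed inputs of the order quoted for GJ 1132 in the literature:

```python
# Constants (cgs)
SIGMA_SB = 5.6704e-5      # Stefan-Boltzmann, erg cm^-2 s^-1 K^-4
R_SUN = 6.957e10          # solar radius, cm
PC = 3.0857e18            # parsec, cm

# Assumed stellar parameters for GJ 1132 (hypothetical inputs, see lead-in).
t_eff, r_star, d = 3270.0, 0.207 * R_SUN, 12.04 * PC

# f_bol = sigma T_eff^4 (R_* / distance)^2, as in the text.
f_bol = SIGMA_SB * t_eff**4 * (r_star / d)**2   # erg s^-1 cm^-2 at Earth
f_lya = 2.88e-14                                # reconstructed Ly-alpha flux
ratio = f_lya / f_bol
print(f"f_bol ~ {f_bol:.2e} erg/s/cm^2, f[Lya]/f[bol] ~ {ratio:.1e}")
print(f"ratio relative to solar (4.6e-6): ~{ratio / 4.6e-6:.1f}x")
```

With these inputs the ratio comes out near 3$\times$10$^{-5}$, about 6x the solar value, consistent with the numbers in the text.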
], [ "Light Curve Modeling", "The light curves for both visits are shown in Figure REF .", "MCMC modeling of these light curves resulted in best fit parameters shown in Table REF .", "We report no statistically significant transits, but we can use the modeling results to calculate limits on the hydrogen cloud parameters.", "To ensure that we were not biasing our results by converting from the measured flux counts to photons s$^{-1}$ , we also analyzed the flux-calibrated light curves with Gaussian likelihoods based on pipeline errors and found the results did not significantly differ from what we present here." ], [ "The STIS Breathing Effect", "There is a well-known intra-orbit systematic which shows up in Hubble STIS observations known as the breathing effect which can result in a change of amplitude of about $0.1\\%$ over the course of an HST orbit.", "[12], [49], [9].", "This effect is small compared to the photon uncertainty in these observations, but to examine this STIS systematic, we perform our light curve analysis on the non-time-tagged data.", "We find that the results are consistent with our time-tagged analysis, so we posit that this effect does not significantly alter our conclusions." 
], [ "Stellar Variability", "The red wing of our spectral data show a highly variable stellar Ly$\\alpha $ flux over the course of these HST visits and we quantify this variability as a Gaussian uncertainty, $\\sigma _x^2 =\\sigma _{\\rm {measured}}^2-\\sigma _{\\rm {photometric}}^2,$ where $\\sigma _{\\rm {measured}}$ is our RMS noise and $\\sigma _{\\rm {photometric}}$ is the calstis-generated error propagated through our spectral integration.", "Within one 90-minute HST orbit, we see flux variabilities ($\\sigma _x$ ) of 5-16% for visit 1 and 7-18% for visit 2.", "Among one entire 18-hour visit, variability is 20% for visit 1 and 14% for visit 2 while in the 9 months between the two visits, there is a 22% offset.", "These results are consistent with the 1-41% M dwarf UV variability found by [33]." ], [ "Discussion", "With 14 STIS exposures, we have characterized a long-integration Ly$\\alpha $ spectrum and furthered our understanding of the intensity of UV flux from this M dwarf.", "[19] find that as much as half of the UV flux of quiescent M dwarfs is emitted at Ly$\\alpha $ , so knowing the total amount of flux at this wavelength serves as a proxy for the total amount of UV flux for this type of star.", "Our measurement of this Ly$\\alpha $ flux provides a useful input for photochemical models of haze, atmospheric escape, and molecular abundances in this planet's atmosphere.", "From the red-shifted light curves, we can calculate a 2-$\\sigma $ upper limit on the radius of this potential hydrogen cloud outflowing from GJ 1132b.", "We calculate this upper limit (see Fig.", "REF ) by taking the joint (visit 1 & visit 2) posterior distributions that resulted from MCMC modeling of these light curves and integrating the CDF to the 95% confidence interval and examining the corresponding R$_p$ /R$_*$ .", "The 2-$\\sigma $ upper limit from the red-shifted Ly$\\alpha $ spectra gives us an R$_p$ /R$_*$ of 0.36.", "The upper limit R$_p$ /R$_*$ from the blue-shifted light 
curves is 0.62, but given the very low SNR of that data, this is not a meaningful constraint.", "The red-shifted result is an upper limit on the effective radius of a hydrogen coma, and the real coma could be much more diffuse and asymmetric.", "Table: Intrinsic emission line model parameters taken from MCMC samples, with 1-$\\sigma $ error bars.", "Total Flux (1 AU) is the flux if it were measured 1 AU from the star, whereas the Total Flux is the flux as measured at HST." ], [ "GJ 1132b Atmospheric Loss", "In order to connect our results to an upper limit on the possible mass loss rate of neutral H from this planet's atmosphere, we follow the procedure outlined in [28].", "Assuming a spherically symmetric outflowing cloud of neutral H, the equation for mass loss is $\\dot{M}_{HI} = 4\\pi r^2v(r)n_{HI}(r)$ where $v(r)$ is the outflowing particle velocity and $n_{HI}(r)$ is the number density of HI at a given radius, r. For this calculation, we will be examining our 2-$\\sigma $ upper limit radius at which the cloud becomes optically thick, where (R$_p$ /R$_*$ )$^2$ = $\\delta $ = 0.13.", "We assume a $v$ range of $10-100$  km s$^{-1}$ , which is the range of the planet's escape velocity (10 km s$^{-1}$ ) and the stellar escape velocity (100 km s$^{-1}$ ).", "[28] reduce Equation (3) to $\\dot{M}_{HI} = \\frac{2\\delta {R_*}mv}{\\sigma _0}$ with a Ly$\\alpha $ absorption cross-section $\\sigma _0$ defined as $\\sigma _0 = \\frac{\\sqrt{\\pi }e^2}{m_ec\\Delta \\nu _D}f$ where $e$ is the electron charge, $m_e$ is the electron mass, $c$ is the speed of light, $f$ is the particle oscillator strength (taken to be $0.4161$ for HI) and $\\Delta \\nu _D$ is the Doppler width, $b/\\lambda _0$ , where we use $100~\\rm {km~s^{-1}}$ for $b$, as was done in [28].", "This gives us an upper limit mass loss rate of $\\dot{M}_{HI} < 0.86\\times 10^{9}$  g s$^{-1}$ for neutral hydrogen, corresponding to $15.4\\times 10^9$  g s$^{-1}$ of water decomposition, assuming all escaping neutral H
comes from H$_2$ O.", "If this upper-limit mass loss rate were sustained, GJ 1132b would lose an Earth ocean in approximately 6 Myr.", "If we had actually detected mass loss at this high rate, it would likely indicate that there had been recent delivery or outgassing of water on GJ 1132b, because primordial atmospheric water would have been lost on time scales much shorter than the present age of the system.", "Table: Light curve fit results for MCMC sampling where Poisson likelihoods were used. Figure: Simulations of the GJ 1132 system showing the dynamics of a hypothetical outflowing hydrogen cloud.", "The left panel shows a top-down view of the system, as a hydrogen tail extends in a trailing orbit.", "The right panel shows the view from an Earth line of sight, at mid-transit. We can also calculate the energy-limited mass loss rate, corresponding to the ratio of the incoming XUV energy to the work required to lift the particles out of the atmosphere: $\\dot{M} = \\frac{F_{XUV}\\pi R_p^2}{GM_p~R_p^{-1}} = \\frac{F_{XUV}\\pi R_p^3}{GM_p}.$ The total F$_{\\rm {XUV}}$ is the flux value at the orbit of GJ 1132b.", "Using our derived Ly$\\alpha $ flux, the Lyapy package calculates the stellar EUV spectrum and luminosity from 100-1171 Å based on [31].", "From that EUV spectrum, we then calculate the 5-100 Å XUV flux based on relations described in [26].", "Assuming 100$\\%$ efficiency, we obtain an energy-limited neutral hydrogen mass loss rate of $3.0\\times 10^9$  g s$^{-1}$ estimated from the stellar spectrum reconstruction.", "This energy-limited escape rate is commensurate with the upper limit we calculate based on the transit depth and stellar properties in the previous section.", "If we assume a heating efficiency of $1\\%$ [7], then we arrive at a low expected neutral hydrogen loss rate of $3.0\\times 10^7$  g s$^{-1}$ , below the level of detectability with these data."
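As a rough numerical check on this section, the sketch below evaluates the Ly$\alpha$ cross-section $\sigma_0$ for b = 100 km s$^{-1}$ and converts the quoted 2-$\sigma$ upper limit on neutral-H escape into an ocean-loss timescale. The cgs constants and the assumed Earth-ocean mass (~1.4$\times$10$^{24}$ g of H$_2$O, of which 2/18 by mass is hydrogen) are inputs supplied here, not values from the text:

```python
import math

# sigma_0 = sqrt(pi) e^2 f / (m_e c Dnu_D), evaluated in cgs units.
e, m_e, c = 4.8032e-10, 9.1094e-28, 2.9979e10   # esu, g, cm/s
f_osc = 0.4161                 # HI Ly-alpha oscillator strength
b = 100e5                      # Doppler b parameter: 100 km/s in cm/s
lam0 = 1215.67e-8              # Ly-alpha wavelength in cm
dnu_D = b / lam0               # Doppler width in Hz
sigma0 = math.sqrt(math.pi) * e**2 * f_osc / (m_e * c * dnu_D)  # ~7.6e-15 cm^2

# Ocean-loss timescale implied by the quoted upper limit on neutral-H escape.
mdot_H = 0.86e9                             # g/s, 2-sigma upper limit
ocean_h = 1.4e24 * 2.0 / 18.0               # hydrogen mass in an Earth ocean, g
t_loss_myr = ocean_h / mdot_H / 3.156e13    # 3.156e13 s per Myr
print(f"sigma0 ~ {sigma0:.2e} cm^2, ocean hydrogen lost in ~{t_loss_myr:.0f} Myr")
```

The timescale comes out near 6 Myr, matching the figure quoted above for a sustained upper-limit loss rate.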
], [ "Simulating HI Outflow from GJ 1132b ", "Figure REF shows simulation results for neutral hydrogen outflowing from GJ 1132b from the EVaporating Exoplanet code (EVE) [8], [7].", "This code performs a 3D numerical particle simulation given stellar input parameters and atmospheric composition assumptions.", "These simulations were performed using the Ly$\\alpha $ spectrum derived in this work, where the full XUV spectrum has been found as described in the previous section.", "This spectrum is used directly in EVE to calculate the photoionization of the neutral H atoms and calculate theoretical Ly$\\alpha $ spectra during the transit of the planet as they would be observed with HST/STIS.", "In addition, our Ly$\\alpha $ spectrum is used to calculate the radiation pressure felt by the escaping neutral hydrogen, which informs the dynamics of the expanding cloud.", "Figure: EVE simulated absorption spectra in-transit and 4 hours pre-transit.", "We can see that the only region of significant absorption is at 1215.51215.5 Å, where absorption peaks at about 12%\\% as seen in the bottom panel.", "While there is a larger expected flux decrease in the blue wing, the signal is largely in the region that the ISM absorbs and our data are too noisy in the blue wing to detect the possible absorption signal seen in the models.", "The mass loss rate corresponding to the above model is 1×\\times 10 7 ^7g s -1 ^{-1}.EVE simulations were created with the following assumptions: The outflowing neutral hydrogen atoms escape from the Roche lobe altitude ($\\sim 5~R_p$ ) at a rate of $1\\times 10^7$ g s$^{-1}$ , modeled as a Maxwellian velocity distribution with upward bulk velocity of 5 km s$^{-1}$ and temperature of 7000 K, resulting in a cloud which could absorb upwards of 80$\\%$ of the flux in the blue wing.", "However, GJ 1132 has a positive radial velocity, so blue-shifted flux falls into the regime of ISM absorption and the signal is lost.", "Simulations of the in-transit and 
out-of-transit absorption spectra as they would be observed at infinite resolution by HST are shown in Figure REF .", "However, the simulations don't rule out that some thermospheric neutral H may absorb some extra flux in the red wing [46].", "We note that for planets around M dwarfs, the upward velocity may have a strong influence on the extension of the hydrogen coma.", "The thermosphere is simulated as a 3D grid within the Roche lobe, defined by a hydrostatic density profile, and the temperature and upward velocity from above.", "The exosphere is collisionless with its dynamics dominated by radiation pressure.", "There might be other processes shaping the exosphere of GJ 1132b (magnetic field, collisions with the stellar wind, the escaping outflow remaining collisional at larger altitudes than the Roche lobe), but for these simulations we take the simplest possible approach based on what we actually know of the system.", "Finally, we do not include self-shielding effects of HI atoms within the exosphere, as we do not expect the exosphere to be dense enough for self-shielding to significantly alter the results.", "The integrated Ly$\\alpha $ spectrum corresponds to a maximum ratio of stellar radiation pressure to stellar gravity of 0.4, which puts this system in the regime of radiative braking [5], which has a slight effect of pushing neutral hydrogen to a larger orbit.", "However, the gas is not blown away, so the size of the hydrogen cloud will increase if we increase the outward particle velocity.", "Since the exosphere is not accelerated, most of its absorption is close to 0 km s$^{-1}$ in the stellar reference frame, with some blue-shifted absorption because atoms in the tail move to a slightly larger orbit than the planet.", "This indicates that the lack of blue-shifted flux in our observations, due to ISM absorption, is a hindrance to fully understanding the possible hydrogen cloud around this planet.", "The upper limit cloud size that we quote is based on
the observed red-shifted flux in a system which is moving away from us at 35 km s$^{-1}$ , so any cloud absorption of flux closer to the line center is outside of the scope of what we can detect." ], [ "Conclusions", "In this work we make the first characterization of the exosphere of GJ 1132b.", "Until a telescope like LUVOIR [45] is built, these observations will likely be the deepest possible characterization for Ly$\\alpha $ transits of this system.", "If this planet has a cloud of neutral hydrogen escaping from its upper atmosphere, the effective size of that cloud must be less than $0.36$  R$_*$ ($7.3$  R$_{\\rm {p}}$ ) in the red-shifted wing.", "The blue wing indicates an upper limit of $0.62$  R$_*$ ($12.6$  R$_{\\rm {p}}$ ), though this is a very weak constraint.", "In addition, we were able to model the intrinsic Ly$\\alpha $ spectrum of this star.", "This Ly$\\alpha $ transit's upper limit R$_p$ /R$_*$ implies a maximum hydrogen escape rate of $0.08-0.8\\times 10^9$  g s$^{-1}$ .", "If this is the case, GJ 1132b loses an Earth ocean of water in $6-60$  Myr.", "Since the mass loss rate scales linearly with $F_{\\rm {XUV}}$ , we estimate that if this planet were in the habitable zone of its star, about 5x further than its current orbit [48], the planet would lose an Earth ocean of water in as little as 0.15-1.5 Gyr.", "However, these values are based on 2-$\\sigma $ upper limits and theoretical calculations suggest mass loss rates lower than these values, so further Ly$\\alpha $ observations are needed to better constrain this mass loss.", "In addition, these estimates are based on the current calculated UV flux of GJ 1132, which likely decreases over the star's lifetime [51]; this results in an underestimate of the mass loss.", "The relative Ly$\\alpha $ /bolometric flux is roughly 1 order of magnitude higher for this M dwarf than it is for the Sun, which has grave implications for photolytic destruction of molecules in planets around M dwarfs of this
mass.", "Even when considering the EUV spectrum of GJ 1132 [59] and the EUV flux of the Sun [62], we find that GJ 1132 emits 6x as much EUV flux (relative to F$_{\\rm {bol}}$ ) as the Sun.", "This work leaves us with several possible pictures of the atmosphere of GJ 1132b: The real atmospheric loss rates may be comparable to these upper limits, or they may be much less, which leaves us with an open question about the atmosphere and volatile content of GJ 1132b.", "There could be some loss, but below the detection limit of our instruments.", "If there is a neutral hydrogen envelope around GJ 1132b, then this super-Earth is actively losing water driven by photochemical destruction and hydrodynamic escape of H. The remaining atmosphere will then be rich in oxygen species such as O$_2$ and the greenhouse gas CO$_2$ .", "GJ 1132b could be Mars-like or Venus-like, having lost its H$_2$ O long ago, with a thick CO$_2$ and O$_2$ atmosphere remaining, or no atmosphere at all.", "We posit that this is the most likely scenario, and thermal emission observations with JWST [38] would give further insight to the atmospheric composition of GJ 1132b.", "There might be a giant cloud of neutral hydrogen around GJ1132b based on the EVE simulations, which is undetectable because of ISM absorption.", "However, if there are other volatiles in the atmosphere we could detect this cloud using other tracers such as carbon or oxygen with HST in the FUV, or helium [50] with ground-based high-resolution infrared spectrographs [1], [41] or with JWST.", "GJ 1132b presents one of our first opportunities to study terrestrial exoplanet atmospheres and their evolution.", "While future space observatories will allow us to probe longer wavelength atmospheric signatures, these observations are our current best tool for understanding the hydrogen content and possible volatile content loss of this warm rocky exoplanet." 
], [ "acknowledgments", "This work is based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555, and financially supported through proposal HST-GO-14757 through the same contract.", "This material is also based upon work supported by the National Science Foundation under grant AST-1616624.", "This publication was made possible through the support of a grant from the John Templeton Foundation.", "The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.", "ZKBT acknowledges financial support for part of this work through the MIT Torres Fellowship for Exoplanet Research.", "JAD is thankful for the support of the Heising-Simons Foundation.", "This project has been carried out in part in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF).", "VB and DE acknowledge the financial support of the SNSF.", "ERN acknowledges support from the National Science Foundation Astronomy & Astrophysics Postdoctoral Fellowship program (Award #1602597).", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427).", "This work was also supported by the NSF GRFP, DGE 1650115." ] ]
1906.04274
[ [ "Exploration via Hindsight Goal Generation" ], [ "Abstract Goal-oriented reinforcement learning has recently been a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space.", "However, the sparsity of such a reward definition makes traditional reinforcement learning algorithms very inefficient.", "Hindsight Experience Replay (HER), a recent advance, has greatly improved sample efficiency and practical applicability for such problems.", "It exploits previous replays by constructing imaginary goals in a simple heuristic way, acting like an implicit curriculum to alleviate the challenge of a sparse reward signal.", "In this paper, we introduce Hindsight Goal Generation (HGG), a novel algorithmic framework that generates valuable hindsight goals which are easy for an agent to achieve in the short term and also have the potential to guide the agent toward the actual goal in the long term.", "We have extensively evaluated our goal generation algorithm on a number of robotic manipulation tasks and demonstrated substantial improvement over the original HER in terms of sample efficiency." 
], [ "Introduction", "Recent advances in deep reinforcement learning (RL), including policy gradient methods [45], [46] and Q-learning [30], have demonstrated a large number of successful applications in solving hard sequential decision problems, including robotics [24], games [47], [30], and recommendation systems [23], among others.", "To train a well-behaved policy, deep reinforcement learning algorithms use neural networks as functional approximators to learn a state-action value function or a policy distribution to optimize a long-term expected return.", "The convergence of the training process, particularly in Q-learning, is heavily dependent on the temporal pattern of the reward function [51].", "For example, if only a non-zero reward/return is provided at the end of a rollout of a trajectory with length $L$ , while no rewards are observed before the $L$ -th time step, the Bellman updates of the Q-function would become very inefficient, requiring at least $L$ steps to propagate the final return to the Q-function of all earlier state-action pairs.", "Such sparse or episodic reward signals are ubiquitous in many real-world problems, including complex games and robotic manipulation tasks [2].", "Therefore, despite its notable success, the application of RL to real-world problems, where the reward functions can be sparse and very hard to engineer [34], is still quite limited.", "In practice, human experts need to design reward functions which reflect the task to be solved and are also carefully shaped in a dense way so that optimization in RL algorithms achieves good performance.", "However, the design of such dense reward functions is non-trivial in most real-world problems with sparse rewards.", "For example, in goal-oriented robotics tasks, an agent is required to reach some state satisfying predefined conditions or lying within a state set of interest.", "Many previous efforts have shown that the sparse indicator rewards, instead of the engineered 
dense rewards, often provide better practical performance when trained with deep Q-learning and policy optimization algorithms [2].", "In this paper, we will focus on improving training and exploration for goal-oriented RL problems.", "A notable advance is called Hindsight Experience Replay (HER) [2], which greatly improves the practical success of off-policy deep Q-learning for goal-oriented RL problems, including several difficult robotic manipulation tasks.", "The key idea of HER is to revisit previous states in the experience replay and construct a number of achieved hindsight goals based on these visited intermediate states.", "Then the hindsight goals and the related trajectories are used to train a universal value function parameterized by a goal input by algorithms such as deep deterministic policy gradient (DDPG, [26]).", "A good way to think of the success of HER is to view HER as an implicit curriculum which first learns with intermediate goals that are easy to achieve using the current value function, and later with more difficult goals that are closer to the final goal.", "A notable difference between HER and curriculum learning is that HER does not require an explicit distribution of the initial environment states, which appears to be more applicable to many real problems.", "In this paper, we study the problem of automatically generating valuable hindsight goals which are more effective for exploration.", "Different from the random curriculum heuristics used in the original HER, where a goal is drawn as an achieved state in a random trajectory, we propose a new approach that finds intermediate goals that are easy to achieve in the short term and would also likely lead the agent to the final goal in the long term.", "To do so, we first approximate the value function of the actual goal distribution by a lower bound that decomposes into two terms: a value function based on a hindsight goal distribution and the Wasserstein distance between the two 
distributions.", "Then, we introduce an efficient discrete Wasserstein Barycenter solver to generate a set of hindsight goals that optimizes the lower bound.", "Finally, such goals are used for exploration.", "In the experiments, we evaluate our Hindsight Goal Generation approach on a broad set of robotic manipulation tasks.", "By incorporating the hindsight goals, a significant improvement in sample efficiency is demonstrated over DDPG+HER.", "Ablation studies show that our exploration strategy is robust across a wide set of hyper-parameters." ], [ "Background", "Reinforcement Learning The goal of a reinforcement learning agent is to interact with a given environment and maximize its expected cumulative reward.", "The environment is usually modeled by a Markov Decision Process (MDP), given by a tuple $\\left<\\mathcal {S}, \\mathcal {A}, P, R, \\gamma \\right>,$ where $\\mathcal {S}, \\mathcal {A}$ represent the sets of states and actions, respectively.", "$P:\\mathcal {S}\\times \\mathcal {A}\\rightarrow \\mathcal {S}$ is the transition function and $R:\\mathcal {S}\\times \\mathcal {A}\\rightarrow [0,1]$ is the reward function.", "$\\gamma $ is the discount factor.", "The agent tries to find a policy $\\pi :\\mathcal {S}\\rightarrow \\mathcal {A}$ that maximizes its expected cumulative reward $V^{\\pi }(s_0)$ , where $s_0=s$ is usually given or drawn from a distribution $\\mu _0$ of initial states.", "The value function $V^{\\pi }(s)$ is defined as $V^{\\pi }(s)=\\mathbb {E}_{s_0=s, a_t\\sim \\pi (\\cdot \\mid s_t), s_{t+1}\\sim P(\\cdot \\mid s_t,a_t)}\\left[\\sum _{t=0}^{\\infty }\\gamma ^tR(s_t,a_t)\\right].$ Goal-oriented MDP In this paper, we consider a specific class of MDPs called goal-oriented MDPs.", "We use $\\mathcal {G}$ to denote the set of goals.", "Different from a traditional MDP, the reward function $R$ is a goal-conditioned sparse and binary signal indicating whether the goal is achieved: $R_g(s_t,a_t,s_{t+1})&:=\\left\\lbrace \\begin{array}{cc}0, & 
\\Vert \\phi (s_{t+1})-g\\Vert _2\\le \\delta _g \\\\-1, & \\text{otherwise}.\\end{array}\\right.$ $\\phi :\\mathcal {S}\\rightarrow \\mathcal {G}$ is a known and tractable mapping that defines goal representation.", "$\\delta _g$ is a given threshold indicating whether the goal is considered to be reached (see [38]).", "Universal value function The idea of a universal value function is to use a single functional approximator, such as a neural network, to represent a large number of value functions.", "For the goal-oriented MDPs, the goal-based value function of a policy $\\pi $ for any given goal $g$ is defined as $V^{\\pi }(s,g)$ , for all states $s\\in \\mathcal {S}.$ That is $V^\\pi (s,g):=\\mathbb {E}_{s_0=s, a_t\\sim \\pi (\\cdot \\mid s_t, g), s_{t+1}\\sim P(\\cdot \\mid s_t,a_t)}\\left[\\sum _{t=0}^\\infty \\gamma ^tR_g(s_t,a_t,s_{t+1})\\right].$ Let $\\mathcal {T}^*:\\mathcal {S}\\times \\mathcal {G}\\rightarrow [0,1]$ be the joint distribution over starting state $s_0\\in \\mathcal {S}$ and goal $g\\in \\mathcal {G}$ .", "That is, at the start of every episode, a state-goal pair $(s_0,g)$ will be drawn from the task distribution $\\mathcal {T}^*$ .", "The agent tries to find a policy $\\pi :\\mathcal {S}\\times \\mathcal {G}\\rightarrow \\mathcal {A}$ that maximizes the expectation of the discounted cumulative reward $V^\\pi (\\mathcal {T}^*):=\\mathop {\\mathbb {E}}_{(s_0,g)\\sim \\mathcal {T}^*}[V^\\pi (s_0,g)]$ The goal-oriented MDP characterizes several reinforcement learning benchmark tasks, such as the robotics tasks in the OpenAI Gym environment [38].", "For example, in the FetchPush (see Figure REF ) task, the agent needs to learn to push a box to a designated point.", "In this task, the state of the system $s$ contains the status of both the robot and the box.", "The goal $g$ , on the other hand, only indicates the designated position of the box.", "Thus, the mapping $\\phi $ is defined as a mapping from a system state $s$ to the position of the box in $s$ .", 
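The goal-conditioned indicator reward defined above is simple to implement directly; a minimal sketch (the state layout, the goal mapping, and the 0.05 threshold below are illustrative assumptions, not the benchmark's actual code):

```python
import numpy as np

def phi(state):
    # State-to-goal mapping phi: S -> G. In a FetchPush-style task the goal
    # is the object's position, here assumed to be the last 3 state entries.
    return state[-3:]

def sparse_reward(next_state, goal, delta_g=0.05):
    # R_g(s_t, a_t, s_{t+1}) = 0 if ||phi(s_{t+1}) - g||_2 <= delta_g, else -1
    return 0.0 if np.linalg.norm(phi(next_state) - goal) <= delta_g else -1.0

s_next = np.array([0.1, 0.2, 0.0, 1.3, 0.75, 0.42])   # gripper + object state
print(sparse_reward(s_next, np.array([1.3, 0.75, 0.44])))  # 0.0 (within 5 cm)
print(sparse_reward(s_next, np.array([1.0, 0.75, 0.42])))  # -1.0 (object far)
```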
"Access to Simulator A common assumption made by previous work is a universal simulator that allows the environment to be reset to any given state [15], [11].", "This kind of simulator is excessively powerful, and hard to build when acting in the real world.", "On the contrary, our method does not require a universal simulator, and thus is more practical." ], [ "Related Work", "Multi-Goal RL The role of goal-conditioned policies has been investigated widely in deep reinforcement learning scenarios [39].", "A few examples include grasping skills in imitation learning [36], [48], disentangling task knowledge from the environment [28], [18], and constituting lower-level controllers in hierarchical RL [35], [32], [21], [12].", "By learning a universal value function which parameterizes the goal using a function approximator [43], an agent is able to learn multiple tasks simultaneously [22], [53] and identify important decision states [20].", "It is shown that multi-task learning with a goal-conditioned policy improves the generalizability to unseen goals (e.g., [43]).", "Hindsight Experience Replay Hindsight Experience Replay [2] is an effective experience replay strategy which generates reward signals from failed trajectories.", "The idea of hindsight experience replay can be extended to various goal-conditioned problems, such as hierarchical RL [25], dynamic goal pursuit [13], goal-conditioned imitation [9], [50] and visual robotics applications [33], [42].", "It is also shown that hindsight experience replay can be combined with on-policy reinforcement learning algorithms by importance sampling [40].", "Curriculum Learning in RL Curriculum learning in RL usually suggests using a sequence of auxiliary tasks to guide policy optimization, which is also related to multi-task learning, lifelong learning, and transfer learning.", "The research interest in automatic curriculum design has seen rapid growth recently, where approaches have been proposed to schedule a given 
set of auxiliary tasks [41], [8], and to provide intrinsic motivation [17], [37], [49], [7].", "Generating goals which lead to high-value states could substantially improve the sample efficiency of an RL agent [19].", "Guided exploration through curriculum generation is also an active research topic, where either the initial state [15] or the goal position [5], [16] is considered as a manipulable factor to generate the intermediate tasks.", "However, most curriculum learning methods are domain-specific, and it remains an open problem to build a generalized framework for curriculum learning." ], [ "Automatic Hindsight Goal Generation", "As discussed in the previous section, HER provides an effective solution to resolve the sparse reward challenge in object manipulation tasks, in which achieved states in some past trajectories are replayed as imaginary goals.", "In other words, HER modifies the task distribution in the replay buffer to generate a set of auxiliary nearby goals which can be used for further exploration and improve the performance of an off-policy RL agent which is expected to reach a very distant goal.", "However, the distribution of hindsight goals that the policy is trained on might differ significantly from the original task or goal distribution.", "Take Figure REF as an example: the desired goal distribution lies on the red segment, which is far away from the initial position.", "In this situation, those hindsight goals may not be effective enough to promote policy optimization in the original task.", "The goal of our work is to develop a new approach to generate valuable hindsight goals that will improve the performance on the original task.", "In the rest of this section, we will present a new algorithmic framework as well as our implementation for automatic hindsight goal generation for better exploration." 
], [ "Algorithmic Framework", "Following [16], our approach relies on the following generalizability assumption.", "Assumption 1 A value function of a policy $\\pi $ for a specific goal $g$ has some generalizability to another goal $g^{\\prime }$ close to $g$ .", "One possible mathematical characterization for Assumption REF is via Lipschitz continuity.", "Similar assumptions have been widely applied in many scenarios [3], [27]: $\\left|V^{\\pi }(s,g)-V^{\\pi }(s^{\\prime },g^{\\prime })\\right|\\le L\\cdot d((s,g),(s^{\\prime },g^{\\prime })), $ where $d((s,g),(s^{\\prime },g^{\\prime }))$ is a metric defined by $d((s,g),(s^{\\prime },g^{\\prime }))=c\\Vert \\phi (s)-\\phi (s^{\\prime })\\Vert _2+\\Vert g-g^{\\prime }\\Vert _2.$ for some hyperparameter $c>0$ that provides a trade-off between the distance between initial states and the distance between final goals.", "$\\phi (\\cdot )$ is a state abstraction that maps from the state space to the goal space.", "When experimenting with the tasks in the OpenAI Gym environment [38], we simply adopt the state-goal mappings as defined in (REF ).", "Although the Lipschitz continuity may not hold for every $s,s^{\\prime }\\in \\mathcal {S}, g,g^{\\prime }\\in \\mathcal {G}, $ we only require continuity over some specific region.", "It is reasonable to claim that the bound in Eq.", "(REF ) holds for most of the $(s,g),(s^{\\prime },g^{\\prime })$ when $d((s,g),(s^{\\prime },g^{\\prime }))$ is not too large.", "Partly due to the reward sparsity of the distant goals, optimizing the expected cumulative reward (see Eq.", "(REF )) from scratch is very difficult.", "Instead, we propose to optimize a relaxed lower bound which introduces intermediate goals that may be easier to optimize.", "Here we provide Theorem REF that establishes such a lower bound.", "theoremthmbound Assuming that the generalizability condition (Eq.", "(REF )) holds for two distributions $(s,g)\\sim \\mathcal {T}$ and $(s^{\\prime },g^{\\prime })\\sim 
\\mathcal {T}^{\\prime }$ , we have $V^\\pi (\\mathcal {T}^{\\prime })\\ge V^\\pi (\\mathcal {T})-L\\cdot D(\\mathcal {T},\\mathcal {T}^{\\prime }).$ where $D(\\cdot ,\\cdot )$ is the Wasserstein distance based on $d(\\cdot ,\\cdot )$ $D(\\mathcal {T}^{(1)},\\mathcal {T}^{(2)})=&\\inf _{\\mu \\in \\Gamma (\\mathcal {T}^{(1)},\\mathcal {T}^{(2)})} \\left({\\mathbb {E}}_{\\mu }[d(({s_0}^{(1)},g^{(1)}),({s_0}^{(2)},g^{(2)}))]\\right)$ where $\\Gamma (\\mathcal {T}^{(1)},\\mathcal {T}^{(2)})$ denotes the collection of all joint distributions $\\mu ({s_0}^{(1)},g^{(1)},{s_0}^{(2)},g^{(2)})$ whose marginal probabilities are $\\mathcal {T}^{(1)},\\mathcal {T}^{(2)}$ , respectively.", "The proof of Theorem 1 is deferred to Appendix .", "It follows from Theorem 1 that optimizing the cumulative reward of Eq.", "(REF ) can be relaxed into the following surrogate problem $\\max _{\\mathcal {T}, \\pi } \\quad V^\\pi (\\mathcal {T})-L\\cdot D(\\mathcal {T},\\mathcal {T}^*).$ Note that this new objective function is very intuitive.", "Instead of optimizing with the difficult goal/task distribution $\\mathcal {T}^*$ , we hope to find a collection of surrogate goals $\\mathcal {T}$ , which are both easy to optimize and also close to or converging towards $\\mathcal {T}^*$ .", "However, the joint optimization of $\\pi $ and $\\mathcal {T}$ is non-trivial.", "This is because a) $\\mathcal {T}$ is a high-dimensional distribution over tasks, b) policy $\\pi $ is optimized with respect to a shifting task distribution $\\mathcal {T}$ , c) the estimation of the value function $V^\\pi (\\mathcal {T})$ may not be quite accurate during training.", "Inspired by [2], we adopt the idea of using hindsight goals here.", "We first enforce $\\mathcal {T}$ to be a finite set of $K$ particles which can only be drawn from already achieved states/goals in the replay buffer $B$ .", "In other words, the support of the set $\\mathcal {T}$ should lie inside $B$ .", "Meanwhile, we notice that a direct 
implementation of problem Eq.", "(REF ) may lead to degenerate hindsight goal selection during training, i.e., the goals may be all drawn from a single trajectory, thus not being able to provide sufficient exploration.", "Therefore, we introduce an extra diversity constraint, i.e., for every trajectory $\\tau \\in B$ , at most $\\mu $ states can be selected in $\\mathcal {T}$ .", "In practice, we find that simply setting $\\mu $ to 1 results in reasonable performance.", "It is shown in Section REF that this diversity constraint indeed improves the robustness of our algorithm.", "Finally, the optimization problem we aim to solve is $\\max _{\\pi , \\mathcal {T}:|\\mathcal {T}|=K} \\quad &{V}^\\pi (\\mathcal {T})-L\\cdot D(\\mathcal {T},\\mathcal {T}^*)\\\\\\text{s.t. }", "\\quad &\\sum _{s_0,s_t\\in \\tau } \\mathbb {1}[(s_0,\\phi (s_t)) \\in \\mathcal {T}]\\le 1, \\quad \\forall \\tau \\in B \\\\&\\sum _{\\tau \\in B}\\sum _{s_0,s_t\\in \\tau } \\mathbb {1}[(s_0,\\phi (s_t)) \\in \\mathcal {T}] = K.$ To solve the above optimization, we adopt a two-stage iterative algorithm.", "First, we apply a policy optimization algorithm, for example DDPG, to maximize the value function conditioned on the task set $\\mathcal {T}$ .", "Then we fix $\\pi $ and optimize the hindsight set $\\mathcal {T}$ subject to the diversity constraint, which is a variant of the well-known Wasserstein Barycenter problem with a bias term (the value function) for each particle.", "Then we iterate the above process until the policy achieves a desirable performance or we exhaust the computation budget.", "The first stage, the optimization of the value function, is straightforward.", "In our work, we simply use the DDPG+HER framework for it.", "The second stage, the optimization of the hindsight goals, is non-trivial.", "In the following, we describe an efficient approximation algorithm for it." 
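The metric $d((s,g),(s^{\prime },g^{\prime }))$ underlying both the Lipschitz bound and the Wasserstein term can be written directly; a minimal sketch (the state-to-goal mapping `phi` below is an illustrative stand-in, while $c=3.0$ matches the value used in the experiments):

```python
import numpy as np

def task_distance(s, g, s_prime, g_prime, phi, c=3.0):
    # d((s,g),(s',g')) = c * ||phi(s) - phi(s')||_2 + ||g - g'||_2
    # c > 0 trades off initial-state similarity against goal similarity.
    return c * np.linalg.norm(phi(s) - phi(s_prime)) + np.linalg.norm(g - g_prime)

phi = lambda s: s[:2]  # illustrative state abstraction: first two coordinates
d = task_distance(np.array([0.0, 0.0, 9.9]), np.array([1.0, 1.0]),
                  np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0]), phi)
print(d)  # 3*1 + 1 = 4.0; note the third state coordinate is ignored by phi
```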
], [ "Solving Wasserstein Barycenter Problem via Bipartite Matching", "Since we assume that $\\mathcal {T}$ consists of $K$ hindsight particles, we can approximately solve the above Wasserstein Barycenter problem in the combinatorial setting as a bipartite matching problem.", "Instead of dealing with $\\mathcal {T}^*$ , we draw $K$ samples from $\\mathcal {T}^*$ to empirically approximate it by a set of $K$ particles $\\widehat{\\mathcal {T}}^*$ .", "In this way, the hindsight task set $\\mathcal {T}$ can be constructed in the following way.", "For every task instance $(\\hat{s}^i_0,\\hat{g}^i)\\in \\widehat{\\mathcal {T}}^*$ , we find a state trajectory $\\tau ^i=\\lbrace s^i_t\\rbrace \\in B$ such that together they minimize the sum $\\sum _{(\\hat{s}^i_0,\\hat{g}^i)\\in \\widehat{\\mathcal {T}}^*}w((\\hat{s}^i_0,\\hat{g}^i),\\tau ^i) $ where we define $w((\\hat{s}^i_0,\\hat{g}^i),\\tau ^i) := c\\Vert \\phi (\\hat{s}^i_0)-\\phi (s^i_0)\\Vert _2 + \\min _{t}\\left(\\Vert \\hat{g}^i-\\phi (s^i_t)\\Vert _2-\\frac{1}{L}V^\\pi (s^i_0,\\phi (s^i_t))\\right).$ Finally, we select each corresponding achieved state $s_t\\in \\tau $ to construct the hindsight goal $(\\hat{s}_0,\\phi (s_t))\\in \\mathcal {T}$ .", "It is not hard to see that the above combinatorial optimization exactly identifies the optimal solution $\\mathcal {T}$ of the above-mentioned Wasserstein Barycenter problem.", "In practice, the Lipschitz constant $L$ is unknown and therefore treated as a hyper-parameter.", "The combinatorial problem in Eq.", "(REF ) can be solved efficiently by well-known maximum-weight bipartite matching algorithms [31], [10].", "The bipartite graph $G(\\lbrace V_x, V_y\\rbrace , E)$ is constructed as follows.", "Vertices are split into two partitions $V_x, V_y$ .", "Every vertex in $V_x$ represents a task instance $(\\hat{s}_0, \\hat{g})\\in \\hat{\\mathcal {T}}^*$ , and every vertex in $V_y$ represents a trajectory $\\tau \\in B$ .", "The weight of the edge connecting $(\\hat{s}_0, \\hat{g})$ 
and $\\tau $ is $-w((\\hat{s}_0, \\hat{g}), \\tau )$ as defined in Eq.", "(REF ).", "In this paper, we apply the Minimum Cost Maximum Flow algorithm to solve this bipartite matching problem (for example, see [1]).", "[htp] Exploration via Hindsight Goal Generation (HGG) [1] Initialize $\\pi $ initialize neural networks $B\\leftarrow \\emptyset $ iteration $=1,2,\\dots , N$ Sample $\\lbrace (\\hat{s}_0^i,\\hat{g}^i)\\rbrace _{i=1}^K\\sim \\mathcal {T}^*$ sample from target distribution Find $K$ distinct trajectories $\\lbrace \\tau ^i\\rbrace _{i=1}^K$ that minimize weighted bipartite matching $\\sum _{i=1}^K w((\\hat{s}_0^i,\\hat{g}^i),\\tau ^i) &= \\sum _{i=1}^K \\left(c\\Vert \\phi (\\hat{s}_0^i)-\\phi (s_0^i)\\Vert _2 + \\min _{t}\\left(\\Vert \\hat{g}^i-\\phi (s_t^i)\\Vert _2-\\frac{1}{L}V^{\\pi }(s^i_0,\\phi (s_t^i))\\right)\\right)$ Construct intermediate task distribution $\\lbrace (\\hat{s}_0^i,g^i)\\rbrace _{i=1}^M$ where $g^i &= \\phi \\left(\\mathop {\\arg \\min }_{s_t^i\\in \\tau _i}\\left(\\Vert \\hat{g}^i-\\phi (s_t^i)\\Vert _2-\\frac{1}{L}V^{\\pi }(s^i_0,\\phi (s_t^i))\\right)\\right)$ $i=1,2,\\dots , K$ $(s_0,g)\\leftarrow (\\hat{s}_0^i,g^i)$ critical step: hindsight goal-oriented exploration $t=0,1,\\dots , H-1$ $a_t\\leftarrow \\pi (\\cdot |s_t,g)+\\text{noise}$ together with $\\epsilon $ -greedy or Gaussian exploration $s_{t+1}\\sim P(\\cdot |s_t,a_t)$ $r_t\\leftarrow R_g(s_t,a_t,s_{t+1})$ $\\tau \\leftarrow \\lbrace s_0,a_0,r_0,s_1,\\dots \\rbrace $ $B\\leftarrow B\\cup \\lbrace \\tau \\rbrace $ $i=1\\dots M$ Sample a minibatch $b$ from the replay buffer using HER Perform one step of value and policy updates on minibatch $b$ using DDPG Overall Algorithm The overall description of our algorithm is shown in Algorithm REF .", "Note that the only modification introduced by our exploration strategy is in Step REF , in which we generate hindsight goals to guide the agent to collect more valuable trajectories.", "So it is complementary to other improvements in DDPG/HER 
around Step $\\ref {alg_replay_strategy}$ , such as the prioritized experience replay strategy [44], [54], [55] and other variants of hindsight experience replay [14], [4]." ], [ "Experiments", "Our experiment environments are based on the standard robotic manipulation environments in the OpenAI Gym [6] (our code is available at https://github.com/Stilwell-Git/Hindsight-Goal-Generation).", "In addition to the standard settings, to better visualize the improvement in sample efficiency, we vary the target task distributions in the following ways: Fetch environments: Initial object position and goal are generated uniformly at random from two distant segments.", "Hand-manipulation environments: These tasks require the agent to rotate the object into a given pose, and only the rotations around the $z$ -axis are considered here.", "We restrict the initial axis-angle to a small interval, and the target pose is generated at the symmetric angle.", "That is, the object needs to be rotated by about $\\pi $ radians.", "Reach environment: FetchReach and HandReach do not support randomization of the initial state, so we restrict their target distribution to be a subset of the original goal space.", "Regarding baseline comparison, we consider the original DDPG+HER algorithm.", "We also investigate the integration of the experience replay prioritization strategies, such as the Energy-Based Prioritization (EBP) proposed by [54], which draws on prior knowledge of the physical system to exploit valuable trajectories.", "More details of the experiment settings are included in the Appendix ." 
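The goal-to-trajectory assignment at the heart of HGG can be sketched with an off-the-shelf assignment solver standing in for the Minimum Cost Maximum Flow routine used in the paper; the value function, the `phi` mapping, and the toy data below are illustrative stand-ins, not the released implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hindsight_goals(target_tasks, trajectories, phi, V, c=3.0, L=5.0):
    """Match each target task (s0_hat, g_hat) to one distinct replay trajectory,
    minimizing sum_i w = c*||phi(s0_hat)-phi(s0)|| +
    min_t(||g_hat - phi(s_t)|| - V(s0, phi(s_t))/L), then return the achieved
    state realizing the inner min as that task's hindsight goal. The one-to-one
    matching mirrors the diversity constraint with mu = 1."""
    K, M = len(target_tasks), len(trajectories)
    cost = np.zeros((K, M))
    best_t = np.zeros((K, M), dtype=int)
    for i, (s0_hat, g_hat) in enumerate(target_tasks):
        for j, tau in enumerate(trajectories):
            scores = [np.linalg.norm(g_hat - phi(s)) - V(tau[0], phi(s)) / L
                      for s in tau]
            best_t[i, j] = int(np.argmin(scores))
            cost[i, j] = c * np.linalg.norm(phi(s0_hat) - phi(tau[0])) + min(scores)
    rows, cols = linear_sum_assignment(cost)  # optimal distinct assignment
    return [phi(trajectories[j][best_t[i, j]]) for i, j in zip(rows, cols)]

phi = lambda s: s      # goals live in state space in this toy example
V = lambda s0, g: 0.0  # untrained value function: no bias term
tasks = [(np.array([0.0]), np.array([1.0]))]
trajs = [[np.array([0.0]), np.array([0.4])], [np.array([0.0]), np.array([0.9])]]
goals = hindsight_goals(tasks, trajs, phi, V)
print(goals)  # the achieved state nearest the target goal, here 0.9
```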
], [ "HGG Generates Better Hindsight Goals for Exploration", "We first check whether HGG is able to generate meaningful hindsight goals for exploration.", "We compare HGG and HER in the FetchPush environment.", "It is shown in Figure REF that the HGG algorithm generates goals that gradually move towards the target region.", "Since those goals are hindsight, they are considered to be achieved during training.", "In comparison, the replay distribution of a DDPG+HER agent remains stuck around the initial position for many iterations, indicating that those goals may not be able to efficiently guide exploration.", "Performance on benchmark robotics tasks Figure: Learning curves for a number of goal-oriented robotic manipulation tasks.", "All curves presented in this figure are trained with default hyper-parameters included in Appendix .", "Note that since FetchReach and HandReach do not contain object instances for EBP, we do not include the +EBP versions for them.", "Then we check whether the exploration provided by the goals generated by HGG can result in better policy training performance.", "As shown in Figure REF , we compare vanilla HER, HER with Energy-Based Prioritization (HER+EBP), HGG, and HGG+EBP.", "It is worth noting that since EBP is designed for the Bellman equation updates, it is complementary to our HGG-based exploration approach.", "Among the eight environments, HGG substantially outperforms HER on four and has comparable performance on the other four, which are either too simple or too difficult.", "When combined with EBP, HGG+EBP achieves the best performance on the six eligible environments.", "Figure: Visualization of FetchPush with obstacle.", "Performance on tasks with obstacle In a more difficult task, a crafted metric may be more suitable than the $\\ell _2$ distance used in Eq.", "(REF ).", "As shown in Figure REF , we created an environment based on FetchPush with a rigid obstacle.", "The object and the goal are uniformly generated in the 
green and the red segments, respectively.", "The brown block is a static wall which cannot be moved.", "In addition to $\\ell _2$ , we also construct a distance metric based on the graph distance of a mesh grid on the plane; the blue line is a successful trajectory under such a hand-crafted distance measure.", "A more detailed description is deferred to Appendix REF .", "Intuitively speaking, this crafted distance should be better than $\\ell _2$ due to the existence of the obstacle.", "Experimental results suggest that such a crafted distance metric provides better guidance for goal generation and training, and significantly improves sample efficiency over the $\\ell _2$ distance.", "It would be a future direction to investigate ways to obtain or learn a good metric." ], [ "Comparison with Explicit Curriculum Learning", "Since our method can be seen as explicit curriculum learning for exploration, where we generate hindsight goals as an intermediate task distribution, we also compare our method with another recently proposed curriculum learning method for RL.", "[16] leverages Least-Squares GAN [29] to mimic the set called Goals of Intermediate Difficulty as an exploration goal generator.", "Specifically, in our task settings, we define a goal set $GOID(\\pi ) = \\lbrace g:\\alpha \\le f(\\pi ,g) \\le 1-\\alpha \\rbrace ,$ where $f(\\pi ,g)$ represents the average success rate in a small region around goal $g$ .", "To sample from $GOID$ , we implement an oracle goal generator based on rejection sampling, which uniformly samples goals from $GOID(\\pi )$ .", "Results in Figure REF indicate that our Hindsight Goal Generation substantially outperforms HER even with $GOID$ from the oracle generator.", "Note that this experiment is run on an environment with a fixed initial state due to the limitation of [16].", "The choice of $\\alpha $ is also suggested by [16]." 
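The oracle $GOID$ generator described above reduces to rejection sampling against a success-rate estimate; a minimal sketch (the goal sampler and the success-rate probe below are illustrative stand-ins for the environment-specific ones):

```python
import numpy as np

def sample_goid_goal(sample_goal, success_rate, alpha=0.2, max_tries=10_000):
    # Rejection sampling: draw candidate goals and keep the first whose
    # estimated success rate f(pi, g) lies in [alpha, 1 - alpha], i.e.
    # a Goal of Intermediate Difficulty for the current policy.
    for _ in range(max_tries):
        g = sample_goal()
        if alpha <= success_rate(g) <= 1.0 - alpha:
            return g
    return None  # GOID may be empty, e.g. before any learning progress

rng = np.random.default_rng(0)
sample_goal = lambda: rng.uniform(0.0, 1.0, size=2)
# Toy success-rate probe: goals farther from the origin are harder to reach.
success_rate = lambda g: float(np.clip(1.0 - np.linalg.norm(g), 0.0, 1.0))
g = sample_goid_goal(sample_goal, success_rate)
print(g, success_rate(g))  # a goal of intermediate difficulty
```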
], [ "Ablation Studies on Hyperparameter Selection", "In this section, we set up a set of ablation tests on several hyper-parameters used in the Hindsight Goal Generation algorithm.", "Lipschitz $L$ : The selection of the Lipschitz constant is task-dependent, since it is related to the scale of the value function and the goal distance.", "For the robotics tasks tested in this paper, we find that it is easier to set $L$ by first dividing it by the upper bound of the distance between any two final goals in an environment.", "We test a few choices of $L$ on several environments and find that it is very easy to find a range of $L$ that works well and shows robustness for all the environments tested in this section.", "We show the learning curves on FetchPush with different $L$ .", "It appears that the performance of HGG is reasonable as long as $L$ is not too small.", "For all tasks we tested in the comparisons, we set $L=5.0$ .", "Distance weight $c$ : Parameter $c$ defines the trade-off between the initial state similarity and the goal similarity.", "Larger $c$ encourages our algorithm to choose hindsight goals that have closer initial states.", "Results in Figure REF indicate that the choice of $c$ is indeed robust.", "For all tasks we tested in the comparisons, we set $c=3.0$ .", "Number of hindsight goals $K$ : We find that for the simple tasks, the choice of $K$ is not critical.", "Even a greedy approach (corresponding to $K=1$ ) can achieve competitive performance, e.g.", "on FetchPush in the third panel of Figure REF .", "For more difficult environments, such as FetchPickAndPlace, a larger batch size can significantly reduce the variance of training results.", "For all tasks tested in the comparisons, we plotted the best results given by $K\\in \\lbrace 50,100\\rbrace $ .", "Figure: Ablation study of hyper-parameter selection.", "Several curves are omitted in the fourth panel to provide a clear view of variance comparison.", "A full version is deferred to Appendix ." 
], [ "Conclusion", "We present a novel automatic hindsight goal generation algorithm, by which valuable hindsight imaginary tasks are generated to enable efficient exploration for goal-oriented off-policy reinforcement learning.", "We formulate this idea as a surrogate optimization to identify hindsight goals that are easy to achieve and also likely to lead to the actual goal.", "We introduce a combinatorial solver to generate such intermediate tasks.", "Extensive experiments demonstrate that our method achieves better goal-oriented exploration than the original HER and curriculum learning baselines on a collection of robotic learning tasks.", "A future direction is to incorporate controllable representation learning [52] to provide a task-specific distance metric [18], [48], which may generalize our method to more complicated cases where the standard Wasserstein distance cannot be applied directly.", "Proof of Theorem 1 In this section we provide the proof of Theorem 1.", "* By Eq.", "(REF ), for any quadruple $(s, g, s^{\\prime }, g^{\\prime })$ , we have $V^{\\pi }(s^{\\prime }, g^{\\prime }) \\ge V^{\\pi }(s, g) - L \\cdot d((s, g), (s^{\\prime }, g^{\\prime })).", "$ For any $\\mu \\in \\Gamma (\\mathcal {T},\\mathcal {T}^{\\prime })$ , we sample $(s, g, s^{\\prime }, g^{\\prime }) \\sim \\mu $ and take the expectation on both sides of Eq.", "(REF ), and get $V^{\\pi }(\\mathcal {T}^{\\prime }) \\ge V^{\\pi }(\\mathcal {T}) - L\\cdot \\mathbb {E}_\\mu [d((s, g), (s^{\\prime }, g^{\\prime }))] .", "$ Since Eq.", "(REF ) holds for any $\\mu \\in \\Gamma (\\mathcal {T}, \\mathcal {T^{\\prime }})$ , we have $V^{\\pi }(\\mathcal {T}^{\\prime }) \\ge V^{\\pi }(\\mathcal {T}) - L\\cdot \\inf _{\\mu \\in \\Gamma (\\mathcal {T}, \\mathcal {T}^{\\prime })} \\left(\\mathbb {E}_\\mu [d((s, g), (s^{\\prime }, g^{\\prime }))] \\right) = V^{\\pi }(\\mathcal {T}) - L\\cdot D(\\mathcal {T}, \\mathcal {T^{\\prime }}) .$ Experiment Settings Modified Environments Figure: Visualization of modified task 
distribution in Fetch environments.", "The object is uniformly generated on the green segment, and the goal is uniformly generated on the red segment.Fetch Environments: FetchPush-v1: Let the origin $(0,0,0)$ denote the projection of gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(0.15,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0)-(0.15,0.15,0)$ .", "FetchPickAndPlace-v1: Let the origin $(0,0,0)$ denote the projection of gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(0.15,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0.45)-(0.15,0.15,0.45)$ .", "FetchSlide-v1: Let the origin $(0,0,0)$ denote the projection of gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.05,-0.1,0)-(-0.05,0.1,0)$ , and the goal is uniformly generated on the segment $(0.55,-0.15,0)-(0.55,0.15,0)$ .", "Hand Environments: HandManipulateBlockRotate-v0, HandManipulateEggRotate-v0: Let $s_0$ be the default initial state defined in original simulator [38].", "The initial pose is generated by applying a rotation around $z$ -axis, where the rotation degree will be uniformly sampled from $[-\\pi /4,\\pi /4]$ .", "The goal is also rotated from $s_0$ around $z$ -axis, where the degree is uniformly sampled from $[\\pi -\\pi /4,\\pi +\\pi /4]$ .", "HandManipulatePenRotate-v0: We use the same setting as the original simulator.", "Reach Environments: FetchReach-v1: Let the origin $(0,0,0)$ denote the coordinate of gripper's initial position.", "Goal is uniformly generated on the segment $(-0.15,0.15,0.15)-(0.15,0.15,0.15)$ .", "HandReach-v0: Uniformly select one dimension of meeting point and add an offset of 0.005, where meeting point is defined in original simulator [38] Other attributes of the environment (such as horizon $H$ , reward function $R_g$ ) are kept the 
same as default.", "Evaluation Details All curves presented in this paper are plotted from 10 runs with random task initializations and seeds.", "Shaded region indicates 60% population around median.", "All curves are plotted using the same hyper-parameters (except ablation section).", "Following [2], an episode is considered successful if $\\Vert \\phi (s_H)-g\\Vert _2\\le \\delta _g$ is achieved, where $\\phi (s_H)$ is the object position at the end of the episode.", "$\\delta _g$ is the same threshold using in reward function (REF ).", "Details of Experiment with obstacle Using the same coordinate system as Appendix REF .", "Let the origin $(0,0,0)$ denote the projection of gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(-0.045,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0)-(-0.045,0.15,0)$ .", "The wall lies on $(-0.3,0,0)-(0,0,0)$ .", "The crafted distance used in Figure REF is calculated by the following rules.", "The distance metric between two initial states is kept as before.", "The distance between the hindsight goal $g$ and the desired goal $g^*$ is evaluated as the summation of two parts.", "The first part is the $\\ell _2$ distance between the goal $g$ and its closest point $g^{\\prime }$ on the blue polygonal line shown in Figure REF .", "The second part the distance between $g^{\\prime }$ and $g^*$ along the blue line.", "The above two terms are comined with the same ratio used in Eq.", "(REF ).", "Details of Experiment REF Figure: Visualization of modified task distribution in Experiment .", "The initial position of the object is as shown in this figure, and the goal is uniformly generated in the blue region.", "Since the environment is deterministic, the success rate $f(\\pi ,g)$ is defines as $f(\\pi ,g)=\\int _{g^{\\prime }\\in \\mathcal {B}(g,\\delta _g)}\\mathbb {1}[\\pi \\text{ achieves success for the goal } g^{\\prime }]\\;d g^{\\prime },$ where 
$\\mathcal {B}(g,\\delta _g)$ indicates a ball with radius $\\delta _g$ , centered at $g$ .", "And $\\delta _g$ is the same threshold using in reward function (REF ) and success testing.", "The average success rate oracle $f(\\pi ,g)$ is estimated by $10^2$ samples.", "Implementation Details Hyper-Parameters Almost all hyper-parameters using DDPG and HER are kept the same as benchmark results, only following terms differ with [38]: number of MPI workers: 1; buffer size: $10^4$ trajectories.", "Other hyper-parameters: Actor and critic networks: 3 layers with 256 units and ReLU activation; Adam optimizer with $10^{-3}$ learning rate; Polyak-averaging coefficient: 0.95; Action $L_2$ -norm penalty coefficient: 1.0; Batch size: 256; Probability of random actions: 0.3; Scale of additive Gaussian noise: 0.2; Probability of HER experience replay: 0.8; Number of batches to replay after collecting one trajectory: 20.", "Hyper-parameters in weighted bipartite matching: Lipschitz constant $L$ : 5.0; Distance weight $c$ : 3.0; Number of hindsight goals $K$ : 50 or 100.", "Details on Data Processing In policy training of HGG, we sample minibatches using HER.", "As a normalization step, we use Lipschitz constant $L^*=\\frac{L}{(1-\\gamma )d^{max}}$ in back-end computation, where $d^{max}$ is the $\\ell _2$ -diameter of the goal space $\\mathcal {G}$ , and $L$ corresponds to the amount discussed in ablation study.", "To reduce computational cost of bipartite matching, we approximate the buffer set by a First-In-First-Out queue containing $10^3$ recent trajectories.", "An additional Gaussian noise $\\mathcal {N}(0,0.05I)$ is added to goals generated by HGG in Fetch environments.", "We don't add this term in Hand environments because the goal space is not $\\mathbb {R}^d$ .", "Additional Experiment Results Additional Visualization of Hindsight Goals Generated by HGG Figure: Additional visualization to illustrate the hindsight goals generated by HGG.To give better intuitive 
illustrations on our motivation, we provide an additional visualization of goal distribution generated by HGG on a complex manipulation task FetchPickAndPlace (Figures REF and REF ).", "In Figure REF , “blue to green” corresponds to the generated goals during training.", "HGG will guide the agent to understand the location of the object in the early stage, and move it to its nearby region.", "Then it will learn to move the object towards the easiest direction, i.e.", "pushing the object to the location underneath the actual goal, and finally pick it up.", "For those tasks which are hard to visualize, such as the HandManipultation tasks, we plotted the curves of distances between proposed exploratory goals and actually desired goals (Figure REF ), all experiment followed the similar learning dynamics.", "Evaluation on Standard Tasks In this section, we provide experiment results on standard Fetch tasks.", "The learning are shown in Figure REF .", "Figure: Learning curves for HGG and HER in standard task distribution created by .", "Additional Experiment Results on Section REF We provide the comparison of the performance of HGG and explicit curriculum learning on FetchPickAndPlace environment (see Figure REF ), showing that the result given in Section REF generalizes to a different environment.", "Figure: Comparison with explicit curriculum learning in FetchPickAndPlace.", "The initial position of the object is as shown in the left figure, and the goal is generated in the blue region following the default distribution created by .", "Ablation Study We provide full experiments on ablation study in Figure REF .", "Figure: A full version of ablation study." 
], [ "Proof of Theorem 1", "In this section we provide the proof of Theorem 1.", "* By Eq.", "(REF ), for any quadruple $(s, g, s^{\prime }, g^{\prime })$ , we have $V^{\pi }(s^{\prime }, g^{\prime }) \ge V^{\pi }(s, g) - L \cdot d((s, g), (s^{\prime }, g^{\prime })).", "$ For any $\mu \in \Gamma (\mathcal {T},\mathcal {T}^{\prime })$ , we sample $(s, g, s^{\prime }, g^{\prime }) \sim \mu $ and take the expectation on both sides of Eq.", "(REF ), and get $V^{\pi }(\mathcal {T}^{\prime }) \ge V^{\pi }(\mathcal {T}) - L\cdot \mathbb {E}_\mu [d((s, g), (s^{\prime }, g^{\prime }))] .", "$ Since Eq.", "(REF ) holds for any $\mu \in \Gamma (\mathcal {T}, \mathcal {T^{\prime }})$ , we have $V^{\pi }(\mathcal {T}^{\prime }) \ge V^{\pi }(\mathcal {T}) - L\cdot \inf _{\mu \in \Gamma (\mathcal {T}, \mathcal {T}^{\prime })} \left(\mathbb {E}_\mu [d((s, g), (s^{\prime }, g^{\prime }))] \right) = V^{\pi }(\mathcal {T}) - L\cdot D(\mathcal {T}, \mathcal {T^{\prime }}) .$" ], [ "Modified Environments", "Figure: Visualization of modified task distribution in Fetch environments.", "The object is uniformly generated on the green segment, and the goal is uniformly generated on the red segment.", "Fetch Environments: FetchPush-v1: Let the origin $(0,0,0)$ denote the projection of the gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(0.15,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0)-(0.15,0.15,0)$ .", "FetchPickAndPlace-v1: Let the origin $(0,0,0)$ denote the projection of the gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(0.15,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0.45)-(0.15,0.15,0.45)$ .", "FetchSlide-v1: Let the origin $(0,0,0)$ denote the projection of the gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.05,-0.1,0)-(-0.05,0.1,0)$ , and the goal is uniformly generated on the segment $(0.55,-0.15,0)-(0.55,0.15,0)$ .", "Hand Environments: HandManipulateBlockRotate-v0,
HandManipulateEggRotate-v0: Let $s_0$ be the default initial state defined in the original simulator [38].", "The initial pose is generated by applying a rotation around the $z$ -axis, where the rotation angle is uniformly sampled from $[-\pi /4,\pi /4]$ .", "The goal is also rotated from $s_0$ around the $z$ -axis, where the angle is uniformly sampled from $[\pi -\pi /4,\pi +\pi /4]$ .", "HandManipulatePenRotate-v0: We use the same setting as the original simulator.", "Reach Environments: FetchReach-v1: Let the origin $(0,0,0)$ denote the coordinate of the gripper's initial position.", "The goal is uniformly generated on the segment $(-0.15,0.15,0.15)-(0.15,0.15,0.15)$ .", "HandReach-v0: Uniformly select one dimension of the meeting point and add an offset of 0.005, where the meeting point is defined in the original simulator [38].", "Other attributes of the environment (such as the horizon $H$ and the reward function $R_g$ ) are kept the same as the defaults." ], [ "Evaluation Details", " All curves presented in this paper are plotted from 10 runs with random task initializations and seeds.", "The shaded region indicates the 60% population around the median.", "All curves are plotted using the same hyper-parameters (except in the ablation section).", "Following [2], an episode is considered successful if $\Vert \phi (s_H)-g\Vert _2\le \delta _g$ is achieved, where $\phi (s_H)$ is the object position at the end of the episode.", "$\delta _g$ is the same threshold used in the reward function (REF )."
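As a concrete illustration of the success criterion above, a minimal sketch; the function name and the default threshold are illustrative assumptions (the actual $\delta_g$ is environment-specific):

```python
import math

def is_success(achieved_goal, desired_goal, delta_g=0.05):
    """Evaluation-time success test: the l2 distance between the final
    achieved goal phi(s_H) and the desired goal g must not exceed the
    threshold delta_g (the default value here is illustrative only)."""
    dist = math.sqrt(sum((a - d) ** 2
                         for a, d in zip(achieved_goal, desired_goal)))
    return dist <= delta_g
```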
], [ "Details of Experiment with obstacle", "We use the same coordinate system as in Appendix REF .", "Let the origin $(0,0,0)$ denote the projection of the gripper's initial coordinate on the table.", "The object is uniformly generated on the segment $(-0.15,-0.15,0)-(-0.045,-0.15,0)$ , and the goal is uniformly generated on the segment $(-0.15,0.15,0)-(-0.045,0.15,0)$ .", "The wall lies on $(-0.3,0,0)-(0,0,0)$ .", "The crafted distance used in Figure REF is calculated by the following rules.", "The distance metric between two initial states is kept as before.", "The distance between the hindsight goal $g$ and the desired goal $g^*$ is evaluated as the sum of two parts.", "The first part is the $\ell _2$ distance between the goal $g$ and its closest point $g^{\prime }$ on the blue polygonal line shown in Figure REF .", "The second part is the distance between $g^{\prime }$ and $g^*$ along the blue line.", "The above two terms are combined with the same ratio used in Eq.", "(REF )." ], [ "Details of Experiment ", "Figure: Visualization of modified task distribution in Experiment .", "The initial position of the object is as shown in this figure, and the goal is uniformly generated in the blue region.", "Since the environment is deterministic, the success rate $f(\pi ,g)$ is defined as $f(\pi ,g)=\int _{g^{\prime }\in \mathcal {B}(g,\delta _g)}\mathbb {1}[\pi \text{ achieves success for the goal } g^{\prime }]\;d g^{\prime },$ where $\mathcal {B}(g,\delta _g)$ indicates a ball with radius $\delta _g$ , centered at $g$ .", "$\delta _g$ is the same threshold used in the reward function (REF ) and in success testing.", "The average success rate oracle $f(\pi ,g)$ is estimated from $10^2$ samples."
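The crafted obstacle-aware distance described above can be sketched in a few lines: project the hindsight goal onto the polygonal line, take the perpendicular $\ell_2$ part, and add the arc length from the projection to the desired goal, here assumed to sit at the final polyline vertex. Function names and the `weight` parameter (standing in for the combination ratio of the two terms) are hypothetical:

```python
import math

def _closest_on_segment(p, a, b):
    # Closest point to p on segment a-b, plus its parameter t in [0, 1].
    dx, dy = b[0] - a[0], b[1] - a[1]
    length2 = dx * dx + dy * dy
    t = 0.0 if length2 == 0 else max(0.0, min(1.0,
        ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / length2))
    return (a[0] + t * dx, a[1] + t * dy), t

def crafted_goal_distance(g, polyline, weight=1.0):
    """Distance from hindsight goal g to the desired goal g*, assumed to
    be the last vertex of `polyline`: weight * (l2 distance from g to its
    closest polyline point g') + arc length from g' to g* along the line."""
    seg_len = [math.dist(polyline[i], polyline[i + 1])
               for i in range(len(polyline) - 1)]
    best = None
    for i in range(len(polyline) - 1):
        q, t = _closest_on_segment(g, polyline[i], polyline[i + 1])
        d_perp = math.dist(g, q)
        # Remaining arc length from the projection q to the final vertex.
        d_along = (1.0 - t) * seg_len[i] + sum(seg_len[i + 1:])
        total = weight * d_perp + d_along
        if best is None or total < best:
            best = total
    return best
```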
], [ "Hyper-Parameters", "Almost all hyper-parameters of DDPG and HER are kept the same as in the benchmark results; only the following terms differ from [38]: number of MPI workers: 1; buffer size: $10^4$ trajectories.", "Other hyper-parameters: Actor and critic networks: 3 layers with 256 units and ReLU activation; Adam optimizer with $10^{-3}$ learning rate; Polyak-averaging coefficient: 0.95; Action $L_2$ -norm penalty coefficient: 1.0; Batch size: 256; Probability of random actions: 0.3; Scale of additive Gaussian noise: 0.2; Probability of HER experience replay: 0.8; Number of batches to replay after collecting one trajectory: 20.", "Hyper-parameters in weighted bipartite matching: Lipschitz constant $L$ : 5.0; Distance weight $c$ : 3.0; Number of hindsight goals $K$ : 50 or 100." ], [ "Details on Data Processing", " In policy training of HGG, we sample minibatches using HER.", "As a normalization step, we use the Lipschitz constant $L^*=\frac{L}{(1-\gamma )d^{max}}$ in the back-end computation, where $d^{max}$ is the $\ell _2$ -diameter of the goal space $\mathcal {G}$ , and $L$ corresponds to the amount discussed in the ablation study.", "To reduce the computational cost of bipartite matching, we approximate the buffer set by a First-In-First-Out queue containing the $10^3$ most recent trajectories.", "Additional Gaussian noise $\mathcal {N}(0,0.05I)$ is added to goals generated by HGG in Fetch environments.", "We do not add this term in Hand environments because the goal space is not $\mathbb {R}^d$ ."
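Two of the quantities above are easy to sketch in code: the normalization $L^* = L/((1-\gamma)d^{max})$ and a minimum-weight bipartite matching over hindsight goals. The matching solver below uses brute-force enumeration purely for illustration (a real implementation would use a Hungarian-algorithm solver); all names are hypothetical and the cost matrix is left abstract:

```python
import itertools
import math

def normalized_lipschitz(L, gamma, d_max):
    """Back-end normalization L* = L / ((1 - gamma) * d_max), where d_max
    is the l2-diameter of the goal space."""
    return L / ((1.0 - gamma) * d_max)

def min_cost_matching(cost):
    """Exact minimum-weight bipartite matching by enumeration -- a tiny
    stand-in for a real combinatorial solver (O(n!), so only viable for
    very small instances).  cost[i][j] is the cost of assigning
    candidate hindsight goal j to target task i."""
    n = len(cost)
    best_cost, best_perm = math.inf, None
    for perm in itertools.permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, best_perm
```

For example, with the quoted $L = 5.0$, a discount $\gamma = 0.98$ (an assumed value) and $d^{max} = 1$, the normalization gives $L^* = 250$.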
], [ "Additional Visualization of Hindsight Goals Generated by HGG", "Figure: Additional visualization to illustrate the hindsight goals generated by HGG.", "To give a better intuitive illustration of our motivation, we provide an additional visualization of the goal distribution generated by HGG on the complex manipulation task FetchPickAndPlace (Figures REF and REF ).", "In Figure REF , “blue to green” corresponds to the generated goals during training.", "HGG will guide the agent to understand the location of the object in the early stage, and move it to its nearby region.", "Then it will learn to move the object towards the easiest direction, i.e.", "pushing the object to the location underneath the actual goal, and finally pick it up.", "For those tasks which are hard to visualize, such as the HandManipulation tasks, we plot the curves of distances between the proposed exploratory goals and the actually desired goals (Figure REF ); all experiments follow similar learning dynamics." ], [ "Evaluation on Standard Tasks", "In this section, we provide experimental results on the standard Fetch tasks.", "The learning curves are shown in Figure REF .", "Figure: Learning curves for HGG and HER in standard task distribution created by ." ], [ "Additional Experiment Results on Section ", "We provide a comparison of the performance of HGG and explicit curriculum learning on the FetchPickAndPlace environment (see Figure REF ), showing that the result given in Section REF generalizes to a different environment.", "Figure: Comparison with explicit curriculum learning in FetchPickAndPlace.", "The initial position of the object is as shown in the left figure, and the goal is generated in the blue region following the default distribution created by ." ], [ "Ablation Study", "We provide the full set of ablation experiments in Figure REF .", "Figure: A full version of ablation study." ] ]
1906.04279
[ [ "Floquet oscillations in periodically driven Dirac systems" ], [ "Abstract Electrons in a lattice exhibit time-periodic motion, known as Bloch oscillation, when subject to an additional static electric field.", "Here we show that a corresponding dynamics can occur upon replacing the spatially periodic potential by a time-periodic driving: Floquet oscillations of charge carriers in a spatially homogeneous system.", "The time lattice of the driving gives rise to Floquet bands that take on the role of the usual Bloch bands.", "For two different drivings (harmonic driving and periodic kicking through pulses) of systems with linear dispersion we demonstrate the existence of such oscillations, both by directly propagating wave packets and based on a complementary Floquet analysis.", "The Floquet oscillations feature richer oscillation patterns than their Bloch counterpart and enable the imaging of Floquet bands.", "Moreover, their period can be directly tuned through the driving frequency.", "Such oscillations should be experimentally observable in effective Dirac systems, such as graphene, when illuminated with circularly polarized light." ], [ "Introduction", "In the early days of quantum mechanics, F. Bloch and C. 
Zener [1], [2] predicted that electrons in a periodic potential, when accelerated by a constant external electric field, perform a time-periodic motion, by now well known as Bloch oscillation [3].", "It took about 60 years until this phenomenon was observed in biased semiconductor superlattices [4], [5], [6], [7].", "Since then Bloch oscillations or analogs of them have been found in various systems ranging from cold atom gases [8], [9] to classical optical [10], [11] and acoustic waves [12], to name a few.", "In 2014, Bloch oscillations due to the crystal lattice of a biased bulk semiconductor were eventually observed [13].", "In the meantime, scientific interest in tuning Bloch bands by means of external time-periodic driving has rapidly grown, especially since the proposal of so-called Floquet topological insulators [14] demonstrating the powerful influence external driving can exert on the properties of a crystal.", "Recent experiments also showed that Floquet band engineering allows for switching Bloch oscillations on and off [15].", "Moreover, additional driving can immensely increase the amplitude of conventional Bloch oscillations, giving rise to “super” Bloch oscillations [16], [17], [18], [19].", "Here we propose to consider the opposite limit of a time-periodically driven system without any spatial lattice, but still subject to a constant external electric field.", "We demonstrate that, most notably, still spatially periodic motion of the charge carriers can appear.", "We call this type of dynamics Floquet oscillations since they arise from the periodic repetitions of Floquet quasi-energy bands.", "So far very few works have addressed Bloch-type oscillations in the absence of an external lattice.", "One interesting prediction refers to Bloch oscillations of light, i.e.", "frequency oscillations of photons [20].", "Further Bloch-type oscillations were predicted theoretically for interacting 1d spinor gases [21] and recently observed in an atomic Bose 
liquid [22].", "In these settings interactions lead to the dynamical formation of periodic structures, which can yield oscillations à la Bloch.", "They do not, however, involve external drivings, and hence are of different nature than the Floquet oscillations predicted here.", "Specifically, we show that such periodic modes can emerge in spatially uniform systems governed by effective Dirac Hamiltonians, where the linear dispersion converts the energy periodicity of the Floquet spectrum into approximately $k$ -periodic bands.", "Most notably, Floquet oscillations are a quantum phenomenon distinctly different from the classical oscillatory motion of a charge in an ac field.", "Instead of following the driving frequency, they exhibit a frequency inversely proportional to it.", "Moreover, Floquet oscillations offer a possibility to directly image the Floquet quasi-band structure.", "Interestingly, they additionally show zitterbewegung features.", "We support our predictions by numerical calculations for two experimentally relevant prototypes of external driving, a periodic pulse sequence and circularly polarized radiation." 
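To get a feel for the scales involved, one can evaluate the Floquet oscillation period $T_F = \hbar\omega/(v_F|eE|)$ derived in the next section for graphene-like numbers; a back-of-the-envelope sketch in which all parameter values are illustrative assumptions, not the ones used in the simulations below:

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
v_F = 1.0e6              # m/s, graphene-like Fermi velocity (assumed)
T_drive = 1.0e-12        # s, driving period, i.e. a 1 THz cycle (assumed)
E = 1.0e3                # V/m, static drift field (assumed)

omega = 2.0 * math.pi / T_drive
# Floquet oscillation period T_F = hbar * omega / (v_F * |e E|)
T_F = hbar * omega / (v_F * e * E)
print(T_F / T_drive)  # for these numbers T_F spans a few driving periods
```

This illustrates the tunability stated above: unlike the Bloch period, which is pinned by the lattice constant, $T_F$ can be pushed above the driving period simply by choosing a weak enough drift field.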
], [ "General concept of Floquet oscillations", "Consider a system $H_0(\\mathbf {k})$ (with momentum operator $\\mathbf {k}$ ) subject to a time-periodic driving $V_T(t)$ with period $T$ and frequency $\\omega = 2\\pi /T$ described by the Hamiltonian $H(\\mathbf {k},t)=H_0(\\mathbf {k}) + V_T(t) = H(\\mathbf {k},t+T) \\, .$ Via Floquet theory [23], [24], [25] the problem is transformed to finding the quasi-energy eigenvalues $\\epsilon (\\mathbf {k})$ of the Floquet Hamiltonian $\\mathcal {H}_F(\\mathbf {k})=H(\\mathbf {k},t)-\\textrm {i}\\hbar \\frac{\\partial }{\\partial t}$ .", "The quasi-energies $\\epsilon (\\mathbf {k})$ extend to infinity in $\\mathbf {k}$ -space in absence of a spatial lattice.", "However, they are periodic in quasi-energy $\\epsilon \\in [-\\hbar \\omega /2,\\hbar \\omega /2]$ forming a sequence of Floquet replica which are the analogue of usual Bloch bands – the latter being periodic in quasi-momentum $k\\in [-\\pi /a,\\pi /a]$ due to spatial periodicity with lattice constant $a$ .", "For the undriven system, we take an (effective) two-dimensional Dirac Hamiltonian $H_0(\\mathbf {k})=\\hbar v_F \\mathbf {k} \\cdot \\sigma ,$ with constant Fermi velocity $v_F$ and $\\sigma =(\\sigma _x, \\sigma _y)$ the vector of Pauli matrices.", "The spectrum of $H_0(\\mathbf {k})$ is composed of two energy branches $E_\\pm = \\pm \\hbar v_F k$ with $k = |\\mathbf {k}|$ .", "The Floquet bands $\\epsilon _\\pm $ that emerge when adding the time-periodic driving $V_T(t)$ to the system are sketched in Fig.", "REF .", "Due to the radial symmetry of the bands, we only show a cut along $k_y = 0$ .", "In the limit of infinitely small driving $V_T(t)$ , the bare Dirac cone $E_\\pm $ (black) is accompanied by a mesh of intersecting replica (gray) that are shifted in energy by multiples of $\\hbar \\omega $ .", "The blue dashed lines mark the corresponding first Brillouin zone (BZ) in energy.", "For finite $V_T(t)$ band gaps open at replica crossings and 
separated Floquet bands emerge (red dashed curves, see Appendix for an introduction to the relevant aspects of Floquet theory).", "The band gaps sketched in Fig.", "REF emerge since different replica are coupled.", "Under the influence of the drift field $E$ , a particle follows the band adiabatically along $k$ and successively absorbs/emits an increasing number $n$ of photons $\hbar \omega $ at each avoided crossing between two replica of the (undriven) Dirac cone with relative energy shift $n\hbar \omega $ .", "Hence, the farther away from $k=0$ , the more photons are needed to bridge the energy difference required for adiabatic dynamics associated with Floquet oscillations.", "As we will show below, time-periodic potentials $V_T$ with pronounced higher-order (in $\omega $ ) Fourier components will correspondingly open band gaps farther away from $k=0$ compared to, e.g., single-frequency harmonic driving.", "Due to the underlying linear (Dirac) dispersion, the Floquet bands are approximately $k$ -periodic, implying particularly pronounced Floquet oscillations with a well-defined frequency, in close analogy to Bloch oscillations.", "(General nonlinear dispersions $\epsilon _\pm (k)$ , e.g.", "two hyperbolic branches, give rise to sequences of intersections that are not equidistant in $k$ and thereby deny a constant Floquet oscillation period.)"
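Numerically, such Floquet bands can be obtained without any perturbative expansion by diagonalizing the one-period evolution (monodromy) operator $U(T)$ mode by mode: its eigenphases give $\epsilon_\pm(k)$ modulo $\hbar\omega$. A minimal self-contained sketch for a single 1d Dirac mode under a piecewise-constant mass-gap driving (an illustrative protocol of the kind discussed later; units $\hbar = 1$, all parameter values in the example are made up):

```python
import cmath
import math

def u_twolevel(dx, dz, t):
    """exp(-i H t) for H = dx*sigma_x + dz*sigma_z (hbar = 1), via the
    closed form exp(-i t r n.sigma) = cos(rt) I - i sin(rt) n.sigma.
    Assumes r = |(dx, dz)| > 0."""
    r = math.hypot(dx, dz)
    c, s = math.cos(r * t), math.sin(r * t)
    nx, nz = dx / r, dz / r
    return [[c - 1j * s * nz, -1j * s * nx],
            [-1j * s * nx, c + 1j * s * nz]]

def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def quasienergies(vk, M, T, dt):
    """Floquet quasi-energies of one Dirac mode H0 = vk*sigma_x that is
    kicked by a mass term M*sigma_z during the last dt of each period T.
    Returned values lie in the first energy Brillouin zone [-pi/T, pi/T)."""
    U = matmul2(u_twolevel(vk, M, dt),        # pulse (acts last)
                u_twolevel(vk, 0.0, T - dt))  # free evolution
    tr = U[0][0] + U[1][1]
    det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    lams = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    # Eigenvalue of U is exp(-i * eps * T)  ->  eps = -arg(lambda) / T.
    return sorted(-cmath.phase(lam) / T for lam in lams)
```

With $M = 0$ the two quasi-energies simply reproduce the folded free dispersion $\pm v_F k$; switching on $M$ opens the avoided crossings discussed above.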
To gain insight into the Floquet oscillation dynamics let us be definite and consider the time evolution of a wave packet (WP) of Dirac electrons under the influence of an additional, constant electric field $\mathbf {E} = E \mathbf {e}_x$ .", "Here we consider the 1d motion along the field direction; generalizations to higher dimensions are straightforward.", "Due to the drift potential $V (x) = - eE x,\ e>0,$ the WP is accelerated and its initial wave vector $k_i$ changes linearly in time [18]: $k(t)= k_i - (e E/\hbar ) t \, .$ For ordinary Bloch bands, the BZ with period $\Delta k_{osc}\!=\! 2\pi /a$ is traversed in the time $T_B = 2\pi \hbar /(a|eE|).$ While crossing the BZ, a charge carrier changes its velocity according to the change in slope of the $k$ -periodic band structure, leading to a Bloch oscillation in $k(t)$ with frequency $\omega _B=2\pi /T_B$ .", "For Floquet systems the velocity operator is given by $\hat{v} = \textrm {d}\mathcal {H}_F/ \textrm {d}k$  [18].", "The diagonal terms of a WP's velocity read $\langle \hat{v}_{\alpha \alpha } \rangle \!=\! v_{\alpha \alpha }\!=\! \textrm {d}\epsilon _\alpha / \textrm {d}k$ , with $\alpha \!=\!\pm $ , and the corresponding position expectation value $\langle \hat{x}_{\alpha \alpha } \rangle = x_{\alpha \alpha }$ is obtained by time integration of $v_{\alpha \alpha }$ .", "Using Eq.", "(REF ) this can be substituted by an integration over $k$ , leading to $x_{\alpha \alpha }(k(t))= \frac{\hbar }{e E}\left[\epsilon _\alpha (k(t)) - \epsilon _\alpha (k_i)\right] \, .$ These diagonal contributions to $\langle \hat{x} \rangle (t)$ encode features of the Floquet band structure into the WP position.", "In particular, analogously to conventional Bloch oscillations, the WP is expected to perform oscillations for Floquet bands similar to the ones sketched in Fig.", "REF (red dashed lines).", "During one (Floquet) oscillation $k$ changes by the
period $\\Delta k_{osc}$ of the Floquet bands (Fig.", "REF ).", "Hence, Eq.", "(REF ) implies the corresponding period $T_F = (\\hbar /|eE|) \\Delta k_{osc}$ .", "Due to the linear dispersion $E_\\pm (k)=\\pm \\hbar v_F k$ , we have $\\hbar v_F\\Delta k_{osc} = \\hbar \\omega $ so that the Floquet oscillation period reads $T_F = \\frac{\\hbar \\omega }{v_F |e E|} \\, .$ $T_F$ is proportional to the inverse period $1/T=\\omega /(2\\pi )$ of the driving in Eq.", "(REF ).", "While its analogue, the Bloch period $T_B$ , Eq.", "(REF ), is determined by the inverse (super-)lattice constant $1/a$ , usually fixed in experiment, $T_F$ can be tuned through $\\omega $ in a range such that $T_F > T$ ." ], [ "Results for representative driving protocols", "In the following we consider two representative types of driving well suited to generate Floquet oscillations: a periodic sequence of short pulses and a circularly polarized light field." ], [ "Short pulse sequence", "For the first driving protocol we use $V_T(t) = \\sum _{l=1}^{\\infty } \\Theta (t-(lT-\\Delta t)) \\Theta (lT-t) \\ M\\sigma _z \\,$ in Eq.", "(REF ) (with Heaviside function $\\Theta $ and Pauli matrix $\\sigma _z$ ).", "This pulse train periodically couples the two branches of the Dirac Hamiltonian $H_0({\\mathbf {k}})$ , Eq.", "(REF ), by opening a mass gap of strength $M$ and duration $\\Delta t$ , see lower inset in Fig.", "REF (a).", "The resulting Floquet spectrum can be tuned either by $M$ or by the ratio $\\Delta t/T$ .", "To be definite we choose the normalized pulse strength $M/\\hbar \\omega = 4.4/\\pi $ and $\\Delta t/T = 0.09$ .", "The resulting Floquet band structure for this representative set of parameters is shown in Fig.", "REF (a).", "The driving opens a sequence of gaps around $\\epsilon \\!=\\!0$ at the intersections of the original Dirac spectrum and its replicas.", "A detailed analysis of these Floquet bands and the $k$ -dependence of the gaps is given in Appendix .", "A WP with initial 
quasi-energy and wave number as marked by the blue dot in the zoomed-in area in Fig.", "REF (a) (red box, $\bar{k}_i v_F/\omega = 0.25$ ) will undergo Floquet oscillations (blue arrows) when driven through the bands by a static electric field $E<0$ , Eq.", "(REF ), such that $k$ increases in time, Eq.", "(REF ).", "Notably, since the Floquet band maxima and hence the band width vary over $k$ -space, the resulting Floquet oscillation is expected to change its amplitude but not its frequency.", "In our simulations we choose the field strength $E$ such that, according to Eq.", "(REF ), $T_F/T = 2 \pi \hbar /(T^2 v_F |eE|) \simeq 20.8 \pi $ .", "We take initially Gaussian WPs of the form $\tilde{\Psi }(k,0) = \sqrt{\frac{1}{\sqrt{\pi }\Delta k}} \exp \left( -\frac{1}{2\Delta k^2} (k\!-\!\bar{k}_i)^2 \right) \cdot \begin{pmatrix}1 \\ 1\end{pmatrix},$ with width $\Delta k$ and initial central mode $\bar{k}_i$ .", "We employ two complementary approaches to compute and analyze Floquet oscillations: Floquet theory and direct time-integration of the full time-dependent effective Dirac equation including the $\bf E$ -field.", "To compute the WP velocity within Floquet theory we start from Ehrenfest's theorem $v(t) =\langle \hat{v}(t) \rangle = \frac{i}{\hbar } \langle \Psi (t) | \left[ H(t), \hat{x} \right] | \Psi (t) \rangle ,$ where $\left[ H(t), \hat{x} \right] = -i\hbar v_F \sigma _x$ for the Dirac case.", "$| \Psi (t)\rangle $ is obtained via the time-evolution operator of a Floquet system [27] that for a single $k$ -mode reads $U_k(t, 0) = \sum _{\alpha =\pm } \exp \left(-\frac{i}{\hbar } \epsilon _\alpha (k)t \right) | \phi _{\alpha ,k} (t) \rangle \langle \phi _{\alpha ,k} (0) | \, .$ Here $\epsilon _\alpha $ are the Floquet quasi-energies and $| \phi _{\alpha ,k} (t) \rangle $ the corresponding eigenstates of ${\cal H}_F$ , including replicas $n\hbar \omega $ (see Eq.", "(REF )), each consisting
of two branches $\alpha \!=\!\pm $ from the linear dispersion.", "The additional electric field induces a linear change of $k$ , which we account for by adjusting $k(t)$ according to Eq.", "(REF ).", "Applying $U_k(t, 0)$ to an initial (WP) state $| \Psi (0)\rangle = \sum _{k_i} \sum _{\alpha =\pm } c_{\alpha , k_i}|\phi _{\alpha , k_i}(0)\rangle ,$ where $|c_{\alpha , k_i}|^2 = |\langle \phi _{\alpha , k_i}(0) | \Psi (0)\rangle |^2$ describes the initial occupation of branch $\alpha $ , and plugging Eq.", "(REF ) into Ehrenfest's theorem gives $v(t) =& v_F \sum _{k_i}\sum _{\alpha , \beta =\pm } c_{\alpha , k_i}^\ast c_{\beta , k_i}\langle \phi _{\alpha , k(t)}(t)| \sigma _x |\phi _{\beta , k(t)} (t) \rangle \nonumber \\& \qquad \quad \times \exp \left(- \frac{i}{\hbar } \left[ \epsilon _\beta \left(k(t)\right)-\epsilon _\alpha \left(k(t)\right)\right] t \right) \, .$ Here $k(t)$ is given by Eq.", "(REF ).", "The occupation $|c_{\alpha , k_i}|^2$ is time-independent as long as different Floquet bands are far enough apart for Landau-Zener interband transitions [28], [29], [30], [31], [32], [33] to be neglected.", "This is the case for the time scales $t\le 200T$ considered below.", "For the periodic pulse sequence, Eq.", "(REF ), we numerically compute $v(t)$ by means of Eq.", "(REF ) and $x(t)=\int _0^t \textrm {d}t^\prime v(t^\prime )$ .", "Due to the rectangular pulse shape we must include up to $n=500$ Floquet modes to achieve sufficient convergence.", "The results are shown in the left panel of Fig.", "REF (b) as red and green lines for $x(t)$ and $v(t)$ , respectively.", "They indeed show distinct Floquet oscillations, as predicted, with period $T_F \simeq 20.5\pi T$ , matching the expected value $T_F \simeq 20.8\pi T$ from Eq.", "(REF ).", "The off-diagonal velocity term ($\alpha \ne \beta $ ) in Eq.", "(REF ) encodes the interference of states living in different Floquet bands, giving rise to an additional
feature, zitterbewegung [34], [35] caused by the Floquet Dirac band structure (see Appendix ).", "Our second, complementary method to compute Floquet oscillations is based on the WP propagation algorithm “Time-dependent Quantum Transport” (TQT) [36], see also Appendix .", "The WP is discretized on a rectangular grid and the time-evolution operator for the full Hamiltonian including the drift field, $ H(t)\!=\! H_0(\mathbf {k}) \!+\! V_T(t) \!-\! eE x $ , is computed.", "Then the Lanczos method [37] is used to evaluate the action of the time-evolution operator on the WP and to compute $\Psi (x,t)$ .", "The time-dependent position and velocity expectation values are then calculated through $x(t) = \langle \hat{x}(t) \rangle = \int |\Psi (x, t)|^2 x \ \textrm {d}x$ and $v(t) = \langle \hat{v}(t) \rangle = \frac{\textrm {d}}{\textrm {d}t} \langle \hat{x}(t) \rangle .$ The resulting TQT data is shown in Fig.", "REF (b), left panel, as black and blue curves for $x(t)$ and $v(t)$ , respectively.", "They approximately coincide with those computed within Floquet theory, also showing zitterbewegung on top of the Floquet oscillations.", "Moreover, the WP position $x(t)$ computed with TQT precisely reflects characteristic features of the Floquet quasi-bands, shown in the red box in Fig.", "REF (a), namely the increasing amplitude and the sharpening of the maxima and minima, although TQT directly integrates the time-dependent Schrödinger equation without using the Floquet formalism.", "While the static electric drift field enters the full Hamiltonian governing the numerically exact TQT time evolution, within the Floquet approach its effect is included in the time evolution via the acceleration theorem (REF ), Eqs.", "(REF , REF ).", "The latter approximation, together with residual numerical errors from the cutoff in the Floquet quantum number, could explain the slight deviations between Floquet and TQT data in the left panel of Fig.", "REF
(b).", "Finally, in the right panel of Fig.", "REF (b) we present snapshots of the absolute square $|\Psi (x,t)|^2$ taken at the turning points (marked as black crosses) of the red curve in the left panel.", "They show clear-cut Floquet oscillations of the full WP in configuration space generated for the setting of a periodic pulse sequence." ], [ "Circularly polarized light", "The experimental realization of Floquet oscillations could be easier to achieve in an alternative setup, employing circularly polarized light as periodic driving.", "The associated vector potential $\mathbf {A}$ enters (linearly) the Dirac Hamiltonian (REF ) via the minimal coupling $V_T(t) = \mathbf {A}(t) \cdot \sigma = A \begin{pmatrix}\cos (\omega t) \\ \sin (\omega t)\end{pmatrix} \cdot \sigma .$ The Floquet quasi-bands of graphene illuminated by circularly polarized light have already been studied extensively [38], [39], [40], [41].", "Recently, also transport [42] and topological [43], [44], [45], [46] properties were investigated.", "Instead, here we focus on generating Floquet oscillations for realistic experimental parameters.", "To be closer to measurements, we explicitly treat the 2d case with an initial, radially symmetric Gaussian WP analogous to Eq.", "(REF ), with $k$ and $\bar{k}_i$ replaced by $\mathbf {k}$ and $\bar{\mathbf {k}}_i = (\bar{k}_i , 0)$ .", "Using again TQT we simulate the WP dynamics (see Fig.", "REF ) in the presence of the electric field $\mathbf {E} = E \mathbf {e}_x$ , where $T_F/T = 2\pi \hbar /(T^2 v_F |eE|) \simeq 140$ .", "Figure: (a) Floquet bands of a Dirac system illuminated with circularly polarized light with scaled amplitude $\tilde{A} = 0.5$ (black curves) and 1.1 (red curves).", "(b) Floquet oscillations: Mean position of a WP driven through the bands shown in (a) starting with momenta $\bar{k}_i$ marked as blue dots in (a).", "The initial time $t_0$ for the black curve is shifted to $t_0/T=107.5$ to
highlight the close connection to the Floquet band structure of panel (a).", "Additional oscillations due to the momentum change caused by the circulating electric field of the pulse are not resolved on this timescale.", "We calculate the Floquet band structure and the WP dynamics for two different dimensionless driving strengths $\tilde{A}=v_F e A/(\hbar \omega )$ to demonstrate the influence of the field amplitude on the Floquet oscillation frequency.", "Figure REF (a) shows the radially symmetric Floquet band structure along the direction of the electric field.", "For small enough driving strengths, the distance of local band minima is independent of $k$ (black).", "For larger driving amplitudes, multiple-photon processes significantly alter the formation of band gaps.", "As displayed in Fig.", "REF (b), our TQT simulations of the position expectation values, Eq.", "(REF ), of two WPs with scaled initial momenta $\bar{k}_i v_F/\omega =-2.07$ and $-1.22$ for $\tilde{A} =$ 1.1 and 0.5, respectively (marked by blue dots in panel (a)) clearly show Floquet oscillations, nicely reflecting the shape of the underlying Floquet band structure as expected.", "Since the gaps between unperturbed Dirac cones open in a smaller $k$ -range than for the periodic pulse train (REF ), there are fewer cycles of regular Floquet oscillations.", "At longer times Landau-Zener transitions to neighboring Floquet bands substantially alter the WP motion.", "Nevertheless, Fig.", "REF shows Floquet oscillations with 4 full periods.", "In the famous experiments on Bloch oscillations in superlattices [5], [6], [7] and bulk semiconductors [13], detection was possible even though only $1-3$ periods could be achieved.", "Moreover, oscillations involving many periods exist for the case of periodic kicking, which can be experimentally realized through laser pulse trains.", "In analogy to Fig.", "REF (b), Fig.", "REF (a) shows the position expectation value, Eq.", "(REF ), along the electric field,
Eq.", "(REF ), and the 2d shape of the WP.", "Here we choose $\bar{k}_i v_F/\omega = -0.64$ and $T_F/T = 2\pi \hbar /(T^2 v_F |eE|) \simeq 14$ without shifting the initial time $t_0$ and including orange crosses to mark when the snapshots of the WP are taken.", "The snapshots displayed on the right show the absolute square of the WP in 2d space and the oscillation of its center of mass around $x=0$ .", "Note that according to Eq.", "(REF ) $T_F$ is inversely proportional to $E$ ; therefore, in Fig.", "REF the timescale of the Floquet oscillations is a factor of 10 smaller than in Fig.", "REF .", "Here, the oscillations induced by the circulating electric field, Eq.", "(REF ), are resolved on top of the slower Floquet oscillations.", "In Fig.", "REF (b), we plot the trajectory of the WP's center of mass in the $x$ -$y$ -plane.", "The red curve is meant to serve as a guide for the eye.", "In the trajectory, one can recognize the Floquet oscillations but also some additional features.", "The smaller oscillations have the same origin as the tiny oscillations on top of the comparably slow Floquet oscillations in Fig.", "REF (a): They arise from the momentum change due to the circulating electric field of the pulse, Eq.", "(REF ), and are modulated by zitterbewegung.", "The drift of the WP in the $y$ -direction, however, has a less intuitive explanation.", "We assume that it is caused by the anomalous velocity, i.e.", "the Berry curvature of the Floquet bands, since it is orthogonal to the electric driving field.", "This assumption will be tested in future studies.", "Figure: (a) Simulation of the position expectation value of the WP in the direction of the electric field, Eq.", "(), (as shown in Fig.", "(b)) and its shape in position space.", "The orange crosses mark when the snapshots of the WP are taken.", "The tiny oscillations on top of the comparably slow Floquet oscillations are due to the momentum change caused by the circulating electric field of the pulse, Eq.
().", "(b) Trajectory of the WP's center of mass $\langle \hat{ \mathbf {x}} \rangle (t)$ displayed in (a) in the $x$-$y$-plane.", "The red curve has been included as a guide for the eye." ], [ "Including trigonal warping and introducing experimental parameters", "Finally, we want to give a few more details on the experimental realizability of Floquet oscillations in graphene.", "First, we verify numerically that, for the relevant energy scales, trigonal warping effects do not play a role.", "To this end we extend our Hamiltonian $H_0(\mathbf {k})$ , Eq.", "(REF ), to $H(\mathbf {k}) = \hbar v_F \mathbf {k} \cdot \sigma - \mu \left[ \left(k_y^2 - k_x^2 \right) \sigma _x + 2 k_x k_y \sigma _y \right],$ with $v_F = 10^6 \,\rm m/s$ and $\mu = 3 a^2 t/8$ , where $a = 1.42 \,\rm Å$ is the nearest-neighbor distance and $t = 2.7 \,\rm eV$ the nearest-neighbor hopping strength of graphene [73].", "We set the light amplitude $A = 45 \,\rm nVs/m$ and the frequency $\omega /2\pi = 10 \,\rm THz$ .", "We simulate the wave packet motion for $\bar{k}_i = 0.013 \,\rm 1/Å$ .", "The described parameters are equivalent to the dimensionless values chosen for Fig.", "REF .", "In Fig.", "REF we compare the Floquet oscillations obtained with and without the trigonal warping term and find no qualitative differences.", "However, for experimental realization in graphene the transport relaxation time $\tau $ has to be longer than the period $T_F$ of the Floquet oscillations, Eq.", "(REF ), typically $\tau = 1 - 20 \,\rm ps$ [55], [56].", "As $T_F \simeq 13.5 \,\rm ps$ for $E = 30 \,\rm V/cm$ (Fig.", "REF (a)), good-quality samples would already allow for the observation of one period.", "If we increase the electric field by a factor of 10 (Fig.", "REF (b)), $T_F \simeq 1.3 \,\rm ps$ .", "Then, all 4 Floquet oscillation cycles could be detected.", "Note that for the higher electric field the Floquet oscillations become slightly altered.",
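These unit conversions are easy to cross-check. The following sketch (not part of the original analysis; it only assumes SI values for $\hbar $ and $e$ together with the relations $\tilde{A} = v_F e A/(\hbar \omega )$ and $T_F = 2\pi \hbar /(T v_F |eE|)$ quoted in the text) reproduces the dimensionless driving strength, the scaled initial momentum, and the two quoted Floquet periods.

```python
import math

hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C
v_F = 1.0e6             # m/s, graphene Fermi velocity
A = 45e-9               # V s / m, light amplitude
omega = 2 * math.pi * 10e12  # angular frequency for omega/2pi = 10 THz
T = 2 * math.pi / omega      # driving period

# dimensionless driving strength and scaled initial momentum
A_tilde = v_F * e * A / (hbar * omega)
k_scaled = 0.013e10 * v_F / omega   # for k_i = 0.013 / Angstrom

def T_F(E):
    """Floquet period T_F = 2*pi*hbar / (T * v_F * |e E|) for drift field E in V/m."""
    return 2 * math.pi * hbar / (T * v_F * e * E)

print(A_tilde)            # ~1.09, matching the quoted A-tilde = 1.1
print(k_scaled)           # ~2.07, matching |k_i| v_F / omega
print(T_F(30e2) * 1e12)   # ~13.8 ps for E = 30 V/cm (quoted: 13.5 ps)
print(T_F(300e2) * 1e12)  # ~1.4 ps for the tenfold field (quoted: 1.3 ps)
```

The small residual differences to the quoted values are consistent with rounding of the input parameters.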
"The additional oscillations are caused by zitterbewegung and the momentum change induced by the circularly polarized light field and are resolved here since the Floquet oscillations occur on the timescale of picoseconds." ], [ "Conclusions", "The above analysis and simulations constitute a proof of principle for generating Floquet oscillations in systems with an effective Dirac dispersion.", "Concerning possible experimental realizations, graphene [47], topological insulators [48], [49] and cold atoms in artificial honeycomb lattices [50] appear suitable [51], [52].", "The latter have the advantages that one can additionally tune $v_F$ entering the Floquet frequency and that relaxation through disorder or interaction effects can be avoided.", "Other effective Dirac systems, e.g.", "for plasmons [53] or polaritons [54], could also be considered.", "In the following, we will quantitatively focus on realizations for charge carriers in real monolayer graphene.", "The radiation frequency $\omega $ must be chosen such that several Floquet BZs lie in the energy range governed by the linear dispersion.", "In graphene, a frequency $\nu = \omega /2\pi $ of $1-10\,$ THz is small enough to accommodate roughly 50 to 5 Floquet replicas over the energy range of $400\,$ meV, for which the linear Dirac cone is a very good approximation.", "Additionally, $\omega > \omega _F \gg 2\pi /\tau $ is required, where $\tau $ denotes a typical relaxation time of charge carriers.", "To realize the oscillations shown in Fig.", "REF (b) with $\tilde{A}=1.1$ at a radiation frequency of $\omega /2\pi = 10 \,\rm THz$ , the moderate intensity (avoiding sample heating) of $I = \frac{c \epsilon _0}{2} A^2 \omega ^2 \simeq 1 \frac{\rm MW}{\rm cm^2}$ is needed.", "Here $c$ is the speed of light and $\epsilon _0$ the vacuum permittivity.", "Then an electric field $E\simeq 0.3 \frac{\rm kV}{\rm cm}$ is sufficient to generate Floquet oscillations of frequency $\omega
_F/2\\pi \\gtrsim 1 \\, \\rm THz$ .", "Hence $\\omega _F \\gtrsim 2\\pi /\\tau $ for typical inverse transport relaxation times $1/\\tau = 0.05 - 1 \\, \\rm THz$ of clean hexagonal boron nitride-encapsulated graphene [55], [56].", "Thus, Floquet oscillations could in principle be observed, opening an alternative way to generate THz radiation [57], [58].", "To conclude we showed that free particles in a static electric drift field and obeying a linear Dirac-type dispersion can perform spatially periodic motion, Floquet oscillations, when subject to time-periodic driving.", "The Floquet time lattice takes on the role of the spatial lattice required for conventional Bloch oscillations.", "Such Floquet oscillations feature zitterbewegung and characteristic amplitude modulations that could provide a tool to experimentally map the Floquet quasi-bands.", "A closer consideration of Landau-Zener transitions between different Floquet bands and the question of how the topology of Floquet bands [14] is reflected in corresponding Floquet oscillations opens interesting perspectives for future research.", "We thank Simon Maier for support with calculating Floquet band structures at an early stage of this work and Sergey Ganichev for helpful discussions and careful reading of the manuscript.", "We further thank an anonymous referee for suggesting to interprete Floquet oscillations in terms of multiple photon exchange.", "We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 314695032 – CRC 1277 (subproject A07)." 
], [ "Basic relations in Floquet theory", "In the following we describe how to obtain the Floquet band structure for a finite driving strength $V_T(t)$ without the electric field, which is later included by a shift of $k$ when computing Floquet oscillations.", "Generally, the Floquet operator $\\mathcal {H}_F$ is given by $\\mathcal {H}_F = H_0(\\mathbf {k})+V_T(t)-i\\hbar \\partial _t,$ where $H_0(\\mathbf {k})$ is the Hamiltonian of the time-independent system.", "Since the eigenstates of $\\mathcal {H}_F$ , $\\Phi _\\alpha (\\mathbf {k},t)$ , are periodic in time, they can be expanded in a Fourier series: $\\Phi _\\alpha (\\mathbf {k}, t) = \\sum _{n=-\\infty }^\\infty \\mathbf {u}_\\alpha (\\mathbf {k}, n) \\textrm {e}^{in\\omega t}.$ The dimension of the coefficients $\\mathbf {u}_\\alpha (\\mathbf {k}, n)$ is equal to the number of branches $\\alpha $ of the time-independent Hamiltonian $H_0(\\mathbf {k})$ , i.e.", "two in our case.", "In order to find these coefficients and the corresponding eigenenergies $\\epsilon _\\alpha $ , the Floquet equation (REF ) is multiplied by $\\textrm {e}^{-im\\omega t}$ , $m \\in \\mathbb {Z}$ , and averaged over one period $T$ to end up with [59], [60] $\\begin{split}\\sum _{n=-\\infty }^{\\infty }& \\underbrace{\\left( \\mathcal {H}_{0F, mn}(\\mathbf {k}) + V_{F, mn} \\right)}_{\\mathcal {H}_{F, mn}(\\mathbf {k})} \\mathbf {u}_\\alpha (\\mathbf {k}, n)\\\\\\ &= \\epsilon _\\alpha (\\mathbf {k}) \\mathbf {u}_\\alpha (\\mathbf {k}, m).\\end{split}$ Here the contributions from $V_T(t)$ are denoted by $V_{F, mn}$ and the contributions from $H_0(\\mathbf {k})$ and $-i\\hbar \\partial _t$ by $\\mathcal {H}_{0F, mn}(\\mathbf {k})$ .", "Note that the resulting eigenvalue problem is time-independent and all the dynamics of a WP in the system are incorporated in the Floquet basis states [18].", "To account for this when projecting a WP from the time-dependent basis to the Floquet basis, the relation $\\langle k_o(t) \\rangle _t = 
k_\mathrm{Floquet}$ is used, where $\langle k_o(t) \rangle _t = \frac{1}{T}\int _0^T k_o(t)\ \textrm {d}t$ .", "The time dependence of $k_o(t)$ is introduced by the time-periodic driving $V_T(t)$ .", "Hereafter, we refer to $k_\mathrm{Floquet}$ when referring to wave numbers and suppress its subscript to simplify the notation.", "The Floquet Hamiltonian without coupling $\mathcal {H}_{0F, mn}(\mathbf {k}) = (\hbar v_F \mathbf {k} \cdot \sigma + m\hbar \omega ) \delta _{mn}$ is diagonal and describes the Dirac band structure shown in Fig.", "REF .", "The driving Hamiltonian $V_{F, mn} = \frac{1}{T} \int _0^T \textrm {d}t \ V_T(t) \textrm {e}^{i(n-m)\omega t}$ , on the other hand, couples different Floquet modes $\mathbf {u}_\alpha (\mathbf {k}, n)$ and thus can lead to the opening of band gaps in the originally linear spectrum (see Fig.", "REF ).", "For the numerical evaluation, the resulting infinite matrix $\mathcal {H}_F$ has to be truncated at a finite value $\pm n$ that corresponds to the number of Floquet replicas taken into account.", "When performing numerical calculations, one has to make sure that the results are converged for the chosen value of $n$ ."
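To make the truncation explicit, the following minimal sketch (not from the original work) builds and diagonalizes the truncated Floquet matrix for a 1d Dirac Hamiltonian $H_0 = \hbar v_F k \sigma _x$ with a simple monochromatic mass modulation $V_T(t) = M\cos (\omega t)\sigma _z$ , a stand-in for the pulse drivings discussed in the text, in units $\hbar = v_F = \omega = 1$ . At the replica crossing $k = 1/2$ , where $\epsilon _+(k) = \epsilon _-(k) + \hbar \omega $ , the coupling opens a quasi-energy gap of order $M$ .

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def floquet_matrix(k, M, nmax):
    """Truncated Floquet matrix for H0 = k*sigma_x driven by M*cos(t)*sigma_z
    (hbar = v_F = omega = 1).  The cosine drive couples only neighboring
    Floquet modes: V_F,mn = (M/2)*sigma_z for |m - n| = 1, zero otherwise."""
    nmodes = 2 * nmax + 1
    HF = np.zeros((2 * nmodes, 2 * nmodes), dtype=complex)
    for i, m in enumerate(range(-nmax, nmax + 1)):
        HF[2*i:2*i+2, 2*i:2*i+2] = k * sx + m * np.eye(2)  # diagonal block H_0F,mm
        if i + 1 < nmodes:                                  # coupling to next replica
            HF[2*i:2*i+2, 2*i+2:2*i+4] = 0.5 * M * sz
            HF[2*i+2:2*i+4, 2*i:2*i+2] = 0.5 * M * sz
    return HF

def gap_at(k, M, nmax=8):
    """Spacing of the two quasi-energies closest to 1/2 (the replica crossing)."""
    eps = np.linalg.eigvalsh(floquet_matrix(k, M, nmax))
    pair = eps[np.argsort(np.abs(eps - 0.5))[:2]]
    return abs(pair[1] - pair[0])

print(gap_at(0.5, 0.1))  # ~0.1: gap of order M opens at the crossing
print(gap_at(0.5, 0.0))  # ~0: without driving the crossing stays degenerate
```

Increasing `nmax` and checking that the quasi-energies no longer move is exactly the convergence test in the number of Floquet replicas described above.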
], [ "Floquet quasi-band structure of a Dirac system with periodically opened mass gap", "Here we give a more detailed analysis of the Floquet band structure of a Dirac system with a periodically opened mass gap.", "The potential describing this time-dependent pulsing is given in Eq.", "(REF ).", "For the mass gap, we find with Eq.", "(REF ) $V_{F, mn} = \frac{M}{2\pi i (n-m)} \left(1-\textrm {e}^{-2\pi i(n-m)\frac{\Delta t}{T}} \right) \sigma _z.$ Qualitatively, the two time scales $\Delta t$ and $T$ involved are reflected in $k$ -space.", "While the longer scale $T$ makes for the high-frequency oscillations of the Floquet bands due to the replicas, the smaller time scale $\Delta t$ is responsible for the slow modulation, i.e.", "the different gap sizes as a function of $k$ shown in Fig.", "REF .", "On a more quantitative level, the dependence of the Floquet bands on the pulse amplitude $M$ and pulse duration $\Delta t$ can be best understood when studying the influence of one pulse on a WP in the static Dirac cone.", "This has been done extensively in Ref.", "[61] and will only be summarized here.", "Let us consider a WP initially occupying states in the upper cone.", "The opening and closing of the gap causes a redistribution of the WP to both cones, leading to a new superposition state.", "The amplitude of the part now occupying the other cone, in our example the lower one, can easily be calculated analytically [61]: $A(k) = -\frac{i}{\sqrt{1+\eta ^2}} \sin \left(\mu \sqrt{1+ \eta ^2} \right),$ where $\eta = E_{k, \pm }/M$ and $\mu = M \Delta t/\hbar $ .", "This transition amplitude only depends on the initial energy $E_{k, \pm }$ of the state and the pulse parameters.", "Figure: Floquet band structure (black curves) and corresponding transition probability $P(k)$ (blue) for (a) $\mu = 6.3$ , $\Delta t/T = 0.5$ and (b) $\mu = 0.4$ , $\Delta t/T=0.09$ .", "The bands shown in Fig.", "2(a) of the main paper are a
zoom into those depicted in panel (b).", "For both parameter sets one can easily see that the gap in the Floquet bands closes whenever $P(k)$ goes to zero.", "A numerical comparison of the corresponding transition probability $P(k)=|A(k)|^2$ and the Floquet band structure reveals that the gaps that open at the intersections of the repetitions of the original cone are directly related to the transition probability at that $k$ -value: The larger $P(k)$ , the larger the band gap.", "The reason for this dependence can be understood as follows.", "$P(k)$ describes for a single pulse the proportion of the WP that is transferred to the other band, i.e.", "how much a single pulse couples the upper and lower bands of the Dirac cone.", "On the other hand, the band gap of the Floquet bands is due to this coupling of (initially) linear band replicas.", "Therefore, it is not surprising that $P(k)$ and the band gap width are directly related.", "We show this for two exemplary band structures in Fig.", "REF .", "In panel (a) we set $\mu = 6.3$ and $\Delta t/T = 0.5$ .", "Since $P(k)=0$ around $kv_F/\omega =0$ , the original Dirac cone is preserved.", "For larger $k$ band gaps open.", "The resulting complex band structure is a perfect example of how nicely the Floquet band structure can be tuned based on the transition probability $P(k)$ .", "The band structure shown in panel (b) is the same as the one shown in Fig.", "2(a) of the main paper but for a larger range of $k$ -values.", "There, a wide area around $kv_F/\omega = 0$ is gapped.", "For an appropriate choice of parameters (as in panel (b)) a large $k$ -window, in which Landau-Zener transitions are suppressed, can be chosen to support Floquet oscillations.", "As a rule of thumb, the smaller $\Delta t/T$ , the more band gaps open, allowing for more periods of Floquet oscillations before Landau-Zener transitions diminish them."
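The interplay between $P(k)$ and the gap pattern can be explored directly from the analytic amplitude above; the short sketch below (assuming only the quoted formula, with $\eta = E_{k,\pm }/M$ and $\mu = M\Delta t/\hbar $ ) evaluates the transition probability for the two parameter sets of the figure.

```python
import math

def P(eta, mu):
    """Single-pulse transition probability P = |A(k)|^2 with eta = E/M, mu = M*dt/hbar."""
    s = math.sqrt(1.0 + eta * eta)
    return (math.sin(mu * s) / s) ** 2

# parameter set (a): mu = 6.3 (close to 2*pi), so P vanishes near k = 0
# and the original Dirac cone survives there
print(P(0.0, 6.3))       # ~3e-4

# parameter set (b): mu = 0.4 gives a sizable P around k = 0,
# consistent with the wide gapped region in panel (b)
print(P(0.0, 0.4))       # ~0.15

# the gap closes exactly where mu*sqrt(1 + eta^2) is a multiple of pi
eta_zero = math.sqrt((math.pi / 0.4) ** 2 - 1.0)
print(P(eta_zero, 0.4))  # ~0
```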
], [ "Zitterbewegung in a Dirac system with periodically opened mass gap", "Zitterbewegung (ZB) was originally predicted by Schrödinger for the Dirac equation [34], but an analogue is also visible in multiband systems [62], [63], [64], [65], [66], [67].", "The reason is the interference of particle and antiparticle contributions in a WP, or, respectively, of contributions from different bands, whenever the velocity operator does not commute with the Hamiltonian.", "The corresponding term is the off-diagonal term of the expectation value of the velocity operator, and the frequency is given by the difference of the energies.", "In our case, we also have an effective two-band model due to the Floquet bands with an off-diagonal velocity term, as seen in Eq.", "(REF ).", "The frequency of the corresponding “Floquet zitterbewegung” is given by the energy gap between the two Floquet quasi-energies, $\omega _{ZB} = \left( \epsilon _{\beta }(k)-\epsilon _{\alpha }(k) \right)/\hbar .$ In Fig.", "REF we show the Fourier transform $\tilde{v}(\omega _v)$ of the velocity of a WP starting at different $k$ -values in the Floquet band structure to analyze the frequency spectrum.", "We denote the variable of the Fourier transform of the velocity by $\omega _v$ .", "For these calculations no static electric field was applied, such that $k$ and therefore $\omega _{ZB}$ stay constant in time.", "Before performing the Fourier transform to investigate the frequency of the oscillations, the mean velocity value was subtracted from the corresponding data to avoid a peak at $\omega _{v} = 0$ .", "The dashed lines mark $\omega _{ZB}$ as calculated by Eq.", "(REF ).", "Their good agreement with the spectrum confirms that the off-diagonal velocity in the Floquet picture describes ZB caused by the interference of states occupying different Floquet bands.", "Since the velocity has a rectangular shape, the peaks are repeated at higher harmonics.", "This rectangular shape can be
explained in our example by the fact that the velocity can only change during the mass gap, which means that the harmonic oscillation of the ZB is effectively sampled with the driving frequency $\omega $ ." ], [ "The C++ library \"Time-dependent Quantum Transport\" (TQT)", "To propagate a quantum state $|\psi \rangle $ , one has to solve the time-dependent Schrödinger equation, $\textrm {i}\hbar \frac{\partial }{\partial t} |\psi \rangle = \hat{H} |\psi \rangle ,$ with the Hamilton operator $\hat{H}$ , which depends in general on time.", "Formally, it can be solved using the time-evolution operator $U(t,t_0) = \mathcal {T}\exp \left(- \frac{\textrm {i}}{\hbar } \int _{t_0}^{t}\hat{H}(t^\prime ) \,\textrm {d}t^\prime \right),$ which is unitary and fulfills $U(t, t_0) = U(t, t^\prime ) U(t^\prime , t_0),$ where $t_0$ is the initial time, $t$ is some arbitrary later time and $t^\prime $ is a time in between.", "The time-evolution of a state then yields $|\psi (t) \rangle = U(t, t_0) |\psi (t_0)\rangle .$ Moreover, for time-independent Hamiltonians, the time-evolution operator simplifies to $U(t, t_0) = \exp \left(-\textrm {i}\frac{\hat{H}}{\hbar } \cdot (t-t_0)\right).$ On the other hand, any function can be approximated by step-wise constant functions – the smaller the steps, the better the approximation.", "Thus, the time-ordered exponential of Eq.", "(REF ) can be estimated by $U(t_0+ N\delta t, t_0) \approx \prod \limits _{j=0}^{N-1} \exp \left(-\textrm {i}\frac{\hat{H}(t_0+j\delta t)}{\hbar } \cdot \delta t\right),$ where the Hamiltonian is made step-wise constant for the time duration $\delta t$ .", "The advantage is that instead of the time-ordered product of Eq.", "(REF ), a simple product of matrix exponentials can be evaluated.", "Of course, one has to make sure that the time step $\delta t$ is small enough, such that the numerical result is converged.", "Using the time-evolution operator, we shifted the problem
of solving the differential equation in Eq.", "(REF ), to having the Hamiltonian operator in an exponential, which is defined by its (infinite) series expansion.", "The publicly available C++ library “Time-dependent Quantum Transport” (TQT) by Viktor Krueckl [36] takes care of this expansion as efficiently as possible for 1d or 2d systems.", "The expanded time-evolution operator acts on a numerically defined initial state in real space.", "Since a sufficiently smooth function can be approximated by its values at discrete points, the space is discretized by a grid, in 1d with $N_x$ and in 2d with $N_x\times N_y$ points.", "Thus, the wave function becomes a complex-valued $N_x$ -component vector or $(N_x\times N_y)$ -matrix, respectively.", "For this work we usually take $N_x = 8192$ for 1d and $N_x \times N_y = 8192\times 256$ for 2d systems.", "The Hamiltonian can be given either as a tight-binding Hamiltonian or in a mixed position and momentum space representation, i.e.", "as a function of both the position and the momentum operator, the latter being the form used in most analytical calculations.", "In the mixed representation, instead of using the spatial derivative, the momentum operator acts in momentum space, i.e.", "the wave function is transformed by a fast Fourier transform, then the momentum operator acts as a factor, and finally the inverse fast Fourier transform is applied to get back to position space.", "The reason for using the Fourier transformation instead of the derivative is the numerical instability of the latter.", "Since the momentum operator acts several times (in higher orders $k_i^n$ ) in each small time step, the errors add up quickly.", "In this paper, only the mixed representation of position and momentum operator is used.", "Due to the explicitly time-dependent Hamiltonian in our problem, a Lanczos method is used to expand the time-evolution operator [68], [69] instead of a Chebyshev expansion [70], [71].", "The difference
here is that instead of expanding in a fixed set of polynomials, the time-evolution operator is expanded in terms of the wave function $\psi $ itself and powers of the Hamiltonian acting on the wave function $\hat{H}^n\psi $ .", "The thereby spanned subspace is an $N$ -dimensional Krylov subspace $\mathcal {K} = \operatorname{span}\lbrace \psi , \hat{H}\psi ,\dots {}, \hat{H}^{N-1}\psi \rbrace $ , which is orthonormalized by a Gram-Schmidt procedure during its recursive construction for better numerical stability, yielding the basis vectors $u_n$ : $u_0 &= \frac{\psi (t_0)}{|\psi (t_0)|}, \\u_1 &= \frac{\hat{H} u_0 - \alpha _0 u_0}{\beta _0}, \\u_{n+1} &= \frac{\hat{H} u_n - \alpha _n u_n - \beta _{n-1}u_{n-1}}{\beta _n},$ with the overlaps $\alpha _n = \langle u_n\mid \hat{H} \mid u_n\rangle $ and $\beta _{n-1} = \langle u_{n-1} \mid \hat{H} \mid u_n \rangle $ .", "Note that $u_n$ is a linear combination of powers of $\hat{H}$ acting on $\psi $ , with highest order $n$ .", "The truncated Hamiltonian in this subspace becomes tridiagonal $H_\mathcal {K} = \begin{pmatrix}\alpha _0 & \beta _0 & 0 & \cdots {} & 0 \\\beta _0 & \alpha _1 & \beta _1 & & 0 \\0 & \beta _1 & \alpha _2 & & 0 \\\vdots {} & & & \ddots {} & \beta _{N-2} \\0 & \cdots {} & 0 & \beta _{N-2} & \alpha _{N-1} \\\end{pmatrix},$ which can be diagonalized by conventional algorithms and enables the calculation of approximate eigenvalues of the operator $\hat{H}$ [37].", "With the matrix of eigenvectors $\mathbf {T}$ and eigenvalues $\mathbf {E}$ of the Hamiltonian in the reduced Krylov space $H_\mathcal {K}$ , the time-evolution of one small time step is given by $\psi (t+\delta t) = \sum \limits _{n=0}^{N-1} \left[\mathbf {T}^t \exp \left( -\frac{\textrm {i}}{\hbar } \mathbf {E} \delta t \right) \mathbf {T} \;\psi _\mathcal {K}(t)\right]_n \cdot u_n.$ The expansion in the Krylov subspace is faster than a Taylor expansion
[72] and for the Krylov space, a dimension $N$ in the range 10–40 is usually enough.", "For the calculations in this paper, a Krylov-space dimension of $N=15$ turned out to be sufficient.", "From the time-dependent state obtained on the discrete timeline, any observable quantity, such as the position expectation value, can be computed as a function of time, which in our case yields the Floquet oscillations." ] ]
1906.04446
[ [ "Diffractive dijet production from the Color Glass Condensate and the\n small-$x$ gluon distributions" ], [ "Abstract We study exclusive dijet production in electron-proton deep inelastic scattering at a future Electron Ion Collider.", "We predict the elliptic modulation of the cross section as a function of the angle between the dijet transverse momentum and the recoil momentum, and show that this modulation is due to non-trivial angular correlations between the transverse coordinate and transverse momentum in the Wigner (or Husimi) distribution.", "The small-$x$ evolution is shown to decrease the elliptic modulation in the EIC kinematics, because of the growth of the proton with decreasing $x$." ], [ "Introduction", "The structure of the proton is a result of complicated non-perturbative many-body interactions between its fundamental building blocks, the quarks and gluons.", "This structure is encoded in various distribution functions, the simplest ones being the collinear parton distribution functions that describe the parton density as a function of longitudinal momentum fraction $x$  carried by the parton (measured at a given scale $Q^2$ ).", "These distributions have been measured with great precision by experiments at HERA [1] by studying total electron-proton cross sections.", "More detailed information can be extracted from more differential observables that provide access to more differential distribution functions.", "For example, in exclusive photon or vector meson production the total momentum transfer is measurable, and via a Fourier transform provides access to the spatial distribution of partons, encoded in generalized parton distribution functions (GPDFs) [2], [3].", "It is also possible to study the distribution of partons in the proton in transverse momentum space, described by transverse momentum dependent parton distribution functions (TMDs) [4].", "The most complete information of the proton structure is encoded in the Wigner distribution [5],
[6], [7], which depends on both transverse coordinate and transverse momentum, in addition to the longitudinal momentum fraction $x$ .", "This quantum distribution is not positive definite and has a probabilistic interpretation only in certain semi-classical limits [8], [9], [10].", "To access the Wigner distribution, more differential observables than single particle production or total cross sections are needed.", "In Ref.", "[11], it was shown that diffractive dijet production, where two jets are produced in a process where no net color charge is exchanged with the target, is sensitive to the gluon Wigner distribution at small $x$ .", "A future Electron Ion Collider in the US [12], [13] or LHeC [14] at CERN would be able to measure this process over a wide kinematical region at high center-of-mass energies.", "At high energies or small $x$ the convenient effective theory to describe high energy scattering processes is provided by the Color Glass Condensate (CGC), which describes Quantum Chromodynamics (QCD) in the high energy limit.", "In Ref.", "[15], summarized here, we calculate both the diffractive dijet production cross section and the Wigner distribution in the CGC framework." 
], [ "Dipole-proton interaction in the CGC", "At high energies the convenient degrees of freedom are Wilson lines $U(\\mathbf {x})$ that describe the color rotation that the parton encounters when propagating eikonally through the target.", "For a given target configuration, the Wilson lines are obtained by solving the Yang-Mills equations $U(\\mathbf {x}) = P \\exp \\left( -ig \\int \\mathrm {d}x^- \\frac{\\rho (x^-,\\mathbf {x})}{\\nabla ^2 + \\tilde{m}^2 }\\right).$ The color charge density $\\rho $  is assumed to be a local Gaussian variable with expectation value being related to the local density of the proton, which we assume to be Gaussian in this work, and $\\tilde{m}^2$  is an infrared regulator.", "After the Wilson lines are determined at the initial Bjorken-$x$ , their evolution to smaller $x$ is obtained by solving the perturbative JIMWLK evolution equations (see e.g. [16]).", "All parameters that control e.g.", "the density at the initial $x_0=0.01$ , the size of the proton and the values of the strong coupling and infrared regulators are constrained by the HERA structure function and diffractive vector meson production measurements [17].", "For a more detailed description of the setup, the reader is referred to [15].", "When Wilson lines are sampled on the lattice and evolved to smaller $x$  with the JIMWLK equation, it becomes possible to construct the dipole-target scattering amplitude at any $x$ $N\\left( \\mathbf {r}= \\mathbf {x}- \\mathbf {y}, \\mathbf {b}= \\frac{\\mathbf {x}+ \\mathbf {y}}{2} \\right) = 1 - \\frac{1}{N_c} \\langle \\mathrm {Tr} U(\\mathbf {x}) U^\\dagger (\\mathbf {y}) \\rangle ,$ where the average is taken over different possible target configurations.", "When we consider exclusive dijet production, the Fourier conjugates to the dijet momentum and to the recoil momentum are the dipole size $\\mathbf {r}$ and impact parameter $\\mathbf {b}$ .", "Having this in mind, we study the angular modulation of the dipole-proton 
scattering amplitude $N(\mathbf {r},\mathbf {b})$ calculated from the CGC framework.", "The dipole amplitude $N$ as a function of the angle between $\mathbf {r}$ and $\mathbf {b}$ is shown in Fig.", "REF .", "Note that in widely used dipole amplitude parametrizations such as IPsat [18] there would be no angular dependence.", "To quantify the evolution of the elliptic modulation of the dipole amplitude, we extract the Fourier harmonics $v_n$ by writing the dipole amplitude as $ N(\mathbf {r},\mathbf {b}) = v_0 [1 + 2 v_2 \cos 2\theta (\mathbf {r},\mathbf {b}) ]$ .", "The extracted $v_0$ and $v_2$ coefficients at different rapidities are shown in Fig.", "REF .", "We find that the evolution suppresses the elliptic modulation (note that Bjorken-$x$ is related to the evolution rapidity as $x = x_0 e^{-y}$ with $x_0=0.01$ ).", "This is mainly due to the rapid growth of the proton density in the dilute region, resulting in a smoother and larger proton with smaller density gradients.", "Figure: Rapidity evolution of the elliptic component of the Husimi distribution from Ref.", "." 
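The harmonic decomposition used here can be carried out numerically by projecting the angular dependence onto Fourier modes. A minimal sketch (with a synthetic angular profile standing in for the lattice-computed dipole amplitude, so the input values are assumptions for illustration):

```python
import numpy as np

def extract_harmonics(theta, N_vals):
    """Extract v0 and v2 from N(theta) = v0 * (1 + 2*v2*cos(2*theta)).

    v0 is the angular average; v2 follows from projecting onto cos(2*theta),
    since <N * cos(2*theta)> = v0 * v2 by orthogonality of the harmonics.
    """
    v0 = np.mean(N_vals)
    v2 = np.mean(N_vals * np.cos(2 * theta)) / v0
    return v0, v2

# Synthetic dipole amplitude with a known modulation (illustrative values)
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
true_v0, true_v2 = 0.3, 0.05
N_vals = true_v0 * (1 + 2 * true_v2 * np.cos(2 * theta))

v0, v2 = extract_harmonics(theta, N_vals)
print(v0, v2)  # recovers 0.3 and 0.05 up to discretization error
```

In practice the same projection would be applied to $N(\mathbf{r},\mathbf{b})$ at fixed $|\mathbf{r}|$ and $|\mathbf{b}|$, scanning the relative angle.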
], [ "Wigner and Husimi distributions", "As discussed in the Introduction, the gluon Wigner distribution $xW(\mathbf {P}, \mathbf {b}, x)$ contains the most complete information of the small-$x$ gluonic structure of the proton.", "In particular, it describes the gluon distribution as a function of both transverse coordinate $\mathbf {b}$ and transverse momentum $\mathbf {P}$ .", "The disadvantage is that due to the uncertainty principle it cannot have a probabilistic interpretation, and we indeed show in Ref.", "[15] that when calculated from the CGC framework, the Wigner distribution becomes negative at small transverse momenta.", "If the Wigner distribution is smeared over both transverse coordinate and transverse momentum with the smearing parameters being inverse to each other, one obtains the so-called Husimi distribution $xH(\mathbf {P},\mathbf {b},x) = \frac{1}{\pi ^2} \int \mathrm {d}^2 \mathbf {b}^{\prime } \mathrm {d}^2 \mathbf {P}^{\prime } e^{-\frac{1}{l^2}(\mathbf {b}-\mathbf {b}^{\prime })^2 - l^2(\mathbf {P}-\mathbf {P}^{\prime })^2} xW(\mathbf {P}^{\prime },\mathbf {b}^{\prime },x).$ In this work we choose $l=1\,\mathrm {GeV}^{-1}$ , as it corresponds to a distance scale much smaller than the proton size, but does not result in too large smearing in momentum space that would wash out most of the transverse momentum dependence.", "In Ref.", "[15] it is shown that the Husimi and Wigner distributions agree at large $|\mathbf {P}| \gtrsim 1/l$ , and that the Husimi distribution calculated from the CGC framework following [19] is positive definite.", "To study the elliptic modulation (dependence on the angle between $\mathbf {P}$ and $\mathbf {b}$ ), we write the Husimi distribution as $xH(\mathbf {P},\mathbf {b},x) = v_0^H [1 + 2 v_2^H \cos 2\theta (\mathbf {P},\mathbf {b})].$ The rapidity evolution of the elliptic coefficient $v_2^H$ is shown in Fig.", "REF .", "Except at the smallest momenta, the evolution 
suppresses the elliptic modulation as expected based on the analysis of the dipole amplitude in Sec. .", "At the smallest momentum values the elliptic component first grows in the evolution, which can be understood as follows: as the proton grows, large dipoles $|\mathbf {r}| \sim |\mathbf {P}|^{-1}$ with large elliptic modulation start to contribute, before the proton grows enough that the decreasing density gradients become visible also at small $|\mathbf {P}|$ ." ], [ "Diffractive dijet production", "As discussed in Ref.", "[11], diffractive dijet production is sensitive to the gluon Wigner distribution.", "In particular, it is interesting to study diffractive dijet production as a function of the two momentum vectors, the average momentum $\mathbf {P}= \frac{1}{2}(\mathbf {p}_1 - \mathbf {p}_2)$ and the recoil momentum ${\Delta }= \mathbf {p}_1 + \mathbf {p}_2$ , where $\mathbf {p}_1$ and $\mathbf {p}_2$ are the momenta of the individual jets.", "The connection to the Wigner distribution is shown in the correlation limit $|\mathbf {P}| \gg |{\Delta }|$ , thus we use $|{\Delta }|=0.1$ GeV in this work.", "The diffractive dijet production cross section in the CGC framework is derived in Ref. 
[20].", "The cross section as a function of jet momentum $\mathbf {P}$ is shown in Fig.", "REF , where results obtained with different infrared regulators (with value of the coupling constant adjusted to describe the HERA structure function data) and using both fixed and running coupling evolution are shown, and our results are found to be insensitive to the infrared regularization.", "Here it is worth noticing that the Fourier conjugate to $\mathbf {P}$ is the dipole size, and thus this can be seen as a diffraction off the $q\bar{q}$ Fock state of the probing photon.", "Here we only consider charmed dijets, so the diffractive dip location can be estimated to be $|\mathbf {r}_\gamma |^{-1} \sim \sqrt{m_c^2 + Q^2} \approx 1.4\,\mathrm {GeV}$ .", "To study the elliptic modulation we extract the Fourier harmonics of the dijet production cross section: $\mathrm {d}\sigma = v_0[1 + 2 v_2 \cos 2\theta (\mathbf {P},{\Delta })].$ The elliptic coefficient $v_2$ as a function of Bjorken-$x$ (denoted as $x_p$ ) is shown in Fig.", "REF , where we find that the energy evolution reduces $v_2$ by almost a factor of 2 in the EIC energy range.", "This is mostly due to the increasing proton size suppressing density gradients.", "The modulation is relatively small and likely difficult to measure.", "However, at larger $|{\Delta }|$ we expect a much larger signal [21].", "For comparison, we also show the result obtained in the case where we do not perform the JIMWLK evolution towards small-$x$ , but just scale the overall proton density, in which case the $v_2$ is independent of energy.", "In the models where there is no dependence on the angle between $\mathbf {r}$ and $\mathbf {b}$ in the dipole amplitude, one gets exactly $v_2=0$  [20].", "Figure: Elliptic ($v_2$) modulation of dijet photoproduction as a function of momentum fraction $x_p$ of the target.", "The values on top refer to the center-of-mass energies $W$.", "Figure from Ref.", "." 
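The Wigner-to-Husimi smearing defined earlier is a Gaussian convolution with conjugate widths $l$ and $1/l$. The one-dimensional toy below (the input Wigner function, that of a harmonic-oscillator excited state, is an assumption for illustration and not the CGC result) shows how the smearing removes the negativity:

```python
import numpy as np

def husimi_from_wigner(W, x, l=1.0):
    """Smear a Wigner function on a square (b, P) grid with the Gaussian
    kernel exp(-(b-b')^2/l^2 - l^2*(P-P')^2)/pi (1D analogue of the text)."""
    dx = x[1] - x[0]
    kb = np.exp(-(x[:, None] - x[None, :]) ** 2 / l ** 2)  # coordinate kernel
    kp = np.exp(-l ** 2 * (x[:, None] - x[None, :]) ** 2)  # momentum kernel
    # separable convolution over b (rows) and P (columns); 1/pi normalizes
    return kb @ W @ kp * dx ** 2 / np.pi

# Toy Wigner function with a negative region near the origin
x = np.linspace(-6, 6, 121)
B, P = np.meshgrid(x, x, indexing="ij")
W = (2 * (B ** 2 + P ** 2) - 1) * np.exp(-(B ** 2 + P ** 2)) / np.pi

H = husimi_from_wigner(W, x)
print(W.min())  # about -1/pi: the Wigner function is negative near the origin
print(H.min())  # ~0: the smeared (Husimi) distribution is non-negative
```

The smearing preserves the normalization (the kernel integrates to one), mirroring the statement that Husimi and Wigner distributions agree at large $|\mathbf{P}| \gtrsim 1/l$ while only the small-momentum negativity is washed out.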
], [ "Conclusions", "We have calculated the Wigner and Husimi distributions from the CGC framework and the diffractive dijet production cross section, which in principle is sensitive to the gluon Wigner distribution at small $x$ .", "By solving the perturbative JIMWLK evolution equations we find that in the gluon distributions the correlation between the momentum and coordinate space angles decreases when evolving towards small-$x$ , and predict that the elliptic modulation in the dijet production cross section decreases almost by a factor of 2 from the lowest to highest center of mass energies at the EIC." ], [ "Acknowledgements", "HM is supported by the Academy of Finland, project 314764, and by the European Research Council, Grant ERC-2015-CoG-681707.", "NM and BS are supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No. DE-SC0012704.", "NM is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project 404640738." ] ]
1906.04389
[ [ "DoubleTransfer at MEDIQA 2019: Multi-Source Transfer Learning for\n Natural Language Understanding in the Medical Domain" ], [ "Abstract This paper describes our competing system for the MEDIQA 2019 competition.", "We use a multi-source transfer learning approach to transfer the knowledge from MT-DNN and SciBERT to natural language understanding tasks in the medical domain.", "For transfer learning fine-tuning, we use multi-task learning on NLI, RQE and QA tasks in general and medical domains to improve performance.", "The proposed methods prove effective for natural language understanding in the medical domain, and we ranked first place on the QA task." ], [ "Background", "The MEDIQA 2019 shared tasks [4] aim to improve the current state-of-the-art systems for textual inference, question entailment and question answering in the medical domain.", "This ACL-BioNLP 2019 shared task is motivated by a need to develop relevant methods, techniques and gold standards for inference and entailment in the medical domain and their application to improve domain-specific information retrieval and question answering systems.", "The shared task consists of three parts: i) natural language inference (NLI) on MedNLI, ii) Recognizing Question Entailment (RQE), and iii) Question Answering (QA).", "Recent advancements in NLP such as BERT [6] have facilitated great improvements in many Natural Language Understanding (NLU) tasks [9].", "BERT first trains a language model on an unsupervised large-scale corpus, and then the pretrained model is fine-tuned to adapt to downstream NLU tasks.", "This fine-tuning process can be seen as a form of transfer learning, where BERT learns knowledge from the large-scale corpus and transfers it to downstream tasks.", "We investigate NLU in the medical (scientific) domain.", "From BERT, we need to adapt to i) the change from a general domain corpus to scientific language; ii) the change from low-level language model tasks to complex NLU 
tasks.", "Although there is limited training data for NLU in the medical domain, we fortunately have pre-trained models from two intermediate steps: General NLU embeddings: We use MT-DNN [9] trained on the GLUE benchmark [11].", "MT-DNN is trained on 10 tasks including NLI, question equivalence, and machine comprehension.", "These tasks correspond well to the target MEDIQA tasks but in different domains.", "Scientific embeddings: We use SciBERT [3], which is a BERT model, but trained on SemanticScholar scientific papers.", "Although SciBERT obtained state-of-the-art results on several single-sentence tasks, it lacks knowledge from other NLU tasks such as GLUE.", "In this paper, we investigate different methods to combine and transfer the knowledge from the two different sources and illustrate our results on the MEDIQA shared task.", "We name our method DoubleTransfer, since it transfers knowledge from two different sources.", "Our method is based on fine-tuning both MT-DNN and SciBERT using multi-task learning, which has demonstrated its efficiency for knowledge transfer [5], [7], [13], [9], and integrating models from both domains with ensembles.", "Algorithm 1: Multi-task Fine-tuning with External Datasets.", "Input: in-domain datasets $\mathcal {D}_1,...,\mathcal {D}_{K_1}$ , external domain datasets $\mathcal {D}_{K_1+1},...,\mathcal {D}_{K_2}$ , max_epoch, mixture ratio $\alpha $ .", "Initialize the model $\mathcal {M}$ .", "For epoch $=1,2,...$ , max_epoch: (i) divide each dataset $\mathcal {D}_k$ into $N_k$ mini-batches $\mathcal {D}_k=\lbrace b_1^k,...,b_{N_k}^k\rbrace $ , $1\le k\le K_2$ ; (ii) set $S\leftarrow \mathcal {D}_1\cup \mathcal {D}_2\cup \cdots \cup \mathcal {D}_{K_1}$ and $N\leftarrow N_1+N_2+\cdots +N_{K_1}$ ; (iii) randomly pick $\lfloor \alpha N \rfloor $ mini-batches from $\bigcup _{k=K_1+1}^{K_2} \mathcal {D}_k$ and add them to $S$ ; (iv) assign the mini-batches in $S$ a random order to obtain a sequence $B=(b_1,...,b_L)$ , where $L=N+\lfloor \alpha N\rfloor $ ; (v) for each mini-batch $b\in B$ , perform a gradient update on $\mathcal {M}$ with loss $l(b)=\sum _{(s_1,s_2)\in b} l(s_1,s_2)$ ; (vi) evaluate development set performance on $\mathcal {D}_1,...,\mathcal {D}_{K_1}$ .", "Output: the model with the best evaluation performance.", "Related Works.", "Transfer learning has been widely used in training models in the medical domain.", "For example, [10] leveraged the knowledge learned from SNLI to MedNLI; a transfer from general domain NLI to medical domain NLI.", "They also employed word embeddings trained on MIMIC-III medical notes, which can be seen as a language model in the scientific domain.", "SciBERT [3] studies transferring knowledge from the SciBERT pretrained model to single-sentence classification tasks.", "Our problem is unique because of the prohibitive cost to train BERT: Either BERT or SciBERT requires a very long time to train, so we only explore how to combine the existing embeddings from SciBERT or MT-DNN.", "Transfer learning is also widely used in other tasks of NLP, such as machine translation [2] and machine reading comprehension [13].", "Figure: Illustration of the proposed multi-source multi-task learning method." ], [ "Methods", "We propose a multi-task learning method for the medical domain data.", "It employs datasets/tasks from both the medical domain and external domains, and leverages pre-trained models such as MT-DNN and SciBERT for fine-tuning.", "An overview of the proposed method is illustrated in Figure REF .", "To further improve the performance, we propose to ensemble models trained from different initializations in the evaluation stage.", "Below we detail our methods for fine-tuning and ensembles." 
], [ "Fine-tuning details", "Algorithm.", "We fine-tune the two types of pre-trained models on all the three tasks using multi-task learning.", "As suggested by the MEDIQA paper, we also fine-tune our model on MedQuAD [1], a medical QA dataset.", "We will provide details for fine-tuning on these datasets in Section REF .", "We additionally regularize the model by also training on MNLI [12].", "To prevent negative transfer from MNLI, we put a larger weight on MEDIQA data by sampling MNLI data with less probability.", "Our algorithm is presented in Algorithm and illustrated in Figure REF , which is a mixture ratio method for multi-task learning inspired by [13].", "We start with in-domain datasets $\mathcal {D}_1,...\mathcal {D}_{K_1}$ (i.e., the MEDIQA tasks, $K_1=3$ ) and external datasets $\mathcal {D}_{K_1+1},...,\mathcal {D}_{K_2}$ (in this case MNLI).", "We cast all the training samples as sentence pairs $(s_1,s_2)\in \mathcal {D}_k, k=1,2,...,K_2$ .", "In each epoch of training, we use all mini-batches from in-domain data, while only a small proportion (controlled by $\alpha $ ) of mini-batches from external datasets are used to train the model.", "In our experiments, the mixture ratio $\alpha $ is set to 0.5.", "We use MedNLI, RQE, QA, and MedQuAD in the medical domain as in-domain data and MNLI as external data.", "For MedNLI, we additionally find that using MedNLI as in-domain data and RQE, QA, MedQuAD as external data can also help boost performance.", "We use models trained using both setups of external data for ensembling.", "Pre-trained Models.", "We use three different types of initialization as the starting point for fine-tuning: i) the uncased MT-DNN large model from [9], ii) the cased knowledge-distilled MT-DNN model from [8], and iii) the uncased SciBERT model [3].", "We add a simple softmax layer (or linear layer for QA and MedQuAD tasks) atop BERT as the answer module for fine-tuning.", "For initialization in step in Algorithm , we initialize 
all BERT weights with the pretrained weights, and randomly initialize the answer layers.", "After multi-task fine-tuning, the joint model is further fine-tuned on each specific task to get better performance.", "We detail the training loss and fine-tuning process for each task in Section REF .", "Objectives.", "MedNLI and RQE are binary classification tasks, and we use a cross-entropy loss.", "Specifically, for a sentence pair $X$ we compute the loss $\\mathcal {L}(X)=-\\sum _c \\mathbb {1}(X,c) \\log (P_r(c|X)),$ where $c$ iterates over all possible classes, $\\mathbb {1}(X,c)$ is the binary indicator (0 or 1) if class label $c$ is the correct classification for $X$ , and $P_r(c|X)$ is the model prediction for probability of class $c$ for sample $X$ .", "We formulate QA and MedQuAD as regression tasks, and thus a MSE loss is used.", "Specifically, for a question-answer pair $(Q,A)$ we compute the MSE loss as $\\mathcal {L}(Q,A)=( y - \\mathtt {score}(Q,A))^2, $ where $y$ is the target relevance score for pair $(Q,A)$ , and $\\mathtt {score}(Q,A)$ is the model prediction for the same pair." 
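The two objectives reduce to the standard cross-entropy and squared-error losses; a minimal numpy illustration (the real system would of course use the framework's built-in, batched losses):

```python
import numpy as np

def cross_entropy(probs, label):
    """-sum_c 1(X,c) log Pr(c|X): picks out -log of the true class probability."""
    return -np.log(probs[label])

def mse(score, target):
    """(y - score(Q, A))^2 for the QA/MedQuAD regression formulation."""
    return (target - score) ** 2

print(cross_entropy(np.array([0.7, 0.2, 0.1]), 0))  # -log 0.7
print(mse(1.4, 2.0))  # 0.36
```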
], [ "Model Ensembles", "After fine-tuning, we ensemble models trained from MT-DNN and SciBERT, using different setups of in-domain and external datasets.", "Traditional methods typically fuse models by averaging the prediction probabilities of different models.", "In our setting, the in-domain data is very limited and models tend to overfit; this means the predictions can be arbitrarily close to 1, favoring more over-fitted models.", "To prevent over-fitting, we ensemble the models by using a majority vote on their predictions, and resolving ties using the sum of prediction probabilities.", "Suppose we have $M$ models, and the $m$ -th model predicts the answer $\hat{p}_m$ for a specific question.", "For the classification task (MedNLI and RQE), we have $\hat{p}_m\in \mathbb {R}^C$ , where $C$ is the number of categories.", "Let $\hat{y}_m=\operatornamewithlimits{arg\,max}_i \hat{p}_m^{(i)}$ be the prediction of model $m$ , where $\hat{p}_m^{(i)}$ is the $i$ -th dimension of $\hat{p}_m$ .", "The final prediction is chosen as $\hat{y}_{\text{ensemble}}=\operatornamewithlimits{arg\,max}_{y\in \text{maj}(\lbrace \hat{y}_m\rbrace _{m=1}^M)}\sum _{m=1}^M \hat{p}_m^{(y)}.", "$ In other words, we first obtain the majority of predictions by computing the majority $\text{maj}(\lbrace \hat{y}_m\rbrace _{m=1}^M)$ , and resolve the ties by computing the sum of prediction probabilities $\sum _{m=1}^M \hat{p}_m^{(y)}$ .", "For QA tasks (QA and MedQuAD), the task is cast as a regression problem, where a positive number means a correct answer, and negative otherwise.", "We have $\hat{p}_m\in \mathbb {R}$ .", "We first compute the average score $\hat{p}_{\text{ensem}}=\frac{1}{M}\sum _{m=1}^M \hat{p}_m$ .", "We also compute the prediction as $\hat{y}_m=I(\hat{p}_m\ge 0)$ , where $I$ is the indicator function.", "We compute the ensemble prediction through a majority vote similar to the classification case: 
$\\hat{y}_{\\text{ensem}}={\\left\\lbrace \\begin{array}{ll}1, & \\text{if}~ \\sum _{m=1}^M \\hat{y}_m>M/2\\\\0, & \\text{if}~ \\sum _{m=1}^M \\hat{y}_m<M/2\\\\I(\\hat{p}_{\\text{ensem}}>0), & \\text{otherwise.}\\end{array}\\right.", "}$ To be precise, we predict the majority if a tie does not exist, or the sign of $\\hat{p}_{\\text{ensem}}$ otherwise.", "The final ranking of answers is carried out by first rank the (predicted) positive answers, and then the (predicted) negative answers." ], [ "Dataset-Specific Details ", "MedNLI: Since the MEDIQA shared task uses a different test set than the original MedNLI dataset, we merge the original MedNLI development set into the training set and use evaluation performance on the original MedNLI test set.", "Furthermore, MedNLI and MNLI are the same NLI tasks, thus, we shared final-layer classifiers for these two tasks.", "For MedNLI, we find that each consecutive 3 samples in all the training set contain the same premise with different hypothesizes, and contains exactly 1 entail, 1 neutral and 1 contradiction.", "To the end, in our prediction, we constrain the three predictions to be one of each kind, and use the most likely prediction from the model prediction probabilities.", "RQE: We use the clinical question as the premise and question from FAQ as the hypothesis.", "We find that the test data distribution is quite different from the train data distribution.", "To mitigate this effect, we randomly shuffle half of the evaluation data into the training set and evaluate on the remaining half.", "QA: We use the answer as the premise and the question as the hypothesis.", "The QA task is cast as both a ranking task and a classification task.", "Each question is associated with a relevance score in $\\lbrace 1,2,3,4\\rbrace $ , and an additional rank over all the answers for a specific question is given.", "We use a modified score to incorporate both information: suppose there are $m$ questions with relevance score $s\\in 
\\lbrace 1,2,3,4\\rbrace $ .", "Then the $i$ -th most relevant answer in these $m$ questions get modified score $s-\\frac{i-1}{m}$ .", "In this way the scores are uniformly distributed in $(s-1,s]$ .", "We shift all scores by $-2$ so that a positive score leads to a correct answer and vice versa.", "We also tried pairwise losses to incorporate the ranking but did not find it to boost the performance very much.", "We find that the development set distribution is inconsistent with test data - the training and test set consist of both LiveQAMed and Alexa questions, whereas the development set seems to only contain LiveQAMed questions.", "We shuffle the training and development set to make them similar: We use the last 25 questions in original development set (LiveQAMed questions) and the last 25 Alexa questions (from the original training set) as our development set, and use the remaining questions as our training set.", "This results in 1,504 training pairs and 431 validation pairs.", "Due to the limited size of the QA dataset, we use cross-validation that divides all pairs into 5 slices and train 5 models by using each slice as a validation set.", "We train MT-DNN and SciBERT on both these 5 setups and obtain 10 models, and ensemble all the 10 models obtained.", "MedQuAD: We use 10,109 questions from MedQuAD because the remaining questions are not available due to copyright issues.", "The original MedQuAD dataset only contains positive question pairs.", "We add negative samples to the dataset by randomly sampling an answer from the same web page.", "For each positive QA pair, we add two negative samples.", "The resulting 30,327 pairs are randomly divided into 27,391 training pairs and 2,936 evaluation pairs.", "Then we use the same method as QA to train MedQuAD; we also share the same answer module between QA and MedQuAD." 
], [ "Implementation and Hyperparameters", "We implement our method using PyTorch (https://pytorch.org/) and pytorch-pretrained-BERT (https://github.com/huggingface/pytorch-pretrained-BERT), as an extension to MT-DNN (https://github.com/namisan/mt-dnn).", "We also use the pytorch-compatible SciBERT pretrained model provided by AllenNLP (https://github.com/allenai/scibert).", "Each training example is pruned to at most 384 tokens for MT-DNN models and 512 tokens for SciBERT models.", "We use a batch size of 16 for MT-DNN, and 40 for SciBERT.", "For fine-tuning, we train the models for 20 epochs using a learning rate of $5\times 10^{-5}$ .", "After that, we further fine-tune the model from the best multi-task model for 6 epochs for each dataset, using a learning rate of $5\times 10^{-6}$ .", "We ensemble all models with an accuracy larger than 87.7 for MedNLI, 83.5 for shuffled RQE, and 83.0 for QA.", "We ensemble 4 models for MedNLI, 14 models for RQE.", "For QA, we ensemble 10 models from cross-validation and 7 models using the normal training-validation approach." ], [ "Results", "In this section, we provide the leaderboard performance and conduct an analysis of the effect of ensemble models from different sources." 
], [ "Test Set Performance and LeaderBoards", "The results for the MedNLI dataset are summarized in Table REF .", "Our method ends up in 3rd place on the leaderboard, substantially improving upon previous state-of-the-art (SOTA) methods.", "Table: The leaderboard for the MedNLI task (link).", "Scores are accuracy(%).", "Our method ranked 3rd on the leaderboard.", "Previous SOTA method was from , on the original MedNLI test set (used as dev set here).", "The results for the RQE dataset are summarized in Table REF .", "Our method ends up in 7th place on the leaderboard.", "Our method has a very large discrepancy between the dev set performance and test set performance.", "We think this is because the test set is quite different from the dev set, and the dev set is very small and easy to overfit to.", "Table: The leaderboard for the RQE task (link).", "Scores are accuracy(%).", "Our method ranked 7th on the leaderboard.", "The results for the QA dataset are summarized in Table REF .", "Our method reaches first place on the leaderboard based on accuracy and precision, and has the 3rd-highest MRR.", "We note that the Spearman score is not consistent with other scores in the leaderboard; actually, the Spearman score is computed based only on the predicted positive answers, and a method can get a very high Spearman score by never predicting positive labels.", "Table: The leaderboard for the QA task (link).", "Our method ranked #1 on the leaderboard in terms of Acc (accuracy).", "The Spearman score is not consistent with other scores in the leaderboard." 
], [ "Ensembles from Different Sources", "We compare the effect of ensembling from different sources in Table REF .", "We train 6 different models with different randomizations, with initializations from MT-DNN (#1,#2,#3) and SciBERT (#4, #5,#6) respectively.", "If we ensemble models with the same MT-DNN architecture, the resulting model only has around 1.5% improvement in accuracy, compared to the numerical average of the ensembled models' accuracies (#1+#2+#3 and #4+#5+#6 in Table REF ).", "On the other hand, if we ensemble three models from different sources (#1+#2+#5 and #1+#5+#6 in Table REF ), the resulting model gains more than 3% in accuracy compared to the numerical average.", "This shows that ensembling from different sources has a clear advantage over ensembling from single-source models.", "Table: Comparison of ensembles from different sources.", "Avg.Acc stands for average accuracy, the numerical average of each individual model's accuracy.", "Esm.Acc stands for ensemble accuracy, the accuracy of the resulting ensemble model.", "For ensembles, MT-DNN means all three models are from MT-DNN, and similarly for SciBERT; MultiSource denotes that the ensembled models come from two different sources." 
], [ "Single-Model Performance", "For completeness, we report the single-model performance on the MedNLI development set under various multi-task learning setups and initializations in Table REF .", "(1) The Naïve approach denotes that only MedNLI, RQE, QA, MedQuAD are considered as in-domain data in Algorithm without any external data; (2) The Ratio approach denotes that we consider MedNLI as in-domain data, and RQE, QA, MedQuAD as external data in Algorithm ; (3) The Ratio+MNLI approach denotes that we consider MedNLI, RQE, QA, MedQuAD as in-domain data and MNLI as external data in Algorithm .", "Note that MNLI is much larger than the medical datasets, so if we use RQE, QA, MedQuAD, MNLI as external data, the performance is very similar to the third setting.", "We did not conduct experiments on single-dataset settings, as previous works have suggested that multi-task learning can obtain much better results than single-task models [9], [13].", "Overall, the best results are achieved via using SciBERT as the pre-trained model, and multi-task learning with MNLI.", "The models trained by mixing in-domain data (the second setup) are also competitive.", "We therefore use models from both setups for ensembling.", "Table: Single model performance on MedNLI development data.", "Naïve means simply integrating all medical-domain data; Ratio means using MedNLI as in-domain data and other medical domain data as external data; Ratio+MNLI means using medical domain data as in-domain and MNLI as external." ], [ "Conclusion", "We present new methods for multi-source transfer learning for the medical domain.", "Our results show that ensembles from different sources can improve model performance far more than ensembles from a single source.", "Our methods proved effective in the MEDIQA 2019 shared task." ] ]
1906.04382
[ [ "Detection and estimation of parameters in high dimensional multiple\n change point regression models via $\ell_1/\ell_0$ regularization and\n discrete optimization" ], [ "Abstract Binary segmentation, which is sequential in nature, is thus far the most widely used method for identifying multiple change points in statistical models.", "Here we propose a top down methodology called arbitrary segmentation that proceeds in a conceptually reverse manner.", "We begin with an arbitrary superset of the parametric space of the change points, and locate unknown change points by suitably filtering this space down.", "Critically, we reframe the problem as that of variable selection in the change point parameters; this enables the filtering down process to be achieved in a single step with the aid of an $\ell_0$ regularization, thus avoiding the sequentiality of binary segmentation.", "We study this method under a high dimensional multiple change point linear regression model and show that the rates of convergence of the error in the regression and change point estimates are near optimal.", "We propose a simulated annealing (SA) approach to implement a key finite state space discrete optimization that arises in our method.", "Theoretical results are numerically supported via simulations.", "The proposed method is shown to possess the ability to agnostically detect the `no change' scenario.", "Furthermore, its computational complexity is of order $O(Np^2)$+SA, where SA is the cost of an SA optimization on an $N$ (no.", "of change points) dimensional grid.", "Thus, the proposed methodology is significantly more computationally efficient than existing approaches.", "Finally, our theoretical results are obtained under weaker model conditions than those assumed in the current literature." 
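The finite-state-space discrete optimization mentioned in the abstract can be illustrated with a generic simulated annealing search. The toy below (a single change point and a least-squares cost on a piecewise-constant signal) is only an illustration of SA over a change-point grid, not the paper's $\ell_0$-regularized procedure:

```python
import math
import random

def sa_change_point(y, n_iter=2000, t0=1.0, seed=0):
    """Simulated annealing over candidate change points k in {1,...,n-1},
    minimizing the two-segment residual sum of squares."""
    rng = random.Random(seed)
    n = len(y)

    def cost(k):
        left, right = y[:k], y[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        return sum((v - m1) ** 2 for v in left) + sum((v - m2) ** 2 for v in right)

    k = rng.randrange(1, n)
    best_k, best_c = k, cost(k)
    c = best_c
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter) + 1e-9          # linear cooling schedule
        k_new = min(max(k + rng.choice([-1, 1]), 1), n - 1)  # local grid move
        c_new = cost(k_new)
        # accept improvements always, uphill moves with Metropolis probability
        if c_new < c or rng.random() < math.exp((c - c_new) / temp):
            k, c = k_new, c_new
            if c < best_c:
                best_k, best_c = k, c
    # final greedy polish: walk downhill from the best visited point
    improved = True
    while improved:
        improved = False
        for kn in (best_k - 1, best_k + 1):
            if 1 <= kn <= n - 1 and cost(kn) < best_c:
                best_k, best_c = kn, cost(kn)
                improved = True
    return best_k

# Piecewise-constant toy signal with a jump at index 30
y = [0.0] * 30 + [2.0] * 30
print(sa_change_point(y))  # recovers the jump location, 30
```

In the paper's setting the state space is an $N$-dimensional grid of candidate change point vectors rather than a single index, but the accept/reject mechanics over a finite grid are the same.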
], [ "Introduction", "High dimensional regression models that allow vastly larger number of parameters $p$ than the sample size $n,$ have found applications in many fields of scientific inquiry such as genomics, social networking, empirical economics, finance among many others.", "This has led to a rapid development of statistical literature investigating methods capable of analyzing such models and data sets.", "One of the most successful methods for analysing high dimensional regression models has been the Lasso, which is based on the least squares loss and $\\ell _1$ regularization ([42]).", "Innumerable investigations have since been carried out to study the behavior of the Lasso estimator and its various modifications in many different settings (see e.g., [49]; [48]; [6]; [3] [5]; [23], [24] and the references therein).", "For a general overview on the developments of Lasso and its variants we refer to the monograph of [7] and the review article of [43].", "All aforementioned articles provide results in a regression setting where the parameters are dynamically stable.", "In contrast, multiphase/change point regression models provide a dynamic setting in which regression parameters are allowed to switch values based on a change inducing variable or in a time ordered sense.", "Such models allow for a greater versatility in modelling data, especially in a high dimensional setting.", "In many experiments, the estimated locations of change points may reveal additional critical information of interest.", "In the past few years several articles have studied high dimensional change point models in an `only means' setup.", "In this setting, change points are characterized with respect to dynamic mean vectors of time ordered random vectors, where the dimension of the observation vector may be larger than the number of observations ([8], [12],and [45]; among others).", "Another context in which high dimensional change point models have been investigated is that of a 
dynamic covariance structure which is related to the study of evolving networks ([39],[14], [1]; among others).", "In contrast, change point methods for high dimensional linear regression models have received much less attention and only a select few articles have considered this problem in the recent literature.", "In this paper, we consider a high dimensional multiphase (change point) regression model given by, $y_i=\\sum _{j=1}^{N+1} x_i^T\\beta _{(j-1)}^0{\\bf 1}[\\tau _{j-1}^0 < w_i \\le \\tau _j^0] +\\varepsilon _i,\\quad i=1,..,n,$ where $N\\ge 0,$ $\\tau _0^0=-\\infty ,$ $\\tau _{N+1}^0=\\infty ,$ and ${\\bf 1}[\\cdot ]$ represents the indicator function.", "The components of the change point parameter vector are assumed to be $\\tau ^0=(\\tau ^0_1,...,\\tau ^0_N)^T\\in \\bar{{\\mathbb {R}}}^{N},$ $\\bar{{\\mathbb {R}}}={\\mathbb {R}}\\cup \\lbrace -\\infty \\rbrace $ such that $\\tau _{j-1}^0\\le \\tau _j^0,$ $j=1,...,N.$ First note that when $\\tau _0=\\tau _1^0=...=\\tau _N^0=-\\infty ,$ model (REF ) reduces to an ordinary linear regression model without change points.", "This case where all change points are at negative infinity characterizes the case of `no change', and in the following we refer to this case as $N=0.$ On the other hand we characterize the case of one or more change points, $N\\ge 1,$ as when the components of $\\tau ^0$ are distinct and finite, i.e., $-\\infty <\\tau ^0_1<...<\\tau ^0_N.$ The observed variables in model (REF ) are the response $y_i\\in {\\mathbb {R}},$ the $p$ -dimensional predictors $x_i\\in {\\mathbb {R}}^p,$ and change inducing variable $w_i\\in {\\mathbb {R}}.$ The parameters of interest are the number of change points $N\\in \\lbrace 0,1,2,...\\rbrace ,$ the change point parameter vector $\\tau ^0\\in \\bar{{\\mathbb {R}}}^{N},$ and the regression parameters $\\beta _{(j)}^0\\in {\\mathbb {R}}^p,$ $j=0,...,N.$ The change points $\\tau _{j}^0$ $j=1,..,N$ represents threshold values of the variable $w$ subsequent to 
which the regression parameter changes from its current value $\\beta _{(j-1)}$ to a new value $\\beta _{(j)}.$ Furthermore, we let $p\\gg n,$ so that model (REF ) corresponds to a high dimensional setting.", "In the classical setting with a fixed number of parameters and $n\\rightarrow \\infty ,$ change point regression models such as (REF ) have been extensively investigated, albeit a large proportion of this literature is developed in the case with only a single change point.", "The works of [17], [18], [20], [2], [19], and [21] investigate the setting where parameters are assumed to change at certain unknown time points of the sampling period.", "On the other hand, the works of [16], [28], and [29] study the setting where the change point is formulated based on one or more covariate thresholds.", "In the literature, the latter approach is typically referred to as two-phase or multiphase regression; however, it is also common to broadly call both change point regression models.", "The literature on regularized estimation in change point regression models is very sparse.", "Models similar to (REF ) with a single change point have been studied by [26], [30], and [31] in the high dimensional setting.", "The case of multiple change points is investigated in [9], [47], [22] and [32].", "Amongst these articles, the first three consider the fixed $p$ setting, whereas the last article considers the high dimensional setting, as is also the case in this paper.", "The article [32] proposes a binary segmentation approach for the recovery of the change points of the regression model, where change points are searched for, and then added to the set of all change points, one by one.", "In the context of change point parameters, this binary segmentation approach can be viewed as the counterpart of step-up regression, where parameters are included sequentially.", "It is important to remember that in the current high dimensional setting, in order to search for a single change point
for each segment, the approach of [32] requires $O\\big (n{\\rm Lasso}(n)\\big )$ computations, where Lasso$(n)$ represents the computational cost of running one Lasso optimization with a sample size $n.$ In fact, the authors show that the overall computational cost of their approach is of the order $O\\big (n\\log (n){\\rm Lasso}(n)\\big ).$ In contrast, our approach proceeds in a conceptually reverse manner.", "The method that we propose can be viewed, in a sense, as the counterpart of step-down regression for the change point parameters.", "We begin with a superset of the parametric space of the unknown change points and filter this space down to identify the unknown change points, following which we estimate the regression parameters.", "Critically, the `stepping down' process in our methodology can be carried out in a single step via an $\\ell _0$ regularization.", "We achieve this by converting the problem of recovery of change points to a variable selection problem in the change point parameters.", "This conversion of the change point estimation problem to a variable selection problem in turn relies on initial regression estimates.", "The second main novelty of this manuscript is to show that initial regression estimates that are much slower than optimal in rates of convergence can be utilized to obtain change point estimates that are themselves near optimal in rates of convergence.", "In other words, our setup constitutes a rare statistical scenario where relatively `poor' estimates of some parameters of a model can be utilized to obtain near optimal estimates of other parameters of the model.", "The proposed method circumvents the sequential approach of binary segmentation for the recovery of change points.", "Consequently, the method requires only Lasso$(n)+$ SA computations for the identification and recovery of change points, where SA represents the computational cost of a simulated annealing optimization, which is typically very efficient.", "The simulated annealing
algorithm is used to implement a key discrete optimization over an $O(N)$ dimensional space that arises in our methodology due to the use of an $\\ell _0$ regularization.", "Thus, our approach is far more efficient than any existing comparable methodology for high dimensional change point regression models.", "Being based on an $\\ell _0$ regularization, our approach also provides the ability to detect the case of $N=0,$ where an ordinary linear regression without change points is more appropriate.", "In comparison, binary segmentation approaches typically require the existence of at least one change point.", "Finally, we also note that our analysis requires significantly weaker assumptions than those currently assumed in the literature.", "Further comparisons of our method, assumptions and results with the existing literature are made in Section .", "The remainder of this article is organized as follows.", "Section provides the proposed methodology and the technical assumptions required for the theoretical analysis.", "Section provides the main theoretical results regarding the performance of the proposed methodology.", "Section discusses the implementation of the proposed method and a simulated annealing approach for the implementation of a key step of our method.", "This section also provides numerical results on the finite sample performance of our method.", "The proofs of all main results are provided in Appendix A of the Supplementary materials of this article.", "Some additional technical results and lemmas are provided in Appendix B of the Supplementary materials.", "Notations: We conclude this section with a short note on the notations used in this paper.", "Throughout the paper, for any vector $\\delta \\in {\\mathbb {R}}^p,$ $\\Vert \\delta \\Vert _0$ represents the number of non-zero components in $\\delta ,$ and $\\Vert \\delta \\Vert _1$ and $\\Vert \\delta \\Vert _2$ represent the usual 1-norm and Euclidean norm, respectively.", "The norm $\\Vert \\delta
\\Vert _{\\infty }$ represents the usual sup norm, i.e., the maximum of the absolute values of all elements.", "For any set of indices $S\\subseteq \\lbrace 1,...,p\\rbrace ,$ let $\\delta _S=(\\delta _j)_{j\\in S}$ represent a sub-vector of $\\delta $ containing the components corresponding to the indices in $S.$ Also, let $|S|$ represent the cardinality of the set $S.$ The notation ${\\bf 1}[\\cdot ]$ represents the usual indicator function.", "We denote by $\\Phi (\\cdot )$ the cdf of the $w_i$ 's and let $d(\\tau _a,\\tau _b)=P(\\tau _a< w_i\\le \\tau _b)=\\Phi (\\tau _b)-\\Phi (\\tau _a),$ for $\\tau _a\\le \\tau _b\\in \\bar{{\\mathbb {R}}};$ clearly, $d(\\tau _a,\\tau _b)=0 \\Leftrightarrow \\tau _a=\\tau _b.$ We denote by $\\bar{{\\mathbb {R}}}={\\mathbb {R}}\\cup \\lbrace -\\infty \\rbrace $ the extended Euclidean space, with only the left closure point included.", "We shall also use the notation $a\\vee b=\\max \\lbrace a,b\\rbrace ,$ and $a\\wedge b=\\min \\lbrace a,b\\rbrace ,$ $a,b\\in {\\mathbb {R}}.$ The notation $c_u,c_m$ is used to represent generic constants that may be different from one line to the next.", "Here, $0<c_u<\\infty $ represent universal constants, whereas $0<c_m<\\infty $ are constants that depend on model parameters such as variance parameters of underlying distributions.", "Lastly, $0<c_1,c_2<\\infty $ are also generic constants that may depend on both $c_u$ and $c_m.$ For any $\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}},$ $\\tau _a\\le \\tau _b,$ and any $\\gamma \\in {\\mathbb {R}}^p,$ define the segmentwise least squares loss as, $Q^*(\\gamma ,\\tau _a,\\tau _b):=\\sum _{i=1}^{n} (y_i-x_i^T\\gamma )^2{\\bf 1}[\\tau _a< w_i\\le \\tau _b],$ where the indicator function ${\\bf 1}[\\tau _a<w_i\\le \\tau _b]=0$ if $\\tau _a=\\tau _b$ (this is a slight misuse of notation, used only for simpler exposition; to be notationally precise, this term should be ${\\bf 1}[w_i\\le \\tau _b]-{\\bf 1}[w_i\\le \\tau _a],$ for $\\tau _a\\le \\tau _b$ ).
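To make the segmentwise least squares loss concrete, the following is a minimal numerical sketch in Python/numpy. The simulated two-phase dataset, the coefficient vectors, and all function names below are hypothetical illustrations, not part of the proposed methodology; the sketch assumes a single change point at $\tau^0=0$ in the cdf-free (raw) scale of $w$.

```python
import numpy as np

def segment_loss(y, X, w, gamma, tau_a, tau_b):
    # Q*(gamma, tau_a, tau_b): squared residuals summed over the
    # observations whose change inducing variable lies in (tau_a, tau_b].
    mask = (w > tau_a) & (w <= tau_b)
    resid = y[mask] - X[mask] @ gamma
    return float(resid @ resid)

# Hypothetical data from a two-phase model: N = 1 change point at tau^0 = 0.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
w = rng.standard_normal(n)                    # change inducing variable
beta_0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # regime for w <= 0
beta_1 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])  # regime for w > 0
y = np.where(w <= 0, X @ beta_0, X @ beta_1) + 0.1 * rng.standard_normal(n)

# Loss of each true regime on its own segment vs. one regime forced globally.
q_left = segment_loss(y, X, w, beta_0, -np.inf, 0.0)
q_right = segment_loss(y, X, w, beta_1, 0.0, np.inf)
q_global = segment_loss(y, X, w, beta_0, -np.inf, np.inf)
```

Evaluating the true coefficients segment by segment leaves only the noise contribution, whereas forcing a single regime over the whole support of $w$ inflates the loss on the post-change segment; it is exactly this gap that the total least squares loss in the sequel aggregates over a candidate partition.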
For any $\\check{N}\\ge 1,$ let $\\check{\\tau }:=(\\check{\\tau }_1,...,\\check{\\tau }_{\\check{N}})^T\\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ be any vector such that $\\check{\\tau }_{j-1}\\le \\check{\\tau }_j,$ $j=1,...,\\check{N},$ $\\check{\\tau }_0=-\\infty .$ Also, let $\\check{\\tau }_{\\check{N}+1}=\\infty ,$ such that $(\\check{\\tau }_0,\\check{\\tau }^T,\\check{\\tau }_{\\check{N}+1})^T$ forms a partition of $\\bar{{\\mathbb {R}}}.$ Additionally, for any sequence of vectors $\\alpha _{(j)}\\in {\\mathbb {R}}^p,$ $j=0,...,\\check{N},$ denote by $\\alpha =(\\alpha _{(0)}^T,...,\\alpha _{(\\check{N})}^T)^T\\in {\\mathbb {R}}^{{(\\check{N}+1)} p}$ the concatenation of all the $\\alpha _{(j)}$ 's.", "Then define the total least squares loss evaluated at $(\\check{N},\\alpha ,\\check{\\tau })$ as, $Q(\\check{N},\\alpha ,\\check{\\tau }):= \\frac{1}{n}\\sum _{j=1}^{\\check{N}+1} Q^*(\\alpha _{(j-1)},\\check{\\tau }_{j-1},\\check{\\tau }_{j}).\\nonumber $ Next, for any $\\tau \\in {\\mathbb {R}}^{\\check{N}},$ define $\\hat{{\\cal T}}(\\tau )\\subseteq \\lbrace 1,...,\\check{N}\\rbrace $ as the set of indices of the distinct and finite components of $\\tau ,$ i.e., $\\hat{{\\cal T}}(\\tau )=\\big \\lbrace j\\in \\lbrace 1,...,\\check{N}\\rbrace \\,\\,;\\,\\, \\tau _{j-1}\\ne \\tau _j,\\,\\,\\tau _j\\ne -\\infty \\big \\rbrace ,$ where $\\tau _0=-\\infty .$ Under these notations and the model (REF ), we propose estimators for the number of change points, the locations of the change points, and the regression coefficients, respectively.", "These estimators are stated in the following as two algorithms, each consisting of two steps.", "The first algorithm is designed to recover the number and locations of the change points of the model (REF ), and the second to recover the corresponding regression coefficients.", "All technical assumptions required for the theoretical validity of the proposed estimates are stated in Section REF .", "Figure: [Algorithm 1; no caption in source.]", "Note that while the initializing
$\\check{\\tau }$ in Step 0 is chosen in ${\\mathbb {R}}^{\\check{N}},$ the optimization in Step 1 is performed over the extended Euclidean space $\\bar{{\\mathbb {R}}}^{\\check{N}}.$ Algorithm 1 begins (Step 0) with a nearly arbitrary partition $\\check{\\tau }$ in a superset ${\\mathbb {R}}^{\\check{N}}$ of the parametric space ${\\mathbb {R}}^{N}$ of the unknown change points.", "The simple update in Step 1 of Algorithm 1 recovers the number of change points $\\tilde{N},$ and the corresponding locations $\\tilde{\\tau }.$ There are two main novelties of Algorithm 1.", "First, instead of searching for change points sequentially, Algorithm 1 searches for them in a larger parametric space by reframing the problem as one of variable selection.", "Here the selection is in terms of differences between adjacent $\\tau _j$ 's, i.e., the $\\ell _0$ regularization in Step 1 forces these adjacent components to collapse towards each other.", "This regularization can be viewed as an $\\ell _0$ version of the total variation penalty on the components of $\\tau .$ The second main novelty is that in order to achieve this conversion to a variable selection problem, we use a nearly arbitrary partition that serves as an initial rough guess.", "It shall become theoretically and empirically apparent in the following that the estimates obtained in Step 1 are robust against this initial partition, i.e., nearly any arbitrarily chosen partition in Step 0 shall yield near optimal estimates from Step 1.", "The underlying working mechanism of Algorithm 1 is illustrated in Figure REF .", "Figure: Illustration of the working mechanism of Algorithm 1 with $N=2,$ $\\check{N}=5$ : Initializing with a nearly arbitrary partition $\\check{\\tau }=(\\check{\\tau }_1,...,\\check{\\tau }_5)^T$ and corresponding regression coefficient estimates $\\hat{\\alpha }_{(j)},$ $j=0,..,5,$ the algorithm converges to the unknown number of change points and the
locations of the distinct change points lie in an optimal neighborhood of the unknown change points.", "Specifically, the components nearest to an unknown $\\tau ^0_j$ shall converge toward it, and all remaining components converge either to the previous or the next component.", "Next, we propose Algorithm 2 for the estimation of the regression parameter vectors of model (REF ).", "This algorithm utilizes the estimated number ($\\tilde{N}$ ) and locations ($\\tilde{\\tau }$ ) of the change points from Algorithm 1, to obtain coefficient estimates on the corresponding partition yielded by $\\tilde{\\tau }.$ Note that when $\\tilde{N}=0$ from Algorithm 1, Algorithm 2 is equivalent to implementing the ordinary Lasso on the data $(x_i,y_i),$ $i=1,...,n.$ Figure: [Algorithm 2; no caption in source.]", "The main theoretical contribution of this manuscript is to show that the proposed methodology consistently recovers the unknown number of change points, and yields estimates of the locations of the change points and of the regression coefficient vectors that are near optimal in their rates of convergence.", "Specifically, under suitable conditions, we shall derive the following relations that hold for $n$ sufficiently large with probability at least $1-c_1(1\\vee N)\\exp (-c_2\\log p):$ $&(i)&\\,\\, {\\rm When}\\,\\, N\\ge 0,\\quad \\tilde{N}= N,\\\\&(ii)&\\,\\, {\\rm When}\\,\\, N\\ge 1,\\quad \\sum _{j=1}^{N} d(\\tilde{\\tau }_j,\\tau ^0_j)\\le c_uc_mN\\frac{s\\log p}{n},\\,\\,{\\rm and} \\nonumber \\\\&(iii)& \\sum _{j=0}^N\\Vert \\tilde{\\alpha }_{(j)}-\\beta ^0_{(j)}\\Vert _q \\le c_uc_m Ns^{\\frac{1}{q}}\\sqrt{\\frac{\\log p}{n}},\\quad q=1,2.\\nonumber $ In an ordinary high dimensional linear regression model without change points, it has been shown that the optimal rate of convergence for a regression parameter vector estimate is $\\sqrt{s\\log p/n}$ under the $\\ell _2$ norm ([46], [37], [4]).", "Also, the rate of convergence of the change point estimates in (REF ) matches the fastest available in
the literature, see, e.g., [26], [30], [32], [31].", "The result in (REF ) is quite surprising since the estimates $\\tilde{N}$ and $\\tilde{\\tau }$ are computed based on the initial regression coefficient estimates $\\hat{\\alpha }$ from Step 0 of Algorithm 1.", "These regression estimates may not be anywhere near optimal in their rate of convergence, since they are in turn computed based on a nearly arbitrary partition of the support of $w.$ Despite these rough regression estimates, we can prove that Step 1 of Algorithm 1 identifies the change points correctly and provides estimates that are indeed near optimal in their rate of convergence.", "Next, we discuss two immediate concerns that may arise to the reader regarding Algorithm 1.", "First, how stringent is Condition A on the initializers $\\check{N}$ and $\\check{\\tau }$ in Step 0 of this algorithm?", "This condition on the initializers is in fact very mild.", "For Algorithm 1, where $\\check{N}$ is user chosen, nearly any arbitrarily chosen partition with a large enough $\\check{N}\\ge 1\\vee N$ satisfies this condition.", "The other requirements of this condition are only meant to remove pathological cases, such as when all components of $\\check{\\tau }$ are closely clustered together or are concentrated at one end of the support of $w.$ From a practical perspective, an equally spaced, large enough partition, as expected, works well in all empirically examined cases.", "A second concern that may arise regarding Algorithm 1 is whether the optimization in Step 1 of this method is computationally feasible.", "At first impression, the optimization of Step 1 does indeed appear to be computationally intensive, given that it is a nonsmooth, nonconvex optimization (with no apparent convex relaxations), and with potentially multiple global optima.", "However, the following observations shall serve to erase this impression.", "Note that although this optimization of Step 1 is over the extended Euclidean space
$\\bar{{\\mathbb {R}}}^{\\check{N}},$ the loss function $Q(\\check{N}, \\hat{\\alpha }, \\tau )$ is a step function over the finite grid $\\tau \\in \\lbrace -\\infty ,w_1,...,w_n\\rbrace ^{\\check{N}},$ with step changes occurring at grid points on this $\\check{N}$ dimensional grid (see Figure REF in Section for an illustration of this behavior).", "Additionally, the $\\ell _0$ norm term in the optimization of Step 1 is either 0 or 1, based on whether $\\tau _{j-1}$ and $\\tau _{j}$ are equal or unequal respectively; in other words, the distance between $\\tau _{j-1}$ and $\\tau _{j}$ does not influence the value of the $\\ell _0$ norm (note that this would not be true if, in Step 1, the $\\ell _1$ norm were considered in place of the $\\ell _0$ norm).", "These two observations together imply that any global optimum achieved in the extended Euclidean space $\\bar{{\\mathbb {R}}}^{\\check{N}}$ is also attained at some $\\check{N}$ dimensional point on the grid $\\lbrace -\\infty ,w_1,...,w_n\\rbrace ^{\\check{N}}.$ In other words, the optimization in Step 1 is reduced to a discrete optimization on a finite state space (the total number of possible states being $(n+1)^{\\check{N}}$ ).", "In view of these observations, the optimization in Step 1 is reminiscent of the well known travelling salesman problem, and correspondingly can be solved efficiently using a simulated annealing approach.", "Additionally, since simulated annealing is not a gradient based approach, it is capable of easily handling an $\\ell _0$ penalty.", "A detailed discussion of the implementation of Algorithm 1 is provided in Section .", "We also note that Step 0 of Algorithm 1 and Step 1 of Algorithm 2 are Lasso$\\big (n, p\\big )$ estimates.", "Thereby, these two steps are efficiently implementable using any one of the several available methods in the literature, e.g., coordinate or gradient descent algorithms, see, e.g., [15], or via interior point methods for linear optimization under
second order conic constraints, see, e.g., [27].", "Finally, we conclude this section by also emphasizing the computational efficiency of the proposed Algorithm 1.", "First note that Step 0 of Algorithm 1 consists of $(\\check{N}+1)$ computations of Lasso$(n,p)$ estimates.", "It is also known that the computational complexity of most algorithms for the Lasso optimization scales like $O(p^2).$ As briefly described above, Step 1 of Algorithm 1 shall be implemented via simulated annealing (SA) over a $\\check{N}$ dimensional grid.", "SA optimizations are known to be very efficient for a large class of problems and can ordinarily be accomplished in a time scaling of order $O(\\check{N}^4),$ see, e.g., [41].", "We also mention that in the worst case the complexity of SA can be exponential, depending on the optimization under consideration.", "However, in our study the SA optimization of Step 1 is empirically observed to be well behaved and carried out at a cheap computational cost.", "Thus, assuming $\\check{N}\\le c(1\\vee N),$ $c\\ge 1,$ the overall complexity of Algorithm 1 is $O\\big ((1\\vee N)p^2\\big )+{\\rm SA}.$ Thereby, this algorithm is far more computationally efficient than any comparable existing method for the estimation of the parameters of model (REF ).", "To see this, compare the above complexity to the binary segmentation approach proposed in [32].", "They show that their method is implementable with $O(n\\log n)$ Lasso$\\big (n,(N+1)p\\big )$ computations with the aid of dynamic programming; thus the procedure effectively yields a time scaling of $O(nNp^2).$" ], [ "Assumptions", "In this subsection, we state all necessary conditions and technical assumptions under which the results of this article are derived.", "Condition A (requirements of initializer):  (i) The initializing vector $\\check{\\tau }=(\\check{\\tau }_1,...,\\check{\\tau }_{\\check{N}})^T\\in {\\mathbb {R}}^{\\check{N}}$ is such that $\\check{N}$ is larger than the true number of change points,
i.e., $\\check{N} \\ge 1\\vee N,$ and $\\check{N}\\le c_u(1\\vee N),$ $c_u\\ge 1.$   (ii) All initial change points are sufficiently separated, i.e., $d(\\check{\\tau }_{j-1},\\check{\\tau }_{j})> l_{\\min }>0,$ for all $j=1,...,\\check{N}+1,$ for some positive sequence $l_{\\min },$ where $\\check{\\tau }_{0},\\check{\\tau }_{\\check{N}+1}$ denote $-\\infty $ and $\\infty ,$ respectively.", "(iii) Let ${\\check{u}}_n=1\\wedge c_u(1/n)^{1/k},$ for some constants $k\\in [1,\\infty ),$ and $c_u>0.$ Then assume that there exists a subset ${\\cal T}:=\\lbrace m_0,m_1,...,m_N,m_{N+1}\\rbrace \\subseteq \\lbrace 0,1,2,...,\\check{N}+1\\rbrace $ such that $m_0=0,$ $m_{N+1}=\\check{N}+1$ and $\\max _{1\\le j\\le N}d(\\check{\\tau }_{m_j}, \\tau ^0_{j})\\le {\\check{u}}_n.$ When $N=0,$ define ${\\cal T}:=\\lbrace m_0,m_{N+1}\\rbrace .$ As briefly discussed earlier, Condition A is a mild assumption on the initializers.", "Roughly speaking, this condition requires the initial change point vector to be a large enough partition of ${\\mathbb {R}}$ where the components of this initializing vector are sufficiently separated from each other.", "Also, this condition requires that at least one initial change point lies in some fractional neighborhood of each unknown change point.", "The condition $d(\\check{\\tau }_{m_j}, \\tau ^0_{j})\\le {\\check{u}}_n,$ $j=1,...,N,$ is very mild since the constant $k$ can be arbitrarily large (as long as the rate conditions of Conditions B and C are satisfied).", "In one of our main results, we shall show that despite the initializers $\\check{\\tau }_{m_j}$ lying in an arbitrary fractional neighborhood of $\\tau _j^0,$ the updated change point estimate $\\tilde{\\tau }$ satisfies $\\Vert \\tilde{\\tau }-\\tau ^0\\Vert _1\\le Ns\\log p/n,$ with high probability.", "Note that the localization error bound of $\\tilde{\\tau }$ is free of $k.$ This condition is similar to Condition I assumed in [26], and we refer to
that article for further insights on this condition.", "Here we also state that the implementation of the proposed methodology does not require prior knowledge of $k.$ It is observed that a large enough grid of equally separated initial change points works well in nearly all empirically examined cases.", "The term $l_{\\min }$ in Condition A(ii) is allowed to potentially decrease to zero with $n,$ however this dependence is suppressed for clarity of exposition.", "The rate at which such a convergence of $l_{\\min }$ is allowed also depends on other model parameters, and is explicitly stated in Condition B(iii).", "We can now define the $\\check{N}$ -dimensional parameter that Step 1 of Algorithm 1 is designed to recover in place of the $N$ -dimensional $\\tau ^0.$ For this purpose, first define a set of indices ${\\cal T}^*=\\lbrace h_0,h_1,...,h_N,h_{N+1}\\rbrace ,$ where $\\lbrace h_1,...,h_{N}\\rbrace \\subseteq \\lbrace 1,...,\\check{N}\\rbrace ,$ $h_0=0,$ and $h_{N+1}=\\check{N}+1.$ Consider any $\\tau \\in {\\mathbb {R}}^{\\check{N}}$ satisfying $\\tau _1\\le \\tau _2\\le ...\\le \\tau _{\\check{N}},$ then the components $h_j$ of ${\\cal T}^*$ are defined as, $\\hspace{28.45274pt} h_{j}=\\min \\Big \\lbrace k\\in \\lbrace 1,...,\\check{N}\\rbrace ;\\,\\, k>m_{j-1},\\,\\, \\tau _{k}=\\tau _{m_j}\\Big \\rbrace ,\\,\\, {\\rm for}\\,\\, j=1,...,N,$ where ${\\cal T}=\\lbrace m_0,m_1,...,m_{N+1}\\rbrace $ is given in Condition A.", "Clearly, the construction of these indices depends on the choice of the vector $\\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}},$ and the set ${\\cal T}.$ In the following, this dependence is notationally suppressed for clarity of exposition, and is to be understood implicitly.", "The indices $h_j$ are meant to capture the first index $k$ after $m_{j-1}$ for which $\\tau _k=\\tau _{m_j}.$ If the chosen $\\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ is such that $\\tau _{m_j-1}<\\tau _{m_j},$ for all $j=1,..,N,$ then the
set ${\\cal T}^*={\\cal T}.$ Now define the vector $\\tau ^*=(\\tau ^*_1,...,\\tau ^*_{\\check{N}})^T\\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ such that, $\\hspace{28.45274pt}\\tau ^*_{h_j}=\\tau ^0_j,\\,\\,j=1,...,N\\,\\,{\\rm and}\\,\\, \\tau _k^*=\\tau ^0_j,\\,\\, h_j \\le k \\le m_j,\\,\\,j=1,...,N,$ and finally, $\\tau ^*_j=\\tau ^*_{j-1}$ for all remaining indices in the set ${\\cal T}^{*c},$ where, as before, $\\tau ^*_0=-\\infty .$ Note that under the above definition of $\\tau ^*,$ the subset of finite and distinct components of this vector is exactly the unknown parameter vector $\\tau ^0;$ however, the orientation or order in which they appear in this $\\check{N}$ -dimensional vector may differ depending on the set ${\\cal T}^*$ and the chosen $\\tau ,$ as well as the set ${\\cal T}.$ To see the need for this non-traditional construction of the target parameter $\\tau ^*,$ first recall that the objective function in Step 1 of Algorithm 1 is non-convex, and consequently may have multiple global optima.", "Now consider any such global optimum $\\hat{\\tau }=(\\hat{\\tau }_1,...,\\hat{\\tau }_{\\check{N}})^T\\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ and let the orientation index set ${\\cal T}^*$ be defined in accordance with this optimum $\\hat{\\tau },$ together with the index set ${\\cal T}.$ Then, the $\\tau ^*$ constructed with the corresponding ${\\cal T}^*$ forms the target vector that $\\hat{\\tau }$ is in fact approximating.", "The non-traditional aspect of this construction is that the subset of finite and distinct components of the target vector $\\tau ^*$ is exactly the parameter vector $\\tau ^0,$ and thus fixed and non-random.", "However, the orientation in which the components may appear depends on the optimizer itself, i.e., this orientation may be random and depends on the orientation in which the global optimum is achieved.", "In the following, we illustrate the construction of $\\tau ^*$ using a concrete example.", "Example 2.1 Consider the
model (REF ) with $N=3,$ and $\\tau ^0=(-1,0,1)^T.$ Let Algorithm 1 be initialized with $\\check{N}=7$ and $\\check{\\tau }$ such that ${\\cal T}=\\lbrace 0,2,4,6,8\\rbrace ,$ i.e., the second, fourth and sixth components of $\\check{\\tau }$ are in a fractional neighborhood of $\\tau ^0_1,$ $\\tau ^0_2$ and $\\tau ^0_3,$ respectively.", "Now consider the following two cases.", "(a) In the first case, suppose that the global optimum $\\hat{\\tau }=(\\hat{\\tau }_1,...,\\hat{\\tau }_7)^T$ obtained from Step 1 of Algorithm 1 is such that $\\hat{\\tau }_1<\\hat{\\tau }_2,$ $\\hat{\\tau }_3<\\hat{\\tau }_4,$ and $\\hat{\\tau }_5=\\hat{\\tau }_6.$ Thus, in this case, by the definition of the set ${\\cal T}^*$ we have that ${\\cal T}^*=\\lbrace 0,2,4,5,8\\rbrace .$ Consequently, by the definition of $\\tau ^*,$ we have that $\\tau ^*=(-\\infty ,-1,-1,0,1,1,1).$ (b) In the second case, suppose that the global optimum $\\hat{\\tau }=(\\hat{\\tau }_1,...,\\hat{\\tau }_7)^T$ obtained from Step 1 of Algorithm 1 is such that $\\hat{\\tau }_1<\\hat{\\tau }_2,$ $\\hat{\\tau }_3<\\hat{\\tau }_4,$ and $\\hat{\\tau }_5<\\hat{\\tau }_6.$ In this case, we have that ${\\cal T}^*=\\lbrace 0,2,4,6,8\\rbrace ,$ and $\\tau ^*=(-\\infty ,-1,-1,0,0,1,1).$ Our results to follow shall show that any global optimum $\\hat{\\tau }$ must lie in a near optimal neighborhood of the corresponding $\\tau ^*,$ with high probability.", "Note that irrespective of the orientation of the components of the vector $\\tau ^*,$ the subset of finite and distinct components is exactly $\\tau ^0.$ Correspondingly, we shall obtain the estimates $\\tilde{\\tau },$ which are obtained as the subset of distinct and finite components of $\\hat{\\tau }.$ Condition B (assumptions on model dimensions):   (i) For $j=0,...,N,$ let $S_j=\\big \\lbrace k\\in \\lbrace 1,...,p\\rbrace ;\\,\\,\\beta _{(j)k}^0\\ne 0\\big \\rbrace $ and $S=\\cup _j S_j.$ Then for some $s=s_n\\ge 1,$ we assume $|S|\\le s.$   (ii) The model
dimensions $s,p,n$ satisfy $s\\log p\\big /nl_{\\min }^2\\rightarrow 0.$   (iii) The choice of $k\\in [1,\\infty ),$ and $l_{\\min }$ of Condition A, $\\rho ^2$ of Condition C, together with $s,n,$ satisfies $(s\\rho ^2)\\big /(l_{\\min }^2n^{1/k})\\rightarrow 0.$ Condition B(i) is the usual sparsity assumption on high dimensional models.", "Conditions B(ii) and B(iii) are restrictions on the model dimensions; Condition B(iii) restricts the dimensionality of the model in accordance with the initializing $\\check{u}_n$ -neighborhood and the minimum separation $l_{\\min }.$ The largest model allowed by Condition B occurs when the initializers in Condition A allow for $k=1,$ $l_{\\min }>c_u,$ and Condition C allows for $\\rho ^2=O(1).$ In this case, we require $s\\log p/n\\rightarrow 0,$ i.e., Condition B(iii) becomes redundant given Condition B(ii).", "Condition C (assumptions on change parameters): If $N\\ge 1,$   (i) Define the minimum jump size $\\xi _{\\min }:=\\min _{j}\\Vert \\beta ^0_{(j)}-\\beta ^0_{(j-1)}\\Vert _2,$ $j=1,...,N,$ and assume that it is bounded below, i.e., $\\xi _{\\min }>c_u,$ $c_u>0.$ Also define the maximum jump size $\\xi _{\\max }:=\\max _j\\Vert \\beta ^0_{(j)}-\\beta ^0_{(j-1)}\\Vert _2,$ $j=1,...,N,$ and let $\\rho =\\xi _{\\max }/\\xi _{\\min }$ be the ratio of these jump sizes.", "Assume that $\\rho \\sqrt{s\\log p/n}\\rightarrow 0.$   (ii) Assume that all unknown change points are sufficiently separated, i.e., $d(\\tau _{j-1}^0,\\tau _j^0)\\ge c_ul_{\\min },$ $c_u>0,$ for all $j=1,...,N,$ such that $N\\rho \\check{u}_n/l_{\\min }\\rightarrow 0.$ This condition is only applicable when at least one change point exists in the model (REF ).", "When no change point exists ($N=0$ ), we can instead define the ratio $\\rho =1,$ and all remaining conditions can be ignored.", "Condition C(ii) is satisfied trivially if only a finite number of change points is assumed in model (REF ) and the jump ratio $\\rho \\le c_u,$ i.e., the maximum and minimum
jumps are of the same order.", "Note that Condition C(ii) and Condition A(ii) are controlled by the same sequence $l_{\\min },$ essentially assuming that the least separation between the initializing change points and that between the true change points are of the same order.", "This is again not asking for much, since we have also assumed that $\\check{N}\\le c(1\\vee N)$ in Condition A(i).", "In the case of an increasing number of change points, its rate is controlled by C(ii).", "Note that we do not make any assumptions on the maximum jump size $\\xi _{\\max };$ instead, we control the jump ratio $\\rho .$ Condition D (assumptions on model distributions):   (i) The vectors $x_i=(x_{i1},...,x_{ip})^T,$ $i=1,..,n,$ are i.i.d. subgaussian with mean vector zero, and variance parameter $\\sigma _x^2\\le C$ (recall that for $\\sigma >0,$ a random variable $\\eta $ is said to be $\\sigma $ -subgaussian if, for all $t\\in {\\mathbb {R}},$ $E[\\exp (t\\eta )] \\le \\exp (\\sigma ^2t^2/2);$ similarly, a random vector $\\xi \\in {\\mathbb {R}}^p$ is said to be $\\sigma $ -subgaussian if the inner products $\\langle \\xi , v\\rangle $ are $\\sigma $ -subgaussian for any $v\\in {\\mathbb {R}}^p$ with $\\Vert v\\Vert _2 = 1$ ).", "Furthermore, the covariance matrix $\\Sigma :=Ex_ix_i^T$ has bounded eigenvalues, i.e., $0<\\kappa \\le \\lambda _{\\min }(\\Sigma )\\le \\lambda _{\\max }(\\Sigma )\\le \\phi <\\infty .$   (ii) The model errors $\\varepsilon _i$ are i.i.d. subgaussian with mean zero and variance parameter $\\sigma _{\\varepsilon }^2\\le C.$   (iii) The change inducing random variables $w_i,$ $i=1,...,n,$ are i.i.d., with cdf represented by $\\Phi (\\tau _a)=P(w_i\\le \\tau _a),$ $\\tau _a\\in \\bar{{\\mathbb {R}}},$ and the distance between any two $\\tau _a<\\tau _b\\in \\bar{{\\mathbb {R}}}$ in the cdf scale represented as $d(\\tau _a,\\tau _b)=P(\\tau _a< w_i\\le \\tau _b).$   (iv) The random variables $x_i,w_i,\\varepsilon _i$ are independent of each other.", "The
subgaussian assumptions in Conditions D(i) and D(ii) are now standard in high dimensional linear regression models and are known to accommodate a large class of random designs.", "In ordinary high dimensional linear regression, these assumptions are used to establish well behaved restricted eigenvalues of the Gram matrix $\\sum x_ix_i^T/n$ ([36]; [40]), which are in turn used to derive convergence rates of $\\ell _1$ regularized estimators ([6]; and several others).", "These assumptions shall play a similar role in our high dimensional multiple change point setting.", "Finally, we also note that assumption D(iii) on the change inducing variable $w$ allows for both continuous and discrete r.v.'s.", "From a general perspective of regularized estimation for high dimensional change point linear regression models, the works that are closely related to this article are [26], [30], [31], and [32].", "The idea of converting a multiple change point detection problem to a variable selection problem using an $\\ell _0$ regularization and an arbitrary segmentation is novel and is completely different from all the articles listed above.", "The articles [30], [26], and [31] consider a setting with only a single change point.", "From a technical perspective, the assumptions made on model distributions in this article are similar to those made in [26] and are comparable to those assumed in [32].", "A major advantage of the proposed methodology is its ability to detect the `no change' case, i.e., where there are no change points in the model; to the best of our knowledge, the only other article that possesses this capability is [26], although it is limited to at most a single change point.", "Finally, we also emphasize that for the detection and estimation of multiple change points in regression models, the methodology proposed in this article is much more efficient, with a computational
complexity of $O(Np^2)$ plus the cost of the simulated annealing (SA) step, in comparison to the existing binary segmentation approach proposed in [32], which scales like $O(nNp^2).$" ], [ "Main Results", "To present the results of this section we require the following definitions.", "For any $\tau _a,\tau _b\in \bar{{\mathbb {R}}},$ let $\zeta _i(\tau _a,\tau _b)={\left\lbrace \begin{array}{ll}{\bf 1}[\tau _{a}<w_i\le \tau _b],\quad {\rm if}\,\, \tau _a<\tau _{b}, \\{\bf 1}[\tau _b< w_i\le \tau _a],\quad {\rm if}\,\,\tau _b<\tau _a.\end{array}\right.", "}$ Here it is implicitly understood that $\zeta _i(\tau _a,\tau _b)=0,$ if $\tau _a=\tau _b.$ Also, define for any $\tau _a,\tau _b\in \bar{{\mathbb {R}}},$ the following set of random indices, $n^w(\tau _a,\tau _b)={\left\lbrace \begin{array}{ll}i\in \lbrace 1,...,n\rbrace ;\quad \tau _a< w_i\le \tau _b,\quad {\rm if}\,\, \tau _a<\tau _b,\\i\in \lbrace 1,...,n\rbrace ;\quad \tau _b< w_i\le \tau _a,\quad {\rm if}\,\,\tau _b<\tau _a.\end{array}\right.", "}$ Here $n^w(\tau _a,\tau _b)=\emptyset ,$ if $\tau _a=\tau _b.$ To develop our results we require control on the cardinality $|n^w(\tau _a,\tau _b)|$ of the random set $n^w(\tau _a,\tau _b).$ Note that this cardinality is determined by the r.v.'s defined in (REF ), i.e., $|n^w(\tau _a,\tau _b)|= \sum _{i=1}^n\zeta _i(\tau _a,\tau _b).$ In view of this observation, the following lemma provides uniform control (over $\tau _a,\tau _b$ ) on the stochastic quantity $\sum _{i=1}^n\zeta _i(\tau _a,\tau _b).$ Lemma 3.1 Let $u_n,v_n$ be any non-negative sequences such that $\log (u_n^{-1})=O(\log p)$ and $v_n\ge c_u\log p /n,$ $c_u>0.$ Then under Condition D(iii), we have, $&(i)& \sup _{{\tau _a,\tau _b\in \bar{{\mathbb {R}}};\\d(\tau _a,\tau _b)\le u_n}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau _a,\tau _b)\le c_u\max \Big \lbrace \frac{\log p}{n}, u_n\Big \rbrace ,\nonumber \\&(ii)& \inf 
_{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\ge v_n}}\\frac{1}{n}\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _b)\\ge c_u v_n,\\nonumber $ with probability at least $1- c_1\\exp (-c_2\\log p),$ for $n$ sufficiently large.", "An application of Lemma REF leads to uniform control (over $\\tau _a,$ $\\tau _b$ ) of other stochastic quantities such as $\\big \\Vert \\sum _{i\\in n^w} \\varepsilon _ix_i^T\\big \\Vert _{\\infty },$ among others, which are necessary for the arguments to follow.", "These bounds are provided in Lemma REF in supplementary materials of this article.", "More simplistic versions of Lemma REF have also been used by [25] in the context of graphical models with missing data, and in [26] in the context of high dimensional change point regression with a single change point.", "To proceed further, recall that Step 1 of Algorithm 1, utilizes estimates of regression coefficients from Step 0, which are based on misspecified initial change points.", "Thus, in order to obtain variable selection and estimation results regarding the change point estimates of Step 1, we first need to analyze the rates of convergence of regression estimates of Step 0.", "This analysis in turn requires restricted eigenvalue conditions on the Gram matrix $\\sum _{i} x_ix_i^T/n,$ which is described in the following.", "For any deterministic set $S\\subseteq \\lbrace 1,2,...,p\\rbrace ,$ define the collection ${\\mathbb {A}}$ as, ${\\mathbb {A}}=\\Big \\lbrace \\delta \\in {\\mathbb {R}}^p;\\, \\Vert \\delta _{S^c}\\Vert _1\\le 3\\Vert \\delta _S\\Vert _1\\Big \\rbrace .$ Then, [6] define the lower restricted eigenvalue condition as, $\\inf _{\\delta \\in {\\mathbb {A}}} \\frac{1}{n}\\sum _{i=1}^n\\delta ^T x_ix_i^T\\delta \\ge c_u\\kappa \\Vert \\delta \\Vert _2^2,\\quad \\rm {for\\,\\,some\\,\\,constant}\\,\\, \\kappa >0.$ Our analysis shall require uniform versions of the condition (REF ), these are developed in Lemma REF .", "Additionally, we shall also 
require the set ${\\mathbb {A}}_2$ defined below, which is a slightly different version of the set ${\\cal A}$ defined in (REF ).", "${\\mathbb {A}}_2=\\Big \\lbrace \\delta \\in {\\mathbb {R}}^p; \\Vert \\delta _{S^c}\\Vert _1\\le 3\\Vert \\delta _S\\Vert _1+ c_u\\xi _{\\max }\\sqrt{s}\\Big \\rbrace .$ Finally, we also mention that other weaker versions of Condition (REF ) are also available in the literature, such as the compatibility condition of [7], and the $\\ell _q$ sensitivity of [13].", "In the setup of common random designs, it is also well established that condition (REF ) holds with probability converging to $1,$ see for e.g.", "[36], [40] for Gaussian designs and [33] for sub-Gaussian designs.", "The following lemma provides the plausibility of the uniform restricted eigenvalue conditions required in our analysis.", "Lemma 3.2 Let ${\\mathbb {A}}$ and ${\\mathbb {A}}_2$ be as given in (REF ) and (REF ) respectively, for $S$ as defined in Condition B.", "Let $u_n, v_n$ be non-negative sequences such that $\\log (u_n^{-1})=O(\\log p)$ and $v_n\\ge c_us\\log p\\big /n,$ for a suitably chosen constant $c_u>0.$ Then under Conditions B(i), B(ii) and D, and for $n$ sufficiently large, the following restricted eigenvalue conditions hold with probability at least $1-c_1\\exp (-c_2\\log p),$ $&(i)& \\inf _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\ge v_n}}\\inf _{\\delta \\in {\\mathbb {A}}} \\frac{1}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)}\\delta ^T x_ix_i^T \\delta \\ge c_uc_m v_n\\Vert \\delta \\Vert _2^2,\\nonumber \\\\&(ii)&\\inf _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\ d(\\tau _a,\\tau _b)\\ge v_n}}\\inf _{\\delta \\in {\\mathbb {A}}_2}\\frac{1}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)}\\delta ^Tx_ix_i^T\\delta \\ge c_uc_m v_n\\Vert \\delta \\Vert _2^2 - c_u c_m \\frac{\\xi _{\\max }^2s\\log p}{n},\\nonumber \\\\&(iii)&\\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\le u_n}}\\sup 
_{\\delta \\in {\\mathbb {A}}}\\frac{1}{n}\\sum _{i\\in n_w}\\delta ^T x_ix_i^T \\delta \\le c_uc_m\\Vert \\delta \\Vert _2^2 \\max \\Big \\lbrace \\frac{s\\log p}{n}, u_n\\Big \\rbrace .\\nonumber $ In the following, for any positive number $r>0$ and any $\\tau _a\\in {\\mathbb {R}},$ define the interval ${\\cal B}(\\tau _a,r)=\\big \\lbrace \\tau \\in {\\mathbb {R}};\\, d(\\tau _a,\\tau )\\le r\\big \\rbrace .$ The rates of the initial regression coefficient estimates of Step 0 of Algorithm 1 shall be a consequence of the following general result.", "Theorem 3.1 Suppose Condition B(i), B(ii), C(ii) and D. Let $u_n$ be any non-negative sequence satisfying $\\log (u_n^{-1})=O(\\log p)$ and let $Q^*$ be as given in (REF ).", "For any $\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}},$ let $\\hat{\\alpha }\\in {\\mathbb {R}}^{p}$ be the solution to the Lasso optimization $\\hat{\\alpha }=\\operatornamewithlimits{arg\\,min}_{\\alpha \\in {\\mathbb {R}}^p}\\Big \\lbrace \\frac{1}{n}Q^*(\\alpha ,\\tau _a,\\tau _b)+\\lambda _0\\Vert \\alpha \\Vert _1 \\Big \\rbrace \\nonumber .$ Additionally, for any fixed $j=1,...,N+1,$ let ${\\cal C}_j={\\cal C}_j^1\\cup {\\cal C}_j^2\\cup {\\cal C}_j^3\\cup {\\cal C}^4_j,$ where ${\\cal C}_j^1&=&\\Big \\lbrace \\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\,\\, d(\\tau _a,\\tau _b)>l_{\\min }; \\tau _a\\in {\\cal B}(\\tau _{j-1}^0,u_n), \\tau _b\\in {\\cal B}(\\tau _j^0,u_n)\\Big \\rbrace ,\\nonumber \\\\{\\cal C}_j^2&=&\\Big \\lbrace \\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\,\\, d(\\tau _a,\\tau _b)>l_{\\min }; \\tau _a\\ge \\tau _{j-1}^0, \\tau _b\\in {\\cal B}(\\tau _j^0,u_n)\\Big \\rbrace ,\\nonumber \\\\{\\cal C}_j^3&=&\\Big \\lbrace \\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\,\\, d(\\tau _a,\\tau _b)>l_{\\min }; \\tau _a\\in {\\cal B}(\\tau _{j-1}^0,u_n), \\tau _b\\le \\tau _j^0\\Big \\rbrace ,\\nonumber \\\\{\\cal C}_j^4&=&\\Big \\lbrace \\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\,\\, d(\\tau _a,\\tau _b)>l_{\\min }; \\tau 
_a\\ge \\tau _{j-1}^0, \\tau _b\\le \\tau _{j}^0,\\Big \\rbrace .\\nonumber $ Then choosing $\\lambda _0=c_uc_m\\max \\big \\lbrace \\sqrt{\\log p/n}, \\xi _{\\max }u_n\\big \\rbrace ,$ for $n$ sufficiently large we have for $j=1,...,N+1,$ $\\,\\,\\sup _{\\tau _a,\\tau _b\\in {\\cal C}_j}\\Vert \\hat{\\alpha }-\\beta ^0_{(j-1)}\\Vert _q\\le c_uc_m s^{\\frac{1}{q}} \\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\,\\, \\xi _{\\max }u_n\\Big \\rbrace \\Big /l_{\\min },\\quad q=1,2,\\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ Theorem REF can be used to obtain the rates of convergence of $\\hat{\\alpha }_{(j)},$ $j=0,...,\\check{N},$ obtained from Step 0 of Algorithm 1 and Algorithm 2.", "To state these rates explicitly we require the following notation.", "Let ${\\cal T}$ be as defined in Condition A, and define for each $j\\in {\\cal T}^c,$ $k_j=\\min \\Big \\lbrace k;\\,\\, 1\\le k\\le N+1;\\,\\, \\tau ^0_{k-1}<\\check{\\tau }_j\\le \\tau ^0_{k}\\Big \\rbrace .$ Simply stated, the index $k_j$ is the first index $k$ between $1,..,N+1,$ such that $\\tau _j$ lies between $\\tau ^0_{k-1}$ and $\\tau ^0_k.$ This index $k_j$ identifies the regression coefficient vector $\\beta ^0_{(k_j-1)}$ with its approximation $\\hat{\\alpha }_{(j-1)}$ for each $j\\in {\\cal T}^c,$ this notation is illustrated in Example REF .", "Under this notation, the following corollary provides the rates of convergence of the initial regression estimates.", "Corollary 3.1 Let $\\check{N}$ and $\\check{\\tau }\\in {\\mathbb {R}}^{\\check{N}}$ be any initializers satisfying Condition A and assume the conditions of Theorem REF .", "Also, let $\\hat{\\alpha }_{(j)},$ $j=0,...,N$ be the estimates obtained from Step 0 of Algorithm 1 and let ${k_j}$ be as defined in (REF ).", "Then, upon choosing $\\lambda _0=c_uc_m\\max \\big \\lbrace \\sqrt{\\log p/n}, \\xi _{\\max }{\\check{u}}_n\\big \\rbrace ,$ $q=1,2,$ and $n$ sufficiently large, we have the following.", "(i) For each fixed 
$j=1,...,N+1,$ $\Vert \hat{\alpha }_{(m_j-1)}-\beta ^0_{(j-1)}\Vert _q\le c_uc_m s^{\frac{1}{q}} \max \Big \lbrace \sqrt{\frac{\log p}{n}},\,\, \xi _{\max }{\check{u}}_n\Big \rbrace \Big /l_{\min },\nonumber $ with probability at least $1-c_1\exp (-c_2\log p).$ (ii) For each fixed $j\in {\cal T}^c,$ $\Vert \hat{\alpha }_{(j-1)}-\beta ^0_{(k_j-1)}\Vert _q\le c_uc_m s^{\frac{1}{q}} \max \Big \lbrace \sqrt{\frac{\log p}{n}},\,\,\xi _{\max }{\check{u}}_n\Big \rbrace \Big /l_{\min },\nonumber $ with probability at least $1-c_1\exp (-c_2\log p).$ Example 3.1 Suppose $N=2,$ $\check{N}=4$ and the chosen initial $\check{\tau }\in {\mathbb {R}}^{4}$ is in the orientation illustrated in Figure REF .", "Figure: A possible orientation of initializers $\check{\tau }\in {\mathbb {R}}^{4},$ where $N=2.$", "In this orientation of $\check{\tau },$ we have ${\cal T}=\lbrace m_0,m_1,m_2,m_3\rbrace =\lbrace 0,2,4,5\rbrace ,$ and $\lbrace k_1,k_3\rbrace =\lbrace 1,2\rbrace .$ Consequently, by Corollary REF , the initial regression estimates $\hat{\alpha }_{(0)},\hat{\alpha }_{(1)},...,\hat{\alpha }_{(4)}$ will be such that $\hat{\alpha }_{(0)},$ $\hat{\alpha }_{(2)}$ will approximate $\beta ^0_{(0)},$ $\beta ^0_{(1)}$ respectively, and $\hat{\alpha }_{(1)},$ $\hat{\alpha }_{(3)},\hat{\alpha }_{(4)}$ will approximate $\beta ^0_{(0)},\beta ^0_{(1)}$ and $\beta ^0_{(2)}$ respectively.", "We now turn our attention to the main goal of this article, i.e., establishing variable selection and estimation results for the change point estimates obtained from Step 1 of Algorithm 1.", "To achieve this, we require the following series of definitions.", "Let $\tau ^*$ be as defined in (REF ), and for any $\check{N}\ge 1,$ $\tau \in {\mathbb {R}}^{\check{N}}$ and $\alpha \in {\mathbb {R}}^{p(\check{N}+1)},$ define, ${\cal U}^*(\check{N},\alpha ,\tau 
)&=&Q(\\check{N},\\hat{\\alpha },\\tau )-Q(\\check{N},\\hat{\\alpha },\\tau ^*),\\\\{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )&=&{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )+\\mu \\sum _{j=1}^{\\check{N}}\\Big (\\Vert d(\\tau _j,\\tau _{j-1})\\Vert _0-\\Vert d(\\tau _j^*,\\tau _{j-1}^*)\\Vert _0\\Big ).\\nonumber $ From the definition (REF ), note that when $\\check{N}=N,$ we have that $\\tau ^*=\\tau ^0.$ Recall the sets of indices ${\\cal T}$ from Condition A, and the set ${\\cal T}^*$ from (REF ) for any $\\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}}.$ Note that the intersection ${\\cal T}^{*c}\\cap {\\cal T}^{c}$ comprises of all possible indices that may potentially lead to distinct interruptions between $\\tau _{h_0},\\tau _{h_1},...,\\tau _{h_{\\check{N}+1}}.$ Keeping this observation in mind, consider any non-negative sequences $u_n,v_n,$ any subset ${\\cal K}\\subseteq {\\cal T}^{*c}\\cap {\\cal T}^{c},$ define the collection, $\\hspace{5.69054pt}{\\cal G}(u_n,v_n,{\\cal K})=\\Big \\lbrace \\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}};\\,\\, \\tau _1\\le \\tau _2\\le ...\\le \\tau _{\\check{N}},\\,\\,\\hspace{56.9055pt}\\\\v_{n}\\le \\sum _{j=1}^N d(\\tau _{h_j},\\tau ^0_j)\\le u_{n},\\,\\, {\\rm and\\,\\,for\\,\\,each}\\,\\,l\\in {\\cal K},\\,\\,\\tau _{l}\\ne \\tau _{l-1}\\Big \\rbrace .\\hspace{-42.67912pt}\\nonumber $ The arguments $u_n,v_n$ capture information regarding the closeness of an arbitrary vector to the unknown change point vector in the components corresponding to the set ${\\cal T}^*.$ The set ${\\cal K}\\subseteq {\\cal T}^{*c}$ captures all distinct interruptions between any two components with indices in the set ${\\cal T}^*.$ The following example provides more insight to the construction of the set ${\\cal G}(u_n,v_n,{\\cal K}),$ and its defining arguments.", "Example 3.2 Consider the model (REF ) with $N=2.$ Let the initializer $\\check{\\tau }$ be chosen such that $\\check{N}=5,$ such that ${\\cal T}=\\lbrace 0,2,4,6\\rbrace ,$ 
and ${\\cal T}^c=\\lbrace 1,3,5\\rbrace .$ Then for any $\\tau =(\\tau _1,\\tau _2,\\tau _3,\\tau _4,\\tau _5)^T\\in \\bar{{\\mathbb {R}}}^{\\check{N}},$ satisfying $\\tau _1\\le \\tau _2...\\le \\tau _5,$ consider the following three scenarios.", "[leftmargin=*] a If $\\tau _1<\\tau _2<\\tau _3=\\tau _4<\\tau _5,$ then ${\\cal T}^*=\\lbrace 0,2,3,6\\rbrace ,$ and ${\\cal T}^{*c}=\\lbrace 1,4,5\\rbrace .$ Clearly, the set ${\\cal T}^{*c}\\cap {\\cal T}^c=\\lbrace 1,5\\rbrace $ form the distinct interruptions.", "Thus, assuming that $v_n\\le d(\\tau _2,\\tau _1^0)+d(\\tau _3,\\tau _2^0)\\le u_n,$ then $\\tau \\in {\\cal G}(u_n,v_n,{\\cal K}),$ with ${\\cal K}=\\lbrace 1,5\\rbrace .$ b If $\\tau _1=\\tau _2=\\tau _3=\\tau _4=\\tau _5,$ then ${\\cal T}^*=\\lbrace 0,1,3,6\\rbrace ,$ and ${\\cal T}^{*c}=\\lbrace 5\\rbrace .$ The potential interruptions can be due to induces in the set ${\\cal T}^{*c}\\cap {\\cal T}^c=\\lbrace 5\\rbrace ,$ however since in this case $\\tau _5=\\tau _4,$ hence ${\\cal K}=\\emptyset .$ c If $\\tau _1=\\tau _2<\\tau _3<\\tau _4=\\tau _5,$ then ${\\cal T}^*=\\lbrace 0,1,4,6\\rbrace ,$ ${\\cal T}^{*c}=\\lbrace 2,3,5\\rbrace .$ Potential interruptions can be due to induces in the set ${\\cal T}^{*c}\\cap {\\cal T}^c=\\lbrace 3,5\\rbrace .$ Since $\\tau _5=\\tau _4,$ thus in this case ${\\cal K}=\\lbrace 3\\rbrace $ captures the sole distinct interruption.", "A partial motivation for defining the collection ${\\cal G}(u_n,v_n,{\\cal K})$ is as follows.", "Recall from the results stated in (REF ), we intend to show that the number of finite and distinct components $\\tilde{N}$ of $\\hat{\\tau }$ obtained from Step 1 of Algorithm 1 matches exactly with the true number of change points $N,$ with high probability.", "The argument we develop to prove this result proceeds by showing that $\\hat{\\tau }$ must lie in ${\\cal G}(u_n,v_n,{\\cal K}),$ where ${\\cal K}=\\emptyset ,$ with high probability.", "Note that the latter statement shall infact 
imply the desired result.", "Finally, for any non-negative sequence $u_n,$ we also define the function, $F(u_n)={\left\lbrace \begin{array}{ll} 0 &\,\,{\rm if}\,\, u_n/l_{\min }\rightarrow 0\\ N &\,\, {\rm otherwise}\end{array}\right.", "}.$ The following lemma provides a uniform lower bound of the expression ${\cal U}(\check{N},\hat{\alpha },\tau )$ over the collection ${\cal G}:={\cal G}(u_n,v_n,{\cal K}),$ which holds with high probability.", "This result shall lie at the heart of the argument used to obtain the main results of this article regarding variable selection and estimation of change points from Algorithm 1.", "For the result to follow, let $r_n$ be the $\ell _2$ rate of the initial regression coefficient estimates provided in Corollary REF , i.e., $r_n= c_uc_m \sqrt{s}\max \big \lbrace \sqrt{\log p/n},\,\,\xi _{\max }{\check{u}}_n\big \rbrace \big /l_{\min }.$ Lemma 3.3 Suppose Conditions A, B(i), B(ii), C, and D hold.", "Let $\check{u}_n$ be as given in Condition A and choose $\lambda _0$ as prescribed in Corollary REF .", "Let $u_n,v_n$ be any non-negative sequences such that $\log (u_n^{-1})=O(\log p).$ Let ${\cal G}:={\cal G}(u_n,v_n,{\cal K})$ and $F(u_n)$ be as defined in (REF ) and (REF ), respectively.", "Additionally, let $\hat{\alpha }$ be the estimates obtained from Step 0 of Algorithm 1.", "Then for $n$ sufficiently large, we have the following lower bounds.", "(i) When $N=0,$ we have, $\inf _{\tau \in {\cal G}}{\cal U}(\check{N},\hat{\alpha },\tau )\ge \mu |{\cal K}|-c_uc_m|{\cal K}|r_n^2- c_uc_m|{\cal K}|\sqrt{\frac{s\log p}{n}}r_n,\nonumber $ with probability at least $1-c_1 (1\vee N)\exp (-c_2\log p).$ (ii) When $N\ge 1,$ and $v_n\ge c_uNs\log p/n$ (this result is also valid when $v_n=0$), we have, $\inf _{\tau \in {\cal G}}{\cal U}(\check{N},\hat{\alpha },\tau )&\ge & c_uc_mv_{n}+\mu |{\cal K}|-c_uc_mN\frac{\rho ^2s\log p}{n}-\frac{c_uc_m}{(1\vee \xi 
_{\\min }^2)}|{\\cal K}|r_n^2\\nonumber \\\\&&-\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}r_n^2u_{n}- \\frac{c_uc_m\\rho }{(1\\vee \\xi _{\\min })}\\sqrt{\\frac{s\\log p}{n}}\\sqrt{Nu_n}\\nonumber \\\\&&- \\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n- \\frac{\\mu }{(1\\vee \\xi _{\\min }^2)}F(u_n),\\nonumber $ with probability at least $1-c_1 (1\\vee N)\\exp (-c_2\\log p).$ The preceding results developed in this article provide the necessary machinery required to obtain the main results of this article regarding estimation of the number and locations of change points, and the regression coefficients obtained from Algorithm 1.", "The results to follow shall essentially say that, with high probability, Algorithm 1 exactly recovers the unknown number of change points and yields estimates of locations of change points that are in a near optimal neighborhood of the unknown change points.", "Additionally, Algorithm 2 yields regression coefficient estimates that are in an optimal neighborhood of the unknown regression coefficients.", "The following theorem provides the validity of the estimates $\\tilde{N}$ and $\\tilde{\\tau }$ obtained from Algorithm 1.", "Theorem 3.2 Assume Conditions A, B, C and D and choose $\\lambda _0$ as prescribed in Corollary REF .", "Let ${\\cal T},$ ${\\cal T}^*,$ and $\\hat{{\\cal T}}$ be as defined in Condition A, (REF ) and (REF ) respectively.", "Then upon choosing $\\mu =c_uc_m\\rho (s\\log p/n)^{1/k^*},$ with $k^*=2\\vee k,$ the estimates $\\tilde{N},$ and $\\tilde{\\tau }$ obtained from Algorithm 1 satisfy the following relations, $&(i)& {\\rm When}\\,\\, N\\ge 0,\\quad {\\rm we\\,\\,have}\\quad \\tilde{N}=N,\\nonumber \\\\&(ii)& {\\rm When}\\,\\, N\\ge 1,\\quad {\\rm we\\,\\,have}\\quad \\sum _{j=1}^{N} d(\\tilde{\\tau }_{j},\\tau _{j}^0)\\le c_uc_m N\\rho ^2\\frac{s\\log p}{n},\\nonumber $ with probability at least $1-c_1(1\\vee N)\\exp (-c_2\\log p),$ and for $n$ sufficiently large.", "The 
usefulness of Theorem REF is apparent.", "Despite initializing Algorithm 1 with an arbitrarily large $\check{N},$ the estimate $\hat{\tau }$ obtained from Step 1 of Algorithm 1 will have exactly $N$ finite and distinct components with high probability (all other components will collapse to one of the remaining $N$ distinct components or to negative infinity).", "Additionally, the components of $\hat{\tau }$ that are identified as finite and distinct will lie in a near optimal neighborhood of the true change point vector $\tau ^0.$ Recall that the estimate $\hat{\tau }$ from Step 1 of Algorithm 1 is computed based on regression estimates from Step 0 that may be much slower than optimal in their rate of convergence.", "Yet, $\hat{\tau }$ is near optimal in its rate of convergence.", "It is also important to remember that this process is carried out in a single step and not by an iterative procedure, thereby also giving the algorithm its computational advantage.", "The following corollary provides the rate of convergence of the regression coefficient estimates obtained from Algorithm 2.", "Corollary 3.2 Suppose the conditions of Theorem REF hold and for each $j=0,...,N,$ choose $\lambda _{1j}=c_uc_m\max \big \lbrace \sqrt{\log p/n},\,\, \xi _{\max }(|\tilde{\tau }_j-\tau ^0_{j}|\vee |\tilde{\tau }_{j+1}-\tau ^0_{j+1}|)\big \rbrace .$ Let $N\ge 1,$ and let $\tilde{\alpha }_{(j)},$ $j=0,...,N$ be the estimates of the regression coefficients obtained from Algorithm 2.", "Then, for $n$ sufficiently large and $q=1,2,$ we have the following bound, $\sum _{j=0}^{N}\Vert \tilde{\alpha }_{(j)}-\beta ^0_{(j)}\Vert _q\le \nonumber c_uc_mN\frac{s^{\frac{1}{q}}}{l_{\min }}\max \Big \lbrace \sqrt{\frac{\log p}{n}},\,\, \xi _{\max }\rho ^2\frac{s\log p}{n}\Big \rbrace ,\nonumber $ which holds with probability at least $1-c_1(1\vee N)\exp (-c_2\log p).$ To conclude this section, we present the following corollary that specifies 
conditions under which near optimality of these rates is observed, as described in (REF ).", "Corollary 3.3 Suppose the conditions of Theorem REF and Corollary REF hold.", "Then assuming that $\rho ^2=O(1),$ we have the following relations with probability at least $1-c_1(1\vee N)\exp (-c_2\log p).$ (i) For $N\ge 0,$ $\tilde{N}=N.$   (ii) For $N\ge 1,$ and $n$ sufficiently large, $\sum _{j=1}^{N} d({\tilde{\tau }}_{j},\tau _{j}^0) \le c_uc_m Ns\log p\big /n.$   (iii) Additionally, assuming that $N\le 1\vee c_{u1},$ $l_{\min }\ge c_{u2},$ and that $\xi _{\max }s\sqrt{\log p/n}=O(1),$ we have, $\sum _{j=0}^{N}\Vert \tilde{\alpha }_{(j)}-\beta ^0_{(j)}\Vert _q\le \nonumber c_uc_mNs^{1/q}\sqrt{\log p/n},$ for $q=1,2$ and $n$ sufficiently large." ], [ "Implementation and numerical results", "In this section we discuss the implementation of the proposed methodology and provide Monte Carlo simulation results for it.", "First, as briefly stated in Section , for any fixed $\alpha \in {\mathbb {R}}^{p(\check{N}+1)},$ the loss function $Q(\check{N},\alpha ,\tau )$ is a step function of $\tau ,$ with step changes occurring only at points of the $\check{N}$ dimensional finite grid $\lbrace -\infty ,w_1,...,w_n,\infty \rbrace ^{\check{N}}.$ We illustrate this fact in Figure REF , for the special case where $N=\check{N}=1.$ To proceed with the implementation of Algorithm 1, first note that Step 1 of Algorithm 1 requires $\Phi (\cdot )$ to be known (via the distance function $d(\cdot )$ ), which is typically not the case in practice.", "However, also note that the function $d(\cdot )$ appears in the optimization of Step 1 only via the $\ell _0$ norm, $\Vert d(\tau _{j-1},\tau _j)\Vert _0.$ Observe that $\Vert d(\tau _{j-1},\tau _j)\Vert _0=\Vert \tau _{j-1}-\tau _j\Vert _0,$ provided we implicitly define the additional conventions $\Vert \infty -\infty \Vert _0:=0,$ and $\Vert \infty -a\Vert _0:=1,$ for any 
$a<\\infty ,$ in the implementation.", "Thus, the term $\\Vert d(\\tau _{j-1},\\tau _j)\\Vert _0$ can be replaced by $\\Vert \\tau _{j-1}-\\tau _j\\Vert _0$ without altering the estimator.", "Alternatively, to avoid this notational complexity in coding the estimator, a new surrogate variable $w_i^*$ can be created which follows a pseudo uniform distributionHere we refer to a pseudo uniform distribution in the sense typically used in MCMC methods, where the realizations $w_i^*,....w_n^*$ reproduce the behavior of $n$ realizations of a ${\\cal U}(0,1]$ distribution, see, Definition 2.1 of [38]., $w_i^*\\sim {\\cal U}(0,1],$ while preserving the data structure.", "This can be done as follows, let $w_{(1)},..w_{(n)}$ represent the order statistics of $w_i^{\\prime }s,$ and construct $w_{(i)}^*=i/n,$ $i=1,...,n.$ Since $w_i$ 's are independent realizations, the surrogate $w_i^*\\sim {\\cal U}(0,1]$ in the sense described above.", "In this case, we can reparameterize the model (REF ) to an ordinary change point regression model as follows.", "First, re-order all observations with respect to the ordered surrogate change inducing variable $w_{(1)}^*,...,w_{(n)}^*.$ Then we can express model (REF ) as, $\\hspace{14.22636pt}y_{i}= x_{i}^T\\beta _{(j-1)}^0 +\\varepsilon _{i},\\quad \\tau _{j-1}^{\\dagger }<i/n\\le \\tau _j^{\\dagger },\\,\\,j=1,...,N+1.$ Here, $\\tau _j^{\\dagger },$ $j=1,...,N$ are reparameterized change point parameters in the ${\\rm Supp}(w^*)=(0,1],$ and $\\tau _0^{\\dagger }=0,\\tau _{N+1}^{\\dagger }=1.$ In view of this reparameterization, together with the step behavior of the function $Q(\\check{N}, \\hat{\\alpha },\\cdot ),$ we can now equivalently implement Algorithm 1a, in place of Algorithm 1.", "Figure: NO_CAPTIONThe change made in Algorithm 1a (in comparison to Algorithm 1) is in Step 1 of the procedure.", "First instead of searching over the extended Euclidean space, we are instead searching over a finite multi-dimensional grid.", "Second, 
owing to the creation of the surrogate change inducing variable, $w_i^*\sim {\cal U}(0,1],$ we have $d(\tau _{j-1},\tau _j)=|\tau _{j-1}-\tau _j|.$ The only difference is that Algorithm 1a estimates the parameters of the reparameterized model (REF ) instead of (REF ).", "The change point parameters of model (REF ) can be easily obtained from those of (REF ) by reverting to the corresponding quantiles.", "Observe that Step 0 of Algorithm 1a and Step 1 of Algorithm 2 are ordinary Lasso optimizations; these can be carried out by several methods available in the literature, e.g., coordinate or gradient descent algorithms (see, e.g., [15]), or interior point methods for linear optimization under second order conic constraints (see, e.g., [27]).", "On the other hand, the implementation of Step 1 of Algorithm 1a is a nontrivial task.", "Keeping in mind that this step is a discrete optimization over a finite state space, we propose a simulated annealing approach for this purpose; the method is discussed in the following subsection." 
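The reordering by $w$ and the surrogate construction described above amount to a simple rank transform. The following minimal sketch (in Python with NumPy) illustrates the construction $w_{(i)}^*=i/n$ and the corresponding reordering of the data; the function name `surrogate_reorder` and the array-based interface are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def surrogate_reorder(y, X, w):
    # Sort the sample by the change inducing variable w, so that after
    # sorting, observation i corresponds to the order statistic w_(i).
    # Hypothetical helper: y is (n,), X is (n, p), w is (n,) without ties.
    order = np.argsort(w)
    n = len(w)
    # Surrogate pseudo uniform values w*_(i) = i/n on (0, 1].
    w_star = np.arange(1, n + 1) / n
    return y[order], X[order], w_star

# After this reordering, a change point tau^dagger in (0, 1] simply marks a
# fraction of the sorted sample, and d(tau_a, tau_b) reduces to |tau_a - tau_b|.
y = np.array([10.0, 20.0, 30.0])
X = np.arange(6, dtype=float).reshape(3, 2)
w = np.array([0.3, 0.1, 0.2])
y_s, X_s, w_star = surrogate_reorder(y, X, w)
```

An estimated change point $\hat{\tau }_j^{\dagger }$ on the surrogate scale can then be mapped back to the scale of $w$ as the corresponding empirical quantile of the observed $w_i$'s, which is the "reverting to the corresponding quantiles" step mentioned above.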
], [ "Implementation of ", "Simulated annealing is a well known variant of the Metropolis Hastings algorithm; see, e.g., Chapters 5 and 7 of the monograph [38].", "This algorithm is especially useful for finite state space optimizations, and its stochastic nature endows it with its most desirable feature, namely the ability to escape local optima while visiting only a small fraction of the states of the state space under consideration.", "First, we require another reparameterization of Step 1 of Algorithm 1a.", "Let $d^{\dagger }=(d_1^{\dagger },...,d_{N}^{\dagger })^T\in {\mathbb {R}}^N$ be parameters of the model (REF ), such that $n\tau _1^{\dagger }=d_1^{\dagger },\,n\tau _2^{\dagger }=d_1^{\dagger }+d_2^{\dagger },...,n\tau _{N}^{\dagger }=\sum _{j=1}^{N}d_j^{\dagger }.$ Then Step 1 of Algorithm 1a can equivalently be performed by searching for an optimizer $\hat{d}=(\hat{d}_1,...,\hat{d}_{\check{N}})^T$ in the state space $\lbrace 0,1,...,n\rbrace ^{\check{N}},$ as follows, $\hat{d}=\operatornamewithlimits{arg\,min}_{{d\in \lbrace 0,1,2,...,n\rbrace ^{\check{N}};\\ \sum _{j=1}^{\check{N}}d_j\le n}}\Big \lbrace Q(\check{N}, \hat{\alpha }, \tau )+ \mu \sum _{j=1}^{\check{N}}\Vert \frac{d_j}{n}\Vert _0 \Big \rbrace ,\qquad \mu >0,$ where $\tau =(\tau _1,...,\tau _{\check{N}})^T,$ with $n\tau _j=\sum _{k=1}^j d_k,$ $j=1,...,\check{N}.$ Finally, the change point estimates of Step 1 of Algorithm 1a can be recovered by computing $\hat{\tau }=(\hat{d}_1,\hat{d}_{1}+\hat{d}_2,...,\sum _{j=1}^{\check{N}} \hat{d}_j)^T\big /n.$ We adopt simulated annealing in the context of the optimization (REF ).", "For efficient implementation of this procedure, one requires a carefully constructed proposal density that takes into account the special features of the problem under consideration.", "Specifically, in our setup we construct a proposal density which encourages the algorithm to visit sparse states of the components of the vector $d,$ over which the 
optimization (REF ) is to be performed.", "Construction of proposal density: In the optimization step of (REF ), the finite state space under consideration is $\\lbrace 0,1,...,n\\rbrace ^{\\check{N}}.$ Additionally we intend to construct a proposal density that encourages the algorithm to visit $\\check{N}$ dimensional states with sparse solutions.", "For this purpose, let $M\\ge 1$ be the total number of iterations of the simulated annealing algorithm to be performed, and for any $x=(x_1,...,x_{\\check{N}})^T\\in \\lbrace 0,1,...,n\\rbrace ^{\\check{N}},$ let $g(x)=\\big (g_1(x_1),...,g_{\\check{N}}(x_{\\check{N}})\\big )^T$ be the $\\check{N}$ dimensional componentwise density functions, where each component is a discrete uniform density with an inflated probability at zero, i.e., for each $i=1,...,M,$ $j=1,...,\\check{N},$ define, $g_j(x):=g_{j}(x_j;d;b;\\pi _{ij})={\\left\\lbrace \\begin{array}{ll}\\pi _{ij}, & x=0 \\\\{\\rm discrete Uniform}, & x_j\\in \\lbrace l,u\\rbrace ,\\end{array}\\right.", "}$ where $d=(d_1,...,d_{\\check{N}})^T\\in \\lbrace 0,...,n\\rbrace ^{\\check{N}},$ $b\\in \\lbrace 0,...,n\\rbrace $ and $\\pi _{ij}\\in [0,1]$ are parameters of this proposal distribution.", "The lower and upper limits are $l=\\max \\lbrace 0,d_j-b\\rbrace ,$ and $u=\\min \\lbrace n-\\sum _{j=1}^{j-1}d_k,d_j+b\\rbrace .$ Here $b$ and $\\pi _{ij}$ 's are user chosen parameters, where higher values of $b$ allow for larger jumps between states and $\\pi _{ij}$ 's are zero inflation parameters that encourage sparsity in the $j^{th}$ component.", "Lastly, the parameter $d$ is the $\\check{N}$ dimensional centering parameter, i.e., realizations from this proposal are roughly centered around the components of $d.$ Note that the limits of the discrete uniform part of the proposal enforce the restriction that any candidate state $d^{\\prime }$ generated by the proposal satisfies $\\sum _{j=1}^{\\check{N}}d_j^{\\prime }\\le n,$ which is required for the optimization (REF 
).", "Next, we discuss the choice of the zero inflation parameters $\pi _{ij}$'s in the proposal densities.", "This zero inflation is introduced in the proposal in order to allow the algorithm to visit all combinations of sparse states of the components of $d.$ For this purpose we design a zero inflation mechanism changing with iteration $i,$ as illustrated in Figure REF (for the case $\check{N}=3$ ).", "The zero inflation parameter $\pi _{ij}$ for each component $j$ of the proposal is constructed to follow a sine curve oscillating in the interval $(0,1)$ over the iterations $i.$", "Critically, the sine curve corresponding to each component is chosen such that it has a different period of oscillation in comparison to all other components.", "These varying periods of oscillation create all possible sparsity patterns among the components of the candidate $d,$ i.e., given a large number of periods of the sine curves, any sparse combination of the components of $d$ will be generated at some iteration between $1,...,M.$ More specifically, for each iteration $i=1,...,M,$ we set $\pi _{ij}=0.475\sin \Big (\frac{i2\pi }{Ma_j}\Big )+0.475,$ where $1/a_j$ is the number of periods that the sine curve of the $j^{th}$ component completes over the iterations $1,...,M.$", "This completes the necessary requirements to implement simulated annealing.", "For completeness, we state in Algorithm 3 the simulated annealing algorithm in the context of the optimization (REF ).", "Figure: Left panel: step behavior of $Q(\check{N},\hat{\alpha },\tau )$ over $\tau \in {\rm Supp}(w),$ evaluated over the grid of points $\tau \in \lbrace 0,0.02,...,1\rbrace .$ Here $w_i\sim {\cal U}(0,1),$ $n=7,$ $N=1,$ $\check{N}=1,$ $p=3,$ $\beta ^0_{(0)}=(1,0,0)^T,$ $\beta ^0_{(1)}=(1,1,0)^T,$ $\hat{\alpha }_{(0)}=(0.41,0,0)^T,$ $\hat{\alpha }_{(1)}=(0.13,0.92,0)^T,$ $w_0=0,$ $w_8=1.$", "Observe that step changes occur at the $w_i$'s.", "Right panel: construction of zero inflation for the proposal density () with $\check{N}=3.$", "The zero inflation probability $\pi _{ij}$ is controlled via a sine curve, where the period of each sine curve is different, thereby producing candidate states $d$ with all possible sparsity patterns.", "Here $\Delta h_i=h(d^i)-h(d),$ where $h(d)=Q(\check{N}, \hat{\alpha }, \tau )+ \mu \sum _{j=1}^{\check{N}}\Vert \frac{d_j}{n}\Vert _0.$ Also, $T_i,$ $i=1,...,M,$ represents a user chosen decreasing sequence of positive numbers, which is commonly referred to as the `temperature function' of simulated annealing.", "An illustration of the evolution of the simulated annealing algorithm with the above described proposal density for the optimization (REF ) is provided in Figure REF .", "The following subsection provides numerical results obtained via Monte Carlo simulations of the methodology described here.", "Figure: Illustration of the evolution of simulated annealing for the optimization () to obtain $\hat{d}=(\hat{d}_1,...,\hat{d}_{\check{N}})^T.$ Here $n=375,$ $p=50,$ $N=2,$ $\check{N}=5,$ $\mu =0.25.$ The true change points are located at $\tau _1^0=125$ and $\tau _2^0=250.$ The proposal density is that in (), with $a_j=1\big /\big (250+25(j-1)\big ),$ $j=1,...,\check{N}.$ The temperature function is set to $T_i=1/\log (1+i),$ $i=1,...,10000.$ Observe that all but two of the components converge to zero, while $\hat{d}_1$ converges to 127 and $\hat{d}_1+\hat{d}_2+\hat{d}_3$ converges to 257, which are near the locations of the true change point parameters.",
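To make the proposal mechanism and the annealing loop described above concrete, the following is a minimal Python sketch (the paper's computations were performed in R; the function names and the toy objective in the usage note are illustrative assumptions, not the authors' implementation):

```python
import math
import random

def zero_inflation(i, j, M, a):
    # pi_{ij} = 0.475*sin(i*2*pi/(M*a_j)) + 0.475, oscillating within (0, 0.95]
    return 0.475 * math.sin(i * 2 * math.pi / (M * a[j])) + 0.475

def propose(d, n, b, pi):
    # Componentwise zero-inflated discrete uniform proposal: with probability
    # pi[j] set d'_j = 0, otherwise draw uniformly from {l, ..., u}; the upper
    # limit keeps the running sum of the candidate d' at most n.
    d_new = []
    for j, dj in enumerate(d):
        lo = max(0, dj - b)
        hi = min(n - sum(d_new), dj + b)
        if random.random() < pi[j] or hi < lo:
            d_new.append(0)
        else:
            d_new.append(random.randint(lo, hi))
    return d_new

def anneal(h, d0, n, b, M=10000, a=None, temp=1.25, seed=0):
    # Simulated annealing minimisation of h over {0,...,n}^N with the
    # zero-inflated proposal and temperature T_i = 1/(temp*log(1+i)).
    random.seed(seed)
    N = len(d0)
    if a is None:  # 1/a_j periods: 250, 275, 300, ... oscillations
        a = [1.0 / (250 + 25 * j) for j in range(N)]
    d, best = list(d0), list(d0)
    for i in range(1, M + 1):
        pi = [zero_inflation(i, j, M, a) for j in range(N)]
        cand = propose(d, n, b, pi)
        dh = h(cand) - h(d)
        T_i = 1.0 / (temp * math.log(1 + i))
        if dh <= 0 or random.random() < math.exp(-dh / T_i):
            d = cand
        if h(d) < h(best):
            best = list(d)
    return best
```

In the paper, $h$ would be the penalised objective $Q(\check{N},\hat{\alpha },\tau )+\mu \sum _{j}\Vert d_j/n\Vert _0$; any toy objective can be plugged in to observe the sparsity-seeking behavior induced by the zero-inflated proposal.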
"Starting value in the algorithm: $d=(94,56,56,56,56),$ i.e., $\tau =(94,150,206,262,318).$ Remark 4.1 The construction of the surrogate change inducing variable $w_i^*,$ i.e., the reparameterization of (REF ) and (REF ), is required only to avoid coding complexity of the algorithm.", "In general, a similar simulated annealing approach can be easily developed for directly implementing Step 1 of Algorithm 1 on the state space $\lbrace -\infty ,w_1,...,w_n,\infty \rbrace ^{\check{N}};$ however, to avoid redundancy, these details are omitted." ], [ "Numerical Results", "The main objective of the Monte Carlo simulations of this section is to assess the empirical performance of Algorithm 1 of the proposed method, which performs the detection and estimation of change points in the assumed model.", "We do not perform simulations for Algorithm 3 of the process, since this step is an ordinary lasso optimization whose empirical validity has been established extensively in the literature.", "In view of the reparameterization described earlier in this section, we consider the data generating process given in (REF ).", "The r.v.'s $\varepsilon _i,$ $w_i$ and $x_i$ are drawn independently satisfying $\varepsilon _i\sim {\cal N}(0,\sigma _{\varepsilon }^2),$ and $x_i\sim {\cal N}(0,\Sigma ).$ Here, $\Sigma $ is a $p \times p$ matrix with elements $\Sigma _{ij}=\rho ^{|i-j|},$ $i,j=1,...,p.$ We set $\sigma _{\varepsilon }=1$ and $\rho = 0.5.$ The number of change points $N$ is set to one of $\lbrace 0,1,...,4\rbrace ,$ i.e., we consider one to five segment models.", "The case of $N=0,$ where no change points are assumed, is only a detection problem, as opposed to the remaining cases where the objective is both detection and estimation of change points.", "The change point parameters are assumed to be equally spaced in $(0,1);$ specifically, we set $\tau _1^{\dagger }=\frac{1}{N+1},\tau
_2^{\\dagger }= \\frac{2}{N+1},...,\\tau _N^{\\dagger }=\\frac{N}{N+1}.$ Simulations are performed for all combinations of the parameters $p\\in \\lbrace 50,175,300\\rbrace ,$ and $n\\in \\lbrace 250,375,500,625\\rbrace .$ Note that the total number of parameters to be estimated for each combination of $(p,N)$ is $p(N+1)+N.$ The regression coefficients are set in the following manner.", "The even numbered regression coefficient vectors $\\beta ^0_{2j}=(1_{1\\times 5}, 0_{1\\times (p-5)})^T,$ for all $j\\ge 0,$ such that $2j\\le N,$ and the odd numbered coefficient vectors are chosen as $\\beta ^0_{2j+1}=(0_{1\\times 5},1_{1\\times 5}, 0_{1\\times (p-5)})^T,$ for all $j\\ge 0,$ such that $2j+1\\le N.$ Here $0_{1\\times 5}=(0,...,0)_{1\\times 5},$ $1_{1\\times 5}=(1,...,1)_{1\\times 5},$ and $0_{1\\times (p-5)}=(0,...,0)_{1\\times (p-5)}.$ The initial number of change points assumed in Step 0 of Algorithm 1a are set to $\\check{N}=4,5,6,7$ for $n=250,375,500,625,$ respectively.", "Finally, the parameters of the simulated annealing optimization are chosen as follows.", "The total number of iterations performed under simulated annealing is set to $M=10,000,$ the temperature function (over iterations) is set to $T_i=1\\big /(temp*\\log (1+i)),$ $temp=1.25,$ $i=1,...,M.$ The period of sine curves constructed for zero inflation of the proposal density described in Section REF is chosen as $a_j=1\\big /\\big (250+25*(j-1)\\big ),$ $j=1,...,\\check{N},$ i.e., the first component of proposal completed 250 oscillations within the $M$ iterations and each following component has 25 more oscillations than the previous.", "All results are based on 100 monte carlo repetitions.", "Computations are performed in the software R, [35].", "All lasso optimizations are performed with the R package `glmnet', developed by [11].", "For reporting our results, we compute monte carlo approximations of the following metrics.", "On the detection of change points: Probability of match ${\\rm 
(PrM)}= E{\\bf 1}[\\tilde{N}=N],$ Probability of exceeding ${\\rm (PrE)}= E{\\bf 1}[\\tilde{N}>N],$ and Probability of lower number ${\\rm (PrL)}= E{\\bf 1}[\\tilde{N}<N],$ ${\\rm Bias}(\\tilde{N})= E(\\tilde{N}-N),$ ${\\rm RMSE}(\\tilde{N})=\\big (E(\\tilde{N}- N)^2\\big )^{\\frac{1}{2}}.$ On the estimation of location of change points conditioned on correct recovery of the number of change points: ${\\rm Bias(L)}=\\Vert E\\big (\\tilde{\\tau }-\\tau \\big |\\tilde{N}=N\\big )\\Vert _2,$ and ${\\rm RMSE(L)}=\\Vert \\big (E\\big ((\\tilde{\\tau }-\\tau )^2\\big |\\tilde{N}=N\\big )\\big )^{\\frac{1}{2}}\\Vert _1.$ Choice of tuning parameters $\\lambda _0,\\lambda _1,\\mu $: For lasso optimization of Step 0, the regularization parameter $\\lambda _0$ is chosen via a 5-fold cross validation (performed internally by the R package `glmnet').", "Next, we use a BIC-type criteria to choose the regularizer $\\mu $ of Step 1 of Algorithm 1a.", "Specifically, let $\\hat{d}(\\mu )$ represent the solution of (REF ) and $\\hat{\\tau }(\\mu )$ be the corresponding change point solution, then we choose $\\mu $ as argument that minimizes the criteria, ${\\rm BIC}(\\mu )= \\log \\big (Q\\big (\\check{N},\\hat{\\alpha },\\hat{\\tau }(\\mu )\\big )\\big ) + c\\frac{\\Vert \\hat{d}(\\mu )\\Vert _0\\log n}{n}$ Here we set $c=10,$ which performs well in all empirically examined cases.", "The simulation results for $p=50,175,300$ are reported in Table REF , Table REF and Table REF , respectively.", "The results are encouraging and supportive of our theoretical findings.", "For nearly all examined cases, in $\\approx 80\\%$ of all simulations, the estimated number of change points match exactly with the unknown number of change points.", "In cases where there is mismatch between $\\tilde{N}$ and $N,$ it can be approximated from ${\\rm Bias}(\\tilde{N})$ and ${\\rm RMSE}(\\tilde{N}),$ that the proposed procedure misses the unknown number of change points by $\\approx 1$ change point.", "In 
these cases of mismatch, it is also observed that under the given settings, $\tilde{N}$ exceeds $N,$ indicating that the BIC selection criterion can be further tightened by increasing the value of the constant chosen in its definition.", "Additionally, it is also observed from ${\rm Bias(L)}$ and ${\rm RMSE(L)}$ that the components of $\tilde{\tau }$ precisely converge toward the locations of the unknown change points; however, as expected, some deterioration in accuracy is observed as $p$ increases.", "Table: Numerical results on the performance of Algorithm 1 in estimating the number of change points $N$ and their locations $\tau ^0,$ when $p=50.$", "Table: Numerical results on the performance of Algorithm 1 in estimating the number of change points $N$ and their locations $\tau ^0,$ when $p=175.$", "Table: Numerical results on the performance of Algorithm 1 in estimating the number of change points $N$ and their locations $\tau ^0,$ when $p=300.$" ], [ "Discussion", "Dynamic high dimensional regression models, which are characterized via change points, provide an intuitive modelling approach that allows for dynamic behavior of parameters.", "These models allow for much greater versatility of the assumed model, and consequently a greater fidelity to the data structure.", "These models have been sparsely used in applications due to gaps in theoretical understanding and a lack of availability of efficient methods for estimation of parameters for such models.", "This article serves to fill this void.", "We develop a novel methodology for the detection and estimation of multiple change points in high dimensional linear regression models.", "The proposed method is theoretically sound and empirically more efficient than methods currently available in the literature.", "The idea of arbitrary segmentation is not restricted to regression models and the proposed methodology could potentially be developed for other relevant models such as dynamic networks.", "Two
technical questions remain unanswered.", "First, what is the optimal rate of regularized change point estimates in a high dimensional setting such as the one considered in this article.", "Second, is there theoretical validity of a BIC-type criterion for the selection of the regularization parameter in the $\ell _0$ regularization considered in this article.", "These questions are left open for further investigation.", "Supplementary Materials for “Detection and estimation of parameters in high dimensional multiple change point regression models via $\ell _1\big /\ell _0$ regularization and discrete optimization\"" ], [ "Proofs of Section 3", " [Proof of Lemma REF ] We begin by proving Part (i) of this lemma.", "Since $\bar{{\mathbb {R}}}$ is compact under the metric $\Phi (\cdot ),$ divide the space $\bar{{\mathbb {R}}}$ into $l=1/(2u_n)$ closed intervals (disjoint except at the boundaries), each of length $2u_n.$ Let $\tau _1,...,\tau _{l}$ be fixed points which represent the centres of these intervals.", "We shall show that the following bound holds, $\max _{j=1,...,l}\sup _{{\tau \in \bar{{\mathbb {R}}};\\\tau \in {\cal B}(\tau _j,u_n)}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\le c_u\max \Big \lbrace \frac{\log p}{n}, u_n\Big \rbrace ,$ with probability at least $1-c_1\exp (-c_2\log p),$ for $n$ sufficiently large.", "Assuming (REF ), observe that any $\tau _a,\tau _b\in \bar{{\mathbb {R}}}$ satisfying $d(\tau _a,\tau _b)\le u_n,$ must lie in at most two adjacent intervals ${\cal B}(\tau _j,u_n)\cup {\cal B}(\tau _{j+1},u_n),$ for some $j=1,...,l-1.$ This implies that $\sup _{{\tau _a,\tau _b\in \bar{{\mathbb {R}}};\\d(\tau _a,\tau _b)\le u_n}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau _a,\tau _b)\le 2\max _{j=1,...,l}\sup _{{\tau \in \bar{{\mathbb {R}}};\\\tau \in {\cal B}(\tau _j,u_n)}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\le c_u\max \Big \lbrace \frac{\log p}{n}, u_n\Big
\\rbrace ,\\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ Thus to prove part (i), it only remains to prove (REF ), this is done in the following.", "Consider a fixed $j\\in \\lbrace 1,...,l\\rbrace $ and let $\\tau _a>\\tau _j$ be a boundary point on the right of $\\tau _j,$ such that $d(\\tau _j,\\tau _a)=u_n.$ Then note that $p_n:=E\\zeta _i(\\tau _a,\\tau _j)=d(\\tau _a,\\tau _j).$ Since $\\zeta _i(\\tau _a,\\tau _b),$ $i=1,...,n$ are Bernoulli r.v.", "'s, hence for any $s>0,$ the moment generating function is given by $E\\exp \\big (s\\zeta _i(\\tau _a,\\tau _j)\\big )=q_n+p_n\\exp (s),$ where $q_n=1-p_n.$ Applying the Chernoff Inequality, we obtain, $P\\big (\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j) > t+np_n\\big )&=&P\\big (e^{\\sum _{i=1}^ns\\zeta _i(\\tau _a,\\tau _j)}> e^{(st+snp_n)}\\big )\\nonumber \\\\&\\le & e^{-s(t+np_n)}[q_n+p_n e^s]^n.\\nonumber $ Now, in order to show, $\\frac{1}{n}\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j)\\le c_u\\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace $ with probability at least $1-c_1\\exp (-c_2\\log p),$ we divide the argument into two cases.", "First, when $d(\\tau _a,\\tau _j)\\ge c\\log p/n,$ for some constant $c>0.$ In this case, upon choosing $t=nd(\\tau _a,\\tau _j)$ we obtain, $P\\big (\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j)> 2nd(\\tau _j,\\tau _a)\\big ) \\le e^{[- 2snd(\\tau _a,\\tau _j)]}[1+(d(\\tau _a,\\tau _j))(e^s-1)]^n.\\nonumber $ Using the deterministic inequality $(1+x)^k\\le \\exp (kx),$ for any $k,x>0,$ we obtain that $P\\big (\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j)> 2nd(\\tau _a,\\tau _j)\\big ) \\le e^{- 2snd(\\tau _a,\\tau _j)}e^{(e^{s}-1)nd(\\tau _a,\\tau _j)} \\le e^{-c_2\\log p}.\\nonumber $ The inequality to the right follows by choosing $s=\\log 2,$ which maximizes the function $f(s)=2s-e^s+1$ and provides a positive value at the maximum, and by using the restriction $d(\\tau _a,\\tau _j)\\ge c\\log p/n.$ Next, when $d(\\tau _a,\\tau _)< c\\log p/n.$ Here 
choose $t=c\\log p$ to obtain, $P\\big (\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j)> c\\log p+ nd(\\tau _a,\\tau _j)\\big )\\hspace{108.405pt}\\nonumber \\\\\\le e^{[-sc\\log p - snd(\\tau _a,\\tau _j)]}[1+(d(\\tau _a,\\tau _j))(e^s-1)]^n.$ Calling upon the inequality $(1+x)^k\\le \\exp (kx),$ for any $k,x>0,$ we can bound the RHS of (REF ) from above by $\\exp \\big [-s c\\log p+(e^s-s-1) \\log p\\big ].$ Now $s=\\log (1+c)$ provides a positive value at the maximum, since it maximizes $f(s)=(1+c)s-e^s+1.$ Then for any $c>0,$ we obtain, $P\\big (\\sum _{i=1}^n\\zeta _i(\\tau _a,\\tau _j)> c\\log p+ nd(\\tau _a,\\tau _j)\\big ) &\\le & e^{-c_2\\log p}.", "\\nonumber $ Upon combining both cases, (REF ) follows by noting $d(\\tau _a,\\tau _j)=u_n.", "$ Now repeating the same argument for a fixed boundary point $\\tau _b$ on the left of $\\tau _j,$ such that $d(\\tau _b,\\tau _j)=u_n,$ and applying a union bound we obtain, $\\max _{\\tau \\in \\lbrace \\tau _a,\\tau _b\\rbrace }\\frac{1}{n}\\sum _{i=1}^n\\zeta _i(\\tau ,\\tau _j)\\le c_u\\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace $ with probability at least $1- c_1\\exp (-c_2\\log p).$ In order to show that (REF ) holds uniformly over ${\\cal B}(\\tau _j,u_n).$ For this, we begin by noting that for any $\\tau \\in {\\cal B}(\\tau _j,u_n),$ where $\\tau >\\tau _j$ we have $\\zeta _i(\\tau ,\\tau _j)={\\bf 1}[w_i\\in (\\tau _j,\\tau )]\\le {\\bf 1}\\big [w_i\\in (\\tau _j,\\tau _a)\\big ].$ Similarly for any $\\tau \\in {\\cal T}(\\tau _j,u_n)$ where $\\tau <\\tau _j$ we have $\\zeta _i(\\tau )\\le {\\bf 1}\\big [w_i\\in (\\tau _{b},\\tau _j)\\big ].$ Thus $\\hspace{28.45274pt}\\sup _{\\tau \\in {\\cal B}(\\tau _j,u_n)} \\frac{1}{n}\\sum _{i=1}^n\\zeta _i(\\tau ,\\tau _j)\\le \\max _{\\tau \\in \\lbrace \\tau _a,\\tau _b\\rbrace }\\frac{1}{n}\\sum _{i=1}^n\\zeta _i(\\tau ,\\tau _j)\\le c_u\\Big \\lbrace \\frac{\\log p}{n},u_n\\Big \\rbrace .$ with probability at least $1-c_1\\exp (-c_2\\log p).$ Combining 
the bound (REF ) over all $j=1,...,l$ using a union bound, we obtain (REF ) with probability at least $1-c_1(2u_n)^{-1}\exp (-c_2\log p).$ Finally, since by assumption $\log (u_n^{-1})= O(\log p),$ (REF ) holds with probability at least $1-c_1\exp (-c_2\log p),$ for $n$ sufficiently large.", "This completes the proof of Part (i).", "The proof of Part (ii) proceeds with a similar idea as Part (i).", "Divide the space $\bar{{\mathbb {R}}}$ into $l=2/v_n$ closed intervals (disjoint except at the boundaries), each of length $v_n/2.$ Let $\tau _1,...,\tau _{l}$ be fixed points which represent the centres of these intervals.", "We shall show that, $\min _{j=1,...,l}\inf _{{\tau \in \bar{{\mathbb {R}}};\\\tau \in {\cal B}(\tau _j,v_n/4)}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\ge c_uv_n,$ with probability at least $1-c_1\exp (-c_2\log p),$ for $n$ sufficiently large.", "Assuming (REF ), observe that at least one interval ${\cal B}(\tau _j,v_n/4),$ $j=1,...,l,$ will be contained in the interval between any two $\tau _a,\tau _b\in \bar{{\mathbb {R}}}$ satisfying $d(\tau _a,\tau _b)\ge v_n.$ This implies that $\inf _{{\tau _a,\tau _b\in \bar{{\mathbb {R}}};\\d(\tau _a,\tau _b)\ge v_n}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau _a,\tau _b)\ge \min _{j=1,...,l}\inf _{{\tau \in \bar{{\mathbb {R}}};\\\tau \in {\cal B}(\tau _j,v_n/4)}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\ge c_uv_n\nonumber $ with probability at least $1-c_1\exp (-c_2\log p).$ Thus to prove part (ii), it only remains to prove (REF ).", "For this purpose, we use a lower bound for sums of non-negative r.v.'s stated in Lemma REF .", "This result was originally proved by [34].", "For a fixed right boundary point $\tau _a>\tau _j$ such that $d(\tau _a,\tau _j)=v_n/4,$ set $t=v_n$ in Lemma REF .", "Then we have $P\Big (\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau _a,\tau _j)\le v_n\Big )\le \exp \Big
(-4\frac{n^2v_n^2}{nv_n}\Big )\le c_1\exp (-c_2\log p),\nonumber $ where the last inequality follows from $v_n\ge c\log p/n.$ We obtain the same bound by applying a similar argument for the left boundary point $\tau _b<\tau _{j}$ such that $d(\tau _{b},\tau _j)=v_n/4.$ Now applying an elementary union bound we obtain $\hspace{14.22636pt}P\Big (\min _{\tau \in \lbrace \tau _a,\tau _b\rbrace }\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\ge c_u v_n\Big )\ge 1-c_1\exp (-c_2\log p).$ In order to obtain uniformity over $\tau \in \big \lbrace \tau ;\, d(\tau ,\tau _j)\ge v_n/4\big \rbrace ,$ note that for $\tau >\tau _j,$ we have $\zeta _i(\tau ,\tau _j)={\bf 1}\big [w_i\in (\tau _j,\tau )\big ]\ge {\bf 1}[w_i\in (\tau _j,\tau _a]]$ and for any $\tau <\tau _j,$ we have $\zeta _i(\tau ,\tau _j)={\bf 1}[w_i\in (\tau _b,\tau _j)]\ge {\bf 1}[w_i\in [\tau _b,\tau _j)].$ This implies that $\inf _{{\tau \in \bar{{\mathbb {R}}};\\ d(\tau ,\tau _j)\ge v_n}}\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\ge \min _{\tau \in \lbrace \tau _a,\tau _b\rbrace }\frac{1}{n}\sum _{i=1}^n\zeta _i(\tau ,\tau _j)\ge c_uv_n,$ with probability at least $1-c_1\exp (-c_2\log p).$ Finally, (REF ) follows by using a union bound over all $j=1,...,l$ and recalling that by assumption $v_n\ge c\log p/ n$ and therefore $\log (v_n^{-1})=O(\log p).$ This completes the proof of Lemma REF .", "[Proof of Lemma REF ] In the following, let $n^w:=n^w(\tau _a,\tau _b).$ To prove Part (i) note that, $\hspace{28.45274pt}\inf _{{\tau _a,\tau _b\in \bar{{\mathbb {R}}};\\d(\tau _a,\tau _b)\ge v_n}}\inf _{\delta \in {\mathbb {A}}} \frac{1}{n}\sum _{i\in n^w}\delta ^T x_ix_i^T \delta = \inf _{{\tau _a,\tau _b\in \bar{{\mathbb {R}}};\\d(\tau _a,\tau _b)\ge v_n}}\frac{|n^w|}{n}\inf _{\delta \in {\mathbb {A}}}\frac{1}{|n^w|}\sum _{i\in n^w}\delta ^T x_ix_i^T \delta $ Let
$P_w(\\cdot )$ represent the conditional probability $P(\\cdot |w),$ where $w=(w_1,...,w_n)^T.$ Recalling that $w$ is independent of $x,\\varepsilon ,$ by assumption D(iv) and applying Lemma REF and Lemma REF we obtain, $P_w\\Big (\\inf _{\\delta \\in {\\mathbb {A}}}\\frac{1}{|n^w|}\\sum _{i\\in n^w} \\delta ^Tx_ix_i^T\\delta \\ge \\kappa \\Vert \\delta \\Vert _2^2-c_uc_m\\frac{\\log p}{|n^w|}\\Vert \\delta \\Vert _1^2\\Big )\\ge \\hspace{28.45274pt}\\nonumber \\\\1-c_1\\exp (-c_2\\log p)$ Since the probability in the RHS of (REF ) is free of $w,$ taking expectations on both sides yields, $P\\Big (\\inf _{\\delta \\in {\\mathbb {A}}}\\frac{1}{|n^w|}\\sum _{i\\in n^w} \\delta ^Tx_ix_i^T\\delta \\ge \\kappa \\Vert \\delta \\Vert _2^2-c_uc_m\\frac{\\log p}{|n^w|}\\Vert \\delta \\Vert _1^2\\Big )\\ge \\hspace{28.45274pt}\\nonumber \\\\1-c_1\\exp (-c_2\\log p)$ Recall from Part (ii) of Lemma REF that $\\inf _{d(\\tau _a,\\tau _b)\\ge v_n}|n^w|/n\\ge c_uv_n,$ with probability at least $1-c_1\\exp (-c_2\\log p).$ Also, since $\\delta \\in {\\mathbb {A}},$ hence $\\Vert \\delta \\Vert _1^2\\le c_us\\Vert \\delta \\Vert _2^2.$ Combining these results with (REF ) and substituting in (REF ) we obtain, $\\inf _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\ge v_n}}\\inf _{\\delta \\in {\\mathbb {A}}} \\frac{1}{n}\\sum _{i\\in n^w}\\delta ^T x_ix_i^T \\delta &\\ge & c_uc_mv_n\\Vert \\delta \\Vert _2^2-c_uc_m\\frac{s\\log p}{n}\\Vert \\delta \\Vert _2^2\\nonumber \\\\&\\ge & c_uc_mv_n\\Vert \\delta \\Vert _2^2,\\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ Here the final inequality follows since by assumption $v_n\\ge cs\\log p\\big /n.$ This completes the proof of Part (i).", "The proof of Part (ii) and Part (iii) are very similar to Part (i) and thus only key steps are provided.", "To prove part (ii), proceed as in Part (i) to obtain, $P\\Big (\\inf _{\\delta \\in {\\mathbb {A}}_2}\\frac{1}{|n^w|}\\sum _{i\\in n^w} \\delta 
^Tx_ix_i^T\\delta \\ge \\kappa \\Vert \\delta \\Vert _2^2-c_uc_m\\frac{\\log p}{|n^w|}\\Vert \\delta \\Vert _1^2\\Big )\\ge \\hspace{28.45274pt}\\nonumber \\\\1-c_1\\exp (-c_2\\log p)$ In this case since $\\delta \\in {\\mathbb {A}}_2,$ hence $\\Vert \\delta \\Vert _1^2\\le c_us(\\Vert \\delta \\Vert _2^2+\\xi _{\\max }^2).$ Substituting this result in (REF ) and proceeding as in Part (i) yields, $\\inf _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\ge v_n}}\\inf _{\\delta \\in {\\mathbb {A}}_2} \\frac{1}{n}\\sum _{i\\in n^w}\\delta ^T x_ix_i^T \\delta &\\ge & c_uc_mv_n\\Vert \\delta \\Vert _2^2-c_uc_m\\frac{s\\log p}{n}\\Vert \\delta \\Vert _2^2-\\frac{\\xi _{\\max }^2s\\log p}{n}\\nonumber \\\\&\\ge & c_uc_mv_n\\Vert \\delta \\Vert _2^2-\\frac{\\xi _{\\max }^2s\\log p}{n},\\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ This completes the proof of Part (ii).", "To prove Part (iii), note that $\\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\ d(\\tau _a,\\tau _b)\\le u_n}}\\sup _{\\delta \\in {\\mathbb {A}}}\\frac{1}{n}\\sum _{i\\in n_w} \\delta ^Tx_ix_i^T\\delta =\\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\ d(\\tau _a,\\tau _b)\\le u_n}}\\frac{|n_w|}{n}\\sup _{\\delta \\in {\\mathbb {A}}}\\frac{1}{|n_w|}\\sum _{i\\in n_w} \\delta ^Tx_ix_i^T\\delta \\nonumber $ Now, from Part (i) of Lemma REF we have that $\\sup _{d(\\tau _a,\\tau _b)\\le u_n}|n^w|/n\\le c_u\\max \\lbrace \\log p/n ,u_n\\rbrace .$ Proceeding via the conditional probability argument described for Part (i) leads to, $\\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\ d(\\tau _a,\\tau _b)\\le u_n}}\\sup _{\\delta \\in {\\mathbb {A}}} \\frac{1}{n}\\sum _{i\\in n^w}\\delta ^T x_ix_i^T \\delta &\\le & c_uc_m\\Vert \\delta \\Vert _2^2 \\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace + c_uc_m\\frac{s\\log p}{n}\\Vert \\delta \\Vert _2^2\\nonumber \\\\&\\le & c_uc_m\\Vert \\delta \\Vert _2^2 \\max \\Big \\lbrace \\frac{s\\log 
p}{n},u_n\\Big \\rbrace \\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ This completes the proof of Lemma REF .", "[Proof of Theorem REF ] First consider the case where, $\\tau _a,\\tau _b\\in {\\cal C}_j^1.$ Then a simple algebraic manipulation of the basic inequality $\\frac{1}{n}Q^*(\\hat{\\alpha },\\tau _a,\\tau _b)+\\lambda \\Vert \\hat{\\alpha }\\Vert _1\\le \\frac{1}{n}Q^*(\\beta ^0_{(j-1)},\\tau _a,\\tau _b)+\\lambda \\Vert \\beta ^0_{(j-1)}\\Vert _1$ yields, $\\frac{1}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\Vert x_i^T(\\hat{\\alpha }-\\beta ^{0}_{(j-1)})\\Vert _2^2 + \\lambda _0\\Vert \\alpha \\Vert _1\\le \\hspace{36.135pt}\\nonumber \\\\\\Big |\\frac{2}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\tilde{\\varepsilon }_ix_i^T(\\hat{\\alpha }-\\beta ^{0}_{(j-1)})\\Big |+ \\lambda _0\\Vert \\beta ^0_{(j)}\\Vert _1.$ Here $\\tilde{\\varepsilon }_i=y_i-x_i^T\\beta ^0_{(j-1)}.$ Note that $\\tilde{\\varepsilon }_i$ may or may not be the same as $\\varepsilon _i$ depending on the index $i.$ Also, we have the following bound, $\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\tilde{\\varepsilon }_ix_i^T\\Big \\Vert _{\\infty }\\hspace{289.07999pt}\\nonumber \\\\\\le \\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _{j-1}^0,\\tau _j^0)} \\varepsilon _ix_i^T\\Big \\Vert _{\\infty } +\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _a,\\tau _{j-1}^0)} \\tilde{\\varepsilon }_ix_i^T\\Big \\Vert _{\\infty }+\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _j^0,\\tau _b)} \\tilde{\\varepsilon }_ix_i^T\\Big \\Vert _{\\infty }\\hspace{28.45274pt}\\nonumber \\\\\\le \\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _{j-1}^0,\\tau _j^0)} \\varepsilon _ix_i^T\\Big \\Vert _{\\infty }+\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _{j-1}^0,\\tau _j^0)} \\varepsilon _ix_i^T\\Big \\Vert _{\\infty }+\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _{j-1}^0,\\tau _j^0)} \\varepsilon _ix_i^T\\Big \\Vert _{\\infty }\\hspace{21.33955pt}\\nonumber 
\\\\+\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _a,\\tau _{j-1}^0)} (\\beta _{(j-1)}^0-\\beta _{(j-2)}^0)^Tx_ix_i^T \\Big \\Vert _{\\infty }+\\frac{1}{n}\\Big \\Vert \\sum _{i\\in n^w(\\tau _j^0,\\tau _b)} (\\beta _{(j)}^0-\\beta _{(j-1)}^0)^Tx_ix_i^T \\Big \\Vert _{\\infty }\\nonumber \\\\\\le c_uc_m\\sqrt{\\frac{\\log p}{n}} + c_uc_m\\sqrt{\\frac{\\log p}{n}}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\sqrt{u_n}\\Big \\rbrace +c_uc_m\\max \\Big \\lbrace \\frac{\\xi _{\\max }\\log p}{n},\\xi _{\\max }u_n\\Big \\rbrace \\nonumber \\\\\\le c_uc_m\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\xi _{\\max }u_n\\Big \\rbrace =\\lambda .\\hspace{224.03743pt}\\nonumber $ The second to last inequality here follows by applying the bounds provided in Lemma REF .", "Substituting the bound of the final inequality in (REF ), and choosing $\\lambda _0=2\\lambda ,$ yields the relation $\\Vert \\hat{\\alpha }_{S^c}\\Vert _1\\le 3\\Vert (\\hat{\\alpha }-\\beta ^0_{(j-1)})_S\\Vert _1,$ consequently the vector $\\hat{\\alpha }-\\beta ^0_{(j-1)}\\in {\\mathbb {A}}.$ Thus the first two inequalities of Lemma REF are now applicable.", "From (REF ) and an application of Part (i) Lemma REF with $v_n=l_{\\min }$ we can obtain, $c_uc_ml_{\\min }\\Vert \\hat{\\alpha }-\\beta ^0_{j-1}\\Vert _2^2\\le \\sqrt{s} \\lambda \\Vert \\hat{\\alpha }-\\beta ^0_{j-1}\\Vert _2,\\nonumber $ which directly implies that $\\Vert \\hat{\\alpha }-\\beta ^0_{j-1}\\Vert _2\\le \\sqrt{s}\\lambda \\big /l_{\\min }.$ To obtain the $\\ell _1$ bound, recall that since $\\hat{\\alpha }-\\beta ^0_{(j-1)}\\in {\\mathbb {A}},$ hence $\\Vert \\hat{\\alpha }-\\beta ^0_{(j-1)}\\Vert _1\\le \\sqrt{s}\\Vert \\hat{\\alpha }-\\beta ^0_{(j-1)}\\Vert _2.$ To complete the proof of this case, note that all bounds in the above arguments hold uniformly over any $\\tau _a,\\tau _b\\in {\\cal C}^1_j,$ with probability at least $1-c_1\\exp (-c_2\\log p).$ The cases of $\\tau _a,\\tau _b\\in {\\cal C}_j^2,$ ${\\cal C}^3_j$ 
and ${\\cal C}^4_j$ can be proved similarly.", "The final statement of the lemma follows by applying a union bound.", "[Proof of Corollary REF ] The proof of this result is a direct consequence of Theorem REF .", "Observe that by Condition A(i) and A(ii), the initial change point vector $\\check{\\tau }=(\\check{\\tau }_1,\\check{\\tau }_2,...,\\check{\\tau }_{\\check{N}})^T$ satisfies the following.", "For any $j=1,...,N$ the pair $\\tau _{m_{j}-1},\\tau _{m_j}$ lies in either ${\\cal C}^1_{j},$ or ${\\cal C}^2_{j}$ as defined in Theorem REF .", "Part (i) of this corollary follows by applying Theorem REF .", "Similarly, Part (ii) follows by noting that for any $j\\in {\\cal T}^c,$ the pair $\\check{\\tau }_{j-1},\\check{\\tau }_j$ belongs to either ${\\cal C}_{k_j}^3$ or ${\\cal C}_{k_j}^4.$ This completes the proof of this corollary.", "Remark 6.1 (Additional notation used in the Proof of Lemma REF ): Recall that the set ${\\cal T}=\\lbrace m_0,m_1,...,m_{N},m_{N+1}\\rbrace $ (defined in Condition A) is the subset of indices of $\\lbrace 0,1,2....,\\check{N}+1\\rbrace ,$ such that the initial change point $\\check{\\tau }_{m_j}$ lies in a ${\\check{u}}$ -neighborhood of $\\tau ^0_j.$ In the proof to follow, we use the notation $\\sum _{m_{j-1}<l<m_j},$ to represent the sum over all possible indices $l$ which lie between $m_{j-1}$ and $m_j.$ For example, let $N=2,$ $\\check{N}=5$ and consider any $\\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ in the orientation described in the following Figure REF .", "Figure: A possible orientation of initializers τ ˇ∈ℝ 5 ,\\check{\\tau }\\in {\\mathbb {R}}^{5}, where N=2.N=2.Then, ${\\cal T}=\\lbrace m_0,m_1,m_2,m_3\\rbrace =\\lbrace 0,3,4,6\\rbrace ,$ and for $j=1,$ we denote by, $\\sum _{m_{j-1}<l<m_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)= Q^*(\\alpha _{0},\\tau _{0},\\tau _1)+Q^*(\\alpha _{1},\\tau _{1},\\tau _2)\\nonumber $ Remark 6.2 (Useful observation utilized in the Proof of Lemma REF ): Consider the following 
decomposition of the $\\ell _0$ regularizing term in Step 1 of Algorithm 1.", "For any $\\tau \\in {\\mathbb {R}}^{\\check{N}}$ such that, $\\tau \\in {\\cal G}(u_n,v_n,{\\cal K},{\\cal K}_2),$ and $\\tau ^*$ as defined in (REF ), we have that, $\\sum _{j=1}^{\\check{N}}\\Big (\\Vert d(\\tau _{j-1},\\tau _j)\\Vert _0-\\Vert d(\\tau _{j-1}^*,\\tau _j^*)\\Vert _0\\Big )=\\sum _{j\\in {\\cal T}^c}\\Vert d(\\tau _{j-1},\\tau _j)\\Vert _0\\nonumber \\\\+\\sum _{j=1}^{N}\\Big (\\Vert d(\\tau _{h_j-1},\\tau _{h_j})\\Vert _0-\\Vert d(\\tau _{h_j-1}^*,\\tau _{h_j}^*)\\Vert _0\\Big )\\nonumber $ [Proof of Lemma REF ] Consider any $\\tau \\in {\\cal G}(u_n,v_n,{\\cal K}).$ The proof to follow relies in part on an algebraic manipulation of ${\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )$ defined in (REF ), which in turn requires a decomposition of the least squares loss $Q(\\check{N}, \\alpha ,\\tau ).$ This decomposition of the least squares loss depends on the orientation of $\\tau ,$ and in the following we assume a specific orientation of $\\tau ,$ such that $\\tau _{h_j-1}\\le \\tau ^0_{j}\\le \\tau _{h_j}\\le \\tau _{j+1},$ $j=1,...,N.$ While assuming this orientation does lead to a loss in generality in the sense that it does not include all possible $\\tau ,$ however it can be observed that any orientation of $\\tau $ shall lead to the same lower bound, this can be verified by following the same argument as below, however with a correspondingly different decomposition of the least squares loss.", "We also refer to Lemma 4.1 of [26], which provides a similar result in the special case with $\\check{N}=1,$ for further intuition as to how the same bound persists under any other orientation.", "In the following, for any $\\alpha _{(j)}\\in {\\mathbb {R}}^{p},$ $j=0,...,\\check{N},$ let $\\alpha $ represent the concatenation of $\\alpha _{(j)}^{\\prime }$ s. 
Then consider, $Q(\\check{N},\\alpha ,\\tau )&=&\\frac{1}{n}\\sum _{j=1}^{\\check{N}+1}Q^*(\\alpha _{(j-1)},\\tau _{j-1},\\tau _j)=\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{h_{j-1}<l\\le h_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{h_{j-1}<l< h_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)+\\frac{1}{n}\\sum _{j=1}^{N+1}Q^*(\\alpha _{(h_j-1)},\\tau _{h_j-1},\\tau _{h_j})\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)+\\frac{1}{n}\\sum _{j=1}^{N+1}Q^*(\\alpha _{(h_j-1)},\\tau _{h_j-1},\\tau _{j}^0)\\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N}Q^*(\\alpha _{(h_j-1)},\\tau _{j}^0,\\tau _{h_j}).\\nonumber $ Now, recall the definition of $\\tau ^*$ from (REF ) and note that, $Q(\\check{N},\\alpha ,\\tau ^*)&=&\\frac{1}{n}\\sum _{j=1}^{\\check{N}+1}Q^*(\\alpha _{(j-1)},\\tau _{j-1}^*,\\tau _j^*)=\\frac{1}{n}\\sum _{j=1}^{N+1}Q^*(\\alpha _{(h_j-1)},\\tau _{h_j-1}^*,\\tau _{h_j}^*)\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(h_j-1)},\\tau _{l-1},\\tau _{l})+\\frac{1}{n}\\sum _{j=1}^NQ(\\alpha _{(h_{j+1}-1)},\\tau _{j}^0,\\tau _{h_j})\\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N+1}Q^*(\\alpha _{(h_j-1)},\\tau _{h_j-1},\\tau _{j}^0).\\nonumber $ Substituting the above expressions for $Q(\\check{N},\\alpha ,\\tau )$ and $Q(\\check{N},\\alpha ,\\tau ^*)$ in the definition of ${\\cal U}^*(\\check{N}, \\alpha ,\\tau )$ given in (REF ), we obtain, ${\\cal U}^*(\\check{N}, \\alpha ,\\tau )&=&Q(\\check{N}, \\alpha ,\\tau )-Q(\\check{N}, \\alpha ,\\tau ^*)\\nonumber \\\\&=& \\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)+\\frac{1}{n}\\sum _{j=1}^{N}Q^*(\\alpha _{(h_j-1)},\\tau _{j}^0,\\tau _{h_j})\\nonumber \\\\&&-\\frac{1}{n}\\sum _{j=1}^NQ(\\alpha _{(h_{j+1}-1)},\\tau _{j}^0,\\tau _{h_j})-\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(h_j-1)},\\tau 
_{l-1},\\tau _{l})\\nonumber \\\\&:=&(T1)+(T2)-(T3)-(T4)\\nonumber $ Further simplifying terms (T1)-(T4) we obtain, $T1&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(l-1)},\\tau _{l-1},\\tau _l)=\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\big (y_i-x_i^T\\alpha _{(l-1)}\\big )^2\\nonumber \\\\&=& \\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}} \\varepsilon _i^2-\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha _{(l-1)}-\\beta ^0_{(j-1)}) \\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}(\\alpha _{(l-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\alpha _{(l-1)}-\\beta ^0_{(j-1)})\\nonumber $ $T2&=&\\frac{1}{n}\\sum _{j=1}^{N}Q^*(\\alpha _{(h_j-1)},\\tau _{j}^0,\\tau _{h_j})=\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\big (y_i-x_i^T\\alpha _{(h_j-1)}\\big )^2\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\big (\\varepsilon _i-x_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})\\big )^2\\nonumber \\\\&=& \\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})} \\varepsilon _i^2 - \\frac{2}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})\\nonumber $ $T3&=&\\frac{1}{n}\\sum _{j=1}^NQ(\\alpha _{(h_{j+1}-1)},\\tau _{j}^0,\\tau _{h_j})\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})} \\varepsilon _i^2 - \\frac{2}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau 
^0_j,\\tau _{h_j})}(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})\\nonumber $ $T4&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}Q^*(\\alpha _{(h_j-1)},\\tau _{l-1},\\tau _{l})\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\big (y_i-x_i^T\\alpha _{(h_j-1)}\\big )^2\\nonumber \\\\&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}} \\varepsilon _i^2\\nonumber \\\\&&-\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)}) \\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})\\nonumber $ Substituting the above expressions for terms $(T1)-(T4)$ back in the expression for ${\\cal U}^*(\\check{N},\\alpha ,\\tau ),$ while also noting that all terms involving $\\varepsilon _i^2$ cancel each other, we obtain, ${\\cal U}^*(\\check{N},\\alpha ,\\tau )&=&\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&+\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}(\\alpha _{(l-1)}-\\beta ^0_{j-1})^Tx_ix_i^T(\\alpha _{(l-1)}-\\beta ^0_{(j-1)})\\nonumber \\\\&&-\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&-\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})\\nonumber \\\\&&-\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha 
_{(l-1)}-\\beta ^0_{(j-1)})\\nonumber \\\\&&-\\frac{2}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&+\\frac{2}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\alpha _{(h_{j+1}-1)}-\\beta ^0_{(j)})\\nonumber \\\\&&+\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)}) \\nonumber \\\\&:=& (R1)+(R2)-(R3)-(R4)-(R5)-(R6)+(R7)+(R8)\\nonumber $ Here, the terms $(R1),(R3),(R6),(R7)$ are non-zero only when $N\\ge 1.$ In the case where $N=0,$ these four terms will be identically zero.", "Also note that $R2\\ge 0,$ since it is a quadratic form.", "Observe that when ${\\cal U}^*(\\check{N},\\alpha ,\\tau )$ is evaluated at $\\hat{\\alpha },$ and at any $\\tau \\in {\\cal G}(u_n,v_n,{\\cal K}),$ the following uniform bounds for the terms $(R1)-(R8)$ hold, each with probability at least $1-c_1N\\exp (-c_2\\log p).$ These bounds for terms $(R1)-(R8)$ follow from applications of Lemma REF , Lemma REF and Corollary REF .", "Details pertaining to the derivations of these bounds are discussed in Lemma REF in Appendix B of the supplementary materials.", "$R4&=&\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})\\nonumber \\\\&\\le & \\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}r_n^2 \\le c_uc_m|{\\cal K}|r_n^2\\nonumber \\\\|R5|&\\le & \\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\Big |\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha _{(l-1)}-\\beta ^0_{(j-1)})\\Big |\\nonumber \\\\&\\le & 2\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sqrt{\\frac{\\log p}{n}}\\sqrt{s}r_n \\le c_uc_m|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n\\nonumber \\\\|R8|&\\le &\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\Big 
|\\sum _{\\tau _{l-1}<w_i<\\tau _{l}}\\varepsilon _ix_i^T(\\alpha _{(h_j-1)}-\\beta ^0_{(j-1)})\\Big |\\nonumber \\\\&\\le & 2\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l< h_j}\\sqrt{\\frac{\\log p}{n}}\\sqrt{s}r_n \\le c_uc_m|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n\\nonumber $ Next, consider the following two subcases.", "In the first subcase, assume $N=0.$ In this subcase $R1=R3=R6=R7=0.$ Thus, combining the bounds for $(R1)-(R8),$ we obtain for this subcase, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )\\ge -c_uc_m|{\\cal K}|r_n^2- c_uc_m|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n\\nonumber $ In the second subcase, where $N\\ge 1,$ we have, $R1&=&\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\nonumber \\\\&\\ge & c_uc_m\\xi _{\\min }^2v_{n}-c_uc_m\\frac{\\xi _{\\max }^2s\\log p}{n}\\nonumber \\\\R3&=&\\frac{1}{n}\\sum _{j=1}^{N}\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)})\\nonumber \\\\&\\le & c_uc_mr_n^2\\sum _{j=1}^{N}\\max \\Big \\lbrace \\frac{s\\log p}{n},u_{nj}\\Big \\rbrace \\nonumber $ $|R6|&\\le &\\frac{2}{n}\\sum _{j=1}^{N}\\Big |\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\Big |\\nonumber \\\\&\\le & c_uc_m\\xi _{\\max }\\sqrt{\\frac{\\log p}{n}}\\sum _{j=1}^{N}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\sqrt{u_{nj}}\\Big \\rbrace \\nonumber \\\\|R7|&\\le &\\frac{2}{n}\\sum _{j=1}^{N}\\Big |\\sum _{i\\in n^w(\\tau ^0_j,\\tau _{h_j})}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)})\\Big |\\nonumber \\\\&\\le &c_uc_m r_n\\sqrt{\\frac{\\log p}{n}}\\sum _{j=1}^{N}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\sqrt{u_{nj}}\\Big \\rbrace .\\nonumber $ Combining the bounds for the terms $(R1)-(R8),$ we obtain, $\\inf _{\\tau 
\\in {\\cal G}}{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )\\ge c_uc_m\\xi _{\\min }^2v_{n}-c_uc_m\\frac{\\xi _{\\max }^2s\\log p}{n}-c_uc_mr_n^2\\sum _{j=1}^{N}\\max \\Big \\lbrace \\frac{s\\log p}{n},u_{nj}\\Big \\rbrace \\nonumber \\\\-c_uc_m\\xi _{\\max }\\sqrt{\\frac{\\log p}{n}}\\sum _{j=1}^{N}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\sqrt{u_{nj}}\\Big \\rbrace -c_uc_m|{\\cal K}|r_n^2\\nonumber \\\\- c_uc_m|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n\\hspace{148.15372pt}$ To complete the proof, recall the definition of ${\\cal U}(\\check{N},\\hat{\\alpha },\\tau )$ from (REF ), and observe from (REF ), that for $\\tau \\in {\\cal G}$ ${\\cal U}(\\check{N},\\hat{\\alpha },\\tau )&=&{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )+\\mu |{\\cal K}|\\nonumber \\\\&&+ \\mu \\sum _{j=1}^N\\Big (\\Vert d(\\tau _{h_j-1},\\tau _{h_j})\\Vert _0-\\Vert d(\\tau _{h_j-1}^*,\\tau _{h_j}^*)\\Vert _0\\Big )\\nonumber $ Now, if $u_n$ is such that it converges to zero faster than $l_{\\min },$ i.e.", "$u_n/l_{\\min }\\rightarrow 0,$ then clearly the sign of $d(\\tau _{h_j-1},\\tau _{h_j})$ will be the same as that of $d(\\tau _{h_j-1}^*,\\tau _{h_j}^*),$ for each $j\\in \\lbrace 1,...,N\\rbrace ,$ and $n$ sufficiently large.", "The statement of this lemma now follows by combining the above expression with (REF ) and using the assumption $\\xi _{\\min }>c_u.$ [Proof of Theorem REF ] To begin with, note that by Condition B(iii) we have that $r_n^2/\\xi _{\\min }^2=o(s\\log p/n)^{1/k}.$ We begin by proving Part (i) of this theorem.", "For this purpose, first consider the case when $N=0.$ In this case $\\lbrace h_1,...,h_N\\rbrace =\\emptyset ,$ and thus by construction, the sequences $u_n,$ $v_n$ play no role in the set ${\\cal G}({\\cal K}):={\\cal G}(u_n,v_n,{\\cal K}).$ Now, applying Part (i) of Lemma REF , we obtain, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )\\ge \\mu |{\\cal K}|-c_uc_m|{\\cal 
K}|\\sqrt{\\frac{s\\log p}{n}}r_n,\\nonumber $ Now, suppose, if possible, that ${\\cal K}$ is non-empty.", "Then by the choice of $\\mu =c_uc_m\\rho (s\\log p/n)^{1/k^*},$ where $k^*=\\max \\lbrace k,2\\rbrace ,$ and $n$ sufficiently large, we have that, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )>0.$ This implies that the optimizer $\\hat{\\tau }\\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ of Step 1 of Algorithm 1 cannot lie in the set ${\\cal G}({\\cal K}),$ for any non-empty set ${\\cal K},$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ Thus the only remaining possibility is that $\\hat{\\tau }\\in \\bar{{\\mathbb {R}}}^{\\check{N}}$ is such that $\\hat{\\tau }_{j-1}=\\hat{\\tau }_j,$ for all $j=1,...,\\check{N},$ i.e., $\\hat{\\tau }_j=-\\infty ,$ $j=1,...,N.$ This directly implies that $\\hat{{\\cal T}}(\\hat{\\tau })=\\emptyset ,$ and consequently $\\tilde{N}=0,$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ This proves the theorem in this case.", "Next consider the case $N\\ge 1.$ Since the optimization of Step 1 of Algorithm 1 is over a subset of $\\tau \\in \\bar{{\\mathbb {R}}}^{\\check{N}},$ any such $\\tau $ must satisfy $0\\le \\sum _{j=1}^{N}d(\\tau _{h_j},\\tau _j^0)\\le u_{n}=N,$ consequently, $\\tau \\in {\\cal G}:={\\cal G}(N,0,{\\cal K}),$ for some ${\\cal K}\\subseteq {\\cal T}^c.$ Let $v_{n}\\ge Ns\\log p/n$ be any positive sequence, then applying Part (ii) of Lemma REF over the collection ${\\cal G}(N,v_n,{\\cal K}),$ yields the bound, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )&\\ge & c_uc_mv_{n}+\\mu |{\\cal K}|-c_uc_mN\\frac{\\rho ^2s\\log p}{n}-\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}|{\\cal K}|r_n^2\\nonumber \\\\&&-N\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}r_n^2- N\\frac{c_uc_m\\rho }{(1\\vee \\xi _{\\min })}\\sqrt{\\frac{s\\log p}{n}}\\nonumber \\\\&&- \\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n- \\frac{N\\mu }{(1\\vee \\xi _{\\min 
}^2)},\\nonumber $ with probability at least $1-c_1N\\exp (-c_2\\log p).$ Now choose $v_n:=v_n^*= c_uc_mN\\rho (s\\log p/n)^{1/k^*}.$ Then for $n$ sufficiently large we have, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )>0.$ This implies that the optimizer $\\hat{\\tau }$ cannot lie in the set ${\\cal G}(N,v_n^*,{\\cal K}),$ and thus $\\hat{\\tau }\\in {\\cal G}(v_n^*,0,{\\cal K}),$ for some ${\\cal K}.$ This statement together with Condition C(ii) also implies that all $\\hat{\\tau }_{h_j}$ 's are finite and distinct, so that $\\tilde{N}\\ge N.$ Now, for any non-empty ${\\cal K},$ reset $u_n=v_n^*$ and apply Part (ii) of Lemma REF over the collection ${\\cal G}(u_n,0,{\\cal K}).$ Noting that in this case $F(u_n)=0,$ we obtain, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )&\\ge & \\mu |{\\cal K}|-c_uc_mN\\frac{\\rho ^2s\\log p}{n}-\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}|{\\cal K}|r_n^2\\nonumber \\\\&&-\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}r_n^2u_{n}- \\frac{c_uc_m\\rho }{(1\\vee \\xi _{\\min })}\\sqrt{\\frac{s\\log p}{n}}\\sqrt{Nu_n}\\nonumber \\\\&&- \\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n,\\nonumber $ Under the choice $\\mu =c_uc_m\\rho (s\\log p/n)^{1/k^*},$ we obtain that $\\inf _{\\tau \\in {\\cal G}}{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )>0,$ for any non-empty set ${\\cal K}.$ This implies that $\\hat{\\tau }\\in {\\cal G}(u_n,0,\\emptyset ).$ In other words, there are no finite and distinct interruptions between $\\hat{\\tau }_{h_j}$ 's, consequently $\\tilde{N}= N,$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ This proves Part (i) of this theorem.", "The proof of Part (ii) relies on applying the above argument to recursively tighten the bound for $\\hat{\\tau }.$ We have already shown that $\\hat{\\tau }\\in {\\cal G}(u_n,0,\\emptyset ).$ Applying the same lower bound over the collection ${\\cal 
G}(u_n,v_n,\\emptyset )$ we obtain, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}(\\check{N},\\hat{\\alpha },\\tau )&\\ge & c_uc_mv_{n}-c_uc_mN\\frac{\\rho ^2s\\log p}{n}-\\frac{c_uc_m}{(1\\vee \\xi _{\\min }^2)}r_n^2u_{n}\\nonumber \\\\&&- \\frac{c_uc_m\\rho }{(1\\vee \\xi _{\\min })}\\sqrt{\\frac{s\\log p}{n}}\\sqrt{Nu_n}\\nonumber $ Now, upon choosing $v_n\\ge v_n^*:=c_uc_mN\\rho ^{1+\\frac{1}{2}}\\Big (\\frac{s\\log p}{n}\\Big )^{a_2},\\,\\,\\,{\\rm with}\\,\\,a_2=\\min \\big \\lbrace \\frac{1}{2}+\\frac{1}{2k^*},\\frac{1}{k^*}+\\frac{1}{k^*}\\big \\rbrace \\nonumber $ we obtain that for $n$ large, $\\inf _{\\tau \\in {\\cal G}}{\\cal U}^*(\\check{N},\\hat{\\alpha },\\tau )>0.$ Thus implying that $\\hat{\\tau }\\in {\\cal G}(v_n^*,0,\\emptyset ),$ i.e., $\\sum _{j=1}^Nd(\\hat{\\tau }_{h_j},\\tau ^0_j)\\le v_n^*,$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ Note that, by using the above recursive argument we have tightened the desired rate at each step.", "Continuing these recursions, by resetting $u_n$ to the bound of the previous recursion, and applying Part (ii) of Lemma REF over the collection ${\\cal G}(u_n,v_n,\\emptyset ),$ we can obtain for the $m^{th}$ recursion that, $\\sum _{j=1}^Nd(\\hat{\\tau }_{h_j},\\tau ^0_j)\\le c_uc_m \\rho ^{b_m}\\Big (\\frac{s\\log p}{n}\\Big )^{a_m},\\quad {\\rm where,}\\nonumber \\\\a_m=\\min \\Big \\lbrace \\frac{1}{2}+\\frac{a_{m-1}}{2},\\,\\frac{1}{k^*}+a_{m-1}\\Big \\rbrace ,\\,\\,{\\rm and}\\,\\, b_m=1+\\frac{b_{m-1}}{2}\\hspace{-28.45274pt},\\nonumber $ additionally $a_1=1/k^*$ and $b_1=1.$ To finish the proof, note that if we continue the above recursions an infinite number of times, we obtain $a_{\\infty }=\\sum _{m=1}^{\\infty }1/2^m=1$ and $b_{\\infty }=1+\\sum _{m=1}^{\\infty }1/2^m=2.$ Note that, despite the recursions in the above argument, the probability of the bound obtained after every recursion is maintained to be at least $1-c_1N\\exp (-c_2\\log p);$ this follows from Remark REF .", "This 
completes the proof of this theorem.", "Remark 6.3 (Observation utilized in the proof of Theorem REF ): The proof of Theorem REF relies on recursive application of Lemma REF .", "This in turn requires recursive application of the bounds of Lemma REF , the probability of all bounds holding simultaneously at each recursion being at least $1-c_1N\\exp (-c_2\\log p).$ Despite these recursions (potentially infinite) the result from the final recursion continues to hold with probability at least $1-c_1N\\exp (-c_2\\log p).$ To see this, let $u_n\\rightarrow 0$ be any positive sequence and let $\\lbrace a_j\\rbrace \\rightarrow a_{\\infty },$ $j\\rightarrow \\infty ,$ $0<a_j\\le 1,$ be any strictly increasing sequence over $j=1,2,...$ .", "Then define sequences $u^j_n=u_n^{a_j},$ $j=1,2,...$ .", "Here note that $u_n^{j+1}=o(u_n^j),$ $j=1,...,$ i.e., each sequence converges to zero faster than the preceding one.", "Let ${\\cal E}_{u^1},{\\cal E}_{u^2},...$ be events, each with probability $1-c_1N\\exp (-c_2\\log p),$ on which the upper bounds of Lemma REF hold for each $u^1_n,u^2_n,...$ respectively.", "Clearly, on the intersection of events ${\\cal E}_{u^1}\\cap {\\cal E}_{u^2}\\cap ...,$ all upper bounds of Lemma REF hold simultaneously over any sequence $u_n^j,$ $j=1,...,\\infty .$ Now, note that by the construction of these sequences, and that these are all upper bounds, the following containment holds: ${\\cal E}_{u^1}\\supseteq {\\cal E}_{u^2}\\supseteq ...\\supseteq {\\cal E}_{u^\\infty }.$ This implies that on the event ${\\cal E}_{u^\\infty }$ all bounds of Lemma REF hold simultaneously for any sequence $\\lbrace u_n^{j}\\rbrace ,$ $j=1,...,\\infty .$ Here ${\\cal E}_{u^\\infty }$ represents the set corresponding to the sequence $u_{n}^{\\infty }=u_n^{a_{\\infty }}.$ Also, by a single application of Lemma REF , $P({\\cal E}_{u^\\infty })\\ge 1-c_1\\exp (-c_2\\log p).$ The same argument can be made for the lower bound of Lemma REF , with the direction of the 
containment switched.", "[Proof of Corollary REF ] First, note that from the result of Theorem REF , we have that $\\tilde{N}=N$ and $\\sum _{j=1}^{N}d(\\tilde{\\tau }_j,\\tau ^0_j)\\le N\\rho ^2s\\log p\\big /n,$ with probability at least $1-c_1N\\exp (-c_2\\log p),$ for $n$ sufficiently large.", "All arguments to follow are restricted to the event where these two results hold.", "Now by construction of Algorithm 2, the regression estimates $\\tilde{\\alpha }_{(j)},$ $j=0,...,N$ are computed based on the partition yielded by the change point estimate $\\tilde{\\tau }.$ Let $u_{nj}:=|\\tilde{\\tau }_{j}-\\tau ^0_j|\\vee |\\tilde{\\tau }_{j+1}-\\tau ^0_{j+1}|.$ Then, choosing $\\lambda _{1j}=c_uc_m\\max \\lbrace \\sqrt{\\log p/n},\\,\\xi _{\\max }u_{nj}\\rbrace ,$ and applying Theorem REF , we obtain for each $j=0,...,N,$ that, $\\Vert \\tilde{\\alpha }_{(j)}-\\beta _{(j)}^0\\Vert \\le c_uc_ms^{\\frac{1}{q}}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}},\\,\\xi _{\\max }u_{nj}\\Big \\rbrace \\Big /l_{\\min },$ with probability at least $1-c_1\\exp (-c_2\\log p).$ Again, by Theorem REF we have that $\\sum _{j=1}^{N}u_{nj}\\le c_uc_m N\\rho ^2 s\\log p/n,$ with probability at least $1-c_1(1\\vee N)\\exp (-c_2\\log p).$ Thus, summing up the bounds in (REF ) over $j=0,...,N$ we obtain the statement of the Corollary." 
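The proof of the theorem above drives the exponent recursions $a_m=\min \lbrace \frac{1}{2}+\frac{a_{m-1}}{2},\,\frac{1}{k^*}+a_{m-1}\rbrace $ and $b_m=1+\frac{b_{m-1}}{2}$, with $a_1=1/k^*$ and $b_1=1$, to their limits $a_{\infty }=1$ and $b_{\infty }=2$. These limits can be checked numerically; the sketch below is illustrative only, and the choice $k^*=2$ and the iteration count are assumptions, not values fixed by the text.

```python
# Iterate the exponent recursions used in the proof:
#   a_m = min(1/2 + a_{m-1}/2, 1/k* + a_{m-1}),   b_m = 1 + b_{m-1}/2,
# starting from a_1 = 1/k*, b_1 = 1.  The value k* = 2 is an illustrative choice.
def iterate_exponents(k_star=2, steps=60):
    a, b = 1.0 / k_star, 1.0
    for _ in range(steps):
        a = min(0.5 + a / 2.0, 1.0 / k_star + a)
        b = 1.0 + b / 2.0
    return a, b

a_inf, b_inf = iterate_exponents()
print(a_inf, b_inf)  # approaches the fixed points a = 1, b = 2
```

Each recursion contracts the distance to its fixed point by a factor of two, so a few dozen iterations reach machine precision.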
], [ "Auxiliary results", "Lemma 7.1 Suppose Condition D holds and let $u_n$ be any non-negative sequence satisfying $\\log (u_n^{-1})=O(\\log p).$ Then we have for any fixed $\\delta \\in {\\mathbb {R}}^p$ that, $&(i)& \\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\le u_n}}\\Big \\Vert \\frac{1}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\delta ^T x_ix_i^T \\Big \\Vert _{\\infty } \\le c_u c_{m}\\Vert \\delta \\Vert _2\\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace ,\\nonumber \\\\&(ii)& \\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\le u_n}}\\frac{1}{n}\\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\delta ^T x_ix_i^T \\delta \\le c_u c_{m} \\Vert \\delta \\Vert _2^2\\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace ,\\nonumber \\\\&(iii)& \\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\le u_n}}\\frac{1}{n}\\big \\Vert \\sum _{i\\in n^w(\\tau _a,\\tau _b)} \\varepsilon _ix_i^T \\big \\Vert _{\\infty } \\le c_u c_{m}\\sqrt{\\frac{\\log p}{n}}\\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}}, \\sqrt{u_n}\\Big \\rbrace ,\\nonumber $ with probability at least $1- c_1\\exp (-c_2\\log p).$ [Proof of Lemma REF ] We begin with the proof of Part (i).", "Note that the RHS of the inequality in Part (i) is normalized by the $\\ell _2$ norm of $\\delta .$ Hence, without loss of generality we can assume $\\Vert \\delta \\Vert _2=1.$ In the following, denote $n_w=n^w(\\tau _a,\\tau _b).$ Note that if $|n_w|=0$ then Lemma REF holds trivially with probability $1,$ thus without loss of generality we shall assume that $|n_w|>0.$ Now, for any fixed $\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}},$ we have $\\Big \\Vert \\frac{1}{n}\\sum _{i\\in n_w} \\delta ^T x_ix_i^T \\Big \\Vert _{\\infty } \\le \\frac{|n_w|}{n}\\Big \\Vert \\frac{1}{|n_w|}\\sum _{i\\in n_w} \\delta ^T x_ix_i^T \\Big \\Vert _{\\infty }$ Under Condition D(iv) and by properties of conditional expectations (see e.g.", "Lemma REF 
), the conditional probability $P_w(\\cdot )= P(\\cdot \\mid w)$ can be bounded by treating $w$ as a constant.", "Thus, $P_w\\Big (\\Big \\Vert \\frac{\\sum _{i\\in n_w} \\delta ^Tx_ix_i^T}{|n_w|}- \\delta ^T\\Sigma \\Big \\Vert _{\\infty } > t\\Big )\\le 6p \\exp (-c_u|n_w|\\min \\big \\lbrace \\frac{t^2}{\\sigma _x^4}, \\frac{t}{\\sigma _x^2}\\big \\rbrace )\\nonumber $ where the above probability bound is obtained by an application of Part (ii) of Lemma 14 of Loh and Wainwright (2012): supplementary materials.", "This lemma is reproduced as Lemma REF in this section.", "Now choosing $t=c_u\\max \\Big \\lbrace \\sigma _x^2\\sqrt{\\frac{\\log p}{|n_w|}}, \\sigma _x\\frac{\\log p}{|n_w|}\\Big \\rbrace $ we obtain, $P_w\\left(\\Big \\Vert \\frac{\\sum _{i\\in n_w} \\delta ^Tx_ix_i^T}{|n_w|}\\Big \\Vert _{\\infty }\\le \\Vert \\delta ^T\\Sigma \\Vert _{\\infty } + c_u\\max \\Big \\lbrace \\sigma _x^2\\sqrt{\\frac{\\log p}{|n_w|}}, \\sigma _x\\frac{\\log p}{|n_w|}\\Big \\rbrace \\right)\\nonumber \\\\\\ge 1-c_1\\exp (-c_2\\log p).$ The result in (REF ) together with (REF ) yields, $P_w\\left(\\Big \\Vert \\frac{1}{n}\\sum _{i\\in n_w} \\delta ^T x_ix_i^T \\Big \\Vert _{\\infty } \\le \\frac{|n_w|}{n}\\Vert \\delta ^T\\Sigma \\Vert _{\\infty }+\\frac{|n_w|}{n}c_u\\max \\Big \\lbrace \\sigma _x^2\\sqrt{\\frac{\\log p}{|n_w|}}, \\sigma _x\\frac{\\log p}{|n_w|}\\Big \\rbrace \\right)\\nonumber \\\\\\ge 1-c_1\\exp (-c_2\\log p).\\nonumber $ Taking expectations on both sides and observing that the RHS of the above conditional probability is free of $w,$ we obtain, $P\\left(\\Big \\Vert \\frac{1}{n}\\sum _{ i\\in n_w} \\delta ^T x_ix_i^T \\Big \\Vert _{\\infty } \\le \\frac{|n_w|}{n}\\Vert \\delta ^T\\Sigma \\Vert _{\\infty }+\\frac{|n_w|}{n}c_u\\max \\Big \\lbrace \\sigma _x^2\\sqrt{\\frac{\\log p}{|n_w|}}, \\sigma _x\\frac{\\log p}{|n_w|}\\Big \\rbrace \\right)\\hspace{-28.45274pt}\\nonumber \\\\\\ge 1-c_1\\exp (-c_2\\log p).$ On the other hand, we have by Part (i) of Lemma REF , with 
probability at least $1-c_1\\exp (-c_2\\log p)$ that $\\sup _{d(\\tau _a,\\tau _b)\\le u_n}|n_w|/n\\le c_u\\max \\lbrace \\log p/n , u_n\\rbrace .$ Also, it is straightforward to see that $\\Vert \\delta ^T\\Sigma \\Vert _{\\infty }\\le c_u \\phi ,$ for some constant $c_u>0.$ Thus with the same probability we have the bound, $\\sup _{\\tau \\in {\\cal T}(\\tau _{0n},u_n)}\\frac{|n_w|}{n}\\Vert \\delta ^T\\Sigma \\Vert _{\\infty }\\le c_u\\phi \\max \\Big \\lbrace c_a\\frac{\\log p}{n}, u_n\\Big \\rbrace .$ Again applying Part (i) of Lemma REF we also have the following bound with probability at least $1-c_1\\exp (-c_2\\log p),$ $\\sup _{{\\tau _a,\\tau _b\\in \\bar{{\\mathbb {R}}};\\\\d(\\tau _a,\\tau _b)\\le u_n}}\\frac{|n_w|}{n}\\sqrt{\\frac{\\log p}{|n_w|}}&\\le & c_u\\sqrt{\\frac{\\log p}{n}} \\max \\Big \\lbrace \\sqrt{\\frac{\\log p}{n}}, \\sqrt{u_n}\\Big \\rbrace \\nonumber \\\\&\\le & c_u\\max \\Big \\lbrace \\frac{\\log p}{n}, u_n\\Big \\rbrace .$ The final inequality follows upon noting that if $\\sqrt{\\log p/n}\\sqrt{u_n}\\ge u_n $ then $u_n \\le \\log p/n.$ Finally also note that $\\sup _{d(\\tau _a,\\tau _b)\\le u_n} (|n_w|/n) (\\log p/|n_w|)\\le \\log p/n.$ Part (i) of the lemma follows by combining these results together with the bounds (REF ) and (REF ) in (REF ).", "The proofs of Part (ii) and Part (iii) are similar and are thus omitted.", "Lemma 7.2 (Bounds used in the proof of Lemma REF ): Let $\\hat{\\alpha }_{(j)},$ $j=0,...,\\check{N}$ be the regression estimates obtained from Step 1 of Algorithm 1, ${\\cal T}$ and ${\\cal T}^*$ be as defined in Condition A and (REF ) respectively and let ${\\cal G}:={\\cal G}(u_n,v_n,{\\cal K})$ be as defined in (REF ).", "Then assuming the conditions of Lemma REF , the following bounds hold with probability at least $1-c_1(1\\vee N)\\exp (-c_2\\log p),$ for $n$ sufficiently large.", "$&(i)&\\inf _{\\tau \\in {\\cal G}}\\frac{1}{n\\xi _{\\min }^2}\\sum _{j=1}^N\\sum _{i\\in n^w(\\tau _{h_j},\\tau 
_j^0)}(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\ge \\nonumber \\\\ &&\\hspace{231.26378pt}c_uc_mv_{n}-c_uc_mN\\frac{\\rho ^2s\\log p}{n},\\nonumber \\\\&(ii)&\\inf _{\\tau \\in {\\cal G}}\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l<h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _l}(\\hat{\\alpha }_{(l-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\hat{\\alpha }_{(l-1)}-\\beta ^0_{(j-1)}) \\ge 0,\\nonumber \\\\&(iii)& \\sup _{\\tau \\in {\\cal G}}\\frac{1}{n}\\sum _{j=1}^N\\sum _{i\\in n^w(\\tau _j^0,\\tau _{h_j})}(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)}) \\le \\nonumber \\\\&&\\hspace{231.26378pt}c_uc_mr_n^2 \\max \\Big \\lbrace \\frac{Ns\\log p}{n}, u_{n}\\Big \\rbrace ,\\nonumber \\\\&(iv)& \\sup _{\\tau \\in {\\cal G}}\\frac{1}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l<h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _l}(\\hat{\\alpha }_{(h_{j}-1)}-\\beta ^0_{(j-1)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_{j}-1)}-\\beta ^0_{(j-1)}) \\le c_uc_m|{\\cal K}|r_n^2,\\nonumber \\\\&(v)& \\sup _{\\tau \\in {\\cal G}}\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l<h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _l}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(l-1)}-\\beta ^0_{(j-1)}) \\le c_uc_m |{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n,\\nonumber \\\\&(vi)& \\sup _{\\tau \\in {\\cal G}}\\frac{2}{n}\\sum _{j=1}^N\\sum _{i\\in n^w(\\tau _j^0,\\tau _{h_j})}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(h_{j}-1)}-\\beta ^0_{(j)}) \\le c_uc_m \\xi _{\\max }\\sqrt{\\frac{\\log p}{n}}\\max \\Big \\lbrace N\\sqrt{\\frac{\\log p}{n}}, \\sqrt{Nu_{n}}\\Big \\rbrace ,\\nonumber \\\\&(vii)& \\sup _{\\tau \\in {\\cal G}}\\frac{2}{n}\\sum _{j=1}^N\\sum _{i\\in n^w(\\tau _j^0,\\tau _{h_j})}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(h_{j+1}-1)}-\\beta ^0_{(j)}) \\le c_uc_m r_n\\sqrt{\\frac{\\log p}{n}}\\max \\Big \\lbrace N\\sqrt{\\frac{\\log p}{n}}, \\sqrt{Nu_{n}}\\Big \\rbrace ,\\nonumber \\\\&(viii)& \\sup _{\\tau \\in {\\cal 
G}}\\frac{2}{n}\\sum _{j=1}^{N+1}\\sum _{m_{j-1}<l<h_j}\\sum _{\\tau _{l-1}<w_i<\\tau _l}\\varepsilon _ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j-1)}) \\le c_uc_m|{\\cal K}|\\sqrt{\\frac{s\\log p}{n}}r_n.", "\\nonumber $ [Proof of Lemma REF ] To prove part (i), let $v_{nj}\\ge s\\log p/n,$ $j=1,...,N$ and $v_n=\\sum _{j=1}^N v_{nj}\\ge Ns\\log p/n,$ then by Part (ii) of Lemma REF we have with probability at least $1-c_1\\exp (-c_2\\log p)$ that $\\inf _{\\tau _j;\\, d(\\tau _j,\\tau _j^0)\\ge v_{nj}}\\inf _{\\delta _{(j)}\\in {\\cal A}_2}\\frac{1}{n}\\sum _{i\\in n^w(\\tau _j,\\tau _j^0)}\\delta _{(j)}^Tx_ix_i^T\\delta _{(j)}\\ge c_uc_mv_{nj}\\Vert \\delta _{(j)}\\Vert _2^2-c_uc_m\\frac{\\xi _{\\max }^2s\\log p}{n}.\\nonumber $ Applying this bound for each $j=1,...,N,$ and summing them up, we obtain with probability at least $1-c_1N\\exp (-c_2\\log p),$ $\\frac{1}{n}\\sum _{j=1}^N\\inf _{\\tau _j;\\, d(\\tau _j,\\tau _j^0)\\ge v_{nj}}\\inf _{\\delta _{(j)}\\in {\\cal A}_2}\\sum _{i\\in n^w(\\tau _j,\\tau _j^0)}\\delta _{(j)}^Tx_ix_i^T\\delta _{(j)}\\ge \\hspace{42.67912pt}\\nonumber \\\\ c_uc_mv_{n}\\min _j\\Vert \\delta _{(j)}\\Vert _2^2-c_uc_mN\\frac{\\xi _{\\max }^2s\\log p}{n}.\\hspace{-42.67912pt}$ Now let $\\delta _{(j)}=(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)}).$ By the construction of the indices $h_j$ of the index set ${\\cal T}^*=\\lbrace h_0,h_1,...,h_{N+1}\\rbrace ,$ in (REF ) we have that $m_{j-1}< h_j\\le m_j.$ Consequently, from the proof of Theorem REF we have that $(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j-1)})\\in {\\mathbb {A}},$ $j=1,...,N$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ This in turn implies that $(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\in {\\mathbb {A}}_2,$ $j=1,...,N$ with the same probability.", "Additionally, we have for any $j=1,...,N,$ $\\Vert \\delta _{(j)}\\Vert _2^2&=&\\Vert (\\beta ^0_{(j)}-\\beta ^0_{(j-1)})-(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j-1)})\\Vert _2^2\\ge \\Vert (\\beta ^0_{(j)}-\\beta ^0_{(j-1)})\\Vert _2^2\\nonumber \\\\&&+\\Vert \\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j-1)}\\Vert _2^2-2\\Vert \\beta ^0_{(j)}-\\beta ^0_{(j-1)}\\Vert _2\\Vert \\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j-1)}\\Vert _2\\nonumber \\\\&\\ge & \\xi _{\\min }^2- r_n^2-2\\xi _{\\max }r_n$ with probability at least $1-c_1N\\exp (-c_2\\log p).$ Applying Condition B(iii) we obtain with the same probability that $\\min _{j}\\Vert \\delta _{(j)}\\Vert _2^2\\big /\\xi _{\\min }^2\\ge 1,$ for $n$ sufficiently large.", "Substituting these results back in (REF ) we obtain, $\\frac{1}{n\\xi _{\\min }^2}\\sum _{j=1}^N\\inf _{\\tau _j;\\, d(\\tau _j,\\tau _j^0)\\ge v_{nj}}\\sum _{i\\in n^w(\\tau _j,\\tau _j^0)}(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\ge \\hspace{7.11317pt}\\nonumber \\\\ c_uc_mv_{n}-c_uc_mN\\frac{\\rho ^2s\\log p}{n}.\\hspace{-5.69046pt}$ with probability at least $1-c_1N\\exp (-c_2\\log p),$ and for $n$ sufficiently large.", "Now, recall the collection ${\\cal G}(u_n,v_n,{\\cal K}),$ defined for any $v_n\\ge Ns\\log p/n.$ Note that the sequence $u_n$ and the set ${\\cal K}$ are irrelevant for this bound, and by definition of this set we have that $\\sum _{j=1}^{N}d(\\tau _{h_j},\\tau _j^0)\\ge v_n\\ge c_uNs\\log p/n.$ In the case where $v_{nj}\\ge c_us\\log p/n,$ for each $j=1,...,N,$ clearly, the infimum on the LHS of (REF ) can be directly replaced with an infimum over the collection ${\\cal G}(u_n,v_n,{\\cal K}),$ with the corresponding expressions evaluated at $\\tau _{h_j}$ 's in place of $\\tau _j$ 's.", "This follows since the replacement infimum is over a subset of that in (REF ).", "Next, consider the case where $v_{nj}=o(s\\log p/n)$ for one or more $j$ 's (W.L.O.G.", "assume $j=1$ ).", "Since this component is of smaller order than $v_n,$ we shall still have that $v_{n,-j}:=\\sum _{j\\ne 1} v_{nj}\\ge c_uNs\\log p/n,$ for $n$ large, i.e., the ratio $v_n/v_{n,-j}=O(1).$ Thus by applying all 
above arguments only to those components $j$ for which $v_{nj}\\ge c_us\\log p/n,$ we obtain, $\\frac{1}{n\\xi _{\\min }^2}\\inf _{\\tau \\in {\\cal G}(u_n,v_n,{\\cal K})}\\sum _{j=1}^N\\sum _{i\\in n^w(\\tau _{h_j},\\tau _j^0)}(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})^Tx_ix_i^T(\\hat{\\alpha }_{(h_j-1)}-\\beta ^0_{(j)})\\ge \\hspace{7.11317pt}\\nonumber \\\\ c_uc_mv_{n}-c_uc_mN\\frac{\\rho ^2s\\log p}{n}.\\hspace{-5.69046pt}\\nonumber $ with probability $1-c_1(1\\vee N)\\exp (-c_2\\log p),$ and for $n$ sufficiently large.", "This completes the proof of Part (i) of this lemma.", "The bound $R2\\ge 0$ is trivial since it is a quadratic term.", "The bounds for $R3$ and $R4$ follow directly by an application of Part (iii) of (REF ).", "The bounds for $R6,$ $R7$ and $R8$ can be obtained by an application of Lemma REF .", "This completes the proof of the Lemma.", "Lemma 7.3 Let $\\lbrace X_i\\rbrace _{i=1}^n$ be independent random variables, $EX_i^2<\\infty ,$ $X_i\\ge 0.$ Set $S=\\sum _{i=1}^nX_i$ and let $t>0.$ Then $P\\Big (ES-S\\ge t\\Big )\\le \\exp \\Big (\\frac{-t^2}{2\\sum _{i=1}^nEX_i^2}\\Big )\\nonumber $ This result is as stated in Theorem 1 of Maurer (2003); it provides a lower bound on a sum of positive independent r.v.'s.", "Lemma 7.4 Let $z_i\\in {\\mathbb {R}}^p,$ $i=1,...,n$ be i.i.d. subgaussian random vectors with variance parameter $\\sigma _z^2$ and covariance $\\Sigma _z=Ez_iz_i^T.$ Also, let $\\lambda _{\\min }(\\Sigma _z)$ and $\\lambda _{\\max }(\\Sigma _z)$ be the minimum and maximum eigenvalues of the covariance matrix respectively.", "Then, $&(i)&\\frac{1}{n} \\sum _{i=1}^n \\delta ^Tz_iz_i^T\\delta \\ge \\frac{\\lambda _{\\min }(\\Sigma _z)}{2}\\Vert \\delta \\Vert _2^2- c_u\\lambda _{\\min }(\\Sigma _z)\\max \\Big \\lbrace \\frac{\\sigma _z^4}{\\lambda _{\\min }^2(\\Sigma _z)},1\\Big \\rbrace \\frac{\\log p}{n} \\Vert \\delta \\Vert _1^2,\\quad \\forall \\delta \\in {\\mathbb {R}}^p, \\nonumber \\\\&(ii)& \\frac{1}{n} \\sum _{i=1}^n 
\\delta ^Tz_iz_i^T\\delta \\le \\frac{3\\lambda _{\\max }(\\Sigma _z)}{2}\\Vert \\delta \\Vert _2^2+ c_u\\lambda _{\\min }(\\Sigma _z)\\max \\Big \\lbrace \\frac{\\sigma _z^4}{\\lambda _{\\min }^2(\\Sigma _z)},1\\Big \\rbrace \\frac{\\log p}{n} \\Vert \\delta \\Vert _1^2,\\quad \\forall \\delta \\in {\\mathbb {R}}^p, \\nonumber $ with probability at least $1-c_1\\exp (-c_2\\log p).$ Lemma 7.5 Suppose $X$ and $Y$ are independent random variables.", "Let $\\phi $ be a function with $E|\\phi (X,Y)|<\\infty $ and let $g(x)=E\\phi (x,Y),$ then $E\\big (\\phi (X,Y)|X\\big )=g(X)\\nonumber $ This is an elementary result on conditional expectations and is stated for the reader's convenience.", "A straightforward proof can be found in Example 1.5. page 222, [10].", "Lemma 7.6 If $X\\in {\\mathbb {R}}^{n\\times p_1}$ is a zero mean subgaussian matrix with parameters $(\\Sigma _x,\\sigma _x^2)$ , then for any fixed (unit) vector in $v\\in {\\mathbb {R}}^{p_1},$ we have $(i)\\,\\, P\\Big (\\Big |\\Vert Xv\\Vert _2^2-E\\Vert Xv\\Vert _2^2\\Big |\\ge nt\\Big )\\le \\exp \\Big (-cn\\min \\Big \\lbrace \\frac{t^2}{\\sigma _x^4},\\frac{t}{\\sigma _x^2}\\Big \\rbrace \\Big )\\hspace{62.59605pt}\\nonumber $ Moreover, if $Y\\in {\\mathbb {R}}^{n\\times p_2}$ is a zero mean subgaussian matrix with parameters $(\\Sigma _y,\\sigma _y^2),$ then $(ii)\\,\\,P\\Big (\\Vert \\frac{Y^TX}{n}-{\\rm cov}(y_i,x_i)\\Vert _{\\infty }\\ge t\\Big ) \\le 6p_1p_2\\exp \\Big (-cn \\min \\Big \\lbrace \\frac{t^2}{\\sigma _x^2\\sigma _y^2},\\frac{t}{\\sigma _x\\sigma _y}\\Big \\rbrace \\Big )\\nonumber $ where $x_i,y_i$ are the $i^{th}$ rows of $X$ and $Y$ respectively.", "In particular, if $n\\ge c\\log p,$ then $(iii)\\,\\,P\\Big (\\Vert \\frac{Y^TX}{n}-{\\rm cov}(y_i,x_i)\\Vert _{\\infty }\\ge c\\sigma _x\\sigma _y \\sqrt{\\frac{\\log p}{n}}\\Big ) \\le c_1\\exp (-c_2\\log p).\\hspace{39.83385pt} \\nonumber $ This lemma provides tail bounds on subexponential r.v.", "'s and is as stated in Lemma 14 of 
[33]: supplementary materials.", "The first part of this lemma is a restatement of Proposition 5.16 of [44], and the other two parts are derived via algebraic manipulations of the product under consideration." ] ]
1906.04396
[ [ "Anomaly Detection in High Performance Computers: A Vicinity Perspective" ], [ "Abstract In response to the demand for higher computational power, the number of computing nodes in high performance computers (HPC) increases rapidly.", "Exascale HPC systems are expected to arrive by 2020.", "With the drastic increase in the number of HPC system components, a sharp increase in the number of failures is expected, which, consequently, poses a threat to the continuous operation of the HPC systems.", "Detecting failures as early as possible and, ideally, predicting them, is a necessary step to avoid interruptions in HPC system operation.", "Anomaly detection is a well-known general purpose approach for failure detection in computing systems.", "The majority of existing methods are designed for specific architectures, require adjustments to the computing system's hardware and software, need excessive information, or pose a threat to users' and systems' privacy.", "This work proposes a node failure detection mechanism based on a vicinity-based statistical anomaly detection approach using passively collected and anonymized system log entries.", "Application of the proposed approach on system logs collected over 8 months indicates an anomaly detection precision between 62% and 81%." 
], [ "Introduction", "In response to the demand for higher computational power, the number of components in high performance computers (HPC) rapidly increases [1].", "It is expected that Exascale HPC systems will become available by 2020 [2].", "Besides increasing the quantity of computing resources, achieving high performance is also dependent on the optimized utilization of available resources, as well as on the continuous operation of the HPC system as a whole.", "Over the past decades, scientists proposed various methods and algorithms to adjust the workload on HPC systems to achieve the highest possible performance.", "With the drastic increase in the number of HPC system components, a sharp increase in the number of failures is expected, which consequently poses a threat to the continuous operation of the HPC systems [3].", "Therefore, detecting failures as early as possible and, ideally, predicting them, is a necessary step to avoid interruptions in the continuous operation of HPC systems.", "A failure in general is an (observed) incorrect behavior with respect to the expected behavior.", "Failures can be observed and analyzed at different granularities, from a single transistor to an entire HPC system.", "Nodes are the smallest units in HPC systems that have a fully functional computational software stack and can be added or removed from HPC systems with minimum side-effects on the other computing units.", "Therefore, the granularity of failure detection in this work is set at the node level.", "Heterogeneity of computing nodes, as well as the topology of HPC systems, are further factors that influence the overall system performance.", "Based on node-related factors such as physical location, role in the cluster, computational workload, hardware properties, and so forth, various categorizations of computing nodes are possible.", "Hereafter, the term vicinity is used to refer to nodes that exhibit similar properties such as the 
ones mentioned above.", "The concept of vicinity defines new dimensions of node correlation, beyond the natural temporal and spatial correlations.", "subsec:vicinity provides examples and describes the vicinity of the nodes in more detail.", "An anomaly is an (observed) unexpected behavior with respect to the expected behavior.", "In contrast to failures, unexpected behaviors are not necessarily incorrect behaviors.", "Anomaly detection is a well-known general purpose approach for detecting failures in computing systems [4].", "In HPC systems, system log analysis can be used for anomaly detection for the purpose of preventing failures [5], [6], [7], [8].", "All HPC systems on the current TOP500 [9] list are Linux-based.", "Therefore, they all generate system log (Syslog) [10] messages by default.", "The goal of this work is to detect node failures by analyzing fully anonymized system logs using a vicinity-based statistical anomaly detection approach.", "The use-case of this study is a production Petascale HPC system, called Taurus (https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/SystemTaurus).", "In addition to technical data, system log entries contain sensitive data about the system and its users.", "Therefore, addressing the data privacy [11] concerns is a fundamental requirement of any mechanism that detects anomalies on production HPC systems via system log analysis.", "Anonymizing system log entries prior to Syslog analysis, while retaining useful data, addresses the data privacy concerns [12].", "In this work, the Taurus nodes are first categorized into four different vicinities based on similarities they exhibit in terms of (1) hardware architecture, (2) resource allocation, (3) physical location, and (4) time of failure.", "Then, the anomalies are detected within each vicinity.", "Subsequently, the effectiveness of performing anomaly detection in each vicinity for the purpose of failure detection is compared and discussed.", 
"To assess the usefulness of the proposed method of anomaly detection on anonymized data, a copy of 8 months of Taurus system logs was anonymized via the PaRS [13] anonymization mechanism and the anonymized system logs were used as the input data.", "The main contributions of this work are: (1) proposing a node failure detection mechanism via a vicinity-based statistical anomaly detection approach that employs passive data collection, as well as (2) analyzing the effectiveness of the anomaly detection method in various vicinities using 8 months of Taurus HPC cluster system logs.", "In addition, (3) to the best of our knowledge, this is the first work on anomaly detection that is capable of utilizing both original and anonymized Syslog entries with a high degree of equivalence between the analysis results.", "The remainder of this work is organized as follows.", "The node vicinities, data sources, and anonymization method are introduced in Subsections REF , REF , and REF respectively.", "The method to identify node failures employed in this work is described in sec:failures.", "The proposed anomaly detection method is described in detail in sec:anomaly.", "The impact of anomaly detection in different node vicinities is analyzed in sec:comparison.", "The background and current related works are introduced in sec:relatedwork.", "Finally, the work is concluded and future work directions are discussed in sec:conclusion." ], [ "Proposed Anomaly Detection Methodology", "Taurus is an HPC cluster composed of 2,046 computing nodes.", "The computing nodes are organized in 6 islands based on their hardware properties (detailed hardware information of Taurus: https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/HardwareTaurus).", "fig:hardware provides a schematic illustration of the Taurus node topology.", "Each letter represents a single computing node.", "Nodes with identical colors are of identical hardware (processing unit) architecture."
], [ "Vicinities", "Computing nodes with similar characteristics are considered to be in the vicinity of each other.", "Node characteristics include any physical, spatial, temporal, or logical properties of the computing nodes.", "A group of computing nodes located in the same rack, performing different tasks of the same job, or sharing a common resource (e.g., file system, power supply unit), are all examples of nodes in the vicinity of each other.", "The concept of vicinity defines new dimensions of node correlation, beyond the natural temporal and spatial correlations.", "Each vicinity can be imagined as a new dimension in which two separated entities (nodes) become correlated.", "For example, points $A:(1,10)$ and $B:(4,6)$ in a 2D Cartesian representation are separated by a distance of $4-1=3$ on the $X$ axis and $10-6=4$ on the $Y$ axis, respectively.", "Defining a new dimension $Z$ according to a common (but so far unseen) feature of $A$ and $B$ would result in a 3D representation of $A:(1,10,5)$ and $B:(4,6,5)$ .", "Here '5' denotes that common feature.", "In the new 3D representation, even though $A$ and $B$ are still separated on $X$ and $Y$ , their distance on the $Z$ dimension will be $5-5=0$ .", "In other words, $A$ and $B$ will be in the vicinity of each other from the $Z$ axis perspective.", "In this work, node vicinities are observed from four different perspectives: $(1)~$hardware architecture, $(2)~$resource allocation, $(3)~$physical location, and $(4)~$time of failure.", "The first perspective denotes a node vicinity according to the node's physical properties, the second perspective emphasizes the node's logical properties, while the third and fourth perspectives denote spatial and temporal properties, respectively.", "All other correlations among nodes can be mapped onto these four proposed vicinities, e.g., nodes connected to a single switch can be mapped onto the physical location vicinity.", "In Subsections REF , REF , REF , and REF 
these four vicinities are explained in more detail, based on the Taurus architecture.", "The node vicinities are intended to mitigate the major characteristic differences between nodes.", "Therefore, in cases where several parameters influence a certain node's characteristic, the most dominant parameter is considered to identify the node's vicinity.", "All nodes in island 2, besides their Sandy Bridge or Haswell CPUs, are equipped with graphics processing units (GPUs).", "Since the majority of jobs submitted to island 2 mainly utilize GPUs rather than CPUs, GPUs are considered the dominant processing units of these nodes.", "Therefore, in this work, island 2 is considered a homogeneous GPU island, despite the heterogeneity of its nodes' CPUs.", "It is important to emphasize that in the context of this work, two nodes in the vicinity of each other are not necessarily physically co-located.", "In fact, they may even belong to physically separated partitions of the HPC system.", "Computing nodes on Taurus may have one of 4 different processor architectures: Intel Haswell, Broadwell, Sandy Bridge, and Westmere.", "108 nodes with Sandy Bridge and Haswell processors are also equipped with GPUs (NVIDIA Tesla K20X and K80).", "According to their hardware architectures, the 2,046 computing nodes on Taurus can be divided into 5 categories.", "The node's dominant processor architecture and the number of nodes in each architecture category are shown in tab:hardware.", "A schematic illustration of the Taurus topology, including each node's hardware architecture type, is provided in fig:hardware.", "Nodes with identical colors in fig:hardware are in the vicinity of each other from the hardware architecture perspective.", "Table: Hardware architecture of Taurus computing nodes.", "Figure: Schematic island and node topology of Taurus.", "Node colors represent the dominant processing unit type of a node.", "Thick border lines indicate the 6 islands of Taurus."
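The hardware architecture vicinity amounts to grouping nodes by their dominant processing unit. A minimal Python sketch of this grouping (the node IDs and the architecture map below are hypothetical placeholders, not actual Taurus inventory data):

```python
from collections import defaultdict

# Hypothetical node-to-architecture map; on Taurus this information would
# come from the cluster inventory. Island 2 nodes are labeled "gpu" since
# GPUs are their dominant processing unit, despite heterogeneous CPUs.
node_arch = {
    "taurusi1001": "sandy_bridge",
    "taurusi1002": "sandy_bridge",
    "taurusi2001": "gpu",
    "taurusi3001": "westmere",
    "taurusi5001": "haswell",
    "taurusi6001": "haswell",
}

def hardware_vicinities(node_arch):
    """Group node IDs into vicinities keyed by dominant processing unit."""
    vicinities = defaultdict(list)
    for node, arch in node_arch.items():
        vicinities[arch].append(node)
    return dict(vicinities)

# Nodes of islands 5 and 6 fall into one Haswell vicinity, matching the
# observation that their behavior patterns are almost identical.
print(hardware_vicinities(node_arch)["haswell"])  # ['taurusi5001', 'taurusi6001']
```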
], [ "Resource Allocation Vicinity", "Taurus employs Slurm [14] for job scheduling.", "The submitted jobs are allocated resources based on direct requests of the users, system policies, and the status of available resources.", "All nodes that execute tasks of the same job are in the vicinity of each other from the resource allocation perspective.", "In contrast to the static nature of the hardware architecture vicinity perspective, the resource allocation vicinity is fully dynamic and may change frequently as the running jobs are completed and new jobs are submitted to the cluster." ], [ "Physical Location Vicinity", "Various granularities can be used to express the physical location of a node in Taurus, e.g., chassis, rack, or island.", "Since the power, temperature, and connectivity of all nodes located in a single rack are controlled together, this work considers racks as the physical location granularity.", "Each row of an island shown in fig:hardware represents one rack of nodes.", "All nodes located in the same rack are in the vicinity of each other from the physical location perspective." 
], [ "Time of Failure Vicinity", "Often, a failure is a consequence of node-level events on, and properties of, several nodes.", "However, a failure in itself is observable on a particular node at a specific moment.", "Therefore, the time of failure is considered a temporal property of that particular node, even though several nodes may fail for the same reason.", "From this perspective, all nodes that experience a failure within the same predefined time interval fall into the same vicinity category.", "In this work, the time of failure interval considered is 10 minutes.", "The 10-minute time interval is chosen according to the results of a previous study on Taurus failure correlations [8].", "That study revealed that the majority of correlated failures on Taurus occurred within 10 minutes of each other.", "Therefore, failures that occur across the entire system within 10 minutes of each other are assumed to be in the same temporal vicinity.", "Thus, the nodes on which such failures occur are in the vicinity of each other from the time of failure perspective."
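Grouping failures into 10-minute time of failure vicinities can be sketched as follows; the failure records are illustrative, not real Taurus data, and anchoring each window at the first failure of a group is one plausible implementation choice:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # interval taken from the study cited above

# Illustrative (node, failure time) records.
failures = [
    ("node_a", datetime(2017, 3, 1, 12, 0)),
    ("node_b", datetime(2017, 3, 1, 12, 4)),
    ("node_c", datetime(2017, 3, 1, 12, 9)),
    ("node_d", datetime(2017, 3, 1, 15, 30)),
]

def time_of_failure_vicinities(failures, window=WINDOW):
    """Group failures whose timestamps fall within `window` of the first
    failure of the current group; each group is one temporal vicinity."""
    groups = []
    for node, t in sorted(failures, key=lambda f: f[1]):
        if groups and t - groups[-1][0][1] <= window:
            groups[-1].append((node, t))
        else:
            groups.append([(node, t)])
    return [[node for node, _ in g] for g in groups]

print(time_of_failure_vicinities(failures))
# [['node_a', 'node_b', 'node_c'], ['node_d']]
```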
], [ "Data Sources", "System logs represent the main data source in this work.", "Syslogs of Taurus were collected for a period of one year from January to December 2017.", "Syslog daemons on each node collected the system log entries and forwarded them to a central log collector.", "To maintain the neutrality of the results and to provide a general approach, the pre-configured Syslog daemons were used without any changes, except for being configured to forward Syslog entries to the central log collector.", "In addition to Syslog entries, three other data sources were considered to improve the accuracy of the failure identification method: outage database, service notifications, and job status reports.", "The outage database reports all system outages from the users' perspective, the service notifications notify users regarding future scheduled maintenance and system-wide outages, and the job status reports indicate the final status of a job after its execution.", "tab:data-sources provides an overview of the four data sources used in this work.", "Table: Data sources.", "There are certain data gaps in the four data sources used in this work.", "The gaps are mainly incurred due to the interruption of the data collection mechanism.", "The Syslog entries cover the full period of one year from January to December 2017.", "The job status reports generated by Slurm shown in fig:slurm-failure cover the period of 28-02-2017 to 14-11-2017.", "The service notifications and the outage database provide information from a higher perspective and are available for the whole period of one year from January to December 2017.", "Given these available data, the focus of this work is on the period of 01-03-2017 to 31-10-2017.", "The existence of certain gaps in data sources is reportedly a common challenge in similar studies [15].", "Figure: Slurm job status report on all islands of Taurus for the year 2017.", "Intervals of unavailability of job reports do not necessarily specify 
node outages.", "The red dots indicate jobs reported as \"failed\" concurrently with a node failure." ], [ "Anonymization of System Logs", "System log entries contain sensitive data about the system and its users.", "Therefore, addressing data privacy concerns is a fundamental requirement of any mechanism that performs system log analysis.", "The anonymization of system log entries, prior to performing Syslog analysis, addresses the data privacy concerns.", "PaRS [13] is an anonymization mechanism that provides full anonymization through hashing the message part of Syslog entries.", "PaRS substitutes all variables and sensitive terms in each log entry with constant values and maps each substituted entry to a hash key.", "Through anonymization, the partial semantics of log entries required for anomaly detection via Syslog analysis is preserved.", "The rightmost column in tab:anonymization contains the anonymized Syslog entries which correspond to the raw log entries shown in the middle column (prior to anonymization).", "As shown in tab:anonymization, an identical hash key is generated for entries with similar raw log semantics.", "Table: Sample Syslog entries in their original and anonymized form" ], [ "Taurus Node Failures", "To assess the proposed method's functionality, all Taurus node failures must be known.", "Due to various technical reasons, the complete list of all node failures on Taurus for the period of this work is not available.", "This step aims to provide a complete list of Taurus node failures as the ground truth for further analysis and comparisons in sec:comparison.", "Node failures in computing systems can be divided into two main categories according to their root causes: (1) Failures that occur during the normal operation of the HPC system caused by internal factors, such as software and hardware errors or race conditions (i.e., regular failures).", "(2) Failures that occur due to external factors, such as power outages and human errors.",
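The substitute-then-hash idea behind PaRS can be illustrated with a minimal sketch; the substitution patterns and hash function below are simplifying assumptions for illustration only, not the actual PaRS rules:

```python
import hashlib
import re

# Assumed substitution rules: replace variable fields (usernames after
# "user", numeric values) with constant placeholders. PaRS applies a far
# richer rule set; these two patterns only demonstrate the principle.
PATTERNS = [
    (re.compile(r"user \S+"), "user USR"),
    (re.compile(r"\b\d+(\.\d+)*\b"), "NUM"),
]

def anonymize(message):
    """Map a Syslog message to a hash key of its de-identified template."""
    template = message
    for pattern, placeholder in PATTERNS:
        template = pattern.sub(placeholder, template)
    # Entries with the same raw semantics collapse to the same template,
    # hence to an identical hash key.
    return hashlib.sha256(template.encode()).hexdigest()[:12]

a = anonymize("session opened for user alice by pid 4221")
b = anonymize("session opened for user bob by pid 7015")
print(a == b)  # True: same semantics, same hash key
```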
"Analyzing the impact of the external causes of node failures requires additional information regarding external factors that are not available, e.g., detailed information about the behavior of the HPC system power supply.", "Therefore, the focus in this work is on the first group of node failures, namely regular failures, typically caused by internal factors.", "The next step to identify such failures is to detect node outages and to distinguish regular failures from those which may occur as a result of external factors, such as maintenance, human errors, and others.", "The failure detection workflow in Taurus is shown in fig:workflow.", "Computing nodes of Taurus generate and send Syslog entries to a central log collector which stores them for future analysis.", "This passive log collection mechanism was chosen because it imposes no additional overhead and is applicable to other HPC systems.", "However, the failure identification process becomes more challenging in comparison to the use of active log collection mechanisms.", "Because of the passive log collection mechanism, a node outage can be confidently detected only when a direct indication in the form of a log entry is generated by the failing node and correctly received and stored by the central log collector, e.g., \"Kernel panic - not syncing: Fatal Exception.\"", "However, in many cases, a node outage leaves no direct indication in system logs.", "A workaround is to assume the absence of log entries for more than a certain period of time as an indication of a potential outage.", "Nonetheless, this assumption is not accurate.", "For various reasons such as CPU overloading or network congestion, the flow of system log entries from the computing nodes to the central log collector may be interrupted or delayed, which could be mistaken for an outage, even though the computing nodes are functional.", "Also, in many cases, immediately after the occurrence of an outage, the protection mechanisms recover the 
node.", "In both of the latter scenarios (temporary interruption in data flow and automatic node recovery), an active node probing approach may also fail to correctly detect all node outages.", "Analyzing Taurus system logs revealed that during a healthy boot all nodes leave similar footprints in Syslog entries.", "When a node fails to generate the expected footprint at boot time, it is an indication of a faulty boot process and thus the node will be either automatically rebooted or it will fail shortly thereafter.", "The higher frequency of log generation at boot time in comparison with the normal operation is another indicator of a boot event, which can be used to identify even a problematic boot process.", "The proposed node outage detection method in this work first detects the node boot events.", "Afterward, Syslog entries are backtracked until the last Syslog entry before the boot event is found.", "The timestamp of the last Syslog entry prior to the boot event is considered the point of the outage.", "All node outages will be identified using the proposed method.", "The only exception is when a node fails and has no further successful boot process.", "In such cases, comparing the timestamp of the last available Syslog entry with the current time (i.e., 31-10-2017 23:59:59) reveals the missing node outages.", "fig:syslog-events illustrates all detected node outages on Taurus over the course of 2017.", "Figure: Node outages detected via Syslog analysis.", "Unexpected events (shown in red) indicate the absence of information in system logs which may be a sign of node crashes.", "Expected events indicate node outages that are planned due to maintenance or intentionally caused by the system protection mechanisms in place.", "The detected outages are compared against the information from other available data sources (mentioned in tab:data-sources).", "When a node outage occurs outside of the scheduled maintenance period and no job could be accomplished on that particular 
node at the time of the detected outage, the detected outage represents a regular failure.", "As fig:slurm-failure illustrates, it is common that certain jobs on a specific node fail, although other jobs on the same node are accomplished simultaneously.", "Also, when a node outage is recorded in the outages database that monitors the availability of the HPC system from the users' perspective, it is considered a regular failure.", "Figure: Failure detection workflow via system logs.", "In case of contradicting results, the incident is not considered a failure." ], [ "Anomaly Detection", "The common behavior of most nodes within a node vicinity is considered normal behavior in that vicinity.", "A node's behavior is defined as the Syslog Generation frequency of the node (hereafter SG).", "The SG parameter is dynamically calculated based on the number of Syslog entries received from a computing node during a fixed time window (e.g., 30 minutes) prior to the current (observation) moment.", "The SG parameter of each node is compared against the SG of other nodes in the same vicinity.", "Based on these comparisons, the normal value of the SG parameter of certain nodes at a given moment in time is calculated.", "Once the deviation of a node's SG parameter from the normal behavior exceeds a certain threshold, the node's behavior is considered abnormal.", "The deviation threshold is dynamically calculated within each vicinity (a sample Python code demonstrating the calculation of dynamic thresholds via k-means is available at ghiasvand.net/u/param).", "To calculate the deviation threshold, all nodes within a vicinity (i.e., one row of fig:behaviorsample) are partitioned into 2 clusters based on their SG parameter via a clustering method such as K-Means.", "The deviation threshold is the relative density of the resulting clusters, which is calculated as the sum of squared distances of samples to their closest cluster center (also known as the within-cluster sum of squares).",
"Figure: Detection of failures (shown in orange) of node 1110 via the proposed failure detection mechanism.", "Cells colored in light blue indicate non-responsive nodes.", "Figure: Application of the proposed failure detection mechanism during normal behavior of node 1110.", "Detected failures are shown in orange.", "Cells colored in light blue indicate non-responsive nodes.", "fig:behaviorsample illustrates the behavior of node 1110 and 8 other neighboring nodes (physically located next to each other) prior to a point in time when 11 failures occurred on node 1110 in the year 2017.", "fig:behaviorrandom, on the other hand, illustrates the behavior of the same nodes prior to 11 random points in time at which node 1110 functioned normally.", "In fig:behaviorsample and fig:behaviorrandom, the timestamp at the beginning of each row represents the observation moment in the year 2017.", "The value of each cell represents the SG parameter of the respective node (column's header) within a time interval of 30 minutes prior to the observation moment.", "Cells with abnormal behavior are shown in orange.", "The cell coloring in each row is relative to the value of other cells in that particular vicinity (row).", "According to fig:behaviorsample, node 1110 experienced 11 failures in 2017.", "For 7 out of the 11 failures illustrated in fig:behaviorsample, the deviation of the SG parameter correctly identifies the abnormal behavior of node 1110.", "In the example provided in fig:behaviorsample, the SG parameters were obtained for the nodes' physical location vicinity.", "The same comparisons were made within other node vicinities.", "In sec:comparison, the effectiveness of the proposed anomaly detection method in each node vicinity is discussed."
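The threshold computation described above can be sketched with a tiny one-dimensional 2-means in place of a library k-means; the SG values are illustrative, and the final flagging rule (squared deviation from the dominant cluster's center exceeding the within-cluster sum of squares) is one plausible reading of the text, not the exact implementation referenced at ghiasvand.net/u/param:

```python
def two_means_1d(values, iters=20):
    """1-D k-means with k=2; returns cluster centers and point labels."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for k in (0, 1):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

def flag_abnormal(sg_values):
    """Flag nodes whose SG deviates from the vicinity's dominant cluster
    by more than the dynamic threshold (within-cluster sum of squares)."""
    centers, labels = two_means_1d(sg_values)
    threshold = sum((v - centers[l]) ** 2 for v, l in zip(sg_values, labels))
    major = 0 if labels.count(0) >= labels.count(1) else 1
    return [i for i, v in enumerate(sg_values)
            if (v - centers[major]) ** 2 > threshold]

# SG values of nine neighboring nodes during one 30-minute window (one
# row of the heat map); the fourth node has almost stopped logging.
print(flag_abnormal([113, 120, 108, 2, 117, 111, 109, 119, 115]))  # [3]
```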
], [ "Impact of Vicinities on Anomaly Detection", "The proposed anomaly detection method was applied to 8 months of Taurus system logs.", "Regular failures were detected based on the node vicinities introduced in subsec:vicinity.", "The proposed method is applicable to hardware architecture, resource allocation, and physical location vicinities for the purpose of online anomaly detection.", "However, the time of failure vicinity can only be used as an offline approach to analyze the nodes' behavior after each failure occurrence.", "Results of the proposed method's application on Taurus system logs were compared against the set of regular failures detected in sec:failures.", "The following subsections describe the impact of performing the proposed anomaly detection method in each vicinity in more detail." ], [ "Impact of Hardware Architecture Vicinity", "Taurus nodes are located in 6 islands.", "As shown in fig:hardware, island 4 hosts nodes with different processor types, while islands 1, 2, 3, 5, and 6 are homogeneous.", "Although the nodes' hardware architecture influences the job allocation, as fig:slurm-failure illustrates, there is no noticeable difference among job allocation patterns on various Taurus islands.", "However, as shown in fig:syslog-events, with the exception of islands 5 and 6 (which comprise identical processor types), the node outages have different distribution patterns on each island.", "fig:syslog-pattern-comparision illustrates a one-to-one comparison of Syslog generation patterns in all 6 islands of Taurus.", "This figure visualizes the temporal and spatial patterns among more than 46K, 82K, 45K, 968K, 940K, and 1M Syslog entries generated by islands 1 to 6, respectively.", "Islands 5 and 6 present an almost identical pattern, which is also very similar to island 4.", "In contrast, each of islands 1, 2, and 3 exhibits a distinct system log generation pattern.", "Figure: Syslog generation patterns of different 
islands.", "Each sub-diagram is vertically divided into two sections.", "Each section illustrates the Syslog generation pattern of 100 nodes of the respective island during 24 hours.", "For example, sub-diagram (e) illustrates the Syslog generation pattern of island 1 (bottom) versus island 6 (top).", "The comparison shown in fig:syslog-pattern-comparision indicates that the processor architecture has a direct impact on node behavior.", "Therefore, the behavior of nodes in island 1 (Sandy Bridge) should not be predicted based on the behavior of nodes in island 3 (Westmere), while a similar behavior is expected from nodes in island 5 (Haswell) and island 6 (Haswell).", "No additional patterns were detected when conducting a similar analysis based on the amount of nodes' physical memory.", "The use of the proposed anomaly detection method on nodes with different hardware architectures proved to be virtually impossible.", "Detecting anomalies within the hardware architecture vicinity on Taurus also revealed several false positives.", "It is intended to improve the accuracy of the proposed statistical anomaly detection method by identifying the most relevant vicinity perspective among the four vicinities considered in this work.", "The proposed method detects anomalies by analyzing fully anonymized system logs.", "To the best of our knowledge, there is no similar approach for detecting anomalies using fully anonymized system logs.", "Therefore, a quantitative comparison cannot be conducted.", "However, tab:discussion shows a qualitative comparison of the proposed method's accuracy inside and outside of each vicinity."
], [ "Impact of Resource Allocation Vicinity", "fig:slurm-failure illustrates jobs that failed due to node failures, as reported by Slurm.", "Several jobs which were allocated on multiple nodes were terminated due to node failures during the 8-month period considered in this work.", "However, except for one incident in which a misbehaving job was causing the node failures, no direct correlation between the node failures and resource allocation was found.", "Therefore, among a group of computing nodes executing various tasks of the same job, as long as the running job is not causing a node failure itself, the probability of a second node failure occurrence is not higher than on the rest of the cluster.", "It is important to emphasize that the globally shared resources are not included in this analysis and conclusion.", "The abnormal behavior of globally shared resources such as the distributed file system may cause node failures itself.", "Globally shared resources are, however, unique entities in the cluster.", "Thus, directly including them in anomaly detection will not bring any additional benefit.", "Globally shared resources are, in fact, indirectly considered in anomaly detection as correlations between nodes.", "The use of the proposed anomaly detection method on resource allocation vicinity did not improve the accuracy of anomaly detection.", "However, the low number of failed jobs allocated on multiple nodes prevents drawing a robust conclusion." 
], [ "Impact of Physical Location Vicinity", "Computing nodes are located inside HPC systems according to their function and architecture.", "Therefore, the physical location of the nodes is not an independent factor.", "The physical location of each computing node on Taurus can be uniquely addressed by its island, rack and chassis number.", "Analyzing the distribution of failures on Taurus, as shown in Figures REF and REF , reveals a strong correlation among failures in each rack.", "Therefore, in comparison with the rest of the cluster, a second failure is more likely to be observed in a rack that has already experienced a node failure.", "Out of the 2,046 computing nodes on Taurus, about $20\%$ of nodes never reported a node failure, and $60\%$ of computing nodes experienced 10 or fewer node failures during the time period considered in this work.", "The remaining $20\%$ of computing nodes are responsible for more than $70\%$ of all node failures.", "As shown in fig:job-failure, the probability of a future node failure is proportional to the number of previous node failures.", "Figure: Taurus node failures in the year 2017, nodes sorted in increasing order of the number of failures.", "The exponential increase in the number of failures on certain nodes indicates a higher probability of subsequent failures.", "Application of the proposed anomaly detection method on the physical location vicinity provided the most accurate results among the three vicinities of hardware architecture, resource allocation, and physical location.", "The higher accuracy and the static nature of the physical location vicinity make it a good candidate for the application of the proposed anomaly detection method." 
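The skew reported above (a fifth of the nodes accounting for over 70% of the failures) is easy to check from per-node failure counts; the counts below are made-up toy values, not Taurus data:

```python
def top_share(failure_counts, fraction):
    """Fraction of all failures contributed by the `fraction` of nodes
    with the most failures."""
    counts = sorted(failure_counts, reverse=True)
    k = max(1, int(len(counts) * fraction))
    total = sum(counts)
    return sum(counts[:k]) / total if total else 0.0

counts = [40, 30, 5, 3, 2, 0, 0, 0, 0, 0]  # 10 toy nodes
print(top_share(counts, 0.2))  # 0.875: top 20% of nodes cause 87.5% of failures
```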
], [ "Impact of Time of Failure Vicinity", "The time of failure vicinity includes all computing nodes which experience a node failure within a predefined time period (e.g., 10 minutes) regardless of their physical location.", "Therefore, the set of nodes in this vicinity must be updated regularly and, unlike the other vicinities, it can be used only for offline behavior analysis after a node failure.", "However, the results of offline behavior analysis can be used in online anomaly detection.", "Figure: Jobs reported as \"failed\" on Taurus in 2017 at the time of node failures.", "fig:job-failure-time illustrates job failures on Taurus.", "Each red circle represents a failure that occurred on a node on a specific date.", "The horizontal concentration of red circles indicates several failures on a node, while the vertical concentration of circles represents simultaneous failures on several nodes.", "In several cases, a temporal correlation among node failures is observable.", "However, in most cases, the temporal correlation is supported by an even stronger spatial correlation (horizontal concentration of points in fig:syslog-events and fig:job-failure-time).", "Application of the proposed anomaly detection method on the time of failure vicinity provided different results according to the cause of failures.", "When failures were caused by misbehaving globally shared resources, such as the distributed file system, the results were accurate and reliable.", "However, for other causes of failures, such as misuse of local resources or sudden network interruptions, a high number of false positives hindered successful anomaly detection." 
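A minimal sketch of how a time of failure vicinity could be formed from failure records, grouping failures that occur within a fixed window of each other (the window length and record format are illustrative assumptions):

```python
def time_vicinities(failures, window=600):
    """Group (node, unix_timestamp) failure records: consecutive failures
    closer than `window` seconds share a time-of-failure vicinity."""
    groups, current = [], []
    for node, ts in sorted(failures, key=lambda f: f[1]):
        if current and ts - current[-1][1] > window:
            groups.append(current)
            current = []
        current.append((node, ts))
    if current:
        groups.append(current)
    return groups

failures = [("n1", 0), ("n2", 120), ("n3", 5000)]
print(len(time_vicinities(failures)))  # 2: n1 and n2 fail together, n3 alone
```

Because the grouping depends on when failures happen, the vicinity can only be materialized after the fact, matching the offline-analysis restriction noted above.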
], [ "Results and Discussions", "tab:discussion summarizes the outcome of Subsections REF , REF , REF , and REF in which the efficiency of anomaly detection was studied in various vicinities.", "According to these preliminary results, the impact of failure detection via the proposed method inside the resource allocation and time of failure vicinities of Taurus is negligible and should thus be avoided.", "Failure detection inside the physical location vicinity, on the other hand, has a high impact on the accuracy of the final results.", "It also became evident that the application of the proposed method outside the hardware architecture vicinity of nodes can significantly degrade the accuracy of the failure detection mechanism.", "Table: Accuracy of the proposed failure detection method in different vicinities", "Therefore, based on the preliminary results shown in tab:discussion, the proposed method was applied to the physical location vicinities within each hardware architecture vicinity.", "With the exception of one rack in island 4, as shown in fig:hardware, this vicinity on Taurus is practically a single rack with homogeneous computing nodes.", "Table: Results of applying the proposed failure detection mechanism on Taurus Syslog entries", "tab:results summarizes the final results of applying the proposed failure detection mechanism on Taurus system logs.", "Using the original Syslog entries as the input data, the majority of failures were detected with a precision of $62\%$ .", "Since the proposed method only considers the frequency of log entries rather than the content of each log entry, a similar output can be achieved by using the anonymized Syslog entries without endangering the users' privacy.", "The PaRS anonymization method, used in this work, preserves the similarity of Syslog entries such that the frequency of each type of entry can be precisely calculated.", "Filtering out frequent log entries from the input data further improves the precision, with a $6\%$ 
penalty on recall.", "Filtering frequent entries from anonymized system logs also improves the precision of the failure detection mechanism, although the recall factor is reduced to $75\%$ .", "The difference between the results of filtered Syslog entries and filtered anonymized Syslog entries lies in the filtering approach.", "Since the content of anonymized system logs cannot be read, frequent log entries are detected based on their hash key and time-stamp.", "Therefore, some log entries have been mistakenly filtered out.", "Filtering frequent log entries before data anonymization requires further analysis of the original system logs and may endanger user privacy.", "The majority of the undetected anomalies (false negatives) are related to sudden failures that do not leave large footprints in system logs (e.g., power outage).", "Several false negatives were also caused by calculating the wrong threshold for the SG parameter (e.g.", "shortly after major system updates).", "Fine-tuning the threshold calculator (via comparing similar vicinities) improves the recall rate by about $4\%$ ." 
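For reference, the precision and recall figures quoted throughout this section follow the usual definitions; a small sketch with made-up detection counts (not the Taurus results):

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp), recall = tp/(tp+fn), computed from raw counts
    of true positives, false positives and false negatives."""
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=62, fp=38, fn=8)
print(round(p, 2), round(r, 2))  # 0.62 0.89
```

Filtering frequent entries trades these quantities off: it removes false positives (raising precision) but can also remove informative entries (lowering recall).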
], [ "Related Work", "Failures in traditional high-performance computing systems were either completely ignored, avoided, or at most addressed in the middleware layer [16].", "The advances in the size and complexity of HPC systems demanded more flexible approaches to failures, since failures are the norm rather than the exception [17].", "Checkpoint/restart and redundancy became the de facto methods to protect HPC systems against node failures.", "However, both methods impose performance and energy consumption penalties.", "Knowing the point of failure and adapting the checkpointing period or the redundancy level accordingly will increase performance and reduce energy consumption significantly [18].", "Despite the existence of failure protection mechanisms, the ever-decreasing mean time between failures remains a key challenge for Exascale computing systems [19].", "Considering the high number of components in modern HPC systems and the strong correlations between node failures [15], [8], several studies investigate behavioral analysis to predict failures via anomaly detection.", "Both supervised [20] and unsupervised [21] approaches were proposed.", "Additionally, several tools were designed to assist in the detection of node failure correlations [22], [23].", "Many of the proposed approaches, such as proactive failure detection [24], are limited to certain components of the system.", "More general approaches, such as a framework for detecting anomalous messages [25], require access to the full text of the original Syslog entries.", "All the above-mentioned works are based on system log analysis.", "The huge volume of system logs generated by HPC systems demands more automated approaches that take advantage of big data analysis [26] and machine learning algorithms.", "Algorithms, models and tools such as those proposed by Anwesha [27], Zhang [28], Gupta [29], and Du [30] are a few examples of using deep learning for predicting failures in HPC systems via Syslog analysis.", 
"The drawback of all application-dependent approaches is the overhead incurred on the HPC system.", "High levels of redundancy and special-purpose architectures are also not general solutions for existing and operational HPC systems.", "In addition, the available failure prediction methods either use models configured for a specific HPC system or require detailed information about all events occurring on the HPC system, which endangers users' privacy.", "A working solution should be applicable to existing operational HPC systems and should not incur extensive overhead on the underlying system.", "Furthermore, to the best of our knowledge, none of the existing approaches consider users' privacy protection as a fundamental principle in their analysis.", "The proposed approach in this work employs a statistical Syslog analysis to detect anomalies in existing and operational HPC systems.", "In contrast to other approaches, the proposed approach protects users' privacy by design.", "Since the input data (system logs) are passively collected, always anonymized, and no modification of the original HPC system is required, the proposed approach is applicable to virtually all existing and operational HPC systems.", "Furthermore, system logs can be replaced by any other persistent source of information that reflects the HPC system status." 
], [ "Conclusion and Future Work", "Given that failures are becoming an integral part of HPC systems, employing mechanisms that detect failures as early as possible significantly reduces the recovery costs and the interruptions in the normal operation of HPC systems.", "In this work, a node failure detection mechanism via anomaly detection on system logs was proposed, which calculates each node's system log generation frequency (the SG parameter) during a fixed time interval and compares this parameter against those of other nodes in the vicinity.", "The concept of vicinity defines new dimensions of node correlation, beyond the natural temporal and spatial correlations.", "Abnormal behavior, and thus a node failure, is detected when a node's SG parameter deviates beyond a threshold from the SG parameter of the majority of the nodes.", "Eight months of system logs collected from the Taurus HPC cluster were used as the main data source.", "The proposed failure detection mechanism was applied to Taurus system logs in four different vicinities.", "According to the results, the most effective vicinities were identified as physical location and hardware architecture.", "Finally, the proposed mechanism was applied to each rack of computing nodes with similar hardware architecture within the Taurus HPC cluster.", "Using either the original or the fully anonymized system log entries, a node failure detection precision of $62\%$ could be reached.", "Filtering out frequent common Syslog entries improved the precision of node failure detection to $87\%$ .", "It has also been shown that the proposed mechanism could detect $75\%$ of node failures with more than $80\%$ precision via analyzing filtered anonymized Syslog entries.", "For future work, it is planned to dynamically adjust the interval of the sliding time window which calculates the SG parameter of each node, according to the feedback received from the time of failure vicinity.", "Also, at the moment, the detection occurs rather close to the 
time of failure, which could be improved by better filtering of the input data.", "Furthermore, the resource allocation vicinity will be further studied by analyzing information about jobs that are executed across multiple nodes.", "Performing similar analyses on publicly available system logs, such as those from the Failure Trace Archive (FTA, fta.scem.uws.edu.au/index.php?n=Main.DataSets) or the Computer Failure Data Repository (CFDR, www.usenix.org/cfdr), as well as on other types of monitoring data, such as node power consumption, is also planned as future work." ], [ "Acknowledgment", "This work is in part supported by the German Research Foundation (DFG) within the Cluster of Excellence `Center for Advancing Electronics Dresden (cfaed)', and by Eucor – The European Campus, within the Seed Money project `Data Analysis for Improving High Performance Computing Operations and Research.'" ] ]
1906.04550
[ [ "Ultra Fast Medoid Identification via Correlated Sequential Halving" ], [ "Abstract The medoid of a set of n points is the point in the set that minimizes the sum of distances to other points.", "It can be determined exactly in O(n^2) time by computing the distances between all pairs of points.", "Previous works show that one can significantly reduce the number of distance computations needed by adaptively querying distances.", "The resulting randomized algorithm is obtained by a direct conversion of the computation problem to a multi-armed bandit statistical inference problem.", "In this work, we show that we can better exploit the structure of the underlying computation problem by modifying the traditional bandit sampling strategy and using it in conjunction with a suitably chosen multi-armed bandit algorithm.", "Four to five orders of magnitude gains over exact computation are obtained on real data, in terms of both number of distance computations needed and wall clock time.", "Theoretical results are obtained to quantify such gains in terms of data parameters.", "Our code is publicly available online at https://github.com/TavorB/Correlated-Sequential-Halving." 
], [ "Introduction", "In large datasets, one often wants to find a single element that is representative of the dataset as a whole.", "While the mean, a point potentially outside the dataset, may suffice in some problems, it will be uninformative when the data is sparse in some domain; taking the mean of an image dataset will yield visually random noise [2].", "In such instances the medoid is a more appropriate representative, where the medoid is defined as the point in a dataset which minimizes the sum of distances to other points.", "For one dimensional data under $\\ell _1$ distance, this is equivalent to the median.", "This has seen use in algorithms such as $k$ -medoid clustering due to its reduced sensitivity to outliers [3].", "Formally, let $x_1,...,x_n \\in {\\mathcal {U}}$ , where the underlying space ${\\mathcal {U}}$ is equipped with some distance function $d: {\\mathcal {U}}\\times {\\mathcal {U}}\\mapsto {\\mathbb {R}}_+$ .", "It is convenient to think of ${\\mathcal {U}}= {\\mathbb {R}}^d$ and $d(x,y) = \\Vert x-y\\Vert _2$ for concreteness, but other spaces and distance functions (which need not be symmetric or satisfy the triangle inequality) can be substituted.", "The medoid of $\\lbrace x_i\\rbrace _{i=1}^n$ , assumed here to be unique, is defined as $x_{i^*}$ where $ i^* = \\operatornamewithlimits{\\arg \\!\\min }_{i \\in [n]} \\theta _i \\hspace{14.22636pt} : \\hspace{14.22636pt} \\theta _i \\triangleq \\frac{1}{n}\\sum _{j=1}^n d(x_i,x_j)$ Note that for non-adversarially constructed data, the medoid will almost certainly be unique.", "Unfortunately, brute force computation of the medoid becomes infeasible for large datasets, e.g.", "RNA-Seq datasets with $n=100k$ points [4].", "This issue has been addressed in recent works by noting that in most problem instances solving for the value of each $\\theta _{i}$ exactly is unnecessary, as we are only interested in identifying $x_{i^*}$ and not in computing every $\\theta _i$ [1], [5], [6], [7].", 
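The exact $O(n^2)$ computation of the definition above is straightforward; a minimal pure-Python sketch under Euclidean distance (any distance function can be swapped in):

```python
import math

def medoid(points):
    """Exact O(n^2) medoid: the index i minimizing
    theta_i = (1/n) * sum_j d(x_i, x_j), here with d = Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    n = len(points)
    theta = [sum(dist(p, q) for q in points) / n for p in points]
    return theta.index(min(theta))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
print(medoid(pts))  # 0: the origin minimizes the average distance
```

Every pair of points is touched once, which is exactly the quadratic cost the randomized algorithms below avoid.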
"This allows us to solve the problem by only estimating each $\theta _i$ , such that we are able to distinguish with high probability whether it is the medoid.", "By turning this computational problem into a statistical one of estimating the $\theta _i$ 's, one can greatly decrease algorithmic complexity and running time.", "The key insight here is that sampling a random $J \sim \textnormal {Unif}([n])$ and computing $d(x_i,x_J)$ gives an unbiased estimate of $\theta _i$ .", "Clearly, as we sample and average over more independently selected $J_k\overset{iid}{\sim } \textnormal {Unif}([n])$ , we will obtain a better estimate of $\theta _i$ .", "Estimating each $\theta _i$ to the same degree of precision by computing $\hat{\theta }_i = \frac{1}{T} \sum _{k=1}^T d(x_i,x_{J_k})$ yields an order of magnitude improvement over exact computation, via an algorithm like RAND [7].", "In a recent work [1] it was observed that this statistical estimation could be done much more efficiently by adaptively allocating the estimation budget to each of the $\theta _i$ in eq.", "(REF ).", "This is due to the observation that we only need to estimate each $\theta _i$ to a necessary degree of accuracy, such that we are able to say with high probability whether it is the medoid or not.", "By reducing to a stochastic multi-armed bandit problem, where each arm corresponds to a $\theta _i$ , existing multi-armed bandit algorithms can be leveraged, leading to the algorithm Med-dit [1].", "As can be seen in Fig.", "REF , adding adaptivity to the statistical estimation problem yields another order of magnitude improvement.", "Figure: Empirical performance of exact computation, RAND, Med-dit and Correlated Sequential Halving.", "The error probability is the probability of not returning the correct medoid." 
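A sketch of the RAND-style estimator described above, with reference indices drawn i.i.d. and independently for every point (the toy 1D data and $\ell_1$ distance are illustrative choices, not the paper's datasets):

```python
import random

def rand_estimates(points, T, seed=0):
    """theta_hat_i = (1/T) * sum_k d(x_i, x_{J_k}), with J_k ~ Unif([n])
    drawn independently for each point i (no shared reference points)."""
    rng = random.Random(seed)
    n = len(points)

    def dist(a, b):
        return sum(abs(u - v) for u, v in zip(a, b))  # l1 for this sketch

    return [sum(dist(x, points[rng.randrange(n)]) for _ in range(T)) / T
            for x in points]

xs = [(0.0,), (1.0,), (2.0,), (3.0,), (10.0,)]
est = rand_estimates(xs, T=400)
print([round(e, 1) for e in est])  # noisy estimates of (3.2, 2.6, 2.4, 2.6, 6.8)
```

Because every point gets the same budget $T$ regardless of how close its $\theta_i$ is to the minimum, this non-adaptive scheme wastes samples on clearly suboptimal points, which is what Med-dit's adaptivity fixes.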
], [ "Contribution", "While adaptivity is already a drastic improvement, current schemes are still unable to process large datasets efficiently; running Med-dit on datasets with $n=100k$ takes 1.5 hours.", "The main contribution of this paper is a novel algorithm that is able to perform this same computation in 1 minute.", "Our algorithm achieves this by observing that we want to find the minimum element and not the minimum value, and so our interest is only in the relative ordering of the $\theta _i$ , not their actual values.", "In the simple case of trying to determine if $\theta _1 > \theta _2$ , we are interested in estimating $\theta _1 -\theta _2$ rather than $\theta _1$ or $\theta _2$ separately.", "One can imagine the first step is to take one sample for each, i.e.", "$d(x_1,x_{J_1})$ to estimate $\theta _1$ and $d(x_2,x_{J_2})$ to estimate $\theta _2$ , and compare the two estimates.", "In the direct bandit reduction used in the design of Med-dit, $J_1$ and $J_2$ would be independently chosen, since successive samples in the multi-armed bandit formulation are independent.", "In effect, we are trying to compare $\theta _1$ and $\theta _2$ , but not using a common reference point to estimate them.", "This can be problematic for a sampling-based algorithm, as it could be the case that $\theta _1 < \theta _2$ , but the reference point $x_{J_1}$ we pick for estimating $\theta _1$ is on the periphery of the dataset as in Fig.", "REF .", "This issue can fortunately be remedied by using the same reference point for both $x_1$ and $x_2$ as in Fig.", "REF .", "By using the same reference point we are correlating the samples and intuitively reducing the variance of the estimator for $\theta _1 - \theta _2$ .", "Here, we are exploiting the structure of the underlying computation problem rather than simply treating this as a standard multi-armed bandit statistical inference problem.", "Figure: Toy 2D example", "Building on this idea, we correlate the 
random sampling in our reduction to statistical estimation and design a new medoid algorithm, Correlated Sequential Halving.", "This algorithm is based on the Sequential Halving algorithm in the multi-armed bandit literature [9].", "We see in Fig.", "REF that we are able to gain another one to two orders of magnitude improvement, yielding an overall four to five orders of magnitude improvement over exact computation.", "This is accomplished by exploiting the fact that the underlying problem is computational rather than statistical." ], [ "Theoretical Basis", "We now provide high-level insight into the theoretical basis for our observed improvement, later formalized in Theorem REF .", "We assume without loss of generality that the points are sorted so that $\theta _1< \theta _2 \le \hdots \le \theta _n$ , and define $\Delta _i \triangleq \theta _i -\theta _1$ for $i \in [n] \setminus \lbrace 1\rbrace $ , where $[n]$ is the set $\lbrace 1, 2, \ldots , n\rbrace $ .", "For visual clarity, we use the standard notation $a \vee b \triangleq \max (a,b)$ and $a \wedge b \triangleq \min (a,b)$ , and assume a base of 2 for all logarithms.", "Our proposed algorithm samples in a correlated manner as in Fig.", "REF , and so we introduce new notation to quantify this improvement.", "As formalized later, $\rho _i$ is the improvement afforded by correlated sampling in distinguishing arm $i$ from arm 1.", "$\rho _i$ can be thought of as the relative reduction in variance, where a small $\rho _i$ indicates that $d(x_1,x_{J_1}) - d(x_i,x_{J_1})$ concentrates", "faster than $d(x_1,x_{J_1})- d(x_i,x_{J_2})$ about $-\Delta _i$ for $J_1,J_2$ drawn independently from $\textnormal {Unif}([n])$ , shown graphically in Fig.", "REF .", "(Throughout this work we talk about concentration in the sense of the empirical average of a random variable concentrating about the true mean of that random variable.)", "Figure: Correlated $d(1,J_1)-d(i,J_1)$ vs
Independent $d(1,J_1)-d(i,J_2)$ sampling in the RNA-Seq 20k dataset.", "Averaged over the dataset, the independent samples have standard deviation $\sigma = 0.25$ , so for (a) $\rho _i=.11$ , and (b) $\rho _i = .25$ .", "In the standard bandit setting with independent sampling, one needs a number of samples proportional to $H_2 = \max _{i \ge 2} \frac{i}{\Delta _i^2}$ to determine the best arm [10].", "Replacing the standard arm difficulty of $\frac{1}{\Delta _i^2}$ with $\frac{\rho _i^2}{\Delta _i^2}$ , the difficulty accounting for correlation, we show that one can solve the problem using a number of samples proportional to $\tilde{H}_2 = \max _{i \ge 2} \frac{i\rho _{(i)}^2}{\Delta _{(i)}^2}$ , an analogous measure.", "Here the permutation $(\cdot )$ indicates that the arms are sorted by decreasing $\frac{\rho _i}{\Delta _i}$ as opposed to just by $\frac{1}{\Delta _i}$ .", "These details are formalized in Theorem REF .", "Our theoretical improvement incorporating correlation can thus be quantified as $H_2 / \tilde{H}_2$ .", "As we show later in Fig.", "REF , in real datasets arms with small $\Delta _i$ have similarly small $\rho _i$ , indicating that correlation yields a larger relative gain for previously difficult arms.", "Indeed, for the RNA-Seq 20k dataset we see that the ratio is $H_2 / \tilde{H}_2=6.6$ .", "The Netflix 100k dataset is too large to perform this calculation on, but for similar datasets like MNIST [11] this ratio is $4.8$ .", "We hasten to note that this ratio does not fully encapsulate the gains afforded by the correlation our algorithm uses, as only pairwise correlation is considered in our analysis.", "This is discussed further in Appendix" ], [ "Related Work", "Several algorithms have been proposed for the problem of medoid identification.", "An $O(n^{3/2}2^{\Theta (d)})$ algorithm called TRIMED was developed, which finds the true medoid of a dataset under certain assumptions on the distribution of the points near the medoid 
[5].", "This algorithm cleverly carves away non-medoid points, but unfortunately does not scale well with the dimensionality of the dataset.", "In the use cases we consider the data is very high dimensional, often with $d \\approx n$ .", "While this algorithm works well for small $d$ , it becomes infeasible to run when $d>20$ .", "A similar problem, where the central vertex in a graph is desired, has also been analyzed.", "One proposed algorithm for this problem is RAND, which selects a random subset of vertices of size $k$ and measures the distance between each vertex in the graph and every vertex in the subset [7].", "This was later improved upon with the advent of TOPRANK [6].", "We build off of the algorithm Med-dit (Medoid-Bandit), which finds the medoid in $\\tilde{O}(n)$ time under mild distributional assumptions [1].", "More generally, the use of bandits in computational problems has gained recent interest.", "In addition to medoid finding [1], other examples include Monte Carlo Tree Search for game playing AI [12], hyper-parameter tuning [13], $k$ -nearest neighbor, hierarchical clustering and mutual information feature selection [14], approximate $k$ -nearest neighbor [15], and Monte-Carlo multiple testing [16].", "All of these works use a direct reduction of the computation problem to the multi-armed bandit statistical inference problem.", "In contrast, the present work further exploits the fact that the inference problem comes from a computational problem, which allows a more effective sampling strategy to be devised.", "This idea of preserving the structure of the computation problem in the reduction to a statistical estimation one has potentially broader impact and applicability to these other applications." 
], [ "Correlated Sequential Halving", "In previous works it was noted that sampling a random $J \sim \textnormal {Unif}([n])$ and computing $d(x_i,x_J)$ gives an unbiased estimate of $\theta _i$ [1], [14].", "This was where the problem was reduced to that of a multi-armed bandit and solved with an Upper Confidence Bound (UCB) based algorithm [17].", "In their analysis, estimates of $\theta _i$ are generated as $\hat{\theta }_i = \frac{1}{|{\mathcal {J}}_i|}\sum _{j \in {\mathcal {J}}_i} d(x_i,x_j)$ for ${\mathcal {J}}_i \subseteq [n]$ , and the analysis hinges on showing that as we sample the arms more, $\hat{\theta }_1 < \hat{\theta }_i \ \forall \ i \in [n]$ with high probability.", "(In order to maintain the unbiasedness of the estimator given the sequential nature of UCB, reference points are chosen with replacement in Med-dit, potentially yielding a multiset ${\mathcal {J}}_i$ ; for the sake of clarity we ignore this subtlety for Med-dit, as our algorithm samples without replacement.)", "In a standard UCB analysis this is done by showing that each $\hat{\theta }_i$ individually concentrates.", "However, on closer inspection, we see that this is not necessary; it is sufficient for the differences $\hat{\theta }_1 - \hat{\theta }_i$ to concentrate for all $i \in [n]$ .", "Using our intuition from Fig.", "REF we see that one way to get this difference to concentrate faster is by sampling the same $j$ for both arms 1 and $i$ .", "We can see that if $|{\mathcal {J}}_1| = |{\mathcal {J}}_i|$ , one possible approach is to set ${\mathcal {J}}_1 = {\mathcal {J}}_i = {\mathcal {J}}$ .", "This allows us to simplify $\hat{\theta }_1 - \hat{\theta }_i$ as $\hat{\theta }_1 - \hat{\theta }_i = \frac{1}{|{\mathcal {J}}_1|}\sum _{j \in {\mathcal {J}}_1} d(x_1,x_j) - \frac{1}{|{\mathcal {J}}_i|}\sum _{j \in {\mathcal {J}}_i} d(x_i,x_j) = \frac{1}{|{\mathcal {J}}|}\sum _{j \in {\mathcal {J}}} \left[ d(x_1,x_j) - d(x_i,x_j) \right].$ While 
UCB algorithms yield a serial process that samples one arm at a time, this observation suggests that a different algorithm that pulls many arms at the same time would perform better, as then the same reference $j$ could be used.", "By estimating each points' centrality $\\theta _i$ independently, we are ignoring the dependence of our estimators on the random reference points selected; using the same set of reference points for estimating each $\\theta _i$ reduces the variance in the choice of random reference points.", "We show that a modified version of Sequential Halving [10] is much more amenable to this type of analysis.", "At a high level this is due to the fact that Sequential Halving proceeds in stages by sampling arms uniformly, eliminating the worse half of arms from consideration, and repeating.", "This very naturally obeys this “correlated sampling” condition, as we can now use the same set of reference points ${\\mathcal {J}}$ for all arms under consideration in each round.", "We present the slightly modified algorithm below, introducing correlation and capping the number of pulls per round, noting that the main difference comes in the analysis rather than the algorithm itself.", "[H] [1] Correlated Sequential Halving Input: Sampling budget $T$ , dataset $\\lbrace x_i\\rbrace _{i=1}^n$ $\\text{initialize } S_0 \\leftarrow [n]$ r=0 $\\lceil \\log n \\rceil -1$ select a set ${\\mathcal {J}}_r$ of $t_r$ data point indices uniformly at random without replacement from $[n]$ where $t_r =\\left\\lbrace 1 \\vee \\left\\lfloor \\frac{T}{|S_r| \\lceil \\log n \\rceil } \\right\\rfloor \\right\\rbrace \\wedge n \\hspace{142.26378pt}$ For each $i \\in S_r$ set $\\hat{\\theta }^{(r)}_i = \\frac{1}{t_r} \\sum _{j \\in {\\mathcal {J}}_r} d(x_i,x_j)$ $t_r=n$ Output arm in $S_r$ with the smallest $\\hat{\\theta }^{(r)}_i$ Let $S_{r+1}$ be the set of $\\lceil |S_r|/2 \\rceil $ arms in $S_r$ with the smallest $\\hat{\\theta }^{(r)}_i$ arm in $S_{\\lceil \\log n \\rceil }$ 
Examining the random variables $\\hat{\\Delta }_i \\triangleq d(x_1,x_J) - d(x_i,x_J)$ for $J \\sim \\text{Unif}([n])$ , we see that for any fixed dataset all $\\hat{\\Delta }_i$ are bounded, as $\\max _{i,j \\in [n]} d(x_i,x_j)$ is finite.", "In particular, this means that all $\\hat{\\Delta }_i$ are sub-Gaussian.", "Definition 1 We define $\\sigma $ to be the minimum sub-Gaussian constant of $d(x_I,x_J)$ for $I,J$ drawn independently from $\\text{Unif}([n])$ .", "Additionally, for $i \\in [n]$ we define $\\rho _i \\sigma $ to be the minimum sub-Gaussian constant of $d(x_1,x_J) - d(x_i,x_J)$ , where $\\sigma $ is as above and $\\rho _i$ is an arm (point) dependent scaling, as displayed in Figure REF .", "This shifts the direction of the analysis, as where in previous works the sub-Gaussianity of $d(x_1,x_J)$ was used [1], we now instead utilize the sub-Gaussianity of $d(x_1,x_J) - d(x_i,x_J)$ .", "Here $\\rho _i \\le 1$ indicates that the correlated sampling improves the concentration and by extension the algorithmic performance.", "A standard UCB algorithm is unable to algorithmically make use of these $\\lbrace \\rho _i\\rbrace $ .", "Even considering batch UCB algorithms, in order to incorporate correlation the confidence bounds would need to be calculated differently for each pair of arms depending on the number of $j$ 's they've pulled in common and the sub-Gaussian parameter of $d(x_{i_1},x_J) - d(x_{i_2},x_J)$ .", "It is unreasonable to assume this is known for all pairs of points a priori, and so we restrict ourselves to an algorithm that only uses these pairwise correlations implicitly in its analysis instead of explicitly in the algorithm.", "Below we state the main theorem of the paper.", "Theorem 2.1 Assuming that $T \\ge n \\log n$ and denoting the sub-Gaussian constants of $d(x_1,x_J) - d(x_i,x_J)$ as $\\rho _i \\sigma $ for $i \\in [n]$ as in definition REF , Correlated Sequential Halving (Algorithm ) correctly identifies the medoid in at most $T$ 
distance computations with probability at least $1 - 3 \\log n \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\min _{i \\ge \\frac{T}{n \\log n}}\\left[ \\frac{\\Delta _{(i)}^2}{i \\rho _{(i)}^2}\\right]\\right)} \\\\$ $\\text{which can be coarsely lower bounded as} \\hspace{28.45274pt} 1 - 3 \\log n \\cdot \\exp {\\left(-\\frac{T }{16 \\tilde{H}_2 \\sigma ^2 \\log n}\\right)}$ $\\text{where} \\hspace{28.45274pt}\\tilde{H}_2 = \\underset{i\\ge 2}{\\max } \\frac{i\\rho _{(i)}^2}{\\Delta _{(i)}^2}\\hspace{28.45274pt}, \\hspace{28.45274pt}(\\cdot ) : [n] \\mapsto [n], (1) = 1, \\frac{\\Delta _{(2)}}{\\rho _{(2)}} \\le \\frac{\\Delta _{(3)}}{\\rho _{(3)}} \\le \\cdots \\le \\frac{\\Delta _{(n)}}{\\rho _{(n)}}$ Above $\\tilde{H}_2$ is a natural measure of hardness for this problem analogous to $H_2 = \\max _i \\frac{i}{\\Delta _{i}^2}$ in the standard bandit case, and $(\\cdot )$ orders the arms by difficulty in distinguishing from the best arm after taking into account the $\\rho _i$ .", "We defer the proof of Thm.", "REF and necessary lemmas to Appendix for readability." 
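A compact Python sketch of the Correlated Sequential Halving procedure described above, operating on a precomputed distance matrix for clarity (the actual algorithm evaluates distances on demand); the shared reference set `refs` is what correlates the estimates across surviving candidates:

```python
import math
import random

def correlated_sequential_halving(dist, T, seed=0):
    """In each of ceil(log2 n) rounds, score every surviving candidate
    against the SAME randomly drawn reference set, then discard the worse
    half; `dist` is an n x n distance matrix and T the total budget."""
    rng = random.Random(seed)
    n = len(dist)
    rounds = max(1, math.ceil(math.log2(n)))
    S = list(range(n))
    for _ in range(rounds):
        if len(S) == 1:
            break
        t = min(max(1, T // (len(S) * rounds)), n)
        refs = rng.sample(range(n), t)             # shared reference points
        score = {i: sum(dist[i][j] for j in refs) / t for i in S}
        if t == n:                                 # scores are now exact
            return min(S, key=score.get)
        S = sorted(S, key=score.get)[:math.ceil(len(S) / 2)]
    return S[0]

xs = [0.0, 1.0, 2.0, 3.0, 10.0]                    # 1D toy data, l1 distance
dist = [[abs(a - b) for b in xs] for a in xs]
print(correlated_sequential_halving(dist, T=75))   # 2: the true medoid index
```

Note the cap `t <= n`: once the per-round budget covers all $n$ reference points, the scores are exact and the algorithm can stop early, mirroring the $t_r = n$ branch of the pseudocode.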
], [ "Lower bounds", "Ideally in such a bandit problem, we would like to provide a matching lower bound.", "We can naively lower bound the sample complexity as $\\Omega (n)$ , but unfortunately no tighter results are known.", "A more traditional bandit lower bound was recently proved for adaptive sampling in the approximate $k$ -NN case, but requires that the algorithm only interact with the data by sampling coordinates uniformly at random [15].", "This lower bound can be transferred to the medoid setting; however, the constraint then becomes that an algorithm can only interact with the data by measuring the distance from a desired point to another point selected uniformly at random.", "This unfortunately removes all the correlation effects we analyze.", "For a more in-depth discussion of the difficulty of providing a lower bound for this problem and the higher-order problem structure causing this, we refer the reader to Appendix ." ], [ "Simulation Results", "Correlated Sequential Halving (corrSH) empirically performs much better than UCB-type algorithms on all datasets tested, reducing the number of comparisons needed by 2 orders of magnitude for the RNA-Seq dataset and by 1 order of magnitude for the Netflix dataset to achieve comparable error probabilities, as shown in Table REF .", "This yields a similarly drastic reduction in wall clock time, in contrast to most UCB-based algorithms: usually, when implemented, the overhead needed to run UCB means that even though there is a significant reduction in the number of pulls, the wall clock time improvement is marginal [14].", "Table: Performance in average number of pulls per arm.", "Final percent error noted parenthetically if nonzero.", "corrSH was run with varying budgets until it had no failures on the 1000 trials.", "We note that in our simulations we only used 1 pull to initialize each arm for Med-dit for plotting purposes, where in reality one would use 16 or some larger constant, sacrificing a small additional
number of pulls for a roughly $10\\%$ reduction in wall clock time.", "In these plots we show a comparison between Med-dit [1], Correlated Sequential Halving, and RAND [7], shown in Figures REF and REF .", "Figure: Number of pulls versus error probability for various datasets and distance metrics.", "(a) Netflix 20k, cosine distance.", "(b) RNA-Seq 100k, $\\ell _1$ .", "(c) MNIST, $\\ell _2$ ." ], [ "Simulation details", "The 3 curves for the randomized algorithms previously discussed are generated in different ways.", "For RAND and Med-dit the curves represent the empirical probability, averaged over 1000 trials, that after $nx$ pulls ($x$ pulls per arm on average) the true medoid was the empirically best arm.", "RAND was run with a budget of 1000 pulls per arm, and Med-dit was run with target error probability of $\\delta = 1/n$ .", "Since Correlated Sequential Halving behaves differently after $x$ pulls per arm depending on what its input budget was, it requires a different method of simulation; every solid dot in the plots represents the average of 1000 trials at a fixed budget, and the dotted line connecting them simply interpolates the projected performance.", "In all cases the only variable across trials was the random seed, which was varied across 0-999 for reproducibility.", "The value noted for Correlated Sequential Halving in Table REF is the minimum budget above which all simulated error probabilities were 0.", "Remark 1 In theory it is much cleaner to discard samples from previous stages when constructing the estimators in stage $r$ to avoid dependence issues in the analysis.", "In practice we use these past samples; that is, we construct our estimator for arm $i$ in stage $r$ from all the samples of arm $i$ seen so far, rather than just the $t_r$ fresh ones.", "Many different datasets and distance metrics were used to validate the performance of our algorithm.", "The first dataset used was a single-cell RNA-Seq dataset, which contains the gene expressions
corresponding to each cell in a tissue sample.", "A common first step in analyzing single cell RNA-Seq datasets is clustering the data to discover sub classes of cells, where medoid finding is used as a subroutine.", "Since millions of cells are sequenced and tens of thousands of gene expressions are measured in such a process, this naturally gives us a large high dimensional dataset.", "Since the gene expressions are normalized to a probability distribution for each cell, $\\ell _1$ distance is commonly used for clustering [18].", "We use the 10xGenomics dataset consisting of 27,998 gene-expressions over 1.3 million neuron cells from the cortex, hippocampus, and subventricular zone of a mouse brain [4].", "We test on two subsets of this dataset, a small one of 20,000 cells randomly subsampled, and a larger one of 109,140 cells, the largest true cluster in the dataset.", "While we can exactly compute a solution for the 20k dataset, it is computationally difficult to do so for the larger one, so we use the most commonly returned point from our algorithms as ground truth (all 3 algorithms have the same most frequently returned point).", "Another dataset we used was the famous Netflix-prize dataset [8].", "In such recommendation systems, the objective is to cluster users with similar preferences.", "One challenge in such problems is that the data is very sparse, with only .21% of the entries in the Netflix-prize dataset being nonzero.", "This necessitates the use of normalized distance measures in clustering the dataset, like cosine distance, as discussed in [2].", "This dataset consists of 17,769 movies and their ratings by 480,000 Netflix users.", "We again subsample this dataset, generating a small and large dataset of 20,000 and 100,000 users randomly subsampled.", "Ground truth is generated as before.", "The final dataset we used was the zeros from the commonly used MNIST dataset [11].", "This dataset consists of centered images of handwritten digits.", "We 
subsample this, using only the images corresponding to handwritten zeros, in order to truly have one cluster.", "We use $\\ell _2$ distance, as root mean squared error (RMSE) is a frequently used metric for image reconstruction.", "Combining the train and test datasets we get 6,424 images, and since each image is 28x28 pixels we get $d=784$ .", "Since this is a smaller dataset, we are able to compute the ground truth exactly." ], [ "Discussion on $\\rho _i$", "For correlation to improve our algorithmic performance, we ideally want $\\rho _i \\ll 1$ and decaying with $\\Delta _i$ .", "Empirically this appears to be the case as seen in Fig.", "REF , where we plot $\\rho _i$ versus $\\Delta _i$ for the RNA-Seq and MNIST datasets.", "$\\frac{1}{\\rho _i^2}$ can be thought of as the multiplicative reduction in the number of pulls needed to differentiate that arm from the best arm, i.e.", "$\\frac{1}{\\rho _i} = 10$ roughly implies that we need a factor of 100 fewer pulls to differentiate it from the best arm due to our “correlation”.", "Notably, for arms that would normally require many pulls to differentiate from the best arm (small $\\Delta _i$ ), $\\rho _i$ is also small.", "Since algorithms spend the bulk of their time differentiating between the top few arms, this translates into large practical gains.", "One candidate explanation for the phenomenon that small $\\Delta _i$ leads to small $\\rho _i$ is that the points themselves are close in space.", "However, this intuition fails for high-dimensional datasets as shown in Fig.", "REF .", "We do see empirically, however, that $\\rho _i$ decreases with $\\Delta _i$ , which drastically decreases the number of comparisons needed, as desired.", "We can bound $\\rho _i$ if our distance function obeys the triangle inequality, as $\\hat{\\Delta }_i \\triangleq d(x_i,x_J) - d(x_1,x_J)$ is then a bounded random variable since $|\\hat{\\Delta }_i| \\le d(x_i,x_1)$ .", "Combining this with the knowledge that ${{E}}\\hat{\\Delta }_i =
\\Delta _i$ we get that $\\hat{\\Delta }_i$ is sub-Gaussian with parameter at most $\\rho _i \\sigma \\le \\frac{2d(x_i,x_1)+\\Delta _i}{2}$ Alternatively, if we assume that $\\hat{\\Delta }_i$ is normally distributed with variance $\\rho _i^2 \\sigma ^2$ , we are able to get a tighter characterization of $\\rho _i$ : $\\rho _i^2 \\sigma ^2&= \\mathrm {Var}(d(1,J) - d(i,J)) \\\\&= {{E}}\\left[ \\left(d(1,J) - d(i,J) \\right)^2\\right] - \\left( {{E}}\\left[d(1,J) - d(i,J) \\right]\\right)^2\\\\&\\le d(1,i)^2 - \\Delta _i^2$ We can clearly see that as $d(1,i)\\rightarrow 0$ , $\\rho _i$ decreases, to 0 in the normal case.", "However, in high-dimensional datasets $d(1,i)$ is usually not small for almost any $i$ .", "This is empirically shown in Fig.", "REF .", "Figure: Distance from point $i$ to the medoid, $d(x_1,x_i)$ , versus $\\Delta _i$", "While $\\rho _i$ can be small, it is not immediately clear that it is bounded above.", "However, since our distances are bounded for any given dataset, we have that $d(1,J)$ and $d(i,J)$ are both $\\sigma $ -sub-Gaussian for some $\\sigma $ , and so we can bound the sub-Gaussian parameter of $d(1,J)-d(i,J)$ using the Orlicz norm.", "$\\rho _i^2 \\sigma ^2 = \\Vert d(1,J) - d(i,J) \\Vert _\\Psi ^2 \\le (\\Vert d(1,J)\\Vert _\\Psi + \\Vert d(i,J)\\Vert _\\Psi )^2 = 4 \\sigma ^2$ While this appears to be worse at first glance, we are able to jointly bound ${{P}}(\\hat{\\theta }_i - \\hat{\\theta }_1 - \\Delta _i< -\\Delta _i) \\le \\exp {\\left(-\\frac{n \\Delta _i^2}{2 \\rho _i^2 \\sigma ^2}\\right)}\\le \\exp {\\left(-\\frac{n \\Delta _i^2}{8 \\sigma ^2}\\right)}$ by the control of $\\rho _i$ above.", "In the regular case, this bound is achieved by separating the two and bounding the probability that either $\\hat{\\theta }_i< \\theta _i - \\Delta _i/2$ or $\\hat{\\theta }_1 > \\theta _1 + \\Delta _i/2$ , which yields an equivalent probability since we need $\\hat{\\theta }_i, \\hat{\\theta }_1$ to concentrate to half
the original width.", "Hence, even for data without significant correlation, attempting to correlate the noise will not increase the number of pulls required when using this standard analysis method." ], [ "Fixed Budget", "In simulating Correlated Sequential Halving, we swept the budget over a range and reported the smallest budget above which there were 0 errors in 1000 trials.", "A natural question, given a fixed budget algorithm like corrSH, is then what to set the budget to for a given problem.", "This is an important question for further investigation, as there does not seem to be an efficient way to estimate $\\tilde{H}_2$ .", "Before providing our doubling-trick-based solution, we would like to note that it is unclear what to set the hyperparameters to for any of the aforementioned randomized algorithms.", "RAND is similarly fixed budget, and for Med-dit, while setting $\\delta = \\frac{1}{n}$ achieves vanishing error probability theoretically, using this setting in practice for finite $n$ yields an error rate of $6\\%$ for the Netflix 100k dataset.", "Additionally, the fixed budget setting makes sense in the case of limited computing power or time-sensitive applications.", "The approach we propose is a variant of the doubling trick, which is commonly used to convert fixed budget or finite horizon algorithms to data dependent or anytime ones.", "Here this would translate to running the algorithm with a certain budget $T$ (say $3n$ ), then doubling the budget to $6n$ and rerunning the algorithm.", "If the two answers are the same, declare this the medoid and output it.", "If the answers are different, double the budget again to $12n$ and compare.", "The odds that the same incorrect arm is output both times are exceedingly small, as even with a budget that is too small, the most likely output of this algorithm is the true medoid.", "This requires a budget of at most $8T$ to yield approximately the same error probability as that of just running our algorithm
with budget $T$ ." ], [ "Summary", "We have presented a new algorithm, Correlated Sequential Halving, for computing the medoid of a large dataset.", "We prove bounds on its performance, deviating from standard multi-armed bandit analysis due to the correlation in the arms.", "We include experimental results to corroborate our theoretical gains, showing the massive improvement to be gained from utilizing correlation in real-world datasets.", "There remains future practical work to be done in seeing if other computational or statistical problems can benefit from this correlation trick.", "Additionally, there are open theoretical questions in proving lower bounds for this special query model, seeing if there is any larger view of correlation beyond pairwise that is analytically tractable, and analyzing this generalized stochastic bandits setting." ], [ "Acknowledgements", "The authors gratefully acknowledge funding from the NSF GRFP, Alcatel-Lucent Stanford Graduate Fellowship, NSF grant under CCF-1563098, and the Center for Science of Information (CSoI), an NSF Science and Technology Center under grant agreement CCF-0939370." ], [ "Proof of Theorem ", "We assume $n$ is a power of 2 for readability, but the analysis holds for any $n$ .", "We begin with the following immediate consequence of Hoeffding's inequality, remembering that $|{\\mathcal {J}}_r| = t_r$ : Lemma A.1 Assume that the best arm was not eliminated prior to round $r$ .
Then for any arm $i \\in S_r$ ${{P}}\\left(\\hat{\\theta }_1^{(r)} > \\hat{\\theta }^{(r)}_i\\right) &= {{P}}\\left(\\frac{1}{|{\\mathcal {J}}_r|}\\sum _{j \\in {\\mathcal {J}}_r} d(x_1,x_j) - d(x_i,x_j) + \\Delta _i> \\Delta _i\\right) \\le \\exp {\\left(\\frac{-t_r\\Delta _i^2}{2\\rho _i^2 \\sigma ^2}\\right)}\\mathbb {I}\\lbrace t_r<n\\rbrace $ where if $t_r = n$ we know that this probability is exactly 0 by definition of the medoid.", "We now examine one round of Correlated Sequential Halving and bound the probability that the algorithm eliminates the best arm at round $r$ , recalling that $ t_r =\\left\\lbrace 1 \\vee \\left\\lfloor \\frac{T}{|S_r| \\lceil \\log n \\rceil } \\right\\rfloor \\right\\rbrace \\wedge n.$ Lemma A.2 The probability that the medoid is eliminated in round $r$ is at most $3 \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace $ for $i_r = |S_r|/4 = \\frac{n}{2^{r+2}}$ The proof follows similarly to that of [10], modulo the interesting feature that if $t_r=n$ there is no uncertainty.", "Additionally, the analysis differs in that here we are interested in giving the sample complexity in terms of $\\frac{\\Delta _{(i)}}{\\rho _{(i)}}$ instead of $\\Delta _i$ , and so instead of removing arms $i$ with low $\\Delta _i$ from consideration as in [10], we remove arms with low $\\frac{\\Delta _{i}}{\\rho _{i}}$ for the analysis.", "Formally, define $S_r^{\\prime }$ as the set of arms in $S_r$ excluding the $i_r = \\frac{1}{4} |S_r|$ arms $i$ with smallest $\\frac{\\Delta _{i}}{\\rho _{i}}$ .", "We define the random variable $N_r$ as the number of arms in $S_r^{\\prime }$ whose empirical average in round $r$ , $\\hat{\\theta }_i^{(r)}$ , is smaller than that of the optimal arm.", "We begin by showing that ${{E}}[N_r]$ is small.", "${{E}}[N_r] &= \\sum _{j \\in S_r^{\\prime }} {{P}}\\left(\\hat{\\theta }^{(r)}_1 > \\hat{\\theta
}^{(r)}_j\\right)\\\\&\\le \\sum _{j \\in S_r^{\\prime }} \\exp {\\left(-\\frac{t_r \\Delta _j^2}{2\\rho _j^2 \\sigma ^2}\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&\\le \\sum _{j \\in S_r^{\\prime }} \\exp {\\left(-\\frac{T \\Delta _j^2}{4\\rho _j^2 \\sigma ^2 |S_r| \\log n}\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&\\le |S_r^{\\prime }| \\max _{j \\in S_r^{\\prime }} \\exp {\\left(-\\frac{T \\Delta _j^2}{4\\rho _j^2 \\sigma ^2 |S_r|\\log n}\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&= |S_r^{\\prime }| \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\frac{1}{i_r}\\cdot \\min _{j \\in S_r^{\\prime }} \\left\\lbrace \\frac{\\Delta _j^2}{\\rho _j^2}\\right\\rbrace \\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&\\le |S_r^{\\prime }| \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n}\\cdot \\frac{1}{i_r} \\cdot \\min _{i \\ge i_r} \\left\\lbrace \\frac{\\Delta _{(i)}^2}{\\rho _{(i)}^2}\\right\\rbrace \\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&= |S_r^{\\prime }| \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace $ Where in the third line we assumed $T \\ge n \\log n$ so that $t_r = \\left\\lfloor \\frac{T}{|S_r| \\lceil \\log n \\rceil } \\right\\rfloor \\ge \\frac{T}{2|S_r| \\lceil \\log n \\rceil }$ Additionally, in the second to last line we used the fact that due to the removal of arms with small $\\frac{\\Delta _{(i)}}{\\rho _{(i)}}$ , for all arms $j \\in S_r^{\\prime }$ where $j = (i)$ , we have that $i \\ge i_r$ .", "We now see that in order for the best arm to be eliminated in round $r$ at least $|S_r|/2$ arms must have lower empirical averages in round $r$ .", "This means that at least $|S_r|/4$ arms from $S_r^{\\prime }$ must outperform the best arm, i.e.", "$N_r \\ge |S_r|/4 = |S_r^{\\prime }|/3$ .", "We can then bound this probability with Markov's inequality as below: ${{P}}\\left(N_r \\ge 
\\frac{1}{3}|S_r^{\\prime }|\\right) &\\le 3 {{E}}[N_r]/|S_r^{\\prime }| \\le 3 \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace .$ We note that $t_r < n$ is a deterministic condition.", "Via some algebra, we obtain that $t_r = \\left\\lfloor \\frac{T}{|S_r| \\lceil \\log n \\rceil } \\right\\rfloor \\le \\frac{T}{|S_r| \\log n} = \\frac{T 2^r}{n \\log n} $ This means that if $r < \\log \\left(\\frac{n^2 \\log n}{T} \\right)$ then $t_r < n$ .", "To this end we define $r_{max} \\triangleq \\left\\lfloor \\log \\left(\\frac{n^2 \\log n}{T} \\right) \\right\\rfloor $ and $i_{r_{max}} \\triangleq \\frac{n}{2^{r_{max}}} \\ge \\frac{T}{n \\log n}$ .", "With this in place, we are now able to easily prove Theorem REF The algorithm clearly does not exceed its budget of $T$ arm pulls (distance measurements).", "Further, if the best arm survives the execution of all $\\log n$ rounds then the algorithm succeeds as all other arms must have been eliminated.", "Hence, by a union bound over the stages, our probability of failure (the best arm being eliminated in any of the $\\log n$ stages) is at most $3 \\sum _{r=1}^{\\log n} &\\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\mathbb {I}\\lbrace t_r<n\\rbrace \\\\&\\le 3 \\sum _{r=1}^{\\log n} \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\mathbb {I}\\left\\lbrace r < \\log \\left(\\frac{n^2 \\log n}{T} \\right)\\right\\rbrace \\\\& \\le 3 \\log n \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\min _{r \\le r_{max}}\\left[ \\frac{\\Delta _{(i_r)}^2}{i_r \\rho _{(i_r)}^2}\\right]\\right)} \\\\& \\le 3 \\log n \\exp {\\left(-\\frac{T }{16 \\sigma ^2 \\log n} \\cdot \\min _{i \\ge i_{r_{max}}}\\left[ \\frac{\\Delta _{(i)}^2}{i \\rho 
_{(i)}^2}\\right]\\right)} \\\\&\\le 3 \\log n \\cdot \\exp {\\left(-\\frac{T}{16 \\tilde{H}_2 \\sigma ^2 \\log n}\\right)}$ We note that in cases where $\\frac{\\rho _{(i)}^2}{\\Delta _{(i)}^2}$ is very large for small $i$ , this last line is loose.", "Remark 2 A standard analysis of this algorithm, ignoring arms with small $\\Delta _i$ to create $S_r^{\\prime }$ , would yield hardness measure $ H_2^{\\prime } = \\max _i \\frac{i \\rho _{i}^2}{\\Delta _{i}^2}$ .", "However, we can see by pigeonhole principle that $\\max _i \\frac{i \\rho _{i}^2}{\\Delta _{i}^2} \\ge \\max _i \\frac{i \\rho _{(i)}^2}{\\Delta _{(i)}^2}$ Remark 3 While it is convenient to think of ${\\mathcal {U}}= {\\mathbb {R}}^d$ and $d(x,y) = \\Vert x-y\\Vert _2$ , we note that our results are valid for arbitrary distance functions which may not be symmetric or obey the triangle inequality, like Bregman divergences or squared Euclidean distance." ], [ "Lower bounds", "It seems very difficult to generate lower bounds for the sample complexity of the medoid problem due to the higher order structure present." 
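The gains promised by small $\\rho _i$ can be probed empirically. The sketch below is an illustration only, assuming Euclidean distance; the helper name and the choice of which index plays the medoid are the author's own. It contrasts the spread of $d(x_i,x_J) - d(x_1,x_J)$ when both terms share the reference point $x_J$ (the correlated case the analysis exploits) with the spread when the two terms use independently drawn reference points.

```python
import numpy as np

def difference_spreads(X, i, medoid=0, m=4000, rng=None):
    """Empirical std of d(x_i, x_J) - d(x_medoid, x_J) with a shared J
    (the correlated case) versus independent reference points."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    J = rng.integers(0, n, size=m)   # shared reference points
    K = rng.integers(0, n, size=m)   # independent references for the baseline
    dist = lambda a, b: np.linalg.norm(X[a] - X[b], axis=-1)
    correlated = dist(i, J) - dist(medoid, J)     # both terms see the SAME x_J
    independent = dist(i, J) - dist(medoid, K)    # uncorrelated baseline
    return correlated.std(), independent.std()
```

For a point $x_i$ close to the medoid, the correlated spread is bounded by $d(x_i, x_1)$ via the triangle inequality, while the independent baseline retains the full variance of the individual distance estimates; the ratio of the two standard deviations is a rough empirical proxy for $\\rho _i$ .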
], [ "Beyond pairwise correlation", "Throughout this work we have discussed the benefits of correlating measurements.", "However, the only way in which correlation figured into our analysis was in helping $\\hat{\\theta }_i - \\hat{\\theta }_1$ concentrate.", "Due to this correlation we can show that the difference between estimators concentrates quickly, analyzing pairs of estimators rather than just individual $\\hat{\\theta }_i$ .", "This leads to the natural question: can correlation help beyond just pairs of estimators?", "We answer this question in the affirmative.", "As a concrete example assume that $\\lbrace x_i\\rbrace _{i=1}^n \\in {\\mathbb {R}}^2$ are evenly spaced around the unit circle, and $x_0 = (0,0)$ is the medoid of $\\lbrace x_i\\rbrace _{i=0}^n$ .", "For a reference point $x_J$ drawn uniformly at random we define $\\hat{\\Delta }_i \\triangleq d(x_i,x_J) - d(x_1,x_J)$ .", "Let $x_i = (1,0)$ , $x_k = (-1,0)$ .", "We have previously shown that $\\hat{\\Delta }_i, \\hat{\\Delta }_k$ concentrate nicely.", "However, in sequential halving, we are concerned with the probability that over half the estimators appear better than the best estimator, i.e.", "$\\hat{\\Delta }_i < 0$ for many $i$ (more than $n/2$ for the first round).", "Many samples are needed to argue that this is small if we assume that the events $\\hat{\\Delta }_i < 0$ and $\\hat{\\Delta }_k<0$ are independent as is currently being done, but we can clearly see that for $i,k$ as given, ${{P}}\\left(\\lbrace \\hat{\\Delta }_i < 0\\rbrace \\cap \\lbrace \\hat{\\Delta }_k<0 \\rbrace \\right) = 0$ where the probability is taken with respect to the randomness in selecting a common reference point $x_J$ .", "Figure: Points evenly spaced on the unit circle, with the medoid $x_1$ at the center, $x_2 = (1,0)$ and $x_3 = (-1,0)$ on the circle, and a common reference point $x_J$ connected to each by dashed lines.", "It is not clear what quantities we should be interested in when looking at all the estimators jointly, but it is clear that there are additional benefits to correlation beyond simply the improved concentration of differences of estimators." ], [ "Bandit lower bounds", "Ideally in such a bandit problem we would like to provide a matching lower bound.", "This is made difficult by the fact that we lack insight into which quantities are relevant in determining the hardness of the problem.", "A more traditional bandit lower bound was recently proved for adaptive sampling in the approximate $k$ -NN case, but this lower bound requires the data points to be constrained, i.e.", "$[x_i]_j \\in \\lbrace \\pm 1/2 \\rbrace $ , and more importantly that the algorithm is only allowed to interact with the data by sampling coordinates uniformly at random [15].", "This second constraint on the algorithm unfortunately removes all the structure we wish to analyze from the problem.", "The lower bound is proved using a change of measure argument, neatly presented in [9].", "In the case we wish to analyze, strategies are no longer limited to randomly sampling the data, i.e.", "for a given $x_i$ we can measure its distance to a specific $x_j$ ; we do not need to independently sample a reference point for each pull.", "Currently, we do not know of any data dependent lower bound for this problem.", "A trivial lower bound is $\\Omega (n)$ distance computations, as we need to perform at least one distance computation for every data point.", "However, we have as of yet been unable to provide any tighter lower bounds in terms of the $\\rho _i$ 's or any larger scale structure as mentioned above."
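The disjointness claim in the unit-circle example above can be verified numerically; a quick sanity check, assuming Euclidean distance and following the text's labelling of the medoid as the centre point:

```python
import numpy as np

# Check the unit-circle example: with medoid x_0 = (0, 0) and the antipodal
# points x_i = (1, 0), x_k = (-1, 0), the events {Delta_hat_i < 0} and
# {Delta_hat_k < 0} never occur for the SAME reference point x_J.  The region
# of the circle closer to x_i than to x_0 and the region closer to x_k than
# to x_0 are disjoint 120-degree arcs, so the joint probability is zero.
n = 360
angles = 2 * np.pi * np.arange(n) / n
circle = np.c_[np.cos(angles), np.sin(angles)]   # candidate reference points x_J
x0 = np.zeros(2)
xi = np.array([1.0, 0.0])
xk = np.array([-1.0, 0.0])

dist = lambda p, q: np.linalg.norm(p - q, axis=-1)
delta_i = dist(xi, circle) - dist(x0, circle)    # Delta_hat_i as x_J varies
delta_k = dist(xk, circle) - dist(x0, circle)
assert not ((delta_i < 0) & (delta_k < 0)).any()
```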
1906.04356
[ [ "Independence in Arithmetic: The Method of $(\\mathcal L, n)$-Models" ], [ "Abstract I develop in depth the machinery of $(\\mathcal L, n)$-models originally introduced by Shelah and, independently and in a slightly different form, by Kripke.", "This machinery allows fairly routine constructions of true but unprovable sentences in $\\mathsf{PA}$.", "I give two applications: 1.", "Shelah's alternative proof of the Paris-Harrington theorem, and 2.", "The independence over $\\mathsf{PA}$ of a new $\\Pi^0_1$ Ramsey theoretic statement about colorings of finite sequences of structures." ], [ "Introduction", "In [7] $(\\mathcal {L}, n)$ -models are used by Shelah explicitly to reprove the Paris-Harrington Theorem, and a similar idea is used implicitly to give an example of a true but unprovable $\\Pi ^0_1$ -sentence.", "In fact, the method Shelah employs turns out to be very flexible.", "The goal of this work is to show how $(\\mathcal {L}, n)$ -models can be used to routinely construct finitary Ramsey theoretic statements, even $\\Pi ^0_1$ ones, which can be shown to be true in the standard model but are not provable in $\\mathsf {PA}$ .", "In this article, I develop the machinery of $(\\mathcal {L}, n)$ -models, beginning with the definitions and lemmas of [7], though often using slightly strengthened forms.", "This is the content of the section following this one.", "After setting the scene, two applications are given (Sections 3 and 4 below, respectively).", "First, I work through Shelah's proof of the Paris-Harrington Theorem.", "Second, I give a new example of a true but unprovable $\\Pi ^0_1$ -statement.", "My hope is that the reader, having seen two applications back to back, should start to see how one can apply the ideas to a wide variety of contexts.", "The final section ends with a discussion and some open questions.", "Let me finish this introduction with a word about the history of the ideas discussed here.", "In his article, Shelah states that he was
motivated by a question of Harrington as to whether the success of the Paris-Harrington theorem could be reproduced with a $\\Pi ^0_1$ -statement.", "While he briefly mentions the article in his reflection [8], it does not seem he ever pursued the ideas further in any published work.", "Independently, a similar idea was also discovered by Kripke under the name fulfillment (here we use this word for the relation $\\models ^*$ , inspired by Kripke).", "Kripke's work remains (to the best of my knowledge) unpublished, though several other authors have written on it, for example [6] and [5].", "As far as I know, the only other place a version of $(\\mathcal {L}, n)$ -models à la Shelah has appeared is in the beautiful article [11] by Wilkie, there to provide a very different type of application.", "What I call $(\\mathcal {L}, n)$ -models in this article are called approximating structures by Wilkie.", "My terminology throughout this text is mostly standard, conforming, for example, to that of [4] and [3].", "Those books may also be consulted for any undefined concepts in the theory of models of $\\mathsf {PA}$ .", "Acknowledgments.", "The material in this paper benefited from the input of many people.", "Foremost, I would like to thank Roman Kossak for introducing the subject to me and for his kind and patient advice throughout.", "Much of the research here was completed during an independent study Roman supervised for me at the CUNY Graduate Center during the Fall 2018 semester.", "Second, I would like to thank Henry Towsner and Kameryn Williams for many very helpful discussions.", "Third, I would like to thank the participants of the CUNY Models of $\\mathsf {PA}$ seminar, the UPenn Logic Seminar and the JAF 38 conference where various versions of this material were presented.", "I would also like to thank the anonymous referee for their very helpful comments and close reading, which improved the paper significantly.", "Last but certainly not least, the author gratefully
acknowledges the support of the Austrian Science Fund FWF through grant number Y1012-N35." ], [ "Partial $\\mathcal {L}$ Structures and {{formula:f73df097-5a39-49f7-8a66-559cf4a98969}} -models", "Throughout let's fix a finite signature first order language $\\mathcal {L}$ .", "Later on $\\mathcal {L}$ will be assumed to extend the language of $\\mathsf {PA}$ , which I denote $\\mathcal {L}_{\\mathsf {PA}}$ , by (at most) finitely many predicate and constant symbols (and always including the symbol $<$ for order), however the basic definitions below do not depend on this.", "It is important though that the signature of $\\mathcal {L}$ be finite.", "In what follows there are some metamathematical questions that need to be addressed.", "In theory we would like to, as much as possible, argue and formalize concepts in $\\mathsf {PA}$ via arithmetization in the usual way.", "However, this creates some awkwardness when discussing (potentially) infinite objects.", "Following a suggestion of the anonymous referee we instead, when necessary, formalize our concepts in $\\mathsf {ACA}_0$ , the second order arithmetic augmented with arithmetic comprehension axioms, see [9].", "Since this theory is conservative over $\\mathsf {PA}$ for first order statements ([9]), all arguments and concepts which are finite (internally) could in fact be formalized in $\\mathsf {PA}$ , albeit with potentially more awkward phrasing.", "To facilitate the formalization going forward let me be clear that, unless otherwise stated, every instance of “finite\" means potentially “non-standard finite\".", "Definition 2.1 (Partial $\\mathcal {L}$ structure) A structure $\\mathcal {M} = \\langle M, \\cdots \\rangle $ is called a partial $\\mathcal {L}$ structure if $M$ is a set, every constant symbol $c$ in the signature of $\\mathcal {L}$ is interpreted as some member $c^M \\in M$ , every relation symbol $R$ with arity $n$ in the signature of $\\mathcal {L}$ is interpreted as a relation of the appropriate 
arity $R^M \\subseteq M^n$ and the function symbols are interpreted as (potentially) partial functions of the appropriate arity on $M$ .", "In other words, $\\mathcal {M}$ is an $\\mathcal {L}$ -structure with partial functions as opposed to total ones.", "Remark 1 Remarking on the formalization, we may take one of two approaches here.", "Either we can work over $\\mathsf {PA}$ and, implicitly, structures are assumed to be (internally) finite, or else we may work over $\\mathsf {ACA}_0$ and allow structures to be infinite (coded by a set of natural numbers).", "As explained above, this does not change the proof theoretic strength of the results of this section, and it is easy to see that the proofs mostly work the same regardless of how one chooses to approach this question.", "However, some readers (the author included) may find it conceptually easier to take the “$\\mathsf {PA}$ approach\" and only consider finite structures coded as elements of a fixed model of $\\mathsf {PA}$ .", "In fact, this approach is all that is needed for the applications in Sections 3 and 4.", "If $f$ is a function and $\\bar{a} \\in M^{ln(\\bar{a})}$ is such that $f(\\bar{a})$ is not defined, I declare $f(\\bar{a})$ not to be a well-defined term, and any sentence involving it is not a well-formed formula.", "Allowing for this caveat, the recursive model-theoretic definition of satisfaction $\\mathcal {M} \\models \\varphi $ can be given as usual.", "It was pointed out to me by Victoria Gitman that there is an ambiguity here concerning how to interpret $\\mathcal {M} \\models \\forall x \\varphi (x)$ if it is not the case that for every $a \\in \\mathcal {M}$ the formula $\\varphi (a)$ is well defined.", "In fact, in all applications we only need $\\mathcal {M} \\models \\varphi $ for atomic $\\varphi $ , so this issue doesn't actually arise.", "Nevertheless, let me take the most restrictive approach and assert that if $\\mathcal {M} \\models \\forall x \\varphi (x)$ then in particular $\\varphi
(a)$ is defined for each $a \\in \\mathcal {M}$ .", "There is a key example of a partial structure I will return to often.", "Example 2.2 (Key Example) Let $\\mathcal {L}$ extend $\\mathcal {L}_{\\mathsf {PA}}$ and let $n$ be a (as always, possibly non-standard) natural number.", "Define $\\mathcal {M}_n$ to be the structure with universe $n = \\lbrace 0, 1, ..., n-1\\rbrace $ and $\\le $ , $+$ , $\\times $ etc. defined as usual but restricted to this set.", "For instance $\\mathcal {M}_6 \\models 1 + 1 = 2$ but the term $3 \\times 4$ is treated as syntactic nonsense.", "Many of the standard notions from basic model theory can be developed in the context of partial models.", "In particular, one can define isomorphism of structures in the natural way.", "Also, one could instead work with relational signatures and use relations representing the graphs of the partial functions, though I view the use of function symbols as enlightening in applications.", "Moreover, they provide more natural definitions when we choose to tweak basic notions from model theory for our context.", "The definition of substructure, given below, is the first such example.", "Definition 2.3 (Substructure) Given two partial $\\mathcal {L}$ structures $\\mathcal {M}$ and $\\mathcal {N}$ , say that $\\mathcal {M}$ is a substructure of $\\mathcal {N}$ if it is a substructure in the usual sense and for all tuples $a_0, ..., a_{n-1} \\in M$ and all $n$ -ary function symbols $f$ we have that $f^{\\mathcal {N}}(a_0,...,a_{n-1})$ is defined.", "In this case I write $\\mathcal {M} \\subseteq \\mathcal {N}$ .", "If a partial $\\mathcal {L}$ -structure $\\mathcal {M}$ happens to interpret all function symbols as total functions then I call the structure total.", "Note that by the definition of the substructure relation $\\mathcal {M} \\subseteq \\mathcal {M}$ if and only if it is total.", "In particular, non-total structures are not substructures of themselvesThis observation was pointed out to me by Professor
Alfred Dolich and I thank him for the comment.", "Continuing the key example from above and assuming that $+$ , $\\times $ and the successor function are the only function symbols in $\\mathcal {L}$ , it follows that if $n > m^2$ then $\\mathcal {M}_m \\subseteq \\mathcal {M}_n$ since for all $l, k < m$ , $S(l), l\\times k, l + k \\le m^2 < n$ so these are all well defined terms in $\\mathcal {M}_n$ .", "The following definition is the main character of the article.", "Definition 2.4 ($(\\mathcal {L}, n)$ -model) Let $n$ be a finite number (as always, possibly non-standard).", "An $(\\mathcal {L}, n)$ -model is a sequence $\\vec{\\mathcal {A}} = \\langle \\mathcal {A}_0, \\mathcal {A}_1, ..., \\mathcal {A}_{n-1}\\rangle $ of length $n$ such that for all $i < n$ , $\\mathcal {A}_i$ is a partial $\\mathcal {L}$ -structure and for all $i < n-1$ we have that $\\mathcal {A}_i \\subseteq \\mathcal {A}_{i+1}$ .", "Example 2.5 (Key Example Continued) Let $\\vec{m} = m_0 < m_1 < ... < m_{n-1}$ be a sequence of natural numbers with $m_i^2 < m_{i+1}$ .", "There is an associated $(\\mathcal {L}, n)$ -model, namely $\\vec{\\mathcal {M}}_{\\vec{m}} = \\langle \\mathcal {M}_{m_0}, \\mathcal {M}_{m_1}, \\cdots , \\mathcal {M}_{m_{n-1}}\\rangle $ .", "I call such a model a square increasing $(\\mathcal {L}, n)$-model and $\\vec{m}$ its associated square increasing sequence.", "Let us set some notation and terminology.", "Given an $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ I will often write $\\mathcal {A}_i$ for the $i^{\\rm th}$ model in the sequence.", "I call $\\mathcal {A}_{n-1}$ the top model.", "If $\\mathcal {A}_0$ is a partial structure, $\\vec{\\mathcal {B}}$ is an $(\\mathcal {L}, n)$ -model and $\\mathcal {A}_0 \\subseteq \\mathcal {B}_0$ then I write $\\langle \\mathcal {A}_0, \\vec{\\mathcal {B}}\\rangle $ for the $(\\mathcal {L}, n+1)$ -model $\\langle \\mathcal {A}_0, \\mathcal {B}_0, \\mathcal {B}_1, ..., \\mathcal {B}_{n-1}\\rangle $ and similarly for
$\\langle \\vec{\\mathcal {B}}, \\mathcal {A}_n\\rangle $ when $\\mathcal {A}_n$ is a superstructure of the top model of $\\vec{\\mathcal {B}}$ .", "Finally note that if $\\vec{\\mathcal {A}}$ is an $(\\mathcal {L}, n)$ -model then $\\bigcup \\mathcal {A}$ is the top model and in particular is a partial $\\mathcal {L}$ -structure.", "We will see that $(\\mathcal {L}, n)$ -models admit a kind of satisfaction relation called fulfillment which can be used to code consistency statements into finite combinatorial ones.", "In order to define fulfillment, let me set some conventions.", "From now on, given a formula $\\varphi $ , denote by $dp(\\varphi )$ the depth of $\\varphi $ i.e.", "the number of quantifiers appearing in $\\varphi $ (NOT the number of quantifier alternations).", "Denote by $|\\varphi |$ the syntactic length of $\\varphi $ .", "Given an $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ , and numbers $i< j < n$ I denote by $\\vec{\\mathcal {A}}^{[i, j]}$ the sequence $\\mathcal {A}_i \\subseteq \\mathcal {A}_{i+1} \\subseteq ...
\\subseteq \\mathcal {A}_j$ .", "Note that this is an $(\\mathcal {L}, j-i + 1)$ -model.", "Definition 2.6 (Fulfillment) Let $\\varphi (\\vec{x})$ be an $\\mathcal {L}$ formula, $\\vec{\\mathcal {A}}$ an $(\\mathcal {L}, n)$ -model for some $n$ , and $\\vec{a}$ a tuple of elements of the same arity as $\\vec{x}$ from $\\mathcal {A}_{n-1}$ (the top model).", "Assume there is an $i < n - dp(\\varphi ) - 1$ so that for every term $t(\\vec{x})$ appearing in $\\varphi $ the associated expression $t(\\vec{a})$ is defined in $\\mathcal {A}_{i+1}$ and let $i_{\\vec{a}}$ be the least such $i$ .", "Note that the existence of such an $i$ implies in particular that parameters are not in the top model.", "We define recursively what we mean by $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ (read as $\\mathcal {A}$ fulfills $\\varphi (\\vec{a})$ ).", "If $\\varphi $ is atomic, then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if $\\mathcal {A}_{n-1} \\models \\varphi (\\vec{a})$ .", "If $\\varphi := \\psi _1 \\wedge \\psi _2$ , then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if $\\mathcal {A} \\models ^* \\psi _1(\\vec{a})$ and $\\mathcal {A} \\models ^* \\psi _2(\\vec{a})$ .", "If $\\varphi := \\psi _1 \\vee \\psi _2$ , then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if $\\mathcal {A} \\models ^* \\psi _1(\\vec{a})$ or $\\mathcal {A} \\models ^* \\psi _2(\\vec{a})$ .", "If $\\varphi :=\\lnot \\psi $ , then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if it is not the case that $\\mathcal {A} \\models ^* \\psi (\\vec{a})$ .", "If $\\varphi := \\exists y \\psi (y, \\vec{x})$ , then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if there is a $b \\in \\mathcal {A}_{i+1}$ such that $\\mathcal {A}^{[i+1, n-1]} \\models ^* \\psi (b, \\vec{a})$ .", "If $\\varphi := \\forall y \\psi (y, \\vec{x})$ , then $\\mathcal {A} \\models ^* \\varphi (\\vec{a})$ if and only if for all $j \\in [i, n-dp(\\psi ) - 2]$ and all $b
\\in \\mathcal {A}_{j}$ we have that $\\mathcal {A} \\models ^* \\psi (b, \\vec{a})$ .", "First of all, observe that the recursion above is well defined: when we pass from the existential sentence $\\exists y \\, \\psi (y, \\vec{a})$ to the witness $\\psi (b, \\vec{a})$ we increase $i$ by 1 but strip off one quantifier, so the setup described in the first paragraph of the definition is still satisfied.", "Similar statements hold for the universal case.", "The intuition of the definition of $\\models ^*$ is as follows.", "An $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ is an attempt to build an actual model with each element of the sequence a further step of the construction.", "At the $n^{\\rm th}$ stage we are asked to guess what will be true in the final structure, built in $\\omega $ many steps.", "Guessing $\\varphi $ will be true is exactly the statement that $\\vec{\\mathcal {A}}\\models ^* \\varphi $ .", "This is underlined by the restriction in the existential case that we only look for witnesses appearing early enough on in the sequence of models since we can't yet make promises about what will happen with the elements of the top model.", "An important though immediate observation is the following.", "If $\\vec{\\mathcal {A}}\\models ^* \\varphi (\\vec{a})$ and for every term $t(\\vec{x})$ appearing in $\\varphi $ the associated expression $t(\\vec{a})$ is an element of $\\mathcal {A}_0$ , then, for every $m$ so that $dp(\\varphi ) < m \\le n$ we already have $\\vec{\\mathcal {A}}^{[0, m-1]} \\models ^* \\varphi (\\vec{a})$ .", "It follows in particular that if $\\vec{\\mathcal {A}}$ is an $(\\mathcal {L}, n)$ -model, $\\varphi (\\vec{x})$ is a formula, $\\vec{a} \\in \\mathcal {A}_{n-1}$ and $\\vec{\\mathcal {A}}\\models ^* \\varphi (\\vec{a})$ (and so $\\varphi (\\vec{x})$ has small enough depth relative to $n$ etc) then given any $(\\mathcal {L}, m)$ -model $\\vec{\\mathcal {B}}$ for some $m > n$ so that $\\vec{\\mathcal {B}}^{[0, n-1]} = \\vec{\\mathcal {A}}$
we have that $\\vec{\\mathcal {B}}\\models ^* \\varphi (\\vec{a})$ .", "This type of reasoning will be used frequently.", "In words what it amounts to is that end-extending the $(\\mathcal {L}, n)$ -model does not change the fulfillment of formulas of small depth.", "Rather, what extending allows us to do is make new formulas eligible to be fulfilled or not.", "From now on I fix $\\mathcal {L}$ to extend $\\mathcal {L}_{\\mathsf {PA}}$ by finitely many relation symbols, but no new function symbolsIn what follows we could add finitely many function symbols but then the use of square-increasing sequences $\\vec{m}$ would need to be replaced by whatever relation guarantees that the growth of $\\vec{m}$ dominates the finitely many functions we add.", "For example, if we added a function symbol for the function $n \\mapsto 2^n$ then we would need that the sequence satisfies $m_{i+1} > 2^{m_i}$ for each $i < n-1$ .", "Assuming all such functions are definable in $\\mathsf {PA}$ (external to the $(\\mathcal {L}, n)$ -models), the details of this modification are cosmetic and left to the reader.", "In the rest of the article when I write for all $\\mathcal {L}$ ...
it is implied that I am quantifying only over such languages.", "As an example of $\\models ^*$ and a lemma for a theorem to be proved later let's consider what is fulfilled by models of the form $\\vec{\\mathcal {M}}_{\\vec{m}}$ described above.", "It turns out these models fulfill a large fragment of $\\mathsf {PA}$ , namely Robinson's arithmetic $\\mathsf {Q}$ , see [10].", "Since fulfillment is very sensitive to syntax, let me note explicitly the version of the axioms of Robinson's arithmetic $\\mathsf {Q}$ that I will use: (1) $\\forall x \\, (0 \\ne S(x))$ (2) $\\forall x \\, \\forall y\\, (S(x) = S(y) \\rightarrow x=y)$ (3) $\\forall x\\, (x \\ne 0 \\rightarrow \\exists y\\, (x = S(y)))$ (4) $\\forall x\\, (x + 0 = x)$ (5) $\\forall x \\, \\forall y\\, (x + S(y) = S(x+y))$ (6) $\\forall x \\,(x \\times 0 = 0)$ (7) $\\forall x \\, \\forall y \\, (x \\times S(y) = (x \\times y) + x)$ Recall that $\\mathsf {PA}$ can be recovered from $\\mathsf {Q}$ by adding the induction schema or, equivalently, the least number principle schema (cf [10]).", "In the lemma below, if $\\Gamma $ is a set of sentences, I mean by $\\vec{\\mathcal {A}}\\models ^* \\Gamma $ that for each $\\varphi \\in \\Gamma $ , $\\vec{\\mathcal {A}}\\models ^* \\varphi $ .", "Lemma 2.7 Suppose $\\vec{\\mathcal {M}}_{\\vec{m}}$ is a square increasing $(\\mathcal {L}, n)$ -model of length at least 3 whose associated square increasing sequence is $\\vec{m}$ and $m_0 > 1$ .", "Then $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\mathsf {Q}$ .", "Also, $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^*$ “$<$ is a linear order with no greatest element\".", "The significance of $n \\ge 3$ is that all of the axioms of $\\mathsf {Q}$ have depth at most 2.", "The assumption $m_0 > 1$ ensures that all terms are in the 0th model.", "It can be removed if we let $n \\ge 4$ .", "This is essentially a straightforward examination of the definitions and all of the axioms are proved in the same way.", "Let me prove Axioms 1 and 7 and the “also\" part to show how the definitions work in
practice.", "The others can be proved similarly.", "Fix a square increasing sequence $\\vec{m}$ and let $\\vec{\\mathcal {M}}_{\\vec{m}}$ be the associated square increasing $(\\mathcal {L}, n)$ -model.", "Let's first prove that $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\forall x \\, (0 \\ne S(x))$ .", "This statement has only the parameter $0 \\in \\mathcal {M}_{m_0}$ and the term $S(x)$ so every term applied to the parameter appears in $\\mathcal {M}_{m_0}$ i.e.", "0 and $1 = S(0)$ are both in $\\mathcal {M}_{m_0}$ (this is where we used $m_0 > 1$ ).", "Note moreover that the sentence, having depth 1, is such that $n-dp({\\rm Axiom \\, 1}) -1 = n-2$ .", "Since $n \\ge 3$ we have that $n-2 > 0$ .", "Per the definition of fulfillment, we need to show that for each $j \\in [0, n-2]$ and each $a \\in \\mathcal {M}_{m_j}$ we have $\\mathcal {M}_{m_{n-1}} \\models 0 \\ne S(a)$ .", "Clearly this is true provided it makes sense, i.e.", "the term $S(a)$ is defined.", "But it is, since $a \\in \\mathcal {M}_{m_{j}} \\subseteq \\mathcal {M}_{m_{n-2}}$ and hence $S(a)$ is in $\\mathcal {M}_{m_{n-1}}$ .", "For Axiom 7, observe that it has no parameters, so, conforming to the notation in the definition of fulfillment, we take $i_{\\vec{a}} = 0$ here where $\\vec{a} = \\emptyset $ .", "Also, Axiom 7 has depth 2 so we get $n - dp({\\rm Axiom \\, 7}) - 1 = n - 3 \\ge 0$ .", "Now, we need to show that for each $j_0 \\in [0, n-3]$ and each $a_0 \\in \\mathcal {M}_{m_{j_0}}$ , we have $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\forall y (a_0 \\times S(y) = (a_0 \\times y) + a_0)$ .", "Unwinding this further, observe that we now have a depth 1 formula and all of the terms in this formula, when applied to $a_0$ , are at most $a_0^2 + a_0$ so they appear, at the latest, in $\\mathcal {M}_{m_{j_0 + 1}}$ since $a_0 < m_{j_0}$ and $m_{j_0}^2 < m_{j_0 + 1}$ .", "Since, moreover, $j_0 < n - 2$ we have that $j_0 + 1 < n-1$ so the parameters and terms from $\\forall y (a_0 \\times S(y) = (a_0 \\times
y) + a_0)$ do not come from the top model so the above instance of fulfillment makes sense.", "Let's assume for simplicity (without any real loss) that $a_0 \\times S(a_0) \\in \\mathcal {M}_{m_{j_0 + 1}} \\setminus \\mathcal {M}_{m_{j_0}}$ .", "Now, continuing with the definition, $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\forall y (a_0 \\times S(y) = (a_0 \\times y) + a_0)$ if for all $j_1 \\in [j_0 + 1, n - 2]$ and all $a_1 \\in \\mathcal {M}_{m_{j_1}}$ we have $\\mathcal {M}_{m_{n-1}} \\models a_0 \\times S(a_1) = a_0 \\times a_1 + a_0$ .", "As before the latter is clearly true assuming that all of the terms are defined.", "However, since $a_0, a_1 < m_{j_1}$ and $m_{j_1}^2 < m_{n-1}$ this is the case.", "For the “also\" part, it is not hard to see that $\\le $ is linear.", "What's surprising is that, even though the structures are finite and have a greatest element externally, the existence of a greatest element is not fulfilled by the sequence of models.", "Indeed, notice that to say that $\\le $ has no top element means formally that the following sentence is fulfilled: $\\forall x \\exists y (x \\le y \\wedge x \\ne y)$ .", "This has depth 2.", "Thus, $\\vec{\\mathcal {M}}_{\\vec{m}}$ fulfills this sentence if for every $j \\in [0, n-3]$ and every $b \\in \\mathcal {M}_{m_j}$ , we have that $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\exists y (b \\le y \\wedge b \\ne y)$ .", "This latter sentence is fulfilled just in case there is an $a \\in \\mathcal {M}_{m_{j+1}}$ so that $\\mathcal {M}_{m_{n-1}} \\models b \\le a \\wedge b \\ne a$ .", "This is true since $\\mathcal {M}_{m_j}$ is a proper initial segment of $\\mathcal {M}_{m_{j+1}}$ .", "The utility of $\\vec{\\mathcal {A}}\\models ^* \\varphi $ is described by the next few lemmas.", "I will say that $\\varphi $ has a model if there is a (total) $\\mathcal {L}$ structure $\\mathcal {M}$ so that $\\mathcal {M} \\models \\varphi $ (in the normal sense) and that $\\varphi $ has an $(\\mathcal {L}, n)$ -model if there is an
$(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ so that $\\vec{\\mathcal {A}}\\models ^* \\varphi $ .", "Here, as described in the first paragraph of this section, I opt to formalize “$\\varphi $ has a model\" in $\\mathsf {ACA}_0$ as opposed to via definable models and arithmetized completeness.", "The following lemma is perhaps the most important as it will be used to bound the complexity of statements we wish to prove are independent.", "In a weakened form it appears as Claim 1.3 b) of [7].", "In the lemma below I will assume that $\\vec{\\mathcal {A}}$ has an external well order and use it implicitly, referring for example to the least element of $\\vec{\\mathcal {A}}$ so that ... holds.", "Note that in $\\mathsf {PA}$ and $\\mathsf {ACA}_0$ one can assume this for free.", "In principle the model constructed in the lemma below depends on this well order; however, in practice it never seems to make a difference.", "Lemma 2.8 (The Finite Model Lemma) Let $n$ be a natural number and $\\varphi $ be an $\\mathcal {L}$ -sentence of depth at most $n - 2$ .", "Let $|\\mathcal {L}|$ denote the cardinality of the signature of $\\mathcal {L}$ and let $j$ be the largest arity of a function symbol.", "Given any $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ , there is another $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {B}}$ so that the following hold: (1) $\\mathcal {B}_0$ has cardinality at most $|\\mathcal {L}|$ (2) $\\mathcal {B}_{i+1}$ has cardinality at most $|\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}| + \\binom{|\\varphi |}{2}[|\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}|]^{|\\varphi |} + [\\sum _{m=0}^{i-1}\\binom{i}{m}][|\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}| + \\binom{|\\varphi |}{2}(|\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}|)^{|\\varphi |}]^{|\\varphi |}\\binom{|\\varphi |}{2}2$ (3) The universe of $\\mathcal {B}_i$ is a subset of the universe of $\\mathcal {A}_i$ (but not necessarily a substructure) for each $i < n$ .", "(4) For every subformula $\\psi (\\vec{x})$ of $\\varphi $ and every tuple $\\vec{b} \\in \\mathcal {B}^{|\\vec{x}|}$ we have $\\vec{\\mathcal {B}}\\models ^* \\psi (\\vec{b})$ if and only if $\\vec{\\mathcal {A}}\\models ^* \\psi (\\vec{b})$ and in particular, fulfillment for $\\psi (\\vec{b})$ is defined for $\\vec{\\mathcal {B}}$ if and only if it is defined for $\\vec{\\mathcal {A}}$ .", "(5) If $\\mathcal {A}_{i_0} \\subseteq ... \\subseteq \\mathcal {A}_{i_{m}}$ is a subsequence of $\\vec{\\mathcal {A}}$ of length $m + 1 \\le n$ and $\\psi (x)$ is a subformula of $\\varphi $ so that the minimal $x$ with $\\langle \\mathcal {A}_{i_0}, \\mathcal {A}_{i_1}, \\mathcal {A}_{i_3}, ..., \\mathcal {A}_{i_m} \\rangle \\models ^* \\psi (x)$ is different from the minimal $x$ so that $\\langle \\mathcal {A}_{i_0}, \\mathcal {A}_{i_2}, \\mathcal {A}_{i_3}, ..., \\mathcal {A}_{i_m} \\rangle \\models ^* \\psi (x)$ then both such $x$ 's appear in their respective $\\mathcal {B}_{i_a}$ 's.", "Moreover, given $\\varphi $ , $\\mathcal {L}$ and $\\mathcal {A}$ , the procedure for producing $\\mathcal {B}$ is computable.", "Roughly the lemma is a version of the downward Löwenheim-Skolem theorem with elementarity restricted to subformulae of $\\varphi $ .", "The bounds in condition 2 are admittedly the stuff of nightmares for combinatorialists.", "Moreover they are almost certainly not best possible.", "However, they also are not so important.", "All that we will need going forward is that they are primitive recursive in $i$ , $|\\varphi |$ , $|\\mathcal {L}|$ , and $j$ .", "In particular they do not depend on $n$ or $\\vec{\\mathcal {A}}$ .", "Also the meat of the lemma is items 1 through 4.", "Item 5 is a technical condition that will be used in an application later.", "The lemma without the extra condition goes through just as well and the reader is encouraged the first time through to read the proof without worrying about this final condition.", "It will be used only in Section 4.",
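Since all that matters about the bounds in condition 2 is that they are primitive recursive in $i$ , $j$ , $|\\varphi |$ and $|\\mathcal {L}|$ (and, in particular, independent of $n$ and $\\vec{\\mathcal {A}}$ ), one can check this by simply iterating the recurrence. The following Python sketch is purely illustrative and not part of the formal development; the function name `col` is my own, and the constants follow the rough bounds from the proof ( $|\\mathcal {B}_i^*| \\le |\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}|$ and $|\\mathcal {B}_i^{**}| \\le |\\mathcal {B}_i^*| + \\binom{|\\varphi |}{2}|\\mathcal {B}_i^*|^{|\\varphi |}$ ) rather than any optimized estimate.

```python
from math import comb

def col(i, j, phi_len, sig_len):
    """Upper bound Col(i, j, |phi|, |L|) on the size of B_i, obtained by
    iterating the rough recurrence behind the Finite Model Lemma."""
    b = sig_len  # |B_0| <= |L|: at most one element per constant symbol
    for step in range(i):
        c2 = comb(phi_len, 2)              # bound on the number of subformulae
        star = b + b**j * sig_len          # |B*|: close under function symbols
        dstar = star + c2 * star**phi_len  # |B**|: add quantifier witnesses
        # the technical condition adds at most 2 elements per subsequence,
        # tuple of length at most |phi|, and subformula
        b = dstar + sum(comb(step, m) for m in range(step)) * dstar**phi_len * c2 * 2
    return b
```

Note that neither $n$ nor $\\vec{\\mathcal {A}}$ appears as an argument: the bound depends only on the four displayed inputs, which is the one property of $Col$ used later.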
"I will define by induction, for $i < n$ , models $\\mathcal {B}_i$ so that $\\mathcal {B}_{i} \\subseteq \\mathcal {B}_{i+1}$ , the domain of $\\mathcal {B}_i$ is contained in that of $\\mathcal {A}_i$ for each $i$ and so that $\\mathcal {B}_i$ is of the appropriate size.", "Then I will set $\\vec{\\mathcal {B}}= \\langle \\mathcal {B}_i \\; | \\;i <n \\rangle $ and argue that conditions 4 and 5 hold.", "It will be clear from the construction that this procedure can be carried out recursively, given knowledge of $\\mathcal {A}$ , $\\mathcal {L}$ and $\\varphi $ .", "Before beginning let us count a few objects so as to reduce the text in the main part of the proof when we compute the bounds.", "First note that the number of subformulae of $\\varphi $ is at most $\\binom{|\\varphi |}{2}$ since each subformula is uniquely determined by picking a start point and an endpoint.", "Next note that given a finite partial $\\mathcal {L}$ -structure $\\mathcal {C}$ , there are at most $|\\mathcal {C}|^j|\\mathcal {L}|$ many expressions in the set $\\lbrace f(\\vec{c}) \\; | \\; \\vec{c} \\in \\mathcal {C}^{|\\vec{c}|} \\; {\\rm and} \\; f \\in \\mathcal {L} \\; {\\rm a \\; function\\; symbol}\\rbrace $ .", "Now, let $\\mathcal {B}_0 \\subseteq \\mathcal {A}_0$ be the set of all individual constants, plus the least element of $\\mathcal {A}_0$ if there are no constants.", "Note that $|\\mathcal {B}_0| \\le |\\mathcal {L}|$ .", "Next assume inductively that $\\mathcal {B}_i$ is defined.", "We will define $\\mathcal {B}_{i+1}$ .", "First expand $\\mathcal {B}_i$ to $\\mathcal {B}_i^* =\\mathcal {B}_i \\cup \\lbrace f(\\bar{b}) \\; | \\; \\bar{b} \\subseteq \\mathcal {B}_i \\; {\\rm and} \\; f \\; {\\rm a \\; function \\; symbol} \\rbrace $ .", "Note that $\\mathcal {B}_i^* \\subseteq \\mathcal {A}_{i+1}$ since every element of $\\mathcal {B}_i$ is in $\\mathcal {A}_i$ and $\\mathcal {A}_{i}$ is closed under functions in $\\mathcal {A}_{i+1}$ .", "Also observe that $\\mathcal
{B}_i^*$ has size at most $|\\mathcal {B}_i| + |\\mathcal {B}_i|^j|\\mathcal {L}|$ by the discussion in the previous paragraph.", "Now if $\\exists y \\psi (y, \\overline{x})$ is a subformula of $\\varphi $ and $\\overline{a} \\subseteq \\mathcal {B}_i$ then if $\\vec{\\mathcal {A}}\\models ^* \\exists y \\psi (y, \\overline{a})$ , pick the least $b \\in \\mathcal {A}_{i+1}$ witnessing this.", "At the same time if $\\forall y \\psi (y, \\overline{x})$ is a subformula of $\\varphi $ so that there is a $c \\in \\mathcal {A}_{i+1}$ with $\\vec{\\mathcal {A}}\\models ^* \\lnot \\psi (c, \\overline{a})$ , pick the least such $c \\in \\mathcal {A}_{i+1}$ .", "Let $\\mathcal {B}_i^{**}$ be the union of $\\mathcal {B}_i^*$ with all of these $b$ 's and $c$ 's.", "Since for each subformula of $\\varphi $ and each tuple of $\\mathcal {B}_i^*$ of length at most $|\\varphi |$ we chose at most one element we get that $|\\mathcal {B}_i^{**}| \\le |\\mathcal {B}_i^*| + \\binom{|\\varphi |}{2}|\\mathcal {B}_i^*|^{|\\varphi |}$ .", "Finally if $\\psi ^{\\prime }$ is any subformula, $i > m$ and there are $i_0, ..., i_m < i$ so that the minimal $x$ with $\\langle \\mathcal {A}_{i_0}, \\mathcal {A}_{i_1}, \\mathcal {A}_{i_3}, ..., \\mathcal {A}_{i_m} \\rangle \\models ^* \\psi ^{\\prime }(x)$ is different from the minimal $x$ so that $\\langle \\mathcal {A}_{i_0}, \\mathcal {A}_{i_2}, \\mathcal {A}_{i_3}, ..., \\mathcal {A}_{i_m} \\rangle \\models ^* \\psi ^{\\prime } (x)$ then choose both such $x$ 's.", "Now let $\\mathcal {B}_{i+1}$ be $\\mathcal {B}_i^{**}$ alongside all such $x$ 's.", "In this last stage, for each $m < i$ and each $m$ -tuple of elements less than $i$ we added at most 2 elements for each tuple of $\\mathcal {B}^{**}_i$ of size at most $|\\varphi |$ and each subformula of $\\varphi $ .", "Putting this together we get a rough bound of $|\\mathcal {B}_{i+1}| \\le |\\mathcal {B}^{**}_i| + [\\sum _{m=0}^{i-1}\\binom{i}{m}]|\\mathcal {B}_i^{**}|^{|\\varphi |}\\binom{|\\varphi |}{2}2$ .", "Unwinding the
bounds of $\\mathcal {B}_i^*$ and $\\mathcal {B}^{**}_i$ we get the bounds claimed in condition 2 of the lemma statement.", "By construction we have dealt with condition 5 so it remains to see that $\\vec{\\mathcal {B}}\\models ^* \\psi $ if and only if $\\vec{\\mathcal {A}}\\models ^* \\psi $ for each subformula $\\psi $ with parameters in $\\mathcal {B}_i$ (say).", "This is by induction on $\\psi $ .", "If $\\psi $ is atomic, then $\\vec{\\mathcal {B}}\\models ^* \\psi $ if and only if $\\mathcal {B}_{n-1} \\models \\psi $ by definition.", "Moreover, note that $\\mathcal {B}_{n-1}$ is a substructure of $\\mathcal {A}_{n-1}$ in the normal sense (not necessarily closed under function symbols) and therefore $\\mathcal {B}_{n-1} \\models \\psi $ if and only if $\\mathcal {A}_{n-1} \\models \\psi $ since $\\psi $ is atomic.", "Finally noting that by definition $\\mathcal {A}_{n-1} \\models \\psi $ if and only if $\\vec{\\mathcal {A}}\\models ^* \\psi $ finishes the atomic case.", "By induction, the Boolean cases are immediate, so we focus on the quantifier cases.", "If $\\psi $ is of the form $\\exists y \\psi ^{\\prime } (y)$ then by our construction, there is a witness in $\\vec{\\mathcal {B}}$ if and only if there is a witness in $\\vec{\\mathcal {A}}$ so this case is taken care of.", "Finally if $\\psi $ is of the form $\\forall y \\psi ^{\\prime } (y)$ then if $\\vec{\\mathcal {A}}\\models ^* \\forall y \\psi ^{\\prime } (y)$ then for each $a \\in \\vec{\\mathcal {A}}^{[i+1, n - dp(\\psi )]}$ and hence each $a \\in \\vec{\\mathcal {B}}^{[i + 1, n - dp(\\psi )]}$ we have $\\vec{\\mathcal {A}}\\models ^* \\psi ^{\\prime } (a)$ and so by the inductive hypothesis $\\vec{\\mathcal {B}}\\models ^* \\psi ^{\\prime } (a)$ which means $\\vec{\\mathcal {B}}\\models ^* \\forall y \\psi ^{\\prime } (y)$ .", "Conversely, if $\\vec{\\mathcal {A}}\\models ^* \\lnot \\forall y \\psi ^{\\prime } (y)$ then there is a witness in $\\vec{\\mathcal {A}}^{[i + 1, n - dp(\\psi )]}$ and the least
such witness was put into $\\vec{\\mathcal {B}}$ during the construction so the converse holds as well.", "Since the exact bounds in the finite model lemma are not important, in what follows, I denote by $Col (i,j , k , l)$ the primitive recursive function giving these bounds where $i$ is the index of the sequence, $j$ is the greatest arity of a function symbol in $\\mathcal {L}$ , $k = |\\varphi |$ , and $l = |\\mathcal {L}|$ ($Col$ for collapse).", "In other words for all $i < n$ the lemma states that $|\\mathcal {B}_i| < Col (i, j, |\\varphi |, |\\mathcal {L}|)$ .", "To push this idea further, let me introduce a notion of isomorphism for $(\\mathcal {L}, n)$ -models.", "Definition 2.9 Let $\\vec{\\mathcal {A}}$ and $\\vec{\\mathcal {B}}$ be two $(\\mathcal {L}, n)$ -models.", "We say that $\\vec{\\mathcal {A}}$ and $\\vec{\\mathcal {B}}$ are isomorphic, denoted $\\mathcal {A} \\cong \\mathcal {B}$ if there is a bijection $g:\\bigcup \\mathcal {A} \\rightarrow \\bigcup \\mathcal {B}$ so that for any $i < n$ $g \\upharpoonright \\mathcal {A}_i$ bijects onto $\\mathcal {B}_i$ and is an isomorphism of partial $\\mathcal {L}$ -structures.", "Using the finite model lemma, if $\\varphi $ has an $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ which is (externally) linearly ordered by $<$ , then it has one whose domain is a finite initial segment of the natural numbers, via the isomorphism induced by the unique order preserving bijection between the domain of the model $\\vec{\\mathcal {B}}$ obtained by the computable procedure described in the finite model lemma and an initial segment of length $|\\mathcal {B}_{n-1}| < Col(n-1, j, |\\varphi |, |\\mathcal {L}|)$ .", "Such a structure is called the F-collapse of $\\vec{\\mathcal {A}}$ for $\\varphi $ (“F\" for fulfillment).", "Next I prove a kind of completeness theorem for $\\models ^*$ .", "This can be proved entirely syntactically via any reasonable arithmetization of proof theory.", "However, appealing to
$\\mathsf {ACA}_0$ , and the fact that this theory proves König's lemma [9], provides a much slicker proof that does not require any notion of proof.", "Lemma 2.10 ($\\mathsf {ACA}_0$ ) Let $\\varphi $ be an $\\mathcal {L}$ -sentence.", "Then $\\varphi $ has a model if and only if it has an $(\\mathcal {L}, n)$ -model for all $n > dp(\\varphi )$ .", "The forward direction is obvious.", "If $\\mathcal {M}$ is total and $\\mathcal {M} \\models \\varphi $ then it is rudimentary to check that, if we let $\\vec{\\mathcal {A}}$ be the $(\\mathcal {L}, n)$ -model so that for all $k < n$ , $\\mathcal {A}_k = \\mathcal {M}$ , then $\\vec{\\mathcal {A}}\\models ^* \\varphi $ .", "For the backward direction, suppose that for all $n > dp(\\varphi )$ , the sentence $\\varphi $ has an $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ .", "By the finite model lemma we can assume moreover that for each $n$ the witnessing $(\\mathcal {L}, n)$ -model is finite and, in fact, via its $F$ -collapse, each $\\mathcal {A}_i$ has domain contained in $Col(i, j, |\\varphi |, |\\mathcal {L}|)$ where $j$ is the largest arity of a function symbol in the signature of $\\mathcal {L}$ .", "For each $n > dp(\\varphi )$ let $T_n$ be the collection of all such $(\\mathcal {L}, n)$ -models of $\\varphi $ .", "Since $Col(i, j, k, l)$ does not depend on $n$ we have that if $k < n$ then $T_k = \\lbrace \\vec{\\mathcal {A}}^{[0, k-1]} \\; | \\; \\vec{\\mathcal {A}}\\in T_n\\rbrace $ .", "Let $T = \\bigcup _{n > dp(\\varphi )} T_n$ .", "We order $T$ by end extension i.e.", "define $\\vec{\\mathcal {A}}\\le _{end} \\vec{\\mathcal {B}}$ just in case $\\vec{\\mathcal {A}}$ is an $(\\mathcal {L}, m)$ -model and $\\vec{\\mathcal {B}}$ is an $(\\mathcal {L}, n)$ -model for $n \\ge m$ and for all $dp(\\varphi ) < k < m$ , $\\mathcal {A}_k = \\mathcal {B}_k$ .", "Note that $(T, \\le _{end})$ is an infinite tree by assumption.", "Moreover, it is finitely branching since for each $k$ and $l$ there are only
finitely many $(\\mathcal {L}, k)$ -models on any given set $\\lbrace 0, ..., l-1\\rbrace $ and hence the levels of $T$ , which are just the $T_n$ 's, are finite.", "Applying König's lemma to this tree, we can conclude in $\\mathsf {ACA}_0$ that it has a branch, $\\langle \\mathcal {A}(n) \\; | \\; n < \\omega \\rangle $ .", "Let $\\mathcal {M} = \\bigcup _{n > dp(\\varphi )} \\mathcal {A}(n)$ .", "It remains to check that $\\mathcal {M}\\models \\varphi $ .", "This is by induction on $\\varphi $ .", "Note that the assumption is that for each $n$ , $\\mathcal {A}(n) \\models ^* \\varphi $ .", "If $\\varphi $ is atomic, then fixing any $n$ we have that $\\mathcal {A}(n) \\models ^*\\varphi $ which means that the top model of that sequence, which is a (partial) substructure of $\\mathcal {M}$ , is a model of $\\varphi $ and hence $\\mathcal {M}\\models \\varphi $ .", "The Boolean cases follow immediately by induction so we look at the quantifier cases.", "Suppose $\\varphi := \\exists x \\psi (x)$ .", "Since $\\mathcal {A}(0) \\models ^* \\exists x \\, \\psi (x)$ there is an $a$ in some model in $\\mathcal {A}(0)$ so that $\\mathcal {A}(0) \\models ^* \\psi (a)$ and hence by induction $\\mathcal {M}\\models \\psi (a)$ so $\\mathcal {M}\\models \\exists x \\, \\psi (x)$ .", "If $\\varphi := \\forall x \\, \\psi (x)$ then for each $a \\in \\mathcal {M}$ we can find an $n_a$ so that $a$ appears early enough on in the sequence of models in $\\mathcal {A}(n_a)$ that, since $\\mathcal {A}(n_a) \\models ^* \\forall x \\, \\psi (x)$ , we get $\\mathcal {A}(n_a) \\models ^* \\psi (a)$ and hence, again by induction, $\\mathcal {M}\\models \\psi (a)$ .", "As a result we get $\\mathcal {M}\\models \\forall x \\, \\psi (x)$ as needed.", "Using the conservativity of $\\mathsf {ACA}_0$ over $\\mathsf {PA}$ alongside the arithmetized completeness theorem we obtain the following lemma that can also be proved directly.", "To make sense of it, let us fix an arithmetized proof system, say the
Hilbert system of [1], (see more generally the discussion in Chapter 1, Section 4(a), pp.98-102 of [1] for more on arithmetization of proofs).", "Lemma 2.11 ($\\mathsf {PA}$ ) Let $\\varphi $ be an $\\mathcal {L}$ -sentence.", "If $\\varphi $ is not provable, then for all sufficiently large $n$ there is an $(\\mathcal {L}, n)$ -model of $\\lnot \\varphi $ .", "If $\\vdash \\varphi $ , then for all sufficiently large $n$ and every $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}$ for which “$\\vec{\\mathcal {A}}\\models ^* \\varphi $ \" is defined, in fact $\\vec{\\mathcal {A}}$ fulfills $\\varphi $ .", "Putting this together we obtain: Theorem 2.12 (The Completeness Theorem for Fulfillment) An $\\mathcal {L}$ -sentence $\\varphi $ is provable if and only if for all $n > dp(\\varphi )$ , all $(\\mathcal {L}, n)$ -models for which fulfillment of $\\varphi $ is defined fulfill $\\varphi $ .", "As a consequence we get an important result that will be used later.", "Note that the significance of this theorem is that it is provable in $\\mathsf {PA}$ .", "Corollary 2.13 The statement For all finite subsets $\\Gamma \\subseteq \\mathsf {PA}$ and all $n > {\\rm max} \\lbrace dp(\\gamma ) + 1 \\; | \\; \\gamma \\in \\Gamma \\rbrace $ , $\\Gamma $ has an $(\\mathcal {L}, n)$ -model is equivalent to ${\\rm con}(\\mathsf {PA})$ .", "Fix a model $\\mathcal {M} \\models \\mathsf {PA}$ .", "Working internally in $\\mathcal {M}$ , we have that $\\mathcal {M} \\models \\lnot {\\rm con}(\\mathsf {PA})$ if and only if there is a finite subset $\\Gamma \\subseteq \\mathsf {PA}$ so that ($\\mathcal {M}$ thinks) $\\Gamma \\vdash 0=1$ which happens if and only if $\\vdash \\lnot \\bigwedge \\Gamma $ .", "But then by Lemma REF this last statement happens if and only if all $(\\mathcal {L}, n)$ -models for $n >{\\rm max} \\lbrace dp(\\gamma ) + 1 \\; | \\; \\gamma \\in \\Gamma \\rbrace $ in which fulfillment of $\\lnot \\bigwedge \\Gamma $ is defined fulfill $\\lnot 
\\bigwedge \\Gamma $ .", "Corollary REF is used as follows.", "We want to show that various sentences are not provable in $\\mathsf {PA}$ , thus we will show that, assuming such sentences, one can prove that for all sufficiently large $n$ and all finite subsets $\\Gamma \\subseteq \\mathsf {PA}$ $\\Gamma $ has an $(\\mathcal {L}, n)$ -model and hence ${\\rm con}(\\mathsf {PA})$ .", "The first example of such an argument is an alternative proof of the Paris-Harrington Theorem." ], [ "A New Proof of the Paris-Harrington Theorem", "In this section, I use the machinery of $(\\mathcal {L}, n)$ -models to reprove the Paris-Harrington Theorem from [2].", "Recall that the Paris-Harrington Principle, $\\mathsf {PH}$ , is the statement that for all $e, k, r$ there is an $N$ so that for every partition $P: [N]^e \\rightarrow r$ there is an $H \\subseteq N$ which is homogeneous, of size at least $k$ and so that the cardinality of $H$ is larger than its minimal element.", "The Paris-Harrington Theorem is the statement that $\\mathsf {PH}$ is independent of $\\mathsf {PA}$ .", "As it will be a useful template for later, let me recall briefly the proof that $\\mathsf {PH}$ is true in the standard model of $\\mathsf {PA}$ .", "Note that this proves half of the Paris-Harrington Theorem since it shows $\\mathsf {PA}$ does not prove $\\lnot \\mathsf {PH}$ .", "Proposition 3.1 (Theorem 1.2 of [2]) The Paris-Harrington Principle holds in the standard model of arithmetic.", "Of course, in contrast to our normal convention, the proof of this theorem is not formalized in arithmetic, but rather takes place in $\\mathsf {ZFC}$ (or any strong enough theory that we can reason about the “standard model of arithmetic\").", "Suppose not and fix $e, k, r$ so that the principle fails.", "Let $T$ be the collection of partitions $P:[N]^e \\rightarrow r$ so that there is no $H\\subseteq N$ which is homogeneous, of size at least $k$ and so that the cardinality of $H$ is larger than the minimal element.", "By 
our assumption there is such a $P$ for each $N$ .", "Order $T$ by $P \\sqsubseteq Q$ if $P$ is a partition on $N$ and $Q$ is a partition on $N^{\\prime } > N$ and $Q \\upharpoonright [N]^e = P$ .", "Then $T$ is an infinite, finitely branching tree so by König's lemma it has a branch, $B \\subseteq T$ .", "Note, however, that $\\bigcup B : [\\omega ]^e \\rightarrow r$ is a partition of $\\omega $ .", "Therefore by the infinite Ramsey theorem there is an infinite $C \\subseteq \\omega $ so that $\\bigcup B \\upharpoonright [C]^e$ is constant.", "Pick $N < \\omega $ so that $C \\cap N$ has size at least $k$ and cardinality larger than its minimal element.", "Since $C$ is infinite this is easily arranged: let $N$ be larger than the first $k + {\\rm min}(C)$ elements of $C$ .", "But then $\\bigcup B \\upharpoonright [C \\cap N]^e$ is constant, contradicting our assumption.", "Moving back into $\\mathsf {PA}$ , define the theory $\\mathsf {PA}^{PF}_k$ to be the axioms of $\\mathsf {Q}$ plus the first $k$ instances of the parameter free least number principle: $LNP(\\varphi ):=\\exists x \\varphi (x) \\rightarrow \\exists x \\forall y (\\varphi (x) \\wedge (\\varphi (y) \\rightarrow x \\le y))$ where $\\varphi $ is one of the first $k$ formulae relative to some primitive recursive ordering of the formulas of $\\mathcal {L}$ .", "It's well known that $\\mathsf {PA}$ is equivalent to $\\bigcup \\lbrace \\mathsf {PA}^{PF}_k \\; | \\; k \\in \\omega \\rbrace $ .", "Our goal is to show that $\\mathsf {PA} + \\mathsf {PH}$ implies $con(\\mathsf {PA})$ .", "In light of the results in the previous section, it suffices to show the following: Theorem 3.2 $\\mathsf {PA} + \\mathsf {PH}$ implies that for each $k$ and all sufficiently large $n$ there is a $(\\mathcal {L}, n)$ -model of $\\mathsf {PA}^{PF}_k$ .", "The Paris-Harrington theorem then follows as an immediate corollary: Corollary 3.3 (Paris-Harrington, Main Theorem 1.3 of [2]) $\\mathsf {PH}$ implies $con(\\mathsf {PA})$ and so, in 
particular, it is not provable in $\\mathsf {PA}$ and hence $\\mathsf {PH}$ is independent of $\\mathsf {PA}$ .", "Before beginning on Theorem REF let us isolate a slightly modified version of $\\mathsf {PH}$ which follows from $\\mathsf {PH}$ and which we will need.", "Given a number $N$ , I denote by $[N]^e_{sqInc}$ the set of $e$ -sized subsets of $N$ which, when placed in ascending order, are square increasing.", "Lemma 3.4 $\\mathsf {PH}$ implies that for every $e, k, r, m$ there is an $N$ so that for every function $F: [N]_{sqInc}^e \\rightarrow r$ there is an $H \\subseteq N$ so that $H$ is square increasing, $F \\upharpoonright [H]^e$ is constant, $H$ contains only elements larger than $m$ and so that the cardinality of $H$ is larger than ${\\rm min}(H) + k$ .", "This lemma is a consequence of [2] where it is shown that a much more general statement follows from $\\mathsf {PH}$ .", "Towards proving Theorem REF assume $\\mathsf {PH}$ .", "As always we argue in $\\mathsf {PA}$ .", "Note that by Lemma REF every square increasing $(\\mathcal {L}, n)$ -model fulfills $\\mathsf {Q}$ (for $n > 3$ ).", "Therefore, it suffices to show that we can always extend square increasing models to longer square increasing models fulfilling arbitrarily finitely many instances of the least number principle.", "I begin by showing how to get a square-increasing $(\\mathcal {L}, n)$ -model of $LNP (\\varphi )$ for some fixed formula $\\varphi $ .", "Lemma 3.5 ($\\mathsf {PA}+ \\mathsf {PH}$ ) Let $\\varphi (x)$ be an $\\mathcal {L}$ -formula.", "For all $n > dp(\\varphi ) + 3$ there is a square increasing $(\\mathcal {L}, n)$ -model of $LNP(\\varphi )$ .", "Fix an $n > dp(\\varphi ) + 3$ .", "If there is a square increasing $(\\mathcal {L}, n)$ -model fulfilling $\\lnot \\exists x \\varphi (x)$ then this model fulfills $LNP(\\varphi )$ so we're done; thus assume that every square increasing model fulfills $\\exists x \\varphi (x)$ .", "Now fix a number $m$ large enough that all terms 
in $\\varphi $ are definable in $\\mathcal {M}_{m}$ .", "For any square increasing sequence $\\vec{m} = m_0 < ... < m_{n-2}$ , with $m_0 > m^2$ let us define $F_\\varphi (\\vec{m}) = {\\rm min}\\lbrace x < m_0 \\; | \\; \\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\varphi (x)\\rbrace $ .", "By the assumption $F_\\varphi $ is defined on all square increasing sequences of length $n-1$ with first element at least $m^2+1$ .", "This is because we assume that $\\langle M_m, \\vec{\\mathcal {M}}_{\\vec{m}}\\rangle \\models ^* \\exists x \\varphi (x)$ so by the definition of fulfillment we can find such an $x < m_0$ .", "Now for such a square increasing sequence $m_0 < ... < m_{n-2} < m_{n-1}$ of length $n$ let $F^{\\prime }_\\varphi (m_0, m_1, ..., m_{n-2}, m_{n-1}) ={\\left\\lbrace \\begin{array}{ll}0, & F_\\varphi (m_0, m_2,m_3, ..., m_{n-1}) = F_\\varphi (m_0, m_1, m_3, ..., m_{n-1}) \\\\1, & otherwise\\end{array}\\right.", "}$ This is a two coloring of $n$ -tuples.", "Applying $\\mathsf {PH}$ (or, rather Lemma REF ), let $N$ be such that $F^{\\prime }_\\varphi $ restricted to $[N]^n_{sqInc}$ has a homogeneous subset $H \\subseteq N$ so that every element is larger than $m^2$ , and whose cardinality is larger than ${\\rm min}(H) + n+5$ .", "Claim 3.6 $F^{\\prime }_\\varphi \\upharpoonright [H]^n$ is identically 0.", "Suppose otherwise; then it is identically 1.", "But that means that, if $H = \\lbrace m_0 < m_1 < ... < m_k\\rbrace $ we have that the set $\\lbrace F_\\varphi (m_0, m_l, m_{k- n - 2}, ..., m_{k}) \\; | \\; 1 \\le l \\le k - n -3\\rbrace $ is a subset of $m_0$ of size $k - n - 4$ .", "But by construction $k > m_0 + n + 5$ so this contradicts the pigeonhole principle.", "Let $m_0 < m_1 < ... 
< m_n \\in H$ and let $\\vec{m}$ be the associated square increasing sequence.", "I claim that $\\vec{\\mathcal {M}}_{\\vec{m}}$ fulfills that $x_0: = F_\\varphi (m_0, ..., m_{n-1}) = F_\\varphi (m_1, ..., m_n)$ is the minimal $x$ so that $\\varphi (x)$ .", "Note that $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\varphi (x_0)$ since $\\vec{\\mathcal {M}}_{\\vec{m}}^{[0, n-1]} \\models ^* \\varphi (x_0)$ by definition of $x_0$ and extending the sequence by one will not change fulfillment as described in the previous section.", "Suppose now towards a contradiction that $\\vec{\\mathcal {M}}_{\\vec{m}} \\models ^* \\exists y < x_0 \\, \\varphi (y)$ .", "In this case there is a corresponding $y < m_1$ so that $\\vec{\\mathcal {M}}^{[1, n]}_{\\vec{m}} \\models ^* \\varphi (y) \\wedge y < x_0$ but that is a contradiction to the fact that $F^{\\prime }_\\varphi \\upharpoonright [H]^n$ is identically 0 since in this case we actually have that $y < x_0$ (in the $\\mathsf {PA}$ ordering) and by the previous claim, $x_0$ was the least so that any $n$ -tuple of elements from $H$ fulfilled $\\varphi (x_0)$ .", "To conclude the proof observe that it follows that $\\langle M_m, \\vec{\\mathcal {M}}_{\\vec{m}}\\rangle \\models ^* LNP(\\varphi )$ .", "To see this, note that our assumption that all $(\\mathcal {L}, n)$ -models fulfill $\\exists x \\, \\varphi (x)$ applies in particular to give us that $\\langle M_m, \\vec{\\mathcal {M}}_{\\vec{m}}\\rangle \\models ^* \\exists x \\, \\varphi (x)$ and, by the previous paragraph we know that $\\langle M_m, \\vec{\\mathcal {M}}_{\\vec{m}}\\rangle \\models ^* \\forall y < x_0 \\lnot \\varphi (y)$ as needed.", "To prove Theorem REF it suffices now to show that we can handle $k$ many formulae at the same time.", "In fact this follows from Lemma REF .", "Fix now $k$ formulae $\\varphi _0, ..., \\varphi _{k-1}$ .", "Without loss we may assume that for $i < j < k$ there are no free variables in common between $\\varphi _i$ and $\\varphi _j$ 
.", "Let $x$ and $n_0,...,n_{k-1}$ moreover be fresh variable symbols not appearing in any of the $\\varphi _i$ 's.", "Enumerate the first $k$ primes as $p_0, ..., p_{k-1}$ .", "(Technically the sentence below is not parameter free: it contains $p_0,..., p_{k-1}$ as parameters.", "However we can simply add constant symbols for the interval $[0, ..., p_{k-1}]$ and write this now in a parameter free way in this extended language.", "This strategy is acceptable since Lemma REF did not depend on $\\mathcal {L}$ in any significant way.)", "Let $\\psi (x)$ be the following formula: $\\exists n_0,...,n_{k-1} [ x = p_0^{n_0} p_1^{n_1}...p_{k-1}^{n_{k-1}} \\wedge \\bigwedge _{i < k} (\\exists y \\varphi _i(y) \\rightarrow \\varphi _i(n_i))]$ Observe that any model of $\\mathsf {Q}$ satisfying the statement that for each $i < k$ $\\lnot \\exists z \\varphi _i(z)$ must satisfy $\\psi (1)$ ($n_i = 0$ for all $i < k$ ).", "Conversely in any model of $\\mathsf {Q}$ in which for some $i < k$ $\\exists y \\, \\varphi _i(y)$ holds there must be an $x > 1$ so that $\\psi (x)$ holds.", "Regardless, in every model of $\\mathsf {Q}$ either $\\psi (1)$ holds or else $\\exists x > 1 \\, \\psi (x)$ holds and so $\\mathsf {Q}\\vdash \\exists x \\, \\psi (x)$ .", "Applying Lemma REF now we get that for every sufficiently large $n$ there is a square increasing $(\\mathcal {L}, n)$ -model satisfying that there is a least witness to $\\exists x \\, \\psi (x)$ .", "I claim that any such $(\\mathcal {L}, n)$ -model is also a model of the least number principle for $\\varphi _i$ for each $i < k$ .", "To see this, fix such a model $\\vec{\\mathcal {A}}$ , let $\\bar{x} = p_0^{n_0}...p_{k-1}^{n_{k-1}}$ be the least witness to $\\exists x \\, \\psi (x)$ (according to $\\vec{\\mathcal {A}}$ ).", "Moreover, as in the proof of Lemma REF we can assume that $\\bar{x} \\in \\mathcal {A}_{0}$ and therefore $n_0, ..., n_{k-1} \\in \\mathcal {A}_0$ .", "Fix $i < k$ .", "We know that by $\\psi (\\bar{x})$ 
$\\vec{\\mathcal {A}}\\models ^*\\varphi _i (n_i)$ .", "If $\\vec{\\mathcal {A}}\\models ^* \\exists \\, y_i < n_i \\, \\varphi _i(y_i)$ then there is a corresponding $y_i < n_i$ so that $\\vec{\\mathcal {A}}^{[1, n]} \\models ^* \\varphi _i (y_i)$ .", "But then $\\bar{x}^{\\prime } = p_0^{n_0}...p_i^{y_i}...p_{k-1}^{n_{k-1}} < \\bar{x}$ would also witness $\\exists x \\, \\psi (x)$ , which contradicts the minimality of $\\bar{x}$ .", "Let us note that the final argument above depended not on $\\mathsf {PH}$ but only on the conclusion of Lemma REF and hence for proving $con(\\mathsf {PA})$ using $(\\mathcal {L}, n)$ -models it suffices, moving forward, to prove that for all sufficiently large $n$ there is an $(\\mathcal {L}, n)$ -model of $\\mathsf {Q}\\wedge LNP(\\varphi )$ for any given $\\varphi $ ." ], [ "The Bounded Coloring Principle", "In this section I use the ideas from the previous proof to provide an example of a true but unprovable $\\Pi ^0_1$ -sentence, similar to $\\mathsf {PH}$ .", "The idea is to turn a $\\Pi ^0_2$ -sentence into a $\\Pi ^0_1$ -sentence by providing a primitive recursive bound on the existential quantifier via the finite model lemma.", "First, I define the notion of a bounded $(n, \\varphi )$ -coloring.", "Definition 4.1 (Bounded $(n, \\varphi ) $ -Colorings) Let $r$ , $n$ , and $N$ be natural numbers and let $\\varphi (x)$ be an $\\mathcal {L}$ -formula.", "A bounded $(n, \\varphi )$ -coloring in $r$ colors on $N$ is a function $F$ , so that the following conditions hold: The domain of $F$ is the set of $(\\mathcal {L}, n)$ -models $\\vec{\\mathcal {A}}$ so that the top model of $\\vec{\\mathcal {A}}$ has universe contained in $N$ and fulfills $\\varphi (b)$ for some $b \\in \\bigcup _{\\mathcal {A} \\in \\vec{\\mathcal {A}}} \\mathcal {A}$ .", "The range of $F$ is $r$ .", "Boundedness: For each $k \\ge n$ and every $(\\mathcal {L}, k)$ -model $\\vec{\\mathcal {A}} = \\langle \\mathcal {A}_0, ..., \\mathcal {A}_{k-1}\\rangle $ , so that all 
of the sub $n$ -tuples of $\\vec{\\mathcal {A}}$ are in the domain of $F$ , (so in particular $\\vec{\\mathcal {A} } \\models ^* \\varphi (b)$ for some $b \\in \\mathcal {A}_{k-1}$ ) we have that if $\\vec{\\mathcal {B}} = \\langle \\mathcal {B}_0, ..., \\mathcal {B}_{k-1}\\rangle $ is the F-collapse of $\\vec{\\mathcal {A}}$ for $\\exists x \\varphi (x)$ then for any $i_0 < i_1 < ... < i_{n-1} < k$ we have that $F(\\mathcal {A}_{i_0} , ... ,\\mathcal {A}_{i_{n-1}}) =F(\\mathcal {B}_{i_0} , ... ,\\mathcal {B}_{i_{n-1}})$ .", "Remark 2 While the notion of a bounded $(n, \\varphi )$ -coloring involves arithmetization of logical concepts, namely the use of $\\varphi $ , one can also think of $\\varphi $ as providing an “axiomatization\" for a class of models.", "Indeed, a bounded $(n, \\varphi )$ -coloring can be thought of as a coloring on all finite sequences fulfilling any given finite first order theory, including that of groups, rings, fields etc.", "As such it is less “logical\" than it might first appear.", "Observe that to say that “$F$ is a bounded $(n, \\varphi )$ -coloring in $r$ colors on $N$ \" is $\\Delta _0$ with parameters $n, \\varphi , r$ and $N$ since every quantifier in the definition can be bounded by $2^{N^2}$ .", "To see the (naive) bound, note that this is the number of $N$ -tuples of subsets of $N$ so it contains the entire domain of $F$ as well as accounting for the universal quantifier in “for each $k \\ge n$ \" in the statement of boundedness since $k$ cannot be bigger than $N$ .", "The true $\\Pi ^0_1$ statement we will show to be unprovable in $\\mathsf {PA}$ is the following.", "Definition 4.2 The Bounded Coloring Principle, denoted $\\mathsf {BCP}$ , is the statement that for all $r, n, \\mathcal {L}, \\varphi , j, m, k$ if $k \\ge n$ and $k \\ge |\\mathcal {L}| + m$ , the largest arity of a function symbol in $\\mathcal {L}$ is $j$ and $F$ is a $(n, \\varphi )$ -bounded coloring in $r$ colors on $Col(k, j, |\\exists x\\, 
\\varphi (x)|, |\\mathcal {L}|, n) + 1$ then there is a sequence $\\vec{H} = \\langle \\mathcal {A}_0 \\subseteq \\mathcal {A}_1 \\subseteq ... \\subseteq \\mathcal {A}_{\\bar{k}-1} \\rangle $ of partial $\\mathcal {L}$ -structures of some length $\\bar{k} \\ge k$ so that any $n$ -length subsequence is in the domain of $F$ , $|\\mathcal {A}_0| + m < \\bar{k}$ and $F$ is homogeneous on the collection of all subsequences of $H$ of length $n$ i.e.", "$F\\upharpoonright [H]^n$ is constant and well defined.", "For fixed $r, n, \\mathcal {L}, \\varphi , j, k, m$ and $N$ let us denote the conclusion of $\\mathsf {BCP}$ by $\\mathsf {BCP}(r, n, \\mathcal {L}, \\varphi , j, k, m, N)$ .", "Note that $\\mathsf {BCP} =\\forall r, n, \\mathcal {L}, \\varphi , j, m, k \\, \\mathsf {BCP}(r, n, \\mathcal {L}, \\varphi , j, k, m, Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1)$ and so in particular it is $\\Pi ^0_1$ .", "Remark 3 The reader may (and should) have some skepticism regarding the bound $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ since it does not involve $r$ .", "However the stipulation that $F$ be bounded is sufficiently restricting that even if in principle $r$ is very large compared to $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ , only a small subset of the colors actually appears in the image of $F$ .", "Therefore, in effect, the bound $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ on the size of the domain of $F$ is actually a bound on $r$ as well.", "The main goal of this section is to show the following theorem.", "Theorem 4.3 The statement $\\mathsf {BCP}$ is true in the standard model but $\\mathsf {PA} + \\mathsf {BCP}$ implies ${\\rm con} (\\mathsf {PA})$ .", "In particular, $\\mathsf {BCP}$ is independent of $\\mathsf {PA}$ .", "The proof of this theorem is broken into several lemmas.", "I start by showing that the statement is equivalent to the seemingly weaker, $\\Pi ^0_2$ statement with the primitive 
recursive bound removed.", "As always the base theory over which the equivalence below is proved is $\\mathsf {PA}$ .", "Lemma 4.4 The principle $\\mathsf {BCP}$ is equivalent to the statement, which I call $\\mathsf {BCP}^{\\prime }$ , that $\\forall r, n, \\mathcal {L}, \\varphi , j, m, k \\exists N \\, \\mathsf {BCP}(r, n, \\mathcal {L}, \\varphi , j, k, m, N)$ .", "In other words, we may replace the explicit bound given by $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ with simply an implicit bound of “$\\exists N$ \".", "The point is that the definition of boundedness is tailored for exactly this.", "Clearly $\\mathsf {BCP}$ implies $\\mathsf {BCP} ^{\\prime }$ .", "For the converse, suppose $\\mathsf {BCP}^{\\prime }$ holds, fix $r, n, \\mathcal {L}, \\varphi , j, k, m$ and let $N$ be large enough to witness $\\mathsf {BCP} ^{\\prime }$ .", "We need to show that already there is a homogeneous sequence of structures all of whose universes are contained in $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ .", "Let $F$ be a bounded $(n, \\varphi )$ -coloring on $N$ and, by $\\mathsf {BCP} ^{\\prime }$ let $\\vec{H} =\\langle \\mathcal {A}_0 \\subseteq ... 
\\subseteq \\mathcal {A}_{k-1} \\rangle $ be a collection of structures so that $F$ is homogeneous on all of its $n$ -tuples and $|\\mathcal {A}_0| + m$ is less than $k$ .", "By boundedness, we can apply the F-collapse to the first $k$ structures of $\\vec{H}$ with respect to $\\exists x \\, \\varphi (x)$ to get a new homogeneous sequence for $F$ , this time with all structures contained in $Col(k, j, |\\exists x\\, \\varphi (x)|, |\\mathcal {L}|, n) + 1$ as required.", "The last point to note is that applying the $F$ -collapse to a given structure ensures that the first element has cardinality $|\\mathcal {L}|$ so since $k > |\\mathcal {L}| + m$ the length of the sequence is large enough.", "In a proof very similar to the one for $\\mathsf {PH}$ we now show that $\\mathsf {BCP}$ is true in the standard model.", "Lemma 4.5 In the standard model $\\mathsf {BCP}^{\\prime }$ is true and hence so is $\\mathsf {BCP}$ .", "As in the proof that $\\mathsf {PH}$ holds in the standard model in the last section this is the only proof in this section that is formalized outside of $\\mathsf {PA}$ .", "Fix $r, n, \\mathcal {L}, \\varphi , j, k, m$ and suppose that there is no $N$ witnessing $\\mathsf {BCP}^{\\prime }$ .", "Then for each $N$ we can find a bounded $(n, \\varphi )$ -coloring $F$ with no homogeneous subset as described in the conclusion of $\\mathsf {BCP}^{\\prime }$ .", "Given two such bounded $(n, \\varphi )$ -colorings $F$ and $G$ let $F \\sqsubseteq G$ if $F$ is a coloring on some $N$ and $G$ is a coloring on some $N^{\\prime } >N$ and the restriction of $G$ to $(\\mathcal {L}, n)$ -models contained in $N$ is $F$ .", "By taking all bounded $(n, \\varphi )$ -colorings with no homogeneous subset as described in the conclusion of $\\mathsf {BCP}^{\\prime }$ ordered by $\\sqsubseteq $ , we obtain a finitely branching infinite tree, $T$ .", "By König's lemma $T$ has an infinite branch, $B$ .", "The function $F_B : =\\bigcup B$ is a coloring of all 
$(\\mathcal {L}, n)$ -models (contained in any $N < \\omega $ ) fulfilling $\\exists x \\, \\varphi (x)$ .", "Fix an $(\\mathcal {L}, n)$ -model $\\langle \\mathcal {A}_0, ..., \\mathcal {A}_{n-1}\\rangle \\in {\\rm dom}(F_B)$ and, inductively, define $\\mathcal {A}_{m+1}$ to be a partial $\\mathcal {L}$ -structure so that $\\mathcal {A}_m \\subseteq \\mathcal {A}_{m+1}$ (and they're not equal) and $\\langle \\mathcal {A}_{m+1 - (n-1)}, ..., \\mathcal {A}_{m+1}\\rangle \\in {\\rm dom}(F_B)$ .", "Now let $G:[\\omega ]^n \\rightarrow r$ be the map defined by $G(l_0, ..., l_{n-1}) = F_B(\\langle \\mathcal {A}_{l_0}, ..., \\mathcal {A}_{l_{n-1}}\\rangle )$ .", "We have that $G$ is a coloring of $n$ -tuples of $\\omega $ in finitely many colors and hence it has an infinite homogeneous set $H \\subseteq \\omega $ .", "However by the way that $G$ is defined the set $H_B = \\lbrace \\mathcal {A}_l\\; | \\; l \\in H\\rbrace $ is an infinite set of partial $\\mathcal {L}$ -structures so that $F_B \\upharpoonright [H_B]^n$ is total and constant.", "It follows that taking any large enough initial segment of structures in $H_B$ will be a homogeneous set of the required size for some element of $B$ , which is a contradiction.", "Thus it remains to show that $\\mathsf {PA} + \\mathsf {BCP}^{\\prime }$ implies ${\\rm con} (\\mathsf {PA})$ .", "To this end, fix a formula with one free variable, $\\varphi (x)$ .", "I will show that $\\mathsf {PA} + \\mathsf {BCP} ^{\\prime }$ implies that there is a square increasing $(\\mathcal {L}, n)$ -model of $LNP(\\varphi )$ .", "Upping this to finitely many formulae is then as in the proof for $\\mathsf {PH}$ given in the previous section, as noted there; as a result we get that $\\mathsf {PA} + \\mathsf {BCP}$ implies that all finite subsets $\\Gamma \\subseteq \\mathsf {PA}$ have a model and hence $\\mathsf {PA}$ is consistent.", "Thus the remaining lemma to complete the proof of Theorem REF is the following.", "Lemma 4.6 $\\mathsf {BCP}^{\\prime 
}$ implies that for all sufficiently large $n$ there is a square increasing $(\\mathcal {L}, n)$ -model of $LNP(\\varphi )$ .", "Assume $\\mathsf {BCP}^{\\prime }$ .", "If $\\mathsf {Q}\\cup \\lbrace \\lnot \\exists x \\, \\varphi (x)\\rbrace $ is consistent then we can find for every large enough $n$ an $(\\mathcal {L}, n)$ -model of $\\bigwedge \\mathsf {Q}\\wedge \\lnot \\exists x \\, \\varphi (x)$ and hence of $\\bigwedge \\mathsf {Q}\\wedge LNP(\\varphi )$ so we are done.", "Therefore let us suppose that $\\mathsf {Q}\\vdash \\exists x \\, \\varphi (x)$ .", "Let $n$ be larger than the depth of $\\bigwedge \\mathsf {Q}\\wedge \\exists x \\, \\varphi (x)$ .", "Fix $m$ large enough so that all terms in $\\varphi (x)$ are defined in $\\mathcal {M}_m$ .", "Without loss assume that $m > 4$ (this will be used in the proof of Claim REF ).", "Enlarge $\\mathcal {L}$ with a constant symbol for each number $a < m^2$ .", "This ensures that any $F$ -collapse of an $(\\mathcal {L}, n)$ -model for $\\exists x \\, \\varphi (x)$ will have all relevant parameters in the first model.", "Now, for an $(\\mathcal {L}, n)$ -model $\\vec{\\mathcal {A}}= \\langle \\mathcal {A}_0,...,\\mathcal {A}_{n-1} \\rangle \\models ^* \\mathsf {Q}$ with $\\mathcal {M}_m \\subseteq \\mathcal {A}_0$ let $F(\\vec{\\mathcal {A}})$ be the least $b \\in \\mathcal {A}_0$ with respect to the linear ordering $\\le $ defined in the top model of $\\vec{\\mathcal {A}}$ so that $\\vec{\\mathcal {A}}\\models ^* \\varphi (b)$ .", "Note that since (internally) we have a sequence of finite models defining a linear order, it makes sense to talk about least elements even if the order on the top model doesn't agree with the $\\mathsf {PA}$ order.", "Moreover by assumption $F$ is always defined.", "Now given $N$ , define a bounded $(n+1, \\bigwedge \\mathsf {Q} \\wedge \\varphi )$ -coloring $F^{\\prime }$ on $N$ as follows: $F^{\\prime }(\\langle \\mathcal {A}_0, \\mathcal {A}_1, \\mathcal {A}_2, ..., \\mathcal {A}_n\\rangle ) 
={\\left\\lbrace \\begin{array}{ll}0, & F(\\mathcal {A}_0, \\mathcal {A}_1, \\mathcal {A}_3, ..., \\mathcal {A}_n) =F (\\mathcal {A}_0, \\mathcal {A}_2, \\mathcal {A}_3, ..., \\mathcal {A}_n)\\\\1, & otherwise\\end{array}\\right.", "}$ I claim that for any $N > n + 1$ this is in fact a bounded $(n+1, \\bigwedge \\mathsf {Q} \\wedge \\varphi )$ -coloring in 2-colors.", "For boundedness, suppose $k \\ge n + 1$ and we have an $(\\mathcal {L}, k)$ -model $\\vec{\\mathcal {A}} = \\langle \\mathcal {A}_0, ..., \\mathcal {A}_{k-1}\\rangle $ .", "By assumption, this sequence fulfills $\\bigwedge \\mathsf {Q} \\wedge \\exists x \\varphi (x)$ so we can find its F-collapse with respect to $\\bigwedge \\mathsf {Q} \\wedge \\exists x \\, \\varphi (x)$ .", "Let $\\vec{\\mathcal {B}} = \\langle \\mathcal {B}_0, ..., \\mathcal {B}_{k-1}\\rangle $ be this F-collapse.", "Now we have that for some $(n+1)$ -tuple of elements from $\\vec{\\mathcal {B}}$ , say $\\langle \\mathcal {B}_{i_0}, ..., \\mathcal {B}_{i_n} \\rangle $ , $F^{\\prime }(\\langle \\mathcal {B}_{i_0}, ..., \\mathcal {B}_{i_n} \\rangle ) = 1$ if and only if the minimal $x$ so that $\\langle \\mathcal {B}_{i_0}, \\mathcal {B}_{i_1}, \\mathcal {B}_{i_3}, ..., \\mathcal {B}_{i_n}\\rangle \\models ^* \\varphi (x)$ is different from the minimal $y$ so that $\\langle \\mathcal {B}_{i_0}, \\mathcal {B}_{i_2}, \\mathcal {B}_{i_3}, ..., \\mathcal {B}_{i_n}\\rangle \\models ^* \\varphi (y)$ but by the construction in the Finite Model Lemma, Lemma REF , condition 5, this happens if and only if exactly the same is true of the $\\mathcal {A}$ 's, as needed.", "Now let $N$ be large enough so that, applying $\\mathsf {BCP}^{\\prime }$ the coloring $F^{\\prime }$ restricted to structures contained in $N$ has a homogeneous sequence as described in the conclusion of $\\mathsf {BCP}^{\\prime }$ .", "In other words there is a structure $\\mathcal {A}_0$ , a number $k > |\\mathcal {A}_0| + m + n + 1$ and a sequence $\\vec{H} = \\langle 
\\mathcal {A}_0, ..., \\mathcal {A}_{k-1}\\rangle $ so that $F^{\\prime }\\upharpoonright [\\vec{H}]^n$ is constant and well defined.", "Claim 4.7 $F^{\\prime }\\upharpoonright [\\vec{H}]^n$ is identically 0.", "Suppose otherwise; then $F^{\\prime } \\upharpoonright [\\vec{H}]^n$ is identically 1.", "But, then for each $1\\le i \\le k - n - 4$ we get a distinct value in $\\mathcal {A}_0$ for $F(\\langle \\mathcal {A}_0, \\mathcal {A}_i, \\mathcal {A}_{k - n + 2}, \\mathcal {A}_{k - n + 3}, ..., \\mathcal {A}_{k-1}\\rangle )$ .", "However this implies $|\\mathcal {A}_0| \\ge k - n - 4$ contradicting the fact that $|\\mathcal {A}_0| < k - m - n - 1$ since we assumed that $m > 4$ .", "Arguing in the same way as in the proof of the Paris-Harrington theorem, we get that the first $(n +1)$ -tuple of elements from $\\vec{H}$ will satisfy $LNP(\\varphi )$ .", "For completeness we repeat this proof here.", "We have that $\\langle \\mathcal {A}_0, ..., \\mathcal {A}_{n}\\rangle \\models ^* \\bigwedge \\mathsf {Q} \\wedge \\exists x \\, \\varphi (x)$ .", "Let $b_H$ be the (externally) least element of $\\mathcal {A}_n$ witnessing $\\varphi (x)$ .", "Since $\\vec{H}$ is homogeneous, $b_H$ doesn't depend on which tuple from $H$ we pick.", "Note that we have that $\\langle \\mathcal {A}_0, ..., \\mathcal {A}_n\\rangle \\models ^* \\bigwedge \\mathsf {Q} \\wedge \\varphi (b_H)$ .", "If this sequence doesn't fulfill that $b_H$ is minimal then we can find a $c < b_H$ so that $\\langle \\mathcal {A}_1, ..., \\mathcal {A}_n\\rangle \\models ^* \\varphi (c) \\wedge c < b_H$ .", "However this is a contradiction to the fact that $F^{\\prime } \\upharpoonright [\\vec{H}]^n$ is homogeneously 0 as witnessed by the element $b_H$ ."
], [ "Conclusion and Open Questions", "I conclude with a number of questions.", "Some of these questions may seem inherently somewhat vague in nature.", "This mostly stems from the fact that the subject of “mathematical independence\" is open to interpretation much more so than many other areas of mathematics.", "Nevertheless, it seems worth trying to understand whether $(\\mathcal {L}, n)$ -models can provide a more robust, streamlined approach to independence in $\\mathsf {PA}$ .", "To this end, the most obvious family of questions concerns the degree to which other known independence theorems in arithmetic can be reproved using $(\\mathcal {L}, n)$ -models.", "For example the following is perhaps a good test question.", "Question 1 Can the independence of Goodstein's theorem be proved using $(\\mathcal {L}, n)$ -models?", "Of course there are many variations on this type of question since we can ask it about any number of other statements known to be independent of $\\mathsf {PA}$ .", "One can also ask more specifically about $\\Pi ^0_1$ independent statements in arithmetic and their relation to $(\\mathcal {L}, n)$ -models.", "The first question I ask of this form concerns the collapse function.", "Note that this bound is used in the example $\\mathsf {BCP}$ above and also in Shelah's example from [7].", "Question 2 Given a primitive recursive function $f$ , if a sentence of the form $\\forall x \\, \\varphi (x, f(x))$ is independent of $\\mathsf {PA}$ , can we describe the growth rate of $f$ compared to $Col (i, j, k, l)$ ?", "In the specific case of $\\mathsf {BCP}$ one might wonder whether a fast growing function is, somehow, “hiding\".", "This seems to be true in the following sense.", "As noted when introducing bounded colorings, the number of colors $r$ is somewhat immaterial as the notion of bounded puts a limit on how many colors can actually appear.", "Therefore, it is not clear how big $N$ needs to be in order to find a bounded $(n, \\varphi )$ 
-coloring on $N$ which surjects onto $r$ (for any fixed $r$ ).", "This function $r \\mapsto N$ appears to grow rather quickly.", "Question 3 How fast does the function mapping $r$ to $N$ as described above grow?", "On the other hand, both Shelah's original example and $\\mathsf {BCP}$ use structures and hence are explicitly model theoretic.", "Arguably this makes them less “mathematical\" than $\\mathsf {PH}$ .", "A vague, though interesting, question is whether one can avoid speaking of structures in this context.", "Question 4 Are there true, unprovable $\\Pi ^0_1$ sentences of Ramsey theory which do not refer to structures in any way?", "In a different direction, the construction of $(\\mathcal {L}, n)$ -models also works in the context of other foundational theories.", "For instance, one could run similar arguments in fragments of arithmetic, subsystems of second order arithmetic or various set theories.", "Question 5 What applications are there of $(\\mathcal {L}, n)$ -models in other foundational theories?", "For instance, can $(\\mathcal {L}, n)$ -models be used to give an example of a sentence of some “mathematical content\" independent of $\\mathsf {ZFC}+ V = L$ ?" ] ]
1906.04273
[ [ "(2+1)-dimensional Static Cyclic Symmetric Traversable Wormhole:\n Quasinormal Modes and Causality" ], [ "Abstract In this paper we study a static cyclic symmetric traversable wormhole in $(2+1)-$dimensional gravity coupled to nonlinear electrodynamics in anti-de Sitter spacetime.", "The solution is characterized by three parameters: mass $M$, cosmological constant $\\Lambda$ and one electromagnetic parameter, $q_{\\alpha}$.", "The causality of this spacetime is studied, determining its maximal extension and then constructing the corresponding Kruskal-Szekeres and Penrose diagrams.", "The quasinormal modes (QNMs) that result from considering a massive scalar test field in the wormhole background are determined by solving the Klein-Gordon equation in exact form; the effective potential resembles that of a harmonic oscillator shifted from its equilibrium position and, consequently, the QNMs have a pure point spectrum." ], [ "Introduction", "Anti-de Sitter gravity in $(2+1)$ -dimensions has attracted a lot of attention due to its connection to a Yang-Mills theory with the Chern-Simons term [1], [2].", "Moreover, taking advantage of simplifications due to the dimensional reduction, the three-dimensional Einstein theory of gravity has turned out to be a good model from which to extract relevant insights regarding the quantum nature of gravity [3].", "In three spacetime dimensions, general relativity becomes a topological field theory without propagating degrees of freedom.", "Additionally, in string theory, there are near extremal black holes (BHs) whose entropy can be calculated and whose near-horizon geometry contains the Bañados-Teitelboim-Zanelli (BTZ) solution [4], [5].", "Particularly for the (2+1)-dimensional BTZ black hole (BH), the two-dimensional conformal description has by now been well established [6]: the BTZ-BH provides a precise mathematical model of a holographic manifold.", "For these reasons, systems where the conformal description can be carried out all the 
way through are very valuable.", "On the other hand, nonlinear electrodynamics (NLED) has gained interest for a number of reasons.", "Nonlinear electrodynamics consists of theories derived from Lagrangians that depend arbitrarily on the two electromagnetic invariants, $F= 2(E^2-B^2)$ and $G= E \\cdot B$ , i.e.", "$L(F,G)$ .", "The ways in which $L(F,G)$ may be chosen are many, but two of them are outstanding: the Euler-Heisenberg theory [7], derived from quantum electrodynamics assumptions, takes into account some nonlinear features such as light-by-light interaction.", "The Born-Infeld theory [8], [9], originally proposed with the aim of avoiding the singularity in the electric field and the divergent self-energy of a point charge, is a classical effective theory that describes nonlinear features arising from the interaction of very strong electromagnetic fields, where the Maxwell linear superposition principle is no longer valid.", "Interesting solutions have been derived from Einstein gravity coupled to NLED, such as regular BHs and wormholes (WHs) sustained by NLED; see for instance [10].", "It is also worth mentioning that some NLED theories arise from spontaneous Lorentz symmetry breaking (LSB), triggered by a non-zero vacuum expectation value of the field strength [11].", "WHs in anti-de Sitter (AdS) gravity are interesting objects to study.", "For instance, regarding the transmission of information through the throat, understanding the details of the traversable wormhole (WH) and its quantum information implications would shed light on the information loss problem [12].", "The thermodynamics of a WH and its trapped surfaces was addressed in [13], establishing that the accretion of phantom energy, considered as thermal radiation coming out from the WH, can significantly widen the radius of the throat.", "In [14] it is shown that Euclidean geometries with two boundaries that are connected through the bulk are similar to WHs in the sense 
that they connect two well understood asymptotic regions.", "In [15] a WH is constructed via a double trace deformation.", "Alternatively, WH solutions are constructed by gluing two spacetimes at null hypersurfaces [16], [17].", "In contrast to this procedure, in a recent paper the authors derived exact solutions of the Einstein equations coupled to NLED that can be interpreted as WHs and, for certain values of the parameters, such solutions become the BTZ-BH [18].", "The latter has become an excellent laboratory for studying quantum effects since the seminal paper [19].", "Regarding LSB, it can also be mentioned that WH solutions have been derived in the context of bumblebee gravity [20], their QNMs have been studied in [21], and the corresponding gravitational lensing in [22].", "Moreover, WHs are related to BHs; BH and WH spacetimes are obtained by identifying points in (2+1)-dimensional AdS space by means of a discrete group of isometries, some of them resulting in non-eternal BHs with collapsing WH topologies [23].", "In this paper we present an exact solution of the Einstein equations in (2+1)-dimensions with a negative cosmological constant (AdS) coupled to NLED.", "The solution can be interpreted as a WH sourced by the NLED field with a Lagrangian of the form $F^{1/2}$ .", "This solution is a particular case of a broader family of solutions previously presented in [18].", "The solution is characterized by three parameters: mass $M$ , cosmological constant $- \\Lambda =1 /l^2$ and the electromagnetic parameter $q_{\\alpha }$ .", "The analogue of the Kruskal-Szekeres diagram is constructed for the WH, and the causality is investigated by means of the Penrose diagram, showing that light trajectories traverse the WH.", "The WH Penrose diagram resembles the anti-de Sitter one with the WH embedded in it.", "A massive scalar test field is considered in the WH background; the corresponding Klein-Gordon (KG) equation, when written in terms of the tortoise 
coordinate, acquires a Schrödinger-like form and is solved exactly, determining the frequencies of the massive scalar field; the boundary conditions are purely ingoing waves at the throat and zero outgoing waves at infinity.", "The effective potential in the KG equation is a confining one and, accordingly, we find that the spectrum is real, showing that the wormhole does not swallow the field as a black hole would; instead, the field goes through the throat into the continuation of the WH, preserving the energy of the test field.", "This also shows the stability of the scalar field in this WH background.", "The outline of the paper is as follows.", "In the next section we present the metric for the WH and the field that sources it, as well as a brief review of its derivation.", "In Section III we find the maximal extension and construct the Penrose diagram.", "In Section IV the KG equation for a massive scalar field is considered in the WH background.", "The radial sector of the KG equation, when written in terms of the tortoise coordinate, takes the form of a Schrödinger equation that is solved exactly, the QNMs being obtained by imposing the appropriate WH boundary conditions.", "Final remarks are given in the last section.", "Details on the derivation of the QNMs as well as the setting of the boundary conditions are presented in an Appendix." 
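Before turning to the details, the structural equation for $N(r)$ reviewed in the next section, $(-M-\\Lambda r^{2})N_{,r} + \\Lambda r N = M^{2} q_{\\alpha }$ , together with its general solution $N(r) = -q_{\\alpha }Mr + q_{\\beta }\\sqrt{-M-\\Lambda r^{2}}$ , can be verified with a few lines of code; the following is a minimal numerical sketch (the parameter values are our own illustrative choices, not taken from the paper):

```python
import math

# Illustrative parameters (our own choice): AdS with Lambda = -1/l^2
M, l, q_alpha, q_beta = 1.0, 2.0, 0.5, 0.7
Lam = -1.0 / l**2

def N(r):
    """General solution N(r) = -q_alpha*M*r + q_beta*sqrt(-M - Lambda*r^2)."""
    return -q_alpha * M * r + q_beta * math.sqrt(-M - Lam * r**2)

def residual(r, h=1e-5):
    """(-M - Lambda r^2) N' + Lambda r N - M^2 q_alpha, with N' by central differences."""
    dN = (N(r + h) - N(r - h)) / (2 * h)
    return (-M - Lam * r**2) * dN + Lam * r * N(r) - M**2 * q_alpha

# The residual vanishes (up to finite-difference error) for all r beyond
# the throat r_0 = sqrt(M l^2) = 2
for r in (2.5, 3.0, 5.0, 50.0):
    assert abs(residual(r)) < 1e-6
```

The check is insensitive to the values of $q_{\\alpha }$ and $q_{\\beta }$ , as expected for a general solution of a linear first-order ODE.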
], [ "The wormhole sourced by nonlinear electrodynamics", "The action of the (2+1) Einstein theory with cosmological constant, coupled to NLED, is given by $S[g_{ab},A_{a}]= \\int d^{3}x \\sqrt{-g} \\left( \\frac{1}{16\\pi }(R - 2\\Lambda ) + L(F) \\right),$ where $R$ is the Ricci scalar and $\\Lambda $ is the cosmological constant; $L(F)$ is the NLED characteristic Lagrangian.", "Varying this action with respect to the gravitational field gives the Einstein equations, $G_{ab} + \\Lambda g_{ab} = 8\\pi E_{ab},$ where $E_{ab}$ is the electromagnetic energy-momentum tensor, $4\\pi E_{ab} = g_{ab}L(F) - f_{ac}f_{b}{}^{c}L_{F}, $ where $L_{F}$ stands for the derivative of $L(F)$ with respect to $F$ and $f_{a b}$ are the components of the electromagnetic field tensor.", "The variation with respect to the electromagnetic potential $A_{a}$ entering in $f_{ab} = 2\\partial _{[a} A_{b]}$ yields the electromagnetic field equations, $\\nabla _{a}(L_{F}f^{ab}) = 0 = \\nabla _{a}( _{\\ast }f )^{a},$ where $( _{\\ast }f)^{a}$ is the dual electromagnetic field tensor which, for (2+1)-dimensional gravity, in terms of $f^{ab}$ , is defined by $(_{\\ast } f)_{a} = \\frac{\\sqrt{-g}}{3} \\left( f^{tr} \\delta ^{\\phi }_{a} + f^{r\\phi } \\delta ^{t}_{a} + f^{\\phi t} \\delta ^{r}_{a} \\right)$ with $(a= t, r, \\phi ).$ We shall consider the particular nonlinear Lagrangian, $L(F)= \\sqrt{-sF};$ Lagrangians of this kind have been called Einstein-power-Maxwell theories [24], [25].", "On the other hand, in [18] it was shown that in $(2+1)$ Einstein theory coupled to NLED the most general form of the electromagnetic fields for stationary cyclic symmetric $(2+1)$ spacetimes, i.e., the general solution to Eqs.", "(REF ), is given by $_{\\ast }f = (g_{rr}c/\\sqrt{-g})dr + (a/3L_{F})dt + (b/3L_{F})d\\phi $ , where $a$ , $b$ and $c$ are constants that, by virtue of the Ricci circularity conditions, are subject to the restriction $ac=0=bc$ .", "Therefore, in this geometry, in order to describe 
the electromagnetic field tensor, we have two disjoint branches; [$a=0=b$ , $c\\ne 0$ ] and [($a\\ne 0 \\vee b\\ne 0$ ), $c=0$ ].", "Here we are considering the branch $c\\ne 0$ , and thus the only non-null electromagnetic field tensor component and the electromagnetic invariant are given, respectively, by $f^{\\phi t}=\\frac{3 g_{rr} c}{(\\sqrt{-g})^2}, \\quad F = \\frac{1}{2}f^{\\phi t}f_{\\phi t} = \\frac{9}{2}\\frac{c^{2}}{g_{tt}g_{\\phi \\phi }}.$ With these assumptions a five-parameter family of solutions with a charged rotating wormhole interpretation was previously presented in [18].", "In this work we shall address in detail the (2+1)-dimensional static cyclic symmetric wormhole.", "For the sake of completeness, we give a brief review on the derivation of the solution.", "The field equations of general relativity (with cosmological constant) coupled to NLED for a static cyclic symmetric (2+1)-dimensional spacetime with line element $ds^{2} = - N^{2}(r) dt^{2} + \\frac{dr^{2}}{f^{2}(r)} + r^{2} d\\phi ^{2},$ written in the orthonormal frame $\\lbrace $ $\\theta ^{(0)} = N(r) dt$ , $\\theta ^{(1)} = \\frac{dr}{f(r)}$ , $\\theta ^{(2)} = r d\\phi $ $\\rbrace $ , are given by $&& G_{(0)}{}^{(0)} = 8\\pi E_{(0)}{}^{(0)} - \\Lambda \\delta _{(0)}^{(0)} \\quad \\Rightarrow \\quad \\frac{ (f^{2})_{,r} }{2r} = 2 \\left( L - 2FL_{F} \\right) - \\Lambda , \\\\&&G_{(1)}{}^{(1)} = 8\\pi E_{(1)}{}^{(1)} - \\Lambda \\delta _{(1)}^{(1)} \\quad \\Rightarrow \\quad \\frac{ f^{2} N_{,r} }{rN} = 2L - \\Lambda , \\\\&&G_{(2)}{}^{(2)} = 8\\pi E_{(2)}{}^{(2)} - \\Lambda \\delta _{(2)}^{(2)} \\quad \\Rightarrow \\quad \\frac{ f (f N_{,r} )_{,r} }{N} = 2 \\left( L - 2FL_{F} \\right) - \\Lambda , $ where the comma denotes ordinary derivative with respect to the radial coordinate $r$ .", "The metric given by $ds^{2} = - \\left( - q_{\\alpha }Mr + q_{\\beta } \\sqrt{ - M - \\Lambda r^{2}} \\right)^{2} dt^{2} + \\frac{dr^{2}}{ - M - \\Lambda r^{2} } + r^{2} d\\phi ^{2},$ is a solution 
of the Einstein-NLED field equations, with cosmological constant, with the nonlinear electromagnetic Lagrangian $L(F)= \\sqrt{-sF}$ , whose electromagnetic field tensor is given by (REF ) and with the electromagnetic parameter $c$ given by $c = \\sqrt{2}M^{2} q_{\\alpha }/ (6 \\sqrt{s}) $ .", "In order to obtain the solution (REF ), note that $L(F)= \\sqrt{-sF}$ is such that $\\left( L - 2FL_{F} \\right) = 0$ , then Eq.", "(REF ) becomes $(f^{2})_{,r} = - 2\\Lambda r \\Rightarrow f^{2}(r) = - M - \\Lambda r^{2},$ with $M$ being an integration constant.", "On the other hand, according to (REF ), for the line element (REF ) the invariant $F$ takes the form $F = -\\frac{1}{2} \\left( \\frac{3c}{rN} \\right)^{2}.$ If one substitutes $F$ from Eq.", "(REF ) into $L(F)$ in Eq.", "(), we arrive at $\\frac{ f^{2} N_{,r} }{rN} = 2L - \\Lambda \\Rightarrow \\frac{ \\left(- M - \\Lambda r^{2} \\right) N_{,r} }{rN}= 2\\sqrt{\\frac{s}{2} \\left( \\frac{3c}{rN} \\right)^{2} } - \\Lambda \\Rightarrow \\left(- M - \\Lambda r^{2} \\right) N_{,r} + \\Lambda r N = 3\\sqrt{2s}c.$ Now, substituting $c = \\sqrt{2}M^{2} q_{\\alpha }/ (6 \\sqrt{s}) $ into the previous equation yields $\\left(- M - \\Lambda r^{2} \\right) N_{,r} + \\Lambda r N = M^{2} q_{\\alpha },$ whose general solution is $N (r) = - q_{\\alpha }Mr + q_{\\beta } \\sqrt{ - M - \\Lambda r^{2}},$ where $q_{\\beta }$ is an integration constant.", "Finally, by substituting (REF ) and (REF ) into (), one finds $\\frac{ f (f N_{,r} )_{,r} }{N} = \\sqrt{ - M - \\Lambda r^{2} } \\frac{ \\left( q_{\\alpha }M \\frac{\\Lambda r}{\\sqrt{- M - \\Lambda r^{2}}} - q_{\\beta }\\Lambda \\right) }{ - q_{\\alpha }Mr + q_{\\beta } \\sqrt{ - M - \\Lambda r^{2}} } = \\frac{ \\left( q_{\\alpha }M r - q_{\\beta } \\sqrt{ - M - \\Lambda r^{2} } \\right) \\Lambda }{ - q_{\\alpha }Mr + q_{\\beta } \\sqrt{ - M - \\Lambda r^{2}} } = - \\Lambda ,$ such that Eq.", "() is trivially satisfied by the Lagrangian $L = \\sqrt{-sF}$ , the structural 
functions $f^{2}(r)$ , $N^{2}(r)$ given by (REF ) and (REF ), and the electromagnetic field given by (REF )." ], [ "Wormhole properties", "Let us show that the solution (REF ) allows a traversable wormhole interpretation.", "The canonical metric for a (2+1)-dimensional static cyclic symmetric WH [26] is given by $ds^{2} = - e^{2 \\Phi (r)} dt^{2} + \\frac{dr^{2}}{ 1 - \\frac{b(r)}{r} } + r^{2} d\\phi ^{2}.$ By comparison with (REF ) we see that $e^{\\Phi (r)} = -q_{\\alpha }Mr +q_{\\beta } \\sqrt{ - M - \\Lambda r^{2}}$ and $b(r)=r(1+M + \\Lambda r^2)$ , where $- \\Lambda = 1/ l^2$ .", "In this paper the case $q_{\\beta }=0$ will be the subject of our study, $ds^{2} = - \\left( - q_{\\alpha }Mr \\right)^{2} dt^{2} + \\frac{dr^{2}}{\\frac{ r^{2}}{l^2} -M} + r^{2} d\\phi ^{2} \\quad \\textup { with } \\quad M > 0.$ Then we can check the WH properties of the metric (REF ): (i) The existence of a throat $r_0$ where $b(r_0)= r_0$ .", "Such a throat is located at $r_0= \\sqrt{l^{2}M}$ .", "The range of the $r$ -coordinate is in the interval $r \\in [r_0, \\infty )$ .", "(ii) The absence of horizons.", "It is fulfilled since $ e^{2 \\Phi (r)}=( - q_{\\alpha }Mr)^{2}$ is nonzero for all $r \\in [r_0, \\infty )$ .", "(iii) The fulfilment of the flaring out condition that is related to the traversability of the WH.", "We shall see that traversability has a consequence on the form of the QNMs.", "This condition is guaranteed if the derivative of $b(r)$ when evaluated at the throat is less than one, $b^{\\prime }(r_0) < 1$ ; in our case, $b^{\\prime }(r_0)= 1-2M < 1$ .", "The nonlinear field in our case is generated by the Lagrangian $L(F)= \\sqrt{-sF},$ where $F$ , the electromagnetic invariant, and the only non-vanishing electromagnetic component, $f_{t \\phi }$ , are given, respectively, by $F=- \\frac{M^2}{4 s r^4}, \\quad f_{t \\phi }= - \\partial _{\\phi } A_t= \\frac{q_{\\alpha } M^2}{\\sqrt{2s}}.$ Moreover, it is well known that in GR matter obeying the standard energy 
conditions cannot open a throat and thus create a traversable wormhole.", "In the case we are analyzing, the NLED energy-momentum tensor does not satisfy the null energy condition (NEC), which makes this a traversable WH.", "To check the violation of the NEC due to NLED, let us consider the null vector in the orthonormal frame, $ n = (1,1,0), $ and calculate $ E_{(\\alpha )(\\beta )}n^{(\\alpha )}n^{(\\beta )} = E_{(0)(0)} + E_{(1)(1)} = L(F)/(4 \\pi )$ ; then, using () to determine $L(F)$ , we obtain that $E_{(\\alpha )(\\beta )}n^{(\\alpha )}n^{(\\beta )}= - \\frac{M}{8 \\pi r^2} < 0,$ from which we see that the NEC is violated; in particular, evaluating at the throat $r_0^2= M l^2$ , $E_{(\\alpha )(\\beta )}n^{(\\alpha )}n^{(\\beta )}= - 1/(8 \\pi l^2)$ ." ], [ "The maximal extension and causality: Kruskal-Szekeres and Penrose diagrams ", "In order to understand the causal structure and the structure at infinity of the WH with metric (REF ), we will construct its Penrose diagram.", "Following the standard procedure, we first derive the analogue of the Kruskal-Szekeres diagram.", "To start with, since the causal structure is defined by the light cones, we need to consider the radial null curves, which by definition satisfy the null condition $0 = ds^{2}(k^{\\alpha },k^{\\beta }) $ , $k^{\\alpha }$ being a null vector; that implies $\\frac{dt}{dr} = \\pm \\frac{1}{\\sqrt{ ( r^{2}/l^2 -M)q_{\\alpha }^{2}M^{2}r^{2} } }.$ Since the metric (REF ) has a coordinate singularity at $r = \\sqrt{ -\\frac{M}{\\Lambda } } = \\sqrt{l^{2}M}$ , we shall use the tortoise coordinate $r_{\\ast }$ defined by $\\frac{dr_{\\ast }}{dr} = \\sqrt{-\\frac{g^{tt}}{g^{rr}}} = \\frac{1}{\\sqrt{ ( {r^{2}}/{l^2} - M ) \\left( q_{\\alpha } Mr \\right)^{2} }}.$ Integrating Eq.", "(REF ) for the tortoise coordinate, $r_{\\ast }$ , we obtain $r_{\\ast } = -\\frac{i}{ 2 \\sqrt{ q_{\\alpha }^{2} M^{3} } }\\ln {\\!\\left( \\frac{ \\sqrt{ M -{r^{2}}/{l^{2}} } + \\sqrt{M} }{ \\sqrt{ M 
-{r^{2}}/{l^{2}} } - \\sqrt{M} } \\right) }.$ We should remark that $r_{\\ast }$ is real, despite the appearance of Eq.", "(REF ).", "It turns out that $r_{\\ast }$ in the previous form is very convenient when applying the WH boundary conditions to the KG equation.", "It can be shown that $r_{\\ast }$ can be written equivalently as $r_{\\ast } = - \\frac{ 1 }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } } \\tan ^{-1} \\left({ \\sqrt{\\frac{M}{ {r^2}/{l^2}-M}}} \\right).$ Since the function $\\tan (x)$ is periodic, $r_{\\ast }$ is not uniquely defined in terms of $r$ , i.e., for each value of $r$ there are multiple values of $r_{\\ast }$ , $r_{\\ast } + \\frac{1}{ \\sqrt{ q_{\\alpha }^{2} M^{3} } } \\pi \\xi _{_{n}}$ , with $\\xi _{_{n}} \\in \\mathbb {Z}$ .", "The range of $r_{\\ast }$ is determined by its values at the throat, $r_0$ , and at infinity: at the throat $ r_{\\ast }(r_0) =\\frac{1}{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }(-\\frac{\\pi }{2} + \\pi \\xi _{_{n}})$ , while at the AdS infinity, $r \\sim \\infty $ , $r_{\\ast } \\sim \\frac{\\pi \\xi _{_{n}}}{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ , where $\\xi _{_{n}}$ is the integer defining each particular branch.", "Since all these branches are equivalent, we select the branch $\\xi _{_{n}}=1$ ; consequently, the range of the tortoise coordinate is $\\frac{\\pi }{2 \\sqrt{ q_{\\alpha }^{2} M^{3} } } \\le r_{\\ast } < \\frac{\\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ .", "From Eq.", "(REF ) we can obtain $r(r_{\\ast })$ , $r^2= M l^2 \\left[{ 1 + \\cot ^2\\!\\left( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast } \\right) }\\right]= M l^2 \\csc ^2\\!\\left( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }\\right).$ The tortoise coordinate as a function of $r$ as well as its inverse are shown in Fig.", "REF .", "Figure: The tortoise coordinate $r_{\\ast }$ as a function of the coordinate $r$ (left), Eq.", "(), as well as its inverse, Eq.", "() (right), are plotted; the dashed red straight lines show the position of the throat, 
$r_0^2=M l^2$ .", "The parameters are fixed as $M=1$ , $q_{\\alpha }=0.5$ and the AdS parameter is $l=2$ .", "In terms of the coordinates $(t, r_{\\ast }, \\phi )$ the line element (REF ) becomes $ds^{2} = q_{\\alpha }^{2}M^{2}r^{2}\\left( - dt^{2} + dr^{2}_{\\ast } \\right) + r^{2}d\\phi ^{2}.$ In terms of these coordinates the radial null geodesics satisfy $t = \\pm r_{\\ast } +$ constant.", "This motivates us to define the advanced and retarded null coordinates $v$ and $u$ , respectively, by $v = t + r_{\\ast } \\quad \\textup { and } \\quad u = t - r_{\\ast },$ where $-\\infty < v < \\infty $ , $-\\infty < u < \\infty $ .", "In these coordinates the metric (REF ) becomes $ds^{2} = - q_{\\alpha }^{2} M^{3} l^2 \\csc ^{2}{ \\left( \\frac{u - v}{2}\\sqrt{q_{\\alpha }^{2} M^{3} } \\right) } du dv + r^{2}d\\phi ^{2}.$ From Eq.", "(REF ) and using $r_{\\ast } = (v-u)/2 $ , $r$ can be determined as a function of $(u,v)$ as $r^{2} = l^{2}M\\csc ^{2}{ \\left( \\frac{(v - u) }{2}\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) }.$ Despite the coordinate ranges $-\\infty < u < \\infty $ and $-\\infty < v < \\infty $ , the metric (REF ) spans only the region $\\frac{\\pi }{2 \\sqrt{ q_{\\alpha }^{2} M^{3} }} \\le r_{\\ast } = \\frac{v - u}{2} < \\frac{\\pi }{ \\sqrt{q_{\\alpha }^{2} M^{3} }}$ .", "In order to extend the spacetime beyond the wormhole throat, $r_0 = \\sqrt{l^{2}M}$ , we are going to determine the affine parameter, $\\tau $ , along the null geodesics and reparametrize them with the coordinates $V = V(v)$ and $U = U(u)$ .", "We know that the geodesic tangent vector $K = K^{\\beta }\\partial _{\\beta } = \\frac{ dx^{\\beta } }{ d \\tau }\\partial _{\\beta }$ satisfies $K^{\\alpha } \\nabla _{\\alpha } K^{\\beta } = 0.$ The tangent vector can be written as $K = \\frac{ dr }{ d \\tau }\\left( \\pm \\frac{1}{\\sqrt{ (r^{2}/l^2 -M)q_{\\alpha }^{2}M^{2}r^{2} } } \\partial _{t} + \\partial _{r} \\right) $ .", "Thus, by substituting $K^{\\beta }$ into (REF ), we find 
for the affine parameter $ \\tau $ , $\\tau = {C}_{0} \\cot { \\left( \\frac{u - v }{2}\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } + C_{1},$ where ${C}_{0} $ and $C_{1}$ are integration constants.", "Then, the affine parameter along the null geodesics suggests defining the new coordinates $U$ and $V$ as $U = \\cot { \\left( \\frac{u}{2}\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } \\quad \\textup { and } \\quad V = \\cot { \\left( -\\frac{v}{2}\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right), } \\quad \\textup { with ranges } \\quad -\\infty < U, V < \\infty ,$ therefore, in terms of $U$ and $V$ the metric (REF ) becomes $ds^{2} = \\frac{4 l^2}{(U+V)^2} dU dV + r^2 d\\phi ^{2},$ where we have used that $r^2= l^2M (1 + U^{2})(1 + V^{2} )/(U+V)^2$ .", "By transforming to $X = (V+U)/2$ and $T = (V-U)/2$ , the metric (REF ) can be reduced to a more usual form given by $ds^{2} = \\frac{ l^{2} }{ X^{2} } \\left( -dT^{2} + dX^{2} \\right) + r^2 d\\phi ^{2}.$ We can see that the coordinates $(T,X)$ are the analogue of the Kruskal coordinates in Schwarzschild spacetime.", "In terms of $(t, r_{\\ast })$ the coordinates $(T, X)$ are $X^2-T^2=UV= \\frac{\\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^3} }\\right) + \\cos \\left({ r_{\\ast } \\sqrt{q_{\\alpha }^{2}M^3}}\\right)}{\\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^3}}\\right) - \\cos \\left({ r_{\\ast } \\sqrt{q_{\\alpha }^{2}M^3}}\\right)}, \\quad \\quad \\frac{ X + T }{X - T} = \\frac{V}{U} = \\frac{ \\sin {\\left( r_{\\ast }\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right)} - \\sin {\\left( t\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } }{ \\sin {\\left( r_{\\ast }\\sqrt{q_{\\alpha }^{2} M^{3} } \\right)} + \\sin {\\left( t\\sqrt{q_{\\alpha }^{2} M^{3} } \\right) } }.$ From the above equations we deduce that, in terms of $T$ and $X$ , the region corresponding to the wormhole throat $r=r_{0}= \\sqrt{l^{2}M}$ , or $r_{\\ast } = \\frac{\\pi }{2 \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ , is determined by the two equations $X^2-T^2 = 1 \\quad \\textup { 
and } \\quad \\frac{ X + T }{X - T} = \\frac{ 1 - \\sin {\\left( t\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } }{ 1 + \\sin {\\left( t\\sqrt{q_{\\alpha }^{2} M^{3} } \\right) } }$ Since $X^2-T^2 = 1$ , we have $X\\ne T$ , and $\\frac{ X + T }{X - T} = \\frac{ 1 - \\sin {\\left( t\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } }{ 1 + \\sin {\\left( t\\sqrt{q_{\\alpha }^{2} M^{3} } \\right) } }$ can be written as $T = m(t) X, \\quad \\textup { with } \\quad m(t) = -\\sin \\left( t\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) \\in (-1,1).$ The relation $T = m(t) X$ corresponds to the region determined by all the straight lines that cross the origin $(X=0,T=0)$ with slope between $-1$ and 1.", "As $t$ varies, the intersection between $X^2-T^2 = 1$ and the lines $T = m(t) X$ sweeps out the whole hyperbola $X^2-T^2 = 1$ .", "Thus, in terms of $X$ and $T$ , the region corresponding to the wormhole throat is $X^2-T^2 = 1$ , i.e.", "the hyperbola with vertices at $(X=\\pm 1,T=0)$ .", "From Eqs.", "(REF ) one obtains $T^{2} \\sin ^{2}{\\left( r_{\\ast }\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right)} + X^{2} \\left( \\frac{T^{2} - X^{2} - 1}{T^{2} - X^{2} + 1} \\right)^{2} \\cos ^{2}{\\left( r_{\\ast }\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right)} = X^{2}.$", "Regarding infinity, from the previous equation, the asymptotic AdS region $r\\sim \\infty $ , or $r_{\\ast } \\sim \\frac{\\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ , in terms of $X$ and $T$ , is given by $X^{2}\\left( \\frac{T^{2} - X^{2} - 1}{T^{2} - X^{2} + 1} \\right)^{2} \\sim X^{2},$ which is fulfilled by $X \\sim 0$ , or by the region specified below.", "For $r_{\\ast }\\sim \\frac{\\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ and $t\\in \\mathbb {R}$ , Eqs.", "(REF ) become $X^2-T^2= \\frac{ \\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^3} }\\right) - 1^{-} }{ \\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^{3}}}\\right) + 1^{-}}, \\quad \\quad \\frac{ X + T }{X - T} = \\frac{V}{U} = \\frac{ 0^{+} - \\sin {\\left( t\\sqrt{ q_{\\alpha }^{2} M^{3} } \\right) } }{ 0^{+} + \\sin {\\left( t\\sqrt{q_{\\alpha }^{2} M^{3} } \\right) } }, \\quad \\textup { 
being}\\quad 0 \\lessapprox 0^{+}, \\textup { and } 1^{-} \\lessapprox 1.$ For $t$ neither equal nor close to $\\frac{(2n+1)\\pi }{ \\sqrt{q_{\\alpha }^{2}M^3}}$ with $n\\in \\mathbb {Z}$ , Eqs.", "(REF ) yield $X=0$ and $T^2 = \\frac{1-\\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^3} }\\right) }{1 + \\cos \\left({t \\sqrt{q_{\\alpha }^{2}M^3}}\\right) }$ .", "For $t = \\frac{(2n+1)\\pi }{ \\sqrt{q_{\\alpha }^{2}M^3}}$ with $n\\in \\mathbb {Z}$ , Eqs.", "(REF ) yield $X^2 - T^2 \\gg 1$ and $|X|\\gg |T|$ .", "For $t\\ne \\frac{(2n+1)\\pi }{ \\sqrt{q_{\\alpha }^{2}M^3}}$ with $t\\sim \\frac{ (2n+1)\\pi ^{-} }{ \\sqrt{q_{\\alpha }^{2}M^3}}$ , or $t\\sim \\frac{ (2n+1)\\pi ^{+} }{ \\sqrt{q_{\\alpha }^{2}M^3}}$ , with $n\\in \\mathbb {Z}$ , Eqs.", "(REF ) yield $T^2 - X^2 \\gg 1$ and $|X|\\ll |T|$ .", "In other words, $\\frac{T^{2} - X^{2} - 1}{T^{2} - X^{2} + 1} \\sim 1 \\Rightarrow \\left\\lbrace \\quad (T^{2} - X^{2}) \\gg 1 \\quad \\vee \\quad (X^{2} - T^{2}) \\gg 1 \\quad \\right\\rbrace $ .", "Collecting the regions defined by (REF ) and (REF ), the Kruskal diagram corresponding to the spacetime (REF ) is depicted in Fig.", "REF .", "Figure: The analogue of the Kruskal-Szekeres diagram for the WH.", "The blue hyperbola with vertices at $(X,T)=(\\pm 1 , 0)$ represents the WH throat.", "Horizontal hyperbolae with vertices at $(X,T)=(\\pm \\infty ,0)$ , as well as the region $X=0$ , represent spatial infinity; the vertical hyperbola (vertices at $(X,T)=(0, \\pm \\infty )$ ) represents time infinity.", "In the Kruskal diagram, the hyperbola at the top (time infinity) can be identified with the one at the bottom, in case we want to work with the covering space.", "In order to obtain the Penrose diagram of the spacetime under consideration, we introduce the coordinates $\\lambda $ and $\\rho $ given by $T - X = \\tan { \\left( \\frac{\\lambda - \\rho }{2} \\right) }, \\quad T + X = \\tan { \\left( 
\\frac{\\lambda + \\rho }{2} \\right) }.$ Then, the WH metric in terms of $(\\lambda , \\rho )$ becomes $ds^{2} = l^2 \\csc ^{2}{\\rho } \\left( -d\\lambda ^{2} + d\\rho ^{2} + M d\\phi ^{2} \\right) = \\frac{l^2}{ \\sin ^{2}{\\rho } } \\left( -d\\lambda ^{2} + d\\rho ^{2} + d\\tilde{\\phi }^{2} \\right), \\quad \\textup { with } \\quad \\tilde{\\phi } = M\\phi ,$ and where $r$ and $\\rho $ are related by $r^{2} = l^{2}M\\csc ^{2}{\\rho }.$ Now, in order to draw the Penrose diagram, using (REF ) we can see that for the range of $\\rho $ , i.e., $\\rho \\in (-\\pi ,0) \\cup (0,\\pi )$ , the regions that define the maximal extension of the spacetime correspond to $&& \\textup {Wormhole throat: } \\quad \\left( r = r_{0} = \\sqrt{l^{2} M} \\right) \\equiv \\left( \\rho = \\pm \\frac{\\pi }{2} \\right).", "\\\\&& \\textup {Asymptotic AdS regions: } \\quad \\left( r \\sim \\infty \\right) \\equiv \\left( \\rho = 0, \\pm \\pi \\right).$", "Figure: Penrose diagram for the WH, metric (): Continuous vertical lines show the spatial infinity, $\\rho = - \\pi , 0, \\pi $ ; while the dashed ones represent WH throats, $\\rho = - \\pi /2, \\pi /2$ .", "The black dots at the top and at the bottom, $i^{+}$ and $i^{-}$ , denote the time infinity, future and past, respectively.", "Every light ray coming from infinity $\\rho \\sim \\pm \\pi $ will pass through the WH throat $\\rho =\\pm \\pi /2$ , and reach infinity $\\rho \\sim 0$ , and vice versa.", "As for the BTZ Penrose diagram, the WH Penrose diagram can be embedded in the Einstein Universe.", "The Penrose diagram of the spacetime with metric (REF ) is shown in Fig.", "REF .", "Finally, symmetries allow us to consider the Penrose diagram for the metric (REF ) as just half of the strip $\\rho \\in (-\\pi ,0) \\cup (0, \\pi )$ , as shown in Fig.", "REF .", "As for the BTZ Penrose diagram, the WH Penrose diagram can be embedded into the Einstein Universe.", "Thus, in Fig.", "REF , 
every light ray coming from infinity $\\rho \\sim \\pi $ will pass through the WH throat $\\rho =\\pi /2$ , and reach infinity $\\rho \\sim 0$ , and vice versa.", "According to the Penrose diagram, anything that crosses the throat is not lost, but passes to the other part of the WH, in the extended manifold.", "Figure: Penrose diagram for the WH, metric (): Continuous vertical lines show the spatial infinity, while the dashed one represents the WH throat.", "Light rays coming from infinity $\\rho \\sim \\pi $ will pass through the WH throat $\\rho =\\pi /2$ , and then travel towards infinity $\\rho \\sim 0$ ." ], [ "QNMs of a massive scalar test field in the WH spacetime", "The QNMs encode the information on how a perturbing field behaves in a given spacetime; they depend on the type of perturbation and on the geometry of the background system.", "The QNMs of the BTZ-BH have been determined for a number of perturbing fields, namely, scalar, massive scalar, electromagnetic, etc.", "[27], [28], [29].", "In this section we address the perturbation of the previously introduced WH, (REF ), by a massive scalar field $\\Psi (t,\\vec{r})$ .", "The effect is described by the solutions of the KG equation, $(\\nabla ^{\\alpha }\\nabla _{\\alpha } - \\mu ^{2})\\Psi (t,\\vec{r}) = 0,$ where $\\mu $ is the mass of the scalar field; equivalently, the KG equation reads $\\partial _{\\alpha }\\left( \\sqrt{-g} g^{\\alpha \\beta }\\partial _{\\beta } \\Psi (t,\\vec{r}) \\right) - \\sqrt{-g}\\mu ^{2}\\Psi (t,\\vec{r}) = 0.$ The scalar field is proposed in the form $\\Psi (t,\\vec{r}) = e^{-i\\omega t}e^{i\\ell \\phi } R(r) ,$ where $\\omega $ is the frequency of the perturbation and $\\ell $ its azimuthal angular momentum.", "Substituting (REF ) into the KG equation, we arrive at a second order equation for $R(r)$ , $R^{\\prime \\prime } + \\frac{2M l^2- 3r^2}{(Ml^2- r^2)r } R^{\\prime } - \\left( \\frac{ \\omega ^{2} - q_{\\alpha }^{2} M^2 ( \\ell ^{2} + \\mu ^{2} r^2)}{q_{\\alpha }^{2} M^2(M- 
{r^2}/{l^2}) r^2 }\\right)R = 0.$ $R(r)$ is completely determined once the appropriate boundary conditions are imposed.", "Since Eq.", "(REF ) diverges at the throat, $r_{0}^2=-M/ \\Lambda =M l^2$ , it is useful to put the KG equation in terms of the tortoise coordinate, $r_{\\ast }$ , defined in Eq.", "(REF ).", "For the WH, with $r_{0}$ being the throat, the boundary conditions for the QNMs consist in assuming purely ingoing waves at the throat of the WH, $r=r_0$ , that in terms of the tortoise coordinate is $R(r) \\sim e^{-i\\omega r_{\\ast }}$ .", "While at infinity, $r \\mapsto \\infty $ , the AdS boundary demands the vanishing of the solution; i.e.", "the boundary conditions are $&&{ r \\sim r_{0} \\Rightarrow R(r) \\sim e^{-i\\omega r_{\\ast }}, }\\\\&&{ r \\sim \\infty \\Rightarrow R(r) \\sim 0.", "}$ Let us return to the radial part of the KG Eq.", "(REF ).", "By transforming $R(r_{\\ast }) = \\psi (r_{\\ast })/\\sqrt{r}$ (considering $r$ as a function of $r_{\\ast }$ ), we arrive at $\\ddot{ \\psi } (r_{\\ast }) + \\left[ \\omega ^{2} - V_{\\rm eff}(r_{\\ast }) \\right]\\psi (r_{\\ast }) = 0,$ where $\\dot{f}= df/dr_{\\ast }$ ; while the effective potential $V_{\\rm eff}$ in the Schrödinger-like equation (REF ), is $V_{\\rm eff}(r) = q_{\\alpha }^{2} M^{2} \\left[ \\left( \\mu ^{2} + \\frac{3}{4l^2} \\right) r^{2} + \\ell ^{2} - \\frac{M}{4} \\right], \\quad \\textup { for }\\quad r\\in [r_{0},\\infty )$ that we identify as the potential of a displaced harmonic oscillator, with frequency $\\omega ^2= q_{\\alpha }^{2} M^{2} (\\mu ^{2} + \\frac{3}{4 l^2})$ .", "Note that the displacement is proportional to the angular momentum of the scalar field, $ \\ell ^2$ .", "The effective potential can be written in terms of the tortoise coordinate as $V_{\\rm eff}(r_{\\ast }) = q_{\\alpha }^{2} M^{3} \\left\\lbrace \\left(\\mu ^{2} l^2 + \\frac{3}{4}\\right) \\csc ^2\\!\\left( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }\\right) + \\frac{\\ell ^{2}}{M} - \\frac{1}{4} 
\\right\\rbrace , \\quad \\textup { for }\\quad \\frac{\\pi }{2 \\sqrt{ q_{\\alpha }^{2} M^{3} } } \\le r_{\\ast } < \\frac{\\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }.$ The coordinate $\\rho $ , which was introduced in section , as $r^{2} = l^{2}M\\csc ^{2}{\\rho }$ [see Eq.", "(REF )], in contrast to the coordinates $r$ and $r_{\\ast }$ , covers both sides of the WH spacetime, side I: $\\rho \\in (0 , \\pi /2]$ , and side II: $\\rho \\in [\\pi /2 , \\pi )$ , connected by the WH throat located at $\\rho = \\frac{\\pi }{2}$ .", "The effective potential in terms of $\\rho $ is $V_{\\rm eff}(\\rho ) = q_{\\alpha }^{2} M^{3} \\left\\lbrace \\left(\\mu ^{2} l^2 + \\frac{3}{4}\\right) \\csc ^2\\!(\\rho ) + \\frac{\\ell ^{2}}{M} - \\frac{1}{4} \\right\\rbrace , \\quad \\textup { for }\\quad \\rho \\in (0 , \\pi ).$ The effective potential is depicted in Fig.", "REF , both as a function of $r$ and of $\\rho $ .", "It diverges at infinity, being, as a function of $r$ , a confining harmonic oscillator-type potential; while as a function of $\\rho $ it is a potential of the Rosen-Morse type [30].", "Figure: The effective potential as a function of the coordinate $r$ (left) and as a function of $\\rho $ (right); the dashed red vertical lines show the position of the throat, $r_0^2=M l^2$ or $\\rho = \\pi /2$ .", "The parameters are fixed as $M=1$ , $q_{\\alpha }=0.5$ , and the AdS parameter is $l=2$ ; while the field parameters are $\\ell =4, \\mu =0.1$ .", "Note that the range of the coordinate $r$ is $r_{0} \\le r < \\infty $ , i.e.", "to the right of the dashed red lines."
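The confining character of $V_{\\rm eff}$ can be verified directly. The following sketch (our illustration, not part of the original text) evaluates $V_{\\rm eff}(\\rho )$ with the parameters used in the figure ($M=1$, $q_{\\alpha }=0.5$, $l=2$, $\\ell =4$, $\\mu =0.1$), confirming the divergence toward both boundaries $\\rho \\rightarrow 0, \\pi $, the symmetry about the throat, and the minimum at $\\rho =\\pi /2$.

```python
import math

# Parameters as in the figure: M = 1, q_alpha = 0.5, AdS parameter l = 2,
# field angular momentum ell = 4, field mass mu = 0.1.
M, q, l, ell, mu = 1.0, 0.5, 2.0, 4.0, 0.1

def v_eff(rho):
    """V_eff(rho) = q^2 M^3 [(mu^2 l^2 + 3/4) csc^2(rho) + ell^2/M - 1/4]."""
    csc2 = 1.0 / math.sin(rho) ** 2
    return q**2 * M**3 * ((mu**2 * l**2 + 0.75) * csc2 + ell**2 / M - 0.25)

# Divergence toward rho -> 0 and rho -> pi (the AdS boundaries of the two sides)
print(v_eff(1e-3) > 1e5, v_eff(math.pi - 1e-3) > 1e5)

# Symmetry about the throat rho = pi/2, and the minimum located there
print(abs(v_eff(0.3) - v_eff(math.pi - 0.3)) < 1e-9)
print(v_eff(math.pi / 2) < v_eff(math.pi / 4) < v_eff(math.pi / 8))
```

The same parameter values reproduce the qualitative shape shown in the figure: an infinite well in $\\rho $, which is what the point-spectrum argument of the next subsection relies on.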
], [ "The solution for the QNMs of the WH", "In general, the spectrum of the Schrödinger-like operator $\\hat{H} = -\\frac{d^{2}}{dr^{2}_{\\ast }} + V_{\\rm eff}(r_{\\ast })$ , can be decomposed into three parts: the point spectrum (often called the discrete spectrum), the continuous spectrum, and the residual spectrum.", "In the case of our interest it turns out that the QNMs correspond to the point spectrum, as could be foreseen from the shape of the effective potential.", "In [32] H. Weyl showed that if $V(x)$ is a real valued continuous function on the interval $\\mathcal {R}=(-L,L)$ , $L\\in \\mathbb {R}$ , such that $\\lim _{|x|\\rightarrow L} V(x)\\rightarrow \\infty $ , and such that $V(x)$ is monotonic in $|x|\\in (-L,L)$ , then the unbounded operator $-d^{2}/dx^{2} + V(x)$ , acting on $\\mathbf {L}^{2}(\\mathcal {R})$ , has pure point spectrum.", "Moreover, since $V(x)$ has the structure of an infinite well, all the eigenvalues of the operator $\\hat{H} = -d^{2}/dx^{2} + V(x)$ will necessarily be real numbers.", "Subsequently, in [33], this result was extended to the case in which $V(x)$ is not necessarily monotonic in $|x|\\in (-L,L)$ .", "Specifically in our case, the potential $V_{\\rm eff}(\\rho )$ , in Eq.", "(REF ), is a real valued continuous function in $(0 , \\pi )$ , and since $\\lim _{|\\rho |\\rightarrow \\pi } V_{\\rm eff}(\\rho ) = \\lim _{|\\rho |\\rightarrow 0} V_{\\rm eff}(\\rho ) \\rightarrow \\infty $ , then according to [32], [33], the Schrödinger-like operator has a pure point spectrum.", "Thus we can conclude that the QNMs of the scalar field in the wormhole background (REF ) are purely real; i.e., these QNMs are in fact normal modes (NMs) of oscillation.", "In agreement with the previous argument, the general solution of Eq.", "(REF ) with $V_{\\rm eff}(r_{\\ast })$ in Eq.", "(REF ), is given by $\\psi (\\rho ) = B_{1} P^{Z}_{V}\\left( \\sqrt{ 1 - \\csc ^{2}{\\rho } } \\right) + B_{2} Q^{Z}_{V}\\left( \\sqrt{ 1 - \\csc
^{2}{\\rho } } \\right) = B_{1} P^{Z}_{V}\\left( i\\cot {\\rho } \\right) + B_{2} Q^{Z}_{V}\\left( i\\cot {\\rho } \\right) ,$ where $B_{1}$ and $B_{2}$ are integration constants, $P^{Z}_{V}(x)$ are the associated Legendre functions of the first kind, and $Q^{Z}_{V}(x)$ are the associated Legendre functions of the second kind; while the parameters $V$ and $Z$ are given, respectively, by $V = \\sqrt{1 + \\mu ^{2}l^{2} } - \\frac{1}{2}, \\quad Z = \\frac{ i}{\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} }.$ Figure: The quasinormal frequencies $\\omega $ are shown, to the left as a function of the WH mass $M$ and to the right as a function of the AdS parameter $l$ .", "The AdS parameter is $l=0.3$ in the plot to the left; $M=1$ in the graphic to the right.", "The rest of the parameters are fixed as $q_{\\alpha }=0.5$ ; $\\ell =2$ ; $\\mu =0.1$ .", "The QNMs are shown for $n=0$ , $n=1$ and $n=2$ in order from bottom to top.", "For the sake of fluency in the text we skip the details on imposing the boundary conditions, (REF ) and (), and include them in the Appendix.", "The boundary conditions for the QNMs consist in assuming purely ingoing waves at the throat of the WH, $r=r_0$ , that in terms of the tortoise coordinate are $R(r) \\sim e^{-i\\omega r_{\\ast }} $ .", "While, owing to the AdS asymptotics, the solution is required to vanish at infinity, $r \\mapsto \\infty $ ; or in terms of $\\rho $ , $\\rho \\mapsto 0$ or $\\rho \\mapsto \\pi $ .", "These conditions imply restrictions on the values of the arguments of the Gamma functions related to the hypergeometric functions.", "Joining both conditions we arrive at the following restrictions for the WH parameters $(M, \\Lambda , q_{\\alpha })$ that, combined with restrictions on the parameters of the perturbing field $(\\ell , \\mu , \\omega )$ , amount to $&& 1 - Z + V = -2n, \\quad n\\in \\mathbb {N} + \\lbrace 0\\rbrace , \\\\&& \\frac{1}{2} - i M^{-3/2}\\sqrt{
\\ell ^{2}M^{2} - \\frac{M^{3}}{4} - \\left(\\frac{\\omega }{q_{\\alpha }}\\right)^{2} } + \\sqrt{1 + \\mu ^{2}l^{2} } = -2n, \\\\&&\\Rightarrow \\omega ^{2} = q_{\\alpha }^{2} M^{3} \\left[ \\left(2n + \\frac{1}{2} + \\sqrt{ \\mu ^{2}l^2 + 1} \\right)^2 + \\frac{\\ell ^{2}}{M} - \\frac{1}{4} \\right].$ Besides, the condition (REF ), when applied to the solution (REF ), renders the second term a multiple of the first one, and then the solution (REF ), in terms of the hypergeometric function, takes the form $\\psi (\\rho ) = \\tilde{B}_{1} \\left( \\frac{ i\\cot {\\rho } + 1 }{ i\\cot {\\rho } - 1 } \\right)^{ \\frac{Z}{2} } \\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, V + 1; 1 - Z; \\frac{ 1 - i\\cot {\\rho } }{ 2 } \\right),$ where $\\tilde{B}_{1}$ is a constant, while $_{2}\\!\\tilde{F}_{1}\\!(a,b;c;x)$ is a regularized Gauss (or ordinary) hypergeometric function, related to the Gauss hypergeometric function $_{2}\\!F_{1}\\!(a,b;c;x)$ through $ _{2}\\!\\tilde{F}_{1}\\!(a,b;c;x) =$ $_{2}\\!F_{1}\\!(a,b;c;x)/\\Gamma (c)$ , see the Appendix for details.", "Clearly from the expression for $\\omega ^2$ , Eq.", "(), we see that it is real and always positive.", "It may come as a surprise that $\\omega $ is not complex, as would correspond to an open system, which a WH may deceptively appear to be.", "However, a clue that this is not the case (the WH as an open system) comes from the form of the effective potential.", "The throat is not similar to the event horizon of a BH, where the field is lost once it penetrates the horizon.", "In the case of a WH it is supposed that the waves that penetrate the throat pass to the continuation of the WH.", "This picture is in agreement with the Kruskal-Szekeres and Penrose diagrams.", "Moreover, the fact that the frequency of the QNMs does not have an imaginary part tells us that the system will remain the same, i.e.", "massive scalar field solutions are stable.", "In Fig.", "REF the frequencies are plotted as a
function of the mass $M$ and of the AdS parameter $l$ , for $n=0,1,2$ ; the tendency is a growing $\\omega $ as $M$ increases; we can deduce a similar behavior for the variation of the electromagnetic parameter $q_{\\alpha }$ , and a very slow increase of $\\omega $ when $l$ increases.", "Here we just note that for the hypergeometric function, if the first or the second argument is a non-positive integer, then the function reduces to a polynomial.", "In our problem it is possible to impose that condition, by making $V=n$ , with $n$ an integer; however, this is not our aim and we will not go further in this direction.", "A remarkable particular case is $\\ell = \\pm \\frac{\\sqrt{M}}{2}$ , $\\pm \\omega = \\left(n + \\frac{1}{2} \\right)\\omega _{0} + \\left( \\sqrt{ \\mu ^{2} l^2 + 1 } - \\frac{1}{2} \\right) \\frac{ \\omega _{0}}{2} , \\textup { with } \\omega _{0} = 2|q_{\\alpha }|\\sqrt{M^{3}}.$ This spectrum resembles that of a quantum harmonic oscillator under the influence of an electric field $\\mathcal {E}$ , $E_{n} = \\left( n + \\frac{1}{2} \\right)\\hbar \\omega - \\frac{q^{2}\\mathcal {E}^{2}}{2m\\omega ^{2}}$ , with a frequency given by $\\omega = \\sqrt{\\frac{k}{m}}$ ; i.e., for this particular frequency, $\\omega _{0} = 2\\sqrt{M^{3} q_{\\alpha }^2}$ , the massive scalar field, confined by the WH-AdS spacetime, will oscillate harmonically.
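As a consistency check (ours, not in the original text), the special-case spectrum for $\\ell =\\pm \\sqrt{M}/2$ can be compared numerically against the general formula for $\\omega ^{2}$ given above: for that value of $\\ell $ the term $\\ell ^{2}/M - 1/4$ vanishes, so both expressions should coincide for every overtone $n$.

```python
import math

def omega_general(n, M, q, l, ell, mu):
    """General spectrum: omega^2 = q^2 M^3 [(2n + 1/2 + sqrt(mu^2 l^2 + 1))^2 + ell^2/M - 1/4]."""
    return math.sqrt(q**2 * M**3
                     * ((2 * n + 0.5 + math.sqrt(mu**2 * l**2 + 1))**2 + ell**2 / M - 0.25))

def omega_special(n, M, q, l, mu):
    """Special case ell = +/- sqrt(M)/2: omega = (n + 1/2) w0 + (sqrt(mu^2 l^2 + 1) - 1/2) w0/2."""
    w0 = 2 * abs(q) * math.sqrt(M**3)
    return (n + 0.5) * w0 + (math.sqrt(mu**2 * l**2 + 1) - 0.5) * w0 / 2

M, q, l, mu = 1.0, 0.5, 2.0, 0.1   # illustrative parameter values
ell = math.sqrt(M) / 2             # the special angular momentum ell = sqrt(M)/2
for n in range(3):
    assert abs(omega_general(n, M, q, l, ell, mu) - omega_special(n, M, q, l, mu)) < 1e-12
```

The agreement for $n=0,1,2$ confirms that the harmonic-oscillator-like spectrum is the $\\ell ^{2}=M/4$ slice of the general result, with the fundamental spacing $\\omega _{0} = 2|q_{\\alpha }|\\sqrt{M^{3}}$.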
], [ "Final Remarks", "We have determined in exact form the QNMs of a massive scalar field in the background of a charged, static, cyclic symmetric (2+1)-dimensional traversable wormhole, determining that the characteristic frequencies are real and discrete (point spectrum), showing that as far as the test scalar field is concerned, the potential is a confining one.", "The WH Penrose diagram agrees with this interpretation, since light trajectories passing through the WH throat arrive at the extended manifold, i.e.", "the other side of the WH.", "Since there are no propagating degrees of freedom in purely (2+1)-dimensional gravity, it is important to couple (2+1)-gravity with other fields as well as probe (2+1)-systems with test fields such as scalar fields.", "The BTZ-black hole has been of great relevance in providing a mathematical model of a holographic manifold.", "(2+1)-systems in which quasinormal modes are exactly calculated are then encouraging examples for carrying the analysis all the way through and finding the correspondence with a holographic theory [31].", "The holographic principle, roughly speaking, consists in finding a lower-dimensional dual field theory that contains the same information as gravity.", "In the system worked out in the present paper, we consider two fields, an electromagnetic field, characterized by a gauge potential $A^{\\mu }$ , which could be the starting point for a quantization scheme.", "We also show that the KG equation for a massive scalar field is exactly solvable, providing a scalar field that could be used in searching for a correspondence at the AdS boundary.", "In other words, the system worked out here encourages exploring the possibility of obtaining the conformal field associated with this AdS-WH solution in the bulk, according to the AdS/CFT correspondence.", "Acknowledgments: N. B. and P. C. acknowledge partial financial support from CONACYT-Mexico through the project No.", "284489.", "P. C. and L. O.
thank Cinvestav for hospitality." ], [ "Appendix", "In this Appendix, we present the details in setting the boundary conditions for the massive scalar field in the WH spacetime with metric (REF )." ], [ "Boundary conditions at the throat $r=r_{0}$ , and at infinity {{formula:e36dfde2-f020-4a16-9f5c-4d829c9e88c5}} ", "Very close to the throat $r \\sim r_{0} = \\sqrt{ -\\frac{ M }{\\Lambda } } = \\sqrt{ l^{2}M }$ we shall require that $R(r_{\\ast }) \\sim e^{-i\\omega r_{\\ast }} $ .", "For simplicity, the description will be presented in terms of the tortoise coordinate, $r_{\\ast }$ .", "Then, the asymptotic form of $e^{-i\\omega r_{\\ast }}$ near the throat $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } }$ , is given by $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } \\Rightarrow e^{-i\\omega r_{\\ast }} \\sim \\left(e^{i\\pi }\\right)^{- \\frac{ \\omega }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } } = \\left( -1 \\right)^{ -\\frac{\\omega }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } }$ In such a way that the condition (REF ) goes like $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } \\Rightarrow R(r_{\\ast }) = \\frac{\\psi (r_{\\ast })}{ r^{\\frac{1}{2}}\\!", "(r_{\\ast }) }\\sim \\left( -1 \\right)^{ -\\frac{\\omega }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } } \\sim {\\rm constant} \\equiv \\mathbb {C}-\\lbrace 0\\rbrace .$ To implement this asymptotic behavior in the solution $R(r_{\\ast })=\\psi (r_{\\ast })/r^{\\frac{1}{2}}\\!", "(r_{\\ast })$ , with $\\psi $ given in Eq.", "(REF ), with $\\rho $ and $r_{\\ast }$ related by $\\rho = \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }$ , recalling that the ranges are different, we shall write it as $R(r_{\\ast }) = \\frac{ B_{1} }{ r^{\\frac{1}{2}}\\!", "(r_{\\ast }) } P^{Z}_{V}\\!\\!\\left( i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) \\right) + \\frac{ B_{2} }{ r^{\\frac{1}{2}}\\!", "(r_{\\ast }) } Q^{Z}_{V}\\!\\!\\left( i \\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} 
r_{\\ast }) \\right) = B_{1} R_{I}(r_{\\ast }) + B_{2} R_{II}(r_{\\ast }),$ where the two terms, $R_{I}(r_{\\ast }) = \\frac{ 1 }{ r^{\\frac{1}{2}}\\!(r_{\\ast }) } P^{Z}_{V}\\!\\!\\left( i \\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) \\right)$ and $R_{II}(r_{\\ast }) = \\frac{ 1 }{ r^{\\frac{1}{2}}\\!(r_{\\ast }) } Q^{Z}_{V}\\!\\!\\left(i \\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) \\right)$ , shall be analyzed separately.", "Moreover, in terms of the hypergeometric functions, the quantities $R_{I}(r_{\\ast })$ and $R_{II}(r_{\\ast })$ become $R_{I}(r_{\\ast }) = \\frac{ 1}{ r^{\\frac{1}{2}}\\!(r_{\\ast }) }\\left( \\frac{ i \\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 }{ i \\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 } \\right)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, V + 1; 1 - Z; \\frac{ 1 - i\\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) }{ 2 } \\right),$ The behavior of $R(r_{\\ast })$ at the wormhole throat $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } }$ (in this neighborhood $l^{2}M - r^{2} \\sim 0$ ) is $R_{I}( r_{\\ast } ) \\sim \\frac{ 1 }{ \\sqrt{ \\sqrt{ l^{2}M } } } \\left( \\frac{ i \\cot ( \\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 }{ i \\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 } \\right)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, V + 1; 1 - Z; \\frac{1}{2} \\right).$ Now, using Bailey's summation theorem $_{2}\\!F_{1}\\!\\!\\left( a, 1 - a; c ;\\frac{1}{2} \\right) = \\frac{\\Gamma (\\frac{c}{2})\\Gamma (\\frac{1+c}{2})}{\\Gamma (\\frac{c+a}{2})\\Gamma (\\frac{1+c-a}{2})},$ Eq.", "(REF ) takes the form $R_{I}(r_{\\ast }) \\sim \\frac{ 1 }{ \\sqrt{ \\sqrt{ l^{2}M } } } \\left( \\frac{ i \\cot (
\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 }{ i \\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 } \\right)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\frac{\\Gamma (\\frac{1-Z}{2})\\Gamma (\\frac{2-Z}{2})}{\\Gamma (\\frac{1-V-Z}{2})\\Gamma (\\frac{2+V-Z}{2})} \\sim R_{\\ast }\\frac{\\Gamma (\\frac{1-Z}{2})\\Gamma (\\frac{2-Z}{2})}{\\Gamma (\\frac{1-V-Z}{2})\\Gamma (\\frac{2+V-Z}{2})},$ where $R_{\\ast }\\in \\mathbb {C}-\\lbrace 0\\rbrace $ is constant.", "While in order that $R_{I}(r_{\\ast })$ behaves properly, the factor $\\Gamma (\\frac{1-Z}{2})\\Gamma (\\frac{2-Z}{2})/ (\\Gamma (\\frac{1-V-Z}{2})\\Gamma (\\frac{2+V-Z}{2}))$ should be finite and non-vanishing; this can be accomplished if $\\frac{1-Z}{2}$ , $\\frac{2-Z}{2}$ , $\\frac{1-V-Z}{2}$ and $\\frac{2+V-Z}{2} = \\frac{1}{2} + \\frac{1 + V-Z}{2}$ , are not in the set $\\lbrace $ 0, -1, -2, -3, .... -$n$ ,... $\\rbrace $ with $n \\in \\mathbb {N};$ consequently, $1+V-Z$ $\\ne $ $-1$ , $-3$ , $-5$ , $-7$ , .. 
$-(2n+1)$ .", "With this restriction we get to $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } } \\Rightarrow R_{I}(r_{\\ast }) \\sim {\\rm constant} \\in \\mathbb {C} - \\lbrace 0\\rbrace ,$ Having then accomplished that for $ r\\sim r_{0} = \\sqrt{l^{2}M} \\Rightarrow R_{I}(r_{\\ast }) \\sim e^{-i\\omega r_{\\ast }}$ [condition (REF )].", "In order to get the function that describes the asymptotic behavior of the second term in $R(r_{\\ast })$ as $r\\sim r_{0} = \\sqrt{ l^{2}M }$ , $R_{II}(r_{\\ast }) = \\frac{ 1 }{ r^{ \\frac{1}{2} }(r_{\\ast }) } Q^{Z}_{V}\\!\\!\\left( i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) \\right)$ , we write it in terms of the hypergeometric functions, $R_{II}( r_{\\ast } ) &=& \\frac{\\pi }{2} R_{I}( r_{\\ast } ) \\nonumber \\\\&-& \\frac{\\Gamma (Z+V+1)}{\\Gamma (-Z+V+1)} \\frac{\\pi \\csc {(Z\\pi )}}{ 2 r^{ \\frac{1}{2} }\\!", "(r_{\\ast }) }\\left( \\frac{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 }{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 }\\right)^{\\!", "\\frac{ i }{ 2\\sqrt{M} } \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\!\\!\\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, V + 1; 1 + Z; \\frac{ 1 - i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) }{ 2 } \\right).\\nonumber \\\\$ Now we can analyze the behavior of $R_{II}$ as $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3}}}$ , $R_{II}(r_{\\ast }) &\\sim & \\frac{\\pi }{2} R_{I}(r_{\\ast }) - \\frac{\\Gamma (Z+V+1)}{\\Gamma (-Z+V+1)} \\frac{\\pi \\csc {(Z\\pi )}}{2\\sqrt{\\sqrt{ l^{2}M }}}\\left( \\frac{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 }{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 } \\right)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, V + 1; 1 + Z; \\frac{1}{2} \\right), \\nonumber \\\\&\\sim & 
\\frac{\\pi }{2} R_{I}( r_{\\ast } ) - \\frac{ \\Gamma (Z+V+1) }{ \\Gamma (-Z+V+1) } \\frac{ \\pi \\csc {(Z\\pi )} }{ 2\\sqrt{ \\sqrt{ l^{2}M } } } \\left( \\frac{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) - 1 }{ i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) + 1 } \\right)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } } \\frac{\\Gamma (\\frac{1+Z}{2})\\Gamma (\\frac{2+Z}{2})}{\\Gamma (\\frac{1-V+Z}{2})\\Gamma (\\frac{2+V+Z}{2})}\\\\&\\sim & \\frac{\\pi }{2} R_{I}( r_{\\ast } ) - \\frac{\\Gamma (Z+V+1)}{\\Gamma (-Z+V+1)} \\frac{\\pi \\csc {(Z\\pi )}}{2 \\sqrt{\\sqrt{ l^{2}M }} }( -1)^{ \\frac{ i}{2\\sqrt{M}} \\sqrt{ \\ell ^{2} - \\frac{M}{4} - \\frac{\\omega ^{2}}{q_{\\alpha }^{2}M^{2}} } }\\frac{\\Gamma (\\frac{1+Z}{2})\\Gamma (\\frac{2+Z}{2})}{\\Gamma (\\frac{1-V+Z}{2})\\Gamma (\\frac{2+V+Z}{2})}.$ Therefore we have determined the asymptotic behavior of $R_{II}( r_{\\ast } )$ at the throat.", "We considered that as $r_{\\ast } \\sim \\frac{ \\pi }{ 2\\sqrt{ q_{\\alpha }^{2} M^{3} } }$ i.e., ($r \\sim r_{0}$ ), then $ l^{2}M - r^{2}\\sim 0$ , and we also used Eq.", "(REF )." ], [ "Boundary condition at infinity, $(r\\sim \\infty )$ . 
", "In what follows we shall impose the second boundary condition at infinity, $r\\sim \\infty $ ; in terms of the tortoise coordinate is equivalent to $r_{\\ast } \\sim \\frac{ \\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ , then $R(r_{\\ast })\\rightarrow 0$ [condition ].", "It will be done separately for $ R_{I}(r_{\\ast })$ and $R_{II}(r_{\\ast })$ .", "It shall be considered first the term $R_{I}(r_{\\ast })$ written in terms of the hypergeometric function, Eq.", "(REF ).", "Since the last argument $x=(1 - i\\cot (\\sqrt{q_{\\alpha }^{2} M^{3}} r_{\\ast }) )/2$ of the hypergeometric function diverges when $r_{\\ast }\\sim \\frac{ \\pi }{ \\sqrt{ q_{\\alpha }^{2} M^{3} } }$ the following identity can be used, $_{2}\\!F_{1}\\!\\!\\left( a, b; c; x \\right) &=& \\Gamma (c) \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( a, b; c; x \\right) =\\frac{ \\Gamma (c)\\Gamma (b-a) }{ \\Gamma (b)\\Gamma (c-a) } (-x)^{-a} \\!\\!\\!\\!\\!\\quad _{2}\\!F_{1}\\!\\!\\left( a, a - c + 1; a - b + 1; \\frac{1}{x} \\right) \\nonumber \\\\&+& \\frac{ \\Gamma (c)\\Gamma (a-b) }{ \\Gamma (a)\\Gamma (c-b) } (-x)^{-b} \\!\\!\\!\\!\\!\\quad _{2}\\!F_{1}\\!\\!\\left( b, b - c + 1; b - a + 1; \\frac{1}{x} \\right), $ that allows us to write the asymptotic expression for $ R_{I}(r_{\\ast })$ as $R_{I}(r_{\\ast }) &\\sim & \\frac{ \\Gamma (2V+1)C_{1} }{ \\Gamma (V+1)\\Gamma (1-Z+V) } r^{V-\\frac{1}{2}}(r_{\\ast }) \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( -V, Z-V; -2V; 0 \\right) \\nonumber \\\\&& + \\!\\!\\!\\quad \\frac{ \\Gamma (-2V-1)C_{2} }{ \\Gamma (-V)\\Gamma (-Z-V) } r^{-V-\\frac{3}{2}}\\!", "(r_{\\ast }) \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( V+1, Z+V+1; 2V+2; 0 \\right),$ where $C_{1}$ and $C_{2}$ are complex constants.", "Using now that $_{2}\\!F_{1}\\!\\!\\left( a, b; c; 0 \\right) = \\Gamma (c) \\!\\!\\!\\!\\!\\quad _{2}\\!\\tilde{F}_{1}\\!\\!\\left( a, b; c; 0 \\right) =1/\\Gamma (c)$ , the previous equation can be written as 
$R_{I}(r_{\\ast }) \\sim \\frac{ \\Gamma (2V+1)C_{1} }{ \\Gamma (V+1)\\Gamma (1-Z+V) } \\frac{1}{ \\Gamma ^{2}(-2V) } r^{ V-\\frac{1}{2}}(r_{\\ast })+ \\!\\!\\!\\quad \\frac{ \\Gamma (-2V-1)C_{2} }{ \\Gamma (-V)\\Gamma (-Z-V) } \\frac{1}{ \\Gamma ^{2}(2V+2) } r^{ -V-\\frac{3}{2} }(r_{\\ast }).$ On the other hand, given that $V = \\sqrt{ 1 + \\mu ^{2}l^{2} } - \\frac{1}{2}$ , and since $\\mu $ , $l$ $\\in \\mathbb {R}$ $\\Rightarrow $ $\\sqrt{ 1 + \\mu ^{2}l^{2} } > 1 $ $\\Rightarrow $ $V > \\frac{1}{2}$ .", "Then the behavior of $R_{I}(r) $ goes like $\\lim _{r\\rightarrow \\infty } r^{ V-\\frac{1}{2}} \\rightarrow \\infty , \\quad {\\rm and} \\lim _{r\\rightarrow \\infty } r^{ -V-\\frac{3}{2} } \\rightarrow 0.$ Therefore, the fulfilment of the proper behavior, $R_{I}(r) \\sim 0$ when $r\\rightarrow \\infty $ , and keeping the convergence of the second term in (REF ), imposes the condition that $(-2V-1) \\ne 0, -1, -2, -3,...-n,...$ with $n\\in \\mathbb {N}$ , guaranteeing then that $\\Gamma (-2V-1)$ be finite.", "In other words, $(-2V-1) \\ne 0, -1, -2, -3,...-n,... \\Rightarrow (-2V) \\ne 1, 0, -1, -2, -3,...-n,...$ , implying that $1 / \\Gamma (-2V) \\ne 0$ .", "Moreover, $V+1 > 0$ implies that $ 1/\\Gamma (V+1) \\ne 0$ , but the fulfilment of the boundary condition that $R(r)$ vanishes at infinity requires that $ 1/ \\Gamma (1-Z+V) = 0$ ; this condition imposes that $(1-Z+V) = 0, -1, -2, -3,...-n,...$ with $n\\in \\mathbb {N}$ .", "This guarantees the vanishing of the first term in (REF ), accomplishing then the desired behavior at infinity.", "We still have to consider the compatibility of the previously determined values of $V$ and $Z$ with the fulfilment of the first boundary condition, that at the throat $R_{I}(r_{\\ast }) \\sim e^{-i\\omega r_{\\ast }}$ .", "Eq.", "(REF ) imposes that $(1-Z+V)\\ne -1, -3, -5, -7,... 
-(2n+1),..$ with $n\\in \\mathbb {N}$ .", "Gathering the two conditions leads us to the following $\\left\\lbrace (1-Z+V) = -n : n\\in \\mathbb {N} \\right\\rbrace - \\left\\lbrace (1-Z+V) = -(2n+1) : n\\in \\mathbb {N} \\right\\rbrace = \\left\\lbrace (1-Z+V) = 0, -2, -4, -6,...-2n,..\\right\\rbrace ,$ i.e.", "$R_{I}(r_{\\ast })$ has the asymptotic behaviors (REF ) and () provided $1 - Z + V = -2n, \\textup { with } n\\in \\mathbb {N} + \\lbrace 0\\rbrace .$" ], [ " Behavior of $R_{II}(r_{\\ast })$ at infinity", "$R_{II}(r_{\\ast })$ was defined in Eq.", "(REF ).", "Substituting the previously derived condition (REF ) into (REF ) leads to the vanishing of the second term of $R_{II}(r_{\\ast })$ since $ 1/\\Gamma ( - Z + 1 + V) = 0$ .", "This will occur whenever (i) $Z$ is not an integer, otherwise $\\csc {\\!(Z\\pi )}$ will diverge; and (ii) $1+Z+V \\ne - n$ with $n\\in \\mathbb {N} + 0$ , otherwise $\\Gamma (1+Z+V)$ diverges.", "Summarizing, the fulfilment of the condition (REF ), along with $Z\\ne \\pm n$ and $1+Z+V \\ne - n, \\quad n\\in \\mathbb {N} + 0$ , leads to the following simplifications, $R_{II}(r_{\\ast }) = \\frac{\\pi }{2} R_{I}(r_{\\ast }) \\Rightarrow R(r_{\\ast }) = B_{1}R_{I}(r_{\\ast }) + B_{2}R_{II}(r_{\\ast }) = \\tilde{B}_{1}R_{I}(r_{\\ast }), \\quad \\textup { i.e., } R(r_{\\ast }) = \\tilde{B}_{1} R_{I}(r_{\\ast }), \\quad \\forall r_{\\ast }.$ The two boundary conditions, at the throat and at infinity, are thus fulfilled by the solution $R(r_{\\ast })$ for the QNMs of the scalar test field in the WH." ] ]
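Two analytic ingredients do the work in this appendix: Bailey's summation theorem for $_{2}F_{1}$ at argument $1/2$, and the vanishing of $1/\\Gamma $ at non-positive integers, which enforces the quantization condition $1-Z+V=-2n$. Both are easy to check numerically; the sketch below is our illustration (the parameter values $a=0.3$, $c=1.2$ are arbitrary choices, not taken from the text), summing the Gauss series directly.

```python
import math

def hyp2f1_half(a, b, c, terms=80):
    """Gauss series for 2F1(a, b; c; 1/2), summed term by term (converges for |x| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * 0.5
    return total

# Bailey's theorem: 2F1(a, 1-a; c; 1/2) = G(c/2) G((1+c)/2) / [G((c+a)/2) G((1+c-a)/2)]
a, c = 0.3, 1.2   # illustrative values with b = 1 - a, as the theorem requires
lhs = hyp2f1_half(a, 1 - a, c)
rhs = (math.gamma(c / 2) * math.gamma((1 + c) / 2)
       / (math.gamma((c + a) / 2) * math.gamma((1 + c - a) / 2)))
assert abs(lhs - rhs) < 1e-12

# 1/Gamma vanishes at non-positive integers: approaching the pole of Gamma at -3,
# the reciprocal tends to zero -- this is what kills the divergent term of R_I.
assert abs(1.0 / math.gamma(-3 + 1e-8)) < 1e-6
```

The second check mirrors the step where $1/\\Gamma (1-Z+V)=0$ is demanded: the reciprocal Gamma function vanishes precisely when its argument is $0, -1, -2, \\dots $, which is how the discrete spectrum emerges.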
1906.04360
[ [ "Antlia2's role in driving the ripples in the outer gas disk of the
 Galaxy" ], [ "Abstract We employ the earlier published proper motions of the newly discovered Antlia 2 dwarf galaxy derived from Gaia data to calculate its orbital distribution in the cosmologically recent past.", "Using these observationally motivated orbits, we calculate the effect of the Antlia 2 dwarf galaxy on the outer HI disk of the Milky Way, using both test particle and Smoothed Particle Hydrodynamics simulations.", "We find that orbits with low pericenters, $\\sim$ 10 kpc, produce disturbances that match the observed outer HI disk perturbations.", "We have independently recalculated the proper motion of the Antlia 2 dwarf from Gaia data and found a proper motion of $(\\mu_{\\alpha}\\cos \\delta, \\mu_{\\delta}) = (-0.068,0.032) \\pm (0.023,0.031)~\\rm mas/yr$, which agrees with results from Torrealba et al.", "(2019) within the errors, but gives lower mean pericenters, e.g., $\\sim$ 15 kpc for our fiducial model of the Milky Way.", "We also show that the Sagittarius dwarf galaxy interaction does not match the observed perturbations in the outer gas disk.", "Thus, Antlia 2 may be the driver of the observed large perturbations in the outer gas disk of the Galaxy.", "The current location of the Antlia 2 dwarf galaxy closely matches that predicted by an earlier dynamical analysis (Chakrabarti \\& Blitz 2009) of the dwarf that drove ripples in the outer Galaxy, and, in particular, its orbit is nearly coplanar with the Galactic disk.", "If the Antlia 2 dwarf galaxy is responsible for the perturbations in the outer Galactic disk, it would have a specific range of proper motions that we predict here; this can be tested soon with Gaia DR-3 and Gaia DR-4 data."
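The orbital calculation summarized above can be illustrated with a minimal sketch. This is our illustration, not the paper's actual pipeline: it integrates a single test orbit with a leapfrog scheme in a spherical Hernquist potential and records the pericenter. The halo mass ($10^{12}\\,M_{\\odot }$ taken as the total Hernquist mass), the scale length ($a=30$ kpc), and the initial phase-space point at 130 kpc (roughly Antlia 2's distance) are assumptions chosen for illustration; the paper instead matches the Hernquist profile to an NFW halo and Monte Carlo samples the Gaia proper-motion errors.

```python
import math

G = 4.30091e-6        # gravitational constant in kpc (km/s)^2 / Msun
M_halo = 1.0e12       # assumed total Hernquist mass in Msun (illustrative)
a_h = 30.0            # assumed Hernquist scale length in kpc (illustrative)

def accel(x, y, z):
    """Hernquist acceleration: Phi = -G M / (r + a)  =>  |a| = G M / (r + a)^2, directed inward."""
    r = math.sqrt(x * x + y * y + z * z)
    amag = -G * M_halo / (r + a_h) ** 2
    return amag * x / r, amag * y / r, amag * z / r

def pericenter(pos, vel, dt=1e-3, t_max=3.0):
    """Leapfrog (kick-drift-kick) integration; time unit is kpc/(km/s) ~ 0.98 Gyr.
    Returns the minimum Galactocentric radius reached over the integration."""
    x, y, z = pos
    vx, vy, vz = vel
    ax, ay, az = accel(x, y, z)
    rmin = math.sqrt(x * x + y * y + z * z)
    for _ in range(int(t_max / dt)):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay; vz += 0.5 * dt * az
        x += dt * vx; y += dt * vy; z += dt * vz
        ax, ay, az = accel(x, y, z)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay; vz += 0.5 * dt * az
        rmin = min(rmin, math.sqrt(x * x + y * y + z * z))
    return rmin

# A purely tangential test orbit started at 130 kpc
rp = pericenter((130.0, 0.0, 0.0), (0.0, 60.0, 0.0))
print(round(rp, 1))   # a pericenter of a few tens of kpc for this illustrative orbit
```

In the full calculation this scalar pericenter would be accumulated over many orbits, each seeded from a draw of the proper-motion error distribution, to build the pericenter distributions discussed in the paper.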
], [ "Introduction", "The recently discovered Antlia 2 dwarf galaxy (Torrealba et al.", "2019) is unique in several ways.", "At a distance of $\\sim $ 130 kpc, and a half-light radius of 2.9 kpc (similar in extent to the Large Magellanic Cloud, but two magnitudes fainter), it is the lowest surface brightness system known.", "Fritz et al.", "(2018) have noted that the fact that there are fewer dwarf galaxies observed near apocenters vs. near pericenters suggests that there are more dwarf galaxies to be discovered.", "Possibly, Antlia 2 falls into this group of dwarfs close to apocenters that are just being discovered.", "Yet another intriguing aspect of Antlia 2 is that with a mean [Fe/H] metallicity of -1.4, its inferred stellar mass from the mass-metallicity relation (Kirby et al.", "2013) would be $\\sim 10^{7} M_{\\odot }$ .", "However, substantially lower values result from its current luminosity assuming standard mass-to-light ratios (Torrealba et al.", "2019), which would suggest that it has undergone significant tidal disruption.", "The planar disturbances manifest in the outer HI disk of the Milky Way (Levine, Blitz & Heiles 2006; henceforth LBH06) have been a long-standing puzzle.", "Chakrabarti & Blitz (2009; henceforth CB09) analyzed the perturbations observed in the outer HI disk of the Milky Way (LBH06).", "They argued that a new dwarf galaxy was needed to explain the observed disturbances and predicted its orbital parameters.", "Namely, they found that the observed outer disk planar disturbances could be explained by a $\\sim $ 1:100 mass ratio perturber on a near co-planar orbit with a close pericenter approach ($R_{\\rm peri} \\sim 5~h^{-1}$ kpc) that is currently at a distance of $\\sim $ 90 $h^{-1}$ kpc, where the small pericenter and co-planar orbit are constrained by the strength of the observed disturbances and the current distance by the timescale for the initial orbital perturbations to manifest themselves as surface density perturbations.", "Here,
we use the observed Gaia proper motions of the Antlia 2 dwarf galaxy to investigate if the Antlia 2 dwarf galaxy can produce the observed disturbances in the outer HI disk of the Milky Way.", "We use test particle calculations (Chang & Chakrabarti 2011; henceforth CC11) and fitting relations (Lipnicky, Chakrabarti & Chang 2018; henceforth LCC18) to survey the parameter space, to determine the approximate response.", "We then carry out a smaller set of targeted SPH calculations with GADGET-2 (Springel 2005).", "We find that the low pericenters $R_{\\rm peri} \\sim 10$ kpc of the orbital distribution can explain the observed disturbances in the outer HI disk.", "The tidal debris of the Sgr dwarf suggests that it has approached relatively close to the Galactic disk (Newberg et al.", "2003), and models (Purcell et al.", "2011; Laporte et al.", "2017; D'Onghia et al.", "2016; Haines et al.", "2019) have suggested that it has excited various features in the Galactic disk.", "We show here, however, that it is not responsible for the large planar disturbances in the outer HI disk.", "Analysis of recent observations (Fritz et al.", "2018), and of recent cosmological simulations (Garrison-Kimmel et al.", "2018; Samuel et al.", "2019) suggests that there may be one (or more) dwarf galaxies now at apocenter that suffered close approaches to the Galaxy.", "If correct, the perturbation that such a dwarf galaxy would exert on the Galactic disk ought to be explored.", "This paper is organized as follows.", "In §2, we review our methodology for test particle and SPH simulations.", "In §3, we first give our results from orbit integrations for the pericenter distributions where we employ the proper motions reported in Torrealba et al.", "(2019).", "We also discuss the results from test particle calculations and the SPH simulations for the Fourier amplitudes and the phase.", "In §3.1, we employ all the Antlia 2 stars and recalculate the proper motions and the corresponding pericenter
distributions, which results in the lower pericenters becoming more probable.", "In §4, we discuss future work and conclude.", "Specifically, we include here a prediction of the proper motions that Antlia2 should have if it is in fact the dwarf galaxy that perturbed the Milky Way, which can be tested in the near future by Gaia DR-3 and DR-4 measurements, which will have significantly lower proper motion errors." ], [ "Methodology", "We first integrate the orbits and sample the errors in the observed Gaia proper-motions as reported in Torrealba et al.", "(2019), to determine an orbital distribution for the Antlia 2 dwarf galaxy.", "In §3.1, we employ all the Antlia 2 stars and re-calculate the proper motions and associated pericenter distributions.", "The orbits of Antlia 2 are integrated backwards in time in a Hernquist (1990) potential that is matched to the Navarro, Frenk & White (1996) (NFW) model in the inner regions, which gives a relation between the Hernquist scale length and the NFW scale radius, for a given concentration, as defined in Springel et al.", "(2005).", "We consider models with a range of circular velocity values at the virial radius, $v_{200}$ , that correspond to a range of virial mass values, $M_{200}$ , given in the literature.", "We provide orbital distributions that span $v_{200}= 160 - 200$ km/s (which corresponds to $M_{200} = 1 - 1.8 \\times 10^{12} M_{\\odot }$ ), which spans the typical range of Milky Way masses found in the literature (Watkins et al.", "2019; Deason et al.", "2019; Posti & Helmi 2019; Fritz et al.", "2018; Piffl et al.", "2014; Boylan-Kolchin et al.", "2013).", "Given these initial conditions at $t = -1$ Gyr, we first use a parallelized implementation of the test particle code (Chang & Chakrabarti 2011) to determine the range of disk response that corresponds to the orbital distribution determined from the Gaia proper motions and associated uncertainties.", "We determine the initial conditions 
at $t = -1 $ Gyr because the current errors in the Gaia proper motions for Antlia 2 would not produce robust orbits for longer time integrations (Lipnicky & Chakrabarti 2017).", "The test particle calculations, which sample the errors in the proper motions, have been carried out for 3000 realizations for the $V_{200}=200$ km/s case.", "We then carry out a targeted set of SPH simulations with GADGET-2, as in earlier work (CB09).", "The number of gas, stellar, and halo particles for our fiducial case are $8 \\times 10^{5}, 8 \\times 10^{5}, 1.2 \\times 10^{6}$ , respectively.", "We have increased the number of particles in each component by a factor of two and find converged results for the metrics we use here.", "The halo of the Milky Way is initialized with a Hernquist (1990) profile (matched to NFW in the inner regions) with an effective concentration of 9.39, a spin parameter $\\lambda = 0.036$ , and a range of circular velocities $V_{200}$ (see Table 1) that thereby correspond to a range of $M_{200}$ values.", "The simulated galaxies also include an exponential disc of stars and gas, with a flat extended H I disc, as found in surveys of spirals (e.g.", "Wong & Blitz 2002).", "The exponential disk size is set by requiring that the disk mass fraction (taken to be 3.7 % of the halo mass) be equal to the disk angular momentum fraction, which results (for these parameters) in a disk scale length of 3.78 kpc.", "The disk mass for the fiducial $v_{200} = 200$ km/s model is $6.8 \\times 10^{10} M_{\\odot }$ , and for the $v_{200} = 180$ km/s model is $5 \\times 10^{10} M_{\\odot }$ , which are comparable to observed values (Bovy & Rix 2013).", "For both cases, we assume 1:100 mass perturbers to represent Antlia 2's progenitor mass.", "The simulated Antlia 2 dwarf galaxy is also similarly initialized, with stars and dark matter, but does not include gas.", "Its concentration is set from relations derived from cosmological simulations that show a correlation between the mass and concentration of 
dark matter halos (Maccio et al.", "2008).", "Antlia 2's progenitor mass is uncertain.", "Its current stellar mass from its measured luminosity is $\\sim 5\\times 10^{5} M_{\\odot }$ (Torrealba et al.", "2019).", "Given its mean [Fe/H] metallicity of -1.39, the Kirby et al.", "(2013) mass-metallicity relation would imply a stellar mass of $\\sim 10^{7} M_{\\odot }$ .", "The difference in the values of the current stellar mass and inferred stellar mass from the mass-metallicity relation may be due to tidal stripping of the satellite.", "Using the $SFR-M_{200}$ relation of Erkal & Read (2018), this would give a progenitor mass of $2 \\times 10^{10} M_{\\odot }$ for an age of 11.2 Gyr, where the age is as given in Torrealba et al.", "(2019).", "Lower stellar masses of $\\sim 10^{6} M_{\\odot }$ would give $M_{200} \\sim 3 \\times 10^{9} M_{\\odot }$ .", "Here, we consider $1:100$ mass-ratio progenitors for the Antlia 2 dwarf galaxy, which are roughly comparable to expectations from using the mass-metallicity relation, along with the $SFR-M_{200}$ relation.", "Comparisons with other dwarf galaxies also support a massive progenitor mass for Antlia 2.", "Tucana is an isolated Local Group dSph.", "It has a stellar mass of $\\sim 3 \\times 10^{6}~M_{\\odot }$ , a lower metallicity than Antlia 2 of [Fe/H] $\\sim $ -1.95, and no evidence of significant stellar mass loss due to tides.", "Both abundance matching and its stellar kinematics favor a pre-infall halo mass of $M_{200} \\sim 10^{10}~M_{\\odot }$ (Gregory et al.", "2019).", "NGC6822 is an isolated dIrr with an estimated halo mass of $M_{200} \\sim 2 \\times 10^{10}~M_{\\odot }$ .", "It has a present-day stellar mass of $7.6 \\times 10^{7} M_{\\odot }$ , but has formed stars steadily for a Hubble time.", "If it stopped forming stars $\\sim $ 11.2 Gyrs ago, its stellar mass (assuming a constant star formation rate) would have been $\\sim 10^{7} M_{\\odot }$ – consistent with what we estimate for Antlia 2 given its high [Fe/H] 
(Read et al.", "2016; Read et al.", "2017).", "(Note that [Fe/H] for NGC 6822 is $\\sim $ -1, which is higher than Antlia 2's, as expected for its larger stellar mass.)", "Thus, Antlia 2 is roughly consistent with a \"failed\" NGC6822 that fell into the Milky Way $\\sim $ 11 Gyrs ago.", "Table 1 gives the parameters of the SPH simulations, including the simulation name, the $V_{200}$ and $M_{200}$ of the primary galaxy, and the pericenter of the Antlia2 dwarf galaxy.", "Here, we adopt an isothermal equation of state, which may be representative of the outskirts of galaxies where the energy injection from supernovae is low due to the low star formation rate in the outskirts (Bigiel et al.", "2010)." ], [ "Results", "Figure REF (a-c) shows Antlia 2's most recent pericenter distribution for $v_{200}$ = 200, 180 and 160 km/s, which vary in $M_{200}$ from $1.86 - 1 \\times 10^{12} M_{\\odot }$ from the backward time-integration of its orbits to $t=-1$ Gyr.", "This range of MW masses is consistent with expectations from the literature, as noted in §2.", "The mean of the pericenter distribution shrinks from 30 kpc for $M_{200} = 10^{12} M_{\\odot }$ to 21 kpc for $M_{200} = 1.86\\times 10^{12} M_{\\odot }$ as the mass of the simulated MW increases, as expected.", "However, these models all have a significant fraction of orbits with low pericenters given the 1 sigma errors in the proper motions reported by Torrealba et al.", "(2019).", "We have also carried out a similar exercise for the MW2014 potential that was employed by Torrealba et al.", "(2019), which is an adaptation from Bovy (2015) (but with a higher mass by a factor of two), and we find a mean pericenter of 38 kpc, with a tail of low pericenters extending to $\\sim $ 10 kpc in that case also.", "Our GADGET-2 and test particle calculations described below will employ the Hernquist-NFW potential.", "Given a projected surface density map, one can compute the individual $m$ -th Fourier amplitudes that describe the 
strength of the perturbing response as: $a_{m}(r,t)= \\frac{1}{2\\pi }\\int \\Sigma (r,\\phi ) \\exp (-i m \\phi )\\, d\\phi ,$ where $\\Sigma (r,\\phi )$ is the projected gas surface density.", "The effective Fourier amplitude, $a_{m,eff}$ , of the disk for an individual mode $m$ is then given by: $a_{m,eff}(t) = \\frac{1}{r_{\\rm out} - r_{\\rm in}}\\int _{r_{\\rm in}}^{r_{\\rm out}} |a_{m}(r,t)|\\, dr ,$ where $r_{\\rm in} = 10\\,{\\rm kpc}$ and $r_{\\rm out} = 25\\,{\\rm kpc}$ are the inner and outer radii that we average over.", "The quantity $a_{t,eff}(t)$ can be calculated by summing the effective response of the modes: $a_{t,eff}(t) = \\sqrt{\\frac{1}{4} \\sum _{m=1}^{4}|a_{m,eff}(t)|^{2}}$ In CC11 and in LCC18, we derived scaling relations for this quantity.", "Eqn 10 in LCC18 describes a fitting relation for $a_{t,eff}$ in terms of the ratio of the satellite mass ($m_{sat}$ ) to primary galaxy mass ($M_{host}$ ) and pericenter distance $R_{p}$ that we can use to roughly estimate the pericenter distance, given an assumed satellite mass to primary galaxy mass ratio and an observed value for $a_{t,eff}$ .", "As discussed below, the observed HI data has wedges excised out of it, and so we have defined a new quantity, $a_{t,13}$ , to mitigate their effects.", "The observed HI data has a value of $a_{t,13} = 0.24$ .", "Using the relation defined in LCC18 as an estimate for $a_{t,13}$ (eqn. (10) of LCC18, applied under the assumption that the individual $a_{m,eff}$ modes scale similarly to $a_{t,eff}$ ), and using $m_{sat}/M_{host} = 1/100$ , we obtain $R_{p} = 10~\\rm kpc$ .", "Thus, our rough expectation from scaling relations (LCC18) is that low pericenters ($\\sim $ 10 kpc) would be needed to match the outer HI disk planar disturbances.", "The scaling relations from LCC18 also indicate that the power in the Fourier modes scales as $(m_{sat}/M_{host})^{1/2}$ .", "Therefore, if the progenitor mass was $\\sim 10^{9} M_{\\odot }$ , 
i.e., a 1:1000 mass ratio perturber, the power in the Fourier modes would be lower by a factor of 2.", "The HI map constructed by LBH06 excludes regions that lie within $\\pm $ 15 degrees of the Sun-Galactic center line because distances are difficult to determine in these regions, as the velocity dispersion is larger than the line of sight velocity.", "The wedges that are excised from the map will affect our calculations of the Fourier amplitudes.", "Since the odd modes are less affected (Chakrabarti & Blitz 2011) by the wedges, we focus here on the $m=1$ and $m=3$ modes, and our definition of $a_{t,13}$ will only include the sum of these modes, i.e.", ": $a_{\\rm t, 13}(t) = \\sqrt{\\frac{1}{2} (|a_{m=1,eff}(t)|^{2} + |a_{m=3,eff}(t)|^{2})}$ where we sum (in quadrature) the $m=1$ and $m=3$ modes.", "We first symmetrize the wedges; these symmetrized wedges produce an artificial amount of power in the even modes.", "We have checked the effects of the angular cuts in our simulated data (test particle and SPH) and find that the power in the odd modes is not significantly affected by the (symmetrized) angular cuts.", "Figure: Effective Fourier amplitudes vs pericenter, for test particle calculations (blue dots) that sample the uncertainty in the observed Gaia proper motions for $V_{200} = 200$ km/s, and SPH simulations of Antlia 2 (red) for specific realizations (see Table 1 for the description), along with the HI data (green) (shown at an arbitrary pericenter).", "The Sgr dwarf case is also over-plotted in magenta, and its contribution is not sufficient to explain the disturbances in the outer HI disk of the Galaxy.", "Figure REF depicts the effective Fourier amplitudes, $a_{t,13}$ , from the test particle calculations (blue points) for our $V_{200} = 200$ km$\\,{\\rm s^{-1}}$ model, which samples the orbital distribution as defined above.", "Red points are the results from our SPH calculations, and the HI data, in green, is shown at an arbitrary pericenter.", 
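The Fourier diagnostics defined above are straightforward to evaluate numerically; the sketch below assumes a surface-density map sampled on a uniform polar grid and normalizes each mode by the axisymmetric $m=0$ term so that the amplitudes are dimensionless (the equations above leave this normalization implicit, so treat it as an assumption):

```python
import numpy as np

def fourier_amplitudes(sigma, m_max=4):
    """Fourier amplitudes a_m(r) of a density map Sigma(r, phi).

    sigma: array of shape (n_r, n_phi); each row is one radial bin.
    The FFT over the azimuthal axis implements
    a_m(r) = (1/2pi) * int Sigma(r, phi) exp(-i m phi) dphi,
    here normalized by the m=0 (axisymmetric) term.
    """
    coeffs = np.fft.fft(sigma, axis=1) / sigma.shape[1]
    a0 = coeffs[:, 0].real
    return coeffs[:, : m_max + 1] / a0[:, None]

def a_m_eff(a_m_of_r, r, r_in=10.0, r_out=25.0):
    """Radial average of |a_m(r)| over [r_in, r_out] kpc (the a_{m,eff} of
    the text); assumes a uniformly spaced radial grid."""
    mask = (r >= r_in) & (r <= r_out)
    return np.mean(np.abs(a_m_of_r[mask]))

def a_t13(sigma, r):
    """Quadrature sum of the m=1 and m=3 effective amplitudes (a_{t,13}),
    the odd-mode statistic used because of the excised wedges."""
    a = fourier_amplitudes(sigma, m_max=3)
    a1 = a_m_eff(a[:, 1], r)
    a3 = a_m_eff(a[:, 3], r)
    return np.sqrt(0.5 * (a1 ** 2 + a3 ** 2))
```

For a toy lopsided disk $\Sigma = 1 + 0.3\cos\phi$, the normalized $|a_1|$ is 0.15 at every radius (half the cosine amplitude, from the FFT convention), so $a_{t,13} = 0.15/\sqrt{2}$; the observed value of 0.24 quoted in the text would sit above this.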
"Here it is clear that only orbits with low pericenters ($R_{\\rm p} \\lesssim 10$ kpc) are able to match the observed level of Fourier power in the outer HI disk of the Milky Way.", "The red dots are our fiducial case ($v_{200} = 200$ km/s) and the red triangles are the $v_{200} = 180$ km/s case.", "As expected (LCC18; CC11), $a_{t,13}$ primarily depends on $m_{sat}/M_{host}$ and the pericenter distance (the disk mass does not have a significant effect).", "The test particle calculations underestimate the disk response relative to the SPH calculations, especially at low pericenters, due to the nature of the collisional gas in the SPH simulations.", "For larger pericenters, the results are quite similar, as would be expected.", "Similar results are found for simulations done with only the stellar disk, which indicates that the self-gravity of the disk also influences the Fourier amplitudes.", "We now investigate whether the Sgr dwarf can excite the observed planar disturbances in the HI disk of the Galaxy.", "We adopt a progenitor mass at $t = -1~\\rm Gyr$ of $10^{10} M_{\\odot }$ , which is consistent with other models (Purcell et al.", "2011; Laporte et al.", "2017), accounting for the mass loss at $t=-1~\\rm Gyr$ relative to the progenitor masses used at earlier times (several Gyr ago).", "As with the Antlia2 dwarf galaxy, we derive the orbit distribution of the Sgr dwarf using the Gaia DR-2 proper motion (Helmi et al.", "2018) combined with its radial velocity (McConnachie 2012) and an assumed heliocentric distance of $26~{\\rm kpc}$ (Monaco et al.", "2004).", "For the Sgr dwarf, the Gaia proper motions have very small errors, and therefore, for a given potential, its pericenter is tightly constrained.", "As shown in Figure REF , the Sgr dwarf (magenta point) does not drive sufficiently large planar disturbances to explain the observed HI data.", "Sgr is on a polar orbit rather than Antlia 2's near-co-planar orbit, which we previously showed (CC11) is less effective in 
driving planar disturbances.", "Sgr's pericenter of $\\sim $ 13 kpc is larger than the lowest pericenters of Antlia2's orbital distribution, which also leads to a reduced effect on the outer gas disk relative to what Antlia2 can produce, given the tail of low pericenters.", "Finally, the time of pericenter also enters into this – the minimum of $R_{p} = 13$ kpc occurs in Sgr's orbit at t=-0.05 Gyr, thus the effect we see now should be due to Sgr's previous disk crossing, which occurred at t = -0.4 Gyr, when the Sgr dwarf crossed the disk at 15 kpc.", "This too is higher than the pericenters of $\\sim $ 10 kpc for a 1:100 mass ratio perturber needed to match the power in the outer HI disk.", "Figure: (a) Phase of the m=1 mode vs radius from the FRp8 simulation (black), compared to the HI data (red), (b) $d\\phi /dr$ of the m=1 mode vs r from test particle calculations; the red line shows the gradient of the $m=1$ phase from the HI data.", "The projected gas surface density can be decomposed into the Fourier amplitudes $a_{m}(r)$ and the phase of the modes $\\phi _{m}(r)$ .", "The radial distribution of the phase of the modes expresses how tightly or loosely a spiral pattern is wrapped, and is given by: $\\phi (r,m) = \\arctan \\frac{-\\mathrm{Im}[\\mathrm{FFT}(\\Sigma (r,\\phi ))]}{\\mathrm{Re}[\\mathrm{FFT}(\\Sigma (r,\\phi ))]}$ Chakrabarti & Blitz (2011) (henceforth CB11) used the phase of the modes to estimate the azimuthal location of the perturber.", "We focus here on the $m=1$ phase, because the wedges in the raw HI map only minimally affect the odd modes, as discussed above.", "We ignored the $m=3$ mode as it has a three-fold degeneracy.", "Figure REF (a) shows $\\phi _{1}(r)$ from the FRp8 simulation that is able to match the observed disturbances in the outer HI disk of the MW (black), over-plotted with the phase from the raw HI data (red).", "Interestingly, both the simulation and the data display a relatively flat phase variation in the outskirts.", "A flat phase variation implies that, at least within 
a certain radial range, the disturbances are nearly radial, which suggests that the observed HI disturbance cannot arise from a nonaxisymmetric perturbation at smaller radii, which would produce a tight spiral pattern, e.g.", "$d\\phi _1/dr < 0$ .", "Figure REF (b) shows the average value of $d\\phi _{1}/dr$ from 20 - 25 kpc from the test particle calculations (black) for a range of pericenters, over-plotted with the same metric from the raw HI data (red).", "The low pericenter models display a relatively flat phase variation, while the larger pericenters have a negative slope.", "Thus, the phase gives an independent constraint on the pericenter and more evidence favoring a subhalo excitation of the HI disk (CB09).", "Figure: Comparison of our orbit integration calculation over the past 1 Gyr (shown in blue dots), with our GADGET calculation with a live halo (orange triangles), and the corresponding calculation for a static halo (green dots).", "We have restricted our calculations here to the past 1 Gyr due to the currently large error bars in the Gaia proper motion data; integrating back to even earlier times would result in significant uncertainties in the orbit, as shown in earlier work (Lipnicky & Chakrabarti 2017; Lux et al.", "2010).", "Over this timescale, we have compared in Figure REF the results of our orbit integration calculation (blue dots) to a GADGET calculation for Antlia2's orbit in a live halo (orange triangles), and to Antlia2's orbit in a static halo (green dots).", "Although there are differences between these three calculations, which arise from tidal stripping of the Antlia 2 satellite over this timescale, the time evolution of the dark matter halo, as well as our implementation of the Chandrasekhar dynamical friction formula (Chang & Chakrabarti 2011), in all three cases the difference is less than 20 %, with the difference for the static halo case being generally less than 10 %.", "Similar results are found for the 
velocity comparison for these three different kinds of calculations.", "Thus, our analysis of Antlia 2's orbit here is approximate, and is not accurate to better than 20 %." ], [ "Proper motions from all of the Antlia 2 stars", "We have recalculated the proper motions of Antlia 2 using all of the stars identified as belonging to Antlia 2, using a more conservative background model.", "Torrealba et al.", "(2019) determined the proper motion using only stars with radial velocities obtained from their follow-up observations.", "Here, we independently measure the mean proper motion of Antlia 2 by constructing a probabilistic model of the cluster and background stellar kinematics using astrometric and photometric data from Gaia.", "We start by selecting all Gaia sources within $2.5^\\circ $ of the center of Antlia 2 with parallax $\\varpi < 0.25~{\\rm mas}$ , then select candidate Antlia 2 member stars using photometric selections based on Figure 1 of Torrealba et al.", "(2019).", "We then construct a probabilistic mixture model in proper motions for the foreground (i.e.", "Antlia 2) and background (i.e.", "Galactic field) using Gaussians in order to infer the mean proper motion of Antlia 2, following the approach described in Section 3.1 of Price-Whelan et al.", "(2018).", "Briefly, we first mask the sky region immediately around Antlia 2 (removing stars within three times the effective radius defined in Torrealba et al.", "(2019)) to define a “background” region, and construct a noise-deconvolved representation of the Galactic field proper motion distribution in this sky region using a five-component Gaussian mixture model.", "We then represent Antlia 2 as an additional Gaussian component with unknown mean and variance.", "We generate posterior samplings in the mean and internal proper motion dispersion of Antlia 2 and a mixture weight parameter ($f$ in Price-Whelan et al.", "(2018)) using the ensemble Markov Chain Monte Carlo (MCMC) sampler emcee (Foreman-Mackey et 
al.", "2013).", "From this inference, which properly marginalizes over the unknown Galactic background properties and utilizes the full Gaia covariance matrix information for proper motions of stars in this field, we derive a proper motion for Antlia 2 of $(\\mu _{\\alpha } \\cos \\delta , \\mu _{\\delta }) = (-0.068, 0.032) \\pm (0.023, -0.031)~\\rm mas~\\rm yr^{-1}$ .", "The revision in the mean value of the proper motions leads to a revision in the pericenter distributions.", "Figure REF (a-c) shows the pericenter distributions for $v_{200} = 200$ km/s, 180 km/s and 160 km/s when we use all of the stars for Antlia 2.", "These figures are analogous to Figure REF , where we used the proper motions cited in Torrealba et al.", "(2019) to calculate the pericenter distributions.", "One difference that is immediately clear is that upon using the proper motions derived from all of the Antlia 2 stars, the relative probability of the low pericenters increases significantly for all the mass models, and is particularly high for the $v_{200} = 200$ km/s case (where more than 70 % of cases have low pericenters that are less than 10 kpc).", "Thus, this calculation shows that the low pericenters may very well be viable.", "However, as with the proper motions cited in Torrealba et al.", "(2019), the errors in our proper motion inference are also large.", "Therefore, this calculation should be redone upon the release of Gaia DR-3 data, which will have considerably lower errors on the proper motions.", "Figure: (a) Pericenter distributions using all of the Antlia 2 stars to re-derive the proper motions (see text) for $v_{200} = 200$ km/s (our fiducial model), (b) for $v_{200} = 180$ km/s, (c) for $v_{200} = 160$ km/s" ], [ "Discussion & Conclusion", "In summary, the orbital distributions for Antlia 2 have a tail of low pericenters of $\\sim $ 10 kpc for a range of Milky Way masses commonly cited in the literature (from $\\sim 10^{12} - 2 \\times 10^{12} M_{\\odot })$ 
, when we employ the proper motions cited in Torrealba et al.", "(2019).", "The probability of the low pericenters increases significantly when we employ our revised calculation of the proper motions using Gaia DR-2 data, where we have used all of the Antlia 2 stars.", "A close interaction of this kind with a 1:100 mass ratio perturber is sufficient to explain the planar disturbances observed in the outer HI disk of the Milky Way.", "Moreover, the phase of the disturbances has a flat radial variation for the HI data, as do the Antlia 2 simulations with low pericenters, independently confirming that low pericenters are needed to match the disturbances manifest in the outer gas disk of the Galaxy.", "We show that the tidal strength of the Sgr dwarf is insufficient to explain the disturbances in the outer gas of the Galaxy.", "Of the other tidal players of the Milky Way, the Large and Small Magellanic Clouds are too distant and have not approached closer in the recent past (Besla et al.", "2007; Besla et al.", "2012) to account for this level of Fourier power in the outer HI disk.", "Thus, Antlia 2 may be the driver of the observed large perturbations on the outskirts of our Galaxy.", "If Antlia 2 is responsible for the outer HI disk planar disturbances, its proper motions are constrained to those that give orbits with low pericenters.", "Figure REF shows the proper motions that correspond to the low pericenters ($R_{p} \\lesssim 10$ kpc) of the orbital distribution for the $V_{200} = 220, 200, 180, 160$ km/s models, over-plotted with the mean and 1-sigma errors of the Gaia proper motions cited by Torrealba et al.", "(2019) (shown in orange), along with our calculation of the Gaia proper motions (shown in green).", "The low pericenters correspond to proper motions with a wide range of $\\mu _{\\alpha } \\cos \\delta $ values, but the $\\mu _\\delta $ values are constrained to be close to (and below) the Gaia proper motions cited by Torrealba et 
al.", "(2019).", "The $\\mu _{\\delta }$ values are nevertheless higher than the kinematic proper motions cited by Torrealba et al.", "(2019).", "Given that the probability of the low pericenters increases significantly when we use our proper motion calculations, it is not surprising that our model predictions overlap with these revised proper motion values.", "Past HST measurements of proper motions for dwarf galaxies, such as the LMC, have changed significantly (and outside the scope of the 1-sigma errors) from the 2-epoch to 3-epoch values (Kallivayallil et al.", "2013).", "The mean proper motion of the stars in the Antlia 2 dwarf galaxy is affected by correlated proper motion errors, and may well be revised upon future data releases.", "Our prediction for Antlia 2's proper motions (for the potentials considered here) can soon be tested by upcoming improved data from Gaia DR-3 and Gaia DR-4.", "Possibly machine learning techniques may be able to improve on the proper motion measurement errors even before the release of Gaia DR-3 and DR-4 data.", "Antlia 2 presents a unique laboratory for the study of a dark-matter dominated dwarf galaxy, if it is indeed the perturber that drove the ripples in the outer gas disk of our Galaxy.", "Since its mass was predicted from a dynamical analysis, its effect on the Galaxy sets bounds on its dark matter content more strictly than forward-modeling approaches.", "With its half-light radius of 3 kpc, one may also be able to obtain more stringent constraints on its dark matter density profile than for other dwarf galaxies, and thereby effectively discriminate between self-interacting dark matter and CDM models (Fry et al.", "2015).", "One mechanism that may explain the diversity of dark matter density profiles in Milky Way dwarf galaxies is \"dark matter heating\" models (Read et al.", "2019).", "In a forthcoming companion paper, we investigate the structure of Antlia 2, contrasting the effects of self-interacting dark matter 
models and CDM models on the evolution of its density profile.", "Kahlhoefer et al.", "(2019) have recently noted that self-interacting dark matter models with large cross sections may help to explain the diversity of density profiles in Milky Way dwarf galaxies, from very compact systems like Draco to very diffuse systems like Crater II, especially when their orbital evolution is considered, as the time evolution of the density profile depends sensitively on the orbit of the dwarf galaxy.", "Figure: The proper motions for Antlia2 (for $v_{200} = $ 160, 180, 200 and 220 km/s) that correspond to the low pericenters ($R_{p} \\lesssim 10$ kpc) of the orbital distributions.", "The Gaia proper motions and 1-sigma error given in Torrealba et al.", "(2019) are shown in orange, and our recalculation of the proper motions is shown in green.", "Our independent calculation of the proper motion gives $(\\mu _{\\alpha } \\cos \\delta , \\mu _{\\delta }) = (-0.068, 0.032) \\pm (0.023, -0.031)~\\rm mas~\\rm yr^{-1}$ , which differs somewhat from that cited in Torrealba et al.", "(2019).", "If Antlia2 is indeed the dwarf galaxy that perturbed the outer HI disk, its $\\mu _{\\delta }$ values are constrained, while a large range of values for $\\mu _{\\alpha }\\cos \\delta $ is possible (depending on the potential assumed).", "Much of this work was conducted at the KITP Santa Barbara long-term program \"Dynamical Models for Stars and Gas in Galaxies in the Gaia Era\".", "SC gratefully acknowledges the hospitality of the KITP during her visit.", "The simulations have been performed on Xsede, NERSC, and on Google Cloud.", "SC gratefully acknowledges support from NASA ATP NNX17AK90G, NSF AAG grant 1517488, and from Research Corporation for Scientific Advancement's Time Domain Astrophysics Scialog.", "PC is supported by NASA grant NNH17ZDA001N-ATP, NSF CAREER grant AST-1255469, and the Simons Foundation.", "This research was supported 
in part by Grant No.", "NSF PHY-1748958.", "We also acknowledge the Flatiron Institute for providing HPC resources that have contributed to the results reported here." ] ]
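The backward orbit integrations described in §2 of the paper above can be sketched as follows; this is a minimal leapfrog integrator in a Hernquist (1990) potential, where the halo mass and scale length are illustrative placeholders (not the paper's fitted Hernquist-NFW match), and dynamical friction and error sampling over the Gaia proper motions are omitted:

```python
import numpy as np

G = 4.498e-12  # Newton's constant in kpc^3 Msun^-1 Myr^-2

def hernquist_acc(pos, m_halo=1.0e12, a_scale=30.0):
    """Acceleration (kpc/Myr^2) in a Hernquist (1990) potential,
    Phi(r) = -G M / (r + a).  m_halo (Msun) and a_scale (kpc) are
    illustrative values, not the paper's calibrated parameters."""
    r = np.linalg.norm(pos)
    return -G * m_halo * pos / (r * (r + a_scale) ** 2)

def integrate_orbit(pos0, vel0, t_total=-1000.0, dt=-0.5, **kw):
    """Kick-drift-kick leapfrog; a negative dt integrates backwards in
    time (here 1 Gyr, matching the paper's integration window).
    pos in kpc, vel in kpc/Myr, times in Myr.  Returns the trajectory."""
    n = int(round(t_total / dt))
    pos, vel = np.array(pos0, float), np.array(vel0, float)
    traj = [pos.copy()]
    for _ in range(n):
        vel = vel + 0.5 * dt * hernquist_acc(pos, **kw)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * hernquist_acc(pos, **kw)
        traj.append(pos.copy())
    return np.array(traj)
```

The pericenter of a sampled orbit is then just the minimum of `np.linalg.norm(traj, axis=1)`; repeating this over draws from the proper-motion error distribution is what builds up pericenter distributions of the kind shown in the paper's Figure 1.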
1906.04203
[ [ "Spines for amoebas of rational curves" ], [ "Abstract To every rational complex curve $C \\subset (\\mathbf{C}^\\times)^n$ we associate a rational tropical curve $\\Gamma \\subset \\mathbf{R}^n$ so that the amoeba $\\mathcal{A}(C) \\subset \\mathbf{R}^n$ of $C$ is within a bounded distance from $\\Gamma$.", "In accordance with the terminology introduced by Passare and Rullg{\\aa}rd, we call $\\Gamma$ the spine of $\\mathcal{A}(C)$.", "We use spines to describe tropical limits of sequences of rational complex curves." ] ]
1906.04500
[ [ "Maximum Mean Discrepancy Gradient Flow" ], [ "Abstract We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties.", "The MMD is an integral probability metric defined for a reproducing kernel Hilbert space (RKHS), and serves as a metric on probability measures for a sufficiently rich RKHS.", "We obtain conditions for convergence of the gradient flow towards a global optimum, which can be related to particle transport when optimizing neural networks.", "We also propose a way to regularize this MMD flow, based on an injection of noise in the gradient.", "This algorithmic fix comes with theoretical and empirical evidence.", "The practical implementation of the flow is straightforward, since both the MMD and its gradient have simple closed-form expressions, which can be easily estimated with samples." ], [ "Introduction", "We address the problem of defining a gradient flow on the space of probability distributions endowed with the Wasserstein metric, which transports probability mass from a starting distribution $\\nu $ to a target distribution $\\mu $ .", "Our flow is defined on the maximum mean discrepancy (MMD) , an integral probability metric which uses the unit ball in a characteristic RKHS as its witness function class.", "Specifically, we choose the function in the witness class that has the largest difference in expectation under $\\nu $ and $\\mu $ : this difference constitutes the MMD.", "The idea of descending a gradient flow over the space of distributions can be traced back to the seminal work of , who revealed that the Fokker-Planck equation is a gradient flow of the Kullback-Leibler divergence.", "Its time-discretization leads to the celebrated Langevin Monte Carlo algorithm, which comes with strong convergence guarantees (see , ), but requires the knowledge of an analytical form of the target $\\mu $ .", "A more recent gradient flow approach, Stein Variational Gradient Descent (SVGD) , also 
leverages this analytical $\\mu $ .", "The study of particle flows defined on the MMD relates to two important topics in modern machine learning.", "The first is in training Implicit Generative Models, notably generative adversarial networks .", "Integral probability metrics have been used extensively as critic functions in this setting: these include the Wasserstein distance , , and maximum mean discrepancy , , , , , .", "In , a connection between IGMs and particle transport is proposed, where it is shown that gradient flow on the witness function of an integral probability metric takes a similar form to the generator update in a GAN.", "The critic IPM in this case is the Kernel Sobolev Discrepancy (KSD), which has an additional gradient norm constraint on the witness function compared with the MMD.", "It is intended as an approximation to the negative Sobolev distance from the optimal transport literature , , .", "There remain certain differences between gradient flow and GAN training, however.", "First, and most obviously, gradient flow can be approximated by representing $\\nu $ as a set of particles, whereas in a GAN $\\nu $ is the output of a generator network.", "The requirement that this generator network be a smooth function of its parameters causes a departure from pure particle flow.", "Second, in modern implementations , , , the kernel used in computing the critic witness function for an MMD GAN critic is parametrized by a deep network, and an alternating optimization between the critic parameters and the generator parameters is performed.", "Despite these differences, we anticipate that the theoretical study of MMD flow convergence will provide helpful insights into conditions for GAN convergence, and ultimately, improvements to GAN training algorithms.", "Regarding the second topic, we note that the properties of gradient descent for large neural networks have been modeled using the convergence towards a global optimum of particle transport in the 
population limit, when the number of particles goes to infinity , , , .", "In particular, show that gradient descent on the parameters of a neural network can also be seen as a particle transport problem, which has as its population limit a gradient flow of a functional defined for probability distributions over the parameters of the network.", "This functional is in general non-convex, which makes the convergence analysis challenging.", "The particular structure of the MMD allows us to relate its gradient flow to neural network optimization in a well-specified regression setting similar to , (we make this connection explicit in subsec:trainingneuralnetworks).", "Our main contribution in this work is to establish conditions for convergence of MMD gradient flow to its global optimum.", "We give detailed descriptions of MMD flow for both its continuous-time and discrete instantiations in sec:gradientflow.", "In particular, the MMD flow may employ a sample approximation for the target $\\mu $ : unlike e.g.", "Langevin Monte Carlo or SVGD, it does not require $\\mu $ in analytical form.", "Global convergence is especially challenging to prove: while for functionals that are displacement convex, the gradient flow can be shown to converge towards a global optimum , the case of non-convex functionals, like the MMD, requires different tools.", "A modified gradient flow is proposed in that uses particle birth and death to reach global optimality.", "Global optimality may also be achieved simply by teleporting particles from $\\nu $ to $\\mu $ , as occurs for the Sobolev Discrepancy flow absent a kernel regulariser .", "Note, however, that the regularised Kernel Sobolev Discrepancy flow does not rely on teleportation.", "Our approach takes inspiration in particular from , where it is shown that although the 1-Wasserstein distance is non-convex, it can be optimized up to some barrier that depends on the diameter of the domain of the target distribution.", "Similarly to , we 
provide in sec:convergencemmdflow a barrier on the gradient flow of the MMD, although the tightness of this barrier in terms of the target diameter remains to be established.", "We obtain a further condition on the evolution of the flow to ensure global optimality, and give rates of convergence in that case; however, the condition is a strong one: it implies that the negative Sobolev distance between the target and the current particles remains bounded at all times.", "We thus propose a way to regularize the MMD flow, based on a noise injection (Section ) in the gradient, with more tractable theoretical conditions for convergence.", "Encouragingly, the noise injection is shown in practice to ensure convergence in a simple illustrative case where the original MMD flow fails.", "Finally, while our emphasis has been on establishing conditions for convergence, we note that MMD gradient flow has a simple $O(MN+N^2)$ implementation for $N$ $\\nu $ -samples and $M$ $\\mu $ -samples, and requires only evaluating the gradient of the kernel $k$ on the given samples." ], [ "Construction of the gradient flow", "In this section we introduce the gradient flow of the Maximum Mean Discrepancy (MMD) and highlight some of its properties.", "We start by briefly reviewing the MMD introduced in .", "We define ${{\\mathcal {X}}}\\subset {{\\mathbb {R}}}^d$ as the closure of a convex open set, and $\\mathcal {P}_2({{\\mathcal {X}}})$ as the set of probability distributions on ${{\\mathcal {X}}}$ with finite second moment, equipped with the 2-Wasserstein metric denoted $W_2$ .", "For any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $L_2(\\nu )$ is the set of square integrable functions w.r.t.", "$\\nu $ .", "The reader may find the relevant mathematical background in sec:appendixmathbackground." 
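The simple closed-form sample estimate of the MMD mentioned above takes only a few lines of numpy. The Gaussian kernel and bandwidth below are illustrative choices of ours, not mandated by the paper:

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """V-statistic estimate of MMD^2 between the empirical measures of
    X (N, d) and Y (M, d), at O(N^2 + M^2 + MN) cost.

    A Gaussian kernel of bandwidth `sigma` is assumed here for
    concreteness; the paper only requires k to satisfy its assumptions.
    """
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    # ||mean embedding of nu - mean embedding of mu||_H^2, expanded.
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

The expanded form is always non-negative, since it equals the squared RKHS norm of the difference of the two empirical mean embeddings.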
], [ "Maximum Mean Discrepancy.", "Given a characteristic kernel $k : {{\\mathcal {X}}}\\times {{\\mathcal {X}}}\\rightarrow {{\\mathbb {R}}}$ , we denote by ${{\\mathcal {H}}}$ its corresponding RKHS (see ).", "The space ${{\\mathcal {H}}}$ is a Hilbert space with inner product $\\langle \\cdot ,\\cdot \\rangle _{{{\\mathcal {H}}}}$ and norm $\\Vert \\cdot \\Vert _{{{\\mathcal {H}}}}$ .", "We will rely on specific assumptions on the kernel which are given in sec:assumptionskernel.", "In particular, assump:lipschitzgradientk states that the gradient of the kernel, $\\nabla k$ , is Lipschitz with constant $L$ .", "For such kernels, it is possible to define the Maximum Mean Discrepancy as a distance on $\\mathcal {P}_2({{\\mathcal {X}}})$ .", "The MMD can be written as the RKHS norm of the unnormalised witness function $f_{\\mu ,\\nu }$ between $\\mu $ and $\\nu $ , which is the difference between the mean embeddings of $\\nu $ and $\\mu $ , $MMD(\\mu ,\\nu ) = \\Vert f_{\\mu ,\\nu } \\Vert _{{{\\mathcal {H}}}}, \\qquad f_{\\mu ,\\nu }(z) = \\int k(x,z)\\mathop {}\\!\\mathrm {d}\\nu (x) - \\int k(x,z)\\mathop {}\\!\\mathrm {d}\\mu (x) \\quad \\forall z\\in {{\\mathcal {X}}}$ Throughout the paper, $\\mu $ will be fixed and $\\nu $ can vary, hence we will only consider the dependence on $\\nu $ and write ${{\\mathcal {F}}}(\\nu )= \\frac{1}{2}MMD^2(\\mu ,\\nu )$ .", "A direct computation shows that for any finite measure $\\chi $ such that $\\nu + \\epsilon \\chi \\in {{\\mathcal {P}}}_2({{\\mathcal {X}}})$ , we have $\\lim _{\\epsilon \\rightarrow 0} \\epsilon ^{-1}({{\\mathcal {F}}}(\\nu +\\epsilon \\chi ) - {{\\mathcal {F}}}(\\nu )) = \\int f_{\\mu ,\\nu }(x)\\mathop {}\\!\\mathrm {d}\\chi (x).$ This means that $f_{\\mu ,\\nu }$ is the differential of ${{\\mathcal {F}}}(\\nu )$ .", "Interestingly, ${{\\mathcal {F}}}(\\nu )$ admits a free-energy expression: ${{\\mathcal {F}}}(\\nu ) = \\int V(x) \\mathop {}\\!\\mathrm {d}\\nu (x) +\\frac{1}{2} \\int W(x,y)\\mathop {}\\!\\mathrm {d}\\nu (x)\\mathop 
{}\\!\\mathrm {d}\\nu (y) + C.$ where $V$ is a confinement potential, $W$ an interaction potential and $C$ a constant defined by: $V(x)=-\\int k(x,x^{\\prime })\\mathop {}\\!\\mathrm {d}\\mu (x^{\\prime }), \\quad W(x,x^{\\prime })=k(x,x^{\\prime }), \\quad C = \\frac{1}{2} \\int k(x,x^{\\prime })\\mathop {}\\!\\mathrm {d}\\mu (x) \\mathop {}\\!\\mathrm {d}\\mu (x^{\\prime })$ Formulation eq:mmdasfreeenergy and the simple expression of the differential in prop:differentialmmd will be key to constructing a gradient flow of ${{\\mathcal {F}}}(\\nu )$ that transports particles.", "In eq:potentials, $V$ reflects the potential generated by $\\mu $ and acting on each particle, while $W$ reflects the potential arising from the interactions between those particles." ], [ "Gradient flow of the MMD.", "We now consider the problem of transporting mass from an initial distribution $\\nu _0$ to a target distribution $\\mu $ , by finding a continuous path $\\nu _t$ starting from $\\nu _0$ that converges to $\\mu $ while decreasing ${{\\mathcal {F}}}(\\nu _t)$ .", "Such a path should be physically plausible, in that teleportation phenomena are not allowed.", "For instance, the path $\\nu _t = (1-e^{-t})\\mu + e^{-t}\\nu _0$ would constantly teleport mass between $\\mu $ and $\\nu _0$ although it decreases ${{\\mathcal {F}}}$ since ${{\\mathcal {F}}}(\\nu _t)=e^{-2t}{{\\mathcal {F}}}(\\nu _0)$ .", "The physicality of the path is understood in terms of classical statistical physics: given an initial configuration $\\nu _0$ of $N$ particles, these can move towards a new configuration $\\mu $ through successive small transformations, without jumping from one location to another.", "Optimal transport theory provides a way to construct such a continuous path by means of the continuity equation.", "Given a vector field $V_t$ on ${{\\mathcal {X}}}$ and an initial condition $\\nu _0$ , the continuity equation is a partial differential equation which defines a path $\\nu _t$ evolving under the 
action of the vector field $V_t$ , and reads $\\partial _t \\nu _t = -div(\\nu _t V_t)$ for all $t \\ge 0$ .", "The reader can find more detailed discussions in subsec:wassersteinflow or .", "Following , a natural choice is to take $V_t$ to be the negative gradient of the differential of ${{\\mathcal {F}}}(\\nu _t)$ at $\\nu _t$ , since it corresponds to a gradient flow of ${{\\mathcal {F}}}$ associated with the $W_2$ metric (see subsec:gradientflowsfunctionals).", "By prop:differentialmmd, we know that the differential of ${{\\mathcal {F}}}(\\nu _t)$ at $\\nu _t$ is given by $f_{\\mu ,\\nu _t}$ , hence $V_t(x) = -\\nabla f_{\\mu ,\\nu _t}(x)$ .", "Also, $V_t=-\\nabla V-\\nabla W \\star \\nu _t$ (see subsec:gradientflowsfunctionals) where $\\star $ denotes the classical convolution.", "The gradient flow of ${{\\mathcal {F}}}$ is then defined by the solution $(\\nu _t)_{t\\ge 0}$ of $\\partial _t \\nu _t = div(\\nu _t \\nabla f_{\\mu ,\\nu _t}).$ Equation eq:continuitymmd is non-linear in that the vector field itself depends on $\\nu _t$ .", "This type of equation is associated in the probability theory literature with the so-called McKean-Vlasov process , , $d X_t = -\\nabla f_{\\mu ,\\nu _t}(X_t)dt \\qquad X_0\\sim \\nu _0.$ In fact, eq:mcKeanVlasovprocess defines a process $(X_t)_{t\\ge 0}$ whose distribution $(\\nu _t)_{t\\ge 0}$ satisfies eq:continuitymmd, as shown in prop:existenceuniqueness.", "$(X_t)_{t\\ge 0}$ can be interpreted as the trajectory of a single particle, starting from an initial random position $X_0$ drawn from $\\nu _0$ .", "The trajectory is driven by the velocity field $-\\nabla f_{\\mu ,\\nu _t}$ , and is affected by other particles.", "These interactions are captured by the velocity field through the dependence on the current distribution $\\nu _t$ of all particles.", "Existence and uniqueness of a solution to eq:continuitymmd,eq:mcKeanVlasovprocess are guaranteed in the next proposition, whose proof is given in proof:prop:existenceuniqueness.", 
"Proposition 1 Let $\\nu _0\\in \\mathcal {P}_2({{\\mathcal {X}}})$ .", "Then, under assump:lipschitzgradientk, there exists a unique process $(X_t)_{t\\ge 0}$ satisfying the McKean-Vlasov equation in eq:mcKeanVlasovprocess such that $X_0 \\sim \\nu _0$ .", "Moreover, the distribution $\\nu _t$ of $X_t$ is the unique solution of eq:continuitymmd starting from $\\nu _0$ , and defines a gradient flow of ${{\\mathcal {F}}}$ .", "Besides existence and uniqueness of the gradient flow of ${{\\mathcal {F}}}$ , one expects ${{\\mathcal {F}}}$ to decrease along the path $\\nu _t$ and ideally to converge towards 0.", "The first property, stated in the next proposition, is rather easy to obtain and is the object of prop:decaymmd, similar to the result for KSD flow in .", "Proposition 2 Under assump:lipschitzgradientk, ${{\\mathcal {F}}}(\\nu _t)$ is decreasing in time and satisfies: $\\frac{d{{\\mathcal {F}}}(\\nu _t)}{dt}= - \\int \\Vert \\nabla f_{\\mu ,\\nu _t}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _t(x).$ This property results from eq:continuitymmd and the energy identity in and is proved in proof:prop:decaymmd.", "From eq:timeevolutionmmd, ${{\\mathcal {F}}}$ can be seen as a Lyapunov functional for the dynamics defined by eq:continuitymmd, since it is decreasing in time.", "Hence, the continuous-time gradient flow introduced in eq:continuitymmd allows one to formally consider the notion of gradient descent on $\\mathcal {P}_2({{\\mathcal {X}}})$ with ${{\\mathcal {F}}}$ as a cost function.", "A time-discretized version of the flow naturally follows, and is provided in the next section." 
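The velocity field $-\nabla f_{\mu ,\nu _t}$ driving the flow has an equally simple sample-based expression. A minimal numpy sketch, again assuming a Gaussian kernel (an illustrative choice, not the paper's only option):

```python
import numpy as np

def grad_witness(Z, X, Y, sigma=1.0):
    """Evaluate grad f_{mu,nu} at the rows of Z (K, d), where nu and mu
    are the empirical measures of X (N, d) and Y (M, d).

    Uses grad_z k(x, z) = -(z - x)/sigma^2 * k(x, z), valid for the
    Gaussian kernel assumed here.
    """
    def mean_grad_k(A):
        diff = Z[:, None, :] - A[None, :, :]                 # (K, |A|, d)
        w = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))  # kernel weights
        return -(w[..., None] * diff).mean(axis=1) / sigma ** 2
    # f = int k d(nu) - int k d(mu), hence grad f = nu-term minus mu-term.
    return mean_grad_k(X) - mean_grad_k(Y)
```

The flow moves each particle along `-grad_witness(...)`, i.e. down the gradient of the witness function.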
], [ "Euler scheme", "We consider here a forward-Euler scheme of eq:continuitymmd.", "For any measurable map $T: {{\\mathcal {X}}}\\rightarrow {{\\mathcal {X}}}$ and any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , we denote the pushforward measure by $T_{\\#}\\nu $ (see subsec:wassersteinflow).", "Starting from $\\nu _0\\in \\mathcal {P}_2({{\\mathcal {X}}})$ and using a step-size $\\gamma >0$ , a sequence $\\nu _n\\in \\mathcal {P}_2({{\\mathcal {X}}})$ is given by iteratively applying $\\nu _{n+1} = (I - \\gamma \\nabla f_{\\mu ,\\nu _n})_{\\#}\\nu _n.$ For all $n\\ge 0$ , equation eq:eulerscheme is the distribution of the process defined by $X_{n+1} = X_n - \\gamma \\nabla f_{\\mu ,\\nu _n}(X_n) \\qquad X_0\\sim \\nu _0.$ The asymptotic behavior of eq:eulerscheme as $n\\rightarrow \\infty $ will be the object of sec:convergencemmdflow.", "For now, we provide a guarantee that the sequence $(\\nu _n)_{n\\in \\mathbb {N}}$ approaches $(\\nu _t)_{t\\ge 0}$ as the step-size $\\gamma \\rightarrow 0$ .", "Proposition 3 Let $n\\ge 0$ .", "Consider $\\nu _n$ defined in eq:eulerscheme, and the interpolation path $\\rho _t^{\\gamma }$ defined as: $\\rho _t^{\\gamma } = (I-(t- n\\gamma ) \\nabla f_{\\mu ,\\nu _n})_{\\#}\\nu _n$ , $\\forall t\\in [n\\gamma ,(n+1)\\gamma )$ .", "Then, under assump:lipschitzgradientk, $\\forall \\;T>0$ , $W_2(\\rho _t^{\\gamma },\\nu _t)\\le \\gamma C(T) \\quad \\forall t\\in [0,T]$ where $C(T)$ is a constant that depends only on $T$ .", "A proof of prop:convergenceeulerscheme is provided in proof:prop:convergenceeulerscheme and relies on standard techniques to control the discretization error of a forward-Euler scheme.", "prop:convergenceeulerscheme means that $\\nu _n$ can be linearly interpolated giving rise to a path $\\rho _t^{\\gamma }$ which gets arbitrarily close to $\\nu _t$ on bounded intervals.", "Note that as $T \\rightarrow \\infty $ the bound $C(T)$ is expected to blow up.", "However, this result is enough to show that 
eq:eulerscheme is indeed a discrete-time flow of ${{\\mathcal {F}}}$ .", "In fact, provided that $\\gamma $ is small enough, ${{\\mathcal {F}}}(\\nu _n)$ is a decreasing sequence, as shown in prop:decreasingfunctional.", "Proposition 4 Under assump:lipschitzgradientk, and for $\\gamma \\le 2/(3L)$ , the sequence ${{\\mathcal {F}}}(\\nu _n)$ is decreasing, and ${{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal {F}}}(\\nu _n)\\le -\\gamma (1-\\frac{3\\gamma }{2}L )\\int \\Vert \\nabla f_{\\mu , \\nu _n}(x)\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _n(x), \\quad \\forall n\\ge 0.$ prop:decreasingfunctional, whose proof is given in proof:prop:decreasingfunctional, is a discrete analog of prop:decaymmd.", "In fact, eq:eulerscheme is intractable in general as it requires the knowledge of $\\nabla f_{\\mu ,\\nu _n}$ (and thus of $\\nu _n$ ) exactly at each iteration $n$ .", "Nevertheless, we present in sec:samplebased a practical algorithm using a finite number of samples which is provably convergent towards eq:eulerscheme as the sample-size increases.", "We thus begin by studying the convergence properties of the time-discretized MMD flow eq:eulerscheme in the next section." 
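The decrease of ${{\mathcal {F}}}(\nu _n)$ guaranteed by prop:decreasingfunctional is easy to observe numerically. A self-contained sketch of the particle version of the forward-Euler update, with a Gaussian kernel and illustrative constants (the step-size is chosen well below $2/(3L)$ for this kernel):

```python
import numpy as np

def grad_witness(Z, X, Y, sigma):
    """grad f_{mu,nu} at rows of Z, Gaussian kernel (illustrative)."""
    def mean_grad_k(A):
        diff = Z[:, None, :] - A[None, :, :]
        w = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
        return -(w[..., None] * diff).mean(axis=1) / sigma ** 2
    return mean_grad_k(X) - mean_grad_k(Y)

def half_mmd2(X, Y, sigma):
    """F(nu) = MMD^2(mu, nu) / 2, V-statistic estimate."""
    k = lambda A, B: np.exp(
        -((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    return 0.5 * (k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

rng = np.random.default_rng(0)
X = rng.normal(-1.0, 0.5, size=(50, 1))   # particles, approximating nu_0
Y = rng.normal(1.0, 0.5, size=(80, 1))    # target samples from mu
gamma, sigma = 0.5, 2.0                   # illustrative step-size / bandwidth
losses = []
for n in range(200):
    losses.append(half_mmd2(X, Y, sigma))
    X = X - gamma * grad_witness(X, X, Y, sigma)  # X_{n+1} = X_n - gamma grad f
```

On this toy problem the recorded `losses` decrease towards a small residual value set by the finite samples.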
], [ "Convergence properties of the MMD flow", "We are interested in analyzing the asymptotic properties of the gradient flow of ${{\\mathcal {F}}}$ .", "Although we know from prop:decaymmd,prop:decreasingfunctional that ${{\\mathcal {F}}}$ decreases in time, it can very well converge to local minima.", "One way to see this is by looking at the equilibrium condition for eq:timeevolutionmmd.", "As a non-negative and decreasing function, $t \\mapsto {{\\mathcal {F}}}(\\nu _t)$ is guaranteed to converge towards a finite limit $l\\ge 0$ , which implies in turn that the r.h.s.", "of eq:timeevolutionmmd converges to 0.", "If $\\nu _t$ happens to converge towards some distribution $\\nu ^{*}$ , it is possible to show that the equilibrium condition eq:equilibriumcondition must hold , $\\int \\left\\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) =0.$ Condition eq:equilibriumcondition does not necessarily imply that $\\nu ^{*}$ is a global optimum unless the loss function has a particular structure .", "For instance, this would hold if the kernel is linear in at least one of its dimensions.", "However, when a characteristic kernel is required (to ensure the MMD is a distance), such a structure cannot be exploited.", "Similarly, the claim that KSD flow converges globally, , requires an assumption that excludes local minima which are not global (see subsec:equilibriumcondition; recall KSD is related to MMD).", "Global convergence of the flow is harder to obtain, and will be the topic of this section.", "The main challenge is the lack of convexity of ${{\\mathcal {F}}}$ w.r.t.", "the Wasserstein metric.", "We show that ${{\\mathcal {F}}}$ is merely $\\Lambda $ -convex, and that standard optimization techniques only provide a loose bound on its asymptotic value.", "We next exploit a Lojasiewicz-type inequality to prove convergence to the global optimum provided that a particular quantity remains bounded at all times." 
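The gap between the equilibrium condition eq:equilibriumcondition and global optimality can be checked numerically. For a Gaussian kernel and a symmetric two-point target, a dirac at the origin satisfies eq:equilibriumcondition while ${{\mathcal {F}}}>0$; the example below is our own illustrative construction, not one from the paper:

```python
import numpy as np

sigma = 1.0
k = lambda a, b: np.exp(-(a - b) ** 2 / (2 * sigma ** 2))  # Gaussian kernel

Y = np.array([-1.0, 1.0])   # target mu: symmetric two-point measure
x = 0.0                     # candidate equilibrium nu* = dirac at 0

# grad f_{mu,nu*} evaluated on supp(nu*): the nu* term vanishes (z = x)
# and the mu term cancels by symmetry, so eq:equilibriumcondition holds.
grad_f = -(x - x) / sigma**2 * k(x, x) + np.mean((x - Y) / sigma**2 * k(Y, x))

# Yet MMD^2(mu, nu*) stays strictly positive: nu* is not a global optimum.
mmd2 = k(x, x) + k(Y[:, None], Y[None, :]).mean() - 2 * k(x, Y).mean()
```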
], [ "Optimization in a ($W_2$ ) non-convex setting", "The displacement convexity of a functional ${{\\mathcal {F}}}$ is an important criterion in characterizing the convergence of its Wasserstein gradient flow.", "Displacement convexity states that $t\\mapsto {{\\mathcal {F}}}(\\rho _t)$ is a convex function whenever $(\\rho _t)_{t\\in [0,1]}$ is a path of minimal length between two distributions $\\mu $ and $\\nu $ (see def:displacementconvexity).", "Displacement convexity should not be confused with mixture convexity, which corresponds to the usual notion of convexity.", "As a matter of fact, ${{\\mathcal {F}}}$ is mixture convex in that it satisfies: ${{\\mathcal {F}}}(t\\nu +(1-t)\\nu ^{\\prime })\\le t{{\\mathcal {F}}}(\\nu )+(1-t){{\\mathcal {F}}}(\\nu ^{\\prime })$ for all $t\\in [0,1]$ and $\\nu ,\\nu ^{\\prime }\\in \\mathcal {P}_2({{\\mathcal {X}}})$ (see lem:mixtureconvexity).", "Unfortunately, ${{\\mathcal {F}}}$ is not displacement convex.", "Instead, ${{\\mathcal {F}}}$ only satisfies a weaker notion of displacement convexity called $\\Lambda $ -displacement convexity, given in def:lambda-convexity (subsec:lambdaconvexity).", "Proposition 5 Under assump:diffkernel,assump:lipschitzgradientk,assump:boundedfourthoder, ${{\\mathcal {F}}}$ is $\\Lambda $ -displacement convex, and satisfies ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime })-\\int _0^1 \\Lambda (\\rho _s, v_s ) G(s,t)\\mathop {}\\!\\mathrm {d}s$ for all $\\nu , \\nu ^{\\prime }\\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any displacement geodesic $(\\rho _t)_{t\\in [0,1]}$ from $\\nu $ to $\\nu ^{\\prime }$ with velocity vectors $(v_t)_{t \\in [0,1]}$ .", "The functional $\\Lambda $ is defined for any pair $(\\rho ,v)$ with $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ and $\\Vert v\\Vert \\in L_2(\\rho )$ , $\\Lambda (\\rho ,v) = \\left\\Vert \\int v(x)\\cdot \\nabla _x k(x,\\cdot )\\mathop {}\\!\\mathrm {d}\\rho (x) \\right\\Vert ^2_{\\mathcal 
{H}} - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho )^{\\frac{1}{2}} \\int \\left\\Vert v(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho (x),$ where $(s,t)\\mapsto G(s,t)= s(1-t) \\mathbb {1}\\lbrace s\\le t\\rbrace + t(1-s) \\mathbb {1}\\lbrace s\\ge t\\rbrace $ and $\\lambda $ is defined in assump:boundedfourthoder.", "prop:lambdaconvexity can be obtained by computing the second time derivative of ${{\\mathcal {F}}}(\\rho _t)$ , which is then lower-bounded by $\\Lambda (\\rho _t,v_t)$ (see proof:prop:lambdaconvexity).", "In eq:lambda, the map $\\Lambda $ is a difference of two non-negative terms: thus $\\int _0^1 \\Lambda (\\rho _s, v_s ) G(s,t)\\mathop {}\\!\\mathrm {d}s$ can become negative, and displacement convexity does not hold in general.", "provides a convergence result when only $\\Lambda $ -displacement convexity holds, as long as either the potential or the interaction term is convex enough.", "In fact, as mentioned in , the convexity of either term could compensate for a lack of convexity of the other.", "Unfortunately, this cannot be applied to the MMD since both terms involve the same kernel but with opposite signs.", "Hence, even under convexity of the kernel, a concave term appears and cancels the effect of the convex term.", "Moreover, the requirement that the kernel be positive semi-definite makes it hard to construct interesting convex kernels.", "However, it is still possible to provide an upper bound on the asymptotic value of ${{\\mathcal {F}}}(\\nu _n)$ when $(\\nu _n)_{n \\in \\mathbb {N}}$ are obtained using eq:eulerscheme.", "This bound is given in th:ratesmmd, and depends on a scalar $ K(\\rho ^n) := \\int _0^1\\Lambda (\\rho _s^n,v_s^n)(1-s)\\mathop {}\\!\\mathrm {d}s$ , where $(\\rho _s^n)_{s\\in [0,1]}$ is a constant-speed displacement geodesic from $\\nu _n$ to the optimal value $\\mu $ , with velocity vectors $(v_s^n)_{s \\in [0,1]}$ of constant norm.", "Theorem 6 Let $\\bar{K}$ be the average of $(K(\\rho ^j))_{0\\le j \\le n}$ .", 
"Under assump:diffkernel,assump:lipschitzgradientk,assump:boundedfourthoder and if $\\gamma \\le 1/(3L)$ , ${{\\mathcal {F}}}(\\nu _n) \\le \\frac{W_2^2(\\nu _0,\\mu )}{2 \\gamma n} -\\bar{K}.$ th:ratesmmd is obtained using techniques from optimal transport and optimization.", "It relies on prop:lambdaconvexity and prop:decreasingfunctional to prove an extended variational inequality (see prop:evi), and concludes using a suitable Lyapunov function.", "A full proof is given in proof:th:ratesmmd.", "When $\\bar{K}$ is non-negative, one recovers the usual $O(\\frac{1}{n})$ convergence rate of gradient descent.", "However, $\\bar{K}$ can be negative in general, and would therefore act as a barrier on the optimal value that ${{\\mathcal {F}}}(\\nu _n)$ can achieve when $n\\rightarrow \\infty $ .", "In that sense, the above result is similar to .", "th:ratesmmd only provides a loose bound, however.", "In sec:Lojasiewiczinequality we show global convergence, under boundedness at all times $t$ of a specific distance between $\\nu _t$ and $\\mu $ ." 
], [ "A condition for global convergence", "The lack of convexity of ${{\\mathcal {F}}}$ , as shown in subsection:barrieroptimization, suggests that a finer analysis of the convergence should be performed.", "One strategy is to provide estimates for the dynamics in prop:decaymmd using differential inequalities which can be solved using Gronwall's lemma (see ).", "Such inequalities are known in the optimization literature as Lojasiewicz inequalities (see ), and upper-bound ${{\\mathcal {F}}}(\\nu _t)$ by the absolute value of its time derivative $\\int \\Vert \\nabla f_{\\mu ,\\nu _t}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _t(x)$ .", "The latter is the squared weighted Sobolev semi-norm of $f_{\\mu ,\\nu _t}$ (see subsection:Lojasiewicz), also written $\\Vert f_{\\mu ,\\nu _t} \\Vert ^2_{\\dot{H}(\\nu _t)}$ .", "Thus one needs to find a relationship between ${{\\mathcal {F}}}(\\nu _t) = \\frac{1}{2} \\Vert f_{\\mu ,\\nu _t} \\Vert _{\\mathcal {H}}^2 $ and $\\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)}$ .", "For this purpose, we consider the weighted negative Sobolev distance on $\\mathcal {P}_2({{\\mathcal {X}}})$ , defined by duality using $\\Vert \\cdot \\Vert _{\\dot{H}(\\nu )}$ (see also ).", "Definition 1 Let $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , with its corresponding weighted Sobolev semi-norm $\\Vert \\cdot \\Vert _{\\dot{H}(\\nu )}$ .", "The weighted negative Sobolev distance $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )}$ between any $p$ and $q$ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is defined as $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )} = \\sup _{f\\in L_2(\\nu ), \\Vert f \\Vert _{\\dot{H}(\\nu )} \\le 1 } \\left|\\int f(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int f(x)\\mathop {}\\!\\mathrm {d}q(x) \\right|$ with possibly infinite values.", "Equation eq:negsobolev plays a fundamental role in dynamic optimal transport.", "It can be seen as the minimum kinetic energy needed to advect the mass from $p$ to $q$ (see ).", "It is shown in 
proof:prop:lojasiewicz that $\\Vert f_{\\mu ,\\nu _t} \\Vert ^2_{\\mathcal {H}} \\le \\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)} \\Vert \\mu -\\nu _t\\Vert _{\\dot{H}^{-1}(\\nu _t)} .$ Provided that $\\Vert \\mu - \\nu _t \\Vert _{\\dot{H}^{-1}(\\nu _t)} $ remains bounded by some positive constant $C$ at all times, eq:inequalitynegsobolev leads to a functional version of Lojasiewicz inequality for ${{\\mathcal {F}}}$ .", "It is then possible to use the general strategy explained earlier to prove the convergence of the flow to a global optimum: Proposition 7 Under assump:lipschitzgradientk, If $\\Vert \\mu - \\nu _t \\Vert ^2_{\\dot{H}^{-1}(\\nu _t)} \\le C$ , for all $t\\ge 0$ , then: $\\mathcal {F}(\\nu _t)\\le \\frac{C}{C\\mathcal {F}(\\nu _0)^{-1} + 4t}$ , If $\\Vert \\mu - \\nu _n \\Vert ^2_{\\dot{H}^{-1}(\\nu _n)} \\le C$ for all $n\\ge 0$ , then: $\\mathcal {F}(\\nu _n)\\le \\frac{C}{C\\mathcal {F}(\\nu _0)^{-1} + 4\\gamma (1-\\frac{3}{2}\\gamma L) n}$ .", "Proofs of prop:lojasiewicz (i) and (ii) are direct consequences of prop:decaymmd,prop:decreasingfunctional and the bounded energy assumption: see proof:prop:lojasiewicz.", "The fact that eq:negsobolev appears in the context of Wasserstein flows of ${{\\mathcal {F}}}$ is not a coincidence.", "Indeed, eq:negsobolev is a linearization of the Wasserstein distance (see , and subsec:Lojasiewiczdifferentmetrics).", "Gradient flows of ${{\\mathcal {F}}}$ defined under different metrics would involve other kinds of distances instead of eq:negsobolev.", "For instance, consider gradient flows under a hybrid metric (a mixture between the Wasserstein distance and KL divergence), where convergence rates can then be obtained provided that the chi-square divergence $\\chi ^2(\\mu \\Vert \\nu _t)$ remains bounded.", "As shown in subsec:Lojasiewiczdifferentmetrics, $\\chi ^2(\\mu \\Vert \\nu _t)^{\\frac{1}{2}}$ turns out to linearize $KL(\\mu \\Vert \\nu _t)^{\\frac{1}{2}}$ when $\\mu $ and $\\nu _t$ are close.", 
"Hence, we conjecture that gradient flows of ${{\\mathcal {F}}}$ under a metric $d$ can be shown to converge when the linearization of the metric remains bounded.", "This can be verified on simple examples for $\\Vert \\mu - \\nu _t \\Vert _{\\dot{H}^{-1}(\\nu _t)} $ as discussed in subsec:simpleexample.", "However, it remains hard to guarantee this condition in general.", "One possible approach could be to regularize ${{\\mathcal {F}}}$ using an estimate of eq:negsobolev.", "Indeed, considers the gradient flow of a regularized version of the negative Sobolev distance which can be written in closed form, and shows that this decreases the MMD.", "Combining both losses could improve the overall convergence properties of the MMD, albeit at additional computational cost.", "In the next section, we propose a different approach to improve the convergence, and a particle-based algorithm to approximate the MMD flow in practice." ], [ "A practical algorithm to descend the MMD flow" ], [ "A noisy update as a regularization", "We showed in subsection:barrieroptimization that ${{\\mathcal {F}}}$ is a non-convex functional, and derived a condition in sec:Lojasiewiczinequality to reach the global optimum.", "We now address the case where such a condition does not necessarily hold, and provide a regularization of the gradient flow to help achieve global optimality in this scenario.", "Our starting point will be the equilibrium condition in eq:equilibriumcondition.", "If an equilibrium $\\nu ^*$ that satisfies eq:equilibriumcondition happens to have a positive density, then $f_{\\mu ,\\nu ^{*}}$ would be constant everywhere.", "This in turn would mean that $f_{\\mu ,\\nu ^{*}}=0$ when the RKHS does not contain constant functions, as for a gaussian kernel .", "Hence, $\\nu ^*$ would be a global optimum since ${{\\mathcal {F}}}(\\nu ^{*})=0$ .", "The limit distribution $\\nu ^*$ might be singular, however, and can even be a dirac distribution .", "Although the gradient $\\nabla f_{\\mu ,\\nu ^{*}}$ 
is not identically 0 in that case, eq:equilibriumcondition only evaluates it on the support of $\\nu ^{*}$ , on which $\\nabla f_{\\mu ,\\nu ^{*}}=0$ holds.", "Hence a possible fix would be to make sure that the unnormalised witness gradient is also evaluated at points outside of the support of $\\nu ^{*}$ .", "Here, we propose to regularize the flow by injecting noise into the gradient during updates of eq:eulerschemeparticles, $X_{n+1} = X_{n} -\\gamma \\nabla f_{\\mu ,\\nu _n}(X_n+ \\beta _n U_n), \\qquad n\\ge 0,$ where $U_n$ is a standard gaussian variable and $\\beta _n$ is the noise level at $n$ .", "Compared to eq:eulerscheme, the sample here is first blurred before evaluating the gradient.", "Intuitively, if $\\nu _n$ approaches a local optimum $\\nu ^{*}$ , $ \\nabla f_{\\mu ,\\nu _n}$ would be small on the support of $\\nu _n$ but it might be much larger outside of it, hence evaluating $\\nabla f_{\\mu ,\\nu _n}$ outside the support of $\\nu _n$ can help in escaping the local minimum.", "The stochastic process eq:discretizednoisyflow is different from adding a diffusion term to eq:continuitymmd.", "The latter case would correspond to regularizing ${{\\mathcal {F}}}$ using an entropic term as in , (see also subsec:klflow on the Langevin diffusion) and was shown to converge to a global optimum that is in general different from the global minimum of the un-regularized loss.", "Eq.", "eq:discretizednoisyflow is also different from , , where ${{\\mathcal {F}}}$ (and thus its associated velocity field) is regularized by convolving the interaction potential $W$ in eq:potentials with a mollifier.", "The optimal solution of a regularized version of the functional ${{\\mathcal {F}}}$ will be generally different from the non-regularized one, however, which is not desirable in our setting.", "Eq.", "eq:discretizednoisyflow is more closely related to the continuation methods , , and graduated optimization used for non-convex optimization in Euclidean spaces, which inject 
noise into the gradient of a loss function $F$ at each iteration.", "The key difference is the dependence of $f_{\\mu ,\\nu _n}$ on $\\nu _n$ , which is inherently due to functional optimization.", "We show in thm:convergencenoisygradient that eq:discretizednoisyflow attains the global minimum of ${{\\mathcal {F}}}$ provided that the level of the noise is well controlled, with the proof given in proof:thm:convergencenoisygradient.", "Proposition 8 Let $(\\nu _n)_{n\\in \\mathbb {N}}$ be defined by eq:discretizednoisyflow with an initial $\\nu _0$ .", "Denote $\\mathcal {D}_{\\beta _n}(\\nu _n)=\\mathbb {E}_{x\\sim \\nu _n, u\\sim g}[\\Vert \\nabla f_{\\mu ,\\nu _n}(x+\\beta _n u) \\Vert ^2]$ with $g$ the density of the standard gaussian distribution.", "Under assump:lipschitzgradientk,assump:Lipschitzgradrkhs, and for a choice of $\\beta _n$ such that $8\\lambda ^2\\beta _n^2 {{\\mathcal {F}}}(\\nu _n) \\le \\mathcal {D}_{\\beta _n}(\\nu _n),$ the following inequality holds: ${{\\mathcal {F}}}(\\nu _{n+1}) - {{\\mathcal {F}}}(\\nu _n ) \\le -\\frac{\\gamma }{2}(1-3\\gamma L)\\mathcal {D}_{\\beta _n}(\\nu _n),$ where $\\lambda $ and $L$ are defined in assump:lipschitzgradientk,assump:Lipschitzgradrkhs and depend only on the choice of the kernel.", "Moreover if $\\sum _{i=0}^n \\beta _i^2 \\rightarrow \\infty ,$ then ${{\\mathcal {F}}}(\\nu _n)\\le {{\\mathcal {F}}}(\\nu _0) e^{-4\\lambda ^2\\gamma (1-3\\gamma L)\\sum _{i=0}^n \\beta ^2_i}.$ A particular case where $\\sum _{i=0}^n \\beta _i^2 \\rightarrow \\infty $ holds is when $\\beta _n$ decays as $1/\\sqrt{n}$ while still satisfying eq:controllevelnoise.", "In this case, convergence occurs in polynomial time.", "At each iteration, the level of the noise needs to be adjusted such that the gradient is not too blurred.", "This ensures that each step decreases the loss functional.", "However, $\\beta _n$ does not need to decrease at each iteration: it could increase adaptively whenever 
needed.", "For instance, when the sequence gets closer to a local optimum, it is helpful to increase the level of the noise to probe the gradient in regions where its value is not flat.", "Note that for $\\beta _n = 0$ in eq:decreasinglossiterations , we recover a similar bound to prop:decreasingfunctional." ], [ "The sample-based approximate scheme", "We now provide a practical algorithm to implement the noisy updates of the previous section, which employs a discretization in space.", "The update eq:discretizednoisyflow involves computing expectations of the gradient of the kernel $k$ w.r.t.", "the target distribution $\\mu $ and the current distribution $\\nu _n$ at each iteration $n$ .", "This suggests a simple approximate scheme, based on samples from these two distributions, where at each iteration $n$ , we model a system of $N$ interacting particles $(X_n^i)_{1\\le i\\le N}$ and their empirical distribution in order to approximate $\\nu _n$ .", "More precisely, given i.i.d.", "samples $(X^i_0)_{1\\le i\\le N}$ and $(Y^{m})_{1\\le m\\le M}$ from $\\nu _0$ and $\\mu $ and a step-size $\\gamma $ , the approximate scheme iteratively updates the $i$ -th particle as $X_{n+1}^{i} = X_n^i -\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(X_n^i+\\beta _n U_n^i),$ where $U_{n}^{i}$ are i.i.d.", "standard gaussians and $\\hat{\\mu },\\,\\hat{\\nu }_n$ denote the empirical distributions of $(Y^{m})_{1\\le m\\le M}$ and $(X^i_n)_{1\\le i\\le N}$ , respectively.", "It is worth noting that for $\\beta _n=0$ , eq:eulermaruyama is equivalent to gradient descent over the particles $(X_n^{i})$ using a sample-based version of the MMD.", "Implementing eq:eulermaruyama is straightforward as it only requires evaluating the gradient of $k$ on the current particles and target samples.", "Pseudocode is provided in euclid.", "The overall computational cost of the algorithm at each iteration is $O((M+N)N)$ with $O(M+N)$ memory.", "The computational cost becomes $O(M+N)$ when the kernel is approximated 
using random features, as is the case for regression with neural networks (subsec:trainingneuralnetworks).", "This is in contrast to the cubic cost of the flow of the KSD , which requires solving a linear system at each iteration.", "The cost can also be compared to the algorithm in , which involves computing empirical CDF and quantile functions of random projections of the particles.", "The approximation scheme in eq:eulermaruyama is a particle version of eq:discretizednoisyflow, so one would expect it to converge towards its population version eq:discretizednoisyflow as $M$ and $N$ go to infinity.", "This is shown below.", "Theorem 9 Let $n\\ge 0$ and $T>0$ .", "Let $\\nu _n$ and $\\hat{\\nu }_n$ be defined by eq:eulerscheme and eq:eulermaruyama respectively.", "Suppose assump:lipschitzgradientk holds and that $\\beta _n<B$ for all $n$ , for some $B>0$ .", "Then for any $\\frac{T}{\\gamma }\\ge n$ : $\\mathbb {E}\\left[W_{2}(\\hat{\\nu }_{n},\\nu _{n})\\right]\\le \\frac{1}{4}\\left(\\frac{1}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{1}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}\\right)(e^{4LT}-1)$ prop:convergenceeulermaruyama controls the propagation of chaos at each iteration, and uses techniques from .", "Notice also that these rates remain true when no noise is added to the updates, i.e.", "for the original flow when $B=0$ .", "A proof is provided in proof:propagationchaos.", "The $1/\\sqrt{M}$ dependence underlines the fact that our procedure could be interesting as a sampling algorithm when one only has access to $M$ samples of $\\mu $ (see subsec:klflow for a more detailed discussion).", "Experiments Figure: Comparison between different training methods for student-teacher ReLU networks with Gaussian output non-linearity and synthetic data uniform on a hyper-sphere.", "In blue, eq:eulermaruyama is used without noise ($\\beta _n=0$ ) while in red noise is added with the following schedule: $\\beta _0>0$ and $\\beta _n$ is decreased by 
half after every $10^3$ epochs.", "In green, a diffusion term is added to the particles with noise level kept fixed during training ($\\beta _n=\\beta _0$ ).", "In purple, the KSD is used as a cost function instead of the MMD.", "In all cases, the kernel is estimated using random features (RF) with a batch size of $10^2$ .", "Best step-size was selected for each method from $\\lbrace 10^{-3},10^{-2},10^{-1}\\rbrace $ and was used for $10^4$ epochs on a dataset of $10^3$ samples (RF).", "Initial parameters of the networks are drawn from i.i.d.", "Gaussians: $\\mathcal {N}(0,1)$ for the teacher and $\\mathcal {N}(10^{-3},1)$ for the student.", "Results are averaged over 10 different runs.", "fig:experimentsstudentteacher illustrates the behavior of the proposed algorithm eq:eulermaruyama in a simple setting and compares it with three other methods: MMD without noise injection (blue traces), MMD with diffusion (green traces) and KSD (purple traces, ).", "Here, a student network is trained to produce the outputs of a teacher network using gradient descent.", "More details on the experiment are provided in sec:experimentsneuralnetwork.", "As discussed in subsec:trainingneuralnetworks, this setting can be seen as a stochastic version of the MMD flow since the kernel is estimated using random features at each iteration (eq:randomfeatureskernel in sec:experimentsneuralnetwork).", "Here, the MMD flow fails to converge towards the global optimum.", "Such behavior is consistent with the observations in when the parameters are initialized from Gaussian noise with relatively high variance (which is the case here).", "On the other hand, adding noise to the gradient seems to lead to global convergence.", "Indeed, the training error decreases below $10^{-5}$ and leads to a much better validation error.", "While adding a small diffusion term (green) helps convergence, the noise-injection (red) still outperforms it.", "This also holds for 
KSD (purple), which leads to a good solution (b), although at a much higher computational cost (a).", "Our noise injection method (red) is also robust to the amount of noise and achieves its best performance over a wide region (c).", "On the other hand, MMD + diffusion (green) performs well only for much smaller values of noise that are located in a narrow region.", "This is expected since adding a diffusion changes the optimal solution, unlike the injection, where the global optimum of the MMD remains a fixed point of the algorithm.", "Another illustrative experiment on a simple flow between Gaussians is given in sec:experimentsgaussian.", "Conclusion We have introduced MMD flow, a novel flow over the space of distributions, with a practical space-time discretized implementation and a regularization scheme to improve convergence.", "We provide theoretical results, highlighting intrinsic properties of the regular MMD flow, and guarantees on convergence based on recent results in optimal transport, probabilistic interpretations of PDEs, and particle algorithms.", "Future work will focus on a deeper understanding of regularization for MMD flow, and its application in sampling and optimization for large neural networks.", "This appendix is organized as follows.", "In sec:appendixmathbackground, the mathematical background needed for this paper is given.", "In sec:assumptionskernel, we state the main assumptions used in this work.", "sec:appendixgradientflow is dedicated to the construction of the gradient flow of the MMD.", "sec:appendixconvergence provides proofs for the convergence results in sec:convergencemmdflow.", "sec:appendixalgorithms is dedicated to the modified gradient flow based on noise injection.", "In subsec:trainingneuralnetworks, we discuss the connection with optimization of neural networks.", "sec:experiments provides details about the experiments.", "Finally, some auxiliary results are provided in sec:auxiliaryresults.", "Mathematical background We define 
${{\\mathcal {X}}}\\subset {{\\mathbb {R}}}^d$ as the closure of a convex open set, and $\\mathcal {P}_2({{\\mathcal {X}}})$ as the set of probability distributions on ${{\\mathcal {X}}}$ with finite second moment, equipped with the 2-Wasserstein metric denoted $W_2$ .", "For any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $L_2(\\nu )$ is the set of square integrable functions w.r.t.", "$\\nu $ .", "Maximum Mean Discrepancy and Reproducing Kernel Hilbert Spaces We recall here fundamental definitions and properties of reproducing kernel Hilbert spaces (RKHS) (see ) and Maximum Mean Discrepancies (MMD).", "Given a positive semi-definite kernel $(x,y)\\mapsto k(x,y)\\in {{\\mathbb {R}}}$ defined for all $x,y\\in {{\\mathcal {X}}}$ , we denote by ${{\\mathcal {H}}}$ its corresponding RKHS (see ).", "The space ${{\\mathcal {H}}}$ is a Hilbert space with inner product $\\langle .,.", "\\rangle _{{{\\mathcal {H}}}}$ and corresponding norm $\\Vert .", "\\Vert _{{{\\mathcal {H}}}}$ .", "A key property of ${{\\mathcal {H}}}$ is the reproducing property: for all $f \\in {{\\mathcal {H}}}, f(x) = \\langle f, k(x, .", ")\\rangle _{{{\\mathcal {H}}}}$ .", "Moreover, if $k$ is $m$ -times differentiable w.r.t.", "each of its coordinates, then any $f\\in {{\\mathcal {H}}}$ is $m$ -times differentiable and $\\partial ^{\\alpha }f(x)=\\langle f, \\partial ^{\\alpha } k(x,.)", "\\rangle _{{{\\mathcal {H}}}}$ where $\\alpha $ is any multi-index with $|\\alpha | \\le m$ .", "When $k$ has at most quadratic growth, then for all $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $\\int k(x,x) \\mathop {}\\!\\mathrm {d}\\mu (x) <\\infty $ .", "In that case, for any $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $ \\phi _{\\mu } := \\int k(.,x)\\mathop {}\\!\\mathrm {d}\\mu (x)$ is a well defined element in ${{\\mathcal {H}}}$ called the mean embedding of $\\mu $ .", "The kernel $k$ is said to be characteristic when the mean embedding is injective, that is, any mean embedding is associated 
to a unique probability distribution.", "When $k$ is characteristic, it is possible to define a distance between distributions in $\\mathcal {P}_2({{\\mathcal {X}}})$ called the Maximum Mean Discrepancy: $MMD(\\mu ,\\nu ) = \\Vert \\phi _{\\mu } - \\phi _{\\nu }\\Vert _{{{\\mathcal {H}}}} \\qquad \\forall \\; \\mu ,\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}}).$ The difference between the mean embeddings of $\\mu $ and $\\nu $ is an element in ${{\\mathcal {H}}}$ called the unnormalised witness function between $\\mu $ and $\\nu $ : $f_{\\mu ,\\nu } = \\phi _{\\nu } - \\phi _{\\mu }$ .", "The MMD can also be seen as an Integral Probability Metric: $MMD(\\mu ,\\nu ) = \\sup _{g\\in \\mathcal {B}} \\int g\\mathop {}\\!\\mathrm {d}\\mu - \\int g \\mathop {}\\!\\mathrm {d}\\nu $ where $\\mathcal {B} = \\lbrace g\\in {{\\mathcal {H}}}: \\; \\Vert g\\Vert _{{{\\mathcal {H}}}}\\le 1 \\rbrace $ is the unit ball in the RKHS.", "2-Wasserstein geometry For two given probability distributions $\\nu $ and $\\mu $ in $\\mathcal {P}_2({{\\mathcal {X}}})$ , we denote by $\\Pi (\\nu ,\\mu )$ the set of possible couplings between $\\nu $ and $\\mu $ .", "In other words $\\Pi (\\nu ,\\mu )$ contains all possible distributions $\\pi $ on ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ such that if $(X,Y) \\sim \\pi $ then $X \\sim \\nu $ and $Y\\sim \\mu $ .", "The 2-Wasserstein distance on $\\mathcal {P}_2({{\\mathcal {X}}})$ is defined by means of an optimal coupling between $\\nu $ and $\\mu $ in the following way: $W_2^2(\\nu ,\\mu ) := \\inf _{\\pi \\in \\Pi (\\nu ,\\mu )} \\int \\left\\Vert x - y\\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\pi (x,y) \\qquad \\forall \\nu , \\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ It is a well established fact that such optimal coupling $\\pi ^*$ exists , .", "Moreover, it can be used to define a path $(\\rho _t)_{t\\in [0,1]}$ between $\\nu $ and $\\mu $ in $\\mathcal {P}_2({{\\mathcal {X}}})$ .", "For a given time $t$ in $[0,1]$ and given a 
sample $(x,y)$ from $\\pi ^{*}$ , it is possible to construct a sample $z_t$ from $\\rho _t$ by taking the convex combination of $x$ and $y$ : $z_t = s_t(x,y)$ where $s_t$ is given by: $s_t(x,y) = (1-t)x+ty \\qquad \\forall x,y\\in {{\\mathcal {X}}}, \\; \\forall t\\in [0,1].$ The function $s_t$ is well defined since ${{\\mathcal {X}}}$ is a convex set.", "More formally, $\\rho _t$ can be written as the projection or push-forward of the optimal coupling $\\pi ^{*}$ by $s_t$ : $\\rho _t = (s_t)_{\\#}\\pi ^{*}$ We recall that for any measurable map $T: {{\\mathcal {X}}}\\rightarrow {{\\mathcal {X}}}$ and any $\\rho \\in \\mathcal {P}({{\\mathcal {X}}})$ , the push-forward measure $T_{\\#}\\rho $ is characterized by: $\\int _{y \\in {{\\mathcal {X}}}} \\phi (y) \\mathop {}\\!\\mathrm {d}T_{\\#}\\rho (y) =\\int _{x \\in {{\\mathcal {X}}}}\\phi (T(x)) \\mathop {}\\!\\mathrm {d}\\rho (x) \\text{ for every measurable and bounded function $\\phi $.", "}$ It is easy to see that eq:displacementgeodesic satisfies the following boundary conditions at $t=0,1$ : $\\rho _0 = \\nu \\qquad \\rho _1 = \\mu .$ Paths of the form eq:displacementgeodesic are called displacement geodesics.", "They can be seen as the shortest paths from $\\nu $ to $\\mu $ in terms of mass transport ( Theorem 5.27).", "It can be shown that there exists a velocity vector field $(t,x)\\mapsto V_t(x)$ with values in ${{\\mathbb {R}}}^d$ such that $\\rho _t$ satisfies the continuity equation: $\\partial _t \\rho _t + div(\\rho _t V_t ) = 0 \\qquad \\forall t\\in [0,1].$ This equation expresses two facts: first, $-div(\\rho _t V_t)$ reflects the infinitesimal changes in $\\rho _t$ as dictated by the vector field (also referred to as the velocity field) $V_t$ ; second, the total mass of $\\rho _t$ does not vary in time, as a consequence of the divergence theorem.", "Equation eq:continuityequation is well defined in the distribution sense even when $\\rho _t$ does not have a 
density.", "At each time $t$ , $V_t$ can be interpreted as a tangent vector to the curve $(\\rho _t)_{t\\in [0,1]}$ so that the length $l((\\rho _t)_{t\\in [0,1]})$ of the curve $(\\rho _t)_{t\\in [0,1]}$ would be given by: $l((\\rho _t)_{t\\in [0,1]})^2 = \\int _0^1 \\Vert V_t \\Vert ^2_{L_2(\\rho _t)} \\mathop {}\\!\\mathrm {d}t \\quad \\text{ where } \\quad \\left\\Vert V_t \\right\\Vert ^2_{L_2(\\rho _t)} = \\int \\left\\Vert V_t(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho _t(x)$ This perspective provides a dynamical interpretation of the $W_2$ as the length of the shortest path from $\\nu $ to $\\mu $ and is summarized by the celebrated Benamou-Brenier formula: $W_2(\\nu ,\\mu ) = \\inf _{(\\rho _t,V_t)_{t\\in [0,1]}} l((\\rho _t)_{t\\in [0,1]})$ where the infimum is taken over all couples $\\rho $ and $v$ satisfying eq:continuityequation with boundary conditions given by eq:boundaryconditions.", "If $(\\rho _t,V_t)_{t\\in [0,1]}$ satisfies eq:continuityequation and eq:boundaryconditions and realizes the infimum in eq:benamou-brenier-formula, it is then simply called a geodesic between $\\nu $ and $\\mu $ ; moreover, it is called a constant-speed geodesic if, in addition, the norm of $V_t$ is constant for all $t\\in [0,1]$ .", "As a consequence, eq:displacementgeodesic is a constant-speed displacement geodesic.", "Remark 1 Such paths should not be confused with another kind of path, called mixture geodesics.", "The mixture geodesic $(m_t)_{t\\in [0,1]}$ from $\\nu $ to $\\mu $ is obtained by first choosing either $\\nu $ or $\\mu $ according to a Bernoulli distribution of parameter $t$ and then sampling from the chosen distribution: $m_t = (1-t)\\nu + t\\mu \\qquad \\forall t \\in [0,1].$ Paths of the form eq:mixturegeodesic can be thought of as the shortest paths between two distributions when distances on $\\mathcal {P}_2({{\\mathcal {X}}})$ are measured using the MMD (see Theorem 5.3).", "We refer to for an overview of the notion of shortest 
paths in probability spaces and for the differences between mixture geodesics and displacement geodesics.", "Although we will be interested in the MMD as a loss function, we will not consider the geodesics that are naturally associated with it and will rather consider the displacement geodesics defined in eq:displacementgeodesic, for reasons that will become clear in subsec:lambdaconvexity.", "Gradient flows on the space of probability measures Consider a real-valued functional ${{\\mathcal {F}}}$ defined over $\\mathcal {P}_2({{\\mathcal {X}}})$ .", "We call $\\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}$ , if it exists, the unique (up to additive constants) function such that $\\frac{d}{d\\epsilon }{{\\mathcal {F}}}(\\nu +\\epsilon (\\nu ^{\\prime }-\\nu ))\\vert _{\\epsilon =0}=\\int \\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}(\\nu ) (\\mathop {}\\!\\mathrm {d}\\nu ^{\\prime }-\\mathop {}\\!\\mathrm {d}\\nu ) $ for any $\\nu ^{\\prime } \\in \\mathcal {P}_2({{\\mathcal {X}}})$ .", "The function $\\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}$ is called the first variation of ${{\\mathcal {F}}}$ evaluated at $\\nu $ .", "We consider here functionals ${{\\mathcal {F}}}$ of the form: ${{\\mathcal {F}}}(\\nu )=\\int U(\\nu (x)) dx + \\int V(x)\\nu (x)dx + \\int W(x,y)\\nu (x)\\nu (y)dxdy$ where $U$ is the internal potential, $V$ an external potential and $W$ an interaction potential.", "The formal gradient flow equation associated with such a functional can be written (see , Lemma 8 to 10): $\\frac{\\partial \\nu }{\\partial t}= div( \\nu \\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu })=div( \\nu \\nabla (U^{\\prime }(\\nu ) + V + W * \\nu ))$ where $div$ is the divergence operator and $\\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu }$ is the strong subdifferential of ${{\\mathcal {F}}}$ associated with the $W_2$ metric (see , Lemma 10.4.1).", "Indeed, for some generalized notion of gradient $\\nabla _{W_2}$ , 
and for sufficiently regular $\\nu $ and ${{\\mathcal {F}}}$ , the r.h.s.", "of eq:continuityequation1 can be formally written as $-\\nabla _{W_2}{{\\mathcal {F}}}(\\nu )$ .", "The dissipation of energy along the flow is then given by: $\\frac{d {{\\mathcal {F}}}(\\nu _t)}{dt} =-D(\\nu _t) \\quad \\text{ with } D(\\nu )= \\int \\Vert \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu )}{\\partial \\nu }(x)\\Vert ^2 \\nu (x)dx$ This expression can be obtained by the following formal calculations: $\\frac{d{{\\mathcal {F}}}(\\nu _t)}{dt}=\\int \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu } \\frac{\\partial \\nu _t}{\\partial t}=\\int \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu } div( \\nu _t \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu })=-\\int \\Vert \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu }\\Vert ^2d \\nu _t.$ Displacement convexity Just as for Euclidean spaces, an important criterion to characterize the convergence of the Wasserstein gradient flow of a functional ${{\\mathcal {F}}}$ is given by displacement convexity (see ): Definition 2 [Displacement convexity] We say that a functional $\\nu \\mapsto \\mathcal {F}(\\nu )$ is displacement convex if for any $\\nu $ and $\\nu ^{\\prime }$ and a constant speed geodesic $(\\text{$\\rho _{t}$})_{t \\in [0,1]}$ between $\\nu $ and $\\nu ^{\\prime }$ with velocity vector field $(V_{t})_{t \\in [0,1]}$ as defined by eq:continuityequation, the following holds: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime }) \\qquad \\forall \\; t\\in [0,1].$ def:displacementconvexity can be relaxed to a more general notion of convexity called $\\Lambda $ -displacement convexity (see ).", "We first define an admissible functional $\\Lambda $ : Definition 3 [Admissible $\\Lambda $ functional] Consider a functional $(\\rho ,v)\\mapsto \\Lambda (\\rho ,v) \\in {{\\mathbb {R}}}$ defined for any probability 
distribution $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any square integrable vector field $v$ w.r.t. $\\rho $ .", "We say that $\\Lambda $ is admissible if it satisfies: For any $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $v\\mapsto \\Lambda (\\rho ,v)$ is a quadratic form.", "For any geodesic $(\\rho _t)_{0\\le t\\le 1}$ between two distributions $\\nu $ and $\\nu ^{\\prime }$ with corresponding vector fields $(V_t)_{t \\in [0,1]}$ it holds that $\\inf _{0\\le t\\le 1}\\Lambda (\\rho _t,V_t)/\\Vert V_t\\Vert _{L_{2}(\\rho _t)}^{2}>-\\infty $ We can now define the notion of $\\Lambda $ -convexity: Definition 4 [$\\Lambda $ convexity] We say that a functional $\\nu \\mapsto \\mathcal {F}(\\nu )$ is $\\Lambda $ -convex if for any $(\\nu ,\\nu ^{\\prime })\\in \\mathcal {P}_2({{\\mathcal {X}}})^2$ and a constant speed geodesic $(\\text{$\\rho _{t}$})_{t \\in [0,1]}$ between $\\nu $ and $\\nu ^{\\prime }$ with velocity vector field $(V_{t})_{t \\in [0,1]}$ as defined by eq:continuityequation, the following holds: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime })-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds \\qquad \\forall \\; t\\in [0,1].$ where $(\\rho ,v)\\mapsto \\Lambda (\\rho ,v)$ satisfies def:conditionslambda, and $G(s,t)=s(1-t) \\mathbb {I}\\lbrace s\\le t\\rbrace +t(1-s) \\mathbb {I}\\lbrace s\\ge t\\rbrace $ .", "A particular case is when $\\Lambda (\\rho ,v)= \\lambda \\int \\left\\Vert v(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho (x) $ for some $\\lambda \\in {{\\mathbb {R}}}$ .", "In that case, eq:lambdadisplacementconvex becomes: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime })-\\frac{\\lambda }{2}t(1-t)W_2^2(\\nu ,\\nu ^{\\prime }) \\qquad \\forall \\; t\\in [0,1].$ def:displacementconvexity is a particular case of def:lambda-convexity, where in eq:semi-convexity one has $\\lambda =0$ .", "Comparison with the Kullback-Leibler 
divergence flow Continuity equation and McKean-Vlasov process.", "A famous example of a free energy eq:lyapunov is the Kullback-Leibler divergence, defined for $\\nu , \\mu \\in \\mathcal {P}({{\\mathcal {X}}})$ by $KL(\\nu ,\\mu )=\\int \\log (\\frac{\\nu (x)}{\\mu (x)})\\nu (x)dx$ .", "Indeed, $KL(\\nu , \\mu )=\\int U(\\nu (x))dx + \\int V(x) \\nu (x)dx$ with $U(s)=s\\log (s)$ the entropy function and $V(x)=-\\log (\\mu (x))$ .", "In this case, $\\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu }= \\nabla \\log (\\nu ) + \\nabla V= \\nabla \\log (\\frac{\\nu }{\\mu })$ and equation eq:continuityequation1 leads to the classical Fokker-Planck equation $\\frac{\\partial {\\nu }}{\\partial t}= div(\\nu \\nabla V )+ \\Delta \\nu ,$ where $\\Delta $ is the Laplacian operator.", "It is well-known (see for instance ) that the distribution of the Langevin diffusion in eq:langevindiffusion satisfies eq:Fokker-Planck, $dX_t= \\nabla \\log \\mu (X_t)dt+\\sqrt{2}dB_t.$ Here, $(B_t)_{t\\ge 0}$ is a $d$ -dimensional Brownian motion.", "While the entropy term in the $KL$ functional prevents the particles from \"crashing\" onto the mode of $\\mu $ , this role could be played by the interaction energy $W$ defined in eq:potentials for the MMD.", "Indeed, consider for instance the Gaussian kernel $k(x,x^{\\prime })=e^{-\\Vert x-x^{\\prime }\\Vert ^2}$ .", "It is convex, thus attractive, at long distances ($\\Vert x-x^{\\prime }\\Vert >1$ ), but concave, thus repulsive, at small distances.", "Convergence to a global minimum.", "The solution to the Fokker-Planck equation describing the gradient flow of the $KL$ can be shown to converge towards $\\mu $ under mild assumptions.", "This follows from the displacement convexity of the $KL$ along the Wasserstein geodesics.", "Unfortunately, the MMD is not displacement convex in general, as shown in subsection:barrieroptimization or subsec:appendixlambdaconvexity.", "This makes the task of proving the convergence of the gradient flow of the 
MMD to the global optimum $\\mu $ much harder.", "Sampling algorithms derived from gradient flows.", "Two settings are usually encountered in the sampling literature: density-based, i.e.", "the target $\\mu $ is known up to a constant, or sample-based, i.e.", "only a set of samples $X \\sim \\mu $ is accessible.", "The Unadjusted Langevin Algorithm (ULA), which involves a time-discretized version of the Langevin diffusion, falls into the first category since it requires the knowledge of $\\nabla \\log \\mu $ .", "In a sample-based setting, it may be difficult to adapt the ULA algorithm, since this would require estimating $\\nabla \\log (\\mu )$ based on a set of samples of $\\mu $ , before plugging this estimate into the update of the algorithm.", "This problem, sometimes referred to as score estimation in the literature, has been the subject of a lot of work but remains hard, especially in high dimensions (see ,,).", "In contrast, the discretized flow (in time and space) of the MMD presented in sec:samplebased is naturally adapted to the sample-based setting.", "Main assumptions We state here all the assumptions on the kernel $k$ used to prove all the results: $k$ is continuously differentiable on ${{\\mathcal {X}}}$ with $L$ -Lipschitz gradient: $\\Vert \\nabla k(x,x^{\\prime }) - \\nabla k(y,y^{\\prime })\\Vert \\le L(\\Vert x-y\\Vert + \\Vert x^{\\prime }-y^{\\prime } \\Vert ) $ for all $x,x^{\\prime },y,y^{\\prime } \\in {{\\mathcal {X}}}$ .", "$k$ is twice differentiable on ${{\\mathcal {X}}}$ .", "$\\Vert Dk(x,y) \\Vert \\le \\lambda $ for all $x,y\\in {{\\mathcal {X}}}$ , where $Dk(x,y)$ is a $d^2\\times d^2$ matrix with entries given by $\\partial _{x_{i}}\\partial _{x_{j}}\\partial _{x^{\\prime }_{i}}\\partial _{x_{j}^{\\prime }}k(x,y)$ .", "$ \\sum _{i=1}^d\\Vert \\partial _i k(x,.)", "- \\partial _i k(y,.", ")\\Vert ^2_{{{\\mathcal {H}}}} \\le \\lambda ^2 \\Vert x-y\\Vert ^2 $ for all $x,y\\in {{\\mathcal {X}}}$ .", 
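These assumptions are satisfied, for instance, by the Gaussian kernel. As a minimal illustrative sketch (not the paper's reference implementation; the bandwidth `sigma`, step-size `gamma`, and noise level `beta` below are assumed placeholder values), the sample-based noisy update eq:eulermaruyama with a Gaussian kernel can be written in a few lines of NumPy:

```python
import numpy as np

def grad_k(x, y, sigma=1.0):
    # Gradient w.r.t. x of the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    # which satisfies the Lipschitz-gradient assumption above.
    diff = x[:, None, :] - y[None, :, :]                        # (n, m, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))  # (n, m)
    return -diff * k[..., None] / sigma ** 2                    # (n, m, d)

def mmd_flow_step(X, Y, gamma=0.1, beta=0.0, rng=None):
    # One noisy Euler step X_{n+1}^i = X_n^i - gamma * grad f(X_n^i + beta U_n^i),
    # where grad f is the witness-function gradient for the empirical distributions
    # of the particles X (approximating nu_n) and the target samples Y (approximating mu).
    rng = np.random.default_rng() if rng is None else rng
    Xe = X + beta * rng.standard_normal(X.shape)  # noise injected inside the gradient only
    # grad f(x) = mean_j grad k(x, X_j) - mean_m grad k(x, Y_m):
    # repulsion between particles, attraction towards the target samples.
    g = grad_k(Xe, X).mean(axis=1) - grad_k(Xe, Y).mean(axis=1)
    return X - gamma * g  # O((M + N) N) cost per iteration, as discussed above
```

Iterating `mmd_flow_step` with a decreasing schedule for `beta` mimics the noise schedule used in the experiments, while `beta = 0` recovers plain gradient descent on the sample-based MMD.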
"Construction of the gradient flow of the MMD Continuous time flow Existence and uniqueness of a solution to eq:continuitymmd,eq:mcKeanVlasovprocess is guaranteed under Lipschitz regularity of $\\nabla k$ .", "[Proof of prop:existenceuniqueness][Existence and uniqueness] Under assump:lipschitzgradientk, the map $(x,\\nu )\\mapsto \\nabla f_{\\mu ,\\nu }(x)=\\int \\nabla k(x,.", ")d \\nu - \\int \\nabla k(x,.)", "d \\mu $ is Lipschitz continuous on ${{\\mathcal {X}}}\\times \\mathcal {P}_2({{\\mathcal {X}}})$ (endowed with the product of the canonical metric on ${{\\mathcal {X}}}$ and $W_2$ on $\\mathcal {P}_2({{\\mathcal {X}}})$ ), see prop:gradwitnessfunction.", "Hence, we benefit from standard existence and uniqueness results for McKean-Vlasov processes (see ).", "Then, it is straightforward to verify that the distribution of eq:mcKeanVlasovprocess is a solution of eq:continuitymmd by Itô's formula (see ).", "The uniqueness of the gradient flow, given a starting distribution $\\nu _0$ , results from the $\\lambda $ -convexity of ${{\\mathcal {F}}}$ (for $\\lambda =3L$ ) which is given by lem:lambdaconvexitybis, and .", "The existence derives from the fact that the sub-differential of ${{\\mathcal {F}}}$ is single-valued, as stated by prop:differentialmmd, and that any $\\nu _0$ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is in the domain of ${{\\mathcal {F}}}$ .", "One can then apply .", "[Proof of prop:decaymmd][Decay of the MMD] Recalling the discussion in subsec:gradientflowsfunctionals, the time derivative of ${{\\mathcal {F}}}(\\nu _t)$ along the flow is formally given by eq:dissipationenergy.", "But we know from prop:differentialmmd that the strong differential $\\nabla \\frac{\\delta {{\\mathcal {F}}}(\\nu )}{\\delta \\nu }$ is given by $\\nabla f_{\\mu ,\\nu }$ .", "Therefore, one formally obtains the desired expression by exchanging the order of differentiation and integration, performing an integration by parts and using the continuity equation (see (REF )).", "We 
refer to for similar calculations.", "One can also obtain the same result directly using the energy identity in , which holds for $\\lambda $ -displacement convex functionals.", "The result applies here since, by lem:lambdaconvexitybis, we know that ${{\\mathcal {F}}}$ is $\\lambda $ -displacement convex with $\\lambda = 3L$ .", "Time-discretized flow We prove that eq:eulerscheme approximates eq:continuitymmd.", "To make the dependence on the step-size $\\gamma $ explicit, we will write: $\\nu _{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _n^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ (so $\\nu _n^{\\gamma }=\\nu _n$ for any $n \\ge 0$ ).", "We start by introducing an auxiliary sequence $\\bar{\\nu }_{n}^{\\gamma }$ built by iteratively applying $\\nabla f_{\\mu ,\\nu _{\\gamma n}}$ where $\\nu _{\\gamma n}$ is the solution of eq:continuitymmd at time $t= \\gamma n$ : $\\bar{\\nu }_{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}})_{\\#}\\bar{\\nu }_{n}^{\\gamma }$ with $\\bar{\\nu }_{0}=\\nu _{0}$ .", "Note that the latter sequence involves the continuous-time process $\\nu _t$ of eq:continuitymmd with $t=\\gamma n$ .", "Using $\\nu _n^{\\gamma }$ , we also consider the interpolation path $\\rho _{t}^{\\gamma }=(I-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ for all $t\\in [n\\gamma ,(n+1)\\gamma )$ and $n\\in \\mathbb {N}$ , which is the same as in prop:convergenceeulerscheme.", "[Proof of prop:convergenceeulerscheme] Let $\\pi $ be an optimal coupling between $\\nu _{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ , and $(x,y)$ a sample from $\\pi $ .", "For $t\\in [n\\gamma ,(n+1)\\gamma )$ we write $y_{t} =y_{n\\gamma }-\\int _{n\\gamma }^{t}\\nabla f_{\\mu ,\\nu _{u}}(y_u)\\mathop {}\\!\\mathrm {d}u$ and $x_{t} =x-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ where $y_{n\\gamma }= y$ .", "We also introduce the approximation error $ E(t,n\\gamma ):=y_{t}-y+(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y)$ 
for which we know by lem:Taylor-expansion that $\\mathcal {E}(t,n\\gamma ):=\\mathbb {E}[\\Vert E(t,n\\gamma )\\Vert ^2]^{\\frac{1}{2}}$ is upper-bounded by $(t-n\\gamma )^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "This allows us to write: $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t}) & \\le \\mathbb {E}\\left[\\left\\Vert y-x+(t-n\\gamma )(\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y))+E(t,n\\gamma )\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+4L(t-n\\gamma )W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+\\mathcal {E}(t,n\\gamma )\\\\& \\le (1+4\\gamma L)W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+(t-\\gamma n)^2C\\\\ &\\le (1+4\\gamma L)\\left( W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })+W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma }) \\right)+\\gamma ^{2}C \\\\& \\le \\gamma \\left[\\left(1+4\\gamma L\\right)M(T)+\\gamma C\\right]$ The second line is obtained using that $\\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ (see prop:gradwitnessfunction) and by the fact that $W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n}) = \\mathbb {E}_{\\pi }[\\Vert y-x\\Vert ^2]^{\\frac{1}{2}}$ .", "The third one is obtained using $t-n \\gamma \\le \\gamma $ .", "For the last inequality, we used lem:eulererror1,lem:eulererror2 where $M(T)$ is a constant that depends only on $T$ .", "Hence for $\\gamma \\le \\frac{1}{4L}$ we get $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t})\\le \\gamma (\\frac{C}{4L}+2M(T)).$ Lemma 10 For any $n\\ge 0$ : $W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{2L}(e^{n\\gamma 2L}-1)$ Let $\\pi $ be an optimal coupling between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ and $(\\bar{x},x)$ a joint sample from $\\pi $ .", "Consider also the joint sample $(\\bar{y},y)$ obtained from $(\\bar{x},x)$ by applying the gradient flow of ${{\\mathcal {F}}}$ in 
continuous time to get $y := x_{(n+1)\\gamma }=x_{n \\gamma }-\\int _{n\\gamma }^{(n+1)\\gamma }\\nabla f_{\\mu ,\\nu _{u}}(x_u)\\mathop {}\\!\\mathrm {d}u$ with $x_{n\\gamma } = x$ and by taking a discrete step from $\\bar{x}$ to write $\\bar{y}=\\bar{x}-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})$ .", "It is easy to see that $y\\sim \\nu _{\\gamma (n+1)}$ (i.e.", "a sample from the continuous process eq:continuitymmd at time $t=(n+1)\\gamma $ ) and $\\bar{y}\\sim \\bar{\\nu }_{n+1}^{\\gamma }$ (i.e.", "a sample from eq:intermedprocesstime).", "Moreover, we introduce the approximation error $E((n+1)\\gamma ,n\\gamma ):=y-x+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)$ for which we know by lem:Taylor-expansion that $\\mathcal {E}((n+1)\\gamma ,n\\gamma ):=\\mathbb {E}[\\Vert E((n+1)\\gamma ,n\\gamma )\\Vert ^2]^{\\frac{1}{2}}$ is upper-bounded by $\\gamma ^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "Denoting by $a_{n}=W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })$ , one can therefore write: $a_{n+1}\\le & \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)-\\bar{x}+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})+E((n+1)\\gamma ,n\\gamma )\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\\\le & \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}+\\gamma \\mathbb {E_{\\pi }}\\left[\\left\\Vert \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)-\\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}+\\gamma ^{2}C$ Using that $\\nabla f_{\\mu ,\\nu _{\\gamma n}}$ is $2L$ -Lipschitz by prop:gradwitnessfunction and recalling that $\\mathbb {E}_{\\pi }\\left[\\Vert x-\\bar{x}\\Vert ^{2}\\right]^{\\frac{1}{2}}=W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })$ , we get the recursive inequality $a_{n+1}\\le (1+2 \\gamma L)a_{n}+\\gamma ^{2}C$ .", "Finally, using lem:Discrete-Gronwall-lemma and recalling that $a_{0}=0$ 
, since by definition $\\bar{\\nu }_{0}^{\\gamma }=\\nu _{0}^{\\gamma }$ , we conclude that $a_{n}\\le \\gamma \\frac{C}{2L}(e^{n\\gamma 2L}-1)$ .", "Lemma 11 For any $T>0$ and $n$ such that $n\\gamma \\le T$ , $W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ Consider now an optimal coupling $\\pi $ between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "Similarly to lem:eulererror1, we denote by $(\\bar{x},x)$ a joint sample from $\\pi $ and $(\\bar{y},y)$ is obtained from $(\\bar{x},x)$ by applying the discrete updates: $\\bar{y}=\\bar{x}-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})$ and $y=x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ .", "We again have that $y\\sim \\nu _{n+1}^{\\gamma }$ (i.e.", "a sample from the time discretized process eq:eulerscheme) and $\\bar{y}\\sim \\bar{\\nu }_{n+1}^{\\gamma }$ (i.e.", "a sample from eq:intermedprocesstime).", "Now, denoting by $b_{n}=W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })$ , it is easy to see from the definition of $\\bar{y}$ and $y$ that we have: $b_{n+1} & \\le \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\bar{x}+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\&\\le (1+2\\gamma L) \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^2\\right]^{\\frac{1}{2}} + 2\\gamma L W_2(\\nu _n^{\\gamma },\\nu _{\\gamma n})\\\\& \\le (1+ 4\\gamma L)b_n + \\gamma L W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})$ The second line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is $2L$ -Lipschitz in both $x$ and $\\nu $ by prop:gradwitnessfunction.", "The third line follows by the triangle inequality and using $\\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^2\\right]^{\\frac{1}{2}}= W_2(\\nu _n^{\\gamma },\\bar{\\nu }_n^{\\gamma }) = b_n$ , since $\\pi $ is an optimal coupling between $\\bar{\\nu 
}_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "By lem:eulererror1, we have $W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})\\le \\gamma \\frac{C}{2L}(e^{2n\\gamma L}-1)$ , hence, for any $n$ such that $n\\gamma \\le T$ we get the recursive inequality $b_{n+1}\\le (1+4\\gamma L)b_{n}+(C/2L)\\gamma ^{2}(e^{2TL}-1).$ Finally, using again lem:Discrete-Gronwall-lemma, it follows that $b_{n}\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ .", "Lemma 12 [Taylor expansion] Consider the process $ \\dot{x}_t = - \\nabla f_{\\mu ,\\nu _t}(x_t) $ , and denote by $\\mathcal {E}(t,s) = \\mathbb {E}[ \\Vert x_t - x_s +(t-s)\\nabla f_{\\mu ,\\nu _s}(x_s) \\Vert ^2 ]^{\\frac{1}{2}} $ for $0\\le s \\le t \\le T$ .", "Then one has: $\\mathcal {E}(t,s)\\le 2L^2 r_0 e^{LT}(t-s)^2$ with $r_0 = \\mathbb {E}_{(x,z)\\sim \\nu _0 \\otimes \\mu }[\\Vert x-z \\Vert ]$ By definition of $x_t$ and $\\mathcal {E}(t,s)$ one can write: $\\mathcal {E}(t,s)&=\\mathbb {E}\\left[\\left\\Vert \\int _{s}^t (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u))\\mathop {}\\!\\mathrm {d}u \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\\\&\\le \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u)) \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le 2L\\int _{s}^t \\mathbb {E}\\left[(\\left\\Vert x_s - x_u \\right\\Vert + W_2(\\nu _s,\\nu _u))^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u \\le 4L\\int _{s}^t \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u$ Where we used an integral expression for $x_t$ in the first line then applied a triangular inequality for the second line.", "The last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ by prop:gradwitnessfunction and that $W_2(\\nu _s,\\nu _u) \\le \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}$ .", "Now we use 
again an integral expression for $x_u$ which further gives: $\\mathcal {E}(t,s) \\le & 4L \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert \\int _s^u \\nabla f_{\\mu ,\\nu _l}(x_l) \\mathop {}\\!\\mathrm {d}l \\right\\Vert ^2 \\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u\\\\\\le & 4L \\int _{s}^t \\int _s^u \\mathbb {E}\\left[ \\left\\Vert \\mathbb {E}\\left[ \\nabla _1 k(x_l,x_l^{\\prime }) - \\nabla _1 k(x_l,z) \\right] \\right\\Vert ^2 \\right]^\\frac{1}{2}\\mathop {}\\!\\mathrm {d}l\\mathop {}\\!\\mathrm {d}u\\\\\\le &4L^2 \\int _{s}^t \\int _s^u \\mathbb {E}\\left[\\left\\Vert x_l^{\\prime } - z \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u$ Again, the second line is obtained using a triangular inequality and recalling the expression of $\\nabla f_{\\mu ,\\nu }(x)$ from prop:gradwitnessfunction.", "The last line uses that $\\nabla k$ is $L$ -Lipschitz by assump:lipschitzgradientk.", "Now we need to make sure that $\\Vert x_l^{\\prime } - z \\Vert $ remains bounded at finite times.", "For this we will first show that $ r_t = \\mathbb {E}[\\Vert x_t - z \\Vert ]$ satisfies an integro-differential inequality: $r_t\\le & \\mathbb {E}\\left[\\left\\Vert x_0 - z -\\int _0^t \\nabla f_{\\mu ,\\nu _s}(x_s) \\mathop {}\\!\\mathrm {d}s \\right\\Vert \\right]\\\\\\le &r_0 +\\int _0^t \\mathbb {E}\\left[\\left\\Vert \\nabla _1 k(x_s,x_s^{\\prime })- \\nabla _1 k(x_s,z) \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}s\\le r_0 + L\\int _0^t r_s \\mathop {}\\!\\mathrm {d}s$ Again, we used an integral expression for $x_t$ in the first line, then a triangular inequality recalling the expression of $\\nabla f_{\\mu ,\\nu _s}$ .", "The last line uses again that $\\nabla k$ is $L$ -Lipschitz.", "By Gronwall's lemma it is easy to see that $r_t \\le r_0e^{Lt}$ at all times.", "Moreover, for all $t\\le T$ we have a fortiori that $r_t \\le r_0 e^{LT}$ .", "Recalling back the upper-bound on $\\mathcal {E}(t,s)$ we have finally: $\\mathcal 
{E}(t,s)\\le 4L^2 r_0 e^{LT} \\int _{s}^t \\int _s^u \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u = 2L^2 r_0 e^{LT}(t-s)^2$ We show now that eq:eulerscheme decreases the functional ${{\\mathcal {F}}}$ .", "In all the proofs, the step-size $\\gamma $ is fixed.", "[Proof of prop:decreasingfunctional] Consider a path between $\\nu _n$ and $\\nu _{n+1}$ of the form $\\rho _t =(I-\\gamma t\\nabla f_{\\mu ,\\nu _n})_{\\#}\\nu _n$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ Lipschitz, thus by lem:derivativemmdaugmented and using $\\phi (x) = -\\gamma \\nabla f_{\\mu ,\\nu _n}(x)$ , $\\psi (x) = x$ and $q = \\nu _n$ it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and hence absolutely continuous.", "Therefore one can write: $\\mathcal {F}(\\rho _1)-\\mathcal {F}(\\rho _0) = \\dot{\\mathcal {F}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)dt.$ Moreover, lem:derivativemmdaugmented also allows to write: $\\dot{\\mathcal {F}}(\\rho _0) = -\\gamma \\int \\Vert \\nabla f_{\\mu ,\\nu _n}(x) \\Vert ^2 d\\nu _n(x); \\qquad \\vert \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3L t\\gamma ^2 \\int \\Vert \\nabla f_{\\mu ,\\nu _n}(X) \\Vert ^2 d\\nu _n(X).$ where $t\\le 1$ .", "Hence, the result follows directly by applying the above expression to eq:taylorexpansiondecreasing.", "Convergence of the gradient flow of the MMD Equilibrium condition We discuss here the equilibrium condition eq:equilibriumcondition and relate it to .", "Recall that eq:equilibriumcondition is given by: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = 0$ .", "Under some mild assumptions on the kernel which are states in it is possible to write eq:equilibriumcondition as: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = \\langle f_{\\mu ,\\nu ^{*}} , D_{\\nu ^{*}} f_{\\mu 
,\\nu ^{*}}\\rangle _{{{\\mathcal {H}}}} = 0$ where $D_{\\nu ^{*}}$ is a Hilbert-Schmidt operator given by: $D_{\\nu ^{*}} = \\int \\sum _{i=1}^d \\partial _i k(x,.", ")\\otimes \\partial _i k(x,.)", "\\mathop {}\\!\\mathrm {d}\\nu ^{*}(x)$ Hence eq:equilibriumcondition is equivalent to say that $f_{\\mu ,\\nu ^{*}}$ belongs to the null space of $D_{\\nu ^{*}}$ .", "In , a similar equilibrium condition is derived by considering the time derivative of the MMD along the KSD gradient flow: $\\frac{1}{2} \\frac{d}{dt} MMD^2(\\mu ,\\nu _t) = - \\lambda \\langle f_{\\mu ,\\nu _t}, (\\frac{1}{\\lambda }I - (D_{\\nu _t} +\\lambda I )^{-1})f_{\\mu ,\\nu _t} \\rangle _{{{\\mathcal {H}}}}$ The r.h.s is shown to be always negative and thus the MMD decreases in time.", "Hence, as $t$ approaches $\\infty $ , the r.h.s tends to 0 since the MMD converges to some limit value $l$ .", "This provides the equilibrium condition: $\\lambda \\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu {*}} \\rangle _{{{\\mathcal {H}}}} = 0$ It is further shown in that the above equation is also equivalent to having $f_{\\mu ,\\nu ^{*}}$ in the null space of $D_{\\nu ^{*}}$ in the case when $D_{\\nu ^{*}}$ has finite dimensions.", "We generalize this statement to infinite dimension in prop:nullspacediffoperator.", "In , it is simply assumed that if $f_{\\mu ,\\nu ^{*}} \\ne 0$ then $D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}} \\ne 0 $ which exactly amounts to assuming that local optima which are not global don't exist.", "Proposition 13 $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu {*}} \\rangle _{{{\\mathcal {H}}}} = 0 \\iff f_{\\mu ,\\nu ^{*}} \\in null(D_{\\nu ^{*}})$ This follows simply by recalling $D_{\\nu ^{*}}$ is a symmetric non-negative Hilbert-Schmidt operator it has therefore an eigen-decomposition of the form: $D_{\\nu ^{*}} = \\sum _{i=1}^{\\infty } \\lambda _i e_i \\otimes e_i$ where 
$e_i$ is an orthonormal basis of ${{\mathcal {H}}}$ and $\lambda _i$ are non-negative.", "Moreover, $f_{\mu ,\nu ^{*}}$ can be decomposed in $(e_i)_{1\le i}$ in the form: $f_{\mu ,\nu ^{*}} = \sum _{i=0}^{\infty } \alpha _i e_i$ where $(\alpha _i)$ is a square-summable sequence.", "It follows that $\langle f_{\mu ,\nu ^{*}}, (\frac{1}{\lambda }I - (D_{\nu ^{*}} +\lambda I )^{-1})f_{\mu ,\nu ^{*}} \rangle _{{{\mathcal {H}}}}$ can be written as: $\langle f_{\mu ,\nu ^{*}}, (\frac{1}{\lambda }I - (D_{\nu ^{*}} +\lambda I )^{-1})f_{\mu ,\nu ^{*}} \rangle _{{{\mathcal {H}}}} = \sum _{i=1}^{\infty } \frac{\lambda _i}{\lambda _i+\lambda } \alpha _i^2$ Hence, if $f_{\mu ,\nu ^{*}}\in null(D_{\nu ^{*}})$ then $\langle f_{\mu ,\nu ^{*}}, D_{\nu ^{*}}f_{\mu ,\nu ^{*}}\rangle _{{{\mathcal {H}}}}= 0$ , so that $\sum _{i=1}^{\infty } \lambda _i \alpha _i^2 = 0$ .", "Since $\lambda _i$ are non-negative, this implies that $\lambda _i \alpha _i^2= 0$ for all $i$ .", "Therefore, it must be that $\langle f_{\mu ,\nu ^{*}}, (\frac{1}{\lambda }I - (D_{\nu ^{*}} +\lambda I )^{-1})f_{\mu ,\nu ^{*}} \rangle _{{{\mathcal {H}}}} = 0$ .", "Similarly, if $\langle f_{\mu ,\nu ^{*}}, (\frac{1}{\lambda }I - (D_{\nu ^{*}} +\lambda I )^{-1})f_{\mu ,\nu ^{*}} \rangle _{{{\mathcal {H}}}} =0 $ then $\frac{\lambda _i\alpha _i^2}{\lambda _i + \lambda } = 0$ for all $i$ , hence $\langle f_{\mu ,\nu ^{*}}, D_{\nu ^{*}} f_{\mu ,\nu ^{*}} \rangle _{{{\mathcal {H}}}} = 0$ .", "This means that $f_{\mu ,\nu ^{*}}$ belongs to $null(D_{\nu ^*})$ .", "$\Lambda $ -displacement convexity of the MMD We now provide a proof of prop:lambdaconvexity: [Proof of prop:lambdaconvexity][$\Lambda $ -displacement convexity of the MMD] To prove that $\nu \mapsto {{\mathcal {F}}}(\nu )$ is $\Lambda $ -convex we need to compute the second time derivative $\ddot{{{\mathcal {F}}}}(\rho _{t})$ where $(\rho _{t})_{t \in [0,1]}$ is a 
displacement geodesic between two probability distributions $\nu _{0}$ and $\nu _{1}$ as defined in eq:displacementgeodesic.", "Such a geodesic always exists and can be written as $\rho _t = (s_t)_{\#}\pi $ with $s_t = x + t(y-x)$ for all $t\in [0,1]$ and $\pi $ is an optimal coupling between $\nu _0$ and $\nu _1$ (, Theorem 5.27).", "We denote by $V_t$ the corresponding velocity vector as defined in eq:continuityequation.", "Recall that ${{\mathcal {F}}}(\rho _t) = \frac{1}{2} \Vert f_{\mu ,\rho _t}\Vert ^2_{\mathcal {H}}$ , with $f_{\mu ,\rho _t}$ defined in eq:witnessfunction.", "We start by computing the first derivative of $ t\mapsto {{\mathcal {F}}}(\rho _t) $ .", "Since assump:diffkernel,assump:lipschitzgradientk hold, lem:secondderivativeaugmentedmmd applies for $\phi (x,y) = y-x$ , $\psi (x,y) = x$ and $q = \pi $ , thus we know that $\ddot{{{\mathcal {F}}}}(\rho _t)$ is well defined and given by: $\begin{split}\ddot{{{\mathcal {F}}}}(\rho _t) =&\mathbb {E}\left[ (y-x)^T\nabla _1 \nabla _2 k(s_t(x,y),s_t(x^{\prime },y^{\prime }))(y^{\prime }-x^{\prime })\right]\\&+ \mathbb {E}\left[ (y-x)^T( H_1 k(s_t(x,y),s_t(x^{\prime },y^{\prime }))-H_1 k(s_t(x,y),z))(y-x)\right]\end{split}$ Moreover, assump:boundedfourthoder also holds, which means by lem:secondderivativeaugmentedmmd that the second term in eq:hessian can be lower-bounded by $-\sqrt{2}\lambda d{{\mathcal {F}}}(\rho _t)^{\frac{1}{2}}\mathbb {E}[ \Vert y-x \Vert ^2]$ so that: $\ddot{{{\mathcal {F}}}}(\rho _t) \ge \mathbb {E}\left[ (y-x)^T\nabla _1 \nabla _2 k(s_t(x,y),s_t(x^{\prime },y^{\prime }))(y^{\prime }-x^{\prime })\right] - \sqrt{2}\lambda d{{\mathcal {F}}}(\rho _t)^{\frac{1}{2}} \mathbb {E}[ \Vert y-x \Vert ^2]$ Recall now that $(\rho _t)_{t \in [0,1]}$ is a constant speed geodesic with velocity vector $(V_t)_{t\in [0,1]}$ , thus by a change of variable, one further has: $\ddot{{{\mathcal {F}}}}(\rho _t) \ge \int \left[ V_t^T(x)\nabla _1 \nabla _2 k(x,x^{\prime })V_t(x^{\prime })\right]\mathop {}\!\mathrm {d}\rho _t(x) - \sqrt{2}\lambda d{{\mathcal {F}}}(\rho _t)^{\frac{1}{2}} \int \Vert V_t(x) \Vert ^2 \mathop {}\!\mathrm {d}\rho _t(x).$ Now we can introduce the function $\Lambda (\rho ,v) = \langle v ,( C_{\rho } -\sqrt{2}\lambda d {{\mathcal {F}}}(\rho )^{\frac{1}{2}} I) v \rangle _{L_2(\rho )}$ which is defined for any pair $(\rho ,v)$ with $\rho \in \mathcal {P}_2({{\mathcal {X}}})$ and $v$ a square-integrable vector field in $L_2(\rho )$ and where $C_{\rho }$ is a non-negative operator given by $(C_{\rho }v)(x)=\int \nabla _{x}\nabla _{x^{\prime }}k(x,x^{\prime })v(x^{\prime })d\rho (x^{\prime })$ for any $x \in {{\mathcal {X}}}$ .", "This allows us to write $\ddot{{{\mathcal {F}}}}(\rho _t) \ge \Lambda (\rho _t,V_t)$ .", "It is clear that $\Lambda (\rho ,.)$ is a quadratic form on $L_2(\rho )$ and satisfies the requirement in def:conditionslambda.", "Finally, using lem:integrallambdaconvexity and def:lambda-convexity we conclude that ${{\mathcal {F}}}$ is $\Lambda $ -convex.", "Moreover, by the reproducing property we also know that for all $\rho \in \mathcal {P}_2({{\mathcal {X}}})$ : $ \mathbb {E}_{\rho }\left[ v(x)^T \nabla _1 \nabla _2 k(x,x^{\prime }) v(x^{\prime }) \right] = \mathbb {E}_{\rho }\left[\left\langle v(x)^T \nabla _1 k(x,.), v(x^{\prime })^T \nabla _1k(x^{\prime },.)\right\rangle _{{{\mathcal {H}}}}\right].$ By Bochner integrability of $v(x)^T \nabla _1 k(x,.)$ it is possible to exchange the order of the integral and the inner-product .", "This leads to the expression $\Vert \mathbb {E}[v(x)^T \nabla _1 k(x,.)]\Vert ^2_{{{\mathcal {H}}}}$ .", "Hence $\Lambda (\rho ,v)$ has a second expression of the form: $\Lambda (\rho ,v) = \left\Vert \mathbb {E}_{\rho }\left[v(x)^T \nabla _1 k(x,.)\right]\right\Vert ^2_{{{\mathcal {H}}}} - \sqrt{2}\lambda d {{\mathcal {F}}}(\rho 
)^{\\frac{1}{2}}\\mathbb {E}_{\\rho }\\left[\\left\\Vert v(x)\\right\\Vert ^2 \\right].$ We also provide a result showing $\\Lambda $ convexity for ${{\\mathcal {F}}}$ only under assump:lipschitzgradientk: Lemma 14 ($\\Lambda $ -displacement convexity) Under assump:lipschitzgradientk, for any $\\nu ,\\nu ^{\\prime }\\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any constant speed geodesic $\\rho _t$ from $\\nu $ to $\\nu ^{\\prime }$ , ${{\\mathcal {F}}}$ satisfies for all $0\\le t\\le 1$ : ${{\\mathcal {F}}}(\\rho _t) \\le (1-t){{\\mathcal {F}}}(\\nu ) + t{{\\mathcal {F}}}(\\nu ^{\\prime }) + 3L W_2^2(\\nu ,\\nu ^{\\prime }) \\qquad $ Let $\\rho _t$ be a constant speed geodesic of the form $\\rho _t = s_t{\\#}\\pi $ where $\\pi $ is an optimal coupling between $\\nu $ and $\\nu ^{\\prime }$ and $s_t(x,y) = x + t(y-x)$ .", "Since assump:lipschitzgradientk holds, one can apply lem:derivativemmdaugmented with $\\psi (x,y) =x $ , $\\phi (x,y)= y-x$ and $q = \\pi $ .", "Hence, one has that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and its differential satisfies: $\\vert \\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\vert \\le 3L\\vert t-s \\vert \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ This implies that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is Lipschitz continuous and therefore is differentiable for almost all $t\\in [0,1]$ by Rademacher's theorem.", "Hence, $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ is well defined for almost all $t\\in [0,1]$ .", "Moreover, from the above inequality it follows that $\\ddot{{{\\mathcal {F}}}}(\\rho _t)\\ge - 3L \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y) = -3LW_2^2(\\nu ,\\nu ^{\\prime })$ for almost all $t\\in [0,1]$ .", "Using lem:integrallambdaconvexity it follows directly that ${{\\mathcal {F}}}$ satisfies the desired inequality.", "Descent up to a barrier To provide a proof of th:ratesmmd, we need the following preliminary results.", "Firstly, an upper-bound on a 
scalar product involving $\\nabla f_{\\mu , \\nu }$ for any $\\mu , \\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ in terms of the loss functional ${{\\mathcal {F}}}$ , is obtained using the $\\Lambda $ -displacement convexity of ${{\\mathcal {F}}}$ in lem:gradflowlambdaversion.", "Then, an EVI (Evolution Variational Inequality) is obtained in prop:evi on the gradient flow of ${{\\mathcal {F}}}$ in $W_2$ .", "The proof of the theorem is given afterwards.", "Lemma 15 Let $\\nu $ be a distribution in $\\mathcal {P}_2({{\\mathcal {X}}})$ and $\\mu $ the target distribution such that ${{\\mathcal {F}}}(\\mu )=0$ .", "Let $\\pi $ be an optimal coupling between $\\nu $ and $\\mu $ , and $(\\rho _t)_{t \\in [0,1]}$ the displacement geodesic defined by eq:displacementgeodesic with its corresponding velocity vector $(V_t)_{t\\in [0,1]}$ as defined in eq:continuityequation.", "Finally let $\\nabla f_{\\nu ,\\mu }(X)$ be the gradient of the unnormalised witness function between $\\mu $ and $\\nu $ .", "The following inequality holds: $\\int \\nabla f_{\\mu , \\nu }(x).", "(y-x) d\\pi (x,y)\\le {{\\mathcal {F}}}(\\mu )- {{\\mathcal {F}}}(\\nu ) -\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds$ where $\\Lambda $ is defined prop:lambdaconvexity.", "Recall that for all $t\\in [0,1]$ , $\\rho _t$ is given by $\\rho _t = (s_t)_{\\#}\\pi $ with $s_t = x + t(y-x)$ .", "By $\\Lambda $ -convexity of $\\mathcal {F}$ the following inequality holds: $\\mathcal {F}(\\rho _{t})\\le (1-t)\\mathcal {F}(\\nu )+t \\mathcal {F}(\\mu ) - \\int _0^1 \\Lambda (\\rho _s,V_s)G(s,t)ds$ Hence by bringing $\\mathcal {F}(\\nu )$ to the l.h.s and dividing by $t$ and then taking its limit at 0 it follows that: $\\dot{{{\\mathcal {F}}}}(\\rho _t)\\vert _{t=0}\\le \\mathcal {F}(\\mu )-\\mathcal {F}(\\nu )-\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds.", "$ where $\\dot{{{\\mathcal {F}}}}(\\rho _t)=d{{\\mathcal {F}}}(\\rho _t)/dt$ and since $\\lim _{t \\rightarrow 0}G(s,t)=(1-s)$ .", "Moreover, under 
assump:lipschitzgradientk, lem:derivativemmdaugmented applies for $\phi (x,y) = y-x$ , $\psi (x,y)= x$ and $q = \pi $ .", "It follows therefore that $\dot{{{\mathcal {F}}}}(\rho _t)$ is differentiable with time derivative given by: $\dot{{{\mathcal {F}}}}(\rho _t) = \int \nabla f_{\mu ,\rho _t}(s_t(x,y)).(y-x)\mathop {}\!\mathrm {d}\pi (x,y)$ .", "Hence at $t=0$ we get: $\dot{{{\mathcal {F}}}}(\rho _t)\vert _{t=0} = \int \nabla f_{\mu ,\nu }(x).(y-x)\mathop {}\!\mathrm {d}\pi (x,y)$ which shows the desired result when used in eq:firstorderlambda.", "Proposition 16 Consider the sequence of distributions $\nu _n$ obtained from eq:eulerscheme.", "For $n\ge 0$ , consider the scalar $ K(\rho ^n) := \int _0^1\Lambda (\rho _s^n,V_s^n)(1-s)\mathop {}\!\mathrm {d}s$ where $(\rho _s^n)_{0\le s\le 1}$ is a constant speed displacement geodesic from $\nu _n$ to the optimal value $\mu $ with velocity vectors $(V_s^n)_{0\le s\le 1}$ .", "If $\gamma \le 1/(3L)$ , where $L$ is the Lipschitz constant of $\nabla k$ in assump:lipschitzgradientk, then: $2\gamma ({{\mathcal {F}}}(\nu _{n+1})-{{\mathcal {F}}}(\mu ))\le W_2^2(\nu _n,\mu )-W_2^2(\nu _{n+1},\mu )-2\gamma K(\rho ^n).$ Let $\pi ^n$ be the optimal coupling between $\nu _n$ and $\mu $ ; then the squared Wasserstein distance between $\nu _n$ and $\mu $ is given by: $W_2^2(\mu ,\nu _n)=\int \Vert X-Y \Vert ^2 d\pi ^n(X,Y)$ Moreover, consider $Z=X-\gamma \nabla f_{\mu , \nu _n}(X)$ where $(X,Y)$ are samples from $\pi ^n$ .", "It is easy to see that $(Z,Y)$ is a coupling between $\nu _{n+1}$ and $\mu $ , therefore, by definition of the optimal transport cost between $\nu _{n+1}$ and $\mu $ it follows that: $W_2^2(\nu _{n+1},\mu )\le \int \Vert X-\gamma \nabla f_{\mu , \nu _n}(X)-Y\Vert ^2 d\pi ^n(X,Y)$ By expanding the r.h.s. in eq:optimalupper-bound, the following inequality holds: $W_2^2(\nu _{n+1},\mu )\le W_2^2(\nu 
_{n},\\mu ) -2\\gamma \\int \\langle \\nabla f_{\\mu , \\nu _n}(X), X-Y \\rangle d\\pi ^n(\\nu _n,\\mu )+ \\gamma ^2D(\\nu _n)$ where $D(\\nu _n) = \\int \\Vert \\nabla f_{\\mu , \\nu _n}(X)\\Vert ^2 d\\nu _n $ .", "By lem:gradflowlambdaversion it holds that: $-2\\gamma \\int \\nabla f_{\\mu , \\nu _n}(X).", "(X-Y) d\\pi (\\nu ,\\mu )\\le -2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right)$ where $(\\rho ^n_t)_{0\\le t \\le 1}$ is a constant-speed geodesic from $\\nu _n$ to $\\mu $ and $K(\\rho ^n):=\\int _0^1 \\Lambda (\\rho ^n_s,v^n_s)(1-s)ds$ .", "Note that when $K(\\rho ^n)\\le 0$ it falls back to the convex setting.", "Therefore, the following inequality holds: $W_2^2(\\nu _{n+1},\\mu )\\le W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) +\\gamma ^2 D(\\nu _n)$ Now we introduce a term involving ${{\\mathcal {F}}}(\\nu _{n+1})$ .", "The above inequality becomes: $W_2^2(\\nu _{n+1},\\mu )\\le & W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal {F}}}(\\nu _{n+1})- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) \\\\&+\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1}))$ It is possible to upper-bound the last two terms on the r.h.s.", "by a negative quantity when the step-size is small enough.", "This is mainly a consequence of the smoothness of the functional ${{\\mathcal {F}}}$ and the fact that $\\nu _{n+1}$ is obtained by following the steepest direction of ${{\\mathcal {F}}}$ starting from $\\nu _n$ .", "prop:decreasingfunctional makes this statement more precise and enables to get the following inequality: $\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1})\\le -\\gamma ^2 (1-3\\gamma L)D(\\nu _n),$ where $L$ is the Lispchitz constant of $\\nabla k$ .", "Combining eq:mainineq2 and eq:decreasingfunctional we finally get: $2\\gamma ({{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal 
{F}}}(\\mu ))+\\gamma ^2(1-3\\gamma L)D(\\nu _n)\\le W_2^2(\\nu _n,\\mu )-W_2^2(\\nu _{n+1},\\mu )-2\\gamma K(\\rho ^n).$ and under the condition $\\gamma \\le 1/(3L)$ we recover the desired result.", "We can now give the proof of the th:ratesmmd.", "[Proof of th:ratesmmd] Consider the Lyapunov function $L_j = j \\gamma ({{\\mathcal {F}}}(\\nu _j) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )$ for any iteration $j$ .", "At iteration $j+1$ , we have: $L_{j+1} &= j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _{j+1},\\mu )\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j)\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j) -j\\gamma ^2 (1-\\frac{3}{2} \\gamma L )\\int \\Vert \\nabla f_{\\mu , \\nu _j}(X)\\Vert ^2 d\\nu _j \\\\&\\le L_j - \\gamma K(\\rho ^j).$ where we used  prop:evi and prop:decreasingfunctional successively for the two first inequalities.", "We thus get by telescopic summation: $L_n \\le L_0 -\\gamma \\sum _{j = 0}^{n-1} K(\\rho ^j)$ Let us denote $\\bar{K}$ the average value of $(K(\\rho ^j))_{0\\le j \\le n}$ over iterations up to $n$ .", "We can now write the final result: ${{\\mathcal {F}}}(\\nu _{n}) - {{\\mathcal {F}}}(\\mu ) \\le \\frac{W_2^2(\\nu _0, \\mu )}{2 \\gamma n} -\\bar{K}$ Lojasiewicz type inequalities Given a probability distribution $\\nu $ , the weighted Sobolev semi-norm is defined for all squared integrable functions $f$ in $L_2(\\nu )$ as $ \\Vert f \\Vert _{\\dot{H}(\\nu )} = \\left(\\int \\left\\Vert \\nabla f(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}}$ with the convention $\\Vert f \\Vert _{\\dot{H}(\\nu )} = +\\infty $ if $f$ does not have a square integrable gradient.", "The Negative weighted 
Sobolev distance $ \\Vert .", "\\Vert _{\\dot{H}^{-1}(\\nu )} $ is then defined on distributions as the dual norm of $ \\Vert .\\Vert _{\\dot{H}(\\nu )} $ .", "For convenience, we recall the definition of $ \\Vert .", "\\Vert _{\\dot{H}^{-1}(\\nu )} $ : Definition 5 Let $\\nu \\in \\mathcal {P}_2({\\mathbf {x}})$ , with its corresponding weighted Sobolev semi-norm $ \\Vert .", "\\Vert _{\\dot{H}(\\nu )} $ .", "The weighted negative Sobolev distance $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )}$ between any $p$ and $q$ in $\\mathcal {P}_2({\\mathbf {x}})$ is defined as $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )} = \\sup _{f\\in L_2(\\nu ), \\Vert f \\Vert _{\\dot{H}(\\nu )} \\le 1 } \\left|\\int f(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int f(x)\\mathop {}\\!\\mathrm {d}q(x) \\right|$ with possibly infinite values.", "There are several possible choices for the set of test functions $f$ .", "While it is often required that $f$ vanishes at the boundary (see ), we do not make such restriction and rather use the definition from .", "We refer to for more discussion on the relationship between different choices for the set of test functions.", "We provide now a proof for prop:lojasiewicz.", "[Proof of prop:lojasiewicz] This proof follows simply from the definition of the negative Sobolev distance.", "Under assump:lipschitzgradientk, the kernel has at most quadratic growth hence, for any $\\mu ,\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})^2$ , $f_{\\mu ,\\nu }\\in L_2(\\nu )$ .", "Consider $g = \\Vert f_{\\mu , \\nu _t}\\Vert ^{-1}_{\\dot{H}(\\nu _t)} f_{\\mu , \\nu _t}$ , then $g\\in L_2(\\nu _t)$ and $\\Vert g \\Vert _{\\dot{H}(\\nu _t)}\\le 1$ .", "Therefore, we directly have: $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop {}\\!\\mathrm {d}\\mu \\right|\\le \\left\\Vert \\nu _t - \\mu \\right\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ Now, recall the definition of $g$ , which implies that $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop 
{}\\!\\mathrm {d}\\mu \\right|= \\left\\Vert \\nabla f_{\\mu , \\nu _t}\\right\\Vert ^{-1}_{L_2(\\nu _t)} \\left|\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu , \\nu _t} \\mathop {}\\!\\mathrm {d}\\mu \\right|.$ Moreover, we have that $\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu ,\\nu _t}\\mathop {}\\!\\mathrm {d}\\mu = \\Vert f_{\\mu , \\nu _t}\\Vert ^2_{{{\\mathcal {H}}}}$ , since $f_{\\mu , \\nu _t}$ is the unnormalised witness function between $\\nu _t$ and $\\mu $ .", "Combining eq:loja1 and eq:loja2 we thus get the desired Lojasiewicz inequality on $f_{\\mu ,\\nu _t}$ : $\\Vert f_{\\mu ,\\nu _t} \\Vert ^2_{\\mathcal {H}} \\le \\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)} \\Vert \\mu -\\nu _t\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ where $\\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)}=\\Vert \\nabla f_{\\mu , \\nu _t} \\Vert _{L_2(\\nu _t)}$ by definition.", "Then, using prop:decaymmd and recalling by assumption that: $\\Vert \\mu - \\nu _t \\Vert ^2_{\\dot{H}^{-1}(\\nu _t)} \\le C$ , we have: $\\dot{{{\\mathcal {F}}}}(\\nu _t) = - \\Vert \\nabla f_{\\mu , \\nu _t} \\Vert ^2_{L_2(\\nu _t)} \\le -\\frac{1}{C}\\Vert f_{\\mu ,\\nu _t} \\Vert ^4_{\\mathcal {H}}= -\\frac{4}{C}{{\\mathcal {F}}}(\\nu _t)^2 $ It is clear that if $\\mathcal {F}(\\nu _0)>0$ then ${{\\mathcal {F}}}(\\nu _t)>0$ at all times by uniqueness of the solution.", "Hence, one can divide by ${{\\mathcal {F}}}(\\nu _t)^2$ and integrate the inequality from 0 to some time $t$ .", "The desired inequality is obtained by simple calculations.", "Then, using prop:decreasingfunctional and eq:PLinequality where $\\nu _t$ is replaced by $\\nu _n$ it follows: ${{\\mathcal {F}}}(\\nu _{n+1}) - {{\\mathcal {F}}}(\\nu _n) \\le -\\gamma \\left(1-\\frac{3}{2} L\\gamma \\right)\\Vert \\nabla f_{\\mu ,\\nu _n}\\Vert _{L_2(\\nu _n)}^2 \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right){{\\mathcal {F}}}(\\nu _n)^2.$ Dividing by both sides of the 
inequality by $ {{\\mathcal {F}}}(\\nu _n){{\\mathcal {F}}}(\\nu _{n+1})$ and recalling that ${{\\mathcal {F}}}(\\nu _{n+1})\\le {{\\mathcal {F}}}(\\nu _n)$ it follows directly that: $\\frac{1}{{{\\mathcal {F}}}(\\nu _n)} - \\frac{1}{{{\\mathcal {F}}}(\\nu _{n+1})} \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right).$ The proof is concluded by summing over $n$ and rearranging the terms.", "A simple example Consider a gaussian target distribution $\\mu (x) = \\mathcal {N}(a,\\Sigma ) $ and initial distribution $\\nu _0 = \\mathcal {N}(a_0,\\Sigma _0)$ .", "In this case it is sufficient to use a kernel that captures the first and second moments of the distribution.", "We simply consider a kernel of the form $k(x,y)= (x^{\\top }y)^2 + x^{\\top }y$ .", "In this case, it is easy to see by simple computations that the following equation holds: $\\dot{X}_t = - (\\Sigma _t-\\Sigma + a_t a_t^{\\top }-a a^{\\top } )X_t - (a_t-a),\\qquad \\forall t \\ge 0$ Where $a_t$ and $\\Sigma _t$ are the mean and covariance matrix of $\\nu _t$ and satisfy the equations: $\\dot{\\Sigma }_t &= - (S_t \\Sigma _t + \\Sigma _t S_t )\\\\\\dot{a}_t &= - S_t a_t -(a_t-a).$ Where we introduced $S_t = \\Sigma _t-\\Sigma + a_t a_t^{\\top }-aa^{\\top }$ for simplicity.", "eq:example1mckeanvlassov implies that $\\nu _t$ is in fact a gaussian distribution since $X_t$ is obtained by summing gaussian increments.", "The same conclusion can be reached by solving the corresponding continuity equation.", "Thus we will be only interested in the behavior of $a_t$ and $\\Sigma _t$ .", "First we can express the squared MMD in terms of those parameters: $MMD^2(\\mu ,\\nu _t) = \\Vert S_t \\Vert ^2 + \\Vert a_t-a \\Vert ^2.$ Since $a_t$ and $\\Sigma _t$ are obtained from the gradient flow of the MMD, it follows that $\\Vert a_t-a \\Vert ^2$ and $\\Vert S_t \\Vert ^2$ remain bounded.", "Moreover, the Negative Sobolev distance is obtained by solving a finite dimensional quadratic problem and can be 
simply written as: $D(\mu ,\nu _t) = tr(Q_t \Sigma _t Q_t) + \Vert a_t-a\Vert ^2$ where $Q_t$ is the unique solution of the Lyapunov equation: $\Sigma _t Q_t + Q_t \Sigma _t = \Sigma _t- \Sigma + (a_t-a)(a_t-a)^{\top }:=G_t.$ We first consider the one-dimensional case, for which eq:Lyapounov has a particularly simple solution and yields a closed-form expression for the negative Sobolev distance: $Q_t= \frac{G_t}{2\Sigma _t}, \qquad D(\mu ,\nu _t) = \frac{G_t^2}{4\Sigma _t} + (a_t-a)^2.$ Recalling eq:example1MMD and that $MMD^2(\mu ,\nu _t)$ is bounded at all times by definition of $\nu _t$ , it follows that both $G_t$ and $a_t-a$ are also bounded.", "Hence, it is easy to see that $D(\mu ,\nu _t)$ will remain bounded iff $\Sigma _t$ remains bounded away from 0.", "This analysis generalizes to higher dimensions using , which provides an expression for $Q_t$ in terms of $G_t$ and the singular value decomposition of $\Sigma _t = U_t D_t U_t^{\top }$ : $Q_t = U_t \left( \left(\frac{1}{(D_t)_i + (D_t)_j }\right)\odot U_t^{\top } G_t U_t\right) U_t^{\top }.$ Here, $\odot $ denotes the Hadamard product of matrices.", "It is easy to see from this expression that $D(\mu ,\nu _t)$ will be bounded if all singular values $((D_t)_i)_{1\le i \le d}$ of $\Sigma _t$ remain bounded away from 0.", "Lojasiewicz-type inequalities for ${{\mathcal {F}}}$ under different metrics The Wasserstein gradient flow of ${{\mathcal {F}}}$ can be seen as the continuous-time limit of the so-called minimizing movement scheme .", "Such a proximal scheme is defined using an initial distribution $\nu _0$ , a step-size $\tau $ , and an iterative update equation: $\nu _{n+1} \in \arg \min _{\nu } {{\mathcal {F}}}(\nu ) + \frac{1}{2\tau } W_2^2(\nu ,\nu _n).$ In , it is shown that the continuity equation $\partial _t \nu _t = div(\nu _t \nabla f_{\mu ,\nu _t})$ can be obtained as the limit when $\tau \rightarrow 0$ of 
eq:minimizingmovementscheme using suitable interpolations between the elements $\\nu _n$ .", "In , a different transport equation that includes a birth-death term is considered: $\\partial _t \\nu _t = \\beta div(\\nu _t \\nabla f_{\\mu ,\\nu _t}) + \\alpha (f_{\\mu ,\\nu _t} - \\int f_{\\mu ,\\nu _t}(x)\\mathop {}\\!\\mathrm {d}\\nu _t(x) )\\nu _t$ When $\\beta =0$ and $\\alpha =1$ , it is shown formally in that the above dynamics corresponds to the limit of a proximal scheme using the KL instead of the Wasserstein distance.", "For general $\\beta $ and $\\alpha $ , eq:birthdeath corresponds to the limit of a different proximal scheme where $W_2^2(\\nu ,\\nu _n)$ is replaced by the Wasserstein-Fisher-Rao distance $d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ (see , , ).", "$d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ is an interpolation between the squared Wasserstein distance ($\\beta =1$ and $\\alpha =0$ ) and the squared Fisher-Rao distance as defined in ($\\beta =0$ and $\\alpha = 1$ ).", "Such scheme is consistent with the one proposed in and which uses the $KL$ .", "In fact, as we will show later, both the $KL$ and the Fisher-Rao distance have the same local behavior therefore both proximal schemes are expected to be equivalent in the limit when $\\tau \\rightarrow 0$ .", "Under eq:birthdeath, the time evolution of ${{\\mathcal {F}}}$ is given by : $\\dot{{{\\mathcal {F}}}}(\\nu _t) = -\\beta \\int \\Vert \\nabla f_{\\mu ,\\nu _t}\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _t(x) -\\alpha \\int \\left|f_{\\mu ,\\nu _t}(x)-\\int f_{\\mu ,\\nu _t}(x^{\\prime })\\mathop {}\\!\\mathrm {d}\\nu _t(x^{\\prime })\\right|^2\\mathop {}\\!\\mathrm {d}\\nu _t(x)$ We would like to apply the same approach as in sec:Lojasiewiczinequality to provide a condition on the convergence of eq:birthdeath.", "Hence we first introduce an analogue to the Negative Sobolev distance in def:negsobolev by duality: $D_{\\nu }(p,q) =\\sup _{\\begin{array}{c}g\\in L_2(\\nu )\\\\ \\beta \\Vert \\nabla g 
\\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert g- \\bar{g} \\Vert ^2_{L_2(\\nu )} \\le 1 \\end{array}} \\left|\\int g(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int g(x) \\mathop {}\\!\\mathrm {d}q(x)\\right|$ where $\\bar{g}$ is simply the expectation of $g$ under $\\nu $ .", "Such quantity defines a distance, since it is the dual of a semi-norm.", "Now using the particular structure of the MMD, we recall that $f_{\\mu ,\\nu }\\in L_2(\\nu )$ and that $\\beta \\Vert \\nabla f \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f- \\bar{f} \\Vert ^2_{L_2(\\nu )}<\\infty $ .", "Hence for a particular $g$ of the form: $g = \\frac{f_{\\mu ,\\nu }}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} \\right)^\\frac{1}{2}}$ the following inequality holds: $D_{\\nu }(\\mu ,\\nu ) \\ge \\frac{\\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right)^{\\frac{1}{2}} }.$ But since $f_{\\mu ,\\nu }$ is the unnormalised witness function between $\\mu $ and $\\nu $ we have that $2{{\\mathcal {F}}}(\\nu ) = \\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|$ .", "Hence one can write that: $D^2_{\\nu }(\\mu ,\\nu )\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right) \\ge 4{{\\mathcal {F}}}^2(\\nu )$ Now provided that $D^2_{\\nu }(\\mu ,\\nu _t)$ remains bounded at all time $t$ by some constant $C>0$ one can easily deduce a rate of convergence for ${{\\mathcal {F}}}(\\nu _t)$ just as in prop:lojasiewicz.", "In fact, in the case when $\\beta = 1$ and $\\alpha =0$ one recovers prop:lojasiewicz.", "Another 
interesting case is when $\beta =0$ and $\alpha =1$ .", "In this case, $D_{\nu }(p,q)$ is defined for $p$ and $q$ such that the difference $p-q$ is absolutely continuous w.r.t.", "$\nu $ .", "Moreover, $D_{\nu }(p,q)$ has the simple expression: $D^2_{\nu }(p,q) = \int \left(\frac{p-q }{\nu }(x)\right)^2 \mathop {}\!\mathrm {d}\nu (x)$ where $\frac{ p-q }{ \nu }$ denotes the Radon-Nikodym density of $p-q$ w.r.t.", "$\nu $ .", "More importantly, $D_{\nu }(\mu ,\nu )$ is exactly equal to $\chi ^2(\mu \Vert \nu )^{\frac{1}{2}}$ .", "As we will show now, $(\chi ^2)^{\frac{1}{2}}$ turns out to be a linearization of $\sqrt{2} KL^{\frac{1}{2}}$ and the Fisher-Rao distance.", "Linearization of the KL and the Fisher-Rao distance.", "We first show the result for the KL.", "Given a probability distribution $\nu ^{\prime }$ that is absolutely continuous w.r.t. $\nu $ and for $0<\epsilon < 1$ , denote by $G(\epsilon ) := KL(\nu \Vert \nu +\epsilon (\nu ^{\prime }-\nu ))$ .", "It can be shown that $G(\epsilon ) = \frac{1}{2}\chi ^2(\nu ^{\prime }\Vert \nu )\epsilon ^2 +o(\epsilon ^2)$ .", "To see this, one needs to perform a second-order Taylor expansion of $G(\epsilon )$ at $\epsilon =0$ .", "Exchanging the derivatives and the integral, $\dot{G}(\epsilon )$ and $\ddot{G}(\epsilon )$ are given by: $\dot{G}(\epsilon ) = -\int \frac{\nu ^{\prime }-\nu }{\nu +\epsilon (\nu ^{\prime }-\nu )}\mathop {}\!\mathrm {d}\nu \\\ddot{G}(\epsilon ) = \int \frac{(\nu -\nu ^{\prime })^2}{(\nu +\epsilon (\nu ^{\prime }-\nu ))^2} \mathop {}\!\mathrm {d}\nu $ Hence, we have for $\epsilon =0$ : $\dot{G}(0) = 0$ and $\ddot{G}(0) = \chi ^2(\nu ^{\prime }\Vert \nu )$ .", "Therefore, it follows: $G(\epsilon ) =\frac{1}{2} \chi ^2(\nu ^{\prime }\Vert \nu ) \epsilon ^2 + o(\epsilon ^2)$ , which means that $\lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon }\left[2KL\left(\nu \Vert \nu +\epsilon (\nu ^{\prime }-\nu ) \right) 
\\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$ The same approach can be used for the Fisher-Rao distance $d_{0,1}(\\nu ,\\nu ^{\\prime })$ .", "From we have that: $d^2_{0,1}(\\nu ,\\nu ^{\\prime }) = 2\\int (\\sqrt{\\nu (x)}-\\sqrt{\\nu ^{\\prime }(x)})^2\\mathop {}\\!\\mathrm {d}x$ where $\\nu $ and $\\nu ^{\\prime }$ are assumed to have a density w.r.t.", "Lebesgue measure.", "Using the exact same approach as for the KL one easily show that $ \\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }\\left[2d^2_{0,1}\\left(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) \\right) \\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$ Linearization of the $W_2$ .", "Similarly, it can be shown that the Negative weighted Sobolev distance is a linearization of the $W_2$ under suitable conditions.", "We recall here which relates the two quantities: Theorem 17 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t the Lebesgue measure and let $h\\in L^{\\infty }({{\\mathcal {X}}})$ with $\\int h(x)\\mathop {}\\!\\mathrm {d}\\nu (x)=0$ .", "Then $\\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h )\\nu ).$ thm:villani implies that for any probability distribution $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "to $\\nu $ one has: $\\Vert \\nu ^{\\prime }-\\nu \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )).$ To get the converse inequality, one needs to assume that the support of $\\nu $ is ${{\\mathcal {X}}}$ .", "prop:conversesobolevwasserstein provides such inequality and uses techniques from .", "Proposition 18 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t the Lebesgue 
measure with support equal to ${{\mathcal {X}}}$ and let $h\in L^{\infty }({{\mathcal {X}}})$ with $\int h(x)\mathop {}\!\mathrm {d}\nu (x)=0$ and $1+h\ge 0$ .", "Then $\lim \sup _{\epsilon \rightarrow 0} \frac{1}{\epsilon } W_2(\nu ,(1+\epsilon h )\nu )\le \Vert h \Vert _{\dot{H}^{-1}(\nu )}.$ Consider the elliptic equation: $\nu h + div(\nu \nabla F) = 0$ with Neumann boundary condition on $\partial {{\mathcal {X}}}$ .", "Such an equation admits a unique solution $F$ in $\dot{H}(\nu )$ up to a constant, since $\nu $ is supported on all of ${{\mathcal {X}}}$ (see ).", "Moreover, we have that $ \int F(x)h(x)\mathop {}\!\mathrm {d}\nu (x) = \int \Vert \nabla F(x) \Vert ^2 \mathop {}\!\mathrm {d}\nu (x)$ , which implies that $\Vert h \Vert _{\dot{H}^{-1}(\nu )} \ge \Vert F \Vert _{\dot{H}(\nu )}$ .", "Now consider the path: $s_u = (1 + u\epsilon h)\nu $ for $u\in [0,1]$ .", "$s_u$ is a probability distribution for all $u\in [0,1]$ with $s_0= \nu $ and $s_1 = (1+\epsilon h)\nu $ .", "It is easy to see that $s_u$ satisfies the continuity equation: $\partial _u s_u +div(s_u V_u )=0$ with $V_u = \frac{\epsilon \nabla F}{1+u\epsilon h}$ .", "Indeed, for any smooth test function $f$ , one has: $\frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}u}\int f(x)\mathop {}\!\mathrm {d}s_u(x) = \epsilon \int f(x)h(x)\mathop {}\!\mathrm {d}\nu (x) = \epsilon \int \nabla f(x).\nabla F(x) \mathop {}\!\mathrm {d}\nu (x) = \int \nabla f(x).V_u(x)\mathop {}\!\mathrm {d}s_u(x).$ We used the definition of $F$ for the second equality and that $\nu $ admits a density w.r.t.", "$s_u$ provided that $\epsilon $ is small enough.", "Such a density is given by $1/(1+u\epsilon h)$ and is positive and bounded when $\epsilon \le \frac{1}{2\Vert h \Vert _{\infty } }$ .", "Now, using the Benamou-Brenier formula for $W_2(\nu ,(1+\epsilon h)\nu )$ , one has in particular that: $W_2(\nu ,(1+\epsilon 
h)\\nu )\\le \\int \\Vert V_u \\Vert _{L^2(s_u)} \\mathop {}\\!\\mathrm {d}u$ Using the expressions of $V_u$ and $s_u$ , one gets by simple computation: $W_2(\\nu ,(1+\\epsilon h)\\nu )\\le & \\epsilon \\int \\left(\\int \\frac{\\Vert \\nabla F(x) \\Vert ^2}{1-u\\epsilon + u\\epsilon (h+1)} \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le \\epsilon \\left( \\int \\Vert \\nabla F(x) \\Vert ^2\\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u.$ Finally, $\\epsilon \\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u = 2(1-\\sqrt{1 - \\epsilon }) \\rightarrow 1$ when $\\epsilon \\rightarrow 0$ , hence: $\\lim \\sup _{\\epsilon \\rightarrow 0} W_2(\\nu ,(1+\\epsilon h)) \\le \\Vert F\\Vert _{\\dot{H}(\\nu )}\\le \\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}.$ thm:villani and prop:conversesobolevwasserstein allow to conclude that $\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )) = \\Vert \\nu - \\nu ^{\\prime } \\Vert _{\\dot{H}^{-1}(\\nu )}$ for any $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "$\\nu $ .", "By analogy, one could wonder if $D$ is also a linearization of the the Wasserstein-Fisher-Rao distance.", "We leave such question for future work.", "Algorithms Noisy Gradient flow of the MMD [Proof of thm:convergencenoisygradient] To simplify notations, we write $\\mathcal {D}_{\\beta _n}(\\nu _n) = \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n \\mathop {}\\!\\mathrm {d}u $ where $V := \\nabla f_{\\mu ,\\nu _n}$ and $g$ is the density of a standard gaussian.", "The symbol $\\otimes $ denotes the product of two independent probability distributions.", "Recall that a sample $x_{n+1}$ from $\\nu _{n+1}$ is obtained using $x_{n+1} = x_n - \\gamma V(x_n+ \\beta _n u_n)$ where $x_n$ is a sample from $\\nu _n$ and $u_n$ is a sample from a 
standard gaussian distribution that is independent from $x_n$ .", "Moreover, by assumption $\\beta _n$ is a non-negative scalar satisfying: $8\\lambda ^2\\beta _n^2 {{\\mathcal {F}}}(\\nu _n) \\le \\mathcal {D}_{\\beta _n}(\\nu _n)$ Consider now the map $(x,u)\\mapsto s_t(x)= x - \\gamma tV(x+\\beta _n u)$ for $0\\le t\\le 1$ , then $\\nu _{n+1}$ is obtained as a push-forward of $\\nu _n\\otimes g$ by $s_1$ : $\\nu _{n+1} = (s_1)_{\\#}(\\nu _n\\otimes g)$ .", "Moreover, the curve $\\rho _t = (s_t)_{\\#}(\\nu _n\\otimes g)$ is a path from $\\nu _n$ to $\\nu _{n+1}$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ -Lipschitz, thus using $\\phi (x,u) = -\\gamma V(x+\\beta _n u)$ , $\\psi (x,u) = x$ and $q = \\nu _n\\otimes g $ in lem:derivativemmdaugmented it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable in $t$ with: $\\dot{{{\\mathcal {F}}}}(\\rho _t)=\\int \\nabla f_{\\mu ,\\rho _t}(s_t(x)).", "(-\\gamma V(x+\\beta _n u))g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ Moreover, $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ is given by $\\dot{{{\\mathcal {F}}}}(\\rho _0)= -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ and the following estimate holds: $\\vert \\dot{{{\\mathcal {F}}}} (\\rho _t) -\\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3\\gamma ^2 L t \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u = 3\\gamma ^2 Lt \\mathcal {D}_{\\beta _n}(\\nu _n).$ Using the absolute continuity of ${{\\mathcal {F}}}(\\rho _t)$ , one has $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n})=\\dot{{{\\mathcal {F}}}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}} (\\rho _t) - \\dot{{{\\mathcal {F}}}} (\\rho _0) \\mathop {}\\!\\mathrm {d}t $ .", "Combining with eq:estimategradient and using the expression of $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ , it follows that: $\\mathcal {F}(\\nu _{n+1})-\\mathcal 
{F}(\\nu _{n})\\le -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u + \\frac{3}{2}\\gamma ^2L \\mathcal {D}_{\\beta _n}(\\nu _n).$ Adding and subtracting $\\gamma \\mathcal {D}_{\\beta _n}(\\nu _n)$ in eq:taylorexpansion it follows directly that: $\\begin{split}\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n)\\\\&+ \\gamma \\int (V(x+\\beta _n u) -V(x)).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u \\end{split}$ We shall control now the last term in eq:penultimate.", "Recall now that for all $1\\le i\\le d$ , $ V_i(x) = \\partial _i f_{\\mu ,\\nu _n}(x) = \\langle f_{\\mu ,\\nu _n} , \\partial _i k(x,.", ")\\rangle $ where we used the reproducing property for the derivatives of $f_{\\mu ,\\nu _n}$ in ${{\\mathcal {H}}}$ (see sec:rkhs).", "Therefore, it follows by Cauchy-Schwartz in ${{\\mathcal {H}}}$ and using assump:Lipschitzgradrkhs: $\\Vert V(x+\\beta _n u) -V(x)\\Vert ^2&\\le \\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2 \\left( \\sum _{i=1}^{d}\\Vert \\partial _i k(x+\\beta _n u,.)", "-\\partial _i k(x,.", ")\\Vert ^2_{\\mathcal {H}}\\right)\\\\&\\le \\lambda ^2\\beta _n^2\\Vert f_{\\mu ,\\nu _n}\\Vert _{\\mathcal {H}}^2\\Vert u \\Vert ^2$ for all $ x,u \\in {{\\mathcal {X}}}$ .", "Now integrating both sides w.r.t.", "$\\nu _n$ and $g$ and recalling that $g$ is a standard gaussian, we have: $\\int \\Vert V(x+\\beta _n u) -V(x)\\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u\\le \\lambda ^2\\beta ^2_n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2$ Getting back to eq:penultimate and applying Cauchy-Schwarz in $L_2(\\nu _n\\otimes g)$ it follows: $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n) +\\gamma \\lambda \\beta _n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal 
{H}}\\mathcal {D}^{\\frac{1}{2}}_{\\beta _n}(\\nu _n)$ It remains to notice that $\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2 = 2{{\\mathcal {F}}}(\\nu _n)$ and that $\\beta _n$ satisfies eq:controlnoiselevelbis to get: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le -\\frac{\\gamma }{2}(1-\\frac{3}{2}\\gamma L)\\mathcal {D}_{\\beta _n}(\\nu _n).$ We introduce now $\\Gamma = 4\\gamma (1-\\frac{3}{2}\\gamma L)\\lambda ^2$ to simplify notation and prove the second inequality.", "Using eq:controlnoiselevelbis again in the above inequality we directly have: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le - \\Gamma \\beta _n^2 {{\\mathcal {F}}}(\\nu _n)$ .", "One can already deduce that $\\Gamma \\beta _n^2$ is necessarily smaller than 1.", "Hence, taking ${{\\mathcal {F}}}(\\nu _n)$ to the r.h. side and iterating over $n$ it follows that: ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)\\prod _{i=0}^{n-1}(1- \\Gamma \\beta _n^2)$ Simply using that $1-\\Gamma \\beta _n^2\\le e^{-\\Gamma \\beta _n^2}$ leads to the desired upper-bound ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)e^{-\\Gamma \\sum _{i=0}^{n-1} \\beta _n^2}$ .", "Sample-based approximate scheme [Proof of prop:convergenceeulermaruyama] Let $(u_{n}^{i})_{1\\le i\\le N}$ be i.i.d standard gaussian variables and $(x_{0}^{i})_{1\\le i\\le N}$ i.i.d.", "samples from $\\nu _0$ .", "We consider $(x_n^i)_{1\\le i\\le N}$ the particles obtained using the approximate scheme eq:eulermaruyama: $x_{n+1}^{i}=x_{n}^{i}-\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})$ starting from $(x_{0}^{i})_{1\\le i\\le N}$ , where $\\hat{\\nu _n}$ is the empirical distribution of these $N$ interacting particles.", "Similarly, we denote by $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ the particles obtained using the exact update equation eq:discretizednoisyflow: $\\bar{x}_{n+1}^{i}=\\bar{x}_{n}^{i}-\\gamma \\nabla f_{\\mu ,\\nu 
_{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})$ also starting from $(x_{0}^{i})_{1\\le i\\le N}$ .", "By definition of $\\nu _n$ we have that $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ are i.i.d.", "samples drawn from $\\nu _n$ with empirical distribution denoted by $\\bar{\\nu }_{n}$ .", "We will control the expected error $c_{n}$ defined as $c^2_{n}= \\frac{1}{N}\\sum _{i=1}^N \\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]$ .", "By recursion, we have: $c_{n+1} = & \\frac{1}{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}-\\gamma \\left(\\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n} +\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}} \\\\& +\\frac{\\gamma }{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\hat{\\nu }_{n}}\\left(x_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)-\\nabla f_{\\mu ,\\bar{\\nu }_{n}}\\left(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n}+2\\gamma L\\left(c_{n}+\\mathbb {E}\\left[W_{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})^{2}\\right]^{\\frac{1}{2}}\\right)+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}}$ where the second line follows from a simple triangular inequality and the last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ Lipschitz in $x$ and $\\nu $ by prop:gradwitnessfunction.", "Here, $\\mathcal {E}_{i}$ represents the error between $\\bar{\\nu }_n$ and $\\nu _n$ while $\\mathcal {G}_{i}$ represents the error between $\\hat{\\mu }$ 
and $\\mu $ and are given by: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\bar{\\nu }_{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]\\\\\\mathcal {G}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]$ We will first control the error term $\\mathcal {E}_i$ .", "To simplify notations, we write $y^{i}=\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}$ .", "Recalling the expression of $\\nabla f_{\\mu ,\\nu }$ from prop:gradwitnessfunction and expanding the squared norm in $\\mathcal {E}_i$ , it follows: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\frac{1}{N}\\sum _{j=1}^{N}\\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& =\\frac{1}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& \\le \\frac{L^{2}}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\bar{x}_{n}^{j}-\\int xd\\nu _{n}(x)\\right\\Vert ^{2}\\right]=\\frac{L^{2}}{N}var(\\nu _{n}).$ The second line is obtained using the independence of the auxiliary samples $(\\bar{x}^{i}_n)_{1\\le i\\le N}$ and recalling that they are distributed according to $\\nu _{n}$ .", "The last line uses the fact that $\\nabla k(y,x)$ is $L$ -Lipshitz in $x$ by assump:lipschitzgradientk.", "To control the variance $var(\\nu _n)$ we use lem:Controlvariance which implies that $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{LT}$ for all $n\\le \\frac{2T}{\\gamma }$ .", "For $\\mathcal {G}_{i}$ , it is sufficient to expand again the squared norm and recall that $\\nabla k(y,x)$ is $L$ -Lipschitz in $x$ which then implies that $\\mathcal {G}_{i}\\le \\frac{L^{2}}{M}var(\\mu )$ .", 
"Finally, one can observe that $\\mathbb {E}[W_{2}^{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})]\\le \\frac{1}{N}\\sum _{i=1}^{N}\\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]=c_{n}^{2}$ , hence $c_n$ satisfies the recursion: $c_{n+1}\\le (1+4\\gamma L)c_{n}+\\frac{\\gamma L}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{\\gamma L}{\\sqrt{M}}var(\\mu ).$ Using lem:Discrete-Gronwall-lemma to solve the above inequality, it follows that: $c_{n}\\le \\frac{1}{4}\\left(\\frac{1}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{1}{\\sqrt{M}}var(\\mu ))\\right)(e^{4LT}-1)$ Lemma 19 Consider an initial distribution $\\nu _{0}$ with finite variance, a sequence $(\\beta _n)_{ n \\ge 0}$ of non-negative numbers bounded by $B<\\infty $ and define the sequence of probability distributions $\\nu _n$ of the process eq:discretizednoisyflow: $x_{n+1}=x_{n}-\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x_{n}+\\beta _{n}u_{n}) \\qquad x_0 \\sim \\nu _0$ where $(u_n)_{n\\ge 0}$ are standard gaussian variables.", "Under assump:lipschitzgradientk, the variance of $\\nu _{n}$ satisfies for all $T>0$ and $n\\le \\frac{T}{\\gamma }$ the following inequality: $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{2TL}$ Let $g$ be the density of a standard gaussian.", "Denote by $(x,u)$ and $(x^{\\prime },u^{\\prime })$ two independent samples from $\\nu _n\\otimes g$ .", "The idea is to find a recursion from $var(\\nu _{n})$ to $var(\\nu _{n+1})$ : $var(\\nu _{n+1})^{\\frac{1}{2}}& =\\left(\\mathbb {E}\\left[\\left\\Vert x -\\mathbb {E}\\left[x^{\\prime }\\right] -\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)+\\gamma \\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime }+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^2\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+\\gamma \\left(\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)-\\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime 
}+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L\\mathbb {E}_{\\begin{array}{c}x,x^{\\prime }\\sim \\nu _{n}\\\\ u,u^{\\prime }\\sim g\\end{array}}\\left[\\left\\Vert x+\\beta _{n}u-x^{\\prime }+\\beta _{n}u^{\\prime }\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L(var(\\nu _{n})^{\\frac{1}{2}}+\\beta _{n})$ The second and last lines are obtained using a triangular inequality while the third line uses that $\\nabla f_{\\mu ,\\nu _n}(x)$ is $2L$ -Lipschitz in $x$ by prop:gradwitnessfunction.", "Recalling that $\\beta _{n}$ is bounded by $B$ it is easy to conclude using lem:Discrete-Gronwall-lemma.", "Connection with Neural Networks In this sub-section we establish a formal connection between the MMD gradient flow defined in eq:continuitymmd and neural networks optimization.", "Such connection holds in the limit of infinitely many neurons and is based on the formulation in .", "To remain consistent with the rest of the paper, the parameters of a network will be denoted by $x\\in {{\\mathcal {X}}}$ while the input and outputs will be denoted as $z$ and $y$ .", "Given a neural network or any parametric function $(z,x)\\mapsto \\psi (z,x)$ with parameter $x \\in {{\\mathcal {X}}}$ and input data $z$ we consider the supervised learning problem: $\\min _{(x_1,...,x_m )\\in {{\\mathcal {X}}}} \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p } \\left[ \\left\\Vert y - \\frac{1}{m}\\sum _{i=1}^m\\psi (z,x_i) \\right\\Vert ^2 \\right]$ where $(y,z) \\sim p$ are samples from the data distribution and the regression function is an average of $m$ different networks.", "The formulation in eq:regressionnetwork includes any type of networks.", "Indeed, the averaged function can itself be seen as one network with augmented parameters $(x_1,...,x_m)$ and any network can be written as an average of sub-networks with potentially shared weights.", "In the limit 
$m\\rightarrow \\infty $ , the average can be seen as an expectation over the parameters under some probability distribution $\\nu $ .", "This leads to an expected network $\\Psi (z,\\nu ) = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) $ and the optimization problem in eq:regressionnetwork can be lifted to an optimization problem in $\\mathcal {P}_2({{\\mathcal {X}}})$ the space of probability distributions: $\\min _{\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})} \\mathcal {L}(\\nu ) := \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p} \\left[ \\left\\Vert y - \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) \\right\\Vert ^2 \\right]$ For convenience, we consider $\\bar{\\mathcal {L}}(\\nu )$ the function obtained by subtracting the variance of $y$ from $\\mathcal {L}(\\nu )$ , i.e.", ": $\\bar{\\mathcal {L}}(\\nu ) = \\mathcal {L}(\\nu ) - var(y) $ .", "When the model is well specified, there exists $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}}) $ such that $\\mathbb {E}_{y\\sim \\mathbb {P}(.|z)}[y] = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\mu (x)$ .", "In that case, the cost function $\\bar{\\mathcal {L}}$ matches the functional ${{\\mathcal {F}}}$ defined in eq:mmdasfreeenergy for a particular choice of the kernel $k$ .", "More generally, as soon as a global minimizer for eq:liftedregression exists, prop:inequalitymmdloss relates the two losses $\\bar{\\mathcal {L}}$ and $\\mathcal {F}$ .", "Proposition 20 Assuming a global minimizer of eq:liftedregression is achieved by some $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , the following inequality holds for any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ : $\\left(\\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}} + {{\\mathcal {F}}}^{\\frac{1}{2}}(\\nu )\\right)^2\\ge \\bar{\\mathcal {L}}(\\nu )\\ge \\mathcal {F}(\\nu ) + \\bar{\\mathcal {L}}(\\mu )$ where ${{\\mathcal {F}}}(\\nu )$ is defined by eq:mmdasfreeenergy with a kernel $k$ constructed from the data as an expected product of networks: 
$k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim \\mathbb {P}} \\left[\\psi (z,x)^T\\psi (z,x^{\\prime })\\right]$ Moreover, $\\bar{\\mathcal {L}} = {{\\mathcal {F}}}$ iif $\\bar{\\mathcal {L}}(\\mu )=0$ , which means that the model is well-specified.", "The framing eq:inequalitymmdnn implies that optimizing $\\mathcal {F}$ can decrease $\\mathcal {L}$ and vice-versa.", "Moreover, in the well specified case, optimizing $\\mathcal {F}$ is equivalent to optimizing $\\mathcal {L}$ .", "Hence one can use the gradient flow of the MMD defined in eq:continuitymmd to solve eq:liftedregression.", "One particular setting when eq:liftedregression is well-specified is the student-teacher problem as in .", "In this case, a teacher network of the form $\\Psi _T(z,\\mu )$ produces a deterministic output $y = \\Psi _T(z,\\mu )$ given an input $z$ while a student network $\\Psi _S(z,\\nu )$ tries to learn the mapping $z\\mapsto \\Psi _T(z,\\mu )$ by minimizing eq:liftedregression.", "In practice $\\mu $ and $\\nu $ are given as empirical distributions on some particles $\\Xi = (\\xi ^1,...,\\xi ^M)$ and $X=(x^1,...,x^N)$ with $\\mu = \\frac{1}{M} \\sum _{j=1}^M \\delta _{\\xi ^j}$ and $\\nu = \\frac{1}{N} \\sum _{i=1}^N\\delta _{x^i}$ .", "The particles $(x^i)_{1\\le i \\le N}$ are then optimized using gradient descent starting from an initial configuration $(x_0^i)_{1\\le i \\le N}$ .", "This leads to the update equation: $x^i_{n+1} = x^i_n - \\gamma \\mathbb {E}_{z\\sim p }\\left[ \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z,x_n^{i})\\right],$ where $(x_n^{i})_{1\\le i\\le N}$ are the particles at iteration $n$ with empirical distribution $\\nu _n$ .", "Here, the gradient is rescaled by the number of particles $N$ .", "Re-arranging terms and recalling that $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim p}[\\psi (z,x)^T\\psi (z,x^{\\prime })]$ , equation eq:updateequationstudentteacher becomes: $x^i_{n+1} 
= x^i_n - \\gamma \\nabla f_{\\mu ,\\nu _n}(x_n^i).$ with $\\nabla f_{\\mu ,\\nu _n}(x_n^i) = \\left(\\frac{1}{N}\\sum _{j=1}^N \\nabla _2 k(x_n^{j},x_n^{i})-\\frac{1}{M}\\sum _{j=1}^M \\nabla _2 k(\\xi ^{j},x_n^{i})\\right)$ .", "The above equation is a discretized version of the gradient flow of the MMD defined in eq:continuitymmd.", "Such discretization is obtained from eq:eulermaruyama by setting the noise level $\\beta _n$ to 0.", "Hence, in the limit when $N\\rightarrow \\infty $ and $\\gamma \\rightarrow 0$ , one recovers the gradient flow defined in eq:eulerschemeparticles.", "In general the kernel $k$ is intractable and can be approximated using $n_b$ samples $(z_1,...,z_{n_b})$ from the data distribution: $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)^T \\psi (z_b,x^{\\prime })$ .", "This finally leads to an approximate update: $x^i_{n+1} = x^i_n - \\gamma \\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i).$ where $\\nabla \\hat{f}_{\\mu ,\\nu _n}$ is given by: $\\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z_b,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z_b,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z_b,x_n^{i})).$ We provide now a proof for prop:inequalitymmdloss: [Proof of prop:inequalitymmdloss]Let $\\Psi (z,\\nu )$ =$\\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x)$ .", "By eq:kernelNN, we have: $k(x,x^{\\prime }) =\\int _{z}\\psi (z,x)^T\\psi (z,x^{\\prime })\\mathop {}\\!\\mathrm {d}s(z)$ where $s$ denotes the distribution of $z$ .", "It is easy to see that ${{\\mathcal {F}}}(\\nu ) = \\frac{1}{2} \\int \\Vert \\Psi (z,\\nu ) -\\Psi (z,\\mu ) \\Vert ^2 \\mathop {}\\!\\mathrm {d}s(z) $ .", "Indeed expanding the square in the l.h.s and exchanging the order of integrations w.r.t $p$ and $(\\mu \\otimes \\nu )$ one gets ${{\\mathcal {F}}}(\\nu )$ .", "Now, introducing $\\Psi (z,\\mu )$ in the expression of $\\mathcal {L}(\\nu )$ , it follows by a simple 
calculation that: $\\mathcal {L}(\\nu )&= \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )+ \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z)$ where $m(z)$ is the conditional mean of $y$ , i.e.", ": $m(z)=\\int y \\mathop {}\\!\\mathrm {d}p(y|z)$ .", "On the other hand we have that $2\\mathcal {L}(\\mu ) = var(y) + \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z)$ , so that $ \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z) = 2\\bar{\\mathcal {L}}(\\mu )$ .", "Hence, using Cauchy-Schwartz for the last term in eq:maineqnn, one gets the upper-bound: $\\mathcal {L}(\\nu )\\le \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu ) + 2 \\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}}\\mathcal {F(\\nu )}^{\\frac{1}{2}}.$ This in turn gives an upper-bound on $\\bar{\\mathcal {L}}(\\nu )$ after subtracting $var(y)/2$ on both sides of the inequality.", "To get the lower bound on $\\bar{\\mathcal {L}}$ one needs to use the global optimality condition of $\\mu $ for $\\mathcal {L}$ from .", "Indeed, for any $0<\\epsilon \\le 1$ it is easy to see that: $\\epsilon ^{-1}( \\mathcal {L}(\\mu +\\epsilon (\\nu -\\mu ))-\\mathcal {L}(\\mu )) = \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z) +o(\\epsilon ).$ Taking the limit $\\epsilon \\rightarrow 0$ and recalling that the l.h.s is always non-negative by optimality of $\\mu $ , it follows that $\\int \\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu ) \\rangle \\mathop {}\\!\\mathrm {d}p(z)$ must also be non-negative.", "Therefore, from eq:maineqnn one gets that $\\mathcal {L}(\\nu ) \\ge \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )$ .", "The final bound is obtained by subtracting $var(y)/2$ again from both sides of the inequality.", "Numerical Experiments Student-Teacher networks We consider a student-teacher network setting similar to .", "More precisely, 
using the notation from subsec:trainingneuralnetworks, we denote by $\\Psi (z,\\nu )$ the neural network of the form: $\\Psi (z,\\nu ) = \\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x) $ where $z$ is an input vector in ${{\\mathbb {R}}}^{p}$ and $\\nu $ is a probability distribution over the parameters $x$ .", "Hence $\\Psi $ is an expectation over sub-networks $\\psi (z,x)$ with parameters $x$ .", "Here, we choose $\\psi $ of the form: $\\psi (z,x) = G\\left(b^{1}+W^{1}\\sigma (W^{0}z+b^{0})\\right).$ where $x$ is obtained as the concatenation of the parameters $(b^{1},W^{1},b^{0},W^{0})\\in {{\\mathcal {X}}}$ , $\\sigma $ is the ReLU non-linearity while $G$ is a fixed function and is defined later.", "Note that using $x$ to denote the parameters of a neural network is unusual, however, we prefer to keep a notation which is consistent with the rest of the paper.", "We will only consider the case when $\\nu $ is given by an empirical distribution of $N$ particles $X = (x^{1},...x^{N})$ for some $N\\in \\mathbb {N}$ .", "In that case, we denote by $\\nu _{X}$ such distribution to stress the dependence on the particles $X$ , i.e.", ": $ \\nu := \\nu _{X}= \\frac{1}{N} \\sum _{i=1}^N \\delta _{x^{i}}$ .", "The teacher network $\\Psi _{T}(z,\\nu _{\\Xi })$ is given by $M$ particles $\\Xi = (\\xi _1,...,\\xi _M)$ which are fixed during training and are initially drawn according to a normal distribution $\\mathcal {N}(0,1)$ .", "Similarly, the student network $\\Psi _{S}(z,\\nu _{X})$ has $N$ particles $X = (x^{1},...,x^{N})$ that are initialized according to a normal distribution $\\mathcal {N}(10^{-3},1)$ .", "Here we choose $M=1$ and $N=1000$ .", "The inputs $z$ are drawn from a uniform distribution $\\mathbb {S}$ on the sphere in ${{\\mathbb {R}}}^p$ as in with $p=50$ .", "The number of hidden layers $H$ is set to 3 and the output dimension is 1.", "The parameters of the student networks are trained to minimize the risk in eq:studentteacherproblem using SGD with 
mini-batches of size $n_b = 10^2$ and optimal step-size $\\gamma $ selected from: $\\lbrace 10^{-3},10^{-2},10^{-1}\\rbrace $ .", "$\\min _{X} \\mathbb {E}_{z\\sim \\mathbb {S} }\\left[(\\Psi _T(z,\\nu _{\\Xi } )- \\Psi _S(z,\\nu _{X}))^2\\right]$ When $G$ is simply the identity function and no bias is used, one recovers the setting in .", "In that case the network is partially 1-homogeneous, which ensures global optimality.", "Here, we are interested in the case when global optimality is not guaranteed by the homogeneity structure, hence we choose $G$ to be a gaussian with fixed bandwidth $\\sigma =2$ .", "As shown in subsec:trainingneuralnetworks, performing gradient descent to minimize eq:studentteacherproblem can be seen as a particle version of the gradient flow of the MMD with a kernel given by $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim \\mathbb {S}}[\\psi (z,x)\\psi (z,x^{\\prime })]$ and target distribution $\\mu $ given by $\\mu = \\nu _{\\Xi }$ .", "Hence one can use the noise injection algorithm defined in eq:eulermaruyama to train the parameters of the student network.", "Since $k$ is defined through an expectation over the data, it can be approximated using $n_{b}$ data samples $\\lbrace z_{1},...,z_{n_b}\\rbrace $ : $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)\\psi (z_b,x^{\\prime }).$ This approximation of the kernel leads to a simple expression for the gradient of the unnormalised witness function between $\\nu _{\\Xi }$ and $\\nu _{X}$ : $\\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X}}(x) = \\frac{1}{n_b}\\sum _{b=1}^{n_b}\\left( \\frac{1}{N}\\sum _{i=1}^N\\psi (z_b , x^i) - \\frac{1}{M}\\sum _{j=1}^M\\psi (z_b,\\xi ^j)\\right)\\nabla _{x}\\psi (z_b,x), \\qquad \\forall x \\in {{\\mathcal {X}}}.$ euclidstudentteacher provides the main steps to train the parameters of the student network using the noisy gradient flow of the MMD proposed in eq:eulermaruyama.", "It can be easily implemented using automatic differentiation
packages like PyTorch.", "Indeed, one only needs to compute an auxiliary loss function ${{\\mathcal {F}}}_{aux}$ instead of the actual MMD loss ${{\\mathcal {F}}}$ and perform gradient descent using ${{\\mathcal {F}}}_{aux}$ .", "This function is given by: ${{\\mathcal {F}}}_{aux} = \\frac{1}{n_b}\\sum _{i=1}^N\\sum _{b=1}^{n_b} \\left({\\tt NoGrad}\\left(y_S^b\\right) - y_T^b \\right)\\psi (z^b,\\widetilde{x}_n^{i})$ To compute ${{\\mathcal {F}}}_{aux}$ , two forward passes on the student network are required.", "A first forward pass using the current parameter values $X_n = (x_n^1,...,x_n^{N})$ of the student network is used to compute the predictions $y_S^b$ given an input $z^b$ .", "For this forward pass, the gradient w.r.t. the parameters $X_n$ is not used.", "This is enforced formally by calling the function NoGrad.", "The second forward pass is performed using the noisy parameters $\\widetilde{x}_n^{i} = x_n^i + \\beta _n u_n^{i}$ and requires implementing special layers which can inject noise into the weights.", "This second forward pass will be used to provide a gradient to update the particles using back-propagation.", "Indeed, it is easy to see that $\\nabla _{x_n^{i}} {{\\mathcal {F}}}_{aux}$ gives exactly the gradient $\\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _X}(\\widetilde{x}_n^i)$ used in euclidstudentteacher.", "Learning gaussians Figure: Gradient flow of the MMD from a gaussian initial distribution $\\nu _0\\sim \\mathcal {N}(10,0.5)$ towards a target distribution $\\mu \\sim \\mathcal {N}(0,1)$ using $N=M=1000$ samples from $\\mu $ and $\\nu _0$ and a gaussian kernel with bandwidth $\\sigma = 2$ .", "eq:eulermaruyama is used without noise ($\\beta _n = 0$, in red) and with noise $\\beta _n = 10$ up to $n=5000$, then $\\beta _n = 0$ afterwards (in blue). The left figure shows the evolution of the MMD at each iteration.", "The middle figure shows the initial samples (black for $\\mu $ ), and the right
figure shows the final samples after $10^5$ iterations with step-size $\\gamma = 0.1$. fig:experiments illustrates the behavior of the proposed algorithm eq:eulermaruyama in a simple setting, and compares it with the gradient flow of the MMD without noise injection.", "In this setting, the MMD flow fails to converge to the global optimum.", "Indeed, as shown in fig:experiments(right), some of the final samples (in red) obtained using noise-free gradient updates tend to get further away from the target samples (in black).", "Most of the remaining samples collapse to a unique point at the center near the origin.", "This can also be seen from fig:experiments(left) where the training error fails to decrease below $10^{-3}$ .", "On the other hand, adding noise to the gradient seems to lead to global convergence, as seen visually from the samples.", "The training error decreases below $10^{-4}$ and oscillates between $10^{-8}$ and $10^{-4}$ .", "The oscillation is due to the step-size, which remained fixed while the noise was set to 0 starting from iteration 5000.", "It is worth noting that adding noise to the gradient slows the speed of convergence, as one can see from fig:experiments(left).", "This is expected since the algorithm does not follow the path of steepest descent.", "The noise helps in escaping local optima, however, as illustrated here.", "Noisy gradient flow of the MMD [1] Input $N$ , $n_{iter}$ , $\\beta _0$ , $\\gamma $ Output $(x^{i}_{n_{iter}})_{1\\le i\\le N}$ Initialize $N$ particles from initial distribution $\\nu _0$ : $x_{0}^{i}\\overset{\\text{i.i.d.}}{\\sim }\\nu _0$ Initialize the noise level: $\\beta =\\beta _0$ $n=0,\\dots , n_{iter}$ Sample $M$ points from the target $\\mu $: $\\lbrace y^1,...,y^M\\rbrace $ .", "Sample $N$ gaussians: $\\lbrace u_n^{1},...,u_n^N\\rbrace $ $i=1,\\dots ,N$ Compute the noisy values: $\\widetilde{x}_n^{i} = x_n^i+\\beta _n u_n^i$ Evaluate vector
field: $\\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(\\widetilde{x}_n^i) = \\frac{1}{N}\\sum \\limits _{j=1}^N \\nabla _2 k(x_n^j,\\widetilde{x}_n^{i})-\\frac{1}{M}\\sum \\limits _{m=1}^M \\nabla _2 k(y^m,\\widetilde{x}_n^{i})$ Update the particles: $x_{n+1}^{i} = x_n^i -\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(\\widetilde{x}_n^i)$ Update the noise level using an update rule $h$: $\\beta _{n+1}=h(\\beta _{n}, n)$ .", "Noisy gradient flow of the MMD for student-teacher learning [1] Input $N$ , $n_{iter}$ , $\\beta _0$ , $\\gamma $ , $n_{b}$ , $\\Xi = (\\xi ^j)_{1\\le j\\le M}$ .", "Output $(x^{i}_{n_{iter}})_{1\\le i\\le N}$ .", "Initialize $N$ particles from initial distribution $\\nu _0$ : $x_{0}^{i}\\overset{\\text{i.i.d.}}{\\sim }\\nu _0$ .", "Initialize the noise level: $\\beta =\\beta _0$ .", "$n=0,...,n_{iter}$ Sample a minibatch of $n_{b}$ data points: $\\lbrace z^1,...,z^{n_{b}}\\rbrace $ .", "$b=1,...,n_{b}$ Compute the teacher's output: $y_{T}^b = \\frac{1}{M}\\sum _{j=1}^M \\psi (z^b,\\xi ^{j})$ .", "Compute the student's output: $y_{S}^b = \\frac{1}{N}\\sum _{i=1}^N \\psi (z^b,x_n^i)$ .", "Sample $N$ gaussians: $\\lbrace u_n^{1},...,u_n^{N}\\rbrace $ .", "$i=1,...,N$ Compute noisy particles: $\\widetilde{x}_n^{i} = x_n^i +\\beta _n u_n^{i}$ Evaluate vector field: $ \\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X_n}}(\\widetilde{x}_n^{i}) = \\frac{1}{n_{b}}\\sum _{b=1}^{n_{b}} ( y_{S}^b - y_{T}^b ) \\nabla _{x_n^{i}} \\psi (z^b,\\widetilde{x}_n^{i})$ Update particle $i$: $x_{n+1}^{i} = x_{n}^{i} -\\gamma \\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X_n}}(\\widetilde{x}_n^{i})$ Update the noise level using an update rule $h$: $\\beta _{n+1}=h(\\beta _{n}, n)$ .", "Auxiliary results Proposition 21 Under assump:lipschitzgradientk, the unnormalised witness function $f_{\\mu ,\\nu }$ between any probability distributions $\\mu $ and $\\nu $ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is differentiable and satisfies: $\\nabla
f_{\\mu ,\\nu }(z) = \\int \\nabla _1 k(z,x)\\mathop {}\\!\\mathrm {d}\\nu (x) - \\int \\nabla _1 k(z,x)\\mathop {}\\!\\mathrm {d}\\mu (x) \\qquad \\forall z\\in {{\\mathcal {X}}}$ where $z \\mapsto \\nabla _1 k(x,z)$ denotes the gradient of $z\\mapsto k(x,z)$ for a fixed $x \\in {{\\mathcal {X}}}$ .", "Moreover, the map $(z,\\mu ,\\nu )\\mapsto \\nabla f_{\\mu ,\\nu }(z)$ is Lipschitz with: $\\Vert \\nabla f_{\\mu ,\\nu }(z) - \\nabla f_{\\mu ^{\\prime },\\nu ^{\\prime }}(z^{\\prime })\\Vert \\le 2L (\\Vert z-z^{\\prime } \\Vert + W_2(\\mu ,\\mu ^{\\prime }) + W_2(\\nu ,\\nu ^{\\prime }))$ Finally, each component of $\\nabla f_{\\mu ,\\nu }$ belongs to ${{\\mathcal {H}}}$ .", "The expression of the unnormalised witness function is given in eq:witnessfunction.", "To establish eq:gradientwitness, we simply need to apply the differentiation lemma .", "By assump:lipschitzgradientk, it follows that $ (x,z)\\mapsto \\nabla _1 k(z,x)$ has at most linear growth.", "Hence on any bounded neighborhood of $z$ , $x\\mapsto \\Vert \\nabla _1 k(z,x) \\Vert $ is upper-bounded by an integrable function w.r.t.", "$\\mu $ and $\\nu $ .", "Therefore, the differentiation lemma applies and $f_{\\mu ,\\nu }$ is differentiable with gradient given by eq:gradientwitness.", "To prove the second statement, we will consider two optimal couplings: $\\pi _1$ with marginals $\\mu $ and $\\mu ^{\\prime }$ and $\\pi _2$ with marginals $\\nu $ and $\\nu ^{\\prime }$ .", "We use eq:gradientwitness to write: $\\Vert \\nabla f_{\\mu ,\\nu }(z) - \\nabla f_{\\mu ^{\\prime },\\nu ^{\\prime }}(z^{\\prime })\\Vert &= \\left\\Vert \\mathbb {E}_{\\pi _1}\\left[ \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right] - \\mathbb {E}_{\\pi _2}\\left[\\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime })\\right] \\right\\Vert \\\\& \\le \\mathbb {E}_{\\pi _1}\\left[ \\left\\Vert \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right\\Vert \\right] + \\mathbb {E}_{\\pi
_2}\\left[\\left\\Vert \\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime }) \\right\\Vert \\right] \\\\&\\le L\\left( \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _1}[\\Vert x-x^{\\prime } \\Vert ] + \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _2}[\\Vert y-y^{\\prime } \\Vert ] \\right)\\\\&\\le L(2\\Vert z-z^{\\prime }\\Vert + W_2(\\mu ,\\mu ^{\\prime }) + W_2(\\nu ,\\nu ^{\\prime }) )$ The second line is obtained by convexity while the third one uses assump:lipschitzgradientk and finally the last line relies on $\\pi _1$ and $\\pi _2$ being optimal.", "The desired bound then follows by doubling the last two terms.", "Lemma 22 Let $\\mathcal {U}$ be an open set, $q$ a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times \\mathcal {U})$ and $\\psi $ and $\\phi $ two measurable maps from ${{\\mathcal {X}}}\\times \\mathcal {U} $ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t. $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ to $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:lipschitzgradientk, $\\mathcal {F}(\\rho _t)$ is differentiable in $t$ with $\\dot{{{\\mathcal {F}}}}(\\rho _t)&=\\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ where $f_{\\mu ,\\rho _t}$ is the unnormalised witness function between $\\mu $ and $\\rho _t$ as defined in eq:witnessfunction.", "Moreover: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\right|\\le 3L\\left|t-s \\right|\\int \\left\\Vert \\phi (x,u) \\right\\Vert ^2 dq(x,u)$ For simplicity, we write $f_t$ instead of $f_{\\mu ,\\rho _t}$ and denote by $s_t(x,u)= \\psi (x,u)+t\\phi (x,u)$ . The function $h: t\\mapsto k(s_t(x,u),s_t(x^{\\prime },u^{\\prime })) - k(s_t(x,u),z) - k(s_t(x^{\\prime },u^{\\prime }),z)$ is differentiable for all $(x,u)$ ,$(x^{\\prime },u^{\\prime })$ in ${{\\mathcal {X}}}\\times
\\mathcal {U}$ and $z\\in {{\\mathcal {X}}}$ .", "Moreover, by assump:lipschitzgradientk, a simple computation shows that for all $0\\le t\\le 1$ : $\\left|\\dot{h} \\right|\\le L\\left[ \\left(\\left\\Vert z - \\phi (x,u)\\right\\Vert + \\left\\Vert \\psi (x,u)\\right\\Vert \\right) \\left\\Vert \\phi (x^{\\prime },u^{\\prime })\\right\\Vert +\\left(\\left\\Vert z - \\phi (x^{\\prime },u^{\\prime })\\right\\Vert + \\left\\Vert \\psi (x^{\\prime },u^{\\prime })\\right\\Vert \\right)\\left\\Vert \\phi (x,u)\\right\\Vert \\right]$ The right hand side of the above inequality is integrable when $z$ , $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are independent and such that $z\\sim \\mu $ and both $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are distributed according to $q$ .", "Therefore, by the differentiation lemma it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[(\\nabla _1 k(s_t(x,u),s_t(x^{\\prime },u^{\\prime }))-\\nabla _1 k(s_t(x,u),z)).\\phi (x,u)\\right].$ By prop:gradwitnessfunction, we directly get $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ .", "We now control the difference $\\vert \\dot{{{\\mathcal {F}}}}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }})\\vert $ for $0\\le t,t^{\\prime }\\le 1$ .", "Using assump:lipschitzgradientk and recalling that $s_t(x,u)-s_{t^{\\prime }}(x,u)= (t-t^{\\prime })\\phi (x,u)$ , a simple computation shows: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }}) \\right|&\\le L\\left|t-t^{\\prime } \\right|\\mathbb {E}\\left[\\left(2\\Vert \\phi (x,u) \\Vert + \\Vert \\phi (x^{\\prime },u^{\\prime })\\Vert \\right)\\Vert \\phi (x,u)\\Vert \\right]\\\\&\\le L\\vert t-t^{\\prime }\\vert (2\\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert ^2 \\right] + \\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert \\right]^2)\\\\&\\le 3L\\vert
t-t^{\\prime }\\vert \\int \\Vert \\phi (x,u)\\Vert ^2 \\mathop {}\\!\\mathrm {d}q(x,u).$ which gives the desired upper-bound.", "We denote by $(x,y)\\mapsto H_1 k(x,y)$ the Hessian of $x\\mapsto k(x,y)$ for all $y\\in {{\\mathcal {X}}}$ and by $(x,y)\\mapsto \\nabla _1\\nabla _2 k(x,y)$ the upper cross-diagonal block of the Hessian of $(x,y)\\mapsto k(x,y)$ .", "Lemma 23 Let $q$ be a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times {{\\mathcal {X}}})$ and $\\psi $ and $\\phi $ two measurable maps from ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t. $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ to $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:diffkernel,assump:lipschitzgradientk, $\\mathcal {F}(\\rho _t)$ is twice differentiable in $t$ with $\\ddot{{{\\mathcal {F}}}}(\\rho _t)=&\\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] \\\\&+ \\mathbb {E}\\left[\\phi (x,y)^T (H_1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-H_1k(s_t(x,y),z)) \\phi (x,y)\\right]$ where $(x,y)$ and $(x^{\\prime },y^{\\prime })$ are independent samples from $q$ , $z$ is a sample from $\\mu $ and $s_t(x,y)= \\psi (x,y)+t\\phi (x,y)$ .", "Moreover, if assump:boundedfourthoder also holds then: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho _t)^{\\frac{1}{2}}\\mathbb {E}[\\Vert \\phi (x,y) \\Vert ^2]$ where we recall that ${{\\mathcal {X}}}\\subset \\mathbb {R}^d$ .", "The first part is similar to lem:derivativemmdaugmented.", "In fact, we already know by lem:derivativemmdaugmented that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ exists and is given by: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb
{E}\\left[(\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)\\right]$ Now define the function $\\xi : t\\mapsto (\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)$ which is differentiable for all $(x,y)$ ,$(x^{\\prime },y^{\\prime })$ in ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ and $z\\in {{\\mathcal {X}}}$ by assump:diffkernel.", "Moreover, its time derivative is given by: $\\dot{\\xi } =& \\phi (x^{\\prime },y^{\\prime })^T \\nabla _2\\nabla _1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))\\phi (x,y) \\\\&+ \\phi (x,y)^T(H_1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }) ) - H_1k(s_t(x,y),z ))\\phi (x,y)$ By assump:lipschitzgradientk it follows in particular that $\\nabla _2\\nabla _1k$ and $H_1k$ are bounded, hence $\\vert \\dot{\\xi } \\vert $ is upper-bounded by $ (\\Vert \\phi (x,y) \\Vert + \\Vert \\phi (x^{\\prime },y^{\\prime }) \\Vert )\\Vert \\phi (x,y)\\Vert $ which is integrable.", "Therefore, by the differentiation lemma it follows that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is differentiable and $\\ddot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[\\dot{\\xi }\\right].$ We now prove the second statement.", "By the reproducing property, it is easy to see that the last term in the expression of $\\dot{\\xi }$ can be written as: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")\\rangle _{{{\\mathcal {H}}}}$ Now, taking the expectation w.r.t. $x^{\\prime }$ ,$y^{\\prime }$ and $z$ which can be exchanged with the inner-product in ${{\\mathcal {H}}}$ since $(x^{\\prime },y^{\\prime },z)\\mapsto k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")$ is Bochner integrable and recalling that such integral is given by $f_{\\mu ,\\rho _t}$ one gets the following expression: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\rangle _{{{\\mathcal {H}}}}$ Using Cauchy-Schwarz and
assump:boundedfourthoder it follows that: $\\vert \\left\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\right\\rangle _{{{\\mathcal {H}}}}\\vert \\le \\lambda d\\Vert \\phi (x,y)\\Vert ^2 \\Vert f_{\\mu ,\\rho _t}\\Vert $ One then concludes using the expression of $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ and recalling that ${{\\mathcal {F}}}(\\rho _t) = \\frac{1}{2}\\Vert f_{\\mu ,\\rho _t} \\Vert ^2$ .", "Lemma 24 Assume that for any geodesic $(\\rho _{t})_{t\\in [0,1]}$ between $\\rho _{0}$ and $\\rho _{1}$ in $\\mathcal {P}({{\\mathcal {X}}})$ with velocity vectors $(V_t)_{t \\in [0,1]}$ the following holds: $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ for some admissible functional $\\Lambda $ as defined in def:conditionslambda, then: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho _{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds$ with $G(s,t)=s(1-t) \\mathbb {1}\\lbrace s\\le t\\rbrace +t(1-s) \\mathbb {1}\\lbrace s\\ge t\\rbrace $ for $0\\le s,t\\le 1$ .", "This is a direct consequence of the general identity (, Proposition 16.2).", "Indeed, for any continuous function $\\phi $ on $[0,1]$ with second derivative $\\ddot{\\phi }$ that is bounded below in the distribution sense the following identity holds: $\\phi (t)=(1-t)\\phi (0)+t\\phi (1)-\\int _{0}^{1}\\ddot{\\phi }(s)G(s,t)ds.$ This holds a fortiori for ${{\\mathcal {F}}}(\\rho _{t})$ since ${{\\mathcal {F}}}$ is smooth.", "By assumption, we have that $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ , hence it follows that: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho _{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds.$ Lemma 25 [Mixture convexity] The functional ${{\\mathcal {F}}}$ is mixture convex: for any probability distributions $\\nu _1$ and $\\nu _2$ and scalar $0\\le \\lambda \\le 1$ : ${{\\mathcal {F}}}(\\lambda \\nu
_1+(1-\\lambda )\\nu _2)\\le \\lambda {{\\mathcal {F}}}(\\nu _1)+ (1-\\lambda ){{\\mathcal {F}}}(\\nu _2)$ Let $\\nu $ and $\\nu ^{\\prime }$ be two probability distributions and $0\\le \\lambda \\le 1$ .", "Expanding the RKHS norm in ${{\\mathcal {F}}}$ , it follows directly that: $\\mathcal {F}(\\lambda \\nu + (1-\\lambda )\\nu ^{\\prime }) -\\lambda \\mathcal {F}(\\nu ) -(1-\\lambda )\\mathcal {F}(\\nu ^{\\prime }) = -\\frac{1}{2}\\lambda (1-\\lambda )MMD(\\nu ,\\nu ^{\\prime })^2 \\le 0.$ which concludes the proof.", "Lemma 26 [Discrete Gronwall lemma] Let $a_{n+1}\\le (1+\\gamma A)a_{n}+b$ with $\\gamma >0$ , $A>0$ , $b>0$ and $a_0=0$ , then: $a_{n}\\le \\frac{b}{\\gamma A}(e^{n\\gamma A}-1).$ Using the recursion, it is easy to see that for any $n>0$ : $a_n \\le (1+\\gamma A)^n a_0 + b\\left(\\sum _{i=0}^{n-1}(1+\\gamma A )^{i}\\right)$ One concludes using the identity $\\sum _{i=0}^{n-1}(1+\\gamma A )^{i} =\\frac{1}{\\gamma A}((1+\\gamma A)^{n} -1)$ and recalling that $(1+\\gamma A)^{n} \\le e^{n\\gamma A}$ .
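As a quick numerical sanity check of the discrete Gronwall lemma, one can iterate the recursion at equality and compare it against the exponential bound (the constants below are arbitrary and purely illustrative):

```python
import math

def gronwall_check(gamma, A, b, n_max):
    """Iterate a_{n+1} = (1 + gamma*A) a_n + b with a_0 = 0 (the worst case of
    the recursion) and verify a_n <= (b / (gamma*A)) * (exp(n*gamma*A) - 1)."""
    a = 0.0
    for n in range(1, n_max + 1):
        a = (1.0 + gamma * A) * a + b
        bound = (b / (gamma * A)) * (math.exp(n * gamma * A) - 1.0)
        assert a <= bound, (n, a, bound)
    return a

# The recursion saturates the geometric part of the bound, and the slack
# comes only from (1 + gamma*A)^n <= exp(n*gamma*A).
gronwall_check(gamma=0.01, A=2.0, b=0.5, n_max=200)
```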
], [ "A noisy update as a regularization", "We showed in subsection:barrieroptimization that ${{\\mathcal {F}}}$ is a non-convex functional, and derived a condition in sec:Lojasiewiczinequality to reach the global optimum.", "We now address the case where such a condition does not necessarily hold, and provide a regularization of the gradient flow to help achieve global optimality in this scenario.", "Our starting point will be the equilibrium condition in eq:equilibriumcondition.", "If an equilibrium $\\nu ^*$ that satisfies eq:equilibriumcondition happens to have a positive density, then $f_{\\mu ,\\nu ^{*}}$ would be constant everywhere.", "This in turn would mean that $f_{\\mu ,\\nu ^{*}}=0$ when the RKHS does not contain constant functions, as for a gaussian kernel .", "Hence, $\\nu ^*$ would be a global optimum since ${{\\mathcal {F}}}(\\nu ^{*})=0$ .", "The limit distribution $\\nu ^*$ might be singular, however, and can even be a Dirac distribution .", "Although the gradient $\\nabla f_{\\mu ,\\nu ^{*}}$ is not identically 0 in that case, eq:equilibriumcondition only evaluates it on the support of $\\nu ^{*}$ , on which $\\nabla f_{\\mu ,\\nu ^{*}}=0$ holds.", "Hence a possible fix would be to make sure that the unnormalised witness gradient is also evaluated at points outside of the support of $\\nu ^{*}$ .", "Here, we propose to regularize the flow by injecting noise into the gradient during updates of eq:eulerschemeparticles, $X_{n+1} = X_{n} -\\gamma \\nabla f_{\\mu ,\\nu _n}(X_n+ \\beta _n U_n), \\qquad n\\ge 0,$ where $U_n$ is a standard gaussian variable and $\\beta _n$ is the noise level at iteration $n$ .", "Compared to eq:eulerscheme, the sample here is first blurred before evaluating the gradient.", "Intuitively, if $\\nu _n$ approaches a local optimum $\\nu ^{*}$ , $ \\nabla f_{\\mu ,\\nu _n}$ would be small on the support of $\\nu _n$ but it might be much larger outside of it, hence evaluating $\\nabla f_{\\mu ,\\nu _n}$ outside the support of $\\nu _n$ can
help in escaping the local minimum.", "The stochastic process eq:discretizednoisyflow is different from adding a diffusion term to eq:continuitymmd.", "The latter case would correspond to regularizing ${{\\mathcal {F}}}$ using an entropic term as in , (see also subsec:klflow on the Langevin diffusion) and was shown to converge to a global optimum that is in general different from the global minimum of the un-regularized loss.", "Eq.", "eq:discretizednoisyflow is also different from , , where ${{\\mathcal {F}}}$ (and thus its associated velocity field) is regularized by convolving the interaction potential $W$ in eq:potentials with a mollifier.", "The optimal solution of a regularized version of the functional ${{\\mathcal {F}}}$ will generally be different from the non-regularized one, however, which is not desirable in our setting.", "Eq.", "eq:discretizednoisyflow is more closely related to the continuation methods , , and graduated optimization used for non-convex optimization in Euclidean spaces, which inject noise into the gradient of a loss function $F$ at each iteration.", "The key difference is the dependence of $f_{\\mu ,\\nu _n}$ on $\\nu _n$ , which is inherently due to functional optimization.", "We show in thm:convergencenoisygradient that eq:discretizednoisyflow attains the global minimum of ${{\\mathcal {F}}}$ provided that the level of the noise is well controlled, with the proof given in proof:thm:convergencenoisygradient.", "Proposition 8 Let $(\\nu _n)_{n\\in \\mathbb {N}}$ be defined by eq:discretizednoisyflow with an initial $\\nu _0$ .", "Denote $\\mathcal {D}_{\\beta _n}(\\nu _n)=\\mathbb {E}_{x\\sim \\nu _n, u\\sim g}[\\Vert \\nabla f_{\\mu ,\\nu _n}(x+\\beta _n u) \\Vert ^2]$ with $g$ the density of the standard gaussian distribution.", "Under assump:lipschitzgradientk,assump:Lipschitzgradrkhs, and for a choice of $\\beta _n$ such that $8\\lambda ^2\\beta _n^2 {{\\mathcal {F}}}(\\nu _n) \\le \\mathcal {D}_{\\beta _n}(\\nu _n),$ $\\text{the
following inequality holds: }\\quad \\quad {{\\mathcal {F}}}(\\nu _{n+1}) - {{\\mathcal {F}}}(\\nu _n ) \\le -\\frac{\\gamma }{2}(1-3\\gamma L)\\mathcal {D}_{\\beta _n}(\\nu _n), &&$ where $\\lambda $ and $L$ are defined in assump:lipschitzgradientk,assump:Lipschitzgradrkhs and depend only on the choice of the kernel.", "Moreover, if $\\sum _{i=0}^n \\beta _i^2 \\rightarrow \\infty ,$ then ${{\\mathcal {F}}}(\\nu _n)\\le {{\\mathcal {F}}}(\\nu _0) e^{-4\\lambda ^2\\gamma (1-3\\gamma L)\\sum _{i=0}^n \\beta ^2_i}.$ A particular case where $\\sum _{i=0}^n \\beta _i^2 \\rightarrow \\infty $ holds is when $\\beta _n$ decays as $1/\\sqrt{n}$ while still satisfying eq:controllevelnoise.", "In this case, the loss decreases at a polynomial rate.", "At each iteration, the level of the noise needs to be adjusted such that the gradient is not too blurred.", "This ensures that each step decreases the loss functional.", "However, $\\beta _n$ does not need to decrease at each iteration: it could increase adaptively whenever needed.", "For instance, when the sequence gets closer to a local optimum, it is helpful to increase the level of the noise to probe the gradient in regions where it is not flat.", "Note that for $\\beta _n = 0$ in eq:decreasinglossiterations, we recover a similar bound to prop:decreasingfunctional."
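To make the noise-injected update concrete, here is a minimal NumPy sketch of its sample-based version with a gaussian kernel, following the vector field of the pseudocode above; the function names, the hyperparameter values, and the decaying schedule $\beta_n = \beta_0/\sqrt{n+1}$ (one possible choice of the update rule $h$) are illustrative, not the exact experimental settings:

```python
import numpy as np

def gauss_k(x, y, sigma=2.0):
    """Gaussian kernel matrix between sample sets x: (n, d) and y: (m, d)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def grad2_k(x, y, sigma=2.0):
    """Gradient of k(x_i, y_j) w.r.t. its second argument, shape (n, m, d)."""
    diff = x[:, None, :] - y[None, :, :]
    return gauss_k(x, y, sigma)[..., None] * diff / sigma**2

def witness_grad(target, particles, at, sigma=2.0):
    """Empirical witness-function gradient evaluated at the (noisy) points `at`:
    (1/N) sum_j grad_2 k(x_j, .) - (1/M) sum_m grad_2 k(y_m, .)."""
    return grad2_k(particles, at, sigma).mean(0) - grad2_k(target, at, sigma).mean(0)

def mmd2(x, y, sigma=2.0):
    """Biased squared-MMD estimate between two sample sets."""
    return (gauss_k(x, x, sigma).mean() + gauss_k(y, y, sigma).mean()
            - 2.0 * gauss_k(x, y, sigma).mean())

def noisy_mmd_flow(target, x0, n_iter=500, gamma=0.2, beta0=0.5, sigma=2.0, seed=0):
    """Euler discretization of the MMD flow, with gaussian noise injected into
    the particles before evaluating the witness gradient (beta0 = 0 recovers
    the plain, noise-free flow)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for n in range(n_iter):
        beta = beta0 / np.sqrt(n + 1.0)   # one possible update rule h
        x_tilde = x + beta * rng.standard_normal(x.shape)
        x = x - gamma * witness_grad(target, x, x_tilde, sigma)
    return x
```

On a one-dimensional toy problem with particles initialized away from a gaussian target, the squared MMD between the particles and the target samples drops well below its initial value within a few hundred iterations.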
], [ "The sample-based approximate scheme", "We now provide a practical algorithm to implement the noisy updates in the previous section, which employs a discretization in space.", "The update eq:discretizednoisyflow involves computing expectations of the gradient of the kernel $k$ w.r.t. the target distribution $\\mu $ and the current distribution $\\nu _n$ at each iteration $n$ .", "This suggests a simple approximate scheme, based on samples from these two distributions, where at each iteration $n$ , we model a system of $N$ interacting particles $(X_n^i)_{1\\le i\\le N}$ and their empirical distribution in order to approximate $\\nu _n$ .", "More precisely, given i.i.d.", "samples $(X^i_0)_{1\\le i\\le N}$ and $(Y^{m})_{1\\le m\\le M}$ from $\\nu _0$ and $\\mu $ and a step-size $\\gamma $ , the approximate scheme iteratively updates the $i$ -th particle as $X_{n+1}^{i} = X_n^i -\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(X_n^i+\\beta _n U_n^i),$ where $U_{n}^{i}$ are i.i.d. standard gaussians and $\\hat{\\mu },\\,\\hat{\\nu }_n$ denote the empirical distributions of $(Y^{m})_{1\\le m\\le M}$ and $(X^i_n)_{1\\le i\\le N}$ , respectively.", "It is worth noting that for $\\beta _n=0$ , eq:eulermaruyama is equivalent to gradient descent over the particles $(X_n^{i})$ using a sample-based version of the MMD.", "Implementing eq:eulermaruyama is straightforward as it only requires evaluating the gradient of $k$ on the current particles and target samples.", "Pseudocode is provided in euclid.", "The overall computational cost of the algorithm at each iteration is $O((M+N)N)$ with $O(M+N)$ memory.", "The computational cost becomes $O(M+N)$ when the kernel is approximated using random features, as is the case for regression with neural networks (subsec:trainingneuralnetworks).", "This is in contrast to the cubic cost of the flow of the KSD , which requires solving a linear system at each iteration.", "The cost can also be compared to the algorithm in , which involves
computing empirical CDF and quantile functions of random projections of the particles.", "The approximation scheme in eq:eulermaruyama is a particle version of eq:discretizednoisyflow, so one would expect it to converge towards its population version eq:discretizednoisyflow as $M$ and $N$ go to infinity.", "This is shown below.", "Theorem 9 Let $n\\ge 0$ and $T>0$ .", "Let $\\nu _n$ and $\\hat{\\nu }_n$ be defined by eq:eulerscheme and eq:eulermaruyama respectively.", "Suppose assump:lipschitzgradientk holds and that $\\beta _n<B$ for all $n$ , for some $B>0$ .", "Then for any $\\frac{T}{\\gamma }\\ge n$ : $\\mathbb {E}\\left[W_{2}(\\hat{\\nu }_{n},\\nu _{n})\\right]\\le \\frac{1}{4}\\left(\\frac{1}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{1}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}\\right)(e^{4LT}-1)$ prop:convergenceeulermaruyama controls the propagation of chaos at each iteration, and uses techniques from .", "Notice also that these rates remain true when no noise is added to the updates, i.e.", "for the original flow when $B=0$ .", "A proof is provided in proof:propagationchaos.", "The $\\frac{1}{\\sqrt{M}}$ dependence underlines the fact that our procedure could be interesting as a sampling algorithm when one only has access to $M$ samples of $\\mu $ (see subsec:klflow for a more detailed discussion).", "Experiments Figure: Comparison between different training methods for student-teacher ReLU networks with gaussian output non-linearity and synthetic data uniform on a hyper-sphere.", "In blue, eq:eulermaruyama is used without noise ($\\beta _n=0$) while in red noise is added with the following schedule: $\\beta _0>0$ and $\\beta _n$ is decreased by half after every $10^3$ epochs.", "In green, a diffusion term is added to the particles with noise level kept fixed during training ($\\beta _n=\\beta _0$).", "In purple, the KSD is used as a cost function instead of the MMD.", "In all cases, the kernel is estimated using random features
(RF) with a batch size of $10^2$.", "The best step-size was selected for each method from $\\lbrace 10^{-3},10^{-2},10^{-1}\\rbrace $ and was used for $10^4$ epochs on a dataset of $10^3$ samples (RF).", "Initial parameters of the networks are drawn from i.i.d.", "gaussians: $\\mathcal {N}(0,1)$ for the teacher and $\\mathcal {N}(10^{-3},1)$ for the student.", "Results are averaged over 10 different runs. fig:experimentsstudentteacher illustrates the behavior of the proposed algorithm eq:eulermaruyama in a simple setting and compares it with three other methods: MMD without noise injection (blue traces), MMD with diffusion (green traces) and KSD (purple traces, ).", "Here, a student network is trained to produce the outputs of a teacher network using gradient descent.", "More details on the experiment are provided in sec:experimentsneuralnetwork.", "As discussed in subsec:trainingneuralnetworks, this setting can be seen as a stochastic version of the MMD flow since the kernel is estimated using random features at each iteration (eq:randomfeatureskernel in sec:experimentsneuralnetwork).", "Here, the MMD flow fails to converge towards the global optimum.", "Such behavior is consistent with the observations in when the parameters are initialized from gaussian noise with relatively high variance (which is the case here).", "On the other hand, adding noise to the gradient seems to lead to global convergence.", "Indeed, the training error decreases below $10^{-5}$ and leads to a much better validation error.", "While adding a small diffusion term (green) helps convergence, the noise-injection (red) still outperforms it.", "This also holds for KSD (purple) which leads to a good solution (b) although at a much higher computational cost (a).", "Our noise injection method (red) is also robust to the amount of noise and achieves the best performance over a wide region (c).", "On the other hand, MMD + diffusion (green) performs well only for much
smaller values of noise, located in a narrow region.", "This is expected since adding a diffusion changes the optimal solution, unlike the injection, where the global optimum of the MMD remains a fixed point of the algorithm.", "Another illustrative experiment on a simple flow between Gaussians is given in sec:experimentsgaussian." ], [ "Conclusion", "We have introduced MMD flow, a novel flow over the space of distributions, with a practical space-time discretized implementation and a regularization scheme to improve convergence.", "We provide theoretical results, highlighting intrinsic properties of the regular MMD flow, and guarantees on convergence based on recent results in optimal transport, probabilistic interpretations of PDEs, and particle algorithms.", "Future work will focus on a deeper understanding of regularization for MMD flow, and its application in sampling and optimization for large neural networks.", "This appendix is organized as follows.", "In sec:appendixmathbackground, the mathematical background needed for this paper is given.", "In sec:assumptionskernel, we state the main assumptions used in this work.", "sec:appendixgradientflow is dedicated to the construction of the gradient flow of the MMD.", "sec:appendixconvergence provides proofs for the convergence results in sec:convergencemmdflow.", "sec:appendixalgorithms is dedicated to the modified gradient flow based on noise injection.", "In subsec:trainingneuralnetworks, we discuss the connection with optimization of neural networks.", "sec:experiments provides details about the experiments.", "Finally, some auxiliary results are provided in sec:auxiliaryresults."
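The noise-injected update compared in the experiments above can be sketched numerically. The following is a minimal illustration, not the paper's actual implementation: it assumes a Gaussian kernel and a fixed noise level, and all function names and constants (`grad_f`, `noisy_step`, `gamma`, `beta`) are illustrative. The key point it demonstrates is that the gradient is evaluated at a perturbed position while the particle moves from its unperturbed position, so that $\nu = \mu$ (where the witness function vanishes) remains a fixed point:

```python
import numpy as np

def grad_f(x, particles, targets, sigma=1.0):
    """Empirical estimate of grad f_{mu,nu}(x) = E_nu[grad_1 k(x, .)] - E_mu[grad_1 k(x, .)]
    for a Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def avg_grad(a, b):
        d = a[:, None, :] - b[None, :, :]                       # (n, m, dim)
        w = np.exp(-np.sum(d ** 2, axis=-1, keepdims=True) / (2 * sigma ** 2))
        return (-d / sigma ** 2 * w).mean(axis=1)               # (n, dim)
    return avg_grad(x, particles) - avg_grad(x, targets)

def noisy_step(x, z, gamma=0.1, beta=0.5, rng=None):
    """Noise injection: the gradient is evaluated at the perturbed positions
    x + beta * u, but the particles move from x itself.  Since f_{mu,mu} = 0,
    nu = mu stays a fixed point, unlike when a diffusion term is added."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return x - gamma * grad_f(x + beta * u, x, z)

rng = np.random.default_rng(0)
z = rng.normal(2.0, 0.5, size=(100, 2))   # samples from the target mu
x = rng.normal(0.0, 0.5, size=(100, 2))   # particles representing nu_0
for _ in range(100):
    x = noisy_step(x, z, rng=rng)
```

Setting `beta=0` recovers the plain (un-noised) discretized MMD flow, which is the variant that can get stuck away from the global optimum in the experiment above.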
], [ "Mathematical background", "We define ${{\\mathcal {X}}}\\subset {{\\mathbb {R}}}^d$ as the closure of a convex open set, and $\\mathcal {P}_2({{\\mathcal {X}}})$ as the set of probability distributions on ${{\\mathcal {X}}}$ with finite second moment, equipped with the 2-Wasserstein metric denoted $W_2$ .", "For any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $L_2(\\nu )$ is the set of square integrable functions w.r.t.", "$\\nu $ ." ], [ "Maximum Mean Discrepancy and Reproducing Kernel Hilbert Spaces", "We recall here fundamental definitions and properties of reproducing kernel Hilbert spaces (RKHS) (see ) and Maximum Mean Discrepancies (MMD).", "Given a positive semi-definite kernel $(x,y)\\mapsto k(x,y)\\in {{\\mathbb {R}}}$ defined for all $x,y\\in {{\\mathcal {X}}}$ , we denote by ${{\\mathcal {H}}}$ its corresponding RKHS (see ).", "The space ${{\\mathcal {H}}}$ is a Hilbert space with inner product $\\langle \\cdot ,\\cdot \\rangle _{{{\\mathcal {H}}}}$ and corresponding norm $\\Vert \\cdot \\Vert _{{{\\mathcal {H}}}}$ .", "A key property of ${{\\mathcal {H}}}$ is the reproducing property: for all $f \\in {{\\mathcal {H}}}$ , $f(x) = \\langle f, k(x, \\cdot )\\rangle _{{{\\mathcal {H}}}}$ .", "Moreover, if $k$ is $m$ -times differentiable w.r.t.", "each of its coordinates, then any $f\\in {{\\mathcal {H}}}$ is $m$ -times differentiable and $\\partial ^{\\alpha }f(x)=\\langle f, \\partial ^{\\alpha } k(x,\\cdot ) \\rangle _{{{\\mathcal {H}}}}$ where $\\alpha $ is any multi-index with $\\vert \\alpha \\vert \\le m$ .", "When $k$ has at most quadratic growth, then for all $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $\\int k(x,x) \\mathop {}\\!\\mathrm {d}\\mu (x) <\\infty $ .", "In that case, for any $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $ \\phi _{\\mu } := \\int k(\\cdot ,x)\\mathop {}\\!\\mathrm {d}\\mu (x)$ is a well-defined element in ${{\\mathcal {H}}}$ called the mean embedding of $\\mu $ .", "The kernel $k$ is said to be characteristic when the mean embedding is
injective, that is, when each mean embedding is associated with a unique probability distribution.", "When $k$ is characteristic, it is possible to define a distance between distributions in $\\mathcal {P}_2({{\\mathcal {X}}})$ called the Maximum Mean Discrepancy: $MMD(\\mu ,\\nu ) = \\Vert \\phi _{\\mu } - \\phi _{\\nu }\\Vert _{{{\\mathcal {H}}}} \\qquad \\forall \\; \\mu ,\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}}).$ The difference between the mean embeddings of $\\mu $ and $\\nu $ is an element in ${{\\mathcal {H}}}$ called the unnormalised witness function between $\\mu $ and $\\nu $ : $f_{\\mu ,\\nu } = \\phi _{\\nu } - \\phi _{\\mu }$ .", "The MMD can also be seen as an Integral Probability Metric: $MMD(\\mu ,\\nu ) = \\sup _{g\\in \\mathcal {B}} \\int g\\mathop {}\\!\\mathrm {d}\\mu - \\int g \\mathop {}\\!\\mathrm {d}\\nu $ where $\\mathcal {B} = \\lbrace g\\in {{\\mathcal {H}}}: \\; \\Vert g\\Vert _{{{\\mathcal {H}}}}\\le 1 \\rbrace $ is the unit ball in the RKHS." ], [ "2-Wasserstein geometry", "For two given probability distributions $\\nu $ and $\\mu $ in $\\mathcal {P}_2({{\\mathcal {X}}})$ , we denote by $\\Pi (\\nu ,\\mu )$ the set of possible couplings between $\\nu $ and $\\mu $ .", "In other words, $\\Pi (\\nu ,\\mu )$ contains all possible distributions $\\pi $ on ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ such that if $(X,Y) \\sim \\pi $ then $X \\sim \\nu $ and $Y\\sim \\mu $ .", "The 2-Wasserstein distance on $\\mathcal {P}_2({{\\mathcal {X}}})$ is defined by means of an optimal coupling between $\\nu $ and $\\mu $ in the following way: $W_2^2(\\nu ,\\mu ) := \\inf _{\\pi \\in \\Pi (\\nu ,\\mu )} \\int \\left\\Vert x - y\\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\pi (x,y) \\qquad \\forall \\nu , \\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ It is a well-established fact that such an optimal coupling $\\pi ^*$ exists , .", "Moreover, it can be used to define a path $(\\rho _t)_{t\\in [0,1]}$ between $\\nu $ and $\\mu $ in $\\mathcal {P}_2({{\\mathcal
{X}}})$ .", "For a given time $t$ in $[0,1]$ and given a sample $(x,y)$ from $\\pi ^{*}$ , it is possible to construct a sample $z_t$ from $\\rho _t$ by taking the convex combination of $x$ and $y$ : $z_t = s_t(x,y)$ where $s_t$ is given by: $s_t(x,y) = (1-t)x+ty \\qquad \\forall x,y\\in {{\\mathcal {X}}}, \\; \\forall t\\in [0,1].$ The function $s_t$ is well defined since ${{\\mathcal {X}}}$ is a convex set.", "More formally, $\\rho _t$ can be written as the projection or push-forward of the optimal coupling $\\pi ^{*}$ by $s_t$ : $\\rho _t = (s_t)_{\\#}\\pi ^{*}$ We recall that for any measurable map $T: {{\\mathcal {X}}}\\rightarrow {{\\mathcal {X}}}$ and any $\\rho \\in \\mathcal {P}({{\\mathcal {X}}})$ , the push-forward measure $T_{\\#}\\rho $ is characterized by: $\\int _{y \\in {{\\mathcal {X}}}} \\phi (y) \\mathop {}\\!\\mathrm {d}T_{\\#}\\rho (y) =\\int _{x \\in {{\\mathcal {X}}}}\\phi (T(x)) \\mathop {}\\!\\mathrm {d}\\rho (x) \\text{ for every measurable and bounded function } \\phi .$ It is easy to see that eq:displacementgeodesic satisfies the following boundary conditions at $t=0,1$ : $\\rho _0 = \\nu \\qquad \\rho _1 = \\mu .$ Paths of the form eq:displacementgeodesic are called displacement geodesics.", "They can be seen as the shortest paths from $\\nu $ to $\\mu $ in terms of mass transport ( Theorem 5.27).", "It can be shown that there exists a velocity vector field $(t,x)\\mapsto V_t(x)$ with values in ${{\\mathbb {R}}}^d$ such that $\\rho _t$ satisfies the continuity equation: $\\partial _t \\rho _t + div(\\rho _t V_t ) = 0 \\qquad \\forall t\\in [0,1].$ This equation expresses two facts: first, $-div(\\rho _t V_t)$ reflects the infinitesimal changes in $\\rho _t$ dictated by the vector field (also referred to as the velocity field) $V_t$ ; second, the total mass of $\\rho _t$ does not vary in time, as a consequence of the divergence theorem.", "Equation eq:continuityequation is well defined in the
distribution sense even when $\\rho _t$ does not have a density.", "At each time $t$ , $V_t$ can be interpreted as a tangent vector to the curve $(\\rho _t)_{t\\in [0,1]}$ so that the length $l((\\rho _t)_{t\\in [0,1]})$ of the curve $(\\rho _t)_{t\\in [0,1]}$ would be given by: $l((\\rho _t)_{t\\in [0,1]})^2 = \\int _0^1 \\Vert V_t \\Vert ^2_{L_2(\\rho _t)} \\mathop {}\\!\\mathrm {d}t \\quad \\text{ where } \\quad \\left\\Vert V_t \\right\\Vert ^2_{L_2(\\rho _t)} = \\int \\left\\Vert V_t(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho _t(x)$ This perspective allows one to provide a dynamical interpretation of $W_2$ as the length of the shortest path from $\\nu $ to $\\mu $ , summarized by the celebrated Benamou-Brenier formula (): $W_2(\\nu ,\\mu ) = \\inf _{(\\rho _t,V_t)_{t\\in [0,1]}} l((\\rho _t)_{t\\in [0,1]})$ where the infimum is taken over all pairs $(\\rho _t,V_t)_{t\\in [0,1]}$ satisfying eq:continuityequation with boundary conditions given by eq:boundaryconditions.", "If $(\\rho _t,V_t)_{t\\in [0,1]}$ satisfies eq:continuityequation and eq:boundaryconditions and realizes the infimum in eq:benamou-brenier-formula, it is then simply called a geodesic between $\\nu $ and $\\mu $ ; moreover, it is called a constant-speed geodesic if, in addition, the norm of $V_t$ is constant for all $t\\in [0,1]$ .", "As a consequence, eq:displacementgeodesic is a constant-speed displacement geodesic.", "Remark 1 Such paths should not be confused with another kind of path called mixture geodesics.", "The mixture geodesic $(m_t)_{t\\in [0,1]}$ from $\\nu $ to $\\mu $ is obtained by first choosing either $\\nu $ or $\\mu $ according to a Bernoulli distribution with parameter $t$ and then sampling from the chosen distribution: $m_t = (1-t)\\nu + t\\mu \\qquad \\forall t \\in [0,1].$ Paths of the form eq:mixturegeodesic can be thought of as the shortest paths between two distributions when distances on $\\mathcal {P}_2({{\\mathcal {X}}})$ are measured using the MMD (see Theorem
5.3).", "We refer to for an overview of the notion of shortest paths in probability spaces and for the differences between mixture geodesics and displacement geodesics.", "Although we will be interested in the MMD as a loss function, we will not consider the geodesics that are naturally associated with it; rather, we will consider the displacement geodesics defined in eq:displacementgeodesic, for reasons that will become clear in subsec:lambdaconvexity." ], [ "Gradient flows on the space of probability measures", "Consider a real-valued functional ${{\\mathcal {F}}}$ defined over $\\mathcal {P}_2({{\\mathcal {X}}})$ .", "We call $\\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}$ , if it exists, the unique (up to additive constants) function such that $\\frac{d}{d\\epsilon }{{\\mathcal {F}}}(\\nu +\\epsilon (\\nu ^{\\prime }-\\nu ))\\vert _{\\epsilon =0}=\\int \\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}(\\nu ) (\\mathop {}\\!\\mathrm {d}\\nu ^{\\prime }-\\mathop {}\\!\\mathrm {d}\\nu ) $ for any $\\nu ^{\\prime } \\in \\mathcal {P}_2({{\\mathcal {X}}})$ .", "The function $\\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }}$ is called the first variation of ${{\\mathcal {F}}}$ evaluated at $\\nu $ .", "We consider here functionals ${{\\mathcal {F}}}$ of the form: ${{\\mathcal {F}}}(\\nu )=\\int U(\\nu (x)) dx + \\int V(x)\\nu (x)dx + \\int W(x,y)\\nu (x)\\nu (y)dxdy$ where $U$ is the internal potential, $V$ an external potential and $W$ an interaction potential.", "The formal gradient flow equation associated with such a functional can be written (see , Lemmas 8 to 10): $\\frac{\\partial \\nu }{\\partial t}= div( \\nu \\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu })=div( \\nu \\nabla (U^{\\prime }(\\nu ) + V + W * \\nu ))$ where $div$ is the divergence operator and $\\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu }$ is the strong subdifferential of ${{\\mathcal {F}}}$ associated with the $W_2$ metric (see , Lemma 10.4.1).",
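As a concrete instance of such a functional, one can take ${{\\mathcal {F}}}$ to be half the squared MMD; the following short computation is a sketch using only the reproducing property and the definitions of $\\phi _{\\mu }$ and $f_{\\mu ,\\nu }$ recalled above, and identifies its first variation:

```latex
\mathcal{F}(\nu) = \tfrac{1}{2}\,\mathrm{MMD}^2(\mu,\nu)
                 = \tfrac{1}{2}\,\Vert \phi_{\nu}-\phi_{\mu}\Vert_{\mathcal{H}}^{2},
\qquad
\frac{d}{d\epsilon}\,\mathcal{F}\big(\nu+\epsilon(\nu'-\nu)\big)\Big|_{\epsilon=0}
 = \big\langle \phi_{\nu}-\phi_{\mu},\;\phi_{\nu'}-\phi_{\nu}\big\rangle_{\mathcal{H}}
 = \int f_{\mu,\nu}\,(\mathrm{d}\nu'-\mathrm{d}\nu),
```

so that $\\frac{\\partial {{{\\mathcal {F}}}}}{\\partial {\\nu }} = f_{\\mu ,\\nu }$ up to an additive constant, and the corresponding gradient flow equation reads $\\partial _t \\nu _t = div( \\nu _t \\nabla f_{\\mu ,\\nu _t})$ .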
"Indeed, for some generalized notion of gradient $\\nabla _{W_2}$ , and for sufficiently regular $\\nu $ and ${{\\mathcal {F}}}$ , the r.h.s. of eq:continuityequation1 can be formally written as $-\\nabla _{W_2}{{\\mathcal {F}}}(\\nu )$ .", "The dissipation of energy along the flow is then given by: $\\frac{d {{\\mathcal {F}}}(\\nu _t)}{dt} =-D(\\nu _t) \\quad \\text{ with } D(\\nu )= \\int \\Vert \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu )}{\\partial \\nu }(x)\\Vert ^2 \\nu (x)dx$ Such an expression can be obtained by the following formal calculations: $\\frac{d{{\\mathcal {F}}}(\\nu _t)}{dt}=\\int \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu } \\frac{\\partial \\nu _t}{\\partial t}=\\int \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu } div( \\nu _t \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu })=-\\int \\Vert \\nabla \\frac{\\partial {{\\mathcal {F}}}(\\nu _t)}{\\partial \\nu }\\Vert ^2d \\nu _t.$" ], [ "Displacement convexity", "Just as for Euclidean spaces, an important criterion to characterize the convergence of the Wasserstein gradient flow of a functional ${{\\mathcal {F}}}$ is given by displacement convexity (see ): Definition 2 [Displacement convexity] We say that a functional $\\nu \\mapsto \\mathcal {F}(\\nu )$ is displacement convex if for any $\\nu $ and $\\nu ^{\\prime }$ and a constant-speed geodesic $(\\text{$\\rho _{t}$})_{t \\in [0,1]}$ between $\\nu $ and $\\nu ^{\\prime }$ with velocity vector field $(V_{t})_{t \\in [0,1]}$ as defined by eq:continuityequation, the following holds: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime }) \\qquad \\forall \\; t\\in [0,1].$ def:displacementconvexity can be relaxed to a more general notion of convexity called $\\Lambda $ -displacement convexity (see ).", "We first define an admissible functional $\\Lambda $ : Definition 3 [Admissible $\\Lambda $ functional] Consider a functional $(\\rho
,v)\\mapsto \\Lambda (\\rho ,v) \\in {{\\mathbb {R}}}$ defined for any probability distribution $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any square integrable vector field $v$ w.r.t. $\\rho $ .", "We say that $\\Lambda $ is admissible if it satisfies: For any $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , $v\\mapsto \\Lambda (\\rho ,v)$ is a quadratic form.", "For any geodesic $(\\rho _t)_{0\\le t\\le 1}$ between two distributions $\\nu $ and $\\nu ^{\\prime }$ with corresponding vector fields $(V_t)_{t \\in [0,1]}$ , it holds that $\\inf _{0\\le t\\le 1}\\Lambda (\\rho _t,V_t)/\\Vert V_t\\Vert _{L_{2}(\\rho _t)}^{2}>-\\infty $ We can now define the notion of $\\Lambda $ -convexity: Definition 4 [$\\Lambda $ -convexity] We say that a functional $\\nu \\mapsto \\mathcal {F}(\\nu )$ is $\\Lambda $ -convex if for any $(\\nu ,\\nu ^{\\prime })\\in \\mathcal {P}_2({{\\mathcal {X}}})^2$ and a constant-speed geodesic $(\\text{$\\rho _{t}$})_{t \\in [0,1]}$ between $\\nu $ and $\\nu ^{\\prime }$ with velocity vector field $(V_{t})_{t \\in [0,1]}$ as defined by eq:continuityequation, the following holds: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime })-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds \\qquad \\forall \\; t\\in [0,1].$ where $(\\rho ,v)\\mapsto \\Lambda (\\rho ,v)$ satisfies def:conditionslambda, and $G(s,t)=s(1-t) \\mathbb {I}\\lbrace s\\le t\\rbrace +t(1-s) \\mathbb {I}\\lbrace s\\ge t\\rbrace $ .", "A particular case is when $\\Lambda (\\rho ,v)= \\lambda \\int \\left\\Vert v(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho (x) $ for some $\\lambda \\in {{\\mathbb {R}}}$ .", "In that case, eq:lambdadisplacementconvex becomes: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\nu )+t{{\\mathcal {F}}}(\\nu ^{\\prime })-\\frac{\\lambda }{2}t(1-t)W_2^2(\\nu ,\\nu ^{\\prime }) \\qquad \\forall \\; t\\in [0,1].$ def:displacementconvexity is a particular case of def:lambda-convexity, where in
eq:semi-convexity one has $\\lambda =0$ ." ], [ "Comparison with the Kullback-Leibler divergence flow", "Continuity equation and McKean-Vlasov process.", "A famous example of a free energy eq:lyapunov is the Kullback-Leibler divergence, defined for $\\nu , \\mu \\in \\mathcal {P}({{\\mathcal {X}}})$ by $KL(\\nu ,\\mu )=\\int \\log (\\frac{\\nu (x)}{\\mu (x)})\\nu (x)dx$ .", "Indeed, $KL(\\nu , \\mu )=\\int U(\\nu (x))dx + \\int V(x) \\nu (x)dx$ with $U(s)=s\\log (s)$ the entropy function and $V(x)=-\\log (\\mu (x))$ .", "In this case, $\\nabla \\frac{\\partial {{\\mathcal {F}}}}{\\partial \\nu }= \\nabla \\log (\\nu ) + \\nabla V= \\nabla \\log (\\frac{\\nu }{\\mu })$ and equation eq:continuityequation1 leads to the classical Fokker-Planck equation $\\frac{\\partial {\\nu }}{\\partial t}= div(\\nu \\nabla V )+ \\Delta \\nu ,$ where $\\Delta $ is the Laplacian operator.", "It is well known (see for instance ) that the distribution of the Langevin diffusion in eq:langevindiffusion satisfies eq:Fokker-Planck, $dX_t= \\nabla \\log \\mu (X_t)dt+\\sqrt{2}dB_t.$ Here, $(B_t)_{t\\ge 0}$ is a $d$ -dimensional Brownian motion.", "While the entropy term in the $KL$ functional prevents the particles from \"crashing\" onto the mode of $\\mu $ , this role could be played by the interaction energy $W$ defined in eq:potentials for the MMD.", "Indeed, consider for instance the Gaussian kernel $k(x,x^{\\prime })=e^{-\\Vert x-x^{\\prime }\\Vert ^2}$ .", "It is convex, thus attractive, at long distances ($\\Vert x-x^{\\prime }\\Vert >1$ ) but concave, thus repulsive, at small distances.", "Convergence to a global minimum.", "The solution to the Fokker-Planck equation describing the gradient flow of the $KL$ can be shown to converge towards $\\mu $ under mild assumptions.", "This follows from the displacement convexity of the $KL$ along the Wasserstein geodesics.", "Unfortunately, the MMD is not displacement convex in general, as shown in subsection:barrieroptimization or
subsec:appendixlambdaconvexity.", "This makes the task of proving the convergence of the gradient flow of the MMD to the global optimum $\\mu $ much harder.", "Sampling algorithms derived from gradient flows.", "Two settings are usually encountered in the sampling literature: density-based, i.e.", "the target $\\mu $ is known up to a constant, or sample-based, i.e.", "only a set of samples $X \\sim \\mu $ is accessible.", "The Unadjusted Langevin Algorithm (ULA), which involves a time-discretized version of the Langevin diffusion, falls into the first category since it requires knowledge of $\\nabla \\log \\mu $ .", "In a sample-based setting, it may be difficult to adapt the ULA algorithm, since this would require estimating $\\nabla \\log (\\mu )$ from a set of samples of $\\mu $ before plugging this estimate into the update of the algorithm.", "This problem, sometimes referred to as score estimation in the literature, has been the subject of much work but remains difficult, especially in high dimensions (see ,,).", "In contrast, the discretized flow (in time and space) of the MMD presented in sec:samplebased is naturally adapted to the sample-based setting."
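To make the contrast with ULA concrete, here is a minimal numerical sketch, not the paper's implementation, of the space-time discretized flow in the sample-based setting: particles representing $\\nu _n$ are moved with $x \\mapsto x - \\gamma \\nabla f_{\\mu ,\\nu _n}(x)$ , where both expectations in $\\nabla f_{\\mu ,\\nu _n}$ are replaced by empirical averages, so only samples of $\\mu $ are needed (no score estimation). The Gaussian kernel and all constants are illustrative assumptions:

```python
import numpy as np

def grad_f(x, particles, targets, sigma=1.0):
    """Empirical estimate of grad f_{mu,nu}(x) = E_nu[grad_1 k(x, .)] - E_mu[grad_1 k(x, .)]
    for a Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def avg_grad(a, b):
        d = a[:, None, :] - b[None, :, :]                       # (n, m, dim)
        w = np.exp(-np.sum(d ** 2, axis=-1, keepdims=True) / (2 * sigma ** 2))
        return (-d / sigma ** 2 * w).mean(axis=1)               # (n, dim)
    return avg_grad(x, particles) - avg_grad(x, targets)

def mmd2(x, z, sigma=1.0):
    # Biased (V-statistic) estimate of MMD^2 between the two samples.
    k = lambda a, b: np.exp(
        -np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1) / (2 * sigma ** 2))
    return k(x, x).mean() + k(z, z).mean() - 2 * k(x, z).mean()

rng = np.random.default_rng(0)
z = rng.normal(2.0, 0.5, size=(200, 2))    # samples from the target mu
x = rng.normal(0.0, 0.5, size=(200, 2))    # particles representing nu_0
before = mmd2(x, z)
for _ in range(200):
    x = x - 0.1 * grad_f(x, x, z)          # Euler step: x <- x - gamma grad f
after = mmd2(x, z)
```

For a small enough step-size this update is exactly gradient descent on the empirical squared MMD, so the loss decreases along iterations, consistent with prop:decreasingfunctional (although, as discussed above, it need not reach the global optimum without regularization).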
], [ "Main assumptions", "We state here all the assumptions on the kernel $k$ used to prove all the results: $k$ is continuously differentiable on ${{\\mathcal {X}}}$ with $L$ -Lipschitz gradient: $\\Vert \\nabla k(x,x^{\\prime }) - \\nabla k(y,y^{\\prime })\\Vert \\le L(\\Vert x-y\\Vert + \\Vert x^{\\prime }-y^{\\prime } \\Vert ) $ for all $x,x^{\\prime },y,y^{\\prime } \\in {{\\mathcal {X}}}$ .", "$k$ is twice differentiable on ${{\\mathcal {X}}}$ .", "$\\Vert Dk(x,y) \\Vert \\le \\lambda $ for all $x,y\\in {{\\mathcal {X}}}$ , where $Dk(x,y)$ is a $d^2\\times d^2$ matrix with entries given by $\\partial _{x_{i}}\\partial _{x_{j}}\\partial _{x^{\\prime }_{i}}\\partial _{x_{j}^{\\prime }}k(x,y)$ .", "$ \\sum _{i=1}^d\\Vert \\partial _i k(x,\\cdot ) - \\partial _i k(y,\\cdot )\\Vert ^2_{{{\\mathcal {H}}}} \\le \\lambda ^2 \\Vert x-y\\Vert ^2 $ for all $x,y\\in {{\\mathcal {X}}}$ ." ], [ "Construction of the gradient flow of the MMD", "Continuous time flow.", "Existence and uniqueness of a solution to eq:continuitymmd,eq:mcKeanVlasovprocess is guaranteed under Lipschitz regularity of $\\nabla k$ .", "[Proof of prop:existenceuniqueness][Existence and uniqueness] Under assump:lipschitzgradientk, the map $(x,\\nu )\\mapsto \\nabla f_{\\mu ,\\nu }(x)=\\int \\nabla k(x,\\cdot )d \\nu - \\int \\nabla k(x,\\cdot )d \\mu $ is Lipschitz continuous on ${{\\mathcal {X}}}\\times \\mathcal {P}_2({{\\mathcal {X}}})$ (endowed with the product of the canonical metric on ${{\\mathcal {X}}}$ and $W_2$ on $\\mathcal {P}_2({{\\mathcal {X}}})$ ), see prop:gradwitnessfunction.", "Hence, we benefit from standard existence and uniqueness results for McKean-Vlasov processes (see ).", "Then, it is straightforward to verify that the distribution of eq:mcKeanVlasovprocess is a solution of eq:continuitymmd by Itô's formula (see ).", "The uniqueness of the gradient flow, given a starting distribution $\\nu _0$ , results from the $\\lambda $ -convexity of ${{\\mathcal {F}}}$ (for
$\\lambda =3L$ ) which is given by lem:lambdaconvexitybis, and .", "The existence derives from the fact that the sub-differential of ${{\\mathcal {F}}}$ is single-valued, as stated by prop:differentialmmd, and that any $\\nu _0$ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is in the domain of ${{\\mathcal {F}}}$ .", "One can then apply .", "[Proof of prop:decaymmd][Decay of the MMD] Recalling the discussion in subsec:gradientflowsfunctionals, the time derivative of ${{\\mathcal {F}}}(\\nu _t)$ along the flow is formally given by eq:dissipationenergy.", "But we know from prop:differentialmmd that the strong differential $\\nabla \\frac{\\delta {{\\mathcal {F}}}(\\nu )}{\\delta \\nu }$ is given by $\\nabla f_{\\mu ,\\nu }$ .", "Therefore, one formally obtains the desired expression by exchanging the order of derivation and integration, performing an integration by parts, and using the continuity equation (see (REF )).", "We refer to for similar calculations.", "One can also directly obtain the same result using the energy identity in , which holds for $\\lambda $ -displacement convex functionals.", "The result applies here since, by lem:lambdaconvexitybis, we know that ${{\\mathcal {F}}}$ is $\\lambda $ -displacement convex with $\\lambda = 3L$ .", "Time-discretized flow.", "We prove that eq:eulerscheme approximates eq:continuitymmd.", "To make the dependence on the step-size $\\gamma $ explicit, we will write: $\\nu _{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _n^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ (so $\\nu _n^{\\gamma }=\\nu _n$ for any $n \\ge 0$ ).", "We start by introducing an auxiliary sequence $\\bar{\\nu }_{n}^{\\gamma }$ built by iteratively applying $\\nabla f_{\\mu ,\\nu _{\\gamma n}}$ , where $\\nu _{\\gamma n}$ is the solution of eq:continuitymmd at time $t= \\gamma n$ : $\\bar{\\nu }_{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}})_{\\#}\\bar{\\nu }_{n}^{\\gamma }$ with $\\bar{\\nu }_{0}=\\nu _{0}$ .", "Note that the latter sequence
involves the continuous-time process $\\nu _t$ of eq:continuitymmd with $t=\\gamma n$ .", "Using $\\nu _n^{\\gamma }$ , we also consider the interpolation path $\\rho _{t}^{\\gamma }=(I-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ for all $t\\in [n\\gamma ,(n+1)\\gamma )$ and $n\\in \\mathbb {N}$ , which is the same as in prop:convergenceeulerscheme.", "[Proof of prop:convergenceeulerscheme] Let $\\pi $ be an optimal coupling between $\\nu _{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ , and $(x,y)$ a sample from $\\pi $ .", "For $t\\in [n\\gamma ,(n+1)\\gamma )$ we write $y_{t} =y_{n\\gamma }-\\int _{n\\gamma }^{t}\\nabla f_{\\mu ,\\nu _{u}}(y_u)\\mathop {}\\!\\mathrm {d}u$ and $x_{t} =x-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ where $y_{n\\gamma }= y$ .", "We also introduce the approximation error $ E(t,n\\gamma ):=y_{t}-y+(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y)$ for which we know by lem:Taylor-expansion that $\\mathcal {E}(t,n\\gamma ):=\\mathbb {E}[\\Vert E(t,n\\gamma )\\Vert ^2]^{\\frac{1}{2}}$ is upper-bounded by $(t-n\\gamma )^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "This allows us to write: $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t}) & \\le \\mathbb {E}\\left[\\left\\Vert y-x+(t-n\\gamma )(\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y))+E(t,n\\gamma )\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+4L(t-n\\gamma )W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+\\mathcal {E}(t,n\\gamma )\\\\& \\le (1+4\\gamma L)W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+(t-\\gamma n)^2C\\\\ &\\le (1+4\\gamma L)\\left( W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })+W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma }) \\right)+\\gamma ^{2}C \\\\& \\le \\gamma \\left[\\left(1+4\\gamma L\\right)M(T)+\\gamma C\\right]$ The second line is obtained using that $\\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)$
is jointly $2L$ -Lipschitz in $x$ and $\\nu $ (see prop:gradwitnessfunction) and by the fact that $W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n}) = \\mathbb {E}_{\\pi }[\\Vert y-x\\Vert ^2]^{\\frac{1}{2}}$ .", "The third one is obtained using $t-n \\gamma \\le \\gamma $ .", "For the last inequality, we used lem:eulererror1,lem:eulererror2 where $M(T)$ is a constant that depends only on $T$ .", "Hence for $\\gamma \\le \\frac{1}{4L}$ we get $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t})\\le \\gamma (\\frac{C}{4L}+2M(T)).$ Lemma 10 For any $n\\ge 0$ : $W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{2L}(e^{2n\\gamma L}-1)$ Let $\\pi $ be an optimal coupling between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ and $(\\bar{x},x)$ a joint sample from $\\pi $ .", "Consider also the joint sample $(\\bar{y},y)$ obtained from $(\\bar{x},x)$ by applying the gradient flow of ${{\\mathcal {F}}}$ in continuous time to get $y := x_{(n+1)\\gamma }=x_{n \\gamma }-\\int _{n\\gamma }^{(n+1)\\gamma }\\nabla f_{\\mu ,\\nu _{u}}(x_u)\\mathop {}\\!\\mathrm {d}u$ with $x_{n\\gamma } = x$ and by taking a discrete step from $\\bar{x}$ to write $\\bar{y}=\\bar{x}-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})$ .", "It is easy to see that $y\\sim \\nu _{\\gamma (n+1)}$ (i.e.", "a sample from the continuous process eq:continuitymmd at time $t=(n+1)\\gamma $ ) and $\\bar{y}\\sim \\bar{\\nu }_{n+1}^{\\gamma }$ (i.e.", "a sample from eq:intermedprocesstime).", "Moreover, we introduce the approximation error $E((n+1)\\gamma ,n\\gamma ):=y-x+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)$ for which we know by lem:Taylor-expansion that $\\mathcal {E}((n+1)\\gamma ,n\\gamma ):=\\mathbb {E}[\\Vert E((n+1)\\gamma ,n\\gamma )\\Vert ^2]^{\\frac{1}{2}}$ is upper-bounded by $\\gamma ^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "Denoting by $a_{n}=W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })$ , one can therefore write:
$a_{n+1}\\le & \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)-\\bar{x}+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})+E((n+1)\\gamma ,n\\gamma )\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\\\le & \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}+\\gamma \\mathbb {E_{\\pi }}\\left[\\left\\Vert \\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)-\\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}+\\gamma ^{2}C$ Using that $\\nabla f_{\\mu ,\\nu _{\\gamma n}}$ is $2L$ -Lipschitz by prop:gradwitnessfunction and recalling that $\\mathbb {E}_{\\pi }\\left[\\Vert x-\\bar{x}\\Vert ^{2}\\right]^{\\frac{1}{2}}=W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })$ , we get the recursive inequality $a_{n+1}\\le (1+2 \\gamma L)a_{n}+\\gamma ^{2}C$ .", "Finally, using lem:Discrete-Gronwall-lemma and recalling that $a_{0}=0$ , since by definition $\\bar{\\nu }_{0}^{\\gamma }=\\nu _{0}^{\\gamma }$ , we conclude that $a_{n}\\le \\gamma \\frac{C}{2L}(e^{2n\\gamma L}-1)$ .", "Lemma 11 For any $T>0$ and $n$ such that $n\\gamma \\le T$ , $W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ Consider now an optimal coupling $\\pi $ between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "Similarly to lem:eulererror1, we denote by $(\\bar{x},x)$ a joint sample from $\\pi $ and $(\\bar{y},y)$ is obtained from $(\\bar{x},x)$ by applying the discrete updates: $\\bar{y}=\\bar{x}-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})$ and $y=x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ .", "We again have that $y\\sim \\nu _{n+1}^{\\gamma }$ (i.e.", "a sample from the time discretized process eq:eulerscheme) and $\\bar{y}\\sim \\bar{\\nu }_{n+1}^{\\gamma }$ (i.e.", "a sample from eq:intermedprocesstime).", "Now, denoting by $b_{n}=W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })$ , it is easy to see from the
definition of $\\bar{y}$ and $y$ that we have: $b_{n+1} & \\le \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\bar{x}+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\&\\le (1+2\\gamma L) \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^2\\right]^{\\frac{1}{2}} + 2\\gamma L W_2(\\nu _n^{\\gamma },\\nu _{\\gamma n}))\\\\& \\le (1+ 4\\gamma L)b_n + \\gamma L W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})$ The second line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is $2L$ -Lipschitz in both $x$ and $\\nu $ by prop:gradwitnessfunction.", "The third line follows by triangular inequality and using $\\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^2\\right]^{\\frac{1}{2}}= W_2(\\nu _n^{\\gamma },\\bar{\\nu }_n^{\\gamma }) = b_n$ , since $\\pi $ is an optimal coupling between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "By lem:eulererror1, we have $W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})\\le \\gamma \\frac{C}{2L}(e^{2n\\gamma L}-1)$ , hence, for any $n$ such that $n\\gamma \\le T$ we get the recursive inequality $b_{n+1}\\le (1+4\\gamma L)b_{n}+(C/2L)\\gamma ^{2}(e^{2TL}-1).$ Finally, using again lem:Discrete-Gronwall-lemma, it follows that $b_{n}\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ .", "Lemma 12 [Taylor expansion] Consider the process $ \\dot{x}_t = - \\nabla f_{\\mu ,\\nu _t}(x_t) $ , and denote by $\\mathcal {E}(t,s) = \\mathbb {E}[ \\Vert x_t - x_s +(t-s)\\nabla f_{\\mu ,\\nu _s}(x_s) \\Vert ^2 ]^{\\frac{1}{2}} $ for $0\\le s \\le t \\le T$ .", "Then one has: $\\mathcal {E}(t,s)\\le 2L^2 r_0 e^{LT}(t-s)^2$ with $r_0 = \\mathbb {E}_{(x,z)\\sim \\nu _0 \\otimes \\mu }[\\Vert x-z \\Vert ]$ By definition of $x_t$ and $\\mathcal {E}(t,s)$ one can write: $\\mathcal {E}(t,s)&=\\mathbb {E}\\left[\\left\\Vert \\int _{s}^t (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u))\\mathop 
{}\\!\\mathrm {d}u \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\\\&\\le \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u)) \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le 2L\\int _{s}^t \\mathbb {E}\\left[(\\left\\Vert x_s - x_u \\right\\Vert + W_2(\\nu _s,\\nu _u))^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u \\le 4L\\int _{s}^t \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u$ where we used an integral expression for $x_t$ in the first line, then applied a triangular inequality for the second line.", "The last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ by prop:gradwitnessfunction and that $W_2(\\nu _s,\\nu _u) \\le \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}$ .", "Now we again use an integral expression for $x_u$ , which further gives: $\\mathcal {E}(t,s) \\le & 4L \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert \\int _s^u \\nabla f_{\\mu ,\\nu _l}(x_l) \\mathop {}\\!\\mathrm {d}l \\right\\Vert ^2 \\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u\\\\\\le & 4L \\int _{s}^t \\int _s^u \\mathbb {E}\\left[ \\left\\Vert \\mathbb {E}\\left[ \\nabla _1 k(x_l,x_l^{\\prime }) - \\nabla _1 k(x_l,z) \\right] \\right\\Vert ^2 \\right]^\\frac{1}{2}\\mathop {}\\!\\mathrm {d}l\\mathop {}\\!\\mathrm {d}u\\\\\\le &4L^2 \\int _{s}^t \\int _s^u \\mathbb {E}\\left[\\left\\Vert x_l^{\\prime } - z \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u$ Again, the second line is obtained using a triangular inequality and recalling the expression of $\\nabla f_{\\mu ,\\nu }(x)$ from prop:gradwitnessfunction.", "The last line uses that $\\nabla k$ is $L$ -Lipschitz by assump:lipschitzgradientk.", "Now we need to make sure that $\\Vert x_l^{\\prime } - z \\Vert $ remains bounded at finite times.", "For this we will
first show that $ r_t = \\mathbb {E}[\\Vert x_t - z \\Vert ]$ satisfies an integro-differential inequality: $r_t\\le & \\mathbb {E}\\left[\\left\\Vert x_0 - z -\\int _0^t \\nabla f_{\\mu ,\\nu _s}(x_s) \\mathop {}\\!\\mathrm {d}s \\right\\Vert \\right]\\\\\\le &r_0 +\\int _0^t \\mathbb {E}\\left[\\left\\Vert \\nabla _1 k(x_s,x_s^{\\prime })- \\nabla _1 k(x_s,z) \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}s\\le r_0 + L\\int _0^t r_s \\mathop {}\\!\\mathrm {d}s$ Again, we used an integral expression for $x_t$ in the first line, then a triangular inequality recalling the expression of $\\nabla f_{\\mu ,\\nu _s}$ .", "The last line uses again that $\\nabla k$ is $L$ -Lipschitz.", "By Gronwall's lemma it is easy to see that $r_t \\le r_0e^{Lt}$ at all times.", "Moreover, for all $t\\le T$ we have a fortiori that $r_t \\le r_0 e^{LT}$ .", "Recalling back the upper-bound on $\\mathcal {E}(t,s)$ we have finally: $\\mathcal {E}(t,s)\\le 4L^2 r_0 e^{LT} \\int _{s}^t \\int _s^u \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u = 2L^2 r_0 e^{LT}(t-s)^2$ We show now that eq:eulerscheme decreases the functional ${{\\mathcal {F}}}$ .", "In all the proofs, the step-size $\\gamma $ is fixed.", "[Proof of prop:decreasingfunctional] Consider a path between $\\nu _n$ and $\\nu _{n+1}$ of the form $\\rho _t =(I-\\gamma t\\nabla f_{\\mu ,\\nu _n})_{\\#}\\nu _n$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ Lipschitz, thus by lem:derivativemmdaugmented and using $\\phi (x) = -\\gamma \\nabla f_{\\mu ,\\nu _n}(x)$ , $\\psi (x) = x$ and $q = \\nu _n$ it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and hence absolutely continuous.", "Therefore one can write: $\\mathcal {F}(\\rho _1)-\\mathcal {F}(\\rho _0) = \\dot{\\mathcal {F}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)dt.$ Moreover, lem:derivativemmdaugmented also allows to write: $\\dot{\\mathcal {F}}(\\rho _0) = -\\gamma 
\\int \\Vert \\nabla f_{\\mu ,\\nu _n}(x) \\Vert ^2 d\\nu _n(x); \\qquad \\vert \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3L t\\gamma ^2 \\int \\Vert \\nabla f_{\\mu ,\\nu _n}(X) \\Vert ^2 d\\nu _n(X).$ where $t\\le 1$ . Hence, the result follows directly by applying the above expression to eq:taylorexpansiondecreasing.

Convergence of the gradient flow of the MMD

Equilibrium condition

We discuss here the equilibrium condition eq:equilibriumcondition and relate it to . Recall that eq:equilibriumcondition is given by: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = 0$ . Under some mild assumptions on the kernel, which are stated in , it is possible to write eq:equilibriumcondition as: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = \\langle f_{\\mu ,\\nu ^{*}} , D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}}\\rangle _{{{\\mathcal {H}}}} = 0$ where $D_{\\nu ^{*}}$ is a Hilbert-Schmidt operator given by: $D_{\\nu ^{*}} = \\int \\sum _{i=1}^d \\partial _i k(x,.)\\otimes \\partial _i k(x,.) \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x)$ Hence eq:equilibriumcondition is equivalent to saying that $f_{\\mu ,\\nu ^{*}}$ belongs to the null space of $D_{\\nu ^{*}}$ . In , a similar equilibrium condition is derived by considering the time derivative of the MMD along the KSD gradient flow: $\\frac{1}{2} \\frac{d}{dt} MMD^2(\\mu ,\\nu _t) = - \\lambda \\langle f_{\\mu ,\\nu _t}, (\\frac{1}{\\lambda }I - (D_{\\nu _t} +\\lambda I )^{-1})f_{\\mu ,\\nu _t} \\rangle _{{{\\mathcal {H}}}}$ The r.h.s. is shown to be always non-positive and thus the MMD decreases in time. Hence, as $t$ approaches $\\infty $ , the r.h.s. tends to 0 since the MMD converges to some limit value $l$ . This provides the equilibrium condition: $\\lambda \\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal 
{H}}}} = 0$ It is further shown in that the above equation is also equivalent to having $f_{\\mu ,\\nu ^{*}}$ in the null space of $D_{\\nu ^{*}}$ in the case when $D_{\\nu ^{*}}$ has finite dimensions. We generalize this statement to infinite dimensions in prop:nullspacediffoperator. In , it is simply assumed that if $f_{\\mu ,\\nu ^{*}} \\ne 0$ then $D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}} \\ne 0 $ , which exactly amounts to assuming that local optima which are not global do not exist.

Proposition 13 $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0 \\iff f_{\\mu ,\\nu ^{*}} \\in null(D_{\\nu ^{*}})$

This follows simply by recalling that $D_{\\nu ^{*}}$ is a symmetric non-negative Hilbert-Schmidt operator; it therefore has an eigendecomposition of the form: $D_{\\nu ^{*}} = \\sum _{i=1}^{\\infty } \\lambda _i e_i \\otimes e_i$ where $(e_i)$ is an orthonormal basis of ${{\\mathcal {H}}}$ and the $\\lambda _i$ are non-negative. Moreover, $f_{\\mu ,\\nu ^{*}}$ can be decomposed in the basis $(e_i)_{i\\ge 1}$ as: $f_{\\mu ,\\nu ^{*}} = \\sum _{i=1}^{\\infty } \\alpha _i e_i$ where $(\\alpha _i)$ is a square-summable sequence. It follows that $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}}$ can be written as: $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = \\sum _{i=1}^{\\infty } \\frac{\\lambda _i}{\\lambda _i+\\lambda } \\alpha _i^2$ Hence, if $f_{\\mu ,\\nu ^{*}}\\in null(D_{\\nu ^{*}})$ then $\\langle f_{\\mu ,\\nu ^{*}}, D_{\\nu ^{*}}f_{\\mu ,\\nu ^{*}}\\rangle _{{{\\mathcal {H}}}}= 0$ , so that $\\sum _{i=1}^{\\infty } \\lambda _i \\alpha _i^2 = 0$ . Since the $\\lambda _i$ are non-negative, this implies that $\\lambda _i \\alpha _i^2= 0$ for all $i$ . Therefore, it must be that $\\langle 
f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0$ . Similarly, if $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} =0 $ then $\\frac{\\lambda _i\\alpha _i^2}{\\lambda _i + \\lambda } = 0$ for all $i$ , hence $\\langle f_{\\mu ,\\nu ^{*}}, D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0$ . This means that $f_{\\mu ,\\nu ^{*}}$ belongs to $null(D_{\\nu ^*})$ .

$\\Lambda $ -displacement convexity of the MMD

We now provide a proof of prop:lambdaconvexity: [Proof of prop:lambdaconvexity][$\\Lambda $ -displacement convexity of the MMD] To prove that $\\nu \\mapsto {{\\mathcal {F}}}(\\nu )$ is $\\Lambda $ -convex we need to compute the second time derivative $\\ddot{{{\\mathcal {F}}}}(\\rho _{t})$ where $(\\rho _{t})_{t \\in [0,1]}$ is a displacement geodesic between two probability distributions $\\nu _{0}$ and $\\nu _{1}$ as defined in eq:displacementgeodesic. Such a geodesic always exists and can be written as $\\rho _t = (s_t)_{\\#}\\pi $ with $s_t = x + t(y-x)$ for all $t\\in [0,1]$ , where $\\pi $ is an optimal coupling between $\\nu _0$ and $\\nu _1$ (, Theorem 5.27). We denote by $V_t$ the corresponding velocity vector as defined in eq:continuityequation. Recall that ${{\\mathcal {F}}}(\\rho _t) = \\frac{1}{2} \\Vert f_{\\mu ,\\rho _t}\\Vert ^2_{\\mathcal {H}}$ , with $f_{\\mu ,\\rho _t}$ defined in eq:witnessfunction. We start by computing the first derivative of $ t\\mapsto {{\\mathcal {F}}}(\\rho _t) $ . Since assump:diffkernel,assump:lipschitzgradientk hold, lem:secondderivativeaugmentedmmd applies for $\\phi (x,y) = y-x$ , $\\psi (x,y) = x$ and $q = \\pi $ , thus we know that $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ is well defined and given by: $\\begin{split}\\ddot{{{\\mathcal {F}}}}(\\rho _t) =&\\mathbb {E}\\left[ (y-x)^T\\nabla _1 \\nabla _2 
k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))(y^{\\prime }-x^{\\prime })\\right]\\\\&+ \\mathbb {E}\\left[ (y-x)^T( H_1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-H_1 k(s_t(x,y),z))(y-x)\\right]\\end{split}$ Moreover, assump:boundedfourthoder also holds which means by lem:secondderivativeaugmentedmmd that the second term in eq:hessian can be lower-bounded by $-\\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t)\\mathbb {E}[ \\Vert y-x \\Vert ^2]$ so that: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) =\\mathbb {E}\\left[ (y-x)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))(y^{\\prime }-x^{\\prime })\\right] - \\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t) \\mathbb {E}[ \\Vert y-x \\Vert ^2]$ Recall now that $(\\rho _t)_{t \\in [0,1]}$ is a constant speed geodesic with velocity vector $(V_t)_{t\\in [0,1]}$ thus by a change of variable, one further has: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\int \\left[ V_t^T(x)\\nabla _1 \\nabla _2 k(x,x^{\\prime })V_t(x^{\\prime })\\right]\\mathop {}\\!\\mathrm {d}\\rho _t(x) - \\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t) \\int \\Vert V_t(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho _t(x).$ Now we can introduce the function $\\Lambda (\\rho ,v) = \\langle v ,( C_{\\rho } -\\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho )^{\\frac{1}{2}} I) v \\rangle _{L_2(\\rho )}$ which is defined for any pair $(\\rho ,v)$ with $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ and $v$ a square integrable vector field in $L_2(\\rho )$ and where $C_{\\rho }$ is a non-negative operator given by $(C_{\\rho }v)(x)=\\int \\nabla _{x}\\nabla _{x^{\\prime }}k(x,x^{\\prime })v(x^{\\prime })d\\rho (x^{\\prime })$ for any $x \\in {{\\mathcal {X}}}$ .", "This allows to write $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\Lambda (\\rho _t,V_t)$ .", "It is clear that $\\Lambda (\\rho ,.", ")$ is a quadratic form on $L_2(\\rho )$ and satisfies the requirement in def:conditionslambda.", "Finally, using lem:integrallambdaconvexity and def:lambda-convexity we 
conclude that ${{\\mathcal {F}}}$ is $\\Lambda $ -convex.", "Moreover, by the reproducing property we also know that for all $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ : $ \\mathbb {E}_{\\rho }\\left[ v(x)^T \\nabla _1 \\nabla _2 k(x,x^{\\prime }) v(x^{\\prime }) \\right] = \\mathbb {E}_{\\rho }\\left[\\left\\langle v(x)^T \\nabla _1 k(x,.", "), v(x^{\\prime })^T \\nabla _1k(x^{\\prime },.)", "\\right\\rangle _{{{\\mathcal {H}}}}\\right].$ By Bochner integrability of $v(x)^T \\nabla _1 k(x,.", ")$ it is possible to exchange the order of the integral and the inner-product .", "This leads to the expression $\\Vert \\mathbb {E}[v(x)^T \\nabla _1 k(x,.", ")]\\Vert ^2_{{{\\mathcal {H}}}}$ .", "Hence $\\Lambda (\\rho ,v)$ has a second expression of the form: $\\Lambda (\\rho ,v) = \\left\\Vert \\mathbb {E}_{\\rho }\\left[v(x)^T \\nabla _1 k(x,.", ")\\right]\\right\\Vert ^2_{{{\\mathcal {H}}}} - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho )^{\\frac{1}{2}}\\mathbb {E}_{\\rho }\\left[\\left\\Vert v(x)\\right\\Vert ^2 \\right].$ We also provide a result showing $\\Lambda $ convexity for ${{\\mathcal {F}}}$ only under assump:lipschitzgradientk: Lemma 14 ($\\Lambda $ -displacement convexity) Under assump:lipschitzgradientk, for any $\\nu ,\\nu ^{\\prime }\\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any constant speed geodesic $\\rho _t$ from $\\nu $ to $\\nu ^{\\prime }$ , ${{\\mathcal {F}}}$ satisfies for all $0\\le t\\le 1$ : ${{\\mathcal {F}}}(\\rho _t) \\le (1-t){{\\mathcal {F}}}(\\nu ) + t{{\\mathcal {F}}}(\\nu ^{\\prime }) + 3L W_2^2(\\nu ,\\nu ^{\\prime }) \\qquad $ Let $\\rho _t$ be a constant speed geodesic of the form $\\rho _t = s_t{\\#}\\pi $ where $\\pi $ is an optimal coupling between $\\nu $ and $\\nu ^{\\prime }$ and $s_t(x,y) = x + t(y-x)$ .", "Since assump:lipschitzgradientk holds, one can apply lem:derivativemmdaugmented with $\\psi (x,y) =x $ , $\\phi (x,y)= y-x$ and $q = \\pi $ .", "Hence, one has that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable 
and its differential satisfies: $\\vert \\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\vert \\le 3L\\vert t-s \\vert \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ This implies that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is Lipschitz continuous and therefore is differentiable for almost all $t\\in [0,1]$ by Rademacher's theorem.", "Hence, $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ is well defined for almost all $t\\in [0,1]$ .", "Moreover, from the above inequality it follows that $\\ddot{{{\\mathcal {F}}}}(\\rho _t)\\ge - 3L \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y) = -3LW_2^2(\\nu ,\\nu ^{\\prime })$ for almost all $t\\in [0,1]$ .", "Using lem:integrallambdaconvexity it follows directly that ${{\\mathcal {F}}}$ satisfies the desired inequality.", "Descent up to a barrier To provide a proof of th:ratesmmd, we need the following preliminary results.", "Firstly, an upper-bound on a scalar product involving $\\nabla f_{\\mu , \\nu }$ for any $\\mu , \\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ in terms of the loss functional ${{\\mathcal {F}}}$ , is obtained using the $\\Lambda $ -displacement convexity of ${{\\mathcal {F}}}$ in lem:gradflowlambdaversion.", "Then, an EVI (Evolution Variational Inequality) is obtained in prop:evi on the gradient flow of ${{\\mathcal {F}}}$ in $W_2$ .", "The proof of the theorem is given afterwards.", "Lemma 15 Let $\\nu $ be a distribution in $\\mathcal {P}_2({{\\mathcal {X}}})$ and $\\mu $ the target distribution such that ${{\\mathcal {F}}}(\\mu )=0$ .", "Let $\\pi $ be an optimal coupling between $\\nu $ and $\\mu $ , and $(\\rho _t)_{t \\in [0,1]}$ the displacement geodesic defined by eq:displacementgeodesic with its corresponding velocity vector $(V_t)_{t\\in [0,1]}$ as defined in eq:continuityequation.", "Finally let $\\nabla f_{\\nu ,\\mu }(X)$ be the gradient of the unnormalised witness function between $\\mu $ and $\\nu $ .", "The following inequality holds: $\\int 
\\nabla f_{\\mu , \\nu }(x).(y-x) d\\pi (x,y)\\le {{\\mathcal {F}}}(\\mu )- {{\\mathcal {F}}}(\\nu ) -\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds$ where $\\Lambda $ is defined in prop:lambdaconvexity. Recall that for all $t\\in [0,1]$ , $\\rho _t$ is given by $\\rho _t = (s_t)_{\\#}\\pi $ with $s_t = x + t(y-x)$ . By $\\Lambda $ -convexity of $\\mathcal {F}$ the following inequality holds: $\\mathcal {F}(\\rho _{t})\\le (1-t)\\mathcal {F}(\\nu )+t \\mathcal {F}(\\mu ) - \\int _0^1 \\Lambda (\\rho _s,V_s)G(s,t)ds$ Hence, bringing $\\mathcal {F}(\\nu )$ to the l.h.s., dividing by $t$ and letting $t$ go to 0, it follows that: $\\dot{{{\\mathcal {F}}}}(\\rho _t)\\vert _{t=0}\\le \\mathcal {F}(\\mu )-\\mathcal {F}(\\nu )-\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds.$ where $\\dot{{{\\mathcal {F}}}}(\\rho _t)=d{{\\mathcal {F}}}(\\rho _t)/dt$ and since $\\lim _{t \\rightarrow 0}G(s,t)=(1-s)$ . Moreover, under assump:lipschitzgradientk, lem:derivativemmdaugmented applies for $\\phi (x,y) = y-x$ , $\\psi (x,y)= x$ and $q = \\pi $ . It follows therefore that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is differentiable with time derivative given by: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\int \\nabla f_{\\mu ,\\rho _t}(s_t(x,y)).(y-x)\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ . Hence at $t=0$ we get: $\\dot{{{\\mathcal {F}}}}(\\rho _t)\\vert _{t=0} = \\int \\nabla f_{\\mu ,\\nu }(x).(y-x)\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ which shows the desired result when used in eq:firstorderlambda.

Proposition 16 Consider the sequence of distributions $\\nu _n$ obtained from eq:eulerscheme. For $n\\ge 0$ , consider the scalar $ K(\\rho ^n) := \\int _0^1\\Lambda (\\rho _s^n,V_s^n)(1-s)\\mathop {}\\!\\mathrm {d}s$ where $(\\rho _s^n)_{0\\le s\\le 1}$ is a constant speed displacement geodesic from $\\nu _n$ to the optimal value $\\mu $ with velocity vectors $(V_s^n)_{0\\le s\\le 1}$ . If $\\gamma \\le 1/(3L)$ , where $L$ is the Lipschitz constant of $\\nabla k$ 
in assump:lipschitzgradientk, then: $2\\gamma ({{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal {F}}}(\\mu ))\\le W_2^2(\\nu _n,\\mu )-W_2^2(\\nu _{n+1},\\mu )-2\\gamma K(\\rho ^n).$ Let $\\Pi ^n$ be the optimal coupling between $\\nu _n$ and $\\mu $ , then the optimal transport between $\\nu _n$ and $\\mu $ is given by: $W_2^2(\\mu ,\\nu _n)=\\int \\Vert X-Y \\Vert ^2 d\\Pi ^n(\\nu _n,\\mu )$ Moreover, consider $Z=X-\\gamma \\nabla f_{\\mu , \\nu _n}(X)$ where $(X,Y)$ are samples from $\\pi ^n$ .", "It is easy to see that $(Z,Y)$ is a coupling between $\\nu _{n+1}$ and $\\mu $ , therefore, by definition of the optimal transport map between $\\nu _{n+1}$ and $\\mu $ it follows that: $W_2^2(\\nu _{n+1},\\mu )\\le \\int \\Vert X-\\gamma \\nabla f_{\\mu , \\nu _n}(X)-Y\\Vert ^2 d\\pi ^n(\\nu _n,\\mu )$ By expanding the r.h.s in eq:optimalupper-bound, the following inequality holds: $W_2^2(\\nu _{n+1},\\mu )\\le W_2^2(\\nu _{n},\\mu ) -2\\gamma \\int \\langle \\nabla f_{\\mu , \\nu _n}(X), X-Y \\rangle d\\pi ^n(\\nu _n,\\mu )+ \\gamma ^2D(\\nu _n)$ where $D(\\nu _n) = \\int \\Vert \\nabla f_{\\mu , \\nu _n}(X)\\Vert ^2 d\\nu _n $ .", "By lem:gradflowlambdaversion it holds that: $-2\\gamma \\int \\nabla f_{\\mu , \\nu _n}(X).", "(X-Y) d\\pi (\\nu ,\\mu )\\le -2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right)$ where $(\\rho ^n_t)_{0\\le t \\le 1}$ is a constant-speed geodesic from $\\nu _n$ to $\\mu $ and $K(\\rho ^n):=\\int _0^1 \\Lambda (\\rho ^n_s,v^n_s)(1-s)ds$ .", "Note that when $K(\\rho ^n)\\le 0$ it falls back to the convex setting.", "Therefore, the following inequality holds: $W_2^2(\\nu _{n+1},\\mu )\\le W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) +\\gamma ^2 D(\\nu _n)$ Now we introduce a term involving ${{\\mathcal {F}}}(\\nu _{n+1})$ .", "The above inequality becomes: $W_2^2(\\nu _{n+1},\\mu )\\le & W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal 
{F}}}(\\nu _{n+1})- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) \\\\&+\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1}))$ It is possible to upper-bound the last two terms on the r.h.s. by a negative quantity when the step-size is small enough. This is mainly a consequence of the smoothness of the functional ${{\\mathcal {F}}}$ and the fact that $\\nu _{n+1}$ is obtained by following the steepest direction of ${{\\mathcal {F}}}$ starting from $\\nu _n$ . prop:decreasingfunctional makes this statement more precise and yields the following inequality: $\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1}))\\le -\\gamma ^2 (1-3\\gamma L)D(\\nu _n),$ where $L$ is the Lipschitz constant of $\\nabla k$ . Combining eq:mainineq2 and eq:decreasingfunctional we finally get: $2\\gamma ({{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal {F}}}(\\mu ))+\\gamma ^2(1-3\\gamma L)D(\\nu _n)\\le W_2^2(\\nu _n,\\mu )-W_2^2(\\nu _{n+1},\\mu )-2\\gamma K(\\rho ^n).$ Under the condition $\\gamma \\le 1/(3L)$ we thus recover the desired result. We can now give the proof of th:ratesmmd.

[Proof of th:ratesmmd] Consider the Lyapunov function $L_j = j \\gamma ({{\\mathcal {F}}}(\\nu _j) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )$ for any iteration $j$ . At iteration $j+1$ , we have: $L_{j+1} &= j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _{j+1},\\mu )\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j)\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j) -j\\gamma ^2 (1-\\frac{3}{2} \\gamma L )\\int \\Vert \\nabla f_{\\mu , \\nu _j}(X)\\Vert ^2 d\\nu _j \\\\&\\le L_j - \\gamma K(\\rho ^j).$ 
where we used prop:evi and prop:decreasingfunctional successively for the first two inequalities. We thus get by telescopic summation: $L_n \\le L_0 -\\gamma \\sum _{j = 0}^{n-1} K(\\rho ^j)$ Let $\\bar{K}$ denote the average value of $(K(\\rho ^j))_{0\\le j \\le n}$ over iterations up to $n$ . We can now write the final result: ${{\\mathcal {F}}}(\\nu _{n}) - {{\\mathcal {F}}}(\\mu ) \\le \\frac{W_2^2(\\nu _0, \\mu )}{2 \\gamma n} -\\bar{K}$

Lojasiewicz-type inequalities

Given a probability distribution $\\nu $ , the weighted Sobolev semi-norm is defined for all square-integrable functions $f$ in $L_2(\\nu )$ as $ \\Vert f \\Vert _{\\dot{H}(\\nu )} = \\left(\\int \\left\\Vert \\nabla f(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}}$ with the convention $\\Vert f \\Vert _{\\dot{H}(\\nu )} = +\\infty $ if $f$ does not have a square-integrable gradient. The negative weighted Sobolev distance $ \\Vert . \\Vert _{\\dot{H}^{-1}(\\nu )} $ is then defined on distributions as the dual norm of $ \\Vert .\\Vert _{\\dot{H}(\\nu )} $ . For convenience, we recall the definition of $ \\Vert . \\Vert _{\\dot{H}^{-1}(\\nu )} $ :

Definition 5 Let $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , with its corresponding weighted Sobolev semi-norm $ \\Vert . \\Vert _{\\dot{H}(\\nu )} $ . The weighted negative Sobolev distance $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )}$ between any $p$ and $q$ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is defined as $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )} = \\sup _{f\\in L_2(\\nu ), \\Vert f \\Vert _{\\dot{H}(\\nu )} \\le 1 } \\left|\\int f(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int f(x)\\mathop {}\\!\\mathrm {d}q(x) \\right|$ with possibly infinite values. There are several possible choices for the set of test functions $f$ . While it is often required that $f$ vanishes at the boundary (see ), we do not make such a restriction and rather use the definition from . We refer to for more 
discussion on the relationship between different choices for the set of test functions.", "We provide now a proof for prop:lojasiewicz.", "[Proof of prop:lojasiewicz] This proof follows simply from the definition of the negative Sobolev distance.", "Under assump:lipschitzgradientk, the kernel has at most quadratic growth hence, for any $\\mu ,\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})^2$ , $f_{\\mu ,\\nu }\\in L_2(\\nu )$ .", "Consider $g = \\Vert f_{\\mu , \\nu _t}\\Vert ^{-1}_{\\dot{H}(\\nu _t)} f_{\\mu , \\nu _t}$ , then $g\\in L_2(\\nu _t)$ and $\\Vert g \\Vert _{\\dot{H}(\\nu _t)}\\le 1$ .", "Therefore, we directly have: $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop {}\\!\\mathrm {d}\\mu \\right|\\le \\left\\Vert \\nu _t - \\mu \\right\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ Now, recall the definition of $g$ , which implies that $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop {}\\!\\mathrm {d}\\mu \\right|= \\left\\Vert \\nabla f_{\\mu , \\nu _t}\\right\\Vert ^{-1}_{L_2(\\nu _t)} \\left|\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu , \\nu _t} \\mathop {}\\!\\mathrm {d}\\mu \\right|.$ Moreover, we have that $\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu ,\\nu _t}\\mathop {}\\!\\mathrm {d}\\mu = \\Vert f_{\\mu , \\nu _t}\\Vert ^2_{{{\\mathcal {H}}}}$ , since $f_{\\mu , \\nu _t}$ is the unnormalised witness function between $\\nu _t$ and $\\mu $ .", "Combining eq:loja1 and eq:loja2 we thus get the desired Lojasiewicz inequality on $f_{\\mu ,\\nu _t}$ : $\\Vert f_{\\mu ,\\nu _t} \\Vert ^2_{\\mathcal {H}} \\le \\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)} \\Vert \\mu -\\nu _t\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ where $\\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)}=\\Vert \\nabla f_{\\mu , \\nu _t} \\Vert _{L_2(\\nu _t)}$ by definition.", "Then, using prop:decaymmd and recalling by assumption that: $\\Vert \\mu - \\nu _t \\Vert ^2_{\\dot{H}^{-1}(\\nu _t)} \\le C$ , 
we have: $\\dot{{{\\mathcal {F}}}}(\\nu _t) = - \\Vert \\nabla f_{\\mu , \\nu _t} \\Vert ^2_{L_2(\\nu _t)} \\le -\\frac{1}{C}\\Vert f_{\\mu ,\\nu _t} \\Vert ^4_{\\mathcal {H}}= -\\frac{4}{C}{{\\mathcal {F}}}(\\nu _t)^2 $ It is clear that if $\\mathcal {F}(\\nu _0)>0$ then ${{\\mathcal {F}}}(\\nu _t)>0$ at all times by uniqueness of the solution. Hence, one can divide by ${{\\mathcal {F}}}(\\nu _t)^2$ and integrate the inequality from 0 to some time $t$ . The desired inequality is obtained by simple calculations. Then, using prop:decreasingfunctional and eq:PLinequality where $\\nu _t$ is replaced by $\\nu _n$ it follows: ${{\\mathcal {F}}}(\\nu _{n+1}) - {{\\mathcal {F}}}(\\nu _n) \\le -\\gamma \\left(1-\\frac{3}{2} L\\gamma \\right)\\Vert \\nabla f_{\\mu ,\\nu _n}\\Vert _{L_2(\\nu _n)}^2 \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right){{\\mathcal {F}}}(\\nu _n)^2.$ Dividing both sides of the inequality by $ {{\\mathcal {F}}}(\\nu _n){{\\mathcal {F}}}(\\nu _{n+1})$ and recalling that ${{\\mathcal {F}}}(\\nu _{n+1})\\le {{\\mathcal {F}}}(\\nu _n)$ it follows directly that: $\\frac{1}{{{\\mathcal {F}}}(\\nu _n)} - \\frac{1}{{{\\mathcal {F}}}(\\nu _{n+1})} \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right).$ The proof is concluded by summing over $n$ and rearranging the terms.

A simple example

Consider a Gaussian target distribution $\\mu (x) = \\mathcal {N}(a,\\Sigma ) $ and initial distribution $\\nu _0 = \\mathcal {N}(a_0,\\Sigma _0)$ . In this case it is sufficient to use a kernel that captures the first and second moments of the distribution. We simply consider a kernel of the form $k(x,y)= (x^{\\top }y)^2 + x^{\\top }y$ . In this case, it is easy to see by simple computations that the following equation holds: $\\dot{X}_t = - (\\Sigma _t-\\Sigma + a_t a_t^{\\top }-a a^{\\top } )X_t - (a_t-a),\\qquad \\forall t \\ge 0$ where $a_t$ and $\\Sigma _t$ are the mean and covariance matrix of $\\nu _t$ and satisfy the 
equations: $\\dot{\\Sigma }_t &= - (S_t \\Sigma _t + \\Sigma _t S_t )\\\\\\dot{a}_t &= - S_t a_t -(a_t-a).$ where we introduced $S_t = \\Sigma _t-\\Sigma + a_t a_t^{\\top }-aa^{\\top }$ for simplicity. eq:example1mckeanvlassov implies that $\\nu _t$ is in fact a Gaussian distribution since $X_t$ is obtained by summing Gaussian increments. The same conclusion can be reached by solving the corresponding continuity equation. Thus we will only be interested in the behavior of $a_t$ and $\\Sigma _t$ . First we can express the squared MMD in terms of those parameters: $MMD^2(\\mu ,\\nu _t) = \\Vert S_t \\Vert ^2 + \\Vert a_t-a \\Vert ^2.$ Since $a_t$ and $\\Sigma _t$ are obtained from the gradient flow of the MMD, it follows that $\\Vert a_t-a \\Vert ^2$ and $\\Vert S_t \\Vert ^2$ remain bounded. Moreover, the negative Sobolev distance is obtained by solving a finite dimensional quadratic problem and can be simply written as: $D(\\mu ,\\nu _t) = tr(Q_t \\Sigma _t Q_t) + \\Vert a_t-a\\Vert ^2$ where $Q_t$ is the unique solution of the Lyapunov equation: $\\Sigma _t Q_t + Q_t \\Sigma _t = \\Sigma _t- \\Sigma + (a_t-a)(a_t-a)^{\\top }:=G_t.$ We first consider the one dimensional case, for which eq:Lyapounov has a particularly simple solution and provides a closed-form expression for the negative Sobolev distance: $Q_t= \\frac{G_t}{2\\Sigma _t}, \\qquad D(\\mu ,\\nu _t) = \\frac{G_t^2}{4\\Sigma _t} + (a_t-a)^2.$ Recalling eq:example1MMD and that $MMD^2(\\mu ,\\nu _t)$ is bounded at all times by definition of $\\nu _t$ , it follows that both $G_t$ and $a_t-a$ are also bounded. Hence, it is easy to see that $D(\\mu ,\\nu _t)$ will remain bounded iff $\\Sigma _t$ remains bounded away from 0. This analysis generalizes to higher dimensions using , which provides an expression for $Q_t$ in terms of $G_t$ and the singular value decomposition of $\\Sigma _t = U_t D_t U_t^{\\top }$ : $Q_t = U_t \\left( \\left(\\frac{1}{(D_t)_i + (D_t)_j }\\right)\\odot 
U_t^{\\top } G_t U_t\\right) U_t^{\\top }.$ Here, $\\odot $ denotes the Hadamard product of matrices. It is easy to see from this expression that $D(\\mu ,\\nu _t)$ will be bounded if all singular values $((D_t)_i)_{1\\le i \\le d}$ of $\\Sigma _t$ remain bounded away from 0.

Lojasiewicz-type inequalities for ${{\\mathcal {F}}}$ under different metrics

The Wasserstein gradient flow of ${{\\mathcal {F}}}$ can be seen as the continuous-time limit of the so-called minimizing movement scheme . Such a proximal scheme is defined using an initial distribution $\\nu _0$ , a step-size $\\tau $ , and an iterative update equation: $\\nu _{n+1} \\in \\arg \\min _{\\nu } {{\\mathcal {F}}}(\\nu ) + \\frac{1}{2\\tau } W_2^2(\\nu ,\\nu _n).$ In , it is shown that the continuity equation $\\partial _t \\nu _t = div(\\nu _t \\nabla f_{\\mu ,\\nu _t})$ can be obtained as the limit when $\\tau \\rightarrow 0$ of eq:minimizingmovementscheme using suitable interpolations between the elements $\\nu _n$ . In , a different transport equation that includes a birth-death term is considered: $\\partial _t \\nu _t = \\beta div(\\nu _t \\nabla f_{\\mu ,\\nu _t}) - \\alpha (f_{\\mu ,\\nu _t} - \\int f_{\\mu ,\\nu _t}(x)\\mathop {}\\!\\mathrm {d}\\nu _t(x) )\\nu _t$ When $\\beta =0$ and $\\alpha =1$ , it is shown formally in that the above dynamics corresponds to the limit of a proximal scheme using the KL instead of the Wasserstein distance. For general $\\beta $ and $\\alpha $ , eq:birthdeath corresponds to the limit of a different proximal scheme where $W_2^2(\\nu ,\\nu _n)$ is replaced by the Wasserstein-Fisher-Rao distance $d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ (see , , ). $d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ is an interpolation between the squared Wasserstein distance ($\\beta =1$ and $\\alpha =0$ ) and the squared Fisher-Rao distance as defined in ($\\beta =0$ and $\\alpha = 1$ ). Such a scheme is consistent with the one proposed in , which uses the $KL$ . In 
fact, as we will show later, both the $KL$ and the Fisher-Rao distance have the same local behavior therefore both proximal schemes are expected to be equivalent in the limit when $\\tau \\rightarrow 0$ .", "Under eq:birthdeath, the time evolution of ${{\\mathcal {F}}}$ is given by : $\\dot{{{\\mathcal {F}}}}(\\nu _t) = -\\beta \\int \\Vert \\nabla f_{\\mu ,\\nu _t}\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _t(x) -\\alpha \\int \\left|f_{\\mu ,\\nu _t}(x)-\\int f_{\\mu ,\\nu _t}(x^{\\prime })\\mathop {}\\!\\mathrm {d}\\nu _t(x^{\\prime })\\right|^2\\mathop {}\\!\\mathrm {d}\\nu _t(x)$ We would like to apply the same approach as in sec:Lojasiewiczinequality to provide a condition on the convergence of eq:birthdeath.", "Hence we first introduce an analogue to the Negative Sobolev distance in def:negsobolev by duality: $D_{\\nu }(p,q) =\\sup _{\\begin{array}{c}g\\in L_2(\\nu )\\\\ \\beta \\Vert \\nabla g \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert g- \\bar{g} \\Vert ^2_{L_2(\\nu )} \\le 1 \\end{array}} \\left|\\int g(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int g(x) \\mathop {}\\!\\mathrm {d}q(x)\\right|$ where $\\bar{g}$ is simply the expectation of $g$ under $\\nu $ .", "Such quantity defines a distance, since it is the dual of a semi-norm.", "Now using the particular structure of the MMD, we recall that $f_{\\mu ,\\nu }\\in L_2(\\nu )$ and that $\\beta \\Vert \\nabla f \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f- \\bar{f} \\Vert ^2_{L_2(\\nu )}<\\infty $ .", "Hence for a particular $g$ of the form: $g = \\frac{f_{\\mu ,\\nu }}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} \\right)^\\frac{1}{2}}$ the following inequality holds: $D_{\\nu }(\\mu ,\\nu ) \\ge \\frac{\\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert 
f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right)^{\\frac{1}{2}} }.$ But since $f_{\\mu ,\\nu }$ is the unnormalised witness function between $\\mu $ and $\\nu $ we have that $2{{\\mathcal {F}}}(\\nu ) = \\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|$ . Hence one can write: $D^2_{\\nu }(\\mu ,\\nu )\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right) \\ge 4{{\\mathcal {F}}}^2(\\nu )$ Now, provided that $D^2_{\\nu }(\\mu ,\\nu _t)$ remains bounded at all times $t$ by some constant $C>0$ , one can easily deduce a rate of convergence for ${{\\mathcal {F}}}(\\nu _t)$ just as in prop:lojasiewicz. In fact, in the case when $\\beta = 1$ and $\\alpha =0$ one recovers prop:lojasiewicz. Another interesting case is when $\\beta =0$ and $\\alpha =1$ . In this case, $D_{\\nu }(p,q)$ is defined for $p$ and $q$ such that the difference $p-q$ is absolutely continuous w.r.t. $\\nu $ . Moreover, $D^2_{\\nu }(p,q)$ has the simple expression: $D^2_{\\nu }(p,q) = \\int \\left(\\frac{p-q }{\\nu }(x)\\right)^2 \\mathop {}\\!\\mathrm {d}\\nu (x)$ where $\\frac{ p-q }{ \\nu }$ denotes the Radon-Nikodym density of $p-q$ w.r.t. $\\nu $ . More importantly, $D_{\\nu }(\\mu ,\\nu )$ is exactly equal to $\\chi ^2(\\mu \\Vert \\nu )^{\\frac{1}{2}}$ . As we will show now, $(\\chi ^2)^{\\frac{1}{2}}$ turns out to be a linearization of $\\sqrt{2} KL^{\\frac{1}{2}}$ and the Fisher-Rao distance.

Linearization of the KL and the Fisher-Rao distance. We first show the result for the KL. Given a probability distribution $\\nu ^{\\prime }$ that is absolutely continuous w.r.t. $\\nu $ and for $0<\\epsilon < 1$ , denote $G(\\epsilon ) := KL(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ))$ . It can be shown that $G(\\epsilon ) = \\frac{1}{2}\\chi ^2(\\nu 
^{\\prime }\\Vert \\nu )\\epsilon ^2 +o(\\epsilon ^2)$ .", "To see this, one needs to perform a second order Taylor expansion of $G(\\epsilon )$ at $\\epsilon =0$ .", "Exchanging the derivatives and the integral, $\\dot{G}(\\epsilon )$ and $\\ddot{G}(\\epsilon )$ are both given by: $\\dot{G}(\\epsilon ) = -\\int \\frac{\\mu -\\nu }{\\nu +\\epsilon (\\mu -\\nu )}\\mathop {}\\!\\mathrm {d}\\nu \\\\\\ddot{G}(\\epsilon ) = \\int \\frac{(\\nu -\\mu )^2}{(\\nu +\\epsilon (\\mu -\\nu ))^2} \\mathop {}\\!\\mathrm {d}\\nu $ Hence, we have for $\\epsilon =0$ : $\\dot{G}(0) = 0$ and $\\ddot{G}(0) = \\chi ^2(\\mu \\Vert \\nu )$ .", "Therefore, it follows: $G(\\epsilon ) =\\frac{1}{2} \\chi ^2(\\mu \\Vert \\nu ) \\epsilon ^2 + o(\\epsilon ^2)$ , which means that $\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }\\left[2KL\\left(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) \\right) \\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$ The same approach can be used for the Fisher-Rao distance $d_{0,1}(\\nu ,\\nu ^{\\prime })$ .", "From we have that: $d^2_{0,1}(\\nu ,\\nu ^{\\prime }) = 2\\int (\\sqrt{\\nu (x)}-\\sqrt{\\nu ^{\\prime }(x)})^2\\mathop {}\\!\\mathrm {d}x$ where $\\nu $ and $\\nu ^{\\prime }$ are assumed to have a density w.r.t.", "Lebesgue measure.", "Using the exact same approach as for the KL one easily show that $ \\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }\\left[2d^2_{0,1}\\left(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) \\right) \\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$ Linearization of the $W_2$ .", "Similarly, it can be shown that the Negative weighted Sobolev distance is a linearization of the $W_2$ under suitable conditions.", "We recall here which relates the two quantities: Theorem 17 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t the Lebesgue measure and let $h\\in L^{\\infty 
}({{\\mathcal {X}}})$ with $\\int h(x)\\mathop {}\\!\\mathrm {d}\\nu (x)=0$ .", "Then $\\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h )\\nu ).$ thm:villani implies that for any probability distribution $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "to $\\nu $ one has: $\\Vert \\nu ^{\\prime }-\\nu \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )).$ To get the converse inequality, one needs to assume that the support of $\\nu $ is ${{\\mathcal {X}}}$ .", "prop:conversesobolevwasserstein provides such inequality and uses techniques from .", "Proposition 18 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t the Lebesgue measure with support equal to ${{\\mathcal {X}}}$ and let $h\\in L^{\\infty }({{\\mathcal {X}}})$ with $\\int h(x)\\mathop {}\\!\\mathrm {d}\\nu (x)=0$ and $1+h\\ge 0$ .", "Then $\\lim \\sup _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h )\\nu )\\le \\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}$ Consider the elliptic equation: $\\nu h + div(\\nu \\nabla F) = 0$ with Neumann boundary condition on $\\partial {{\\mathcal {X}}}$ .", "Such equation admits a unique solution $F$ in $\\dot{H}(\\nu )$ up to a constant since $\\nu $ is supported on all of ${{\\mathcal {X}}}$ (see ).", "Moreover, we have that $ \\int F(x)h(x)\\mathop {}\\!\\mathrm {d}\\nu (x) = \\int \\Vert \\nabla F(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu (x)$ which implies that $\\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )} \\ge \\Vert F \\Vert _{\\dot{H}(\\nu )}$ .", "Now consider the path: $s_u = (1 + u\\epsilon h)\\nu $ for $u\\in [0,1]$ .", "$s_u$ is a probability distribution for all $u\\in [0,1]$ with $s_0= \\nu $ and $s_1 = (1+\\epsilon h)\\nu $ .", "It is easy to see that $s_u$ satisfies the continuity 
equation: $\\partial _u s_u +div(s_u V_u )=0$ with $V_u = \\frac{\\epsilon \\nabla F}{1+u\\epsilon h}$ .", "Indeed, for any smooth test function $f$ one has: $\\frac{\\mathop {}\\!\\mathrm {d}}{\\mathop {}\\!\\mathrm {d}u}\\int f(x)\\mathop {}\\!\\mathrm {d}s_u(x) = \\epsilon \\int f(x)h(x)\\mathop {}\\!\\mathrm {d}\\nu (x) = \\epsilon \\int \\nabla f(x).\\nabla F(x) \\mathop {}\\!\\mathrm {d}\\nu (x) = \\int \\nabla f(x).V_u(x)\\mathop {}\\!\\mathrm {d}s_u(x).$ We used the definition of $F$ for the second equality and that $\\nu $ admits a density w.r.t.", "$s_u$ provided that $\\epsilon $ is small enough.", "This density is given by $1/(1+u\\epsilon h)$ and is positive and bounded when $\\epsilon \\le \\frac{1}{2\\Vert h \\Vert _{\\infty } }$ .", "Now, using the Benamou-Brenier formula for $W_2(\\nu ,(1+\\epsilon h)\\nu )$ one has in particular that: $W_2(\\nu ,(1+\\epsilon h)\\nu )\\le \\int \\Vert V_u \\Vert _{L^2(s_u)} \\mathop {}\\!\\mathrm {d}u$ Using the expressions of $V_u$ and $s_u$ , one gets by simple computation: $W_2(\\nu ,(1+\\epsilon h)\\nu )\\le & \\epsilon \\int \\left(\\int \\frac{\\Vert \\nabla F(x) \\Vert ^2}{1-u\\epsilon + u\\epsilon (h+1)} \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le \\epsilon \\left( \\int \\Vert \\nabla F(x) \\Vert ^2\\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u.$ Finally, $\\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u = \\frac{2}{\\epsilon }(1-\\sqrt{1 - \\epsilon }) \\rightarrow 1$ when $\\epsilon \\rightarrow 0$ , hence: $\\lim \\sup _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }W_2(\\nu ,(1+\\epsilon h)\\nu ) \\le \\Vert F\\Vert _{\\dot{H}(\\nu )}\\le \\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}.$ thm:villani and prop:conversesobolevwasserstein allow us to conclude that $\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )) = \\Vert \\nu - \\nu 
^{\\prime } \\Vert _{\\dot{H}^{-1}(\\nu )}$ for any $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "$\\nu $ .", "By analogy, one could wonder if $D$ is also a linearization of the Wasserstein-Fisher-Rao distance.", "We leave this question for future work.", "Algorithms Noisy Gradient flow of the MMD [Proof of thm:convergencenoisygradient] To simplify notation, we write $\\mathcal {D}_{\\beta _n}(\\nu _n) = \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n \\mathop {}\\!\\mathrm {d}u $ where $V := \\nabla f_{\\mu ,\\nu _n}$ and $g$ is the density of a standard gaussian.", "The symbol $\\otimes $ denotes the product of two independent probability distributions.", "Recall that a sample $x_{n+1}$ from $\\nu _{n+1}$ is obtained using $x_{n+1} = x_n - \\gamma V(x_n+ \\beta _n u_n)$ where $x_n$ is a sample from $\\nu _n$ and $u_n$ is a sample from a standard gaussian distribution that is independent of $x_n$ .", "Moreover, by assumption $\\beta _n$ is a non-negative scalar satisfying: $8\\lambda ^2\\beta _n^2 {{\\mathcal {F}}}(\\nu _n) \\le \\mathcal {D}_{\\beta _n}(\\nu _n)$ Consider now the map $(x,u)\\mapsto s_t(x,u)= x - \\gamma tV(x+\\beta _n u)$ for $0\\le t\\le 1$ , then $\\nu _{n+1}$ is obtained as a push-forward of $\\nu _n\\otimes g$ by $s_1$ : $\\nu _{n+1} = (s_1)_{\\#}(\\nu _n\\otimes g)$ .", "Moreover, the curve $\\rho _t = (s_t)_{\\#}(\\nu _n\\otimes g)$ is a path from $\\nu _n$ to $\\nu _{n+1}$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ -Lipschitz, thus using $\\phi (x,u) = -\\gamma V(x+\\beta _n u)$ , $\\psi (x,u) = x$ and $q = \\nu _n\\otimes g $ in lem:derivativemmdaugmented it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable in $t$ with: $\\dot{{{\\mathcal {F}}}}(\\rho _t)=\\int \\nabla f_{\\mu ,\\rho _t}(s_t(x,u)).(-\\gamma V(x+\\beta _n u))g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ Moreover, $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ is 
given by $\\dot{{{\\mathcal {F}}}}(\\rho _0)= -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ and the following estimate holds: $\\vert \\dot{{{\\mathcal {F}}}} (\\rho _t) -\\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3\\gamma ^2 L t \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u = 3\\gamma ^2 Lt \\mathcal {D}_{\\beta _n}(\\nu _n).$ Using the absolute continuity of ${{\\mathcal {F}}}(\\rho _t)$ , one has $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n})=\\dot{{{\\mathcal {F}}}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}} (\\rho _t) - \\dot{{{\\mathcal {F}}}} (\\rho _0) \\mathop {}\\!\\mathrm {d}t $ .", "Combining with eq:estimategradient and using the expression of $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ , it follows that: $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n})\\le -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u + \\frac{3}{2}\\gamma ^2L \\mathcal {D}_{\\beta _n}(\\nu _n).$ Adding and subtracting $\\gamma \\mathcal {D}_{\\beta _n}(\\nu _n)$ in eq:taylorexpansion it follows directly that: $\\begin{split}\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n)\\\\&+ \\gamma \\int (V(x+\\beta _n u) -V(x)).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u \\end{split}$ We shall control now the last term in eq:penultimate.", "Recall now that for all $1\\le i\\le d$ , $ V_i(x) = \\partial _i f_{\\mu ,\\nu _n}(x) = \\langle f_{\\mu ,\\nu _n} , \\partial _i k(x,.", ")\\rangle $ where we used the reproducing property for the derivatives of $f_{\\mu ,\\nu _n}$ in ${{\\mathcal {H}}}$ (see sec:rkhs).", "Therefore, it follows by Cauchy-Schwartz in ${{\\mathcal {H}}}$ and using assump:Lipschitzgradrkhs: $\\Vert V(x+\\beta _n u) -V(x)\\Vert ^2&\\le \\Vert f_{\\mu ,\\nu _n} \\Vert 
_{\\mathcal {H}}^2 \\left( \\sum _{i=1}^{d}\\Vert \\partial _i k(x+\\beta _n u,.)", "-\\partial _i k(x,.", ")\\Vert ^2_{\\mathcal {H}}\\right)\\\\&\\le \\lambda ^2\\beta _n^2\\Vert f_{\\mu ,\\nu _n}\\Vert _{\\mathcal {H}}^2\\Vert u \\Vert ^2$ for all $ x,u \\in {{\\mathcal {X}}}$ .", "Now integrating both sides w.r.t.", "$\\nu _n$ and $g$ and recalling that $g$ is a standard gaussian, we have: $\\int \\Vert V(x+\\beta _n u) -V(x)\\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u\\le \\lambda ^2\\beta ^2_n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2$ Getting back to eq:penultimate and applying Cauchy-Schwarz in $L_2(\\nu _n\\otimes g)$ it follows: $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n) +\\gamma \\lambda \\beta _n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}\\mathcal {D}^{\\frac{1}{2}}_{\\beta _n}(\\nu _n)$ It remains to notice that $\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2 = 2{{\\mathcal {F}}}(\\nu _n)$ and that $\\beta _n$ satisfies eq:controlnoiselevelbis to get: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le -\\frac{\\gamma }{2}(1-\\frac{3}{2}\\gamma L)\\mathcal {D}_{\\beta _n}(\\nu _n).$ We introduce now $\\Gamma = 4\\gamma (1-\\frac{3}{2}\\gamma L)\\lambda ^2$ to simplify notation and prove the second inequality.", "Using eq:controlnoiselevelbis again in the above inequality we directly have: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le - \\Gamma \\beta _n^2 {{\\mathcal {F}}}(\\nu _n)$ .", "One can already deduce that $\\Gamma \\beta _n^2$ is necessarily smaller than 1.", "Hence, taking ${{\\mathcal {F}}}(\\nu _n)$ to the r.h. 
side and iterating over $n$ it follows that: ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)\\prod _{i=0}^{n-1}(1- \\Gamma \\beta _i^2)$ Simply using that $1-\\Gamma \\beta _i^2\\le e^{-\\Gamma \\beta _i^2}$ leads to the desired upper-bound ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)e^{-\\Gamma \\sum _{i=0}^{n-1} \\beta _i^2}$ .", "Sample-based approximate scheme [Proof of prop:convergenceeulermaruyama] Let $(u_{n}^{i})_{1\\le i\\le N}$ be i.i.d.", "standard gaussian variables and $(x_{0}^{i})_{1\\le i\\le N}$ i.i.d.", "samples from $\\nu _0$ .", "We consider $(x_n^i)_{1\\le i\\le N}$ the particles obtained using the approximate scheme eq:eulermaruyama: $x_{n+1}^{i}=x_{n}^{i}-\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})$ starting from $(x_{0}^{i})_{1\\le i\\le N}$ , where $\\hat{\\nu }_n$ is the empirical distribution of these $N$ interacting particles.", "Similarly, we denote by $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ the particles obtained using the exact update equation eq:discretizednoisyflow: $\\bar{x}_{n+1}^{i}=\\bar{x}_{n}^{i}-\\gamma \\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})$ also starting from $(x_{0}^{i})_{1\\le i\\le N}$ .", "By definition of $\\nu _n$ we have that $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ are i.i.d.", "samples drawn from $\\nu _n$ with empirical distribution denoted by $\\bar{\\nu }_{n}$ .", "We will control the expected error $c_{n}$ defined as $c^2_{n}= \\frac{1}{N}\\sum _{i=1}^N \\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]$ .", "By recursion, we have: $c_{n+1} = & \\frac{1}{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}-\\gamma \\left(\\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n} +\\frac{\\gamma 
}{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}} \\\\& +\\frac{\\gamma }{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\hat{\\nu }_{n}}\\left(x_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)-\\nabla f_{\\mu ,\\bar{\\nu }_{n}}\\left(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n}+2\\gamma L\\left(c_{n}+\\mathbb {E}\\left[W_{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})^{2}\\right]^{\\frac{1}{2}}\\right)+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}}$ where the second line follows from a simple triangular inequality and the last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ Lipschitz in $x$ and $\\nu $ by prop:gradwitnessfunction.", "Here, $\\mathcal {E}_{i}$ represents the error between $\\bar{\\nu }_n$ and $\\nu _n$ while $\\mathcal {G}_{i}$ represents the error between $\\hat{\\mu }$ and $\\mu $ and are given by: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\bar{\\nu }_{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]\\\\\\mathcal {G}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]$ We will first control the error term $\\mathcal {E}_i$ .", "To simplify notations, we write $y^{i}=\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}$ .", "Recalling the expression of $\\nabla f_{\\mu ,\\nu }$ from prop:gradwitnessfunction and expanding the squared norm in $\\mathcal {E}_i$ , it follows: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert 
\\frac{1}{N}\\sum _{j=1}^{N}\\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& =\\frac{1}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& \\le \\frac{L^{2}}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\bar{x}_{n}^{j}-\\int xd\\nu _{n}(x)\\right\\Vert ^{2}\\right]=\\frac{L^{2}}{N}var(\\nu _{n}).$ The second line is obtained using the independence of the auxiliary samples $(\\bar{x}^{i}_n)_{1\\le i\\le N}$ and recalling that they are distributed according to $\\nu _{n}$ .", "The last line uses the fact that $\\nabla k(y,x)$ is $L$ -Lipschitz in $x$ by assump:lipschitzgradientk.", "To control the variance $var(\\nu _n)$ we use lem:Controlvariance which implies that $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}$ for all $n\\le \\frac{T}{\\gamma }$ .", "For $\\mathcal {G}_{i}$ , it is sufficient to expand again the squared norm and recall that $\\nabla k(y,x)$ is $L$ -Lipschitz in $x$ which then implies that $\\mathcal {G}_{i}\\le \\frac{L^{2}}{M}var(\\mu )$ .", "Finally, one can observe that $\\mathbb {E}[W_{2}^{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})]\\le \\frac{1}{N}\\sum _{i=1}^{N}\\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]=c_{n}^{2}$ , hence $c_n$ satisfies the recursion: $c_{n+1}\\le (1+4\\gamma L)c_{n}+\\frac{\\gamma L}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{\\gamma L}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}.$ Using lem:Discrete-Gronwall-lemma to solve the above inequality, it follows that: $c_{n}\\le \\frac{1}{4}\\left(\\frac{1}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{1}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}\\right)(e^{4LT}-1)$ Lemma 19 Consider an initial distribution $\\nu _{0}$ with finite variance, a sequence $(\\beta _n)_{ n \\ge 0}$ of non-negative numbers bounded by $B<\\infty $ and define the sequence of probability distributions 
$\\nu _n$ of the process eq:discretizednoisyflow: $x_{n+1}=x_{n}-\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x_{n}+\\beta _{n}u_{n}) \\qquad x_0 \\sim \\nu _0$ where $(u_n)_{n\\ge 0}$ are standard gaussian variables.", "Under assump:lipschitzgradientk, the variance of $\\nu _{n}$ satisfies for all $T>0$ and $n\\le \\frac{T}{\\gamma }$ the following inequality: $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{2TL}$ Let $g$ be the density of a standard gaussian.", "Denote by $(x,u)$ and $(x^{\\prime },u^{\\prime })$ two independent samples from $\\nu _n\\otimes g$ .", "The idea is to find a recursion from $var(\\nu _{n})$ to $var(\\nu _{n+1})$ : $var(\\nu _{n+1})^{\\frac{1}{2}}& =\\left(\\mathbb {E}\\left[\\left\\Vert x -\\mathbb {E}\\left[x^{\\prime }\\right] -\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)+\\gamma \\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime }+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^2\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+\\gamma \\left(\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)-\\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime }+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L\\mathbb {E}_{\\begin{array}{c}x,x^{\\prime }\\sim \\nu _{n}\\\\ u,u^{\\prime }\\sim g\\end{array}}\\left[\\left\\Vert x+\\beta _{n}u-x^{\\prime }-\\beta _{n}u^{\\prime }\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L(var(\\nu _{n})^{\\frac{1}{2}}+\\beta _{n})$ The second and last lines are obtained using a triangle inequality while the third line uses that $\\nabla f_{\\mu ,\\nu _n}(x)$ is $2L$ -Lipschitz in $x$ by prop:gradwitnessfunction.", "Recalling that $\\beta _{n}$ is bounded by $B$ , it is easy to conclude using lem:Discrete-Gronwall-lemma.", "Connection with Neural Networks In this sub-section we establish a formal connection 
between the MMD gradient flow defined in eq:continuitymmd and neural network optimization.", "This connection holds in the limit of infinitely many neurons and is based on the formulation in .", "To remain consistent with the rest of the paper, the parameters of a network will be denoted by $x\\in {{\\mathcal {X}}}$ while the inputs and outputs will be denoted as $z$ and $y$ .", "Given a neural network or any parametric function $(z,x)\\mapsto \\psi (z,x)$ with parameter $x \\in {{\\mathcal {X}}}$ and input data $z$ , we consider the supervised learning problem: $\\min _{(x_1,...,x_m )\\in {{\\mathcal {X}}}} \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p } \\left[ \\left\\Vert y - \\frac{1}{m}\\sum _{i=1}^m\\psi (z,x_i) \\right\\Vert ^2 \\right]$ where $(y,z) \\sim p$ are samples from the data distribution and the regression function is an average of $m$ different networks.", "The formulation in eq:regressionnetwork includes any type of network.", "Indeed, the averaged function can itself be seen as one network with augmented parameters $(x_1,...,x_m)$ , and any network can be written as an average of sub-networks with potentially shared weights.", "In the limit $m\\rightarrow \\infty $ , the average can be seen as an expectation over the parameters under some probability distribution $\\nu $ .", "This leads to an expected network $\\Psi (z,\\nu ) = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) $ and the optimization problem in eq:regressionnetwork can be lifted to an optimization problem in $\\mathcal {P}_2({{\\mathcal {X}}})$ , the space of probability distributions: $\\min _{\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})} \\mathcal {L}(\\nu ) := \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p} \\left[ \\left\\Vert y - \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) \\right\\Vert ^2 \\right]$ For convenience, we consider $\\bar{\\mathcal {L}}(\\nu )$ the function obtained by subtracting half the variance of $y$ from $\\mathcal {L}(\\nu )$ , i.e.", ": $\\bar{\\mathcal {L}}(\\nu ) = \\mathcal {L}(\\nu ) - \\frac{1}{2}var(y) $ .", "When the model is well specified, there exists $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}}) $ such that $\\mathbb {E}_{y\\sim \\mathbb {P}(.|z)}[y] = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\mu (x)$ .", "In that case, the cost function $\\bar{\\mathcal {L}}$ matches the functional ${{\\mathcal {F}}}$ defined in eq:mmdasfreeenergy for a particular choice of the kernel $k$ .", "More generally, as soon as a global minimizer for eq:liftedregression exists, prop:inequalitymmdloss relates the two losses $\\bar{\\mathcal {L}}$ and $\\mathcal {F}$ .", "Proposition 20 Assuming a global minimizer of eq:liftedregression is achieved by some $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , the following inequality holds for any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ : $\\left(\\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}} + {{\\mathcal {F}}}^{\\frac{1}{2}}(\\nu )\\right)^2\\ge \\bar{\\mathcal {L}}(\\nu )\\ge \\mathcal {F}(\\nu ) + \\bar{\\mathcal {L}}(\\mu )$ where ${{\\mathcal {F}}}(\\nu )$ is defined by eq:mmdasfreeenergy with a kernel $k$ constructed from the data as an expected product of networks: $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim \\mathbb {P}} \\left[\\psi (z,x)^T\\psi (z,x^{\\prime })\\right]$ Moreover, $\\bar{\\mathcal {L}} = {{\\mathcal {F}}}$ iff $\\bar{\\mathcal {L}}(\\mu )=0$ , which means that the model is well-specified.", "The inequality eq:inequalitymmdnn implies that optimizing $\\mathcal {F}$ can decrease $\\mathcal {L}$ and vice-versa.", "Moreover, in the well-specified case, optimizing $\\mathcal {F}$ is equivalent to optimizing $\\mathcal {L}$ .", "Hence one can use the gradient flow of the MMD defined in eq:continuitymmd to solve eq:liftedregression.", "One particular setting where eq:liftedregression is well-specified is the student-teacher problem as in .", "In this case, a teacher network of the form $\\Psi _T(z,\\mu )$ produces a deterministic output $y = \\Psi _T(z,\\mu )$ given an input $z$ while a 
student network $\\Psi _S(z,\\nu )$ tries to learn the mapping $z\\mapsto \\Psi _T(z,\\mu )$ by minimizing eq:liftedregression.", "In practice $\\mu $ and $\\nu $ are given as empirical distributions on some particles $\\Xi = (\\xi ^1,...,\\xi ^M)$ and $X=(x^1,...,x^N)$ with $\\mu = \\frac{1}{M} \\sum _{j=1}^M \\delta _{\\xi ^j}$ and $\\nu = \\frac{1}{N} \\sum _{i=1}^N\\delta _{x^i}$ .", "The particles $(x^i)_{1\\le i \\le N}$ are then optimized using gradient descent starting from an initial configuration $(x_0^i)_{1\\le i \\le N}$ .", "This leads to the update equation: $x^i_{n+1} = x^i_n - \\gamma \\mathbb {E}_{z\\sim p }\\left[ \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z,x_n^{i})\\right],$ where $(x_n^{i})_{1\\le i\\le N}$ are the particles at iteration $n$ with empirical distribution $\\nu _n$ .", "Here, the gradient is rescaled by the number of particles $N$ .", "Re-arranging terms and recalling that $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim p}[\\psi (z,x)^T\\psi (z,x^{\\prime })]$ , equation eq:updateequationstudentteacher becomes: $x^i_{n+1} = x^i_n - \\gamma \\nabla f_{\\mu ,\\nu _n}(x_n^i).$ with $\\nabla f_{\\mu ,\\nu _n}(x_n^i) = \\left(\\frac{1}{N}\\sum _{j=1}^N \\nabla _2 k(x_n^{j},x_n^{i})-\\frac{1}{M}\\sum _{j=1}^M \\nabla _2 k(\\xi ^{j},x_n^{i})\\right)$ .", "The above equation is a discretized version of the gradient flow of the MMD defined in eq:continuitymmd.", "Such discretization is obtained from eq:eulermaruyama by setting the noise level $\\beta _n$ to 0.", "Hence, in the limit when $N\\rightarrow \\infty $ and $\\gamma \\rightarrow 0$ , one recovers the gradient flow defined in eq:eulerschemeparticles.", "In general the kernel $k$ is intractable and can be approximated using $n_b$ samples $(z_1,...,z_{n_b})$ from the data distribution: $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)^T \\psi (z_b,x^{\\prime })$ .", "This 
finally leads to an approximate update: $x^i_{n+1} = x^i_n - \\gamma \\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i).$ where $\\nabla \\hat{f}_{\\mu ,\\nu _n}$ is given by: $\\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z_b,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z_b,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z_b,x_n^{i}).$ We now provide a proof for prop:inequalitymmdloss: [Proof of prop:inequalitymmdloss] Let $\\Psi (z,\\nu ) = \\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x)$ .", "By eq:kernelNN, we have: $k(x,x^{\\prime }) =\\int _{z}\\psi (z,x)^T\\psi (z,x^{\\prime })\\mathop {}\\!\\mathrm {d}s(z)$ where $s$ denotes the distribution of $z$ .", "It is easy to see that ${{\\mathcal {F}}}(\\nu ) = \\frac{1}{2} \\int \\Vert \\Psi (z,\\nu ) -\\Psi (z,\\mu ) \\Vert ^2 \\mathop {}\\!\\mathrm {d}s(z) $ .", "Indeed, expanding the square in the l.h.s. and exchanging the order of integrations w.r.t. $p$ and $(\\mu \\otimes \\nu )$ , one gets ${{\\mathcal {F}}}(\\nu )$ .", "Now, introducing $\\Psi (z,\\mu )$ in the expression of $\\mathcal {L}(\\nu )$ , it follows by a simple calculation that: $\\mathcal {L}(\\nu )&= \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )+ \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z)$ where $m(z)$ is the conditional mean of $y$ , i.e.", ": $m(z)=\\int y \\mathop {}\\!\\mathrm {d}p(y|z)$ .", "On the other hand we have that $2\\mathcal {L}(\\mu ) = var(y) + \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z)$ , so that $ \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z) = 2\\bar{\\mathcal {L}}(\\mu )$ .", "Hence, using Cauchy-Schwarz for the last term in eq:maineqnn, one gets the upper-bound: $\\mathcal {L}(\\nu )\\le \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu ) + 2 \\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}}\\mathcal {F}(\\nu )^{\\frac{1}{2}}.$ This in turn gives an 
upper-bound on $\\bar{\\mathcal {L}}(\\nu )$ after subtracting $var(y)/2$ on both sides of the inequality.", "To get the lower bound on $\\bar{\\mathcal {L}}$ one needs to use the global optimality condition of $\\mu $ for $\\mathcal {L}$ from .", "Indeed, for any $0<\\epsilon \\le 1$ it is easy to see that: $\\epsilon ^{-1}( \\mathcal {L}(\\mu +\\epsilon (\\nu -\\mu ))-\\mathcal {L}(\\mu )) = \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z) +O(\\epsilon ).$ Taking the limit $\\epsilon \\rightarrow 0$ and recalling that the l.h.s. is always non-negative by optimality of $\\mu $ , it follows that $\\int \\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu ) \\rangle \\mathop {}\\!\\mathrm {d}p(z)$ must also be non-negative.", "Therefore, from eq:maineqnn one gets that $\\mathcal {L}(\\nu ) \\ge \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )$ .", "The final bound is obtained by subtracting $var(y)/2$ again from both sides of the inequality.", "Numerical Experiments Student-Teacher networks We consider a student-teacher network setting similar to .", "More precisely, using the notation from subsec:trainingneuralnetworks, we denote by $\\Psi (z,\\nu )$ the neural network of the form: $\\Psi (z,\\nu ) = \\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x) $ where $z$ is an input vector in ${{\\mathbb {R}}}^{p}$ and $\\nu $ is a probability distribution over the parameters $x$ .", "Hence $\\Psi $ is an expectation over sub-networks $\\psi (z,x)$ with parameters $x$ .", "Here, we choose $\\psi $ of the form: $\\psi (z,x) = G\\left(b^{1}+W^{1}\\sigma (W^{0}z+b^{0})\\right)$ where $x$ is obtained as the concatenation of the parameters $(b^{1},W^{1},b^{0},W^{0})\\in {{\\mathcal {X}}}$ , $\\sigma $ is the ReLU non-linearity while $G$ is a fixed function and is defined later.", "Note that using $x$ to denote the parameters of a neural network is unusual; however, we prefer to keep a notation which is 
consistent with the rest of the paper.", "We will only consider the case when $\\nu $ is given by an empirical distribution of $N$ particles $X = (x^{1},...x^{N})$ for some $N\\in \\mathbb {N}$ .", "In that case, we denote by $\\nu _{X}$ such distribution to stress the dependence on the particles $X$ , i.e.", ": $ \\nu := \\nu _{X}= \\frac{1}{N} \\sum _{i=1}^N \\delta _{x^{i}}$ .", "The teacher network $\\Psi _{T}(z,\\nu _{\\Xi })$ is given by $M$ particles $\\Xi = (\\xi _1,...,\\xi _M)$ which are fixed during training and are initially drawn according to a normal distribution $\\mathcal {N}(0,1)$ .", "Similarly, the student network $\\Psi _{S}(z,\\nu _{X})$ has $N$ particles $X = (x^{1},...,x^{N})$ that are initialized according to a normal distribution $\\mathcal {N}(10^{-3},1)$ .", "Here we choose $M=1$ and $N=1000$ .", "The inputs $z$ are drawn from a uniform distribution $\\mathbb {S}$ on the sphere in ${{\\mathbb {R}}}^p$ as in with $p=50$ .", "The number of hidden layers $H$ is set to 3 and the output dimension is 1.", "The parameters of the student networks are trained to minimize the risk in eq:studentteacherproblem using SGD with mini-batches of size $n_b = 10^2$ and optimal step-size $\\gamma $ selected from: $\\lbrace 10^{-3},10^{-2},10^{-1}\\rbrace $ .", "$\\min _{X} \\mathbb {E}_{z\\sim \\mathbb {S} }\\left[(\\Psi _T(z,\\nu _{\\Xi } )- \\Psi _S(z,\\nu _{X}))^2\\right]$ When $G$ is simply the identity function and no bias is used, one recovers the setting in .", "In that case the network is partially 1-homogeneous and applies ensuring global optimality.", "Here, we are interested in the case when global optimality is not guaranteed by the homogeneity structure, hence we choose $G$ to be a gaussian with fixed bandwidth $\\sigma =2$ .", "As shown in subsec:trainingneuralnetworks, performing gradient descent to minimize eq:studentteacherproblem can be seen as a particle version of the gradient flow of the MMD with a kernel given by $k(x,x^{\\prime }) = 
\\mathbb {E}_{z\\sim \\mathbb {S}}[\\psi (z,x)\\psi (z,x^{\\prime })]$ and target distribution $\\mu $ given by $\\mu = \\nu _{\\Xi }$ .", "Hence one can use the noise injection algorithm defined in eq:eulermaruyama to train the parameters of the student network.", "Since $k$ is defined through an expectation over the data, it can be approximated using $n_{b}$ data samples $\\lbrace z_{1},...,z_{n_b}\\rbrace $ : $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)\\psi (z_b,x^{\\prime }).$ This approximation of the kernel leads to a simple expression for the gradient of the unnormalised witness function between $\\nu _{\\Xi }$ and $\\nu _{X}$ : $\\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X}}(x) = \\frac{1}{n_b}\\sum _{b=1}^{n_b}\\left( \\frac{1}{N}\\sum _{i=1}^N\\psi (z_b , x^i) - \\frac{1}{M}\\sum _{j=1}^M\\psi (z_b,\\xi ^j)\\right)\\nabla _{x}\\psi (z_b,x), \\qquad \\forall x \\in {{\\mathcal {X}}}.$ euclidstudentteacher provides the main steps to train the parameters of the student network using the noisy gradient flow of the MMD proposed in eq:eulermaruyama.", "It can be easily implemented using automatic differentiation packages like PyTorch.", "Indeed, one only needs to compute an auxiliary loss function ${{\\mathcal {F}}}_{aux}$ instead of the actual MMD loss ${{\\mathcal {F}}}$ and perform gradient descent using ${{\\mathcal {F}}}_{aux}$ .", "This function is given by: ${{\\mathcal {F}}}_{aux} = \\frac{1}{n_b}\\sum _{i=1}^N\\sum _{b=1}^{n_b} \\left({\\tt NoGrad}\\left(y_S^b\\right) - y_T^b \\right)\\psi (z^b,\\widetilde{x}_n^{i})$ To compute ${{\\mathcal {F}}}_{aux}$ , two forward passes on the student network are required.", "A first forward pass using the current parameter values $X_n = (x_n^1,...,x_n^{N})$ of the student network is used to compute the predictions $y_S^b$ given an input $z^b$ .", "For this forward pass, the gradient w.r.t. the parameters $X_n$ is not used.", "This is enforced formally here by calling the function 
NoGrad.", "The second forward pass is performed using the noisy parameters $\widetilde{x}_n^{i} = x_n^i + \beta _n u_n^{i}$ and requires implementing special layers which can inject noise into the weights.", "This second forward pass is used to provide a gradient to update the particles using back-propagation.", "Indeed, it is easy to see that $\nabla _{x_n^{i}} {{\mathcal {F}}}_{aux}$ gives exactly the gradient $\nabla \hat{f}_{\nu _{\Xi },\nu _X}(\widetilde{x}_n^i)$ used in euclidstudentteacher.", "Learning gaussians Figure: Gradient flow of the MMD from a gaussian initial distribution $\nu _0\sim \mathcal {N}(10,0.5)$ towards a target distribution $\mu \sim \mathcal {N}(0,1)$ , using $N=M=1000$ samples from $\mu $ and $\nu _0$ and a gaussian kernel with bandwidth $\sigma = 2$ .", "eq:eulermaruyama is used without noise ($\beta _n = 0$ , in red) and with noise ($\beta _n = 10$ up to $n=5000$ , then $\beta _n = 0$ afterwards, in blue).", "The left figure shows the evolution of the MMD at each iteration.", "The middle figure shows the initial samples (black for $\mu $ ), and the right figure shows the final samples after $10^5$ iterations with step-size $\gamma = 0.1$ .", "fig:experiments illustrates the behavior of the proposed algorithm eq:eulermaruyama in a simple setting, and compares it with the gradient flow of the MMD without noise injection.", "In this setting, the MMD flow fails to converge to the global optimum.", "Indeed, as shown in fig:experiments(right), some of the final samples (in red) obtained using noise-free gradient updates tend to get further away from the target samples (in black).", "Most of the remaining samples collapse to a unique point near the origin.", "This can also be seen from fig:experiments(left), where the training error fails to decrease below $10^{-3}$ .", "On the other hand, adding noise to the gradient seems to lead to global convergence, as seen 
visually from the samples.", "The training error decreases below $10^{-4}$ and oscillates between $10^{-8}$ and $10^{-4}$ .", "The oscillation is due to the step-size, which remained fixed while the noise was set to 0 from iteration 5000 onwards.", "It is worth noting that adding noise to the gradient slows down convergence, as one can see from fig:experiments(left).", "This is expected since the algorithm does not follow the path of steepest descent.", "However, the noise helps in escaping local optima, as illustrated here.", "Algorithm: Noisy gradient flow of the MMD.
Input: $N$ , $n_{iter}$ , $\beta _0$ , $\gamma $ .
Output: $(x^{i}_{n_{iter}})_{1\le i\le N}$ .
Initialize $N$ particles from the initial distribution $\nu _0$ : $x_{0}^{i}\overset{\text{i.i.d.}}{\sim }\nu _0$ , and initialize the noise level: $\beta =\beta _0$ .
For $n=0,\dots , n_{iter}$ :
    Sample $M$ points from the target $\mu $ : $\lbrace y^1,...,y^M\rbrace $ .
    Sample $N$ gaussians: $\lbrace u_n^{1},...,u_n^N\rbrace $ .
    For $i=1,\dots ,N$ :
        Compute the noisy values: $\widetilde{x}_n^{i} = x_n^i+\beta _n u_n^i$ .
        Evaluate the vector field: $\nabla f_{\hat{\mu },\hat{\nu }_n}(\widetilde{x}_n^i) = \frac{1}{N}\sum \limits _{j=1}^N \nabla _2 k(x_n^j,\widetilde{x}_n^{i})-\frac{1}{M}\sum \limits _{m=1}^M \nabla _2 k(y^m,\widetilde{x}_n^{i})$ .
        Update the particle: $x_{n+1}^{i} = x_n^i -\gamma \nabla f_{\hat{\mu },\hat{\nu }_n}(\widetilde{x}_n^i)$ .
    Update the noise level using an update rule $h$ : $\beta _{n+1}=h(\beta _{n}, n)$ .", "Algorithm: Noisy gradient flow of the MMD for student-teacher learning.
Input: $N$ , $n_{iter}$ , $\beta _0$ , $\gamma $ , $n_{b}$ , $\Xi = (\xi ^j)_{1\le j\le M}$ .
Output: $(x^{i}_{n_{iter}})_{1\le i\le N}$ .
Initialize $N$ particles from the initial distribution $\nu _0$ : $x_{0}^{i}\overset{\text{i.i.d.}}{\sim }\nu _0$ , and initialize the noise level: $\beta =\beta _0$ .
For $n=0,...,n_{iter}$ :
    Sample a minibatch of $n_{b}$ data points: $\lbrace z^1,...,z^{n_{b}}\rbrace $ .
    For $b=1,...,n_{b}$ :
        Compute the teacher's output: $y_{T}^b = \frac{1}{M}\sum _{j=1}^M \psi (z^b,\xi ^{j})$ .
        Compute the student's output: $y_{S}^b = \frac{1}{N}\sum _{i=1}^N \psi (z^b,x_n^i)$ .
    Sample $N$ gaussians: $\lbrace u_n^{1},...,u_n^{N}\rbrace $ .
    For $i=1,...,N$ :
        Compute the noisy particles: $\widetilde{x}_n^{i} = x_n^i +\beta _n u_n^{i}$ .
        Evaluate the vector field: $ \nabla \hat{f}_{\nu _{\Xi },\nu _{X_n}}(\widetilde{x}_n^{i}) = \frac{1}{n_{b}}\sum _{b=1}^{n_{b}} ( y_{S}^b - y_{T}^b ) \nabla _{x_n^{i}} \psi (z^b,\widetilde{x}_n^{i})$ .
        Update particle $i$ : $x_{n+1}^{i} = x_{n}^{i} -\gamma \nabla \hat{f}_{\nu _{\Xi },\nu _{X_n}}(\widetilde{x}_n^{i})$ .
    Update the noise level using an update rule $h$ : $\beta _{n+1}=h(\beta _{n}, n)$ .", "Auxiliary results Proposition 21 Under assump:lipschitzgradientk, the unnormalised witness function $f_{\mu ,\nu }$ between any probability distributions $\mu $ and $\nu $ in $\mathcal {P}_2({{\mathcal {X}}})$ is differentiable and satisfies: $\nabla f_{\mu ,\nu }(z) = \int \nabla _1 k(z,x)\mathop {}\!\mathrm {d}\nu (x) - \int \nabla _1 k(z,x)\mathop {}\!\mathrm {d}\mu (x) \qquad \forall z\in {{\mathcal {X}}}$ where $\nabla _1 k(z,x)$ denotes the gradient of $z\mapsto k(z,x)$ for a fixed $x \in {{\mathcal {X}}}$ .", "Moreover, the map $(z,\mu ,\nu )\mapsto f_{\mu ,\nu }(z)$ is Lipschitz with: $\Vert \nabla f_{\mu ,\nu }(z) - \nabla f_{\mu ^{\prime },\nu ^{\prime }}(z^{\prime })\Vert \le 2L (\Vert z-z^{\prime } \Vert + W_2(\mu ,\mu ^{\prime }) + W_2(\nu ,\nu ^{\prime }))$ Finally, each component of $\nabla f_{\mu ,\nu }$ belongs to ${{\mathcal {H}}}$ .", "The expression of the unnormalised witness function is given in eq:witnessfunction.", "To establish eq:gradientwitness, we simply need to apply the differentiation 
lemma .", "By assump:lipschitzgradientk, it follows that $ (x,z)\\mapsto \\nabla _1 k(z,x)$ has at most a linear growth.", "Hence on any bounded neighborhood of $z$ , $x\\mapsto \\Vert \\nabla _1 k(z,x) \\Vert $ is upper-bounded by an integrable function w.r.t.", "$\\mu $ and $\\nu $ .", "Therefore, the differentiation lemma applies and $\\nabla f_{\\mu ,\\nu }(z)$ is differentiable with gradient given by eq:gradientwitness.", "To prove the second statement, we will consider two optimal couplings: $\\pi _1$ with marginals $\\mu $ and $\\mu ^{\\prime }$ and $\\pi _2$ with marginals $\\nu $ and $\\nu ^{\\prime }$ .", "We use eq:gradientwitness to write: $\\Vert \\nabla f_{\\mu ,\\nu }(z) - \\nabla f_{\\mu ^{\\prime },\\nu ^{\\prime }}(z^{\\prime })\\Vert &= \\left\\Vert \\mathbb {E}_{\\pi _1}\\left[ \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right] - \\mathbb {E}_{\\pi _2}\\left[\\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime })\\right] \\right\\Vert \\\\& \\le \\mathbb {E}_{\\pi _1}\\left[ \\left\\Vert \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right\\Vert \\right] + \\mathbb {E}_{\\pi _2}\\left[\\left\\Vert \\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime }) \\right\\Vert \\right] \\\\&\\le L\\left( \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _1}[\\Vert x-x^{\\prime } \\Vert ] + \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _2}[\\Vert y-y^{\\prime } \\Vert ] \\right)\\\\&\\le L(2\\Vert z-z^{\\prime }\\Vert + W_2(\\mu ,\\mu ^{\\prime }) + W_2(\\nu ,\\nu ^{\\prime }) )$ The second line is obtained by convexity while the third one uses assump:lipschitzgradientk and finally the last line relies on $\\pi _1$ and $\\pi _2$ being optimal.", "The desired bound is obtained by further upper-bounding the last two terms by twice their amount.", "Lemma 22 Let $U$ be an open set, $q$ a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times \\mathcal {U})$ and $\\psi $ and $\\phi $ two measurable maps from 
${{\\mathcal {X}}}\\times \\mathcal {U} $ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ and $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:lipschitzgradientk, $\\mathcal {F}(\\rho _t)$ is differentiable in $t$ with $\\dot{{{\\mathcal {F}}}}(\\rho _t)&=\\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ where $f_{\\mu ,\\rho _t}$ is the unnormalised witness function between $\\mu $ and $\\rho _t$ as defined in eq:witnessfunction.", "Moreover: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\right|\\le 3L\\left|t-s \\right|\\int \\left\\Vert \\phi (x,u) \\right\\Vert ^2 dq(x,u)$ For simplicity, we write $f_t$ instead of $f_{\\mu ,\\rho _t}$ and denote by $s_t(x,u)= \\psi (x,u)+t\\phi (x,u)$ The function $h: t\\mapsto k(s_t(x,u),s_t(x^{\\prime },u^{\\prime })) - k(s_t(x,u),z) - k(s_t(x^{\\prime },u^{\\prime }),z)$ is differentiable for all $(x,u)$ ,$(x^{\\prime },u^{\\prime })$ in ${{\\mathcal {X}}}\\times \\mathcal {U}$ and $z\\in {{\\mathcal {X}}}$ .", "Moreover, by assump:lipschitzgradientk, a simple computation shows that for all $0\\le t\\le 1$ : $\\left|\\dot{h} \\right|\\le L\\left[ \\left(\\left\\Vert z - \\phi (x,u)\\right\\Vert + \\left\\Vert \\psi (x,u)\\right\\Vert \\right) \\left\\Vert \\phi (x^{\\prime },u^{\\prime })\\right\\Vert +\\left(\\left\\Vert z - \\phi (x^{\\prime },u^{\\prime })\\right\\Vert + \\left\\Vert \\psi (x^{\\prime },u^{\\prime })\\right\\Vert \\right)\\left\\Vert \\phi (x,u)\\right\\Vert \\right]$ The right hand side of the above inequality is integrable when $z$ , $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are independent and such that $z\\sim \\mu $ and both $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are distributed according to $q$ .", "Therefore, by the differentiation lemma it follows that ${{\\mathcal 
{F}}}(\\rho _t)$ is differentiable and: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[(\\nabla _1 k(s_t(x,u),s_t(x^{\\prime },u^{\\prime }))-\\nabla _1 k(s_t(x,u),z)).\\phi (x,u)\\right].$ By prop:gradwitnessfunction, we directly get $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ .", "We shall control now the difference $\\vert \\dot{F}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }})\\vert $ for $0\\le t,t^{\\prime }\\le 1$ .", "Using assump:lipschitzgradientk and recalling that $s_t(x,u)-s_{t^{\\prime }}(x,u)= (t-t^{\\prime })\\phi (x,u)$ a simple computation shows: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }}) \\right|&\\le L\\left|t-t^{\\prime } \\right|\\mathbb {E}\\left[\\left(2\\Vert \\phi (x,u) \\Vert + \\Vert \\phi (x^{\\prime },u^{\\prime })\\Vert \\right)\\Vert \\phi (x,u)\\Vert \\right]\\\\&\\le L\\vert t-t^{\\prime }\\vert (2\\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert ^2 \\right] + \\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert \\right]^2)\\\\&\\le 3L\\vert t-t^{\\prime }\\vert \\int \\Vert \\phi (x,u)\\Vert ^2 \\mathop {}\\!\\mathrm {d}q(x,u).$ which gives the desired upper-bound.", "We denote by $(x,y)\\mapsto H_1 k(x,y)$ the Hessian of $x\\mapsto k(x,y)$ for all $y\\in {{\\mathcal {X}}}$ and by $(x,y)\\mapsto \\nabla _1\\nabla _2 k(x,y)$ the upper cross-diagonal block of the hessian of $(x,y)\\mapsto k(x,y)$ .", "Lemma 23 Let $q$ be a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times {{\\mathcal {X}}})$ and $\\psi $ and $\\phi $ two measurable maps from ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ and $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:diffkernel,assump:lipschitzgradientk, 
$\\mathcal {F}(\\rho _t)$ is twice differentiable in $t$ with $\\ddot{{{\\mathcal {F}}}}(\\rho _t)=&\\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] \\\\&+ \\mathbb {E}\\left[\\phi (x,y)^T (H_1k(s_t(x,y),y_t^{\\prime })-H_1k(s_t(x,y),z)) \\phi (x,y)\\right]$ where $(x,y)$ and $(x^{\\prime },y^{\\prime })$ are independent samples from $q$ , $z$ is a sample from $\\mu $ and $s_t(x,y)= \\psi (x,y)+t\\phi (x,y)$ .", "Moreover, if assump:boundedfourthoder also holds then: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho _t)^{\\frac{1}{2}}\\mathbb {E}[\\Vert \\phi (x,y) \\Vert ^2]$ where we recall that ${{\\mathcal {X}}}\\subset \\mathbb {R}^d$ .", "The first part is similar to lem:derivativemmdaugmented.", "In fact we already know by lem:derivativemmdaugmented that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ exists and is given by: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[(\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)\\right]$ Define now the function $\\xi : t\\mapsto (\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)$ which is differentiable for all $(x,y)$ ,$(x^{\\prime },y^{\\prime })$ in ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ and $z\\in {{\\mathcal {X}}}$ by assump:diffkernel.", "Moreover, its time derivative is given by: $\\dot{\\xi } =& \\phi (x^{\\prime },y^{\\prime })^T \\nabla _2\\nabla _1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))\\phi (x,y) \\\\&+ \\phi (x,y)^T(H_1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }) ) - H_1k(s_t(x,y),z ))\\phi (x,y)$ By assump:lipschitzgradientk it follows in particular that $\\nabla _2\\nabla _1k$ and $H_1k$ are bounded hence $\\vert \\dot{\\xi } \\vert $ is upper-bounded 
by $ (\\Vert \\phi (x,y) \\Vert + \\Vert \\phi (x^{\\prime },u^{\\prime }) \\Vert )\\Vert \\phi (x,y)\\Vert $ which is integrable.", "Therefore, by the differentiation lemma it follows that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is differentiable and $\\ddot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[\\dot{\\xi }\\right].$ We prove now the second statement.", "Bu the reproducing property, it is easy to see that the last term in the expression of $\\dot{\\xi }$ can be written as: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")\\rangle _{{{\\mathcal {H}}}}$ Now, taking the expectation w.r.t $x^{\\prime }$ ,$y^{\\prime }$ and $z$ which can be exchanged with the inner-product in ${{\\mathcal {H}}}$ since $(x^{\\prime },y^{\\prime },z)\\mapsto k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")$ is Bochner integrable and recalling that such integral is given by $f_{\\mu ,\\rho _t}$ one gets the following expression: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\rangle _{{{\\mathcal {H}}}}$ Using Cauchy-Schwartz and assump:boundedfourthoder it follows that: $\\vert \\left\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\right\\rangle _{{{\\mathcal {H}}}}\\vert \\le \\lambda d\\Vert \\phi (x,y)\\Vert ^2 \\Vert f_{\\mu ,\\rho _t}\\Vert $ One then concludes using the expression of $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ and recalling that ${{\\mathcal {F}}}(\\rho _t) = \\frac{1}{2}\\Vert f_{\\mu ,\\rho _t} \\Vert ^2$ .", "Lemma 24 Assume that for any geodesic $(\\rho _{t})_{t\\in [0,1]}$ between $\\rho _{0}$ and $\\rho _{1}$ in $\\mathcal {P}({{\\mathcal {X}}})$ with velocity vectors $(V_t)_{t \\in [0,1]}$ the following holds: $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ for some admissible functional $\\Lambda $ as defined in def:conditionslambda, then: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho 
_{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds$ with $G(s,t)=s(1-t) \\mathbb {1}\\lbrace s\\le t\\rbrace +t(1-s) \\mathbb {1}\\lbrace s\\ge t\\rbrace $ for $0\\le s,t\\le 1$ .", "This is a direct consequence of the general identity (, Proposition 16.2).", "Indeed, for any continuous function $\\phi $ on $[0,1]$ with second derivative $\\ddot{\\phi }$ that is bounded below in distribution sense the following identity holds: $\\phi (t)=(1-t)\\phi (0)+t\\phi (1)-\\int _{0}^{1}\\ddot{\\phi }(s)G(s,t)ds.$ This holds a fortiori for ${{\\mathcal {F}}}(\\rho _{t})$ since ${{\\mathcal {F}}}$ is smooth.", "By assumption, we have that $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ , hence, it follows that: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho _{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds.$ Lemma 25 [Mixture convexity] The functional ${{\\mathcal {F}}}$ is mixture convex: for any probability distributions $\\nu _1$ and $\\nu _2$ and scalar $1\\le \\lambda \\le 1$ : ${{\\mathcal {F}}}(\\lambda \\nu _1+(1-\\lambda )\\nu _2)\\le \\lambda {{\\mathcal {F}}}(\\nu _1)+ (1-\\lambda ){{\\mathcal {F}}}(\\nu _2)$ Let $\\nu $ and $\\nu ^{\\prime }$ be two probability distributions and $0\\le \\lambda \\le 1$ .", "Expanding the RKHS norm in ${{\\mathcal {F}}}$ it follows directly that: $\\mathcal {F}(\\lambda \\nu + (1-\\lambda )\\nu ^{\\prime }) -\\lambda \\mathcal {F}(\\nu ) -(1-\\lambda )\\mathcal {F}(\\nu ^{\\prime }) = -\\frac{1}{2}\\lambda (1-\\lambda )MMD(\\nu ,\\nu ^{\\prime })^2 \\le 0.$ which concludes the proof.", "Lemma 26 [Discrete Gronwall lemma] Let $a_{n+1}\\le (1+\\gamma A)a_{n}+b$ with $\\gamma >0$ , $A>0$ , $b>0$ and $a_0=0$ , then: $a_{n}\\le \\frac{b}{\\gamma A}(e^{n\\gamma A}-1).$ Using the recursion, it is easy to see that for any $n>0$ : $a_n \\le (1+\\gamma A)^n a_0 + b\\left(\\sum _{i=0}^{n-1}(1+\\gamma A )^{k}\\right)$ One concludes 
using the identity $\sum _{k=0}^{n-1}(1+\gamma A )^{k} =\frac{1}{\gamma A}((1+\gamma A)^{n} -1)$ and recalling that $(1+\gamma A)^{n} \le e^{n\gamma A}$ ." ], ["Continuous time flow", "Existence and uniqueness of a solution to eq:continuitymmd,eq:mcKeanVlasovprocess is guaranteed under Lipschitz regularity of $\nabla k$ .", "[Proof of prop:existenceuniqueness][Existence and uniqueness] Under assump:lipschitzgradientk, the map $(x,\nu )\mapsto \nabla f_{\mu ,\nu }(x)=\int \nabla k(x,.)d \nu - \int \nabla k(x,.)d \mu $ is Lipschitz continuous on ${{\mathcal {X}}}\times \mathcal {P}_2({{\mathcal {X}}})$ (endowed with the product of the canonical metric on ${{\mathcal {X}}}$ and $W_2$ on $\mathcal {P}_2({{\mathcal {X}}})$ ), see prop:gradwitnessfunction.", "Hence, we benefit from standard existence and uniqueness results for McKean-Vlasov processes (see ).", "Then, it is straightforward to verify that the distribution of eq:mcKeanVlasovprocess is a solution of eq:continuitymmd by Itô's formula (see ).", "The uniqueness of the gradient flow, given a starting distribution $\nu _0$ , results from the $\lambda $ -convexity of ${{\mathcal {F}}}$ (for $\lambda =3L$ ) which is given by lem:lambdaconvexitybis, and .", "The existence derives from the fact that the sub-differential of ${{\mathcal {F}}}$ is single-valued, as stated by prop:differentialmmd, and that any $\nu _0$ in $\mathcal {P}_2({{\mathcal {X}}})$ is in the domain of ${{\mathcal {F}}}$ .", "One can then apply .", "[Proof of prop:decaymmd][Decay of the MMD] Recalling the discussion in subsec:gradientflowsfunctionals, the time derivative of ${{\mathcal {F}}}(\nu _t)$ along the flow is formally given by eq:dissipationenergy.", "But we know from prop:differentialmmd that the strong differential $\nabla \frac{\delta {{\mathcal {F}}}(\nu )}{\delta \nu }$ is given by $\nabla f_{\mu ,\nu }$ .", "Therefore, one formally obtains the desired expression by exchanging the 
order of derivation and integration, performing an integration by parts and using the continuity equation (see (REF )).", "We refer to for similar calculations.", "One can also obtain directly the same result using the energy identity in which holds for $\\lambda $ -displacement convex functionals.", "The result applies here since, by lem:lambdaconvexitybis, we know that ${{\\mathcal {F}}}$ is $\\lambda $ -displacement convex with $\\lambda = 3L$ ." ], [ "Time-discretized flow", "We prove that eq:eulerscheme approximates eq:continuitymmd.", "To make the dependence on the step-size $\\gamma $ explicit, we will write: $\\nu _{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _n^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ (so $\\nu _n^{\\gamma }=\\nu _n$ for any $n \\ge 0$ ).", "We start by introducing an auxiliary sequence $\\bar{\\nu }_{n}^{\\gamma }$ built by iteratively applying $\\nabla f_{\\mu ,\\nu _{\\gamma n}}$ where $\\nu _{\\gamma n}$ is the solution of eq:continuitymmd at time $t= \\gamma n$ : $\\bar{\\nu }_{n+1}^{\\gamma } =(I-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}})_{\\#}\\bar{\\nu }_{n}^{\\gamma }$ with $\\bar{\\nu }_{0}=\\nu _{0}$ .", "Note that the latter sequence involves the continuous-time process $\\nu _t$ of eq:continuitymmd with $t=\\gamma n$ .", "Using $\\nu _n^{\\gamma }$ , we also consider the interpolation path $\\rho _{t}^{\\gamma }=(I-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }})_{\\#}\\nu _{n}^{\\gamma }$ for all $t\\in [n\\gamma ,(n+1)\\gamma )$ and $n\\in \\mathbb {N}$ , which is the same as in prop:convergenceeulerscheme.", "[Proof of prop:convergenceeulerscheme] Let $\\pi $ be an optimal coupling between $\\nu _{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ , and $(x,y)$ a sample from $\\pi $ .", "For $t\\in [n\\gamma ,(n+1)\\gamma )$ we write $y_{t} =y_{n\\gamma }-\\int _{n\\gamma }^{t}\\nabla f_{\\mu ,\\nu _{s}}(y_u)\\mathop {}\\!\\mathrm {d}u$ and $x_{t} =x-(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ where $y_{n\\gamma 
}= y$ .", "We also introduce the approximation error $ E(t,n\\gamma ):=y_{t}-y+(t-n\\gamma )\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y)$ for which we know by lem:Taylor-expansion that $\\mathcal {E}(t,n\\gamma ):=\\mathbb {E}[E(t,n\\gamma )^2]^{\\frac{1}{2}}$ is upper-bounded by $(t-n\\gamma )^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "This allows to write: $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t}) & \\le \\mathbb {E}\\left[\\left\\Vert y-x+(t-n\\gamma )(\\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\nabla f_{\\mu ,\\nu _{\\gamma n}}(y))+E(t,n\\gamma )\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+4L(t-n\\gamma )W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+\\mathcal {E}(t,n\\gamma )\\\\& \\le (1+4\\gamma L)W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n})+(t-\\gamma n)^2C\\\\ &\\le (1+4\\gamma L)\\left( W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })+W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma }) \\right)+\\gamma ^{2}C \\\\& \\le \\gamma \\left[\\left(1+4\\gamma L\\right)M(T)+\\gamma C\\right]$ The second line is obtained using that $\\nabla f_{\\mu ,\\nu _{\\gamma n}}(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ (see prop:gradwitnessfunction) and by the fact that $W_{2}(\\nu _{n}^{\\gamma },\\nu _{\\gamma n}) = \\mathbb {E}_{\\pi }[\\Vert y-x\\Vert ^2]^{\\frac{1}{2}}$ .", "The third one is obtained using $t-n \\gamma \\le \\gamma $ .", "For the last inequality, we used lem:eulererror1,lem:eulererror2 where $M(T)$ is a constant that depends only on $T$ .", "Hence for $\\gamma \\le \\frac{1}{4L}$ we get $W_{2}(\\rho _{t}^{\\gamma },\\nu _{t})\\le \\gamma (\\frac{C}{4L}+2M(T)).$ Lemma 10 For any $n\\ge 0$ : $W_{2}(\\nu _{\\gamma n},\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{2L}(e^{n\\gamma 2L}-1)$ Let $\\pi $ be an optimal coupling between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{\\gamma n}$ and $(\\bar{x}$ , $x)$ a joint sample from $\\pi $ .", 
"Consider also the joint sample $(\bar{y},y)$ obtained from $(\bar{x}$ ,$x)$ by applying the gradient flow of ${{\mathcal {F}}}$ in continuous time to get $y := x_{(n+1)\gamma }=x_{n \gamma }-\int _{n\gamma }^{(n+1)\gamma }\nabla f_{\mu ,\nu _{u}}(x_u)\mathop {}\!\mathrm {d}u$ with $x_{n\gamma } = x$ , and by taking a discrete step from $\bar{x}$ to write $\bar{y}=\bar{x}-\gamma \nabla f_{\mu ,\nu _{\gamma n}}(\bar{x})$ .", "It is easy to see that $y\sim \nu _{\gamma (n+1)}$ (i.e., a sample from the continuous process eq:continuitymmd at time $t=(n+1)\gamma $ ) and $\bar{y}\sim \bar{\nu }_{n+1}^{\gamma }$ (i.e., a sample from eq:intermedprocesstime).", "Moreover, we introduce the approximation error $E((n+1)\gamma ,n\gamma ):=y-x+\gamma \nabla f_{\mu ,\nu _{\gamma n}}(x)$ for which we know by lem:Taylor-expansion that $\mathcal {E}((n+1)\gamma ,n\gamma ):=\mathbb {E}[E((n+1)\gamma ,n\gamma )^2]^{\frac{1}{2}}$ is upper-bounded by $\gamma ^{2}C$ for some positive constant $C$ that depends only on $T$ and the Lipschitz constant $L$ .", "Denoting by $a_{n}=W_{2}(\nu _{\gamma n},\bar{\nu }_{n}^{\gamma })$ , one can therefore write: $a_{n+1}\le & \mathbb {E_{\pi }}\left[\left\Vert x-\gamma \nabla f_{\mu ,\nu _{\gamma n}}(x)-\bar{x}+\gamma \nabla f_{\mu ,\nu _{\gamma n}}(\bar{x})+E((n+1)\gamma ,n\gamma )\right\Vert ^{2}\right]^{\frac{1}{2}}\\\le & \mathbb {E_{\pi }}\left[\left\Vert x-\bar{x}\right\Vert ^{2}\right]^{\frac{1}{2}}+\gamma \mathbb {E_{\pi }}\left[\left\Vert \nabla f_{\mu ,\nu _{\gamma n}}(x)-\nabla f_{\mu ,\nu _{\gamma n}}(\bar{x})\right\Vert ^{2}\right]^{\frac{1}{2}}+\gamma ^{2}C$ Using that $\nabla f_{\mu ,\nu _{\gamma n}}$ is $2L$ -Lipschitz by prop:gradwitnessfunction and recalling that $\mathbb {E}_{\pi }\left[\Vert x-\bar{x}\Vert ^{2}\right]^{\frac{1}{2}}=W_{2}(\nu _{\gamma n},\bar{\nu }_{n}^{\gamma })$ , we get the recursive 
inequality $a_{n+1}\\le (1+2 \\gamma L)a_{n}+\\gamma ^{2}C$ .", "Finally, using lem:Discrete-Gronwall-lemma and recalling that $a_{0}=0$ , since by definition $\\bar{\\nu }_{0}^{\\gamma }=\\nu _{0}^{\\gamma }$ , we conclude that $a_{n}\\le \\gamma \\frac{C}{2L}(e^{n\\gamma 2L}-1)$ .", "Lemma 11 For any $T>0$ and $n$ such that $n\\gamma \\le T$ $W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ Consider now an optimal coupling $\\pi $ between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "Similarly to lem:eulererror1, we denote by $(\\bar{x},x)$ a joint sample from $\\pi $ and $(\\bar{y},y)$ is obtained from $(\\bar{x},x)$ by applying the discrete updates : $\\bar{y}=\\bar{x}-\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})$ and $y=x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)$ .", "We again have that $y\\sim \\nu _{n+1}^{\\gamma }$ (i.e.", "a sample from the time discretized process eq:eulerscheme) and $\\bar{y}\\sim \\bar{\\nu }_{n+1}^{\\gamma }$ (i.e.", "a sample from eq:intermedprocesstime).", "Now, denoting by $b_{n}=W_{2}(\\nu _{n}^{\\gamma },\\bar{\\nu }_{n}^{\\gamma })$ , it is easy to see from the definition of $\\bar{y}$ and $y$ that we have: $b_{n+1} & \\le \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\gamma \\nabla f_{\\mu ,\\nu _{n}^{\\gamma }}(x)-\\bar{x}+\\gamma \\nabla f_{\\mu ,\\nu _{\\gamma n}}(\\bar{x})\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\&\\le (1+2\\gamma L) \\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert ^2\\right]^{\\frac{1}{2}} + 2\\gamma L W_2(\\nu _n^{\\gamma },\\nu _{\\gamma n}))\\\\& \\le (1+ 4\\gamma L)b_n + \\gamma L W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})$ The second line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is $2L$ -Lipschitz in both $x$ and $\\nu $ by prop:gradwitnessfunction.", "The third line follows by triangular inequality and using $\\mathbb {E_{\\pi }}\\left[\\left\\Vert x-\\bar{x}\\right\\Vert 
^2\\right]^{\\frac{1}{2}}= W_2(\\nu _n^{\\gamma },\\bar{\\nu }_n^{\\gamma }) = b_n$ , since $\\pi $ is an optimal coupling between $\\bar{\\nu }_{n}^{\\gamma }$ and $\\nu _{n}^{\\gamma }$ .", "By lem:eulererror1, we have $W_2(\\bar{\\nu }_n^{\\gamma },\\nu _{\\gamma n})\\le \\gamma \\frac{C}{2L}(e^{2n\\gamma L}-1)$ , hence, for any $n$ such that $n\\gamma \\le T$ we get the recursive inequality $b_{n+1}\\le (1+4\\gamma L)b_{n}+(C/2L)\\gamma ^{2}(e^{2TL}-1).$ Finally, using again lem:Discrete-Gronwall-lemma, it follows that $b_{n}\\le \\gamma \\frac{C}{8L^2}(e^{4TL}-1)^{2}$ .", "Lemma 12 [Taylor expansion] Consider the process $ \\dot{x}_t = - \\nabla f_{\\mu ,\\nu _t}(x_t) $ , and denote by $\\mathcal {E}(t,s) = \\mathbb {E}[ \\Vert x_t - x_s +(t-s)\\nabla f_{\\mu ,\\nu _s}(x_s) \\Vert ^2 ]^{\\frac{1}{2}} $ for $0\\le s \\le t \\le T$ .", "Then one has: $\\mathcal {E}(t,s)\\le 2L^2 r_0 e^{LT}(t-s)^2$ with $r_0 = \\mathbb {E}_{(x,z)\\sim \\nu _0 \\otimes \\mu }[\\Vert x-z \\Vert ]$ By definition of $x_t$ and $\\mathcal {E}(t,s)$ one can write: $\\mathcal {E}(t,s)&=\\mathbb {E}\\left[\\left\\Vert \\int _{s}^t (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u))\\mathop {}\\!\\mathrm {d}u \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\\\&\\le \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert (\\nabla f_{\\mu ,\\nu _s}(x_s) - \\nabla f_{\\mu ,\\nu _u}(x_u)) \\right\\Vert ^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le 2L\\int _{s}^t \\mathbb {E}\\left[(\\left\\Vert x_s - x_u \\right\\Vert + W_2(\\nu _s,\\nu _u))^2 \\right]^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u \\le 4L\\int _{s}^t \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u$ Where we used an integral expression for $x_t$ in the first line then applied a triangular inequality for the second line.", "The last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ by 
prop:gradwitnessfunction and that $W_2(\\nu _s,\\nu _u) \\le \\mathbb {E}\\left[\\left\\Vert x_s - x_u \\right\\Vert ^2\\right]^{\\frac{1}{2}}$ .", "Now we use again an integral expression for $x_u$ which further gives: $\\mathcal {E}(t,s) \\le & 4L \\int _{s}^t \\mathbb {E}\\left[\\left\\Vert \\int _s^u \\nabla f_{\\mu ,\\nu _l}(x_l) \\mathop {}\\!\\mathrm {d}l \\right\\Vert ^2 \\right]^{\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u\\\\\\le & 4L \\int _{s}^t \\int _s^u \\mathbb {E}\\left[ \\left\\Vert \\mathbb {E}\\left[ \\nabla _1 k(x_l,x_l^{\\prime }) - \\nabla _1 k(x_l,z) \\right] \\right\\Vert ^2 \\right]^\\frac{1}{2}\\mathop {}\\!\\mathrm {d}l\\mathop {}\\!\\mathrm {d}u\\\\\\le &4L^2 \\int _{s}^t \\int _s^u \\mathbb {E}\\left[\\left\\Vert x_l^{\\prime } - z \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u$ Again, the second line is obtained using a triangular inequality and recalling the expression of $\\nabla f_{\\mu ,\\nu }(x)$ from prop:gradwitnessfunction.", "The last line uses that $\\nabla k$ is $L$ -Lipschitz by assump:lipschitzgradientk.", "Now we need to make sure that $\\Vert x_l^{\\prime } - z \\Vert $ remains bounded at finite times.", "For this we will first show that $ r_t = \\mathbb {E}[\\Vert x_t - z \\Vert ]$ satisfies an integro-differential inequality: $r_t\\le & \\mathbb {E}\\left[\\left\\Vert x_0 - z -\\int _0^t \\nabla f_{\\mu ,\\nu _s}(x_s) \\mathop {}\\!\\mathrm {d}s \\right\\Vert \\right]\\\\\\le &r_0 +\\int _0^t \\mathbb {E}\\left[\\left\\Vert \\nabla _1 k(x_s,x_s^{\\prime })- \\nabla _1 k(x_s,z) \\right\\Vert \\right] \\mathop {}\\!\\mathrm {d}s\\le r_0 + L\\int _0^t r_s \\mathop {}\\!\\mathrm {d}s$ Again, we used an integral expression for $x_t$ in the first line, then a triangular inequality recalling the expression of $\\nabla f_{\\mu ,\\nu _s}$ .", "The last line uses again that $\\nabla k$ is $L$ -Lipschitz.", "By Gronwall's lemma it is easy to see that $r_t \\le r_0e^{Lt}$ at all times.", "Moreover, 
for all $t\\le T$ we have a fortiori that $r_t \\le r_0 e^{LT}$ .", "Recalling back the upper-bound on $\\mathcal {E}(t,s)$ we have finally: $\\mathcal {E}(t,s)\\le 4L^2 r_0 e^{LT} \\int _{s}^t \\int _s^u \\mathop {}\\!\\mathrm {d}l \\mathop {}\\!\\mathrm {d}u = 2L^2 r_0 e^{LT}(t-s)^2$ We show now that eq:eulerscheme decreases the functional ${{\\mathcal {F}}}$ .", "In all the proofs, the step-size $\\gamma $ is fixed.", "[Proof of prop:decreasingfunctional] Consider a path between $\\nu _n$ and $\\nu _{n+1}$ of the form $\\rho _t =(I-\\gamma t\\nabla f_{\\mu ,\\nu _n})_{\\#}\\nu _n$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ Lipschitz, thus by lem:derivativemmdaugmented and using $\\phi (x) = -\\gamma \\nabla f_{\\mu ,\\nu _n}(x)$ , $\\psi (x) = x$ and $q = \\nu _n$ it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and hence absolutely continuous.", "Therefore one can write: $\\mathcal {F}(\\rho _1)-\\mathcal {F}(\\rho _0) = \\dot{\\mathcal {F}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)dt.$ Moreover, lem:derivativemmdaugmented also allows to write: $\\dot{\\mathcal {F}}(\\rho _0) = -\\gamma \\int \\Vert \\nabla f_{\\mu ,\\nu _n}(x) \\Vert ^2 d\\nu _n(x); \\qquad \\vert \\dot{{{\\mathcal {F}}}}(\\rho _t)- \\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3L t\\gamma ^2 \\int \\Vert \\nabla f_{\\mu ,\\nu _n}(X) \\Vert ^2 d\\nu _n(X).$ where $t\\le 1$ .", "Hence, the result follows directly by applying the above expression to eq:taylorexpansiondecreasing." 
], [ "Equilibrium condition", "We discuss here the equilibrium condition eq:equilibriumcondition and relate it to .", "Recall that eq:equilibriumcondition is given by: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = 0$ .", "Under some mild assumptions on the kernel, which are stated in , it is possible to write eq:equilibriumcondition as: $\\int \\Vert \\nabla f_{\\mu ,\\nu ^{*}}(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x) = \\langle f_{\\mu ,\\nu ^{*}} , D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}}\\rangle _{{{\\mathcal {H}}}} = 0$ where $D_{\\nu ^{*}}$ is a Hilbert-Schmidt operator given by: $D_{\\nu ^{*}} = \\int \\sum _{i=1}^d \\partial _i k(x,.)\\otimes \\partial _i k(x,.) \\mathop {}\\!\\mathrm {d}\\nu ^{*}(x)$ Hence eq:equilibriumcondition is equivalent to saying that $f_{\\mu ,\\nu ^{*}}$ belongs to the null space of $D_{\\nu ^{*}}$ .", "In , a similar equilibrium condition is derived by considering the time derivative of the MMD along the KSD gradient flow: $\\frac{1}{2} \\frac{d}{dt} MMD^2(\\mu ,\\nu _t) = - \\lambda \\langle f_{\\mu ,\\nu _t}, (\\frac{1}{\\lambda }I - (D_{\\nu _t} +\\lambda I )^{-1})f_{\\mu ,\\nu _t} \\rangle _{{{\\mathcal {H}}}}$ The r.h.s. is shown to be always negative and thus the MMD decreases in time.", "Hence, as $t$ approaches $\\infty $ , the r.h.s. tends to 0 since the MMD converges to some limit value $l$ .", "This provides the equilibrium condition: $\\lambda \\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0$ It is further shown in that the above equation is also equivalent to having $f_{\\mu ,\\nu ^{*}}$ in the null space of $D_{\\nu ^{*}}$ in the case when $D_{\\nu ^{*}}$ is finite-dimensional.", "We generalize this statement to infinite dimensions in prop:nullspacediffoperator.", "In , it is simply assumed that if $f_{\\mu ,\\nu ^{*}} \\ne 0$ then $D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}} \\ne 0 $ 
which exactly amounts to assuming that local optima which are not global don't exist.", "Proposition 13 $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0 \\iff f_{\\mu ,\\nu ^{*}} \\in null(D_{\\nu ^{*}})$ This follows simply by recalling that $D_{\\nu ^{*}}$ is a symmetric non-negative Hilbert-Schmidt operator; it therefore admits an eigendecomposition of the form: $D_{\\nu ^{*}} = \\sum _{i=1}^{\\infty } \\lambda _i e_i \\otimes e_i$ where $(e_i)$ is an orthonormal basis of ${{\\mathcal {H}}}$ and the $\\lambda _i$ are non-negative.", "Moreover, $f_{\\mu ,\\nu ^{*}}$ can be decomposed in $(e_i)_{1\\le i}$ in the form: $f_{\\mu ,\\nu ^{*}} = \\sum _{i=1}^{\\infty } \\alpha _i e_i$ where $(\\alpha _i)$ is a square-summable sequence.", "It follows that $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}}$ can be written as: $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = \\sum _{i=1}^{\\infty } \\frac{\\lambda _i}{\\lambda _i+\\lambda } \\alpha _i^2$ Hence, if $f_{\\mu ,\\nu ^{*}}\\in null(D_{\\nu ^{*}})$ then $\\langle f_{\\mu ,\\nu ^{*}}, D_{\\nu ^{*}}f_{\\mu ,\\nu ^{*}}\\rangle _{{{\\mathcal {H}}}}= 0$ , so that $\\sum _{i=1}^{\\infty } \\lambda _i \\alpha _i^2 = 0$ .", "Since the $\\lambda _i$ are non-negative, this implies that $\\lambda _i \\alpha _i^2= 0$ for all $i$ .", "Therefore, it must be that $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0$ .", "Similarly, if $\\langle f_{\\mu ,\\nu ^{*}}, (\\frac{1}{\\lambda }I - (D_{\\nu ^{*}} +\\lambda I )^{-1})f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} =0 $ then $\\frac{\\lambda _i\\alpha _i^2}{\\lambda _i + \\lambda } = 0$ for all $i$ , hence $\\langle f_{\\mu ,\\nu ^{*}}, 
D_{\\nu ^{*}} f_{\\mu ,\\nu ^{*}} \\rangle _{{{\\mathcal {H}}}} = 0$ .", "This means that $f_{\\mu ,\\nu ^{*}}$ belongs to $null(D_{\\nu ^*})$ ." ], [ "$\\Lambda $ -displacement convexity of the MMD", "We now provide a proof of prop:lambdaconvexity: [Proof of prop:lambdaconvexity][$\\Lambda $ -displacement convexity of the MMD] To prove that $\\nu \\mapsto {{\\mathcal {F}}}(\\nu )$ is $\\Lambda $ -convex we need to compute the second time derivative $\\ddot{{{\\mathcal {F}}}}(\\rho _{t})$ where $(\\rho _{t})_{t \\in [0,1]}$ is a displacement geodesic between two probability distributions $\\nu _{0}$ and $\\nu _{1}$ as defined in eq:displacementgeodesic.", "Such a geodesic always exists and can be written as $\\rho _t = (s_t)_{\\#}\\pi $ with $s_t = x + t(y-x)$ for all $t\\in [0,1]$ , where $\\pi $ is an optimal coupling between $\\nu _0$ and $\\nu _1$ (, Theorem 5.27).", "We denote by $V_t$ the corresponding velocity vector as defined in eq:continuityequation.", "Recall that ${{\\mathcal {F}}}(\\rho _t) = \\frac{1}{2} \\Vert f_{\\mu ,\\rho _t}\\Vert ^2_{\\mathcal {H}}$ , with $f_{\\mu ,\\rho _t}$ defined in eq:witnessfunction.", "We start by computing the first derivative of $ t\\mapsto {{\\mathcal {F}}}(\\rho _t) $ .", "Since assump:diffkernel,assump:lipschitzgradientk hold, lem:secondderivativeaugmentedmmd applies for $\\phi (x,y) = y-x$ , $\\psi (x,y) = x$ and $q = \\pi $ , thus we know that $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ is well defined and given by: $\\begin{split}\\ddot{{{\\mathcal {F}}}}(\\rho _t) =&\\mathbb {E}\\left[ (y-x)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))(y^{\\prime }-x^{\\prime })\\right]\\\\&+ \\mathbb {E}\\left[ (y-x)^T( H_1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-H_1 k(s_t(x,y),z))(y-x)\\right]\\end{split}$ Moreover, assump:boundedfourthoder also holds, which means by lem:secondderivativeaugmentedmmd that the second term in eq:hessian can be lower-bounded by $-\\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t)\\mathbb 
{E}[ \\Vert y-x \\Vert ^2]$ so that: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\mathbb {E}\\left[ (y-x)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))(y^{\\prime }-x^{\\prime })\\right] - \\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t) \\mathbb {E}[ \\Vert y-x \\Vert ^2]$ Recall now that $(\\rho _t)_{t \\in [0,1]}$ is a constant speed geodesic with velocity vector $(V_t)_{t\\in [0,1]}$ , thus by a change of variable, one further has: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\int \\left[ V_t^T(x)\\nabla _1 \\nabla _2 k(x,x^{\\prime })V_t(x^{\\prime })\\right]\\mathop {}\\!\\mathrm {d}\\rho _t(x)\\mathop {}\\!\\mathrm {d}\\rho _t(x^{\\prime }) - \\sqrt{2}\\lambda d{{\\mathcal {F}}}(\\rho _t) \\int \\Vert V_t(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\rho _t(x).$ Now we can introduce the function $\\Lambda (\\rho ,v) = \\langle v ,( C_{\\rho } -\\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho )^{\\frac{1}{2}} I) v \\rangle _{L_2(\\rho )}$ which is defined for any pair $(\\rho ,v)$ with $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ and $v$ a square integrable vector field in $L_2(\\rho )$ and where $C_{\\rho }$ is a non-negative operator given by $(C_{\\rho }v)(x)=\\int \\nabla _{x}\\nabla _{x^{\\prime }}k(x,x^{\\prime })v(x^{\\prime })d\\rho (x^{\\prime })$ for any $x \\in {{\\mathcal {X}}}$ .", "This allows us to write $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\Lambda (\\rho _t,V_t)$ .", "It is clear that $\\Lambda (\\rho ,.)$ is a quadratic form on $L_2(\\rho )$ and satisfies the requirement in def:conditionslambda.", "Finally, using lem:integrallambdaconvexity and def:lambda-convexity we conclude that ${{\\mathcal {F}}}$ is $\\Lambda $ -convex.", "Moreover, by the reproducing property we also know that for all $\\rho \\in \\mathcal {P}_2({{\\mathcal {X}}})$ : $ \\mathbb {E}_{\\rho }\\left[ v(x)^T \\nabla _1 \\nabla _2 k(x,x^{\\prime }) v(x^{\\prime }) \\right] = \\mathbb {E}_{\\rho }\\left[\\left\\langle v(x)^T \\nabla _1 k(x,.), v(x^{\\prime })^T \\nabla _1 k(x^{\\prime },.) \\right\\rangle 
_{{{\\mathcal {H}}}}\\right].$ By Bochner integrability of $v(x)^T \\nabla _1 k(x,.)$ it is possible to exchange the order of the integral and the inner-product .", "This leads to the expression $\\Vert \\mathbb {E}[v(x)^T \\nabla _1 k(x,.)]\\Vert ^2_{{{\\mathcal {H}}}}$ .", "Hence $\\Lambda (\\rho ,v)$ has a second expression of the form: $\\Lambda (\\rho ,v) = \\left\\Vert \\mathbb {E}_{\\rho }\\left[v(x)^T \\nabla _1 k(x,.)\\right]\\right\\Vert ^2_{{{\\mathcal {H}}}} - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho )^{\\frac{1}{2}}\\mathbb {E}_{\\rho }\\left[\\left\\Vert v(x)\\right\\Vert ^2 \\right].$ We also provide a result showing $\\Lambda $ -convexity for ${{\\mathcal {F}}}$ only under assump:lipschitzgradientk: Lemma 14 ($\\Lambda $ -displacement convexity) Under assump:lipschitzgradientk, for any $\\nu ,\\nu ^{\\prime }\\in \\mathcal {P}_2({{\\mathcal {X}}})$ and any constant speed geodesic $\\rho _t$ from $\\nu $ to $\\nu ^{\\prime }$ , ${{\\mathcal {F}}}$ satisfies for all $0\\le t\\le 1$ : ${{\\mathcal {F}}}(\\rho _t) \\le (1-t){{\\mathcal {F}}}(\\nu ) + t{{\\mathcal {F}}}(\\nu ^{\\prime }) + 3L W_2^2(\\nu ,\\nu ^{\\prime }) \\qquad $ Let $\\rho _t$ be a constant speed geodesic of the form $\\rho _t = (s_t)_{\\#}\\pi $ where $\\pi $ is an optimal coupling between $\\nu $ and $\\nu ^{\\prime }$ and $s_t(x,y) = x + t(y-x)$ .", "Since assump:lipschitzgradientk holds, one can apply lem:derivativemmdaugmented with $\\psi (x,y) =x $ , $\\phi (x,y)= y-x$ and $q = \\pi $ .", "Hence, one has that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and its differential satisfies: $\\vert \\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\vert \\le 3L\\vert t-s \\vert \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ This implies that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is Lipschitz continuous and therefore differentiable for almost all $t\\in [0,1]$ by Rademacher's theorem.", "Hence, $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ 
is well defined for almost all $t\\in [0,1]$ .", "Moreover, from the above inequality it follows that $\\ddot{{{\\mathcal {F}}}}(\\rho _t)\\ge - 3L \\int \\Vert y-x\\Vert ^2\\mathop {}\\!\\mathrm {d}\\pi (x,y) = -3LW_2^2(\\nu ,\\nu ^{\\prime })$ for almost all $t\\in [0,1]$ .", "Using lem:integrallambdaconvexity it follows directly that ${{\\mathcal {F}}}$ satisfies the desired inequality." ], [ "Descent up to a barrier", "To provide a proof of th:ratesmmd, we need the following preliminary results.", "Firstly, an upper-bound on a scalar product involving $\\nabla f_{\\mu , \\nu }$ for any $\\mu , \\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ in terms of the loss functional ${{\\mathcal {F}}}$ is obtained using the $\\Lambda $ -displacement convexity of ${{\\mathcal {F}}}$ in lem:gradflowlambdaversion.", "Then, an EVI (Evolution Variational Inequality) is obtained in prop:evi on the gradient flow of ${{\\mathcal {F}}}$ in $W_2$ .", "The proof of the theorem is given afterwards.", "Lemma 15 Let $\\nu $ be a distribution in $\\mathcal {P}_2({{\\mathcal {X}}})$ and $\\mu $ the target distribution such that ${{\\mathcal {F}}}(\\mu )=0$ .", "Let $\\pi $ be an optimal coupling between $\\nu $ and $\\mu $ , and $(\\rho _t)_{t \\in [0,1]}$ the displacement geodesic defined by eq:displacementgeodesic with its corresponding velocity vector $(V_t)_{t\\in [0,1]}$ as defined in eq:continuityequation.", "Finally let $\\nabla f_{\\mu ,\\nu }(X)$ be the gradient of the unnormalised witness function between $\\mu $ and $\\nu $ .", "The following inequality holds: $\\int \\nabla f_{\\mu , \\nu }(x).(y-x) d\\pi (x,y)\\le {{\\mathcal {F}}}(\\mu )- {{\\mathcal {F}}}(\\nu ) -\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds$ where $\\Lambda $ is defined in prop:lambdaconvexity.", "Recall that for all $t\\in [0,1]$ , $\\rho _t$ is given by $\\rho _t = (s_t)_{\\#}\\pi $ with $s_t = x + t(y-x)$ .", "By $\\Lambda $ -convexity of $\\mathcal {F}$ the following inequality holds: $\\mathcal 
{F}(\\rho _{t})\\le (1-t)\\mathcal {F}(\\nu )+t \\mathcal {F}(\\mu ) - \\int _0^1 \\Lambda (\\rho _s,V_s)G(s,t)ds$ Hence, by bringing $\\mathcal {F}(\\nu )$ to the l.h.s., dividing by $t$ and taking the limit as $t\\rightarrow 0$ , it follows that: $\\dot{{{\\mathcal {F}}}}(\\rho _t)\\vert _{t=0}\\le \\mathcal {F}(\\mu )-\\mathcal {F}(\\nu )-\\int _0^1 \\Lambda (\\rho _s,V_s)(1-s)ds.$ where $\\dot{{{\\mathcal {F}}}}(\\rho _t)=d{{\\mathcal {F}}}(\\rho _t)/dt$ and since $\\lim _{t \\rightarrow 0}G(s,t)=(1-s)$ .", "Moreover, under assump:lipschitzgradientk, lem:derivativemmdaugmented applies for $\\phi (x,y) = y-x$ , $\\psi (x,y)= x$ and $q = \\pi $ .", "It follows therefore that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is differentiable with time derivative given by: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\int \\nabla f_{\\mu ,\\rho _t}(s_t(x,y)).(y-x)\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ .", "Hence at $t=0$ we get: $\\dot{{{\\mathcal {F}}}}(\\rho _t)\\vert _{t=0} = \\int \\nabla f_{\\mu ,\\nu }(x).(y-x)\\mathop {}\\!\\mathrm {d}\\pi (x,y)$ which shows the desired result when used in eq:firstorderlambda.", "Proposition 16 Consider the sequence of distributions $\\nu _n$ obtained from eq:eulerscheme.", "For $n\\ge 0$ , consider the scalar $ K(\\rho ^n) := \\int _0^1\\Lambda (\\rho _s^n,V_s^n)(1-s)\\mathop {}\\!\\mathrm {d}s$ where $(\\rho _s^n)_{0\\le s\\le 1}$ is a constant speed displacement geodesic from $\\nu _n$ to the optimal value $\\mu $ with velocity vectors $(V_s^n)_{0\\le s\\le 1}$ .", "If $\\gamma \\le 1/(3L)$ , where $L$ is the Lipschitz constant of $\\nabla k$ in assump:lipschitzgradientk, then: $2\\gamma ({{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal {F}}}(\\mu ))\\le W_2^2(\\nu _n,\\mu )-W_2^2(\\nu _{n+1},\\mu )-2\\gamma K(\\rho ^n).$ Let $\\pi ^n$ be the optimal coupling between $\\nu _n$ and $\\mu $ ; then the squared Wasserstein distance between $\\nu _n$ and $\\mu $ is given by: $W_2^2(\\mu ,\\nu _n)=\\int \\Vert X-Y \\Vert ^2 d\\pi ^n(X,Y)$ Moreover, 
consider $Z=X-\\gamma \\nabla f_{\\mu , \\nu _n}(X)$ where $(X,Y)$ are samples from $\\pi ^n$ .", "It is easy to see that $(Z,Y)$ is a coupling between $\\nu _{n+1}$ and $\\mu $ ; therefore, by definition of the Wasserstein distance between $\\nu _{n+1}$ and $\\mu $ it follows that: $W_2^2(\\nu _{n+1},\\mu )\\le \\int \\Vert X-\\gamma \\nabla f_{\\mu , \\nu _n}(X)-Y\\Vert ^2 d\\pi ^n(X,Y)$ By expanding the r.h.s. in eq:optimalupper-bound, the following inequality holds: $W_2^2(\\nu _{n+1},\\mu )\\le W_2^2(\\nu _{n},\\mu ) -2\\gamma \\int \\langle \\nabla f_{\\mu , \\nu _n}(X), X-Y \\rangle d\\pi ^n(X,Y)+ \\gamma ^2D(\\nu _n)$ where $D(\\nu _n) = \\int \\Vert \\nabla f_{\\mu , \\nu _n}(X)\\Vert ^2 d\\nu _n $ .", "By lem:gradflowlambdaversion it holds that: $-2\\gamma \\int \\nabla f_{\\mu , \\nu _n}(X).(X-Y) d\\pi ^n(X,Y)\\le -2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right)$ where $(\\rho ^n_t)_{0\\le t \\le 1}$ is a constant-speed geodesic from $\\nu _n$ to $\\mu $ and $K(\\rho ^n):=\\int _0^1 \\Lambda (\\rho ^n_s,v^n_s)(1-s)ds$ .", "Note that when $K(\\rho ^n)\\ge 0$ this falls back to the convex setting.", "Therefore, the following inequality holds: $W_2^2(\\nu _{n+1},\\mu )\\le W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal {F}}}(\\nu _n)- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) +\\gamma ^2 D(\\nu _n)$ Now we introduce a term involving ${{\\mathcal {F}}}(\\nu _{n+1})$ .", "The above inequality becomes: $W_2^2(\\nu _{n+1},\\mu )\\le & W_2^2(\\nu _{n},\\mu ) - 2\\gamma \\left({{\\mathcal {F}}}(\\nu _{n+1})- {{\\mathcal {F}}}(\\mu ) +K(\\rho ^n)\\right) \\\\&+\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1}))$ It is possible to upper-bound the last two terms on the r.h.s. by a negative quantity when the step-size is small enough.", "This is mainly a consequence of the smoothness of the functional ${{\\mathcal {F}}}$ and the fact that $\\nu 
_{n+1}$ is obtained by following the steepest descent direction of ${{\\mathcal {F}}}$ starting from $\\nu _n$ .", "prop:decreasingfunctional makes this statement more precise and enables us to get the following inequality: $\\gamma ^2 D(\\nu _n) -2\\gamma ({{\\mathcal {F}}}(\\nu _n)-{{\\mathcal {F}}}(\\nu _{n+1}))\\le -\\gamma ^2 (1-3\\gamma L)D(\\nu _n),$ where $L$ is the Lipschitz constant of $\\nabla k$ .", "Combining eq:mainineq2 and eq:decreasingfunctional we finally get: $2\\gamma ({{\\mathcal {F}}}(\\nu _{n+1})-{{\\mathcal {F}}}(\\mu ))+\\gamma ^2(1-3\\gamma L)D(\\nu _n)\\le W_2^2(\\nu _n,\\mu )-W_2^2(\\nu _{n+1},\\mu )-2\\gamma K(\\rho ^n).$ and under the condition $\\gamma \\le 1/(3L)$ we recover the desired result.", "We can now give the proof of th:ratesmmd.", "[Proof of th:ratesmmd] Consider the Lyapunov function $L_j = j \\gamma ({{\\mathcal {F}}}(\\nu _j) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )$ for any iteration $j$ .", "At iteration $j+1$ , we have: $L_{j+1} &= j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _{j+1},\\mu )\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j+1}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j)\\\\&\\le j\\gamma ({{\\mathcal {F}}}(\\nu _{j}) - {{\\mathcal {F}}}(\\mu )) + \\frac{1}{2} W_2^2(\\nu _j,\\mu )-\\gamma K(\\rho ^j) -j\\gamma ^2 (1-\\frac{3}{2} \\gamma L )\\int \\Vert \\nabla f_{\\mu , \\nu _j}(X)\\Vert ^2 d\\nu _j \\\\&\\le L_j - \\gamma K(\\rho ^j).$ where we used prop:evi and prop:decreasingfunctional successively for the first two inequalities.", "We thus get by telescoping summation: $L_n \\le L_0 -\\gamma \\sum _{j = 0}^{n-1} K(\\rho ^j)$ Let us denote by $\\bar{K}$ the average value of $(K(\\rho ^j))_{0\\le j \\le n-1}$ over iterations up to $n$ .", "We can now write the final result: ${{\\mathcal {F}}}(\\nu _{n}) - {{\\mathcal {F}}}(\\mu ) \\le 
\\frac{W_2^2(\\nu _0, \\mu )}{2 \\gamma n} -\\bar{K}$" ], [ "Lojasiewicz-type inequalities", "Given a probability distribution $\\nu $ , the weighted Sobolev semi-norm is defined for all square-integrable functions $f$ in $L_2(\\nu )$ as $ \\Vert f \\Vert _{\\dot{H}(\\nu )} = \\left(\\int \\left\\Vert \\nabla f(x) \\right\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}}$ with the convention $\\Vert f \\Vert _{\\dot{H}(\\nu )} = +\\infty $ if $f$ does not have a square integrable gradient.", "The negative weighted Sobolev distance $ \\Vert . \\Vert _{\\dot{H}^{-1}(\\nu )} $ is then defined on distributions as the dual norm of $ \\Vert .\\Vert _{\\dot{H}(\\nu )} $ .", "For convenience, we recall the definition of $ \\Vert . \\Vert _{\\dot{H}^{-1}(\\nu )} $ : Definition 5 Let $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , with its corresponding weighted Sobolev semi-norm $ \\Vert . \\Vert _{\\dot{H}(\\nu )} $ .", "The weighted negative Sobolev distance $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )}$ between any $p$ and $q$ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is defined as $\\Vert p - q \\Vert _{\\dot{H}^{-1}(\\nu )} = \\sup _{f\\in L_2(\\nu ), \\Vert f \\Vert _{\\dot{H}(\\nu )} \\le 1 } \\left|\\int f(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int f(x)\\mathop {}\\!\\mathrm {d}q(x) \\right|$ with possibly infinite values.", "There are several possible choices for the set of test functions $f$ .", "While it is often required that $f$ vanishes at the boundary (see ), we do not make such a restriction and rather use the definition from .", "We refer to for more discussion on the relationship between different choices for the set of test functions.", "We now provide a proof for prop:lojasiewicz.", "[Proof of prop:lojasiewicz] This proof follows simply from the definition of the negative Sobolev distance.", "Under assump:lipschitzgradientk, the kernel has at most quadratic growth; hence, for any $(\\mu ,\\nu )\\in \\mathcal {P}_2({{\\mathcal {X}}})^2$ , 
$f_{\\mu ,\\nu }\\in L_2(\\nu )$ .", "Consider $g = \\Vert f_{\\mu , \\nu _t}\\Vert ^{-1}_{\\dot{H}(\\nu _t)} f_{\\mu , \\nu _t}$ , then $g\\in L_2(\\nu _t)$ and $\\Vert g \\Vert _{\\dot{H}(\\nu _t)}\\le 1$ .", "Therefore, we directly have: $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop {}\\!\\mathrm {d}\\mu \\right|\\le \\left\\Vert \\nu _t - \\mu \\right\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ Now, recall the definition of $g$ , which implies that $\\left|\\int g \\mathop {}\\!\\mathrm {d}\\nu _t - \\int g \\mathop {}\\!\\mathrm {d}\\mu \\right|= \\left\\Vert \\nabla f_{\\mu , \\nu _t}\\right\\Vert ^{-1}_{L_2(\\nu _t)} \\left|\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu , \\nu _t} \\mathop {}\\!\\mathrm {d}\\mu \\right|.$ Moreover, we have that $\\int f_{\\mu , \\nu _t}\\mathop {}\\!\\mathrm {d}\\nu _t-\\int f_{\\mu ,\\nu _t}\\mathop {}\\!\\mathrm {d}\\mu = \\Vert f_{\\mu , \\nu _t}\\Vert ^2_{{{\\mathcal {H}}}}$ , since $f_{\\mu , \\nu _t}$ is the unnormalised witness function between $\\nu _t$ and $\\mu $ .", "Combining eq:loja1 and eq:loja2 we thus get the desired Lojasiewicz inequality on $f_{\\mu ,\\nu _t}$ : $\\Vert f_{\\mu ,\\nu _t} \\Vert ^2_{\\mathcal {H}} \\le \\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)} \\Vert \\mu -\\nu _t\\Vert _{\\dot{H}^{-1}(\\nu _t)}$ where $\\Vert f_{\\mu ,\\nu _t} \\Vert _{\\dot{H}(\\nu _t)}=\\Vert \\nabla f_{\\mu , \\nu _t} \\Vert _{L_2(\\nu _t)}$ by definition.", "Then, using prop:decaymmd and recalling by assumption that: $\\Vert \\mu - \\nu _t \\Vert ^2_{\\dot{H}^{-1}(\\nu _t)} \\le C$ , we have: $\\dot{{{\\mathcal {F}}}}(\\nu _t) = - \\Vert \\nabla f_{\\mu , \\nu _t} \\Vert ^2_{L_2(\\nu _t)} \\le -\\frac{1}{C}\\Vert f_{\\mu ,\\nu _t} \\Vert ^4_{\\mathcal {H}}= -\\frac{4}{C}{{\\mathcal {F}}}(\\nu _t)^2 $ It is clear that if $\\mathcal {F}(\\nu _0)>0$ then ${{\\mathcal {F}}}(\\nu _t)>0$ at all times by uniqueness of the solution.", "Hence, one can divide by ${{\\mathcal 
{F}}}(\\nu _t)^2$ and integrate the inequality from 0 to some time $t$ .", "The desired inequality is obtained by simple calculations.", "Then, using prop:decreasingfunctional and eq:PLinequality where $\\nu _t$ is replaced by $\\nu _n$ it follows: ${{\\mathcal {F}}}(\\nu _{n+1}) - {{\\mathcal {F}}}(\\nu _n) \\le -\\gamma \\left(1-\\frac{3}{2} L\\gamma \\right)\\Vert \\nabla f_{\\mu ,\\nu _n}\\Vert _{L_2(\\nu _n)}^2 \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right){{\\mathcal {F}}}(\\nu _n)^2.$ Dividing both sides of the inequality by $ {{\\mathcal {F}}}(\\nu _n){{\\mathcal {F}}}(\\nu _{n+1})$ and recalling that ${{\\mathcal {F}}}(\\nu _{n+1})\\le {{\\mathcal {F}}}(\\nu _n)$ it follows directly that: $\\frac{1}{{{\\mathcal {F}}}(\\nu _n)} - \\frac{1}{{{\\mathcal {F}}}(\\nu _{n+1})} \\le -\\frac{4}{C}\\gamma \\left(1-\\frac{3}{2}\\gamma L\\right).$ The proof is concluded by summing over $n$ and rearranging the terms." ], [ "A simple example", "Consider a Gaussian target distribution $\\mu (x) = \\mathcal {N}(a,\\Sigma ) $ and initial distribution $\\nu _0 = \\mathcal {N}(a_0,\\Sigma _0)$ .", "In this case it is sufficient to use a kernel that captures the first and second moments of the distribution.", "We simply consider a kernel of the form $k(x,y)= (x^{\\top }y)^2 + x^{\\top }y$ .", "In this case, it is easy to see by simple computations that the following equation holds: $\\dot{X}_t = - (\\Sigma _t-\\Sigma + a_t a_t^{\\top }-a a^{\\top } )X_t - (a_t-a),\\qquad \\forall t \\ge 0$ where $a_t$ and $\\Sigma _t$ are the mean and covariance matrix of $\\nu _t$ and satisfy the equations: $\\dot{\\Sigma }_t &= - (S_t \\Sigma _t + \\Sigma _t S_t )\\\\\\dot{a}_t &= - S_t a_t -(a_t-a).$ where we introduced $S_t = \\Sigma _t-\\Sigma + a_t a_t^{\\top }-aa^{\\top }$ for simplicity.", "eq:example1mckeanvlassov implies that $\\nu _t$ is in fact a Gaussian distribution since $X_t$ is obtained by summing Gaussian increments.", "The same conclusion can be reached 
by solving the corresponding continuity equation.", "Thus we will only be interested in the behavior of $a_t$ and $\\Sigma _t$ .", "First we can express the squared MMD in terms of those parameters: $MMD^2(\\mu ,\\nu _t) = \\Vert S_t \\Vert ^2 + \\Vert a_t-a \\Vert ^2.$ Since $a_t$ and $\\Sigma _t$ are obtained from the gradient flow of the MMD, it follows that $\\Vert a_t-a \\Vert ^2$ and $\\Vert S_t \\Vert ^2$ remain bounded.", "Moreover, the negative Sobolev distance is obtained by solving a finite-dimensional quadratic problem and can be simply written as: $D(\\mu ,\\nu _t) = tr(Q_t \\Sigma _t Q_t) + \\Vert a_t-a\\Vert ^2$ where $Q_t$ is the unique solution of the Lyapunov equation: $\\Sigma _t Q_t + Q_t \\Sigma _t = \\Sigma _t- \\Sigma + (a_t-a)(a_t-a)^{\\top }:=G_t.$ We first consider the one-dimensional case, for which eq:Lyapounov has a particularly simple solution and provides a closed-form expression for the negative Sobolev distance: $Q_t= \\frac{G_t}{2\\Sigma _t}, \\qquad D(\\mu ,\\nu _t) = \\frac{G_t^2}{4\\Sigma _t} + (a_t-a)^2.$ Recalling eq:example1MMD and that $MMD^2(\\mu ,\\nu _t)$ is bounded at all times by definition of $\\nu _t$ , it follows that both $G_t$ and $a_t-a$ are also bounded.", "Hence, it is easy to see that $D(\\mu ,\\nu _t)$ will remain bounded iff $\\Sigma _t$ remains bounded away from 0.", "This analysis generalizes to higher dimensions using , which provides an expression for $Q_t$ in terms of $G_t$ and the singular value decomposition of $\\Sigma _t = U_t D_t U_t^{\\top }$ : $Q_t = U_t \\left( \\left(\\frac{1}{(D_t)_i + (D_t)_j }\\right)\\odot U_t^{\\top } G_t U_t\\right) U_t^{\\top }.$ Here, $\\odot $ denotes the Hadamard product of matrices.", "It is easy to see from this expression that $D(\\mu ,\\nu _t)$ will be bounded if all singular values $((D_t)_i)_{1\\le i \\le d}$ of $\\Sigma _t$ remain bounded away from 0."
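The one-dimensional picture above is easy to simulate. The following sketch (our own illustration; the values $a_0 = -1$, $\\Sigma _0 = 2$, $a = 1$, $\\Sigma = 1$ are arbitrary choices) integrates the mean/covariance ODEs with an explicit Euler step and tracks both $MMD^2(\\mu ,\\nu _t) = S_t^2 + (a_t-a)^2$ and $D(\\mu ,\\nu _t) = G_t^2/(4\\Sigma _t) + (a_t-a)^2$:

```python
# One-dimensional Gaussian example: explicit Euler integration of
#   dot a_t     = -S_t a_t - (a_t - a)
#   dot Sigma_t = -2 S_t Sigma_t
# with S_t = Sigma_t - Sigma + a_t^2 - a^2 and G_t = Sigma_t - Sigma + (a_t - a)^2.
a_star, Sigma_star = 1.0, 1.0      # target mean and variance (arbitrary choice)
a_t, Sigma_t = -1.0, 2.0           # initial mean and variance (arbitrary choice)

dt, n_steps = 1e-3, 20_000
mmd2_vals, sobolev_vals = [], []
for _ in range(n_steps):
    S_t = Sigma_t - Sigma_star + a_t ** 2 - a_star ** 2
    G_t = Sigma_t - Sigma_star + (a_t - a_star) ** 2
    mmd2_vals.append(S_t ** 2 + (a_t - a_star) ** 2)                     # MMD^2(mu, nu_t)
    sobolev_vals.append(G_t ** 2 / (4 * Sigma_t) + (a_t - a_star) ** 2)  # D(mu, nu_t)
    a_t, Sigma_t = (a_t + dt * (-S_t * a_t - (a_t - a_star)),
                    Sigma_t + dt * (-2 * S_t * Sigma_t))
```

For this particular initialization $\\Sigma _t$ stays bounded away from 0, so $D(\\mu ,\\nu _t)$ remains bounded along the flow and $MMD^2(\\mu ,\\nu _t)$ is driven towards 0.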
], [ "Lojasiewicz-type inequalities for ${{\\mathcal {F}}}$ under different metrics", "The Wasserstein gradient flow of ${{\\mathcal {F}}}$ can be seen as the continuous-time limit of the so-called minimizing movement scheme .", "Such a proximal scheme is defined using an initial distribution $\\nu _0$ , a step-size $\\tau $ , and an iterative update equation: $\\nu _{n+1} \\in \\arg \\min _{\\nu } {{\\mathcal {F}}}(\\nu ) + \\frac{1}{2\\tau } W_2^2(\\nu ,\\nu _n).$ In , it is shown that the continuity equation $\\partial _t \\nu _t = div(\\nu _t \\nabla f_{\\mu ,\\nu _t})$ can be obtained as the limit when $\\tau \\rightarrow 0$ of eq:minimizingmovementscheme using suitable interpolations between the elements $\\nu _n$ .", "In , a different transport equation that includes a birth-death term is considered: $\\partial _t \\nu _t = \\beta div(\\nu _t \\nabla f_{\\mu ,\\nu _t}) + \\alpha (f_{\\mu ,\\nu _t} - \\int f_{\\mu ,\\nu _t}(x)\\mathop {}\\!\\mathrm {d}\\nu _t(x) )\\nu _t$ When $\\beta =0$ and $\\alpha =1$ , it is shown formally in that the above dynamics corresponds to the limit of a proximal scheme using the KL instead of the Wasserstein distance.", "For general $\\beta $ and $\\alpha $ , eq:birthdeath corresponds to the limit of a different proximal scheme where $W_2^2(\\nu ,\\nu _n)$ is replaced by the Wasserstein-Fisher-Rao distance $d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ (see , , ).", "$d^2_{\\alpha ,\\beta }(\\nu ,\\nu _n)$ is an interpolation between the squared Wasserstein distance ($\\beta =1$ and $\\alpha =0$ ) and the squared Fisher-Rao distance as defined in ($\\beta =0$ and $\\alpha = 1$ ).", "Such a scheme is consistent with the one proposed in , which uses the $KL$ .", "In fact, as we will show later, both the $KL$ and the Fisher-Rao distance have the same local behavior; therefore both proximal schemes are expected to be equivalent in the limit when $\\tau \\rightarrow 0$ .", "Under eq:birthdeath, the time evolution of ${{\\mathcal {F}}}$ is 
given by: $\\dot{{{\\mathcal {F}}}}(\\nu _t) = -\\beta \\int \\Vert \\nabla f_{\\mu ,\\nu _t}\\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu _t(x) -\\alpha \\int \\left|f_{\\mu ,\\nu _t}(x)-\\int f_{\\mu ,\\nu _t}(x^{\\prime })\\mathop {}\\!\\mathrm {d}\\nu _t(x^{\\prime })\\right|^2\\mathop {}\\!\\mathrm {d}\\nu _t(x)$ We would like to apply the same approach as in sec:Lojasiewiczinequality to provide a condition on the convergence of eq:birthdeath.", "Hence we first introduce an analogue of the negative Sobolev distance in def:negsobolev by duality: $D_{\\nu }(p,q) =\\sup _{\\begin{array}{c}g\\in L_2(\\nu )\\\\ \\beta \\Vert \\nabla g \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert g- \\bar{g} \\Vert ^2_{L_2(\\nu )} \\le 1 \\end{array}} \\left|\\int g(x)\\mathop {}\\!\\mathrm {d}p(x) - \\int g(x) \\mathop {}\\!\\mathrm {d}q(x)\\right|$ where $\\bar{g}$ is simply the expectation of $g$ under $\\nu $ .", "Such a quantity defines a distance, since it is the dual of a semi-norm.", "Now using the particular structure of the MMD, we recall that $f_{\\mu ,\\nu }\\in L_2(\\nu )$ and that $\\beta \\Vert \\nabla f \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f- \\bar{f} \\Vert ^2_{L_2(\\nu )}<\\infty $ .", "Hence for a particular $g$ of the form: $g = \\frac{f_{\\mu ,\\nu }}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} \\right)^\\frac{1}{2}}$ the following inequality holds: $D_{\\nu }(\\mu ,\\nu ) \\ge \\frac{\\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|}{\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right)^{\\frac{1}{2}} }.$ But since $f_{\\mu ,\\nu }$ is the unnormalised witness function between $\\mu $ and $\\nu $ we have that $2{{\\mathcal {F}}}(\\nu ) = \\left|\\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm 
{d}\\nu (x) - \\int f_{\\mu ,\\nu } \\mathop {}\\!\\mathrm {d}\\mu (x)\\right|$ .", "Hence one can write that: $D^2_{\\nu }(\\mu ,\\nu )\\left(\\beta \\Vert \\nabla f_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )} +\\alpha \\Vert f_{\\mu ,\\nu }- \\bar{f}_{\\mu ,\\nu } \\Vert ^2_{L_2(\\nu )}\\right) \\ge 4{{\\mathcal {F}}}^2(\\nu )$ Now provided that $D^2_{\\nu }(\\mu ,\\nu _t)$ remains bounded at all times $t$ by some constant $C>0$ , one can easily deduce a rate of convergence for ${{\\mathcal {F}}}(\\nu _t)$ just as in prop:lojasiewicz.", "In fact, in the case when $\\beta = 1$ and $\\alpha =0$ one recovers prop:lojasiewicz.", "Another interesting case is when $\\beta =0$ and $\\alpha =1$ .", "In this case, $D_{\\nu }(p,q)$ is defined for $p$ and $q$ such that the difference $p-q$ is absolutely continuous w.r.t. $\\nu $ .", "Moreover, $D_{\\nu }(p,q)$ has the simple expression: $D^2_{\\nu }(p,q) = \\int \\left(\\frac{p-q }{\\nu }(x)\\right)^2 \\mathop {}\\!\\mathrm {d}\\nu (x)$ where $\\frac{ p-q }{ \\nu }$ denotes the Radon-Nikodym density of $p-q$ w.r.t. $\\nu $ .", "More importantly, $D_{\\nu }(\\mu ,\\nu )$ is exactly equal to $\\chi ^2(\\mu \\Vert \\nu )^{\\frac{1}{2}}$ .", "As we will show now, $(\\chi ^2)^{\\frac{1}{2}}$ turns out to be a linearization of $\\sqrt{2} KL^{\\frac{1}{2}}$ and the Fisher-Rao distance."
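This linearization claim can be sanity-checked numerically before the proof: for strictly positive discrete distributions, $\\frac{1}{\\epsilon }\\left[2 KL(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ))\\right]^{\\frac{1}{2}}$ should approach $\\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^{\\frac{1}{2}}$ as $\\epsilon \\rightarrow 0$. A minimal sketch (our own illustration, on an arbitrary random support of 10 atoms):

```python
import numpy as np

rng = np.random.default_rng(0)
nu = rng.random(10);   nu /= nu.sum()      # reference distribution nu
nu_p = rng.random(10); nu_p /= nu_p.sum()  # perturbation direction nu'

chi = float(np.sum((nu_p - nu) ** 2 / nu)) ** 0.5   # chi^2(nu' || nu)^{1/2}

def kl(p, q):
    """KL(p || q) for strictly positive discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# (1/eps) * sqrt(2 KL(nu || nu + eps (nu' - nu))) should tend to chi as eps -> 0
ratios = [np.sqrt(2 * kl(nu, nu + eps * (nu_p - nu))) / eps
          for eps in (1e-1, 1e-2, 1e-3)]
```

The successive `ratios` approach `chi`, matching the expansion $G(\\epsilon ) = \\frac{1}{2}\\chi ^2(\\nu ^{\\prime }\\Vert \\nu )\\epsilon ^2 + o(\\epsilon ^2)$ proved below.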
], [ "Linearization of the KL and the Fisher-Rao distance.", "We first show the result for the KL.", "Given a probability distribution $\\nu ^{\\prime }$ that is absolutely continuous w.r.t. $\\nu $ and for $0<\\epsilon < 1$ , denote by $G(\\epsilon ) := KL(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) )$ .", "It can be shown that $G(\\epsilon ) = \\frac{1}{2}\\chi ^2(\\nu ^{\\prime }\\Vert \\nu )\\epsilon ^2 +o(\\epsilon ^2)$ .", "To see this, one needs to perform a second-order Taylor expansion of $G(\\epsilon )$ at $\\epsilon =0$ .", "Exchanging the derivatives and the integral, $\\dot{G}(\\epsilon )$ and $\\ddot{G}(\\epsilon )$ are both given by: $\\dot{G}(\\epsilon ) = -\\int \\frac{\\nu ^{\\prime }-\\nu }{\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )}\\mathop {}\\!\\mathrm {d}\\nu \\\\\\ddot{G}(\\epsilon ) = \\int \\frac{(\\nu -\\nu ^{\\prime })^2}{(\\nu +\\epsilon (\\nu ^{\\prime }-\\nu ))^2} \\mathop {}\\!\\mathrm {d}\\nu $ Hence, we have for $\\epsilon =0$ : $\\dot{G}(0) = 0$ and $\\ddot{G}(0) = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )$ .", "Therefore, it follows: $G(\\epsilon ) =\\frac{1}{2} \\chi ^2(\\nu ^{\\prime }\\Vert \\nu ) \\epsilon ^2 + o(\\epsilon ^2)$ , which means that $\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }\\left[2KL\\left(\\nu \\Vert \\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) \\right) \\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$ The same approach can be used for the Fisher-Rao distance $d_{0,1}(\\nu ,\\nu ^{\\prime })$ .", "From we have that: $d^2_{0,1}(\\nu ,\\nu ^{\\prime }) = 2\\int (\\sqrt{\\nu (x)}-\\sqrt{\\nu ^{\\prime }(x)})^2\\mathop {}\\!\\mathrm {d}x$ where $\\nu $ and $\\nu ^{\\prime }$ are assumed to have a density w.r.t. Lebesgue measure.", "Using the exact same approach as for the KL, one easily shows that $ \\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon }\\left[2d^2_{0,1}\\left(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu ) \\right) \\right]^{\\frac{1}{2}} = \\chi ^2(\\nu ^{\\prime }\\Vert \\nu )^\\frac{1}{2}.$" ], [ 
"Linearization of the $W_2$ .", "Similarly, it can be shown that the Negative weighted Sobolev distance is a linearization of the $W_2$ under suitable conditions.", "We recall here the result which relates the two quantities: Theorem 17 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t. the Lebesgue measure, and let $h\\in L^{\\infty }({{\\mathcal {X}}})$ with $\\int h(x)\\mathop {}\\!\\mathrm {d}\\nu (x)=0$ .", "Then $\\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h )\\nu ).$ thm:villani implies that for any probability distribution $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "$\\nu $ one has: $\\Vert \\nu ^{\\prime }-\\nu \\Vert _{\\dot{H}^{-1}(\\nu )}\\le \\lim \\inf _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )).$ To get the converse inequality, one needs to assume that the support of $\\nu $ is ${{\\mathcal {X}}}$ .", "prop:conversesobolevwasserstein provides such an inequality and uses techniques from .", "Proposition 18 Let $\\nu \\in \\mathcal {P}({{\\mathcal {X}}})$ be a probability measure with finite second moment, absolutely continuous w.r.t. the Lebesgue measure with support equal to ${{\\mathcal {X}}}$ , and let $h\\in L^{\\infty }({{\\mathcal {X}}})$ with $\\int h(x)\\mathop {}\\!\\mathrm {d}\\nu (x)=0$ and $1+h\\ge 0$ .", "Then $\\lim \\sup _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h )\\nu )\\le \\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}$ Consider the elliptic equation: $\\nu h + div(\\nu \\nabla F) = 0$ with Neumann boundary condition on $\\partial {{\\mathcal {X}}}$ .", "This equation admits a unique solution $F$ in $\\dot{H}(\\nu )$ up to a constant since $\\nu $ is supported on all of ${{\\mathcal {X}}}$ (see ).", "Moreover, we have that $ \\int F(x)h(x)\\mathop {}\\!\\mathrm {d}\\nu (x) = \\int \\Vert \\nabla 
F(x) \\Vert ^2 \\mathop {}\\!\\mathrm {d}\\nu (x)$ which implies that $\\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )} \\ge \\Vert F \\Vert _{\\dot{H}(\\nu )}$ .", "Now consider the path: $s_u = (1 + u\\epsilon h)\\nu $ for $u\\in [0,1]$ .", "$s_u$ is a probability distribution for all $u\\in [0,1]$ with $s_0= \\nu $ and $s_1 = (1+\\epsilon h)\\nu $ .", "It is easy to see that $s_u$ satisfies the continuity equation: $\\partial _u s_u +div(s_u V_u )=0$ with $V_u = \\frac{\\epsilon \\nabla F}{1+u\\epsilon h}$ .", "Indeed, for any smooth test function $f$ one has: $\\frac{\\mathop {}\\!\\mathrm {d}}{\\mathop {}\\!\\mathrm {d}u}\\int f(x)\\mathop {}\\!\\mathrm {d}s_u(x) = \\epsilon \\int f(x)h(x)\\mathop {}\\!\\mathrm {d}\\nu (x) = \\epsilon \\int \\nabla f(x).\\nabla F(x) \\mathop {}\\!\\mathrm {d}\\nu (x) = \\int \\nabla f(x).V_u(x)\\mathop {}\\!\\mathrm {d}s_u(x).$ We used the definition of $F$ for the second equality and that $\\nu $ admits a density w.r.t.", "$s_u$ provided that $\\epsilon $ is small enough.", "This density is given by $1/(1+u\\epsilon h)$ and is positive and bounded when $\\epsilon \\le \\frac{1}{2\\Vert h \\Vert _{\\infty } }$ .", "Now, using the Benamou-Brenier formula for $W_2(\\nu ,(1+\\epsilon h)\\nu )$ one has in particular that: $W_2(\\nu ,(1+\\epsilon h)\\nu )\\le \\int \\Vert V_u \\Vert _{L^2(s_u)} \\mathop {}\\!\\mathrm {d}u$ Using the expressions of $V_u$ and $s_u$ , one gets by simple computation: $W_2(\\nu ,(1+\\epsilon h)\\nu )\\le & \\epsilon \\int \\left(\\int \\frac{\\Vert \\nabla F(x) \\Vert ^2}{1-u\\epsilon + u\\epsilon (h+1)} \\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\mathop {}\\!\\mathrm {d}u\\\\&\\le \\epsilon \\left( \\int \\Vert \\nabla F(x) \\Vert ^2\\mathop {}\\!\\mathrm {d}\\nu (x) \\right)^{\\frac{1}{2}} \\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u.$ Finally, $\\int _0^1 (1-u\\epsilon )^{-\\frac{1}{2}}\\mathop {}\\!\\mathrm {d}u = \\frac{2}{\\epsilon }(1-\\sqrt{1 - \\epsilon }) \\rightarrow 1$ when $\\epsilon \\rightarrow 0$ , hence: $\\lim \\sup _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,(1+\\epsilon h)\\nu ) \\le \\Vert F\\Vert _{\\dot{H}(\\nu )}\\le \\Vert h \\Vert _{\\dot{H}^{-1}(\\nu )}.$ thm:villani and prop:conversesobolevwasserstein allow us to conclude that $\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\epsilon } W_2(\\nu ,\\nu +\\epsilon (\\nu ^{\\prime }-\\nu )) = \\Vert \\nu - \\nu ^{\\prime } \\Vert _{\\dot{H}^{-1}(\\nu )}$ for any $\\nu ^{\\prime }$ that has a bounded density w.r.t.", "$\\nu $ .", "By analogy, one could wonder if $D$ is also a linearization of the Wasserstein-Fisher-Rao distance.", "We leave this question for future work." ], [ "Noisy Gradient flow of the MMD", "[Proof of thm:convergencenoisygradient] To simplify notation, we write $\\mathcal {D}_{\\beta _n}(\\nu _n) = \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n \\mathop {}\\!\\mathrm {d}u $ where $V := \\nabla f_{\\mu ,\\nu _n}$ and $g$ is the density of a standard Gaussian.", "The symbol $\\otimes $ denotes the product of two independent probability distributions.", "Recall that a sample $x_{n+1}$ from $\\nu _{n+1}$ is obtained using $x_{n+1} = x_n - \\gamma V(x_n+ \\beta _n u_n)$ where $x_n$ is a sample from $\\nu _n$ and $u_n$ is a sample from a standard Gaussian distribution that is independent of $x_n$ .", "Moreover, by assumption $\\beta _n$ is a non-negative scalar satisfying: $8\\lambda ^2\\beta _n^2 {{\\mathcal {F}}}(\\nu _n) \\le \\mathcal {D}_{\\beta _n}(\\nu _n)$ Consider now the map $(x,u)\\mapsto s_t(x)= x - \\gamma tV(x+\\beta _n u)$ for $0\\le t\\le 1$ , then $\\nu _{n+1}$ is obtained as a push-forward of $\\nu _n\\otimes g$ by $s_1$ : $\\nu _{n+1} = (s_1)_{\\#}(\\nu _n\\otimes g)$ .", "Moreover, the curve $\\rho _t = (s_t)_{\\#}(\\nu _n\\otimes g)$ is a path from $\\nu _n$ to $\\nu _{n+1}$ .", "We know by prop:gradwitnessfunction that $\\nabla f_{\\mu ,\\nu _n}$ is $2L$ -Lipschitz, thus using $\\phi (x,u) = -\\gamma 
V(x+\\beta _n u)$ , $\\psi (x,u) = x$ and $q = \\nu _n\\otimes g $ in lem:derivativemmdaugmented it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable in $t$ with: $\\dot{{{\\mathcal {F}}}}(\\rho _t)=\\int \\nabla f_{\\mu ,\\rho _t}(s_t(x)).(-\\gamma V(x+\\beta _n u))g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ Moreover, $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ is given by $\\dot{{{\\mathcal {F}}}}(\\rho _0)= -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u$ and the following estimate holds: $\\vert \\dot{{{\\mathcal {F}}}} (\\rho _t) -\\dot{{{\\mathcal {F}}}}(\\rho _0)\\vert \\le 3\\gamma ^2 L t \\int \\Vert V(x+\\beta _n u) \\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u = 3\\gamma ^2 Lt \\mathcal {D}_{\\beta _n}(\\nu _n).$ Using the absolute continuity of ${{\\mathcal {F}}}(\\rho _t)$ , one has $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n})=\\dot{{{\\mathcal {F}}}}(\\rho _0)+ \\int _0^1 \\dot{{{\\mathcal {F}}}} (\\rho _t) - \\dot{{{\\mathcal {F}}}} (\\rho _0) \\mathop {}\\!\\mathrm {d}t $ .", "Combining with eq:estimategradient and using the expression of $\\dot{{{\\mathcal {F}}}}(\\rho _0)$ , it follows that: $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n})\\le -\\gamma \\int V(x).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u + \\frac{3}{2}\\gamma ^2L \\mathcal {D}_{\\beta _n}(\\nu _n).$ Adding and subtracting $\\gamma \\mathcal {D}_{\\beta _n}(\\nu _n)$ in eq:taylorexpansion it follows directly that: $\\begin{split}\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n)\\\\&+ \\gamma \\int (V(x+\\beta _n u) -V(x)).V(x+\\beta _n u) g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u \\end{split}$ We shall control now the last term in eq:penultimate.", "Recall now that for all $1\\le i\\le d$ , $ V_i(x) = 
\\partial _i f_{\\mu ,\\nu _n}(x) = \\langle f_{\\mu ,\\nu _n} , \\partial _i k(x,\\cdot )\\rangle $ where we used the reproducing property for the derivatives of $f_{\\mu ,\\nu _n}$ in ${{\\mathcal {H}}}$ (see sec:rkhs).", "Therefore, it follows by Cauchy-Schwarz in ${{\\mathcal {H}}}$ and using assump:Lipschitzgradrkhs: $\\Vert V(x+\\beta _n u) -V(x)\\Vert ^2&\\le \\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2 \\left( \\sum _{i=1}^{d}\\Vert \\partial _i k(x+\\beta _n u,\\cdot )-\\partial _i k(x,\\cdot )\\Vert ^2_{\\mathcal {H}}\\right)\\\\&\\le \\lambda ^2\\beta _n^2\\Vert f_{\\mu ,\\nu _n}\\Vert _{\\mathcal {H}}^2\\Vert u \\Vert ^2$ for all $ x,u \\in {{\\mathcal {X}}}$ .", "Now integrating both sides w.r.t.", "$\\nu _n$ and $g$ and recalling that $g$ is a standard Gaussian, we have: $\\int \\Vert V(x+\\beta _n u) -V(x)\\Vert ^2 g(u)\\mathop {}\\!\\mathrm {d}\\nu _n(x)\\mathop {}\\!\\mathrm {d}u\\le \\lambda ^2\\beta ^2_n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2$ Getting back to eq:penultimate and applying Cauchy-Schwarz in $L_2(\\nu _n\\otimes g)$ it follows: $\\mathcal {F}(\\nu _{n+1})-\\mathcal {F}(\\nu _{n} )\\le & -\\gamma (1-\\frac{3}{2}\\gamma L )\\mathcal {D}_{\\beta _n}(\\nu _n) +\\gamma \\lambda \\beta _n\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}\\mathcal {D}^{\\frac{1}{2}}_{\\beta _n}(\\nu _n)$ It remains to notice that $\\Vert f_{\\mu ,\\nu _n} \\Vert _{\\mathcal {H}}^2 = 2{{\\mathcal {F}}}(\\nu _n)$ and that $\\beta _n$ satisfies eq:controlnoiselevelbis to get: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le -\\frac{\\gamma }{2}(1-\\frac{3}{2}\\gamma L)\\mathcal {D}_{\\beta _n}(\\nu _n).$ We now introduce $\\Gamma = 4\\gamma (1-\\frac{3}{2}\\gamma L)\\lambda ^2$ to simplify notation and prove the second inequality.", "Using eq:controlnoiselevelbis again in the above inequality we directly have: ${{\\mathcal {F}}}(\\nu _{n+1}) -{{\\mathcal {F}}}(\\nu _n) \\le - \\Gamma \\beta _n^2 {{\\mathcal {F}}}(\\nu _n)$ .", "One 
can already deduce that $\\Gamma \\beta _n^2$ is necessarily smaller than 1.", "Hence, taking ${{\\mathcal {F}}}(\\nu _n)$ to the right-hand side and iterating over $n$ it follows that: ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)\\prod _{i=0}^{n-1}(1- \\Gamma \\beta _i^2)$ Simply using that $1-\\Gamma \\beta _i^2\\le e^{-\\Gamma \\beta _i^2}$ leads to the desired upper bound ${{\\mathcal {F}}}(\\nu _{n}) \\le {{\\mathcal {F}}}(\\nu _0)e^{-\\Gamma \\sum _{i=0}^{n-1} \\beta _i^2}$ ." ], [ "Sample-based approximate scheme", "[Proof of prop:convergenceeulermaruyama] Let $(u_{n}^{i})_{1\\le i\\le N}$ be i.i.d. standard Gaussian variables and $(x_{0}^{i})_{1\\le i\\le N}$ i.i.d.", "samples from $\\nu _0$ .", "We consider $(x_n^i)_{1\\le i\\le N}$ the particles obtained using the approximate scheme eq:eulermaruyama: $x_{n+1}^{i}=x_{n}^{i}-\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})$ starting from $(x_{0}^{i})_{1\\le i\\le N}$ , where $\\hat{\\nu }_{n}$ is the empirical distribution of these $N$ interacting particles.", "Similarly, we denote by $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ the particles obtained using the exact update equation eq:discretizednoisyflow: $\\bar{x}_{n+1}^{i}=\\bar{x}_{n}^{i}-\\gamma \\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})$ also starting from $(x_{0}^{i})_{1\\le i\\le N}$ .", "By definition of $\\nu _n$ we have that $(\\bar{x}_{n}^{i})_{1\\le i\\le N}$ are i.i.d.", "samples drawn from $\\nu _n$ with empirical distribution denoted by $\\bar{\\nu }_{n}$ .", "We will control the expected error $c_{n}$ defined as $c^2_{n}= \\frac{1}{N}\\sum _{i=1}^N \\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]$ .", "By recursion, we have: $c_{n+1} = & \\frac{1}{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}-\\gamma \\left(\\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu 
_{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n} +\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}} \\\\& +\\frac{\\gamma }{\\sqrt{N}}\\left(\\sum _{i=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\hat{\\nu }_{n}}\\left(x_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)-\\nabla f_{\\mu ,\\bar{\\nu }_{n}}\\left(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}\\right)\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\\\le & c_{n}+2\\gamma L\\left(c_{n}+\\mathbb {E}\\left[W_{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})^{2}\\right]^{\\frac{1}{2}}\\right)+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {E}_{i}\\right]^{\\frac{1}{2}}+\\frac{\\gamma }{\\sqrt{N}}\\left[\\sum _{i=1}^{N}\\mathcal {G}_{i}\\right]^{\\frac{1}{2}}$ where the second line follows from the triangle inequality and the last line is obtained recalling that $\\nabla f_{\\mu ,\\nu }(x)$ is jointly $2L$ -Lipschitz in $x$ and $\\nu $ by prop:gradwitnessfunction.", "Here, $\\mathcal {E}_{i}$ represents the error between $\\bar{\\nu }_n$ and $\\nu _n$ while $\\mathcal {G}_{i}$ represents the error between $\\hat{\\mu }$ and $\\mu $ , and they are given by: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\bar{\\nu }_{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\nu _{n}}(\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]\\\\\\mathcal {G}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\hat{\\mu },\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})-\\nabla f_{\\mu ,\\hat{\\nu }_{n}}(x_{n}^{i}+\\beta _{n}u_{n}^{i})\\right\\Vert ^{2}\\right]$ We will first control the error term $\\mathcal {E}_i$ .", "To simplify notation, we write $y^{i}=\\bar{x}_{n}^{i}+\\beta _{n}u_{n}^{i}$ .", "Recalling the expression of $\\nabla f_{\\mu ,\\nu }$ from prop:gradwitnessfunction 
and expanding the squared norm in $\\mathcal {E}_i$ , it follows: $\\mathcal {E}_{i} & =\\mathbb {E}\\left[\\left\\Vert \\frac{1}{N}\\sum _{j=1}^{N}\\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& =\\frac{1}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\nabla k(y^{i},\\bar{x}_{n}^{j})-\\int \\nabla k(y^{i},x)d\\nu _{n}(x)\\right\\Vert ^{2}\\right]\\\\& \\le \\frac{L^{2}}{N^{2}}\\sum _{j=1}^{N}\\mathbb {E}\\left[\\left\\Vert \\bar{x}_{n}^{j}-\\int xd\\nu _{n}(x)\\right\\Vert ^{2}\\right]=\\frac{L^{2}}{N}var(\\nu _{n}).$ The second line is obtained using the independence of the auxiliary samples $(\\bar{x}^{i}_n)_{1\\le i\\le N}$ and recalling that they are distributed according to $\\nu _{n}$ .", "The last line uses the fact that $\\nabla k(y,x)$ is $L$ -Lipschitz in $x$ by assump:lipschitzgradientk.", "To control the variance $var(\\nu _n)$ we use lem:Controlvariance which implies that $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}$ for all $n\\le \\frac{T}{\\gamma }$ .", "For $\\mathcal {G}_{i}$ , it is sufficient to expand again the squared norm and recall that $\\nabla k(y,x)$ is $L$ -Lipschitz in $x$ which then implies that $\\mathcal {G}_{i}\\le \\frac{L^{2}}{M}var(\\mu )$ .", "Finally, one can observe that $\\mathbb {E}[W_{2}^{2}(\\hat{\\nu }_{n},\\bar{\\nu }_{n})]\\le \\frac{1}{N}\\sum _{i=1}^{N}\\mathbb {E}\\left[\\Vert x_{n}^{i}-\\bar{x}_{n}^{i}\\Vert ^{2}\\right]=c_{n}^{2}$ , hence $c_n$ satisfies the recursion: $c_{n+1}\\le (1+4\\gamma L)c_{n}+\\frac{\\gamma L}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{\\gamma L}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}.$ Using lem:Discrete-Gronwall-lemma to solve the above inequality, it follows that: $c_{n}\\le \\frac{1}{4}\\left(\\frac{1}{\\sqrt{N}}(B+var(\\nu _{0})^{\\frac{1}{2}})e^{2LT}+\\frac{1}{\\sqrt{M}}var(\\mu )^{\\frac{1}{2}}\\right)(e^{4LT}-1)$ Lemma 19 Consider an initial distribution $\\nu _{0}$ with finite variance, a sequence 
$(\\beta _n)_{ n \\ge 0}$ of non-negative numbers bounded by $B<\\infty $ and define the sequence of probability distributions $\\nu _n$ of the process eq:discretizednoisyflow: $x_{n+1}=x_{n}-\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x_{n}+\\beta _{n}u_{n}) \\qquad x_0 \\sim \\nu _0$ where $(u_n)_{n\\ge 0}$ are standard Gaussian variables.", "Under assump:lipschitzgradientk, the variance of $\\nu _{n}$ satisfies for all $T>0$ and $n\\le \\frac{T}{\\gamma }$ the following inequality: $var(\\nu _{n})^{\\frac{1}{2}}\\le (B+var(\\nu _{0})^{\\frac{1}{2}})e^{2TL}$ Let $g$ be the density of a standard Gaussian.", "Denote by $(x,u)$ and $(x^{\\prime },u^{\\prime })$ two independent samples from $\\nu _n\\otimes g$ .", "The idea is to find a recursion from $var(\\nu _{n})$ to $var(\\nu _{n+1})$ : $var(\\nu _{n+1})^{\\frac{1}{2}}& =\\left(\\mathbb {E}\\left[\\left\\Vert x -\\mathbb {E}\\left[x^{\\prime }\\right] -\\gamma \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)+\\gamma \\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime }+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^2\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+\\gamma \\left(\\mathbb {E}\\left[\\left\\Vert \\nabla f_{\\mu ,\\nu _{n}}(x+\\beta _{n}u)-\\mathbb {E}\\left[\\nabla f_{\\mu ,\\nu _{n}}(x^{\\prime }+\\beta _{n}u^{\\prime })\\right]\\right\\Vert ^{2}\\right]\\right)^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L\\mathbb {E}_{\\begin{array}{c}x,x^{\\prime }\\sim \\nu _{n}\\\\ u,u^{\\prime }\\sim g\\end{array}}\\left[\\left\\Vert x+\\beta _{n}u-x^{\\prime }-\\beta _{n}u^{\\prime }\\right\\Vert ^{2}\\right]^{\\frac{1}{2}}\\\\& \\le var(\\nu _{n})^{\\frac{1}{2}}+2\\gamma L(var(\\nu _{n})^{\\frac{1}{2}}+\\beta _{n})$ The second and last lines are obtained using the triangle inequality while the third line uses that $\\nabla f_{\\mu ,\\nu _n}(x)$ is $2L$ -Lipschitz in $x$ by prop:gradwitnessfunction.", "Recalling that $\\beta _{n}$ is bounded by $B$ it is easy to conclude 
using lem:Discrete-Gronwall-lemma." ], [ "Connection with Neural Networks", "In this sub-section we establish a formal connection between the MMD gradient flow defined in eq:continuitymmd and neural network optimization.", "This connection holds in the limit of infinitely many neurons and is based on the formulation in .", "To remain consistent with the rest of the paper, the parameters of a network will be denoted by $x\\in {{\\mathcal {X}}}$ while the inputs and outputs will be denoted by $z$ and $y$ .", "Given a neural network or any parametric function $(z,x)\\mapsto \\psi (z,x)$ with parameter $x \\in {{\\mathcal {X}}}$ and input data $z$ we consider the supervised learning problem: $\\min _{(x_1,...,x_m )\\in {{\\mathcal {X}}}^m} \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p } \\left[ \\left\\Vert y - \\frac{1}{m}\\sum _{i=1}^m\\psi (z,x_i) \\right\\Vert ^2 \\right]$ where $(y,z) \\sim p$ are samples from the data distribution and the regression function is an average of $m$ different networks.", "The formulation in eq:regressionnetwork includes any type of network.", "Indeed, the averaged function can itself be seen as one network with augmented parameters $(x_1,...,x_m)$ and any network can be written as an average of sub-networks with potentially shared weights.", "In the limit $m\\rightarrow \\infty $ , the average can be seen as an expectation over the parameters under some probability distribution $\\nu $ .", "This leads to an expected network $\\Psi (z,\\nu ) = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) $ and the optimization problem in eq:regressionnetwork can be lifted to an optimization problem in $\\mathcal {P}_2({{\\mathcal {X}}})$ , the space of probability distributions: $\\min _{\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})} \\mathcal {L}(\\nu ) := \\frac{1}{2}\\mathbb {E}_{(y,z)\\sim p} \\left[ \\left\\Vert y - \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\nu (x) \\right\\Vert ^2 \\right]$ For convenience, we consider $\\bar{\\mathcal {L}}(\\nu 
)$ the function obtained by subtracting half the variance of $y$ from $\\mathcal {L}(\\nu )$ , i.e.", ": $\\bar{\\mathcal {L}}(\\nu ) = \\mathcal {L}(\\nu ) - var(y)/2 $ .", "When the model is well specified, there exists $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}}) $ such that $\\mathbb {E}_{y\\sim \\mathbb {P}(.|z)}[y] = \\int \\psi (z,x) \\mathop {}\\!\\mathrm {d}\\mu (x)$ .", "In that case, the cost function $\\bar{\\mathcal {L}}$ matches the functional ${{\\mathcal {F}}}$ defined in eq:mmdasfreeenergy for a particular choice of the kernel $k$ .", "More generally, as soon as a global minimizer for eq:liftedregression exists, prop:inequalitymmdloss relates the two losses $\\bar{\\mathcal {L}}$ and $\\mathcal {F}$ .", "Proposition 20 Assuming a global minimizer of eq:liftedregression is achieved by some $\\mu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ , the following inequality holds for any $\\nu \\in \\mathcal {P}_2({{\\mathcal {X}}})$ : $\\left(\\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}} + {{\\mathcal {F}}}^{\\frac{1}{2}}(\\nu )\\right)^2\\ge \\bar{\\mathcal {L}}(\\nu )\\ge \\mathcal {F}(\\nu ) + \\bar{\\mathcal {L}}(\\mu )$ where ${{\\mathcal {F}}}(\\nu )$ is defined by eq:mmdasfreeenergy with a kernel $k$ constructed from the data as an expected product of networks: $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim \\mathbb {P}} \\left[\\psi (z,x)^T\\psi (z,x^{\\prime })\\right]$ Moreover, $\\bar{\\mathcal {L}} = {{\\mathcal {F}}}$ if and only if $\\bar{\\mathcal {L}}(\\mu )=0$ , which means that the model is well-specified.", "The framing eq:inequalitymmdnn implies that optimizing $\\mathcal {F}$ can decrease $\\mathcal {L}$ and vice versa.", "Moreover, in the well-specified case, optimizing $\\mathcal {F}$ is equivalent to optimizing $\\mathcal {L}$ .", "Hence one can use the gradient flow of the MMD defined in eq:continuitymmd to solve eq:liftedregression.", "One particular setting in which eq:liftedregression is well-specified is the student-teacher problem as in .", "In this case, a 
teacher network of the form $\\Psi _T(z,\\mu )$ produces a deterministic output $y = \\Psi _T(z,\\mu )$ given an input $z$ while a student network $\\Psi _S(z,\\nu )$ tries to learn the mapping $z\\mapsto \\Psi _T(z,\\mu )$ by minimizing eq:liftedregression.", "In practice $\\mu $ and $\\nu $ are given as empirical distributions on some particles $\\Xi = (\\xi ^1,...,\\xi ^M)$ and $X=(x^1,...,x^N)$ with $\\mu = \\frac{1}{M} \\sum _{j=1}^M \\delta _{\\xi ^j}$ and $\\nu = \\frac{1}{N} \\sum _{i=1}^N\\delta _{x^i}$ .", "The particles $(x^i)_{1\\le i \\le N}$ are then optimized using gradient descent starting from an initial configuration $(x_0^i)_{1\\le i \\le N}$ .", "This leads to the update equation: $x^i_{n+1} = x^i_n - \\gamma \\mathbb {E}_{z\\sim p }\\left[ \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z,x_n^{i})\\right],$ where $(x_n^{i})_{1\\le i\\le N}$ are the particles at iteration $n$ with empirical distribution $\\nu _n$ .", "Here, the gradient is rescaled by the number of particles $N$ .", "Re-arranging terms and recalling that $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim p}[\\psi (z,x)^T\\psi (z,x^{\\prime })]$ , equation eq:updateequationstudentteacher becomes: $x^i_{n+1} = x^i_n - \\gamma \\nabla f_{\\mu ,\\nu _n}(x_n^i).$ with $\\nabla f_{\\mu ,\\nu _n}(x_n^i) = \\left(\\frac{1}{N}\\sum _{j=1}^N \\nabla _2 k(x_n^{j},x_n^{i})-\\frac{1}{M}\\sum _{j=1}^M \\nabla _2 k(\\xi ^{j},x_n^{i})\\right)$ .", "The above equation is a discretized version of the gradient flow of the MMD defined in eq:continuitymmd.", "Such discretization is obtained from eq:eulermaruyama by setting the noise level $\\beta _n$ to 0.", "Hence, in the limit when $N\\rightarrow \\infty $ and $\\gamma \\rightarrow 0$ , one recovers the gradient flow defined in eq:eulerschemeparticles.", "In general the kernel $k$ is intractable and can be approximated using $n_b$ samples $(z_1,...,z_{n_b})$ from the data 
distribution: $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)^T \\psi (z_b,x^{\\prime })$ .", "This finally leads to an approximate update: $x^i_{n+1} = x^i_n - \\gamma \\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i).$ where $\\nabla \\hat{f}_{\\mu ,\\nu _n}$ is given by: $\\nabla \\hat{f}_{\\mu ,\\nu _n}(x_n^i) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\left(\\frac{1}{N}\\sum _{j=1}^N \\psi (z_b,x_n^{j})-\\frac{1}{M}\\sum _{j=1}^M \\psi (z_b,\\xi ^{j})\\right)\\nabla _{x_n^{i}}\\psi (z_b,x_n^{i}).$ We now provide a proof for prop:inequalitymmdloss: [Proof of prop:inequalitymmdloss] Let $\\Psi (z,\\nu ) = \\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x)$ .", "By eq:kernelNN, we have: $k(x,x^{\\prime }) =\\int _{z}\\psi (z,x)^T\\psi (z,x^{\\prime })\\mathop {}\\!\\mathrm {d}p(z)$ where, with a slight abuse of notation, $p$ also denotes the marginal distribution of $z$ .", "It is easy to see that ${{\\mathcal {F}}}(\\nu ) = \\frac{1}{2} \\int \\Vert \\Psi (z,\\nu ) -\\Psi (z,\\mu ) \\Vert ^2 \\mathop {}\\!\\mathrm {d}p(z) $ .", "Indeed, expanding the square in the l.h.s. and exchanging the order of integration w.r.t.", "$p$ and $(\\mu \\otimes \\nu )$ one gets ${{\\mathcal {F}}}(\\nu )$ .", "Now, introducing $\\Psi (z,\\mu )$ in the expression of $\\mathcal {L}(\\nu )$ , it follows by a simple calculation that: $\\mathcal {L}(\\nu )&= \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )+ \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z)$ where $m(z)$ is the conditional mean of $y$ , i.e.", ": $m(z)=\\int y \\mathop {}\\!\\mathrm {d}p(y|z)$ .", "On the other hand we have that $2\\mathcal {L}(\\mu ) = var(y) + \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z)$ , so that $ \\int \\Vert \\Psi (z,\\mu )-m(z)\\Vert ^2\\mathop {}\\!\\mathrm {d}p(z) = 2\\bar{\\mathcal {L}}(\\mu )$ .", "Hence, using Cauchy-Schwarz for the last term in eq:maineqnn, one gets the upper bound: $\\mathcal {L}(\\nu )\\le \\mathcal {L}(\\mu )+ 
\\mathcal {F}(\\nu ) + 2 \\bar{\\mathcal {L}}(\\mu )^{\\frac{1}{2}}\\mathcal {F}(\\nu )^{\\frac{1}{2}}.$ This in turn gives an upper bound on $\\bar{\\mathcal {L}}(\\nu )$ after subtracting $var(y)/2$ on both sides of the inequality.", "To get the lower bound on $\\bar{\\mathcal {L}}$ one needs to use the global optimality condition of $\\mu $ for $\\mathcal {L}$ from .", "Indeed, for any $0<\\epsilon \\le 1$ it is easy to see that: $\\epsilon ^{-1}( \\mathcal {L}(\\mu +\\epsilon (\\nu -\\mu ))-\\mathcal {L}(\\mu )) = \\int \\left\\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu )\\right\\rangle \\mathop {}\\!\\mathrm {d}p(z) +o(\\epsilon ).$ Taking the limit $\\epsilon \\rightarrow 0$ and recalling that the l.h.s. is always non-negative by optimality of $\\mu $ , it follows that $\\int \\langle \\Psi (z,\\mu )-m(z),\\Psi (z,\\nu )-\\Psi (z,\\mu ) \\rangle \\mathop {}\\!\\mathrm {d}p(z)$ must also be non-negative.", "Therefore, from eq:maineqnn one gets that $\\mathcal {L}(\\nu ) \\ge \\mathcal {L}(\\mu )+ \\mathcal {F}(\\nu )$ .", "The final bound is obtained by subtracting $var(y)/2$ again from both sides of the inequality."
], [ "Student-Teacher networks", "We consider a student-teacher network setting similar to .", "More precisely, using the notation from subsec:trainingneuralnetworks, we denote by $\\Psi (z,\\nu )$ the neural network of the form: $\\Psi (z,\\nu ) = \\int \\psi (z,x)\\mathop {}\\!\\mathrm {d}\\nu (x) $ where $z$ is an input vector in ${{\\mathbb {R}}}^{p}$ and $\\nu $ is a probability distribution over the parameters $x$ .", "Hence $\\Psi $ is an expectation over sub-networks $\\psi (z,x)$ with parameters $x$ .", "Here, we choose $\\psi $ of the form: $\\psi (z,x) = G\\left(b^{1}+W^{1}\\sigma (W^{0}z+b^{0})\\right).$ where $x$ is obtained as the concatenation of the parameters $(b^{1},W^{1},b^{0},W^{0})\\in {{\\mathcal {X}}}$ , $\\sigma $ is the ReLU non-linearity while $G$ is a fixed function and is defined later.", "Note that using $x$ to denote the parameters of a neural network is unusual, however, we prefer to keep a notation which is consistent with the rest of the paper.", "We will only consider the case when $\\nu $ is given by an empirical distribution of $N$ particles $X = (x^{1},...x^{N})$ for some $N\\in \\mathbb {N}$ .", "In that case, we denote by $\\nu _{X}$ such distribution to stress the dependence on the particles $X$ , i.e.", ": $ \\nu := \\nu _{X}= \\frac{1}{N} \\sum _{i=1}^N \\delta _{x^{i}}$ .", "The teacher network $\\Psi _{T}(z,\\nu _{\\Xi })$ is given by $M$ particles $\\Xi = (\\xi _1,...,\\xi _M)$ which are fixed during training and are initially drawn according to a normal distribution $\\mathcal {N}(0,1)$ .", "Similarly, the student network $\\Psi _{S}(z,\\nu _{X})$ has $N$ particles $X = (x^{1},...,x^{N})$ that are initialized according to a normal distribution $\\mathcal {N}(10^{-3},1)$ .", "Here we choose $M=1$ and $N=1000$ .", "The inputs $z$ are drawn from a uniform distribution $\\mathbb {S}$ on the sphere in ${{\\mathbb {R}}}^p$ as in with $p=50$ .", "The number of hidden layers $H$ is set to 3 and the output dimension is 1.", "The 
parameters of the student network are trained to minimize the risk in eq:studentteacherproblem using SGD with mini-batches of size $n_b = 10^2$ and optimal step-size $\\gamma $ selected from: $\\lbrace 10^{-3},10^{-2},10^{-1}\\rbrace $ .", "$\\min _{X} \\mathbb {E}_{z\\sim \\mathbb {S} }\\left[(\\Psi _T(z,\\nu _{\\Xi } )- \\Psi _S(z,\\nu _{X}))^2\\right]$ When $G$ is simply the identity function and no bias is used, one recovers the setting in .", "In that case, the network is partially 1-homogeneous and applies, ensuring global optimality.", "Here, we are interested in the case when global optimality is not guaranteed by the homogeneity structure, hence we choose $G$ to be a Gaussian with fixed bandwidth $\\sigma =2$ .", "As shown in subsec:trainingneuralnetworks, performing gradient descent to minimize eq:studentteacherproblem can be seen as a particle version of the gradient flow of the MMD with a kernel given by $k(x,x^{\\prime }) = \\mathbb {E}_{z\\sim \\mathbb {S}}[\\psi (z,x)\\psi (z,x^{\\prime })]$ and target distribution $\\mu $ given by $\\mu = \\nu _{\\Xi }$ .", "Hence one can use the noise injection algorithm defined in eq:eulermaruyama to train the parameters of the student network.", "Since $k$ is defined through an expectation over the data, it can be approximated using $n_{b}$ data samples $\\lbrace z_{1},...,z_{n_{b}}\\rbrace $ : $\\hat{k}(x,x^{\\prime }) = \\frac{1}{n_b} \\sum _{b=1}^{n_b} \\psi (z_b,x)\\psi (z_b,x^{\\prime }).$ This approximation of the kernel leads to a simple expression for the gradient of the unnormalised witness function between $\\nu _{\\Xi }$ and $\\nu _{X}$ : $\\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X}}(x) = \\frac{1}{n_b}\\sum _{b=1}^{n_b}\\left( \\frac{1}{N}\\sum _{i=1}^N\\psi (z_b , x^i) - \\frac{1}{M}\\sum _{j=1}^M\\psi (z_b,\\xi ^j)\\right)\\nabla _{x}\\psi (z_b,x), \\qquad \\forall x \\in {{\\mathcal {X}}}.$ euclidstudentteacher provides the main steps to train the parameters of the student network using the noisy 
gradient flow of the MMD proposed in eq:eulermaruyama.", "It can be easily implemented using automatic differentiation packages like PyTorch.", "Indeed, one only needs to compute an auxiliary loss function ${{\\mathcal {F}}}_{aux}$ instead of the actual MMD loss ${{\\mathcal {F}}}$ and perform gradient descent using ${{\\mathcal {F}}}_{aux}$ .", "This function is given by: ${{\\mathcal {F}}}_{aux} = \\frac{1}{n_b}\\sum _{i=1}^N\\sum _{b=1}^{n_b} \\left({\\tt NoGrad}\\left(y_S^b\\right) - y_T^b \\right)\\psi (z^b,\\widetilde{x}_n^{i})$ To compute ${{\\mathcal {F}}}_{aux}$ , two forward passes on the student network are required.", "A first forward pass using the current parameter values $X_n = (x_n^1,...,x_n^{N})$ of the student network is used to compute the predictions $y_S^b$ given an input $z^b$ .", "For this forward pass, the gradient w.r.t. the parameters $X_n$ is not used.", "This is enforced formally here by calling the function NoGrad.", "The second forward pass is performed using the noisy parameters $\\widetilde{x}_n^{i} = x_n^i + \\beta _n u_n^{i}$ and requires implementing special layers which can inject noise into the weights.", "This second forward pass will be used to provide a gradient to update the particles using back-propagation.", "Indeed, it is easy to see that $\\nabla _{x_n^{i}} {{\\mathcal {F}}}_{aux}$ gives exactly the gradient $\\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _X}(\\widetilde{x}_n^i)$ used in euclidstudentteacher."
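The auxiliary-loss trick can be checked numerically without any autograd package; the following minimal pure-Python sketch uses an illustrative sub-network $\psi (z,x)=\sin (\langle z,x\rangle )$ (a stand-in for the actual architecture, not the paper's choice), emulates NoGrad by treating $y_S$ as a fixed constant inside ${{\mathcal {F}}}_{aux}$, and verifies by finite differences that the gradient of ${{\mathcal {F}}}_{aux}$ w.r.t. a particle equals $\frac{1}{n_b}\sum _b (y_S^b - y_T^b)\nabla \psi (z_b,\widetilde{x}^i)$:

```python
import math, random

random.seed(0)

def psi(z, x):
    # illustrative sub-network: psi(z, x) = sin(<z, x>)
    return math.sin(sum(zi * xi for zi, xi in zip(z, x)))

def grad_psi(z, x):
    # analytic gradient of psi w.r.t. x: cos(<z, x>) * z
    c = math.cos(sum(zi * xi for zi, xi in zip(z, x)))
    return [c * zi for zi in z]

d, N, M, nb, beta = 3, 4, 2, 5, 0.1
Z = [[random.gauss(0, 1) for _ in range(d)] for _ in range(nb)]   # minibatch inputs
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]    # student particles
Xi = [[random.gauss(0, 1) for _ in range(d)] for _ in range(M)]   # teacher particles
U = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]    # injected noise
Xt = [[x + beta * u for x, u in zip(xi, ui)] for xi, ui in zip(X, U)]  # noisy particles

# first forward pass: student/teacher outputs; gradients through yS are blocked ("NoGrad")
yS = [sum(psi(z, x) for x in X) / N for z in Z]
yT = [sum(psi(z, xi) for xi in Xi) / M for z in Z]

def F_aux(noisy):
    # auxiliary loss: (1/nb) sum_i sum_b (NoGrad(yS_b) - yT_b) psi(z_b, xt_i)
    return sum((yS[b] - yT[b]) * psi(Z[b], xt) for b in range(nb) for xt in noisy) / nb

# analytic gradient w.r.t. noisy particle 0, as in the update equation
g = [sum((yS[b] - yT[b]) * grad_psi(Z[b], Xt[0])[k] for b in range(nb)) / nb
     for k in range(d)]

# central finite-difference check of dF_aux / d(xt_0)
eps = 1e-6
g_fd = []
for k in range(d):
    Xp = [list(x) for x in Xt]; Xp[0][k] += eps
    Xm = [list(x) for x in Xt]; Xm[0][k] -= eps
    g_fd.append((F_aux(Xp) - F_aux(Xm)) / (2 * eps))
print(g, g_fd)
```

The two gradient estimates agree, mirroring the claim that back-propagating through the second (noisy) forward pass alone yields the particle update direction.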
], [ "Learning Gaussians", "fig:experiments illustrates the behavior of the proposed algorithm eq:eulermaruyama in a simple setting, and compares it with the gradient flow of the MMD without noise injection.", "In this setting, the MMD flow fails to converge to the global optimum.", "Indeed, as shown in fig:experiments(right), some of the final samples (in red) obtained using noise-free gradient updates tend to get further away from the target samples (in black).", "Most of the remaining samples collapse to a unique point at the center near the origin.", "This can also be seen from fig:experiments(left) where the training error fails to decrease below $10^{-3}$ .", "On the other hand, adding noise to the gradient seems to lead to global convergence, as seen visually from the samples.", "The training error decreases below $10^{-4}$ and oscillates between $10^{-8}$ and $10^{-4}$ .", "The oscillation is due to the step-size, which remained fixed while the noise was set to 0 starting from iteration 5000.", "It is worth noting that adding noise to the gradient slows the speed of convergence, as one can see from fig:experiments(left).", "This is expected since the algorithm does not follow the path of steepest descent.", "The noise helps in escaping local optima, however, as illustrated here.", "Noisy gradient flow of the MMD [1] Input $N$ , $n_{iter}$ , $\\beta _0$ , $\\gamma $ Output $(x^{i}_{n_{iter}})_{1\\le i\\le N}$ Initialize $N$ particles from initial distribution $\\nu _0$ : $x_{0}^{i}\\mathrel {\\stackrel{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny i.i.d}}}{\\sim }}\\nu _0$ Initialize the noise level: $\\beta =\\beta _0$ $n=0,\\dots , n_{iter}$ Sample $M$ points from the target $\\mu $: $\\lbrace y^1,...,y^M\\rbrace $ .", "Sample $N$ Gaussian vectors: $\\lbrace u_n^{1},...,u_n^N\\rbrace $ $i=1,\\dots ,N$ Compute the noisy values: $\\widetilde{x}_n^{i} = x_n^i+\\beta _n u_n^i$ Evaluate vector field: $\\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(\\widetilde{x}_n^i) = 
\\frac{1}{N}\\sum \\limits _{j=1}^N \\nabla _2 k(x_n^j,\\widetilde{x}_n^{i})-\\frac{1}{M}\\sum \\limits _{m=1}^M \\nabla _2 k(y^m,\\widetilde{x}_n^{i})$ Update the particles: $x_{n+1}^{i} = x_n^i -\\gamma \\nabla f_{\\hat{\\mu },\\hat{\\nu }_n}(\\widetilde{x}_n^i)$ Update the noise level using an update rule $h$: $\\beta _{n+1}=h(\\beta _{n}, n)$ .", "Noisy gradient flow of the MMD for student-teacher learning [1] Input $N$ , $n_{iter}$ , $\\beta _0$ , $\\gamma $ , $n_{b}$ , $\\Xi = (\\xi ^j)_{1\\le j\\le M}$ .", "Output $(x^{i}_{n_{iter}})_{1\\le i\\le N}$ .", "Initialize $N$ particles from initial distribution $\\nu _0$ : $x_{0}^{i}\\mathrel {\\stackrel{\\makebox{[}0pt]{\\mbox{\\normalfont \\tiny i.i.d}}}{\\sim }}\\nu _0$ .", "Initialize the noise level: $\\beta =\\beta _0$ .", "$n=0,...,n_{iter}$ Sample a minibatch of $n_{b}$ data points: $\\lbrace z^1,...,z^{n_{b}}\\rbrace $ .", "$b=1,...,n_{b}$ Compute the teacher's output: $y_{T}^b = \\frac{1}{M}\\sum _{j=1}^M \\psi (z^b,\\xi ^{j})$ .", "Compute the student's output: $y_{S}^b = \\frac{1}{N}\\sum _{i=1}^N \\psi (z^b,x_n^i)$ .", "Sample $N$ Gaussian vectors: $\\lbrace u_n^{1},...,u_n^{N}\\rbrace $ .", "$i=1,...,N$ Compute noisy particles: $\\widetilde{x}_n^{i} = x_n^i +\\beta _n u_n^{i}$ Evaluate vector field: $ \\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X_n}}(\\widetilde{x}_n^{i}) = \\frac{1}{n_{b}}\\sum _{b=1}^{n_{b}} ( y_{S}^b - y_{T}^b ) \\nabla _{x_n^{i}} \\psi (z^b,\\widetilde{x}_n^{i})$ Update particle $i$: $x_{n+1}^{i} = x_{n}^{i} -\\gamma \\nabla \\hat{f}_{\\nu _{\\Xi },\\nu _{X_n}}(\\widetilde{x}_n^{i})$ Update the noise level using an update rule $h$: $\\beta _{n+1}=h(\\beta _{n}, n)$ ."
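A minimal NumPy implementation of the first algorithm (noisy gradient flow of the MMD) for a Gaussian RBF kernel may look as follows. The kernel choice, the $1/(1+n)$ noise schedule $h$, and all constants are illustrative assumptions, and for simplicity the full target sample is reused at every iteration instead of resampling $M$ fresh points.

```python
import numpy as np

def gauss_k(A, B, s=1.0):
    # Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 s^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s ** 2))

def mmd2(X, Y, s=1.0):
    # (biased) squared-MMD estimate between samples X and Y
    return gauss_k(X, X, s).mean() + gauss_k(Y, Y, s).mean() - 2.0 * gauss_k(X, Y, s).mean()

def noisy_mmd_flow(Y, N=50, n_iter=400, beta0=0.5, gamma=0.5, s=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, Y.shape[1]))            # x_0^i i.i.d. from nu_0
    for n in range(n_iter):
        beta = beta0 / (1.0 + n)                        # assumed update rule h for the noise
        Xt = X + beta * rng.standard_normal(X.shape)    # noisy values x-tilde^i
        # vector field (1/N) sum_j grad_2 k(x^j, .) - (1/M) sum_m grad_2 k(y^m, .),
        # with grad_2 k(a, b) = k(a, b) (a - b) / s^2 for the Gaussian kernel
        gX = (gauss_k(X, Xt, s)[:, :, None] * (X[:, None, :] - Xt[None, :, :])).mean(0) / s ** 2
        gY = (gauss_k(Y, Xt, s)[:, :, None] * (Y[:, None, :] - Xt[None, :, :])).mean(0) / s ** 2
        X = X - gamma * (gX - gY)                       # particle update
    return X

# demo: particles started at N(0, I) are driven toward a shifted Gaussian target
Y = np.random.default_rng(1).standard_normal((60, 2)) + 1.0
X0 = np.random.default_rng(0).standard_normal((50, 2))  # same draw as the initial particles
X_final = noisy_mmd_flow(Y)
```

With these choices the update attracts each noisy particle toward the target samples and repels it from the other particles, which is the descent direction for the MMD.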
], [ "Auxiliary results", "Proposition 21 Under assump:lipschitzgradientk, the unnormalised witness function $f_{\\mu ,\\nu }$ between any probability distributions $\\mu $ and $\\nu $ in $\\mathcal {P}_2({{\\mathcal {X}}})$ is differentiable and satisfies: $\\nabla f_{\\mu ,\\nu }(z) = \\int \\nabla _1 k(z,x)\\mathop {}\\!\\mathrm {d}\\mu (x) - \\int \\nabla _1 k(z,x)\\mathop {}\\!\\mathrm {d}\\nu (x) \\qquad \\forall z\\in {{\\mathcal {X}}}$ where $z \\mapsto \\nabla _1 k(z,x)$ denotes the gradient of $z\\mapsto k(z,x)$ for a fixed $x \\in {{\\mathcal {X}}}$ .", "Moreover, the map $(z,\\mu ,\\nu )\\mapsto \\nabla f_{\\mu ,\\nu }(z)$ is Lipschitz with: $\\Vert \\nabla f_{\\mu ,\\nu }(z) - \\nabla f_{\\mu ^{\\prime },\\nu ^{\\prime }}(z^{\\prime })\\Vert \\le 2L (\\Vert z-z^{\\prime } \\Vert + W_2(\\mu ,\\mu ^{\\prime }) + W_2(\\nu ,\\nu ^{\\prime }))$ Finally, each component of $\\nabla f_{\\mu ,\\nu }$ belongs to ${{\\mathcal {H}}}$ .", "The expression of the unnormalised witness function is given in eq:witnessfunction.", "To establish eq:gradientwitness, we simply need to apply the differentiation lemma .", "By assump:lipschitzgradientk, it follows that $ (x,z)\\mapsto \\nabla _1 k(z,x)$ has at most linear growth.", "Hence on any bounded neighborhood of $z$ , $x\\mapsto \\Vert \\nabla _1 k(z,x) \\Vert $ is upper-bounded by an integrable function w.r.t.", "$\\mu $ and $\\nu $ .", "Therefore, the differentiation lemma applies and $f_{\\mu ,\\nu }$ is differentiable with gradient given by eq:gradientwitness.", "To prove the second statement, we will consider two optimal couplings: $\\pi _1$ with marginals $\\mu $ and $\\mu ^{\\prime }$ , and $\\pi _2$ with marginals $\\nu $ and $\\nu ^{\\prime }$ .", "We use eq:gradientwitness to write: $\\Vert \\nabla f_{\\mu ,\\nu }(z) - \\nabla f_{\\mu ^{\\prime },\\nu ^{\\prime }}(z^{\\prime })\\Vert &= \\left\\Vert \\mathbb {E}_{\\pi _1}\\left[ \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right] - \\mathbb 
{E}_{\\pi _2}\\left[\\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime })\\right] \\right\\Vert \\\\& \\le \\mathbb {E}_{\\pi _1}\\left[ \\left\\Vert \\nabla _1 k(z,x)-\\nabla _1 k(z^{\\prime },x^{\\prime }) \\right\\Vert \\right] + \\mathbb {E}_{\\pi _2}\\left[\\left\\Vert \\nabla _1 k(z,y)-\\nabla _1 k(z^{\\prime },y^{\\prime }) \\right\\Vert \\right] \\\\&\\le L\\left( \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _1}[\\Vert x-x^{\\prime } \\Vert ] + \\Vert z-z^{\\prime } \\Vert + \\mathbb {E}_{\\pi _2}[\\Vert y-y^{\\prime } \\Vert ] \\right)\\\\&\\le L(2\\Vert z-z^{\\prime }\\Vert + W_2(\\mu ,\\mu ^{\\prime }) + W_2(\\nu ,\\nu ^{\\prime }) )$ The second line is obtained by convexity while the third one uses assump:lipschitzgradientk and finally the last line relies on $\\pi _1$ and $\\pi _2$ being optimal.", "The desired bound is obtained by further upper-bounding the last two terms by twice their amount.", "Lemma 22 Let $U$ be an open set, $q$ a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times \\mathcal {U})$ and $\\psi $ and $\\phi $ two measurable maps from ${{\\mathcal {X}}}\\times \\mathcal {U} $ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ and $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:lipschitzgradientk, $\\mathcal {F}(\\rho _t)$ is differentiable in $t$ with $\\dot{{{\\mathcal {F}}}}(\\rho _t)&=\\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ where $f_{\\mu ,\\rho _t}$ is the unnormalised witness function between $\\mu $ and $\\rho _t$ as defined in eq:witnessfunction.", "Moreover: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t) - \\dot{{{\\mathcal {F}}}}(\\rho _s) \\right|\\le 3L\\left|t-s \\right|\\int \\left\\Vert \\phi (x,u) \\right\\Vert ^2 dq(x,u)$ For simplicity, we write $f_t$ instead of $f_{\\mu ,\\rho _t}$ and denote by 
$s_t(x,u)= \\psi (x,u)+t\\phi (x,u)$ The function $h: t\\mapsto k(s_t(x,u),s_t(x^{\\prime },u^{\\prime })) - k(s_t(x,u),z) - k(s_t(x^{\\prime },u^{\\prime }),z)$ is differentiable for all $(x,u)$ ,$(x^{\\prime },u^{\\prime })$ in ${{\\mathcal {X}}}\\times \\mathcal {U}$ and $z\\in {{\\mathcal {X}}}$ .", "Moreover, by assump:lipschitzgradientk, a simple computation shows that for all $0\\le t\\le 1$ : $\\left|\\dot{h} \\right|\\le L\\left[ \\left(\\left\\Vert z - \\phi (x,u)\\right\\Vert + \\left\\Vert \\psi (x,u)\\right\\Vert \\right) \\left\\Vert \\phi (x^{\\prime },u^{\\prime })\\right\\Vert +\\left(\\left\\Vert z - \\phi (x^{\\prime },u^{\\prime })\\right\\Vert + \\left\\Vert \\psi (x^{\\prime },u^{\\prime })\\right\\Vert \\right)\\left\\Vert \\phi (x,u)\\right\\Vert \\right]$ The right hand side of the above inequality is integrable when $z$ , $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are independent and such that $z\\sim \\mu $ and both $(x,u)$ and $(x^{\\prime },u^{\\prime })$ are distributed according to $q$ .", "Therefore, by the differentiation lemma it follows that ${{\\mathcal {F}}}(\\rho _t)$ is differentiable and: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[(\\nabla _1 k(s_t(x,u),s_t(x^{\\prime },u^{\\prime }))-\\nabla _1 k(s_t(x,u),z)).\\phi (x,u)\\right].$ By prop:gradwitnessfunction, we directly get $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\int \\nabla f_{\\mu ,\\rho _t}(\\psi (x,u)+t\\phi (x,u)) \\phi (x,u)\\mathop {}\\!\\mathrm {d}q(x,u)$ .", "We now control the difference $\\vert \\dot{{{\\mathcal {F}}}}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }})\\vert $ for $0\\le t,t^{\\prime }\\le 1$ .", "Using assump:lipschitzgradientk and recalling that $s_t(x,u)-s_{t^{\\prime }}(x,u)= (t-t^{\\prime })\\phi (x,u)$ a simple computation shows: $\\left|\\dot{{{\\mathcal {F}}}}(\\rho _t)-\\dot{{{\\mathcal {F}}}}(\\rho _{t^{\\prime }}) \\right|&\\le L\\left|t-t^{\\prime } \\right|\\mathbb {E}\\left[\\left(2\\Vert \\phi (x,u) \\Vert + \\Vert \\phi 
(x^{\\prime },u^{\\prime })\\Vert \\right)\\Vert \\phi (x,u)\\Vert \\right]\\\\&\\le L\\vert t-t^{\\prime }\\vert (2\\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert ^2 \\right] + \\mathbb {E}\\left[\\Vert \\phi (x,u)\\Vert \\right]^2)\\\\&\\le 3L\\vert t-t^{\\prime }\\vert \\int \\Vert \\phi (x,u)\\Vert ^2 \\mathop {}\\!\\mathrm {d}q(x,u).$ which gives the desired upper-bound.", "We denote by $(x,y)\\mapsto H_1 k(x,y)$ the Hessian of $x\\mapsto k(x,y)$ for all $y\\in {{\\mathcal {X}}}$ and by $(x,y)\\mapsto \\nabla _1\\nabla _2 k(x,y)$ the upper cross-diagonal block of the Hessian of $(x,y)\\mapsto k(x,y)$ .", "Lemma 23 Let $q$ be a probability distribution in $\\mathcal {P}_2({{\\mathcal {X}}}\\times {{\\mathcal {X}}})$ and $\\psi $ and $\\phi $ two measurable maps from ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ to ${{\\mathcal {X}}}$ which are square-integrable w.r.t $q$ .", "Consider the path $\\rho _t$ from $(\\psi )_{\\#}q$ and $(\\psi +\\phi )_{\\#}q$ given by: $\\rho _t= (\\psi +t\\phi )_{\\#}q \\quad \\forall t\\in [0,1]$ .", "Under assump:diffkernel,assump:lipschitzgradientk, $\\mathcal {F}(\\rho _t)$ is twice differentiable in $t$ with $\\ddot{{{\\mathcal {F}}}}(\\rho _t)=&\\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] \\\\&+ \\mathbb {E}\\left[\\phi (x,y)^T (H_1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-H_1k(s_t(x,y),z)) \\phi (x,y)\\right]$ where $(x,y)$ and $(x^{\\prime },y^{\\prime })$ are independent samples from $q$ , $z$ is a sample from $\\mu $ and $s_t(x,y)= \\psi (x,y)+t\\phi (x,y)$ .", "Moreover, if assump:boundedfourthoder also holds then: $\\ddot{{{\\mathcal {F}}}}(\\rho _t) \\ge \\mathbb {E}\\left[\\phi (x,y)^T\\nabla _1 \\nabla _2 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime })) \\phi (x^{\\prime },y^{\\prime })\\right] - \\sqrt{2}\\lambda d {{\\mathcal {F}}}(\\rho _t)^{\\frac{1}{2}}\\mathbb {E}[\\Vert \\phi (x,y) \\Vert ^2]$ where we recall that ${{\\mathcal {X}}}\\subset 
\\mathbb {R}^d$ .", "The first part is similar to lem:derivativemmdaugmented.", "In fact we already know by lem:derivativemmdaugmented that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ exists and is given by: $\\dot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[(\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)\\right]$ Define now the function $\\xi : t\\mapsto (\\nabla _1 k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))-\\nabla _1 k(s_t(x,y),z)).\\phi (x,y)$ which is differentiable for all $(x,y)$ ,$(x^{\\prime },y^{\\prime })$ in ${{\\mathcal {X}}}\\times {{\\mathcal {X}}}$ and $z\\in {{\\mathcal {X}}}$ by assump:diffkernel.", "Moreover, its time derivative is given by: $\\dot{\\xi } =& \\phi (x^{\\prime },y^{\\prime })^T \\nabla _2\\nabla _1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }))\\phi (x,y) \\\\&+ \\phi (x,y)^T(H_1k(s_t(x,y),s_t(x^{\\prime },y^{\\prime }) ) - H_1k(s_t(x,y),z ))\\phi (x,y)$ By assump:lipschitzgradientk it follows in particular that $\\nabla _2\\nabla _1k$ and $H_1k$ are bounded hence $\\vert \\dot{\\xi } \\vert $ is upper-bounded, up to a constant, by $ (\\Vert \\phi (x,y) \\Vert + \\Vert \\phi (x^{\\prime },y^{\\prime }) \\Vert )\\Vert \\phi (x,y)\\Vert $ which is integrable.", "Therefore, by the differentiation lemma it follows that $\\dot{{{\\mathcal {F}}}}(\\rho _t)$ is differentiable and $\\ddot{{{\\mathcal {F}}}}(\\rho _t) = \\mathbb {E}\\left[\\dot{\\xi }\\right].$ We prove now the second statement.", "By the reproducing property, it is easy to see that the last term in the expression of $\\dot{\\xi }$ can be written as: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")\\rangle _{{{\\mathcal {H}}}}$ Now, taking the expectation w.r.t $x^{\\prime }$ ,$y^{\\prime }$ and $z$ which can be exchanged with the inner-product in ${{\\mathcal {H}}}$ since $(x^{\\prime },y^{\\prime },z)\\mapsto k(s_t(x^{\\prime },y^{\\prime }),.", ")- k(z,.", ")$ is Bochner integrable and 
recalling that such integral is given by $f_{\\mu ,\\rho _t}$ one gets the following expression: $\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\rangle _{{{\\mathcal {H}}}}$ Using Cauchy-Schwartz and assump:boundedfourthoder it follows that: $\\vert \\left\\langle \\phi (x,y)^TH_1 k(s_t(x,y),.", ")\\phi (x,y), f_{\\mu ,\\rho _t} \\right\\rangle _{{{\\mathcal {H}}}}\\vert \\le \\lambda d\\Vert \\phi (x,y)\\Vert ^2 \\Vert f_{\\mu ,\\rho _t}\\Vert $ One then concludes using the expression of $\\ddot{{{\\mathcal {F}}}}(\\rho _t)$ and recalling that ${{\\mathcal {F}}}(\\rho _t) = \\frac{1}{2}\\Vert f_{\\mu ,\\rho _t} \\Vert ^2$ .", "Lemma 24 Assume that for any geodesic $(\\rho _{t})_{t\\in [0,1]}$ between $\\rho _{0}$ and $\\rho _{1}$ in $\\mathcal {P}({{\\mathcal {X}}})$ with velocity vectors $(V_t)_{t \\in [0,1]}$ the following holds: $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ for some admissible functional $\\Lambda $ as defined in def:conditionslambda, then: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho _{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho _{s},V_{s})G(s,t)ds$ with $G(s,t)=s(1-t) \\mathbb {1}\\lbrace s\\le t\\rbrace +t(1-s) \\mathbb {1}\\lbrace s\\ge t\\rbrace $ for $0\\le s,t\\le 1$ .", "This is a direct consequence of the general identity (, Proposition 16.2).", "Indeed, for any continuous function $\\phi $ on $[0,1]$ with second derivative $\\ddot{\\phi }$ that is bounded below in distribution sense the following identity holds: $\\phi (t)=(1-t)\\phi (0)+t\\phi (1)-\\int _{0}^{1}\\ddot{\\phi }(s)G(s,t)ds.$ This holds a fortiori for ${{\\mathcal {F}}}(\\rho _{t})$ since ${{\\mathcal {F}}}$ is smooth.", "By assumption, we have that $\\ddot{{{\\mathcal {F}}}}(\\rho _{t}) \\ge \\Lambda (\\rho _t,V_t)$ , hence, it follows that: ${{\\mathcal {F}}}(\\rho _{t})\\le (1-t){{\\mathcal {F}}}(\\rho _{0})+t{{\\mathcal {F}}}(\\rho _{1})-\\int _{0}^{1}\\Lambda (\\rho 
_{s},V_{s})G(s,t)ds.$ Lemma 25 [Mixture convexity] The functional ${{\\mathcal {F}}}$ is mixture convex: for any probability distributions $\\nu _1$ and $\\nu _2$ and scalar $0\\le \\lambda \\le 1$ : ${{\\mathcal {F}}}(\\lambda \\nu _1+(1-\\lambda )\\nu _2)\\le \\lambda {{\\mathcal {F}}}(\\nu _1)+ (1-\\lambda ){{\\mathcal {F}}}(\\nu _2)$ Let $\\nu $ and $\\nu ^{\\prime }$ be two probability distributions and $0\\le \\lambda \\le 1$ .", "Expanding the RKHS norm in ${{\\mathcal {F}}}$ it follows directly that: $\\mathcal {F}(\\lambda \\nu + (1-\\lambda )\\nu ^{\\prime }) -\\lambda \\mathcal {F}(\\nu ) -(1-\\lambda )\\mathcal {F}(\\nu ^{\\prime }) = -\\frac{1}{2}\\lambda (1-\\lambda )MMD(\\nu ,\\nu ^{\\prime })^2 \\le 0.$ which concludes the proof.", "Lemma 26 [Discrete Gronwall lemma] Let $a_{n+1}\\le (1+\\gamma A)a_{n}+b$ with $\\gamma >0$ , $A>0$ , $b>0$ and $a_0=0$ , then: $a_{n}\\le \\frac{b}{\\gamma A}(e^{n\\gamma A}-1).$ Using the recursion, it is easy to see that for any $n>0$ : $a_n \\le (1+\\gamma A)^n a_0 + b\\left(\\sum _{k=0}^{n-1}(1+\\gamma A )^{k}\\right)$ One concludes using the identity $\\sum _{k=0}^{n-1}(1+\\gamma A )^{k} =\\frac{1}{\\gamma A}((1+\\gamma A)^{n} -1)$ and recalling that $(1+\\gamma A)^{n} \\le e^{n\\gamma A}$ ." ] ]
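The discrete Gronwall bound of Lemma 26 is easy to check numerically. The sketch below takes the recursion at equality (the worst case) with illustrative constants and compares it against both the exact closed form $a_n = \frac{b}{\gamma A}((1+\gamma A)^n - 1)$ and the exponential bound.

```python
import math

def gronwall_bound(n, gamma, A, b):
    # the claimed bound: a_n <= b / (gamma A) * (exp(n gamma A) - 1)
    return b / (gamma * A) * (math.exp(n * gamma * A) - 1.0)

gamma, A, b = 0.01, 2.0, 0.5   # illustrative constants with gamma, A, b > 0
a = 0.0                        # a_0 = 0
for n in range(1, 201):
    a = (1 + gamma * A) * a + b                         # worst case: recursion at equality
    assert a <= gronwall_bound(n, gamma, A, b) + 1e-9   # Lemma 26 holds at every step

# exact closed form from the geometric-series identity in the proof
exact = b * ((1 + gamma * A) ** 200 - 1) / (gamma * A)
assert abs(a - exact) < 1e-6 * exact
```

The gap between `a` and `gronwall_bound` comes solely from the inequality $(1+\gamma A)^n \le e^{n\gamma A}$ used in the last step of the proof.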
1906.04370
[ [ "General Linear Group Action on Tensors: A Candidate for Post-Quantum\n Cryptography" ], [ "Abstract Starting from the one-way group action framework of Brassard and Yung (Crypto '90), we revisit building cryptography based on group actions.", "Several previous candidates for one-way group actions no longer stand, due to progress both on classical algorithms (e.g., graph isomorphism) and quantum algorithms (e.g., discrete logarithm).", "We propose the general linear group action on tensors as a new candidate to build cryptography based on group actions.", "Recent works (Futorny--Grochow--Sergeichuk, Lin.", "Alg.", "Appl., 2019) suggest that the underlying algorithmic problem, the tensor isomorphism problem, is the hardest one among several isomorphism testing problems arising from areas including coding theory, computational group theory, and multivariate cryptography.", "We present evidence to justify the viability of this proposal from a comprehensive study of the state-of-the-art heuristic algorithms, theoretical algorithms, and hardness results, as well as quantum algorithms.", "We then introduce a new notion called pseudorandom group actions to further develop group-action based cryptography.", "Briefly speaking, given a group $G$ acting on a set $S$, we assume that it is hard to distinguish two distributions of $(s, t)$ either uniformly chosen from $S\\times S$, or where $s$ is randomly chosen from $S$ and $t$ is the result of applying a random group action of $g\\in G$ on $s$.", "This subsumes the classical decisional Diffie-Hellman assumption when specialized to a particular group action.", "We carefully analyze various attack strategies that support the general linear group action on tensors as a candidate for this assumption.", "Finally, we establish the quantum security of several cryptographic primitives based on the one-way group action assumption and the pseudorandom group action assumption."
], [ "Introduction", "Modern cryptography has thrived thanks to the paradigm shift to a formal approach: precise definition of security and mathematically sound proof of security of a given construction based on accurate assumptions.", "Most notably, computational assumptions originating from specific algebraic problems such as factoring and discrete logarithm have enabled widely deployed cryptosystems.", "Clearly, it is imperative to base cryptography on diverse problems to reduce the risk that some problems turn out to be easy.", "One such effort was by Brassard and Yung soon after the early development of modern cryptography [22].", "They proposed an approach to use a group action to construct a one-way function, from which they constructed cryptographic primitives such as bit commitment, identification and digital signature.", "The abstraction of one-way group actions ($\\mathrm {OWA}$ ) not only unifies the assumptions from factoring and discrete logarithm, but more importantly Brassard and Yung suggested new problems to instantiate it such as the graph isomorphism problem (GI).", "Since then, many developments fall in this framework [77], [27], [55], [71].", "In particular, the work of Couveignes [27] can be understood as a specific group action based on isogenies between elliptic curves, and it has spurred the development of isogeny-based cryptography [30].", "However, searching for concrete group actions to support this approach turns out to be a tricky task, especially given the potential threats from attackers capable of quantum computation.", "For graph isomorphism, there are effective heuristic solvers [67], [69] as well as efficient average-case algorithms [11], not to mention Babai's recent breakthrough of a quasipolynomial-time algorithm [7].", "Shor's celebrated work solves discrete logarithm and factoring in polynomial time on a quantum computer [85], which would break a vast majority of public-key cryptography.", "The core technique, quantum Fourier 
sampling, has proven powerful and can be applied to break popular symmetric-key cryptosystems as well [60].", "A subexponential-time quantum algorithm was also found for computing isogenies in ordinary curves [26], which contributed to the shift to supersingular curves in the recent development of isogeny-based cryptography [48].", "In fact, there is a considerable effort to develop post-quantum cryptography that can resist quantum attacks.", "Besides isogeny-based schemes, there are popular proposals based on discrete lattices, coding problems, and multivariate equations [8], [24]." ], [ "Overview of our results", "In this paper, we revisit building cryptography via the framework of group actions and aim to provide a new candidate and new tools that could serve as quantum-safe solutions.", "Our contribution can be summarized in the following three aspects.", "First, we propose a family of group actions on tensors of order at least three over a finite field as a new candidate for one-way actions.", "We back up its viability by its relation with other group actions, extensive analysis from heuristic algorithms, provable algorithmic and hardness results, as well as demonstrating its resistance to a standard quantum Fourier sampling technique.", "Second, we propose the notion of pseudorandom group actions ($\\mathrm {PRA}$ ) that extends the scope of the existing group-action framework.", "The $\\mathrm {PRA}$ assumption can be seen as a natural generalization of the Decisional Diffie-Hellman (DDH) assumption.", "We again instantiate it with the group action on tensors, and we provide evidence (in addition to those for one-wayness) from analyzing various state-of-the-art attacking strategies.", "Finally, based on any $\\mathrm {PRA}$ , we show realization of several primitives in minicrypt such as digital signatures via the Fiat-Shamir transformation and pseudorandom functions.", "We give complete security proofs against quantum adversaries, thanks to recent advances in analyzing 
quantum superposition attacks and the quantum random oracle model [94], [91], [88], which is known to be a tricky business.", "Our constructions based on $\\mathrm {PRA}$ are more efficient than known schemes based on one-way group actions.", "As a side contribution, we also describe formal quantum-security proofs for several $\\mathrm {OWA}$ -based schemes including identification and signatures, which are missing in the literature and deserve some care.", "In what follows, we elaborate on our proposed group action based on tensors and the new pseudorandom group action assumption.", "Readers interested in the cryptographic primitives supported by $\\mathrm {PRA}$ are referred to Section ." ], [ "The general linear group action on tensors.", "The candidate group action we propose is based on tensors, a central notion in quantum theory.", "In this paper, a $k$ -tensor $T$ is a multidimensional array with $k$ indices $i_1, i_2, \\ldots , i_k$ over a field $\\mathbb {F}$ , where $i_j \\in \\lbrace 1, 2, \\ldots , d_j\\rbrace $ for $j=1, 2, \\ldots , k$ .", "For a tuple of indices $(i_1, i_2, \\ldots , i_k)$ , the corresponding component of $T$ denoted as $T_{i_1, i_2, \\ldots , i_k}$ is an element of $\\mathbb {F}$ .", "The number $k$ is called the order of the tensor.", "A matrix over field $\\mathbb {F}$ can be regarded as a tensor of order two.", "We consider a natural group action on $k$ -tensors that represents a local change of basis.", "Let $G = \\prod _{j=1}^k \\operatorname{\\mathrm {GL}}(d_j,\\mathbb {F})$ be the direct product of general linear groups.", "For $M = \\bigl ( M^{(j)} \\bigr )_{j=1}^k \\in G$ , and a $k$ -tensor $T$ , the action of $M$ on $T$ is given by $\\alpha : (M, T) \\mapsto \\widehat{T}, \\text{ where } \\widehat{T}_{i_1, i_2, \\ldots , i_k}= \\sum _{l_1, l_2, \\ldots , l_k} \\biggl ( \\prod _{j=1}^k M^{(j)}_{i_j,l_j} \\biggr )T_{l_1, l_2, \\ldots , l_k}.$ We shall refer to the above group action as the general linear group action on 
tensors ($\\mathrm {GLAT}$ ) of dimensions $(d_1, \\dots , d_k)$ over $\\mathbb {F}$ , or simply $\\mathrm {GLAT} $ when there is no risk of confusion.", "We will consider group actions on tensors of order at least three, as the problem is usually easy for matrices.", "In fact, in most cases we focus on 3-tensors, which are the most studied and are believed to be hard." ], [ "General linear actions on tensors as a candidate for one-way group\nactions.", "We propose to use $\\mathrm {GLAT}$ as an instantiation of one-way group actions.", "Roughly speaking, a group action is called a one-way group action (OWA in short), if for a random $s\\in S$ , a random $g\\in G$ , $t=g\\cdot s$ , and any polynomial-time adversary $\\mathcal {A}$ given $s$ and $t$ as input, $\\mathcal {A}$ outputs a $g^{\\prime }\\in G$ such that $t=g^{\\prime }\\cdot s$ only with negligible probability.", "Breaking the one-wayness can be identified with solving some isomorphism problem.", "Specifically, two $k$ -tensors $T$ and $\\widehat{T}$ are said to be isomorphic if there exists an $M\\in G$ such that $\\widehat{T} = \\alpha (M,T)$ .", "We define the decisional tensor isomorphism problem (DTI) as deciding if two given $k$ -tensors are isomorphic; and the search version (TI) is tasked with computing an $M\\in G$ such that $\\widehat{T} = \\alpha (M, T)$ if there is one.", "Clearly, our assumption that $\\mathrm {GLAT}$ is a one-way group action is equivalent to assuming that TI is hard for random $M\\in G$ , random $k$ -tensor $S$ , and $T := \\alpha (M,S)$ .", "We focus on the case when the order $k$ of the tensor equals three and the corresponding tensor isomorphism problem is abbreviated as 3TI.", "We justify our proposal from multiple routes; see Section  for a more formal treatment.", "The 3-tensor isomorphism problem can be regarded as “the most difficult” one among problems about testing isomorphism between objects, such as polynomials, graphs, linear codes, and groups, thanks to the 
recent work of Futorny, Grochow, and Sergeichuk [39].", "More specifically, it was proven in [39] that several isomorphism problems, including graph isomorphism, quadratic polynomials with 2 secrets from multivariate cryptography [77], $p$ -group isomorphism from computational group theory [76], [63], and linear code permutation equivalence from coding theory [81], [84], all reduce to 3TI; cf.", "Observation REF .", "Note that testing isomorphism of quadratic polynomials with two secrets has been studied in multivariate cryptography for more than two decades [77].", "Isomorphism testing of $p$ -groups has been studied in computational group theory and theoretical computer science at least since the 1980's (cf.", "[76], [63]).", "The current status of these two problems could then serve as evidence for the difficulty of 3TI.", "Known techniques that are effective on GI, including the combinatorial techniques [93] and the group-theoretic techniques [5], [64], are difficult to translate to 3TI.", "Indeed, it is not even clear how to adapt a basic combinatorial technique for GI, namely individualizing a vertex [11], to the 3TI setting.", "It is also much harder to work with matrix groups over finite fields than to work with permutation groups.", "Also, techniques in computer algebra, including those that lead to the recent solution of isomorphism of quadratic polynomials with one secret [57], seem not applicable to 3TI.", "Finally, there is evidence that quantum algorithmic techniques involving the otherwise highly successful quantum Fourier sampling may not be able to solve GI and code equivalence [53], [36].", "It is expected that the same argument holds with respect to 3TI as well.", "Loosely speaking, this is because the group underlying 3TI is a direct product of general linear groups, which also has irreducible representations of high dimensions."
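The action $\alpha$ defining $\mathrm{GLAT}$ is straightforward to write down explicitly. The NumPy sketch below works over $\mathbb{F}_7$ on a $2\times 3\times 4$ tensor (small illustrative parameters; real instantiations use much larger fields and dimensions) and verifies the two group-action axioms.

```python
import numpy as np

p = 7                      # a small illustrative prime field F_p
dims = (2, 3, 4)           # tensor dimensions (d_1, d_2, d_3)
rng = np.random.default_rng(0)

def act(Ms, T):
    # alpha(M, T)_{i1 i2 i3} = sum_{l1 l2 l3} M1[i1,l1] M2[i2,l2] M3[i3,l3] T[l1,l2,l3] mod p
    M1, M2, M3 = Ms
    return np.einsum('ia,jb,kc,abc->ijk', M1, M2, M3, T) % p

def rand_gl(d):
    # rejection-sample an invertible matrix over F_p (det is exact in float64 at this size)
    while True:
        M = rng.integers(0, p, size=(d, d))
        if round(np.linalg.det(M)) % p != 0:
            return M

T = rng.integers(0, p, size=dims)
I = tuple(np.eye(d, dtype=np.int64) for d in dims)
Ms = tuple(rand_gl(d) for d in dims)
Ns = tuple(rand_gl(d) for d in dims)
MN = tuple((M @ Nm) % p for M, Nm in zip(Ms, Ns))

assert np.array_equal(act(I, T), T)                     # identity axiom
assert np.array_equal(act(MN, T), act(Ms, act(Ns, T)))  # compatibility axiom
```

Compatibility holds because applying $M$ after $N$ contracts each index with the ordinary matrix product $M^{(j)}N^{(j)}$, so $\mathrm{GLAT}$ is indeed a left action of $\prod_j \operatorname{GL}(d_j,\mathbb{F})$.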
], [ "A new assumption: pseudorandom group actions.", "Inspired by the Decisional Diffie-Hellman assumption, which enables versatile cryptographic constructions, we propose the notion of pseudorandom group actions, or $\\mathrm {PRA}$ in short.", "Roughly speaking, we call a group action $\\alpha : G \\times S \\rightarrow S$ pseudorandom, if any quantum polynomial-time algorithm $\\mathcal {A}$ cannot distinguish the following two distributions except with negligible probability: $(s, t)$ where $s, t\\in _R S$ , and the other distribution $(s, \\alpha (g, s))$ , where $s\\in _R S$ and $g\\in _R G$ .", "A precise definition can be found in Section .", "Note that if a group action is transitive, then the pseudorandom distribution trivially coincides with the random distribution.", "Unless otherwise stated, we will consider intransitive group actions when working with pseudorandom group actions.", "In fact, we can assume that $(s, t)$ from the random distribution are in different orbits with high probability, while $(s, t)$ from the pseudorandom distribution are always in the same orbit.", "Also note that $\\mathrm {PRA}$ is a stronger assumption than $\\mathrm {OWA}$ .", "To break $\\mathrm {PRA}$ , it is enough to solve the isomorphism testing problem on average in a relaxed sense, i.e., on $1/\\operatorname{\\mathrm {poly}}(n)$ fraction of the input instances instead of all but $1/\\operatorname{\\mathrm {poly}}(n)$ fraction, where $n$ is the input size.", "The Decisional Diffie-Hellman (DDH) assumption [31], [20] can be seen as the $\\mathrm {PRA}$ instantiated with a certain group action; see Observation REF .", "However, DDH is broken on a quantum computer.", "We resort again to $\\mathrm {GLAT}$ as a quantum-safe candidate of $\\mathrm {PRA}$ .", "We investigate the hardness of breaking $\\mathrm {PRA}$ from various perspectives and provide further justification for using the general linear action on 3-tensors as a candidate for $\\mathrm {PRA}$ .", "Easy 
instances on 3-tensors seem scarce, and average-case algorithms do not speed up dramatically.", "Indeed, the best known average-case algorithm, while improving somewhat over the worst case due to the birthday paradox, still inherently enumerates all vectors in $\\mathbb {F}_q^n$ and hence takes exponential time [15], [63].", "For 3-tensors, there have not been non-trivial and easy-to-compute isomorphism invariants, i.e., those properties that are preserved under the action.", "For example, a natural isomorphism invariant, the tensor rank, is well-known to be NP-hard [49].", "Later work suggests that “most tensor problems are NP-hard” [51].", "We propose and analyze several attack strategies from group theory and geometry.", "While effective on some non-trivial actions, these attacks do not work for the general linear action on 3-tensors.", "For instance, we notice that breaking our $\\mathrm {PRA}$ from $\\mathrm {GLAT}$ reduces to the orbit closure intersection problem, which has received considerable attention in optimization and geometric complexity theory.", "Despite recent advances [73], [16], [13], [1], [32], [58], any improvement towards a more effective attack would be a breakthrough.", "Recently, De Feo and Galbraith proposed an assumption in the setting of supersingular isogeny-based cryptography, which can be viewed as another instantiation of $\\mathrm {PRA}$  [38].", "This gives more reason to further explore $\\mathrm {PRA}$ as a basic building block in cryptography."
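The two distributions in the $\mathrm{PRA}$ security experiment can be sketched concretely for $\mathrm{GLAT}$. The toy samplers below work over $\mathbb{F}_7$ with $3\times 3\times 3$ tensors (illustrative parameters only; secure instantiations require much larger fields and dimensions), producing either an independent uniform pair $(s,t)$ or a pair $(s, g\cdot s)$ lying in a single orbit by construction.

```python
import numpy as np

p, dims = 7, (3, 3, 3)     # illustrative toy parameters
rng = np.random.default_rng(1)

def act(Ms, T):
    # the GLAT action of (M1, M2, M3) on a 3-tensor, entrywise mod p
    M1, M2, M3 = Ms
    return np.einsum('ia,jb,kc,abc->ijk', M1, M2, M3, T) % p

def rand_gl(d):
    # rejection-sample an invertible matrix over F_p
    while True:
        M = rng.integers(0, p, size=(d, d))
        if round(np.linalg.det(M)) % p != 0:
            return M

def sample_random():
    # the "random" distribution: s and t uniform and independent in S x S
    return rng.integers(0, p, size=dims), rng.integers(0, p, size=dims)

def sample_pseudorandom():
    # the "pseudorandom" distribution: t = g . s for uniform s in S and uniform g in G
    s = rng.integers(0, p, size=dims)
    g = tuple(rand_gl(d) for d in dims)
    return s, act(g, s), g

s, t, g = sample_pseudorandom()
assert np.array_equal(t, act(g, s))    # the pair lies in one orbit by construction
u, v = sample_random()
assert u.shape == dims and v.shape == dims
```

The $\mathrm{PRA}$ assumption for $\mathrm{GLAT}$ states that no efficient (quantum) distinguisher can tell the outputs of `sample_random` and (the first two components of) `sample_pseudorandom` apart at cryptographic parameter sizes.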
], [ "Discussions", "In this paper, we further develop and extend the scope of group action based cryptography by introducing the general linear group actions on tensors, $\\mathrm {GLAT}$ , to the family of instantiations, by formulating the pseudorandom assumption generalizing the well-known DDH assumption, and by proving the quantum security of various cryptographic primitives such as signatures and pseudorandom functions in this framework.", "There are two key features of $\\mathrm {GLAT}$ that are worth mentioning explicitly.", "First, the general linear action is non-commutative simply because the general linear group is non-abelian.", "This is, on the one hand, an attractive property that enabled us to argue the quantum hardness and the infeasibility of attacks of the quantum Fourier sampling type.", "On the other hand, however, this also makes it challenging to extend many attractive properties of the discrete logarithm and decisional Diffie-Hellman assumptions to the more general framework of group action cryptography.", "For example, while it is known that the worst-case DDH assumption reduces to the average-case DDH assumption [75], the proof relies critically on commutativity.", "Second, the general linear action is linear and the space of tensors forms a linear space.", "Linearity seems to be responsible for the supergroup attacks on the $\\mathrm {PRA} (d)$ assumption discussed in Subsection REF .", "It also introduces difficulties in building more efficient PRF constructions analogous to the DDH-based ones proposed in [75].", "Our work leaves a host of basic problems about group action based cryptography as future work.", "First, we have been focusing on the general linear group actions on tensors and have not discussed the other possible group actions on the tensor space in much detail.", "A mixture of different types of group actions on different indices of the tensor may have the advantage of obtaining a more efficient construction or other appealing structural 
properties.", "It will be interesting to understand better how the hardness of the group actions on tensors relates to each other and what the good choices of group actions are for practicality considerations.", "Second, it is appealing to recover the average-case to worst-case reduction, at least to some extent, for the general group actions framework.", "Finally, it is an important open problem to build quantum-secure public-key encryption schemes based on hard problems about $\\mathrm {GLAT}$ or its close variations." ], [ "The group action framework", "In this section, we formally describe the framework for group action based cryptography to be used in this paper.", "While such general frameworks were already proposed by Brassard and Yung [22] and Couveignes [27], there are delicate differences in several places, so we still have to go through the details.", "This section should be considered as largely expository." ], [ "Group actions and notations", "Let us first formally define group actions.", "Let $G$ be a group, $S$ be a set, and $\\mathrm {id}$ the identity element of $G$ .", "A (left) group action of $G$ on $S$ is a function $\\alpha : G\\times S\\rightarrow S$ satisfying the following: (1) $\\forall s\\in S$ , $\\alpha (\\mathrm {id}, s)=s$ ; (2) $\\forall g, h\\in G$ , $s\\in S$ , $\\alpha (gh, s)=\\alpha (g, \\alpha (h, s))$ .", "The group operation is denoted by $\\circ $ , e.g.", "for $g, h\\in G$ , we can write their product as $g\\circ h$ .", "We shall use $\\cdot $ to denote the left action, e.g.", "$g\\cdot s=\\alpha (g, s)$ .", "We may also consider the right group action $\\beta :S\\times G\\rightarrow S$ , and use the exponent notation for right actions, e.g.", "$s^g=\\beta (s, g)$ .", "Later, we will use a special symbol $\\bot \\notin G\\cup S$ to indicate that a bit string does not correspond to an encoding of an element in $G$ or $S$ .", "We extend the operators $\\circ $ and $\\cdot $ to $\\circ :G\\cup \\lbrace \\bot \\rbrace 
\\times G\\cup \\lbrace \\bot \\rbrace \\rightarrow G\\cup \\lbrace \\bot \\rbrace $ and $\\cdot :G\\cup \\lbrace \\bot \\rbrace \\times S\\cup \\lbrace \\bot \\rbrace \\rightarrow S\\cup \\lbrace \\bot \\rbrace $ , by letting $g\\circ h=\\bot $ whenever $g=\\bot $ or $h=\\bot $ , and $g\\cdot s=\\bot $ whenever $g=\\bot $ or $s=\\bot $ .", "Let $\\alpha :G\\times S\\rightarrow S$ be a group action.", "For $s\\in S$ , the orbit of $s$ is $O_s=\\lbrace t\\in S : \\exists g\\in G, g\\cdot s=t\\rbrace $ .", "The action $\\alpha $ partitions $S$ into a disjoint union of orbits.", "If there is only one orbit, then $\\alpha $ is called transitive.", "Restricting $\\alpha $ to any orbit $O$ gives a transitive action.", "In this case, take any $s\\in O$ , and let $\\mathrm {Stab}(s, G)=\\lbrace g\\in G : g\\cdot s=s\\rbrace $ be the stabilizer group of $s$ in $G$ .", "For any $t\\in O$ , those group elements sending $s$ to $t$ form a coset of $\\mathrm {Stab}(s, G)$ .", "We then obtain the following easy observation.", "Observation 1 Let $\\alpha :G\\times S\\rightarrow S$ , $s$ , and $O$ be as above.", "The following two distributions are the same: the uniform distribution of $t\\in O$ , and the distribution of $g\\cdot s$ where $g$ is sampled from a uniform distribution over $G$ ." 
], [ "The computational model", "For computational purposes, we need to model the algorithmic representations of groups and sets, as well as basic operations like group multiplication, group inverse, and group actions.", "We review the group action framework as proposed in Brassard and Yung [22].", "A variant of this framework, with a focus on restricting to abelian (commutative) groups, was studied by Couveignes [27].", "However, it seems to us that some subtleties are present, so we will propose another version, and compare it with those by Brassard and Yung, and Couveignes, later.", "Let $n$ be a parameter which controls the instance size.", "Therefore, polynomial time or length in the following is with respect to $n$ .", "(Representing group and set elements.)", "Let $G$ be a group, and $S$ be a set.", "Let $\\alpha : G\\times S\\rightarrow S$ be a group action.", "Group elements and set elements are represented by bit strings $\\lbrace 0, 1\\rbrace ^*$ .", "There are polynomials $p(n)$ and $q(n)$ , such that we only work with group elements representable by $\\lbrace 0, 1\\rbrace ^{p(n)}$ and set elements representable by $\\lbrace 0, 1\\rbrace ^{q(n)}$ .", "There are functions $F_G$ and $F_S$ from $\\lbrace 0, 1\\rbrace ^*$ to $G\\cup \\lbrace \\perp \\rbrace $ and $S\\cup \\lbrace \\perp \\rbrace $ , respectively.", "Here, $\\perp $ is a special symbol, designating that the bit string does not represent a group or set element.", "$F_G$ and $F_S$ should be thought of as assigning bit strings to group and set elements, respectively.", "(Unique encoding of group and set elements.)", "For any $g\\in G$ , there exists a unique $b\\in \\lbrace 0, 1\\rbrace ^*$ such that $F_G(b)=g$ .", "In particular, there exists a unique bit string, also denoted by $\\mathrm {id}$ , such that $F_G(\\mathrm {id})=\\mathrm {id}$ .", "Similarly, for any $s\\in S$ , there exists a unique $b\\in \\lbrace 0, 1\\rbrace ^*$ such that $F_S(b)=s$ .", "(Group operations.)", "There are polynomial-time computable 
functions $\\mathrm {PROD}:\\lbrace 0, 1\\rbrace ^*\\times \\lbrace 0, 1\\rbrace ^*\\rightarrow \\lbrace 0, 1\\rbrace ^*$ and $\\mathrm {INV}:\\lbrace 0,1\\rbrace ^*\\rightarrow \\lbrace 0, 1\\rbrace ^*$ , such that for $b, c\\in \\lbrace 0, 1\\rbrace ^*$ , $F_G(\\mathrm {PROD} (b, c))=F_G(b)\\circ F_G(c)$ , and $F_G(\\mathrm {INV} (b))\\circ F_G(b)=\\mathrm {id}$ .", "(Group action.)", "There is a polynomial-time computable function $a:\\lbrace 0, 1\\rbrace ^*\\times \\lbrace 0,1\\rbrace ^*\\rightarrow \\lbrace 0, 1\\rbrace ^*$ , such that for $b\\in \\lbrace 0, 1\\rbrace ^*$ and $c\\in \\lbrace 0, 1\\rbrace ^*$ , $F_S(a(b, c))=\\alpha (F_G(b), F_S(c))$ .", "(Recognizing group and set elements.)", "There are polynomial-time computable functions $C_G$ and $C_S$ , such that $C_G(b)=1$ iff $F_G(b)\\ne \\bot $ , and $C_S(b)=1$ iff $F_S(b)\\ne \\bot $ .", "(Random sampling of group and set elements.)", "There are polynomial-time computable functions $R_G$ and $R_S$ , such that $R_G$ uniformly samples a group element $g\\in G$ , represented by the unique $b\\in \\lbrace 0, 1\\rbrace ^{p(n)}$ with $F_G(b)=g$ , and $R_S$ uniformly samples a set element $s\\in S$ , represented by some $b\\in \\lbrace 0, 1\\rbrace ^{q(n)}$ with $F_S(b)=s$ .", "Remark 1 Some remarks are in order for the above model.", "The differences with Brassard and Yung are: (1) allowing infinite groups and sets; (2) adding random sampling of set elements.", "Note that in the case of infinite groups and sets, the parameters $p(n)$ and $q(n)$ are used to control the bit lengths for the descriptions of legitimate group and set elements.", "This allows us to incorporate e.g.", "the lattice isomorphism problem [54] into this framework.", "In the rest of this article, however, we will mostly work with finite groups and sets, unless otherwise stated.", "The main reason to consider infinite groups is the use of lattice isomorphism and equivalence of integral bilinear forms in the cryptographic setting.", "The key 
difference with Couveignes lies in Couveignes's focus on transitive abelian group actions with trivial stabilizers.", "It is possible to adapt the above framework to use the black-box group model by Babai and Szemerédi [21], whose motivation was to deal with non-unique encodings of group elements (like quotient groups).", "For our purposes, it is more convenient and practical to assume that the group elements have unique encodings.", "Babai [6] gives an efficient Monte Carlo algorithm for sampling a random group element of a finite group in a very general setting which is applicable to most of our instantiations with finite groups." ], [ "The isomorphism problem and the one-way assumption", "Now that we have defined group actions and a computational model, let us examine the isomorphism problems associated with group actions.", "Definition 2 (The isomorphism problem) Let $\\alpha :G\\times S\\rightarrow S$ be a group action.", "The isomorphism problem for $\\alpha $ is to decide, given $s, t\\in S$ , whether $s$ and $t$ lie in the same orbit under $\\alpha $ .", "If they are, the search version of the isomorphism problem further asks to compute some $g\\in G$ , such that $\\alpha (g, s)=t$ .", "If we assume that there is a distribution on $S$ and we require the algorithm to succeed for $(s, t)$ where $s$ is sampled from this distribution and $t$ is arbitrary, then this is the average-case setting of the isomorphism problem.", "For example, the first average-case efficient algorithm for the graph isomorphism problem was designed by Babai, Erdős and Selkow in the 1970's [11].", "The hardness of the isomorphism problem provides us with the basic intuition for its use in cryptography.", "But for cryptographic uses, the promised search version of the isomorphism problem is more relevant, as already observed by Brassard and Yung [22].", "That is, given $s, t\\in S$ with the promise that they are in the same orbit, the problem asks to compute $g\\in G$ such that 
$g\\cdot s=t$ .", "Making this more precise and suitable for cryptographic purposes, we formulate the following problem.", "Definition 3 (The group-action inversion (GA-Inv) problem) Let $\\mathcal {G}$ be a group action family, such that for a security parameter $\\lambda $ , $\\mathcal {G}(1^\\lambda )$ consists of descriptions of a group $G$ , a set $S$ with $\\log (|G|)=\\operatorname{\\mathrm {poly}}(\\lambda )$ , $\\log (|S|)=\\operatorname{\\mathrm {poly}}(\\lambda )$ , and a group action $\\alpha : G \\times S \\rightarrow S$ that can be computed efficiently, which we denote as a whole as a public parameter $\\texttt {params}$ .", "Generate random $s\\leftarrow S$ and $g\\leftarrow G$ , and compute $t := \\alpha (g,s)$ .", "The group-action inversion (GA-Inv) problem is to find $g$ given $(s,t)$ .", "Definition 4 (Group-action inversion game) The group-action inversion game is the following game between a challenger and an arbitrary adversary $\\mathcal {A}$ : The challenger and adversary $\\mathcal {A}$ agree on the public parameter $\\texttt {params}$ by choosing it to be $\\mathcal {G}(1^\\lambda )$ for some security parameter $\\lambda $ .", "The challenger samples $s\\leftarrow S$ and $g\\leftarrow G$ using $R_S$ and $R_G$ , computes $t = g \\cdot s$ , and gives $(s, t)$ to $\\mathcal {A}$ .", "The adversary $\\mathcal {A}$ produces some $g^{\\prime }$ and sends it to the challenger.", "We define the output of the game $\\textsf {GA-Inv} _{\\mathcal {A},\\mathcal {G}}(1^\\lambda ) = 1$ if $g^{\\prime } \\cdot s = t$ , and say $\\mathcal {A}$ wins the game if $\\textsf {GA-Inv} _{\\mathcal {A},\\mathcal {G}}(1^\\lambda ) = 1$ .", "Definition 5 We say that the group-action inversion (GA-Inv) problem is hard relative to $\\mathcal {G}$ , if for any polynomial-time quantum algorithm $\\mathcal {A}$ , $\\Pr \\bigl [ \\textsf {GA-Inv} _{\\mathcal {A},\\mathcal {G}}(1^\\lambda ) = 1 \\bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ) \\, .$ We propose our first 
cryptographic assumption in the following.", "It generalizes the one in [22].", "Assumption 1 (One-way group action ($\\mathrm {OWA}$ ) assumption) There exists a family $\\mathcal {G}$ relative to which the $\\textsf {GA-Inv} $ problem is hard.", "We informally call the group action family $\\mathcal {G}$ in Assumption REF a one-way group action.", "Its name comes from the fact that, as already suggested in [22], this assumption immediately implies that we can treat $\\Gamma _s: G \\rightarrow S$ given by $\\Gamma _s(g)=\\alpha (g, s)$ as a one-way function for a random $s$ .", "In fact, the $\\mathrm {OWA}$ assumption is equivalent to the assertion that the function $\\Gamma :G\\times S\\rightarrow S\\times S$ given by $\\Gamma (g, s)=(g\\cdot s, s)$ is one-way in the standard sense.", "Note that the $\\mathrm {OWA}$ assumption comes with the promise that $s$ and $t$ are in the same orbit.", "The question is to compute a group element that sends $s$ to $t$ .", "Comparing with Definition REF , we see that the $\\mathrm {OWA}$ assumption is stronger than the assumption that the search version of the isomorphism problem is hard for a group action, while incomparable with the decision version.", "Still, most algorithms for the isomorphism problem we are aware of do solve the search version.", "Remark 6 Note that Assumption REF has a slight difference with that of Brassard and Yung as follows.", "In [22], Brassard and Yung ask for the existence of some $s\\in S$ as in Definition REF , such that for a random $g\\in G$ , it is not feasible to compute $g^{\\prime }$ that sends $s$ to $\\alpha (g, s)$ .", "Here, we relax this condition: a random $s\\in S$ already satisfies this.", "One motivation for Brassard and Yung to fix $s$ was to take graph isomorphism into account, for which Brassard and Crépeau defined the notion of “hard graphs” which could serve as this starting point [10].", "However, by Babai's algorithm [7] we know that hard graphs cannot exist.", "Here we use a 
stronger notion by allowing a random $s$ , which we believe is a reasonable requirement for some concrete group actions discussed in Section .", "A useful fact for the GA-Inv problem is that it is self-reducible to random instances within the orbit of the input pair.", "For any given $s$ , let $O_s$ be the orbit of $s$ under the group action $\\alpha $ .", "If there is an efficient algorithm $\\mathcal {A}$ that computes $g$ from $(t, t^{\\prime })$ where $t^{\\prime } = \\alpha (g,t)$ for at least a $1/\\operatorname{\\mathrm {poly}}(\\lambda )$ fraction of the pairs $(t,t^{\\prime })\\in O_s \\times O_s$ , then the GA-Inv problem can be solved for any $(t,t^{\\prime }) \\in O_s \\times O_s$ with probability $1-e^{-\\operatorname{\\mathrm {poly}}(\\lambda )}$ .", "On input $(t,t^{\\prime })$ , the algorithm samples random group elements $h,h^{\\prime }$ and calls $\\mathcal {A}$ with $(\\alpha (h,t),\\alpha (h^{\\prime },t^{\\prime }))$ .", "If $\\mathcal {A}$ successfully returns $g$ , the algorithm outputs $h^{\\prime -1}\\circ g\\circ h$ , which indeed sends $t$ to $t^{\\prime }$ since $g\\cdot (h\\cdot t)=h^{\\prime }\\cdot t^{\\prime }$ , and otherwise repeats the procedure a polynomial number of times.", "The one-way assumption leads to several basic cryptographic applications as described in the literature.", "First, it gives an identification scheme by adapting the zero-knowledge proof system for graph isomorphism [46].", "Then via the celebrated Fiat-Shamir transformation [43], one also obtains a signature scheme.", "Proving quantum security of these protocols, however, would need more care.", "For completeness, we give detailed proofs in Section ." ], [ "General linear actions on tensors: the one-way group action assumption", "In this section, we propose the general linear actions on tensors, i.e., the tensor isomorphism problem, as our choice of candidate for the $\\mathrm {OWA}$ assumption.", "We first reflect on what would be needed for a group action to be a good candidate." 
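Before turning to those requirements, the random self-reduction for GA-Inv described above can be made concrete with a toy sketch. The action below ($\\mathbb {Z}_N$ acting on itself by addition) and the weak oracle (which only succeeds on about a third of its inputs) are both illustrative stand-ins, not constructions from this paper; the point is the rerandomization wrapper.

```python
import random

N = 101  # toy example: the group Z_N acting on itself by addition

def act(g, s):
    return (g + s) % N

def weak_oracle(t, t2):
    # hypothetical average-case solver: succeeds only on "lucky" inputs
    # (roughly a 1/3 fraction), modeling a 1/poly success probability
    if t % 3 == 0:
        return (t2 - t) % N
    return None

def solve(t, t2, tries=200):
    # self-reduction: rerandomize (t, t2) inside its orbits until the
    # weak oracle succeeds, then undo the rerandomization
    for _ in range(tries):
        h, h2 = random.randrange(N), random.randrange(N)
        g = weak_oracle(act(h, t), act(h2, t2))
        if g is not None:
            # the oracle found g with g . (h . t) = h2 . t2; composing
            # appropriately (here written additively) sends t to t2
            return (-h2 + g + h) % N
    return None

s, secret = random.randrange(N), random.randrange(N)
t = act(secret, s)
assert act(solve(s, t), s) == t
```

Since this toy group is abelian, the order of composition is invisible here; for non-abelian actions the composed answer must be formed in the order dictated by the left-action convention.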
], [ "Requirements for a group action to be one-way", "Naturally, the hardness of the $\\textsf {GA-Inv} $ problem for a specific group action needs to be examined in the context of the following four types of algorithms.", "Practical algorithms: implemented algorithms with practical performance evaluations but no theoretical guarantees; Average-case algorithms: for some natural distribution over the input instances, there is an algorithm that is efficient for most input instances from this distribution with provable guarantees; Worst-case algorithms: efficient algorithms with provable guarantees for all input instances; Quantum algorithms: average-case or worst-case efficient algorithms in the quantum setting.", "Here, efficient means sub-exponential, and most means a $1-1/\\operatorname{\\mathrm {poly}}(n)$ fraction.", "It is important to keep in mind all possible attacks by these four types of algorithms.", "Past experience suggests that one problem may look difficult from one viewpoint, but turn out to be easy from another.", "The graph isomorphism problem has long been thought to be a difficult problem from the worst-case viewpoint.", "Indeed, a quasipolynomial-time algorithm became known only very recently, thanks to Babai's breakthrough [7].", "However, it has long been known to be effectively solvable from the practical viewpoint [67], [69].", "This shows the importance of practical algorithms when justifying a cryptographic assumption.", "Patarin proposed to use polynomial map isomorphism problems in his instantiation of the identification and signature schemes [77].", "He also proposed the one-sided version of such problems, which has been studied intensively [80], [45], [78], [41], [59], [12], [70], [15], [79], [14], mostly from the viewpoint of practical cryptanalysis.", "However, the problem of testing isomorphism of quadratic polynomials with one secret was recently shown to be solvable in randomized polynomial time [57], using ideas including efficient 
algorithms for computing the algebra structure, and the $*$ -algebra structure underlying such problems.", "Hence, the investigation of theoretical algorithms is also valuable.", "Consideration of quantum attacks is necessary for security in the quantum era.", "Shor's algorithm, for example, invalidates the hardness assumption of the discrete logarithm problem.", "Guided by the difficulty met by the hidden subgroup approach in tackling graph isomorphism [53], Moore, Russell, and Vazirani proposed the code equivalence problem as a candidate for the one-way assumption [71].", "However, this problem turns out to admit an effective practical algorithm by Sendrier [84]." ], [ "One-way group action assumption and the hidden subgroup approach", "From the post-quantum perspective, a general remark can be made on the $\\mathrm {OWA}$ assumption and the hidden subgroup approach in quantum algorithm design.", "Recall that the hidden subgroup approach is a natural generalization of Shor's quantum algorithms for discrete logarithm and factoring [85], and can accommodate both lattice problems [83] and isomorphism testing problems [53].", "The survey paper of Childs and van Dam [29] contains a nice introduction to this approach.", "A well-known approach to formulate GA-Inv as an HSP problem is the following [29].", "Let $\\alpha :G\\times S\\rightarrow S$ be a group action.", "Given $s, t\\in S$ with the promise that $t=g\\cdot s$ for some $g\\in G$ , we want to compute $g$ .", "To cast this problem as an HSP instance, we first formulate it as an automorphism type problem.", "Let $\\tilde{G}=G\\wr \\operatorname{\\mathrm {S}}_2$ , where $\\operatorname{\\mathrm {S}}_2$ is the symmetric group on two elements, and $\\wr $ denotes the wreath product.", "The action $\\alpha $ induces an action $\\beta $ of $\\tilde{G}$ on $S\\times S$ as follows.", "Given $(g, h, i)\\in \\tilde{G}=G\\wr \\operatorname{\\mathrm {S}}_2$ where $g, h\\in G, i\\in \\operatorname{\\mathrm {S}}_2$ , if $i$ 
is the identity, it sends $(s, t)\\in S\\times S$ to $(g\\cdot s, h\\cdot t)$ ; otherwise, it sends $(s, t)$ to $(h\\cdot t, g\\cdot s)$ .", "Given $(s, t)\\in S\\times S$ , we define a function $f_{(s, t)}:\\tilde{G}\\rightarrow S\\times S$ , such that $f_{(s, t)}$ sends $(g, h, i)$ to $(g, h, i)\\cdot (s, t)$ , defined as above.", "It can be verified that $f_{(s, t)}$ hides the coset of the stabilizer group of $(s, t)$ in $\\tilde{G}$ .", "Since $s$ and $t$ lie in the same orbit, any generating set of the stabilizer group of $(s, t)$ contains an element of the form $(g, h, i)$ , where $i$ is not the identity element in $\\operatorname{\\mathrm {S}}_2$ , $g\\cdot s=t$ , and $h\\cdot t=s$ .", "In particular, $g$ is the element required to solve the GA-Inv problem.", "In the above reduction to the HSP problem, the ambient group is $G\\wr \\operatorname{\\mathrm {S}}_2$ instead of the original $G$ .", "In some cases like the graph isomorphism problem, because of the polynomial-time reduction from isomorphism testing to the automorphism problem, we can retain the ambient group to be $G$ .", "However, such a reduction is not known for $\\mathrm {GLAT}$ .", "There has been notable progress on the HSP problems for various ambient groups, but the dihedral groups and the symmetric groups have withstood the attacks so far.", "Indeed, one source of confidence in using lattice problems in post-quantum cryptography lies in the lack of progress in tackling the hidden subgroup problem for dihedral groups [83].", "There is formal negative evidence for the applicability of this approach for certain group actions where the groups have high-dimensional representations, like $\\operatorname{\\mathrm {S}}_n$ and $\\operatorname{\\mathrm {GL}}(n, q)$ in the case of the graph isomorphism problem [53] and the permutation code equivalence problem [36].", "The general lesson is that current quantum algorithmic technologies seem incapable of handling groups which have irreducible 
representations of high dimensions.", "As mentioned, the $\\mathrm {OWA}$ assumption has been discussed in post-quantum cryptography with the instantiation of the permutation code equivalence problem [71], [34], [35], [87], [36].", "Though this problem is not satisfactory due to the existence of effective practical algorithms [84], the following, quoted from [71], is applicable to our choice of candidate discussed below.", "The design of efficient cryptographic primitives resistant to quantum attack is a pressing practical problem whose solution can have an enormous impact on the practice of cryptography long before a quantum computer is physically realized.", "A program to create such primitives must necessarily rely on insights into the limits of quantum algorithms, and this paper explores consequences of the strongest such insights we have about the limits of quantum algorithms." ], [ "The tensor isomorphism problem and others", "We now formally define the tensor isomorphism problem and other isomorphism testing problems.", "For this we need some notation and preparations." 
], [ "Notation and preliminaries", "We usually use $\\mathbb {F}$ to denote a field.", "The finite field with $q$ elements and the real number field are denoted by $\\mathbb {F}_q$ and $\\mathbb {R}$ , respectively.", "The linear space of $m$ by $n$ matrices over $\\mathbb {F}$ is denoted by $\\mathrm {M}(m,n,\\mathbb {F})$ , and $\\mathrm {M}(n, \\mathbb {F}):=\\mathrm {M}(n, n, \\mathbb {F})$ .", "The identity matrix in $\\mathrm {M}(n, \\mathbb {F})$ is denoted by $I_n$ .", "For $A\\in \\mathrm {M}(m, n, \\mathbb {F})$ , $A^t$ denotes the transpose of $A$ .", "The group of $n$ by $n$ invertible matrices over $\\mathbb {F}$ is denoted by $\\operatorname{\\mathrm {GL}}(n,\\mathbb {F})$ .", "We will also meet the notation $\\operatorname{\\mathrm {GL}}(n, \\mathbb {Z})$ , the group of $n$ by $n$ integral matrices with determinant $\\pm 1$ .", "We use a slightly non-standard notation $\\operatorname{\\mathrm {GL}}(m, n, \\mathbb {F})$ to denote the set of rank $\\min (m, n)$ matrices in $\\mathrm {M}(m, n, \\mathbb {F})$ .", "We use $\\langle \\cdot \\rangle $ to denote the linear span; for example, given $A_1, \\dots , A_k\\in \\mathrm {M}(m, n, \\mathbb {F})$ , $\\langle A_1,\\dots , A_k\\rangle $ is a subspace of $\\mathrm {M}(m, n, \\mathbb {F})$ .", "We will meet some subgroups of $\\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ as follows.", "The symmetric group $\\operatorname{\\mathrm {S}}_n$ on $n$ objects is embedded into $\\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ as permutation matrices.", "The orthogonal group $\\operatorname{\\mathrm {O}}(n, \\mathbb {F})$ consists of those invertible matrices $A$ such that $A^tA=I_n$ .", "The special linear group $\\operatorname{\\mathrm {SL}}(n, \\mathbb {F})$ consists of those invertible matrices $A$ such that $\\det (A)=1$ .", "Finally, when $n=\\ell ^2$ , there are subgroups of $\\operatorname{\\mathrm {GL}}(\\ell ^2, \\mathbb {F})$ isomorphic to $\\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})\\times 
\\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})$ .", "This can be seen as follows.", "First we fix an isomorphism of linear spaces $\\phi : \\mathbb {F}^{\\ell ^2}\\rightarrow \\mathrm {M}(\\ell ,\\mathbb {F})$ (for example, we can let the first $\\ell $ components be the first row, the second $\\ell $ components be the second row, and so on).", "Then $\\mathrm {M}(\\ell , \\mathbb {F})$ admits an action by $\\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})$ by left and right multiplications, e.g.", "$(A, D)\\in \\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})$ sends $C\\in \\mathrm {M}(\\ell , \\mathbb {F})$ to $ACD^t$ .", "Now use $\\phi ^{-1}$ and we get one subgroup of $\\operatorname{\\mathrm {GL}}(\\ell ^2, \\mathbb {F})$ isomorphic to $\\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F})$ ." ], [ "Definitions of several group actions", "We first recall the concept of tensors and the group actions on the space of $k$ -tensors as introduced in Section .", "Definition 7 (Tensor) A $k$ -tensor $T$ of local dimensions $d_1, d_2, \\ldots , d_k$ over $\\mathbb {F}$ , written as $T = (T_{i_1, i_2, \\ldots , i_k}),$ is a multidimensional array with $k$ indices and its components $T_{i_1, i_2,\\ldots , i_k}$ chosen from $\\mathbb {F}$ for all $i_j \\in \\lbrace 1, 2, \\ldots , d_j\\rbrace $ .", "The set of $k$ -tensors of local dimensions $d_1, d_2, \\ldots , d_k$ over $\\mathbb {F}$ is denoted as $\\mathrm {T}(d_1, d_2, \\ldots , d_k, \\mathbb {F}).$ The integer $k$ is called the order of the tensor $T$ .", "Group Action 1 (The general linear group action on tensors) Let $\\mathbb {F}$ be a field, $k$ , $d_1, d_2, \\ldots , d_k$ be integers.", "Group $G$ : $\\prod _{j=1}^k \\operatorname{\\mathrm {GL}}(d_j, \\mathbb {F})$ .", "Set $S$ : $\\mathrm {T}(d_1, d_2, \\ldots , d_k, \\mathbb {F})$ .", 
"Action $\\alpha $ : for a $k$ -tensor $T \\in S$ , a member $M = (M^{(1)},M^{(2)}, \\ldots , M^{(k)})$ of the group $G$ , $\\alpha (M, T) = \\biggl ( \\bigotimes _{j=1}^k M^{(j)} \\biggr ) T$ , whose components are $\\bigl (\\alpha (M, T)\\bigr )_{i_1, i_2, \\ldots , i_k} =\\sum _{l_1, l_2, \\ldots , l_k} \\biggl ( \\prod _{j=1}^k M^{(j)}_{i_j,l_j} \\biggr )T_{l_1, l_2, \\ldots , l_k}.$ We refer to the general linear group action on tensors in Action REF as $\\mathrm {GLAT}$ .", "In the following, let us formally define several problems which have been referred to frequently in the above discussions.", "As already observed by Brassard and Yung [22], the discrete logarithm problem can be formulated using the language of group actions.", "More specifically, we have: Group Action 2 (Discrete Logarithm in Cyclic Groups of Prime Orders) Let $p$ be a prime, and $\\mathbb {Z}_p$ the ring of integers modulo $p$ .", "Group $G$ : $\\mathbb {Z}_p^{*}$ , the multiplicative group of units in $\\mathbb {Z}_p$ .", "Set $S$ : $C_p\\setminus \\lbrace \\mathrm {id}\\rbrace $ , where $C_p$ is a cyclic group of order $p$ and $\\mathrm {id}$ is the identity element.", "Action $\\alpha $ : for $a\\in \\mathbb {Z}_p^{*}$ , and $s\\in S$ , $\\alpha (a, s)=s^a$ .", "Note that in the above, we refrained from giving a specific realization of the cyclic group $C_p$ for the sake of clarity; the reader may refer to Boneh's excellent survey [20] for concrete proposals that can support the security of the Decisional Diffie-Hellman assumption.", "The linear code permutation equivalence (LCPE) problem asks to decide whether two linear codes (i.e.", "linear subspaces) are the same up to a permutation of the coordinates.", "It has been studied in the coding theory community since the 1990's [81], [84].", "Group Action 3 (Group action for Linear Code Permutation Equivalence problem (LCPE)) Let $m, d$ be integers, $m\\le d$ , and let $\\mathbb {F}$ be a field.", "Group $G$ : $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F}) \\times \\operatorname{\\mathrm {S}}_d$ .", "Set $S$ : $\\operatorname{\\mathrm 
{GL}}(m, d, \\mathbb {F})$ .", "Action $\\alpha $ : for $A \\in S$ , $M = (N,P) \\in G$ , $\\alpha (M, A) = N A P^t $ .", "The connection with coding theory is that $A$ can be viewed as the generating matrix of a linear code (a subspace of $\\mathbb {F}_q^n$ ), and $N$ is the change of basis matrix taking care of different choices of bases.", "Then, $P$ , as a permutation matrix, does not change the weight of a codeword, that is, a vector in $\\mathbb {F}^n$ .", "(There are other operations that preserve weights [87], but we restrict to this setting for simplicity.)", "The GA-Inv problem for this group action is called the linear code permutation equivalence (LCPE) problem, which has been studied in the coding theory community since the 1980's [62], and we can dodge the only successful attack [84] by restricting to self-dual codes.", "The following group action induces the polynomial isomorphism problem proposed by Patarin [77], which has been studied in the multivariate cryptography community since then.", "Group Action 4 (Group action for the Isomorphism of Quadratic Polynomials with two Secrets problem (IQP2S)) Let $m,d$ be integers and $\\mathbb {F}$ a finite field.", "Group $G$ : $\\operatorname{\\mathrm {GL}}(d, \\mathbb {F}) \\times \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ .", "Set $S$ : the set of tuples of homogeneous polynomials $(f_1, f_2,\\ldots , f_m)$ , where each $f_i \\in \\mathbb {F}[x_1, x_2, \\ldots , x_d]$ , the polynomial ring in $d$ variables over $\\mathbb {F}$ .", "Action $\\alpha $ : for $f = (f_1, f_2, \\ldots , f_m) \\in S$ , $M = (C, D)\\in G$ , $C^{\\prime }=C^{-1}$ , define $\\alpha (M, f) = (g_1, g_2, \\ldots , g_m)$ by $g_i(x_1, x_2, \\ldots ,x_d)=\\sum _{j=1}^m D_{i,j} f_j(x_1^{\\prime }, \\dots , x_d^{\\prime })$ , where $x_i^{\\prime }=\\sum _{j=1}^d C^{\\prime }_{i,j}x_j$ .", "The GA-Inv problem for this group action is essentially the isomorphism of quadratic polynomials with two secrets (IQP2S) problem.", 
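The IQP2S action can be sanity-checked numerically. The sketch below is a minimal illustration over the toy field $\\mathbb {F}_7$ (all parameter choices are illustrative): each quadratic form $f_i$ is stored as a matrix $Q_i$ with $f_i(x) = x^t Q_i x$, and we verify that substituting $x\\mapsto C^{\\prime }x$ and mixing by $D$ produces forms $g_i$ with $g_i(x)=\\sum _j D_{i,j} f_j(C^{\\prime }x)$.

```python
import random

P = 7        # toy field GF(7)
d, m = 3, 2  # d variables, a tuple of m quadratic forms

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) % P for row in A]

def eval_form(Q, x):
    # quadratic form f(x) = x^t Q x over GF(7)
    return sum(Q[i][j] * x[i] * x[j] for i in range(d) for j in range(d)) % P

def rand_mat(r, c):
    return [[random.randrange(P) for _ in range(c)] for _ in range(r)]

Qs = [rand_mat(d, d) for _ in range(m)]  # the tuple (f_1, ..., f_m)
Cp = rand_mat(d, d)                      # substitution matrix C' = C^{-1}
D = rand_mat(m, m)                       # mixing matrix for the m forms

def transformed(i):
    # matrix of g_i: sum_j D[i][j] * (C'^t Q_j C')
    return [[sum(D[i][j] * Cp[a][r] * Qs[j][a][b] * Cp[b][c]
                 for j in range(m) for a in range(d) for b in range(d)) % P
             for c in range(d)] for r in range(d)]

# check g_i(x) == sum_j D[i][j] * f_j(C' x) at random points
for _ in range(10):
    x = [random.randrange(P) for _ in range(d)]
    y = matvec(Cp, x)
    for i in range(m):
        assert eval_form(transformed(i), x) == \
               sum(D[i][j] * eval_form(Qs[j], y) for j in range(m)) % P
```

Note that invertibility of $C^{\\prime }$ and $D$ plays no role in this identity; it only matters for the action to be by a group.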
"The algebraic interpretation here is that the tuple of polynomials $(f_1, \\dots ,f_m)$ is viewed as a polynomial map from $\\mathbb {F}^d$ to $\\mathbb {F}^m$ , by sending $(a_1,\\dots , a_d)$ to $(f_1(a_1, \\dots , a_d), \\dots , f_m(a_1, \\dots , a_d))$ .", "The changes of bases by $C$ and $D$ are then naturally interpreted as saying that the two polynomial maps are essentially the same.", "Finally, the GA-Inv problem for the following group action originates from computational group theory, and is basically equivalent to a bottleneck case of the group isomorphism problem (i.e.", "$p$ -groups of class 2 and exponent $p$ ) [76], [63].", "Group Action 5 (Group action for alternating matrix space isometry (AMSI)) Let $d, m$ be integers and $\\mathbb {F}$ be a finite field.", "Group $G$ : $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ .", "Set $S$ : the set of all linear spans $\\mathcal {A}$ of $d$ alternating matrices $A_i$ of size $m\\times m$ (an $m\\times m$ matrix $A$ is alternating if for any $v\\in \\mathbb {F}^m$ , $v^tAv=0$ ).", "Action $\\alpha $ : for $\\mathcal {A}= \\langle A_1, A_2, \\ldots , A_d \\rangle \\in S$ , $C \\in G$ , $\\alpha (C, \\mathcal {A}) = \\langle B_1, B_2, \\ldots , B_d \\rangle $ where $B_i = C A_i C^t$ for all $i=1, 2, \\ldots , d$ ." 
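A minimal sketch of the AMSI action over a small odd prime field (toy parameters of our choosing). The images $C A_i C^t$ remain alternating, since $(C A C^t)^t = C A^t C^t = -C A C^t$ and the diagonal stays zero in odd characteristic.

```python
import numpy as np

P = 13  # a toy odd prime; parameters are illustrative only

def random_alternating(m, rng):
    """A random m x m alternating matrix over F_P: zero diagonal, A^t = -A."""
    U = np.triu(rng.integers(0, P, (m, m)), 1)
    return (U - U.T) % P

def act(C, mats):
    """The AMSI action: C in GL(m, F_P) sends (A_1, ..., A_d) to
    (C A_1 C^t, ..., C A_d C^t)."""
    return [(C @ A @ C.T) % P for A in mats]
```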
], [ "The central position of 3-tensor isomorphism", "As mentioned, the three problems, linear code permutation equivalence (LCPE), isomorphism of polynomials with two secrets (IQP2S), and alternating matrix space isometry (AMSI), have been studied in coding theory, multivariate cryptography, and computational group theory, respectively, for decades.", "Only recently have we begun to see connections among these problems, which go through the 3TI problem, thanks to the work of Futorny, Grochow, and Sergeichuk [39].", "We spell this out explicitly.", "Observation 2 ([39], [47]) IQP2S, AMSI, GI, and LCPE reduce to 3TI.", "Note that the set underlying Group Action REF consists of $d$ -tuples of $m\\times m$ alternating matrices.", "We can write such a tuple $(A_1, \\dots , A_d)$ as a 3-tensor $A$ of dimension $m\\times m\\times d$ , such that $A_{i,j,k}=(A_k)_{i,j}$ .", "Then AMSI asks to test whether two such 3-tensors are in the same orbit under the action of $(M, N)\\in \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(d, \\mathbb {F})$ by sending a 3-tensor $A$ to the result of applying $(M, M, N)$ to $A$ as in the definition of $\\mathrm {GLAT}$ .", "Such an action belongs to the class of actions on 3-tensors considered in [39] under the name linked actions.", "This work constructs a function $r$ from 3-tensors to 3-tensors, such that $A$ and $B$ are in the same orbit under $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(d, \\mathbb {F})$ if and only if $r(A)$ and $r(B)$ are in the same orbit under $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(m,\\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(d, \\mathbb {F})$ .", "This function $r$ can be computed efficiently [39].", "This explains the reduction of the isomorphism problem for Group Action REF to the 3-tensor isomorphism problem.", "For Group Action REF , by using the classical correspondence between 
homogeneous quadratic polynomials and symmetric matrices, we can cast it in a form similar to Group Action REF , and then apply the above reasoning, again using [39].", "Finally, to reduce the graph isomorphism problem (GI) and the linear code permutation equivalence problem (LCPE) to the 3-tensor isomorphism problem, we only need to take care of LCPE, as GI reduces to LCPE [81].", "To reduce LCPE to 3TI, we can reduce it to the matrix Lie algebra conjugacy problem by [47], which reduces to 3TI by [39] along the linked action argument, though this time linked in a different way.", "This puts 3TI at a central position among these difficult isomorphism testing problems arising from multivariate cryptography, computational group theory, and coding theory.", "In particular, from the worst-case analysis viewpoint, 3TI is the hardest problem among all these.", "This also allows us to draw on experience from previous research in various communities to understand 3TI." ], [ "Current status of the tensor isomorphism problem and its one-way\naction assumption", "We now explain the current status of the tensor isomorphism problem to support it as a strong candidate for the $\\mathrm {OWA}$ assumption.", "Because of the connections with isomorphism of polynomials with two secrets (IQP2S) and alternating matrix space isometry (AMSI), we shall also draw on results and experience from the multivariate cryptography and the computational group theory communities.", "For convenience, we shall restrict to finite fields $\\mathbb {F}_q$ , though other fields are also interesting.", "That is, we consider the action of $\\operatorname{\\mathrm {GL}}(\\ell , \\mathbb {F}_q)\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F}_q) \\times \\operatorname{\\mathrm {GL}}(m, \\mathbb {F}_q)$ on $T \\in \\mathrm {T}(\\ell , n, m, \\mathbb {F}_q)$ .", "Without loss of generality, we assume $\\ell \\ge n \\ge m$ .", "The reader may well think of the case when $\\ell = n = m$ , which seems to be the 
most difficult case in general.", "Correspondingly, we will assume that the instances for IQP2S are $m$ -tuples of homogeneous quadratic polynomials in $n$ variables over $\\mathbb {F}_q$ , and the instances for AMSI are $m$ -tuples of alternating matrices of size $n\\times n$ over $\\mathbb {F}_q$ .", "To start, we note that 3TI over finite fields belongs to $\\mathrm {NP} \\cap \\mathrm {coAM} $ , following the same $\\mathrm {coAM} $ -protocol as for graph isomorphism.", "For the worst-case time complexity, it can be solved in time $q^{m^2}\\cdot \\operatorname{\\mathrm {poly}}(\\ell , m, n, \\log q)$ , by enumerating $\\operatorname{\\mathrm {GL}}(m, q)$ , and then solving an instance of the matrix tuple equivalence problem, which asks to decide whether two matrix tuples are the same under the left-right multiplications of invertible matrices.", "This problem can be solved in deterministic polynomial time by a reduction [57] to the module isomorphism problem, which in turn admits a deterministic polynomial-time solution [25], [17], [56].", "It is possible to reduce the complexity to $q^{c m^2}\\cdot \\operatorname{\\mathrm {poly}}(\\ell , m, n,\\log q)$ for some constant $0<c<1$ , by using some dynamic programming technique as in [63].", "At present, however, the worst-case complexity cannot in general be improved beyond this, which matches the experience with IQP2S and AMSI as well; see [57].", "For the average-case time complexity, it can be solved in time $q^{O(m)}\\cdot \\operatorname{\\mathrm {poly}}(\\ell , n)$ , by adapting the average-case algorithm for AMSI in [63].", "This also matches the algorithm for IQP2S, which has an average-case running time of $q^{O(n)}$  [15].", "For practical algorithms, we draw on experience from the computational group theory community and the multivariate cryptography community.", "In the computational group theory community, the current state of the art is that one can hope to handle 10-tuples of alternating matrices of size $10\\times 10$ 
over $\\mathbb {F}_{13}$ , but absolutely not for 3-tensors of local dimension, say, 100, even though in this case the input can still be stored in only a few megabytes.", "We thank James B. Wilson, who maintains a suite of algorithms for $p$ -group isomorphism testing, for communicating this insight to us from his hands-on experience.", "We of course maintain responsibility for any possible misunderstanding, or lack of knowledge regarding the performance of other implemented algorithms.", "In the multivariate cryptography community, the Gröbner basis technique [41] and certain combinatorial techniques [15] have been studied to tackle the IQP2S problem.", "However, these techniques are not effective enough to break it [15].", "In particular, as pointed out in [15], one needs to be careful about certain claims and conjectures made in some literature on this research line.", "For quantum algorithms, 3TI seems difficult for the hidden subgroup approach, due to the reasons presented in Section REF .", "Finally, let us also elaborate on the prospects of using those techniques for graph isomorphism [7] and for isomorphism of quadratic polynomials with one secret [57] to tackle 3TI.", "In general, the difficulties of applying these techniques seem inherent.", "We first examine the graph isomorphism side.", "Recall that most algorithms for graph isomorphism, including Babai's [7], are built on two families of techniques: group-theoretic and combinatorial.", "To use the group-theoretic techniques, we need to work with matrix groups over finite fields instead of permutation groups.", "Algorithms for matrix groups over finite fields are in general far harder than those for permutation groups.", "For example, the basic membership problem is well-known to be solvable by Sims's algorithm [86], while for matrix groups over finite fields of odd order, this was only recently shown to be efficiently solvable with a number-theoretic oracle, and the algorithm is much more involved [9].", "To use the 
combinatorial techniques, we need to work with linear or multilinear structures instead of combinatorial structures.", "This shift poses severe limitations on the use of most combinatorial techniques, like individualizing a vertex.", "For example, it is quite expensive to enumerate all vectors in a vector space over a finite field, whereas it is legitimate to go over all elements of a set.", "We then turn to the isomorphism of quadratic polynomials with one secret.", "The techniques for settling this problem in [57] are based on those developed for the module isomorphism problem [25], [17], [56], involutive algebras [92], and computing algebra structures [42].", "The starting point of that algorithm is to solve an easier problem, namely testing whether two matrix tuples are equivalent under the left-right multiplications.", "That problem is essentially linear, so the techniques for the module isomorphism problem can be used.", "After that, we need to utilize the involutive algebra structure [92] based on [42].", "However, for 3TI, there is no such easier linear problem to start with, so it is not clear how those techniques can be applied.", "To summarize, the 3-tensor isomorphism problem is difficult for all four types of algorithms mentioned in Section REF .", "Furthermore, the techniques in the recent breakthrough on graph isomorphism [7], and the solution of the isomorphism of quadratic polynomials with one secret [57], do not seem applicable to this problem.", "All of these together support this problem as a strong candidate for the one-way assumption." 
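To get a feel for why enumeration-type algorithms stall so quickly, here is a brute-force 3TI tester over $\mathbb{F}_2$, hedged as a toy sketch: it enumerates invertible matrices on all three sides, so its cost grows like $|\operatorname{GL}(n,2)|^3$ and it is only feasible at toy local dimensions. The index convention for applying $(M_1, M_2, M_3)$ is an assumption of ours for illustration.

```python
import itertools
import numpy as np

Q = 2  # F_2; pure enumeration is only feasible at toy local dimensions

def invertibles(n):
    """Yield all invertible n x n matrices over F_2 (there are 6 for n = 2)."""
    for bits in itertools.product(range(Q), repeat=n * n):
        M = np.array(bits, dtype=np.int64).reshape(n, n)
        if round(np.linalg.det(M)) % Q == 1:
            yield M

def apply3(T, M1, M2, M3):
    """Apply (M1, M2, M3) to a 3-tensor T: change bases on all three sides."""
    return np.einsum('ai,bj,ck,ijk->abc', M1, M2, M3, T) % Q

def isomorphic(T1, T2, n):
    """Decide 3TI by brute force over GL(n, 2)^3."""
    return any(np.array_equal(apply3(T1, M1, M2, M3), T2)
               for M1 in invertibles(n)
               for M2 in invertibles(n)
               for M3 in invertibles(n))
```

Even at $n = 2$ this already checks up to $6^3 = 216$ basis-change triples; the $q^{m^2}$-type blow-up above is this phenomenon at scale.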
], [ "Choices of the parameters", "Having reviewed the current status of the tensor isomorphism problem, we lay out some principles for choosing the parameters for security, namely the order $k$ , the dimensions $d_i$ , and the underlying field $\\mathbb {F}$ .", "Let us first explain why we focus on $k=3$ , namely 3-tensors.", "Of course, $k$ needs to be $\\ge 3$ as most problems about 2-tensors, i.e.", "matrices, are easy.", "We then note that there is certain evidence to support the possibility that the $k$ -tensor isomorphism problem reduces to the 3-tensor isomorphism problem.", "That is, over certain fields, by [3] and [39], the degree-$k$ homogeneous form equivalence problem reduces to the 3-tensor isomorphism problem in polynomial time.", "The former problem can be cast as an isomorphism problem for symmetric $k$ -tensors under a certain action of $\\operatorname{\\mathrm {GL}}$ (a tensor $A=(A_{i_1, \\dots , i_k})$ is symmetric if for any permutation $\\sigma \\in \\operatorname{\\mathrm {S}}_k$ , and any index $(i_1, \\dots , i_k)$ , $A_{i_1, \\dots ,i_k}=A_{i_{\\sigma (1)}, \\dots , i_{\\sigma (k)}}$ ).", "From the practical viewpoint though, it will be interesting to investigate the tradeoff between the local dimensions $d_i$ and $k$ .", "After fixing $k=3$ , it is suggested to set $d_1=d_2=d_3$ .", "This is because of the argument used when examining the worst-case time complexity in the above subsection.", "Then for the underlying finite field $\\mathbb {F}_q$ , the intuition is that setting $q$ to be a large prime would be more secure.", "Note that we can still store an exponentially large prime using polynomially-many bits.", "The reason is that, if $q$ is small, then the “generic” behaviors as ensured by the Lang–Weil type theorems [65] may not be that generic.", "So some non-trivial properties may arise which then help with isomorphism testing.", "This is especially important for the pseudorandom assumption to be discussed in Section .", "We 
then examine whether we want to set $q$ to be a large prime, or a large prime power of small characteristic.", "The former is preferred, because the current techniques in computer algebra and computational group theory, cf.", "[57] and [9], can usually work efficiently with large fields of small characteristic.", "However, let us emphasize that even when setting $q$ to be a constant, we do not have any concrete evidence for breaking $\\mathrm {GLAT}$ as a one-way group action candidate.", "That is, the above discussion on the field size issue is rather hypothetical and conservative." ], [ "The pseudorandom action assumption", "In this section, we introduce the new security assumption for group actions, namely pseudorandom group actions, which generalises the Decisional Diffie-Hellman assumption.", "In Section , we shall study the prospect of using the general linear action on tensors as a candidate for this assumption.", "Then in Section , we present the cryptographic uses of this assumption, including signatures and pseudorandom functions.", "Definition 8 Let $\\mathcal {G}$ be a group family as specified before.", "Choose public parameters $\\texttt {params}= (G,S,\\alpha )$ to be $\\mathcal {G}(1^\\lambda )$ .", "Sample $s\\leftarrow S$ and $g\\leftarrow G$ .", "The group action pseudorandomness (GA-PR) problem is: given $(s, t)$ , where $t = \\alpha (g, s)$ or $t\\leftarrow S$ , decide which case $t$ is sampled from.", "Definition 9 (Pseudorandom group action game) The pseudorandom group action game is the following game between a challenger and an adversary $\\mathcal {A}$ : The challenger and the adversary $\\mathcal {A}$ agree on the public parameters $\\texttt {params}= (G, S, \\alpha )$ by choosing them to be $\\mathcal {G}(1^\\lambda )$ for some security parameter $\\lambda $ .", "The challenger samples a random bit $b\\in \\lbrace 0,1\\rbrace $ , $s\\leftarrow S$ , $g\\leftarrow G$ , and chooses $t\\leftarrow S$ if $b=0$ and $t = g \\cdot s$ if $b=1$ .", "Give 
$(s, t)$ to $\\mathcal {A}$ who produces a bit $a\\in \\lbrace 0,1\\rbrace $ .", "We define the output of the game $\\textsf {GA-PR} _{\\mathcal {A},\\mathcal {G}}(1^\\lambda ) = 1$ and say $\\mathcal {A}$ wins the game if $a=b$ .", "Definition 10 We say that the group-action pseudorandomness (GA-PR) problem is hard relative to $\\mathcal {G}$ , if for any polynomial-time quantum algorithm $\\mathcal {A}$ , $\\Pr [ \\textsf {GA-PR} _{\\mathcal {A},\\mathcal {G}}(1^\\lambda ) = 1 ] \\le \\frac{1}{2} + \\operatorname{\\mathrm {negl}}(\\lambda ).$ Some remarks on this definition are in order here." ], [ "For transitive and almost transitive actions.", "In the case of transitive group actions, as an easy corollary of Observation REF , we have the following.", "Observation 3 The GA-PR problem is hard if the group action $\\alpha $ is transitive.", "Slightly generalizing the transitive case, it is not hard to see that the GA-PR problem is hard if there exists a “dominant” orbit $O\\subseteq S$ .", "Intuitively, this means that $O$ is so large that random $s$ and $t$ from $S$ would both lie in $O$ with high probability.", "For example, consider the action of $\\operatorname{\\mathrm {GL}}(n, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ on $\\mathrm {M}(n,\\mathbb {F})$ by the left and right multiplications.", "The orbits are determined by the ranks of matrices in $\\mathrm {M}(n, \\mathbb {F})$ , and the orbit of matrices of full rank is dominant.", "But again, such group actions seem not very useful for cryptographic purposes.", "Indeed, we require the orbit structure to be such that random $s$ and $t$ rarely fall into the same orbit.", "Definition 11 We say that a group action $\\alpha $ of $G$ on $S$ does not have a dominant orbit, if $\\Pr _{s,t\\leftarrow S}\\, [s, t \\text{ lie in the same orbit}] = \\operatorname{\\mathrm {negl}}(\\lambda ).$ Assumption 2 (Pseudorandom group action ($\\mathrm {PRA}$ ) assumption) 
There exists a $\\mathcal {G}$ outputting a group action without a dominant orbit, relative to which the $\\textsf {GA-PR} $ problem is hard.", "The name comes from the fact that the PRA assumption says `in spirit' that the function $\\Gamma :G\\times S\\rightarrow S\\times S$ given by $\\Gamma (g, s)=(g\\cdot s, s)$ is a secure PRG.", "Here, it is only `in spirit', because the PRA assumption does not include the usual expansion property of the PRG.", "Rather, it only includes the nonexistence of a dominant orbit." ], [ "Subsuming the classical Diffie-Hellman assumption.", "We now formulate the classical decisional Diffie-Hellman (DDH) assumption as an instance of the pseudorandom group action assumption.", "To see this, we need the following definition.", "Definition 12 Let $\\alpha :G\\times S\\rightarrow S$ be a group action.", "The $d$ -diagonal action of $\\alpha $ , denoted by $\\alpha ^{(d)}$ , is the group action of $G$ on $S^d$ , the Cartesian product of $d$ copies of $S$ , where $g\\in G$ sends $(s_1, \\dots , s_d)\\in S^d$ to $(g\\cdot s_1, \\dots , g\\cdot s_d)$ .", "The following observation shows that the classical DDH can be obtained by instantiating GA-PR with a concrete group action.", "Observation 4 Let $\\alpha $ be the group action in Group Action REF .", "The classical Decisional Diffie-Hellman assumption is equivalent to the $\\mathrm {PRA}$ assumption instantiated with $\\alpha ^{(2)}$ , the 2-diagonal action of $\\alpha $ .", "Recall that Group Action REF defines an action $\\alpha $ of $G\\cong \\mathbb {Z}_p^*$ on $S=C_p\\setminus \\lbrace \\mathrm {id}\\rbrace $ where $C_p$ is a cyclic group of order $p$ .", "The 2-diagonal action $\\alpha ^{(2)}$ is defined by $a\\in \\mathbb {Z}_p^*$ sending $(s, t)\\in S\\times S$ to $(s^a, t^a)$ .", "Note that while $\\alpha $ is transitive, $\\alpha ^{(2)}$ is not, and in fact it does not have a dominant orbit.", "$\\mathrm {PRA}$ instantiated with $\\alpha ^{(2)}$ then asks to distinguish between 
the following two distributions.", "The first distribution is $((s, t), (s^{\\prime }, t^{\\prime }))$ where $s, t, s^{\\prime }, t^{\\prime }\\in _R S$ .", "Since $\\alpha $ is transitive, by Observation REF , this distribution is equivalent to $((s, s^a), (s^b, s^c))$ , where $s\\in _R S$ and $a, b, c\\in _R G$ .", "The second distribution is $((s, t), (s^b, t^b))$ , where $s, t\\in _R S$ , and $b\\in _R G$ .", "Again, by Observation REF , this distribution is equivalent to $((s, s^a), (s^b, s^{ab}))$ , where $s\\in _R S$ , and $a, b\\in _R G$ .", "We then see that this is just the Decisional Diffie-Hellman assumption.", "Here we use the version of DDH where the generator of the cyclic group is randomly chosen, as also used in [28].", "A recent discussion on the distinction between fixed generators and random generators can be found in [18].", "As will be explained in Section REF , the pseudorandom assumption is a strong one, in a sense much stronger than the one-way assumption.", "Therefore, Observation REF is important because, by casting the classical Diffie-Hellman assumption as an instance of the pseudorandom assumption, it provides a non-trivial and well-studied group action candidate for this assumption.", "Of course, the DDH assumption is no longer secure under quantum attacks.", "Recently, an analogue of this assumption in the context of supersingular-isogeny-based cryptography was proposed by De Feo and Galbraith [38].", "We will study the possibility of using the 3-tensor isomorphism problem as a pseudorandom group action candidate in Section" 
$\\textsf {GA-PR} (d)$ is just $\\textsf {GA-PR} $ applied to group actions of a particular form, so a special case of $\\textsf {GA-PR} $ .", "Correspondingly, we define $\\mathrm {PRA} (d)$ to be the assumption that $\\textsf {GA-PR} (d)$ is hard relative to some $\\mathcal {G}$ .", "Given a group action $\\alpha :G\\times S\\rightarrow S$ , let $F_\\alpha =\\lbrace f_g:S\\rightarrow S \\mid g\\in G, f_g(s)=g\\cdot s\\rbrace $ .", "It is not hard to see that $\\mathrm {PRA} (d)$ is equivalent to saying that $F_\\alpha $ is a $d$ -query weak PRF in the sense of Maurer and Tessaro [72].", "This gives a straightforward cryptographic use of the $\\mathrm {PRA} (d)$ assumption.", "Given $d, e\\in \\mathbb {Z}^+$ , $d<e$ , it is clear that $\\mathrm {PRA} (e)$ is a stronger assumption than $\\mathrm {PRA} (d)$ .", "Indeed, given an algorithm $A$ that distinguishes between $((s_1, \\dots , s_d), (g\\cdot s_1,\\dots , g\\cdot s_d)) \\text{ and } ((s_1, \\dots , s_d), (t_1, \\dots , t_d)),$ where $s_i, t_j\\leftarrow S$ , and $g\\leftarrow G$ , one can use $A$ to distinguish between $((s_1, \\dots , s_e), (g\\cdot s_1, \\dots , g\\cdot s_e))$ and $((s_1, \\dots , s_e),(t_1, \\dots , t_e))$ , by just looking at the first $d$ components in each tuple.", "Applications of the $\\mathrm {PRA} $ assumption, including more efficient quantum-secure digital signature schemes and pseudorandom function constructions, are given in Section .", "Next, we will provide candidates to instantiate the $\\mathrm {PRA} $ assumption." ], [ "Requirements for a group action to be pseudorandom", "Clearly, a first requirement for a group action to be pseudorandom is that it should be one-way.", "Further requirements naturally come from certain attacks.", "We have devised the following attack strategies.", "These attacks suggest that the pseudorandom assumption is closely related to the orbit closure intersection problem, which has received considerable attention recently." 
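The GA-PR game of Definition 9 can be sketched as a toy harness, instantiated here with the 2-diagonal exponentiation action of Observation 4 (tiny, purely illustrative parameters: the order-11 subgroup of $\mathbb{Z}_{23}^*$ generated by 2). In such a small group, the brute-force adversary below, a toy analogue of an isomorphism-testing attack, wins almost every round, previewing why a pseudorandom action must resist even average-case inversion.

```python
import random

# Toy GA-PR game for the 2-diagonal exponentiation action: C_p is the
# order-11 subgroup of Z_23^* generated by 2, and exponents a act by s -> s^a.
Q, ORD, GEN = 23, 11, 2
S = [pow(GEN, k, Q) for k in range(1, ORD)]  # the set acted on (identity removed)

def play_round(adversary, rng):
    """One round of the pseudorandom-game; returns True iff the adversary wins."""
    s = (rng.choice(S), rng.choice(S))
    b = rng.randrange(2)
    if b == 1:                                   # pseudorandom case: t = g . s
        a = rng.randrange(1, ORD)
        t = (pow(s[0], a, Q), pow(s[1], a, Q))
    else:                                        # truly random case
        t = (rng.choice(S), rng.choice(S))
    return adversary(s, t) == b

def brute_force_adversary(s, t):
    """Enumerates all exponents -- feasible only because the group is tiny."""
    return int(any(pow(s[0], a, Q) == t[0] and pow(s[1], a, Q) == t[1]
                   for a in range(1, ORD)))
```

Since the diagonal orbit of a random pair is a negligible fraction of $S \times S$ only asymptotically, here the adversary still errs occasionally in the random case, but its advantage over guessing is unmistakable.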
], [ "Isomorphism testing in the average-case setting.", "To start with, we consider the impact of an average-case isomorphism testing algorithm on the pseudorandom assumption.", "Recall that for a group action $\\alpha :G\\times S\\rightarrow S$ , an average-case algorithm is required to work for instances $(s, t)$ where $s\\leftarrow S$ and $t$ is arbitrary.", "Let $n$ be the input size to this algorithm.", "The traditional requirement for an average-case algorithm is that it needs to work for all but at most a $1/\\operatorname{\\mathrm {poly}}(n)$ fraction of $s\\in S$ , as is the case for such algorithms for graph isomorphism [11] and for alternating matrix space isometry [63].", "However, in order for such an algorithm to break the pseudorandom assumption, it is enough that it works for a non-negligible, say $1/\\operatorname{\\mathrm {poly}}(n)$ , fraction of the instances.", "This is quite relaxed compared to the traditional requirement." ], [ "The supergroup attack.", "For a group action $\\alpha :G\\times S\\rightarrow S$ , a supergroup action of $\\alpha $ is another group action $\\beta :H\\times S\\rightarrow S$ , such that (1) $G$ is a subgroup of $H$ , and (2) the restriction of $\\beta $ to $G$ , $\\beta |_G$ , is equal to $\\alpha $ .", "If it further holds that (3.1) the isomorphism problem for $H$ is easy, and (3.2) $\\beta $ does not have a dominant orbit, we will then have the following so-called supergroup attack.", "Given input $s,t \\in S$ , the adversary for the $\\textsf {GA-PR} $ problem of $\\alpha $ will use the solver for the isomorphism problem for $H$ to check whether $s, t$ are from the same orbit induced by $H$ , returning 1 if they are and 0 otherwise.", "If $s, t$ are from the same orbit induced by $G$ , the adversary always returns the correct answer, as $G$ is a subgroup of $H$ .", "In the case that $s, t$ are independently chosen from $S$ , by the fact that $\\beta $ does not have a dominant orbit, the adversary will return the correct answer 0 with high 
probability." ], [ "The isomorphism invariant attack.", "Generalizing conditions (3.1) and (3.2) above, we arrive at the following more general strategy.", "We now think of $G$ and $H$ as defining equivalence relations by their orbit structures.", "Let $\\sim _G$ (resp. $\\sim _H$ ) be the equivalence relation defined by $G$ (resp.", "$H$ ).", "By the conditions (1) and (2), we have (a) $\\sim _H$ is coarser than $\\sim _G$ .", "By the condition (3.1), we have (b) $\\sim _H$ is easy to decide.", "By the condition (3.2), we have (c) $\\sim _H$ has enough equivalence classes.", "Clearly, if a relation $\\sim $ , not necessarily defined by a supergroup $H$ , satisfies (a), (b), and (c), then $\\sim $ can also be used to break the $\\mathrm {PRA} $ assumption for $G$ .", "Such an equivalence relation is more commonly known as an isomorphism invariant, namely a property that is preserved under isomorphism.", "The sources of isomorphism invariants can be very versatile.", "The supergroup attack can be thought of as a special case where the equivalence relation is defined by being isomorphic under a supergroup action.", "Another somewhat surprising and rich “invariant” comes from geometry, as we describe now." 
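Before turning to geometry, the invariant-based distinguisher just described can be sketched generically. As the toy invariant we use matrix rank over $\mathbb{F}_p$, which is preserved by the left-right multiplication action mentioned earlier; note that rank satisfies (a) and (b), but for that particular action the full-rank orbit is dominant, so (c) fails on random square inputs, which is exactly why condition (c) matters. The helper names are ours.

```python
import numpy as np

P = 7  # toy prime

def rank_mod_p(M, p=P):
    """Rank of a matrix over F_p via Gaussian elimination."""
    A = M.copy() % p
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        piv = next((r for r in range(rank, rows) if A[r, col] % p), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        A[rank] = (A[rank] * pow(int(A[rank, col]), -1, p)) % p
        for r in range(rows):
            if r != rank:
                A[r] = (A[r] - A[r, col] * A[rank]) % p
        rank += 1
    return rank

def invariant_adversary(s, t, inv):
    """Generic invariant attack: guess 'same orbit' iff the invariant agrees."""
    return int(inv(s) == inv(t))
```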
], [ "The geometric attack.", "In the case of matrix group actions, the underlying vector spaces usually come with certain geometry which can be exploited for attacks.", "Let $\\alpha $ be a group action of $G$ on $V\\cong \\mathbb {F}^d$ .", "For an orbit $O\\subseteq V$ , let its Zariski closure be $\\overline{O}$ .", "Let $\\sim $ be the equivalence relation on $V$ such that for $s, t\\in V$ , $s\\sim t$ if and only if $\\overline{O_s} \\cap \\overline{O_t} \\ne \\emptyset $ , where $O_s$ denotes the orbit of $s$ .", "It is obvious that $\\sim $ is a coarser relation than $\\sim _G$ .", "Furthermore, except in some degenerate settings when the dimensions are very small, there would be enough equivalence classes defined by $\\sim $ , for dimension reasons.", "So (a) and (c) are satisfied.", "Therefore, if we could test efficiently whether the orbit closures of $s$ and $t$ intersect, (b) would be satisfied and we could break the $\\mathrm {PRA} $ assumption for $\\alpha $ .", "This problem, known as the orbit closure intersection problem, has received considerable attention recently.", "Another straightforward approach based on this viewpoint is to recall that the geometry of orbit closures is determined by the ring of invariant polynomials [68].", "More specifically, the action of $G$ on $V$ induces an action on $\\mathbb {F}[V]$ , the ring of polynomial functions on $V$ .", "As $V\\cong \\mathbb {F}^d$ , $\\mathbb {F}[V]\\cong \\mathbb {F}[x_1, \\dots , x_d]$ .", "Those polynomials invariant under this induced action form a subring of $\\mathbb {F}[V]$ , denoted as $\\mathbb {F}[V]^G$ .", "If there exists an easy-to-compute, non-trivial invariant polynomial $f$ in $\\mathbb {F}[V]^G$ , we could then evaluate $f$ on the input instances and distinguish between the random setting (where $f$ is likely to evaluate differently) and the pseudorandom setting (where $f$ always evaluates the same)." 
], [ "An example of using the isomorphism invariant attack.", "We first consider the isomorphism invariant attack in the graph isomorphism case.", "Clearly, the degree sequence, consisting of vertex degrees sorted from large to small, is an easy-to-compute isomorphism invariant.", "A moment's thought suggests that this invariant is already enough to break the pseudorandom assumption for graph isomorphism." ], [ "An example of using the geometric attack.", "We consider a group action similar to the 3-tensor isomorphism case (Group Action REF ), inspired by the quantum marginal problem [13].", "Given a 3-tensor of size $\\ell \\times n\\times m$ , we can “slice” this 3-tensor according to the third index to obtain a tuple of $m$ matrices of size $\\ell \\times n$ .", "Consider the action of $G=\\operatorname{\\mathrm {O}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {O}}(n, \\mathbb {F})\\times \\operatorname{\\mathrm {SL}}(m, \\mathbb {F})$ on matrix tuples $\\mathrm {M}(\\ell \\times n, \\mathbb {F})^m$ , where the three direct product factors act by left multiplication, right multiplication, and linear combination of the $m$ components, respectively.", "For a matrix tuple $(A_1, \\dots , A_m)$ where $A_i\\in \\mathrm {M}(\\ell \\times n, \\mathbb {F})$ , form an $\\ell n\\times m$ matrix $A$ where the $i$ -th column of $A$ is obtained by straightening $A_i$ column by column into a vector.", "Then $A^tA$ is an $m\\times m$ matrix.", "The polynomial $f=\\det (A^tA)$ is then a polynomial invariant for this action.", "To see this, note that the group $\\operatorname{\\mathrm {O}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {O}}(n, \\mathbb {F})$ can be embedded as a subgroup of $\\operatorname{\\mathrm {O}}(\\ell n, \\mathbb {F})$ , so its action leaves $A^tA$ unchanged.", "The determinant is then invariant under the action of $\\operatorname{\\mathrm {SL}}(m, \\mathbb {F})$ .", "When $m<\\ell n$ , which is the interesting case, the polynomial $\\det (A^tA)$ is not identically zero.", "It follows that we 
have a non-trivial, easy-to-compute polynomial invariant, which can break the $\\mathrm {PRA}$ assumption for this group action." ], [ "An example of using the supergroup attack.", "We then explain how the supergroup attack invalidates the $\\mathrm {PRA} (d)$ assumption for certain families of group actions with $d>1$ .", "Let $\\alpha $ be a linear action of a group $G$ on a vector space $V\\cong \\mathbb {F}^{N}$ .", "We show that as long as $d>N$ , $\\mathrm {PRA} (d)$ does not hold.", "To see this, note that the action of $G$ on $V$ gives a homomorphism $\\phi $ from $G$ to $\\operatorname{\\mathrm {GL}}(V)\\cong \\operatorname{\\mathrm {GL}}(N, \\mathbb {F})$ .", "For any $g\\in G$ , and $v_1, \\dots , v_d\\in V$ , we can arrange an $N\\times d$ matrix $S=[v_1, \\dots , v_d]$ , such that $T=[\\phi (g)v_1, \\dots ,\\phi (g)v_d]=\\phi (g)[v_1, \\dots , v_d]$ .", "On the other hand, for $u_1, \\dots , u_d\\in V$ , let $T^{\\prime }=[u_1, \\dots , u_d]$ .", "Let us consider the row spans of $S$ , $T$ and $T^{\\prime }$ , which are subspaces of $\\mathbb {F}^d$ of dimension $\\le N<d$ .", "Clearly, the row spans of $S$ and $T$ are the same.", "On the other hand, when the $u_i$ 's are random vectors, the row span of $T^{\\prime }$ is unlikely to be the same as that of $S$ .", "This gives an efficient approach to distinguish between $T$ and $T^{\\prime }$ .", "We can upgrade the above attack even further as follows.", "Let $\\alpha $ be a linear action of $G$ on the linear space of matrices $M=\\mathrm {M}(m\\times n,\\mathbb {F})$ .", "Recall that $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ acts on $M$ by left and right multiplications.", "Suppose $\\alpha $ gives rise to a homomorphism $\\phi :G\\rightarrow \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ .", "For $g\\in G$ , if $\\phi (g)=(A, B)\\in \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times 
\\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ , we let $\\phi _1(g):=A\\in \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ , and $\\phi _2(g)=B\\in \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ .", "We now show that when $d>(m^2+n^2)/(mn)$ , $\\mathrm {PRA} (d)$ does not hold for $\\alpha $ .", "To see this, for any $g\\in G$ , and $S=(A_1, \\dots , A_d)\\in \\mathrm {M}(m\\times n, \\mathbb {F})^d$ , let $T=(\\phi _1(g)^tA_1\\phi _2(g), \\dots , \\phi _1(g)^tA_d\\phi _2(g)).$ On the other hand, let $T^{\\prime }=(B_1, \\dots , B_d)\\in M^d$ .", "Since $\\dim (S)=\\dim (\\operatorname{\\mathrm {GL}}(m\\times n, \\mathbb {F})^d)=mnd>m^2+n^2=\\dim (\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n,\\mathbb {F})),$ $\\alpha $ does not have a dominant orbit (cf.", "Definition REF ) This means that, when $B_i$ 's are sampled randomly from $S$ , $T^{\\prime }$ is unlikely to be in the same orbit as $S$ .", "Now we use the fact that, the isomorphism problem for the action of $\\operatorname{\\mathrm {GL}}(m, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})$ on $S$ can be solved in deterministic polynomial time [57].", "This gives an efficient approach to distinguish between $T$ and $T^{\\prime }$ .", "Note that the set up here captures the Group Actions REF and REF in Section REF .", "For example, suppose for Group Action REF , we consider linear codes which are $n/2$ -dimensional subspaces of $\\mathbb {F}_q^n$ .", "Then we have $m=n/2$ , so $\\mathrm {PRA} (3)$ for this action does not hold, as $3>(m^2+n^2)/(mn)=5/2$ .", "On the other hand, when $d\\le (m^2+n^2)/(mn)$ , such an attack may fail, simply because of the existence of a dominant orbit." 
], [ "The general linear action on tensors as a pseudorandom action\ncandidate", "We have explained why the general linear action on tensors is a good candidate for the one-way assumption in Section .", "We now argue that, to the best of our knowledge, it is also a candidate for the pseudorandom assumption.", "We have described the current status of average-case algorithms for 3-tensor isomorphism problem in Section REF .", "One may expect that, because of the relaxed requirement for the average-case setting as discussed in Section REF , the algorithms in [63], [15] may be accelerated.", "However, this is not the case, because these algorithms inherently enumerate all vectors in $\\mathbb {F}_q^n$ , or improve somewhat by using the birthday paradox.", "We can also explain why the relaxed requirement for the average-case setting is still very difficult, by drawing experiences from computational group theory, because of the relation between $\\mathrm {GLAT}$ and Group Action REF , which in turn is closely related to the group isomorphism problem as explained in Section REF .", "In group theory, it is known that the number of non-isomorphic $p$ -groups of class 2 and exponent $p$ of order $p^\\ell $ is bounded as $p^{\\frac{2}{27}\\ell ^3+\\Theta (\\ell ^2)}$ [19].", "The relaxed average-case requirement in this case then asks for an algorithm that could test isomorphism for a subclass of such groups containing non-isomorphic groups as many as $p^{\\frac{2}{27}\\ell ^3+\\Theta (\\ell ^2)}/\\operatorname{\\mathrm {poly}}(\\ell , \\log p)=p^{\\frac{2}{27}\\ell ^3+\\Theta (\\ell ^2)}$ .", "This is widely regarded as a formidable task in computational group theory: at present, we only know of a subclass of such groups with $p^{O(\\ell ^2)}$ many non-isomorphic groups that allows for an efficient isomorphism test [66].", "The supergroup attack seems not useful here.", "The group $G=\\operatorname{\\mathrm {GL}}(\\ell ,\\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, 
\\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ naturally lives in $\\operatorname{\\mathrm {GL}}(\\ell n m, \\mathbb {F})$ .", "However, by Aschbacher's classification of maximal subgroups of finite classical groups [4], there are few natural supergroups of $G$ in $\\operatorname{\\mathrm {GL}}(\\ell n m, \\mathbb {F})$ .", "The obvious ones include subgroups isomorphic to $\\operatorname{\\mathrm {GL}}(\\ell n, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ , which is not useful because it has a dominant orbit (Definition REF ).", "The geometric attack seems not useful here either.", "The invariant ring here is trivial [37]If instead of $\\operatorname{\\mathrm {GL}}(\\ell ,\\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(n, \\mathbb {F})\\times \\operatorname{\\mathrm {GL}}(m, \\mathbb {F})$ we consider $\\operatorname{\\mathrm {SL}}(\\ell , \\mathbb {F})\\times \\operatorname{\\mathrm {SL}}(n,\\mathbb {F})\\times \\operatorname{\\mathrm {SL}}(m, \\mathbb {F})$ , the invariant ring is non-trivial – also known as the ring of semi-invariants for the corresponding $\\operatorname{\\mathrm {GL}}$ action – but highly complicated.", "When $\\ell =m=n$ , we do not even know one single easy-to-compute non-trivial invariant.", "It further requires exponential degree to generate the whole invariant ring [33].. 
For the orbit closure intersection problem, despite some recent exciting progress in [16], [13], [1], [32], [58], the current best algorithms for the corresponding orbit closure intersection problems still require exponential time.", "Finally, for the most general isomorphism invariant attack, the celebrated paper of Hillar and Lim [51] is just titled “Most Tensor Problems Are NP-Hard.” This suggests that getting one easy-to-compute and useful isomorphism invariant for $\\mathrm {GLAT}$ is already a challenging task.", "Here, useful means that the invariant does not lead to an equivalence relation with a dominant class in the sense of Definition REF .", "The above discussions not only provide evidence for $\\mathrm {GLAT}$ to be pseudorandom, but also highlight how this problem connects to various mathematical and computational disciplines.", "We believe that this could serve a further motivation for all these works in various fields." ], [ "Primitives from the one-way assumption: proving quantum security", "Based on any one-way group action, it is immediate to derive a one-way function family.", "A bit commitment also follows by standard techniques, which we shall discuss later on in Section REF .", "In this section, we focus on the construction of a digital signature that exists in the literature.", "It follows a very successful approach of applying the Fiat-Shamir transformation on an identification scheme.", "However, proving quantum security of this generic method turns out to be extremely challenging and delicate.", "For this reason, we include a complete description of the construction and a formal security proof in the quantum setting." 
], [ "Identification: definitions", "An identification scheme $\\textsc {ID} $ consists of a triple of probabilistic polynomial-time algorithmsWe only consider classical protocols where the algorithms can be efficiently realized on classical computers.", "$(\\textsc {KG}, \\mathcal {P},\\mathcal {V})$ : [label=$\\bullet $ ] Key generating: $(pk,sk)\\leftarrow \\textsc {KG} (1^\\lambda )$ generates a public key $pk$ and a secret key $sk$ .", "Interactive protocol: $\\mathcal {P}$ is given $(sk,pk)$ , and $\\mathcal {V}$ is only given $pk$ .", "They then interact in a few rounds, and in the end, $\\mathcal {V}$ outputs either 1 (i.e., “accept”) or 0 (i.e., “reject”).", "We assume that the keys are drawn from some relation, i.e.", "$(sk,pk)\\in R$ , and $(\\mathcal {P},\\mathcal {V})$ is an interactive protocol to prove this fact.", "Let $L_R := \\lbrace pk: \\exists \\, sk \\text{ such that } (sk,pk) \\in R \\rbrace \\, ,$ be the set of valid public keys.", "We will exclusively consider identification schemes of a special form, terms as $\\Sigma $ protocols.", "In a $\\Sigma $ protocol $(\\mathcal {P},\\mathcal {V})$ , three messages are exchanged in total: [label=$\\bullet $ ] Prover's initial message: $I\\leftarrow \\mathcal {P}(sk,pk)$ .", "$I$ is usually called the “commitment”, and comes from $\\lbrace 0,1\\rbrace ^{{\\ell _{\\mathrm {in}}}}$ .", "Verifier's challenge: $c\\leftarrow \\mathcal {V}(pk,I)$ .", "Let the set of challenge messages (the challenge space) be $\\lbrace 0,1\\rbrace ^{{\\ell _{\\mathrm {ch}}}}$ .", "Prover's response: prover computes a response $r\\in \\lbrace 0,1\\rbrace ^{{\\ell _{\\mathrm {re}}}}$ based on $(sk,I,c)$ and its internal state.", "Here ${\\ell _{\\mathrm {in}}},{\\ell _{\\mathrm {ch}}},{\\ell _{\\mathrm {re}}}$ are interpreted as functions of the security parameter $\\lambda $ .", "We omit writing $\\lambda $ explicitly for the ease of reading.", "A basic requirement of an $\\textsc {ID} $ is the correctness, which is 
basically the completeness condition of the interactive protocol $(\\mathcal {P},\\mathcal {V})$ on correctly generated keys.", "Definition 14 An $\\textsc {ID} = (\\textsc {KG},\\mathcal {P},\\mathcal {V})$ is correct if $\\Pr \\bigl [ \\mathcal {V}(pk)=1 : (pk,sk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ]\\ge 1 - \\operatorname{\\mathrm {negl}}(\\lambda ).$ Instead of defining security for $\\textsc {ID} $ directly, we usually talk about various properties of the $\\Sigma $ protocol associated with $\\textsc {ID} $ , which will make the $\\textsc {ID} $ useful in various applications.", "In the following, we consider an adversary $\\mathcal {A}$ which operates and shares states in multiple stages.", "Our notation does not explicitly show the sharing of states though.", "Definition 15 (Adapting [91]) Let $(\\mathcal {P},\\mathcal {V})$ be a $\\Sigma $ protocols with message length parameters ${\\ell _{\\mathrm {in}}},{\\ell _{\\mathrm {ch}}}$ and ${\\ell _{\\mathrm {re}}}$ .", "Statistical soundness: no adversary can generate a public key not in $L_R$ but manage to make $\\mathcal {V}$ accept.", "More precisely, for any algorithm $\\mathcal {A}$ (possibly unbounded), $\\begin{split}\\Pr \\bigl [ \\mathcal {V}(pk,I,c,r) = 1 \\wedge pk \\notin L_R:&\\\\r \\leftarrow \\mathcal {A}(pk,I,c), & c\\leftarrow \\mathcal {V}(pk, I), (pk,I)\\leftarrow \\mathcal {A}(1^\\lambda ) \\bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Honest-verifier zero-knowledge (HVZK): there is a quantum polynomial-time algorithm $\\mathcal {S}$ (the simulator) such that for any quantum polynomial-time adversary $\\mathcal {A}$ , $\\begin{split}\\Bigl |\\, & \\Pr \\bigl [ \\mathcal {A}(pk,I,c,r) = 1:(I,c,r) \\leftarrow (\\mathcal {P}(sk), \\mathcal {A}(pk)),(sk,pk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] \\\\- & \\Pr \\bigl [ \\mathcal {A}(pk,I,c,r) = 1: (I,c,r) \\leftarrow \\mathcal {S}(pk),(sk,pk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] \\,\\Bigr | 
\\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Computational special soundness: from any two accepting transcripts with the same initial commitment, we can extract a valid secret key.", "Formally, there is a quantum polynomial-time algorithm $\\mathcal {E}$ such that for any quantum polynomial-time $\\mathcal {A}$ , we have that $\\begin{split}\\Pr \\bigl [ (sk, pk) \\notin R\\wedge \\mathcal {V}(pk, I,c,r) \\wedge \\mathcal {V}(pk, I,c^{\\prime },r^{\\prime }) =1 \\wedge c\\ne c^{\\prime }: & \\\\(pk,I,c,r, c^{\\prime } , r^{\\prime } )\\leftarrow \\mathcal {A}(1^\\lambda ), sk \\leftarrow \\mathcal {E}(pk,I,c,r,c^{\\prime },r^{\\prime }) \\bigr ] & \\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Unique response: it is infeasible to find two accepting responses for the same commitment and challenge.", "Namely, for any quantum polynomial-time $\\mathcal {A}$ , $\\Pr \\bigl [ r\\ne r^{\\prime }\\wedge \\mathcal {V}(pk,I,c,r) = 1 \\wedge \\mathcal {V}(pk,I,c,r^{\\prime }) = 1 :(pk,I,c,r,r^{\\prime })\\leftarrow \\mathcal {A}(1^\\lambda ) \\bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ).$ Unpredictable commitment: Prover's commitment has superlogarithmic collision-entropy.", "$\\Pr \\bigl [ c = c^{\\prime }: c\\leftarrow \\mathcal {P}(sk,pk), c^{\\prime }\\leftarrow \\mathcal {P}(sk,pk),(sk,pk) \\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ).$" ], [ "Identification: construction from $\\mathrm {OWA}$", "We construct an $\\textsc {ID} $ based on the $\\textsf {GA-Inv} $ problem.", "This is reminiscent of the famous zero-knowledge proof system for graph-isomorphism.", "Figure: Identification protocol ID\\textsc {ID} based on 𝖦𝖠-𝖨𝗇𝗏\\textsf {GA-Inv} .To get a statistically sound protocol, we compose $\\textsc {ID} $ 's interactive protocol $(\\mathcal {P}_0,\\mathcal {V}_0)$ in parallel $\\ell =\\omega (\\log \\lambda )$ times.", "We denote the resulting protocol $\\textsc {GA-ID} $ .", 
"Figure: Identification protocol GA-ID\\textsc {GA-ID} Theorem 16 $\\textsc {GA-ID} $ has correctness, statistically soundness, HVZK and unpredictable commitment, assuming Assumption REF holds.", "We prove the properties one by one.", "Correctness.", "This is clear.", "Statistical soundness.", "For any adversary $\\mathcal {A}$ who produces some $pk = (s,t)\\notin L_R$ , it implies that for any $i\\in [\\ell ]$ , it can only answer one of the two challenges ($c_i = 0$ or 1) but not both.", "Since $c_i$ 's are all uniformly chosen, $\\mathcal {V}$ will reject except with probability $(\\frac{1}{2})^{\\ell } = \\operatorname{\\mathrm {negl}}(\\lambda )$ (noting that $\\ell = \\omega (\\log \\lambda )$ ).", "HVZK.", "We construct a simulator $\\mathcal {S}$ in Fig.", "REF .", "The simulated transcript is identically distributed as the real execution.", "Figure: Simulator 𝒮\\mathcal {S} Special soundness.", "If $\\mathcal {A}$ can produce $(I,c,r)$ and $(I,c^{\\prime },r^{\\prime })$ that are both accepting with $c \\ne c^{\\prime }$ .", "Then at least $c_i \\ne c_i^{\\prime }$ for some $i \\in [\\ell ]$ .", "The corresponding $r_i$ and $r_i^{\\prime }$ are hence $h$ and $hg^{-1}$ from which we can recover the secret key $g$ .", "Unpredictable commitment.", "Two commitment messages collide only if $\\alpha (g,s) = \\alpha (g^{\\prime },s)$ for random $g,g^{\\prime }\\leftarrow G$ .", "All our candidate group actions are (almost) injective." 
], [ "Digital signature from $\\mathrm {OWA}$", "Definition 17 A digital signature scheme consists of a triple of probabilistic polynomial-time algorithms $(\\textsc {KeyGen},\\textsc {Sign},\\textsc {Verify})$ where [label=$\\bullet $ ] $\\textsc {KeyGen} $ : $(pk,sk) \\leftarrow \\textsc {KeyGen} (1^\\lambda )$ generates a pair of secret key and public key.", "$\\textsc {Sign} $ : on input $sk$ and message $m\\in \\mathcal {M}$ , outputs $\\sigma \\leftarrow \\textsc {Sign} _{sk}(m)$ .", "$\\textsc {Verify} $ : on input $pk$ and message-signature pair $(m,\\sigma )$ , output $\\textsc {Verify} _{pk}(m,\\sigma ) = \\text{acc/rej}$ .", "A signature is secure if no one without the secret key can forge a valid signature, even if it gets to see a number of valid message-signature pairs.", "This is modeled as giving an adversary the signing oracle.", "We give the formal definition below which explicitly incorporates a random oracle $H$ that is available to all users, and an adversary can access in quantum superposition.", "We stress that we do not allow quantum access to the signing oracle, which is a stronger attack model (cf. 
[23]).", "Definition 18 (Unforgeability) A signature scheme $(\\textsc {KeyGen},\\textsc {Sign},\\textsc {Verify})$ is unforgeable iff.", "for all quantum polynomial-time algorithm $\\mathcal {A}$ , $\\begin{split}\\Pr \\bigl [ \\textsc {Verify} ^H(pk,\\sigma ^*,m^*) = 1 \\wedge m^*\\notin \\mathcal {L}: & \\\\(pk,sk)\\leftarrow \\textsc {KeyGen} (1^\\lambda ), & (m^*,\\sigma ^*) \\leftarrow \\mathcal {A}^{H,\\textsc {Sign} _{sk}}(\\lambda ,pk) \\bigr ]\\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Here $\\mathcal {L}$ contains the list of messages that $\\mathcal {A}$ queries to the (classical) signing oracle $\\textsc {Sign} _{sk}(\\cdot )$ .", "Note the unforgeability does not rule out an adversary that produces a new signature on some message that it has queried before.", "Strong unforgeability takes this into account.", "Definition 19 (Strong Unforgeability) A signature scheme $(\\textsc {KeyGen},\\textsc {Sign},\\textsc {Verify})$ is strongly unforgeable iff.", "for all quantum polynomial-time algorithm $\\mathcal {A}$ , $\\begin{split}\\Pr \\bigl [ \\textsc {Verify} ^H(pk,\\sigma ^*,m^*) = 1 \\wedge (m^*,\\sigma ^*) \\in \\mathcal {L}: & \\\\(pk,sk) \\leftarrow \\textsc {KeyGen} (1^\\lambda ), (m^*,\\sigma ^*) & \\leftarrow \\mathcal {A}^{H,\\textsc {Sign} _{sk}}(\\lambda ,pk) \\bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Here $\\mathcal {L}$ contains the list of message and signature pairs that $\\mathcal {A}$ queries to the (classical) signing oracle $\\textsc {Sign} _{sk}(\\cdot )$ .", "Fiat and Shamir proposed a simple, efficient, and generic method that converts an identification scheme to a signature scheme using a hash function, and the security can be proven in the random oracle model [43], [82].", "Relatively recent, Fischlin proposed a variant to partly reduce the reliance on the random oracle [40].", "However, as shown in [2], both of them seem difficult to admit a security proof in the quantum setting.", 
"Instead, Unruh [89] proposed a less efficient transformation, hereafter referred to as Unruh transformation, and proved its security in the quantum random oracle model.", "Our $\\textsc {GA-ID} $ satisfies the conditions required in Unruh transformation, and hence we can apply it and obtain a digital signature scheme $\\textsc {GA-SIGN} $ .", "Figure: Unruh transformationTheorem 20 (Adapting Corollary 19 & Theorem 23 & of [89]) If an identification scheme $\\textsc {ID} $ have correctness, HVZK and special soundness, then protocol $\\textsc {SIGN} $ in Fig.", "REF is a strongly unforgeable signature in $\\mathrm {QRO}$ .", "Since $\\textsc {GA-ID} $ has these properties, we instantiate $\\textsc {SIGN} $ with $\\textsc {GA-ID} $ and call the resulting signature scheme $\\textsc {GA-SIGN} $ .", "Corollary 21 If Assumption REF holds, $\\textsc {GA-SIGN} $ is a strongly unforgeable signature in $\\mathrm {QRO}$ ." ], [ "Improved Digital signature based on $\\mathrm {PRA}$ via Fiat-Shamir", "In this subsection, we show that if we accept the possibly stronger assumption of $\\mathrm {PRA}$ , we can apply the standard Fiat-Shamir transform to $\\textsc {GA-ID} $ , and obtain a more efficient signature scheme in $\\mathrm {QRO}$ .", "This is due to a recent work by Unruh [91][61] includes a similar result, which is primarily tailored to lattice-based identification schemes., where he shows that if one can equip $\\textsc {ID} $ with a “dual-mode” key generation, Fiat-Shamir will indeed work in $\\mathrm {QRO}$ .", "A dual-mode key is a fake public key ${\\widetilde{pk}}$ that is indistinguishable from a valid public key.", "Nonetheless, ${\\widetilde{pk}}$ has no corresponding secret key (i.e., ${\\widetilde{pk}}\\notin L_R$ ).", "Definition 22 (Dual-mode key generator, adapting [91]) An algorithm $\\textsc {KG} $ is a dual-mode key generator for a relation $R$ iff.", "[label=$\\bullet $ ] $\\textsc {KG} $ is quantum polynomial-time, $\\Pr \\bigl [ (sk,pk)\\in R: 
(sk,pk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] \\ge 1 -\\operatorname{\\mathrm {negl}}(\\lambda )$ .", "for all quantum polynomial-time algorithm $\\mathcal {A}$ , there is a quantum polynomial-time algorithm $\\textsc {KG} ^*$ such that $\\Bigl | \\Pr \\bigl [ \\mathcal {A}(pk) = 1: (sk,pk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] -\\Pr \\bigl [ \\mathcal {A}({\\widetilde{pk}})=1: {\\widetilde{pk}}\\leftarrow \\textsc {KG} ^*(1^\\lambda ) \\bigr ]\\Bigr | \\le \\operatorname{\\mathrm {negl}}(\\lambda ),$ and $\\Pr \\Bigl [ {\\widetilde{pk}}\\in L_R: {\\widetilde{pk}}\\leftarrow \\textsc {KG} ^*(1^\\lambda )\\Bigr ] \\le \\operatorname{\\mathrm {negl}}(\\lambda ).$ Theorem 23 $\\textsc {KG} $ in $\\textsc {GA-ID} $ is a dual-mode key generator, if Assumption REF holds.", "We construct $\\textsc {KG} ^*$ as follows: choose $(G,S,\\alpha )$ to be $\\mathcal {G}(1^\\lambda )$ ; sample $s,t\\leftarrow S$ uniformly; output ${\\widetilde{pk}}:= (s,t)$ .", "By Assumption REF , it follows that $\\Bigl | \\Pr \\bigl [ \\mathcal {A}(pk) = 1: (sk,pk)\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] -\\Pr \\bigl [ \\mathcal {A}({\\widetilde{pk}})=1: {\\widetilde{pk}}\\leftarrow \\textsc {KG} ^*(1^\\lambda ) \\bigr ] \\Bigr |\\le \\operatorname{\\mathrm {negl}}(\\lambda )\\, .$ In addition, $\\Pr \\bigl [\\, pk\\in L_R: pk \\leftarrow \\textsc {KG} ^*(1^\\lambda ) \\,\\bigr ] = \\frac{|G|}{|S|}\\le \\operatorname{\\mathrm {negl}}(\\lambda ).", "$ Figure: Fiat-Shamir transformationTheorem 24 (Adapting [91]) If an identification scheme $\\textsc {ID} $ has correctness, HVZK, statistical soundness, unpredictable commitments, ${\\ell _{\\mathrm {ch}}}$ is superlogarithmic, and $\\textsc {KG} $ is a dual-mode key generator.", "Then in $\\mathrm {QRO}$ , $\\textsc {FS-SIGN} $ obtained from Fiat-Shamir transform (Construction in Fig.", "REF ) is weakly unforgeable.", "If $\\textsc {ID} $ has unique responses, the signature scheme is strongly unforgeable.", "Since 
$\\textsc {GA-ID} $ has all these properties, we can instantiate Construction $\\textsc {FS-SIGN} $ with $\\textsc {GA-ID} $ .", "Call the resulting signature scheme $\\textsc {GA-FS-SIGN} $ .", "Corollary 25 $\\textsc {GA-FS-SIGN} $ is strongly unforgeable, if Assumption REF holds.", "Note that $\\textsc {GA-FS-SIGN} $ is much more efficient than $\\textsc {GA-SIGN} $ .", "In particular, $\\textsc {GA-FS-SIGN} $ only invokes the underlying $(\\mathcal {P},\\mathcal {V})$ once as opposed to superpolylogarithmic times in $\\textsc {GA-SIGN} $ ." ], [ "Quantum-secure pseudorandom functions based on $\\mathrm {PRA}$", "Finally, we discuss how to construct quantum-secure pseudorandom functions using the $\\mathrm {PRA} $ assumption.", "Basically, we will show that we can instantiate the GGM construction [44] using the $\\mathrm {PRA} $ assumption.", "To do this, we need to first discuss constructing pseudorandom generators." ], [ "(Keyed) pseudorandom generators.", "We already have mentioned that we can construct a PRG $\\Gamma :S\\times G\\rightarrow S\\times S$ , given by $\\Gamma (s, g):=(s, g\\cdot s).$ In fact, we may modify this construction slightly to obtain a form of PRG with much better stretching almost for free as follows.", "For $s\\in S$ , we define $\\Gamma _s:G\\rightarrow S$ by $\\Gamma _s(g):=g\\cdot s.$ This can be considered as a `keyed PRG', where $s$ is a public key for the PRG instance $\\Gamma _s$ , and this instance stretches the seed $g\\leftarrow G$ to $g\\cdot s$ .", "Such notion of a keyed PRG is informally given in [52], but surely this notion was used implicitly in many works previously.", "We may give a formal definition of this notion as follows.", "Definition 26 (Keyed PRG) A keyed pseudorandom generator, or a keyed PRG, is a pair of probabilistic polynomial-time algorithms $(\\textsc {KG}, \\textsc {PRG})$ : Key generator: $k\\leftarrow \\textsc {KG} (1^\\lambda )$ generates a public key $k\\in \\mathcal {K}$ describing an instance of 
the keyed PRG.", "Pseudorandom generator: given $k$ sampled by $\\textsc {KG} (1^\\lambda )$ , $\\textsc {PRG} _k:\\mathcal {X}\\rightarrow \\mathcal {Y}$ stretches a uniform element $x\\leftarrow \\mathcal {X}$ to produce an element $\\textsc {PRG} _k(x)\\in \\mathcal {Y}$ .", "Note that this $\\textsc {PRG} $ algorithm is required to be deterministic.", "In the above, $\\mathcal {K}$ is the key space of the keyed PRG, and $\\mathcal {X}$ , $\\mathcal {Y}$ are the domain and the codomain of the keyed PRG, respectively.", "They are implicitly parametrized by the main parameter $\\lambda $ .", "Also, it is required that $|\\mathcal {Y}|>|\\mathcal {X}|$ .", "Definition 27 (Security of a keyed PRG) We say that a keyed PRG, $\\Gamma =(\\textsc {KG}, \\textsc {PRG})$ , is secure, if for any quantum polynomial-time adversary $\\mathcal {A}$ , we have $\\begin{split}\\operatorname{\\mathbf {Adv}}_\\Gamma ^{\\textnormal {\\textsf {prg}}}(\\mathcal {A}) & := \\Bigl | \\Pr \\bigl [\\mathcal {A}(k, \\textsc {PRG} _k(x))=1 \\,:\\, x\\leftarrow \\mathcal {X}, k\\leftarrow \\textsc {KG} (1^\\lambda ) \\bigr ] \\\\&\\qquad -\\Pr \\bigl [ \\mathcal {A}(k, y)=1 \\,:\\, y\\leftarrow \\mathcal {Y}, k\\leftarrow \\textsc {KG} (1^\\lambda )\\bigr ] \\Bigr | \\le \\operatorname{\\mathrm {negl}}(\\lambda ).\\end{split}$ Again, it is immediate that $\\mathrm {PRA} $ assumption implies that $g\\mapsto g\\cdot s$ is a secure keyed PRG, where $s$ is the key and $g$ is the seed." 
], [ "Doubling keyed PRGs.", "The keyed PRG $\\Gamma _s$ that we have described above is of form $\\Gamma _s:G\\rightarrow S$ .", "While $|S|\\gg |G|$ , having $S$ which might `look different' from $G$ can be inconvenient for some applications, for example, constructing a PRF via the GGM construction.", "So, here we would like to construct a `doubling' keyed PRG out of the previous construction, using randomness extraction.", "The idea is simple: $\\Gamma _s(g)$ would look uniform random over $S$ for average $s$ , so we can use a randomness extractor to produce a pseudorandom bit string of enough length, and use that to sample two group elements of $G$ .", "Overall, the construction would be of form $G\\rightarrow G\\times G$ , while the PRF key would include not only the point $s\\in S$ but also the random seed for the randomness extraction.", "For concreteness, we may use the Leftover Hash Lemma (LHL) [50], but in fact any strong randomness extractor would be all right.", "More concretely, let $R_G:\\lbrace 0,1\\rbrace ^p\\rightarrow G$ be the sampling algorithm for the group $G$ which samples a random element of $G$ , (statistically close to) uniform.", "Note that this $R_G$ is required for our group $G$ .", "In fact, Babai [6] gives an efficient Monte Carlo algorithm for sampling a group element of a finite group in a very general setting which is applicable to all of our instantiations.", "Let $\\mathcal {H}=\\lbrace h:S\\rightarrow \\lbrace 0,1\\rbrace ^r\\rbrace $ be a family of 2-universal hash functions, where $r$ is sufficiently smaller than $\\log |S|$ .", "LHL implies, informally, that $(h, h(s))$ and $(h, u)$ are statistically indistinguishable, when $h\\leftarrow \\mathcal {H}$ , $s\\leftarrow S$ , $u\\leftarrow \\lbrace 0,1\\rbrace ^r$ are uniform and independent.", "Let us assume $\\log |S|$ is large enough so that we can take $r=2p$ .", "Then, we may construct a doubling keyed PRG $(\\textsc {KG}, \\textsc {PRG})$ as follows: Choose public 
parameters $\\texttt {params}(G, S, \\alpha )$ to be $\\mathcal {G}(1^\\lambda )$ .", "Key generator: $\\textsc {KG} (1^\\lambda )$ samples $s\\leftarrow S$ , $h\\leftarrow \\mathcal {H}$ , and outputs $k:=(s, h)$ .", "Pseudorandom generator: $\\textsc {PRG} _k(g):=(R_G(r_0), R_G(r_1))$ , where $r_0$ and $r_1$ are the left half and the right half of $h(g\\cdot s)$ , respectively.", "In short, this keyed PRG stretches the seed $g\\leftarrow G$ to $g\\cdot s$ , and extracts a pseudorandom bit string of length $2p$ , and use that to sample two independent-looking group elements.", "The security of this construction comes from the $\\mathrm {PRA} $ assumption and the Leftover Hash Lemma." ], [ "Pseudorandom functions.", "Of course, the notion of a PRF [44] is well-known and well-established.", "Here, following Maurer and Tessaro [72], we are going to extend the notion of PRF somewhat so that it may also have an extra `public key' part.", "Definition 28 (Pseudorandom function) A pseudorandom function (PRF) is a polynomial-time computable function $f$ of form $f:\\mathcal {P}\\times \\mathcal {K}\\times \\mathcal {X}\\rightarrow \\mathcal {Y}$ .", "We call the sets $\\mathcal {P}$ , $\\mathcal {K}$ , $\\mathcal {X}$ , $\\mathcal {Y}$ as the public-key space, the key space, the domain, and the codomain of $f$ , respectively.", "We would often write $f(p, k, x)$ as $f_p(k, x)$ .", "Note that we may regard an `ordinary' PRF as a special case of above where it has a trivial, empty public key.", "In this paper, we consider quantum-secure PRFs [94], or, sometimes called QPRFs, whose security is defined as follows.", "Definition 29 (Security of a PRF) Let $f:\\mathcal {P}\\times \\mathcal {K}\\times \\mathcal {X}\\rightarrow \\mathcal {Y}$ be a PRF.", "We say that $f$ is quantum-secure, if for any quantum polynomial-time adversary $\\mathcal {A}$ which can make quantum superposition queries to its oracle, we have the following: $\\operatorname{\\mathbf {Adv}}^{\\textnormal 
{\\textsf {prf}}}_f(\\mathcal {A}) := \\Bigl | \\Pr \\bigl [\\mathcal {A}^{f_p(k, \\cdot )}(p)=1 \\bigr ]- \\Pr \\bigl [ \\mathcal {A}^{\\rho }(p)=1 \\bigr ] \\Bigr | = \\operatorname{\\mathrm {negl}}(\\lambda ),$ where $p\\leftarrow \\mathcal {P}$ , $k\\leftarrow \\mathcal {K}$ , $\\rho \\leftarrow \\mathcal {Y}^\\mathcal {X}$ are uniformly and independently random and $\\lambda $ is the security parameter.", "Suppose we have a secure, doubling keyed PRG $\\Gamma =(\\textsc {KG}, \\textsc {PRG})$ where $\\textsc {PRG} _s$ is of form $\\textsc {PRG} _s:\\mathcal {K}\\rightarrow \\mathcal {K}\\times \\mathcal {K}.$ Writing the first component and the second component of $\\textsc {PRG} _s(k)$ as $f_s(k, 0)$ and $f_s(k, 1)$ , we obtain a PRF $f$ of form $f:\\mathcal {S}\\times \\mathcal {K}\\times \\lbrace 0,1\\rbrace \\rightarrow \\mathcal {K}.$ Here, $\\mathcal {S}$ is the public-key space of $f$ , which is the key space of the keyed PRG $\\Gamma $ .", "The key space of $f$ is $\\mathcal {K}$ , and the domain and the codomain of $f$ are $\\lbrace 0,1\\rbrace $ and $\\mathcal {K}$ , respectively.", "Moreover, we can immediately see that the security of the one-bit PRF $f$ is exactly equivalent to the security of $\\Gamma $ as a keyed PRG.", "In fact, we can say that the security of $f$ is just a re-statement of the security of $\\Gamma $ .", "Now we may apply the GGM construction to $f$ to define the following PRF $\\textsc {GGM} [f]:\\mathcal {S}\\times \\mathcal {K}\\times \\lbrace 0,1\\rbrace ^l\\rightarrow \\mathcal {K}$ , where $\\textsc {GGM} [f]_s(k, x_1\\dots x_l):= f_s(\\dots f_s(f_s(k, x_1), x_2), \\dots , x_l).$ In fact, the above is the same as the cascade construction for the one-bit PRF $f$ .", "When we instantiate the GGM construction using an ordinary PRG, or when we instantiate the cascade construction using an ordinary PRF (without the public-key part), the quantum security is already established [94], [88].", "The only difference is that here we 
instantiate the construction using a keyed PRG, or, equivalently, a one-bit PRF with a public key.", "Following [94] or [88], we can define a version of oracle security for such a PRF with a public key.", "Definition 30 (Oracle security of a PRF) Let $f:\\mathcal {P}\\times \\mathcal {K}\\times \\mathcal {X}\\rightarrow \\mathcal {Y}$ be a PRF.", "We say that $f$ is oracle-secure with respect to an index set $\\mathcal {I}$, if for any quantum polynomial-time adversary $\\mathcal {A}$ which can make quantum superposition queries to its oracle, we have the following: $\\operatorname{\\mathbf {Adv}}^{\\textnormal {\\textsf {os-prf}}}_{f, \\mathcal {I}}(\\mathcal {A}) := \\Bigl | \\Pr \\bigl [\\mathcal {A}^{O_0}(p)=1 \\bigr ] - \\Pr \\bigl [ \\mathcal {A}^{O_1}(p)=1\\bigr ]\\Bigr |=\\operatorname{\\mathrm {negl}}(\\lambda ),$ where the oracles $O_0, O_1$ are defined as $O_0(i, x):=f_p(\\kappa (i), x),\\quad O_1(i, x)&:=\\rho (i, x),$ and $p\\leftarrow \\mathcal {P}$ , $\\kappa \\leftarrow \\mathcal {K}^\\mathcal {I}$ , $\\rho \\leftarrow \\mathcal {Y}^{\\mathcal {I}\\times \\mathcal {X}}$ are chosen uniform randomly and independently.", "And, as in [88], we show that if a PRF with a public key is secure, then it is also oracle-secure.", "Theorem 31 Let $f:\\mathcal {P}\\times \\mathcal {K}\\times \\mathcal {X}\\rightarrow \\mathcal {Y}$ be a PRF.", "Suppose that it is secure as a PRF.", "Then, it is also oracle-secure.", "Here is a brief sketch of the proof.", "We are going to use the notion of relative (oracle) indistinguishability introduced in [88].", "Our random oracle $H$ would be a very simple one, $H:\\lbrace \\ast \\rbrace \\rightarrow \\mathcal {P}$ , where $\\lbrace \\ast \\rbrace $ is the singleton set containing only one element.", "Given this $p=H(\\ast )\\leftarrow \\mathcal {P}$ , we define two distributions $D_0, D_1$ of functions of form $\\mathcal {X}\\rightarrow \\mathcal {Y}$ .", "$D_0$ : to sample a function $g$ from $D_0$ , sample $k\\leftarrow 
\\mathcal {K}$ , and define $g(x):=f_p(k, x).$ $D_1$ : to sample a function $g$ from $D_1$ , simply sample a uniform random function $g:\\mathcal {X}\\rightarrow \\mathcal {Y}$ .", "Then, the security of $f$ is in fact equivalent to indistinguishability of $D_0$ and $D_1$ relative to the simple oracle $H$ .", "Again according to [88], when two function distributions are indistinguishable relative to $H$ , then they are oracle-indistinguishable relative to $H$ .", "We can also observe that this is equivalent to the oracle security of $f$ defined as above.", "Finally, the security of the GGM construction comes from the oracle security.", "Theorem 32 Suppose that $f:\\mathcal {S}\\times \\mathcal {K}\\times \\lbrace 0,1\\rbrace \\rightarrow \\mathcal {K}$ is a secure PRF.", "Then, the GGM construction $\\textsc {GGM} [f]:\\mathcal {S}\\times \\mathcal {K}\\times \\lbrace 0,1\\rbrace ^l\\rightarrow \\mathcal {K}$ is also secure.", "The proof is essentially identical to that of Zhandry [94] or Song and Yun [88]; since $f$ is secure as a PRF, it is also oracle-secure.", "This allows the same hybrid argument in the security proof for GGM in [94], or the security proof for the cascade construction in [88]." ], [ "Bit commitment schemes based on $\\mathrm {OWA}$ and {{formula:4891f011-d80c-43c7-a554-a00084e9bccf}}", "Based on $\\mathrm {OWA}$ (Assumption REF ), Brassard and Yung [22] describe a bit commitment scheme, which we can easily adapt and instantiate with non-abelian group actions.", "Figure: A bit commitment scheme based on OWA \\mathrm {OWA}", "We briefly argue how the security conditions are met.", "A formal proof (against both classical and quantum attacks) can be obtained along the same line.", "Let $b^{\\prime }=1-b$ .", "Binding (here we do not consider more sophisticated binding notions in the quantum setting; Cf.
[90]).", ": in order to change her mind, Alice needs to compute $h^{\\prime }$ such that $h^{\\prime } \\cdot s_{b^{\\prime }}=t$ .", "If she can do that, given that she already knows $h \\cdot s_b=t$ , she can compute $g$ .", "This violates the one-way assumption.", "Hiding: from Bob's viewpoint, since $s_0$ and $s_1$ are in the same orbit, whichever bit Alice commits, the distributions of $t$ are the same.", "Let us then examine an alternative route to bit commitment based on the $\\mathrm {PRA}$ assumption.", "Consider the function below: $T: S \\times G & \\rightarrow S\\times S \\\\(s,g) & \\mapsto (s, g \\cdot s) \\, .$ It is easy to see that this gives a quantum-secure pseudorandom generator (PRG) based upon Assumption REF , assuming that the size $\\left|{S}\\right|$ is larger than $\\left|{G}\\right|$ .", "Therefore, after we apply the Blum-Micali amplification to increase the expansion to triple, we can plug it into Naor's commitment [74] and get a quantum computationally hiding and statistically binding commitment.", "Theorem 33 There is a perfectly hiding and computationally binding bit commitment, if Assumption REF holds; there is a quantum computationally hiding and statistically binding commitment scheme, if Assumption REF holds." ], [ "Acknowledgement.", "Y.Q.", "would like to thank Joshua A. Grochow for explaining the results in [39] to him.", "Y.Q.", "was partially supported by Australian Research Council DE150100720.", "F.S.", "was partially supported by the U.S. National Science Foundation under CCF-1816869.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", "A.Y.", "was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.", "2016-6-00598, The mathematical structure of functional encryption and its analysis)." ] ]
1906.04330
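The bit-by-bit GGM/cascade evaluation in the record above, $\textsc{GGM}[f]_s(k, x_1\dots x_l) = f_s(\dots f_s(f_s(k, x_1), x_2), \dots , x_l)$, is mechanical enough to sketch in code. The snippet below is a toy Python illustration and not the paper's formal object: HMAC-SHA-512 is an assumed stand-in for the length-doubling keyed PRG, and the "public key" $s$ is simply fed to HMAC as the message input.

```python
import hmac
import hashlib

def keyed_prg(s: bytes, k: bytes):
    # Toy length-doubling keyed PRG, PRG_s: K -> K x K.
    # HMAC-SHA-512 is an illustrative stand-in; the public key s
    # enters as the HMAC message, the secret key k as the HMAC key.
    out = hmac.new(k, s, hashlib.sha512).digest()
    return out[:32], out[32:]

def f(s: bytes, k: bytes, bit: int) -> bytes:
    # One-bit PRF f_s(k, b): the b-th half of PRG_s(k).
    return keyed_prg(s, k)[bit]

def ggm(s: bytes, k: bytes, x: str) -> bytes:
    # GGM[f]_s(k, x_1...x_l) = f_s(...f_s(f_s(k, x_1), x_2)..., x_l):
    # walk down the binary tree, consuming one input bit per level.
    for ch in x:
        k = f(s, k, int(ch))
    return k
```

Under Theorem 32's hypothesis (that $f$ is secure as a PRF), this tree walk yields a PRF on $l$-bit inputs; the hash-based stand-in here only illustrates the data flow, not the security claim.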
[ [ "Multiwavelength Study of the X-Ray Bright Supernova Remnant N300-S26 in\n NGC 300" ], [ "Abstract We present a multiwavelength examination of the supernova remnant (SNR) S26 in the nearby galaxy NGC 300 using data from Chandra X-ray Observatory, XMM-Newton X-ray Observatory, Hubble Space Telescope (HST), the Very Large Array, and the Australia Telescope Compact Array.", "We simultaneously fit all of the available X-ray data with a thermal plasma model and find a temperature of $0.77 \\pm 0.13$ keV with a hydrogen column density of ($9.7^{+6.4}_{-4.8}$)$\\times 10^{20}$ cm$^{-2}$.", "HST imaging allows us to measure a semimajor axis of $0.78 \\pm 0.10$ arcsec ($7.5 \\pm 1.0$ pc) and a semiminor axis of $0.69^{+0.14}_{-0.12}$ arcsec ($6.7^{+1.2}_{-1.4}$ pc).", "This precise size helps to constrain the age and velocity of the shock to be ($3.3^{+0.7}_{-0.6}$)$\\times 10^{3}$ yr and $411^{+275}_{-122}$ km s$^{-1}$.", "We also fit photometry of the surrounding stars to infer the age and mass of the progenitor star to be $8 \\pm 1$ Myr and $25^{+1}_{-5}$ M$_{\\odot}$.", "Based on measured radio properties of the source and assuming equipartition, the estimated radio luminosity of $\\sim 1.7 \\times 10^{34}$ erg s$^{-1}$ over the $10^{8}-10^{11}$ Hz frequency range results in a minimum magnetic field associated with this SNR of $0.067$ mG and the minimum energy needed to power the observed synchrotron emission of $1.5 \\times 10^{49}$ erg.", "The size and temperature of N300-S26 appear to be similar to the Galactic SNR G311.5-0.3 except that G311.5-0.3 has a significantly lower X-ray luminosity, is older, and has a slower shock velocity." 
], [ "Introduction", "The energy output of supernova remnants (SNRs) shocks and excites the interstellar medium (ISM), which makes them visible across the electromagnetic spectrum out to distances of megaparsecs.", "While the Milky Way SNR population is closest to us, many of its SNRs are difficult to observe due to interstellar absorption along Galactic lines of sight and uncertainties in distance measurements.", "Analyses of SNRs outside our Galaxy provide key comparisons to the Galactic sample that probe differences in the ISM and how it influences SNRs' multiwavelength morphological and luminosity evolution.", "Since the nearby galaxy population contains a wide range of ISM densities and metallicities, detailed multiwavelength measurements of the SNRs allow us to shed light on the effects of the environment on SNR properties.", "There have been many SNR surveys of nearby galaxies including the Large Magellanic Cloud (LMC) , , , M31 , , , M33 , , , and M83 , , .", "One such galaxy is NGC 300, a face-on spiral Scd galaxy at 2 , which is the brightest of the five main spiral galaxies in the Sculptor Group .", "The $46^{\\circ }$ inclination of NGC 300 as well as the Galactic latitude of $-79.4$ degrees (which places NGC 300 toward the southern Galactic pole; ) allows it to be observed easily due to reduced absorption effects from gas in the host galaxy as well as our Galaxy.", "There are many different surveys of NGC 300's SNR population including, but not limited to, optical , , , radio , and X-ray , .", "All of these data allow for a detailed measurement of the physical properties of SNRs in NGC 300.", "Multiwavelength observations of SNRs provide details about the energetics and evolution associated with these sources.", "The X-ray, optical, and radio emission from SNRs probe the magnetic fields created by the source, the energy needed to drive synchrotron radiation, the temperature and column density of the surrounding gas, the physical size and velocity of the
associated shock waves, and many other properties.", "These SNR properties should, in principle, be related to both the physical parameters of the progenitor star and those of the surrounding ISM.", "We have observed a bright extragalactic SNR in NGC 300 denoted as N300-S26 (, ; hereafter referred to as S26).", "Previous observations of S26 have included various radius measurements taken from multiple optical ground-based telescopes ($1.7$ arcsec which corresponds to $16.5\\ $ at the assumed distance to NGC 300; , ), H$\\alpha $ surface brightness , radio flux density , , as well as a temperature from the ROSAT telescope .", "In this paper, we have observed S26 in the optical using the Hubble Space Telescope (HST), in the X-ray by Chandra and XMM-Newton, and in the radio using the Very Large Array (VLA) and Australia Telescope Compact Array (ATCA) from and .", "We then compare S26 to other SNRs in the Galaxy, finding that S26 is most similar to G$311.5$ –$0.3$ , which has a temperature of $0.68^{+0.20}_{-0.24}$ keV and a radius of 9.", "In sec:obsred, we discuss the data sets used in our study and the methods utilized to extract information from the raw data.", "In sec:results, we share our results from our data reductions.", "In sec:discussion, we discuss the physical ramifications of our measurements.", "In sec:conclusion, we summarize our results.", "Values derived in the radio sections are at the 90% confidence range.", "The values derived in the X-ray sections for the tbabs*(pshock) normalization tied model are at the 90% confidence range while the other models are at the $1 \\sigma $ limit." ], [ "Observations and Data Reduction", "Our multiwavelength program makes use of data from the HST (optical), the Chandra X-ray Observatory (Chandra; X-ray), the XMM-Newton X-ray Observatory (XMM-Newton; X-ray), the VLA (radio), and the ATCA (radio) (radio data from and ).", "We now discuss each of these in detail below."
], [ "Optical Data", "We detected the optical shell of S26 with the Advanced Camera for Surveys on board the HST at 0:55:$15.447$ , $-37$ :44:$39.10$ (J2000).", "The data were obtained with the WFC detector using F814W and F606W filters on 2015 January 19.", "The F814W filter had a 966 s exposure and the F606W filter had a 850 s exposure.", "While the broadband SNR fluxes would not be useful for science, as they contain multiple emission lines and have a dense background of stars, the high-resolution imaging allowed us to measure a precise size for the optical shell.", "In addition, the field of resolved stars allowed us to perform crowded stellar field photometry on the stellar population local to the SNR, which provides constraints on the mass of the star that produced the SNR.", "We detail the analysis techniques we employed for each of these applications below.", "We used Deep Space 9 (DS9) version $7.2.1$http://ds9.si.edu/site/Home.html to create an RGB-rendered image of the source using F606W for the green channel, F814W for the red channel, and an estimated blue channel of $2\\ \\times $ F606W $-$ F814W (Figure REF ).", "We then used ellipses to measure the size of the SNR to a significantly higher precision than previous studies due to the HST's superior spatial resolution (see sec:resultssize for details).", "Figure:                                                                                                            Top: Optical image of S26 using F606W and F814W filters as well as an estimated blue channel using 2×2\\ \\times F606W - F814W.", "The semiminor axis is depicted with the blue line while the semimajor axis is depicted with the yellow line.", "Middle: Semiminor measurement using Surface Brightness Function aligned with semiminor axis taken from the optical image.", "Bottom: Semimajor measurement using Surface Brightness Function aligned with semimajor axis taken from the optical image.", "Green ellipse and vertical lines correspond to 
upper limit to fit.", "Red ellipse and vertical lines correspond to best-fit.", "Purple ellipse and vertical lines correspond to the lower limit to fit.", "In order to corroborate the size measurement using ellipses, we used the DS9 tool Projections to create surface brightness functions along the semimajor and semiminor axes." ], [ "Resolved Stellar Photometry", "We also measured resolved stellar photometry from the HST images in order to produce color-magnitude diagrams (CMDs) of the stellar populations surrounding the SNR using the VEGAMAG system.", "The photometry was performed using the automated point spread function (PSF) fitting pipeline from the Panchromatic Hubble Andromeda Treasury.", "The full details of how the pipeline works are given in , but briefly, the calibrated flat-fielded and CTE corrected (flc) HST images are masked and analyzed using a combination of the PyRAF routine astrodrizzle and the photometry package DOLPHOT, an updated version of HSTPHOT .", "The analysis is performed on the full set of images simultaneously, where the locations of stars are found using the statistics of the full stack of images, and the photometry is performed through forcing PSF fitting at all of the star locations on all of the individual exposures.", "The resulting measurements are combined and culled based on signal to noise and measurement quality.", "We then perform a series of artificial star tests, whereby a star of known color and magnitude is inserted into the images and the photometry routine is rerun to assess whether the star was recovered, and how the output magnitude compared to the input.", "This exercise is repeated $10^{5}$ times to build up statistics on completeness, photometric bias, and photometric error, as a function of color and magnitude.", "For example, the artificial stars showed that our completeness falls below 50% at F606W=$27.9$ and F814W=$27.1$ , and the uncertainties at those magnitudes are $27.9^{+0.4}_{-0.2}$ and $27.1^{+0.3}_{-0.2}$
for F606W and F814W, respectively.", "The asymmetric uncertainties are due to the bias of faint stars being measured brighter than their true flux due to crowding effects.", "The X-ray data used were three Chandra and six XMM-Newton observations.", "The information about each observation can be seen in tab:xrayobs.", "The SNR is near two other sources, as shown in Figure REF ; however, it is separated enough from these ($30.1$ and $50.1$ arcsec for the two objects) that we were able to mask them out as shown in the figure.", "We discuss the details of these observations below." ], [ "Chandra Data", "We extracted the spectroscopic data using Ciao v$.4.6.7$http://cxc.harvard.edu/ciao/ and CALDB v$4.1$http://cxc.harvard.edu/caldb/.", "Since the source was unresolved in X-rays, we adopted the point-source extraction method for the spectrum from the specextract commandhttp://cxc.harvard.edu/ciao/threads/pointlike/.", "We have a total of three observations using the ACIS detector totaling an exposure of 191 ks (see tab:xrayobs).", "For this, we selected regions centered on S26, which enclosed the entire source within a circle of radius $15.4$ arcsec.", "The background region was centered around a nearby empty patch of sky that had no sources within a circle of radius $59.8$ arcsec such that the area for the background region was $\\sim 15$ times larger than the area for the source region.", "The source region was limited to only $15.4$ arcsec because we wanted to maximize the extraction region yet there were other nearby sources that had to be avoided and not just masked out (see fig:xrayextraction).", "Table: Observational information of S26 in X-Ray and optical energy bandsFigure:                                                                                                            Left: X-ray image of the PN detector for the XMM-Newton observation 0112800101 over the 0.2-3.00.2-3.0 keV energy range.", "The large green circle has a radius of 82.782.7 arcsec, the 
green dotted circles have a radius of 23.923.9, 30.530.5, and 13.113.1 arcsec from left to right, and the white circle has a radius of 21.821.8 arcsec.", "Right: X-ray image of the Chandra observation 12238 over the 0.2-3.00.2-3.0 keV range.", "The green circle has a radius of 59.859.8 arcsec while the white circle has a radius of 15.415.4 arcsec.", "Source extraction region is depicted as the white circle.", "Background extraction regions are depicted as the green circles with the dashed circles corresponding to regions that were masked from the background extraction region due to the sources within.", "The larger circle for the background extraction region was shifted for each observation and each detector for the XMM-Newton data in order to minimize the amount of the chip gap in the background region.", "For both of these images, we used a bin-size of 16 detector pixel, a min-max log scale, and a Gaussian smoothing of radius 3 in DS9.Initially, we reprocessed the data from evt2 using the Ciao command chandra_reprohttp://cxc.harvard.edu/ciao/ahelp/chandra_repro.html using the default settings—observation 12238 had to be reprocessed with check_vf_pha set to true in order to properly reprocess the VFAINT data while the other two observation had check_vf_pha set to false.", "We then used the region files described above in specextract.", "The binning used for both the source and background files was the standard binning practice for specextract; namely, the data was grouped in such a way so each bin had a minimum of 15 counts." 
], [ "XMM-Newton Data", "We reduced the XMM-Newton data using SAS v$15.0$http://xmmssc-www.star.le.ac.uk/SAS/ and using the xmmselect commandhttps://heasarc.gsfc.nasa.gov/docs/xmm/abc/node9.html.", "We have a total of six observations using the MOS1, MOS2, and PN detectors for each observation that totaled an effective exposure of $262.4$ ks for MOS1, $273.2$ ks for MOS2, and $243.1$ ks for PN (see tab:xrayobs).", "We created an image of the patch of sky that contained our source while applying filters and flags designed to remove artifacts.", "For the PN, MOS1, and MOS2 detectors, we selected the events with PATTERN in the $0-4$ range, set PI to be the preferred pulse height of the event with the range being between 200 and 3000 eV, and set 0xfa0000 to 0 to further clean up the images.", "We also used the standard practice of setting the #XMMEA_EP flag for the PN detector and #XMMEA_EM for the MOS1 and MOS2 detectors.", "To filter time intervals with high background counts, we extracted light curves from the three detectors in each of the six observations over the $0.2-3.0$ keV energy range.", "The command gtibuild was used to create Good Time Intervals (GTIs) for each observation, which filtered out data that had any sharp peaks in the light curve.", "Then, evselect was used to filter the data based on the GTIs created with gtibuild.", "The sharp peaks in the light curves were removed in order to obtain the data that corresponded to the unflared time intervals which assists with our spectral fits that were taken.", "We then extracted the spectra for our source and background by using the filtered event files via the xmmselect command.", "The xmmselect command would then create a PI file and a filtered image for both our source and the background regions.", "The source region used was a circle centered on the SNR with a radius of $21.8$ arcsec that was limited to this size due to nearby sources (see fig:xrayextraction).", "The background region was chosen near the 
SNR, avoiding other nearby sources, and any chip gaps.", "The area of the background region was the same for each observation and each detector and was $\\sim 11.4$ times larger than the source region area.", "After the source and background spectra were extracted, we created the associated RMF and ARF files using the rmfgen and arfgen commands.", "We then ran grppha to group the files together similar to the grouping for the Chandra data, which used dmgroup, but with these bins having a minimum of 1 count to eliminate any empty bins." ], [ "X-Ray Spectral Analysis", "After the spectra were extracted, we fit them using XSPEC v$.12.9.0$ nhttps://heasarc.gsfc.nasa.gov/xanadu/xspec/.", "Following the technique of , we first fit both a sky and instrument background model to our background spectra.", "This consisted of a pair of absorbed thermal plasma components as the sky background for both the XMM-Newton and Chandra data.", "The instrument background model was different for each of the detectors.", "For the PN detector, we used a broad Gaussian at 0 keV and a broken power law, while for the MOS1 and MOS2 detectors we used a pair of broken power laws.", "For Chandra, we used a combination of power laws and Gaussians.", "For more information about the background model, see .", "After we acquired the best-fit background model, we fit the data for the source including the background model components and setting the sky and instrument background parameters to the fitted values.", "We scaled the normalization of the background by the relative size of the source data region to the background data region.", "We also included tbabs and pshock components when fitting the source with a metal abundance of $0.5$ relative to solar abundance (calculated using the metallicity gradient found from and using a distance of $3.7$ kpc between S26 and the center of NGC 300)—this metal abundance was accounted for in the pshock model component and not the tbabs component.", "The tbabs 
component modeled the interstellar absorption along the line of sighthttps://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node251.html assuming a minimum value equal to the Galactic $N_{H}$ (foreground value of $3.0 \\times 10^{20}$ cm$^{-2}$ from COLDENhttp://cxc.harvard.edu/toolkit/colden.jsp) and any addition to that value being due to the column density from NGC 300, while the pshock component modeled X-ray emission from a constant temperature plane-parallel shock plasmahttps://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node206.html.", "We fit the six XMM-Newton observations and the three Chandra observations simultaneously to maximize the amount of counts for the SNR and restricted the energy range to be between $0.3$ and $2.0$ keV—including any higher energy data would just add the background noise because there was no significant detection above $2.0$ keV for S26.", "The only free parameters for the source model were the column density, temperature, and normalizations.", "The normalization for the source data for each observation was initially free, but the column density and temperature were the same for each observation because we wanted to allow the fit to normalize each observation separately to improve the fit.", "We also attempted fitting with the normalizations the same for each observation, with the normalizations the same but changed the energy range to be between $0.3$ and $5.0$ keV, and with the normalizations the same but the temperatures allowed to vary from observation to observation and detector to detector.", "There was also a model that included a powerlaw component with the normalizations freed and a model that only consisted of tbabs and powerlaw components with the normalizations freed.", "All of these various fits utilized Cash statistics (see ).", "Radio observations of S26 from and using data from VLA and ATCA have revealed a counterpart to the optically detected SNR.", "The data was obtained on 1993 May 22 for the 6 cm wavelength data and 
on 1998 June 13 for the 20 cm wavelength data.", "The beam size was $\\approx 4\"$ at 6 cm (4885 MHz) and $\\approx 6\"$ at 20 cm (1465 MHz) with an rms sensitivity of 36 $\\mu $Jy at 6 cm and 60 $\\mu $Jy at 20 cm.", "S26 was detected at 20 cm, but a counterpart was not detected at 6 cm— and measured roughly the same value for the flux density of the counterpart at 20 cm (namely $0.22$ mJy).", "From these measurements, both papers gave a value for the radio spectral index $\\alpha $ (defined such that flux density $S_{\\rm {\\nu }}$ $\\propto $ $\\nu ^{-\\alpha }$ ) to be $>0.70 \\pm 0.05$ (for the purpose of this paper we adopted a lower limit to $\\alpha $ of $0.65$ and just calculated the various physical parameters using this limit), which is consistent with synchrotron emission.", "The VLA and ATCA observations lacked the angular resolution to resolve clearly any spatial structure in the radio counterpart to S26.", "In sec:resultsradio, we discuss how this data was utilized to calculate the minimum energy needed to drive synchrotron radiation and the minimum strength of the magnetic field.", "Using the data extracted over the various wavelengths, we are able to measure a variety of physical parameters.", "From the optical data taken with HST, we can constrain the size of the SNR's shock as well as the mass and age of the progenitor star.", "From the X-ray data taken with Chandra and XMM-Newton, we can fit the X-ray spectra with various models to measure the best-fit temperature of the SNR as well as the density of surrounding gas using the normalization factor.", "From the radio data, we can measure the minimum magnetic field strength as well as the minimum energy needed to drive synchrotron radiation.", "All of these measurements are described in detail below."
], [ "Size from HST Data", "We positioned ellipses by eye to a color image of S26 to find the size of the SNR; the color image is shown in fig:optsize.", "The ellipse most likely to correspond to the size of S26 followed the middle of the shell edge.", "The lower limit was set to the inner edge of the shell while the upper limit was set to the outer edge of the shell.", "The edge of the shell was determined by using the DS9 tool Contours and adjusting the contour level to find the inner, outer, and middle parts of the shell edge for our ellipses that were placed by eye (see fig:optcontour for the middle part of the shell edge contour).", "The semimajor axis is $0.78 \\pm 0.10$ arcsec and the semiminor axis is $0.69^{+0.14}_{-0.12}$ arcsec, corresponding to $7.5 \\pm 1.0\\ $ and $6.7^{+1.2}_{-1.4}\\ $ , respectively.", "We also found that S26 has an eccentricity of $0.47$ .", "Figure: Image of the F606W HST data with the contour used to determine the middle part of the shell edge (i.e.", "the best-fit value for the size of the SNR).We corroborated the ellipse method by also calculating the radii using the DS9 tool Projections aligned with the semimajor and semiminor axes (see fig:optsize).", "This tool can be used to find the surface brightness function for the projected line.", "The semimajor and semiminor axes' brightness profiles give us a plot of the brightness versus distance that were used to find the semimajor and semiminor axes values—$0.79^{+0.19}_{-0.14}$ arcsec and $0.71^{+0.18}_{-0.24}$ arcsec, respectively.", "These values were found after performing third-order Taylor series expansions on the outer edges of the surface brightness functions.", "The values calculated are within the errors from the ellipse measurements.", "Previously, the size was estimated to have a diameter of 33 using a distance assumption of 2.1 to NGC 300 .", "The HST data shows the SNR is a factor of $\\sim 2$ times smaller, which is likely due to the improved spatial resolution— 
had a resolution of $\\sim 1\"$ while the data from HST has a resolution of $\\sim 0.1\"$ ." ], [ "Mass of the Progenitor", "We use the well-established technique of fitting the CMD of the resolved stellar populations within 50 pc ($5.2\"$ ) of the SNR with stellar evolution models using the fitting package MATCH , , to constrain the age of the SNR progenitor , , , , .", "We begin by assuming that the progenitor of S26 was a massive star ($>7$  M$_{\\odot }$ ) that underwent core-collapse.", "In addition, we assume that nearby young stars were associated with the progenitor star.", "With these assumptions, we can use the ages of the nearby stars, as determined from their CMD to measure the most likely age (and inferred mass) of the progenitor.", "Figure:                                                                                                            Left: Cumulative stellar mass fraction vs. age and mass used to constrain the age of the SNR progenitor star.", "The dashed vertical line marks the most likely age of the population surrounding the SNR, and the gray area shows the uncertainties on the cumulative fraction at each age.", "The red shaded region shows the median age of the population and the uncertainty.", "Right: Star formation rate vs age from the fit, along with the uncertainties.", "A clear peak at 8 Myr is detected.", "This distribution results in the cumulative distribution and errors shown in left, where the fraction makes a rapid rise at 8 Myr ago.To determine the ages of the nearby stars, we begin with a CMD of the sample of 1576 stars within 50 pc of the SNR center.", "These are shown with the red points in Figure REF .", "The grayscale in the plot shows the remaining $376,000$ stars in the field, which were scaled by area and used as a background sample for the fitting.", "The red plume of stars at WFC606W-WFC814W$\\sim 1$ is the red giant branch, and it is made up of old ($>500$ Myr) stars.", "The blue plume at WFC606W-WFC814W$\\sim 0$ 
is the upper main sequence, and it consists of massive ($>3$ M$_{\\odot }$ ) young ($<500$ Myr) stars.", "There are many more old stars than young ones, but the strong upper main sequence presence at this location in NGC 300 suggests that our assumption of a massive progenitor is reasonable.", "We then fit the CMD using the MATCH package , to constrain the age distribution.", "While MATCH returns the ages up to 13 Gyr, we focus on the young component relevant to the SNR.", "We show our results in fig:optmass, where the population clearly shows a strong peak at an age of 8 Myr.", "We can use this age to infer the initial mass of the progenitor star assuming standard single star evolution.", "This age corresponds to the expected lifetime of a 25 M$_{\\odot }$ star.", "The uncertainties of our age distribution, as determined from the MATCH hybridmc package , , are shown by the red shading in Figure REF , and correspond to $8 \\pm 1$ Myr and $25^{+1}_{-5}$ M$_{\\odot }$ , assuming the Padova stellar evolution models , .", "This, therefore, suggests that this is one of the most massive SNR progenitors found in the Local Volume.", "The CMD associated with this population as well as the differential extinction can also be seen in fig:optmassdav and they show that the result is not sensitive to the amount of differential extinction associated with this population.", "Figure: Left: Cumulative Stellar Mass Fraction versus age (Lookback Time) for several different assumed dAv values.", "The colors represent different fits, and the result appears insensitive to the choice of dAv.", "Middle: Plot depicting the fit values (lower is a better fit) based on Poisson maximum likelihood values , , for each assumed dAv.", "The best-fit is for dAv $= 0$ .", "This value was adopted, and the final result is shown in Figure REF .
Right: Color Magnitude Diagram of the region around N300-S26 used to find the star formation history in fig:optmass.", "The red points are the 1576 stars within 50 pc of the SNR center, and the grayscale shows the distribution of the remaining 376,000 stars in the field, which were scaled by area and applied as a background sample during the fitting.", "The dashed line shows the 50% completeness limit.", "The area above this line was included in the fit." ], [ "X-Ray Spectra", "We simultaneously fit nine observations (total of 21 different data sets due to the various detectors) from Chandra and XMM-Newton; each of the effective exposure times can be found in tab:xrayobs.", "We found that our best joint fit to the extracted spectra had a hydrogen column density of ($9.7^{+6.4}_{-4.8}$ )$\\times 10^{20}$ cm$^{-2}$ which is slightly larger than the value for the foreground, $3.0 \\times 10^{20}$ cm$^{-2}$ , obtained from COLDEN,http://cxc.harvard.edu/toolkit/colden.jsp, suggesting a small amount of extinction due to NGC 300 as well as possibly from S26.", "For each data set, we first fit a model to the extracted background data and found the normalization values for the sky and instrument modeled backgrounds.", "Then, we fit all of the source data with the modeled background data.", "The background models were all frozen to the values determined from their individual fits.", "We fit the source data to a model consisting of tbabs and pshock components using a distance of 2 and a metal abundance of $0.5$ (calculated using the metallicity gradient found from and using a distance of $3.7$ kpc between S26 and the center of NGC 300).", "The fitted values can be found in Table REF for the fit with all of the normalizations tied together (see fig:xrayspectra for the plotted spectra).", "The fit quality for the tied normalizations was excellent, showing robust cross-calibration of the extractions.", "We also fit the spectra with the normalizations not tied
together, the normalizations tied together and extending the energy range to be between $0.3$ and $5.0$ keV, and the normalizations tied together but the temperatures allowed to vary from observation to observation and detector to detector (see tab:xraytemp for a breakdown of the varying temperatures), but found no significant change in the resulting parameter values (see tab:xrayfits).", "The values from the models in which normalizations were not tied together did result in significantly different parameters, but the fit was not significantly better and the tied normalizations are more physically plausible.", "Table: Acknowledgements" ] ]
1906.04531
[ [ "Approximate Variational Inference Based on a Finite Sample of Gaussian\n Latent Variables" ], [ "Abstract Variational methods are employed in situations where exact Bayesian inference becomes intractable due to the difficulty in performing certain integrals.", "Typically, variational methods postulate a tractable posterior and formulate a lower bound on the desired integral to be approximated, e.g.", "marginal likelihood.", "The lower bound is then optimised with respect to its free parameters, the so called variational parameters.", "However, this is not always possible as for certain integrals it is very challenging (or tedious) to come up with a suitable lower bound.", "Here we propose a simple scheme that overcomes some of the awkward cases where the usual variational treatment becomes difficult.", "The scheme relies on a rewriting of the lower bound on the model log-likelihood.", "We demonstrate the proposed scheme on a number of synthetic and real examples, as well as on a real geophysical model for which the standard variational approaches are inapplicable." 
], [ "Introduction", "Bayesian inference is becoming the standard mode of inference as computational resources increase, algorithms advance and scientists across fields become aware of the importance of uncertainty.", "However, exact Bayesian inference is hardly ever possible whenever the model likelihood function deviates from mathematically convenient forms (i.e.", "conjugacy).", "Deterministic approximations are constantly gaining ground on the ubiquitous and computationally intensive Monte Carlo sampling methods that are capable of producing high quality approximations to otherwise intractable quantities such as posterior densities or marginal likelihoods.", "Often, however, deterministic schemes are tailored to a particular model, or family of models, and hence previously derived methods might not be transferable to a new setting (e.g.", "in [4] a specialised algorithm for Bayesian inference in neural networks is considered).", "This introduces practical difficulties, mostly in fields beyond machine learning, whenever an implementation of Bayesian inference is required particularly in early explorative stages where the model formulation is likely to keep changing.", "In this work we introduce a scheme for approximate Bayesian inference that aims to be general enough so that it can accommodate a variety of models.", "The proposed scheme is conceptually simple and requires only that the gradient of the model log-likelihood with respect to its parameters be available.", "Specifically, we consider the task of computing a global Gaussian approximation $q(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$}|\\mu ,\\Sigma )$ to a given intractable posterior distribution representing a probabilistic model by maximizing a standard variational lower bound [14].", "This lower bound involves the expectation of the log-likelihood of the model distribution with respect to the approximating distribution $q(\\mbox{$w$})$ .", "To enable the computation of these expectations either in 
closed form or through computationally tractable numerical approximations, the likelihoods are typically restricted to conditionally factorized forms.", "This step of the approach specifically depends on the model at hand.", "By contrast, our conceptually simple method presented below is more generally applicable to various models in the same way.", "We demonstrate this by working out examples for a variety of models.", "In particular, we demonstrate empirically that our approach results in Gaussian approximations that are superior to the basic Laplace approximation [6], which is the typical objective of variational approximation schemes.", "A formal comparison to related state-of-the-art methods for computing improved Gaussian approximations, e.g.", "through the nested Laplace approximation [18] or by expectation propagation [9], is beyond the scope of this paper, however.", "Our paper is organized as follows: in Section we introduce in general terms the proposed scheme.", "Starting from a standard formulation, that typically variational methods use, we postulate a Gaussian posterior $q(\\mbox{$w$})$ and show how to form an approximation to the lower bound of the marginal log-likelihood.", "The obtained approximation allows us to optimise the variational parameters $\\mu $ and $\\Sigma $ of $q(\\mbox{$w$})$ by gradient optimisation.", "For the reader that wishes to refresh her/his memory or obtain a more detailed explanation of the equations presented in Section , we refer to [5], [21].", "In Section we demonstrate the proposed scheme on a number of applications and compare it against exact inference, Laplace and variational approximations.", "Specifically, in Subsection REF we show a visual comparison of approximating flexible bivariate densities using the Laplace approximation and the proposed scheme.", "In Subsection REF we apply our approach on the problem of Bayesian linear regression which actually does admit an analytical and exact solution.", "This is 
useful as it allows us to empirically verify the correctness of our scheme against the posterior obtained by exact inference.", "Subsequently, in Subsections REF and REF we compare the proposed scheme with approaches that take into account the functional forms of classification problems.", "We show that despite its general formulation, the proposed scheme performs up to par in this setting without exploiting any such problem specific knowledge.", "In Subsection REF we show how a change in the model likelihood of probabilistic principal component analysis [20], that renders inference problematic, can easily be accommodated by the proposed scheme.", "This demonstrates the versatility of our approach in handling such cases in a direct and simple manner.", "Finally, in Subsection REF we show how the proposed scheme can be applied beyond the usual statistical models, namely on a real geophysical model [7].", "We believe that the proposed method raises a range of interesting questions and directions of research; we briefly discuss them in Section ." 
], [ "Proposed Scheme for Approximate Variational Inference", "Assume an observed dataset of inputs $\\mbox{$X$}=(x_1,\\dots ,x_N)^T$ and outputs $\\mbox{$Y$}=(y_1,\\dots ,y_N)^T$ modelled by a model $f$ parametrised by $\\mbox{$w$}\\in \\mathbb {R}^{M}$ .", "For observed outputs corrupted by Gaussian noise of precision $\\beta $ , the following likelihoodBesides the likelihood based on the Gaussian density, others can also be accommodated as shown in Sections REF -REF .", "The choice of the Gaussian is made for ease of exposition.", "arises: $p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$},\\beta ) &=& \\prod _{n=1}^N{\\mathcal {N}}(y_n|f(x_n;\\mbox{$w$}), \\beta ^{-1}) \\\\&=& {\\mathcal {N}}(\\mbox{$Y$}|f(\\mbox{$X$};\\mbox{$w$}), \\beta ^{-1} \\mbox{$I$}_N) \\ ,$ where $f(\\mbox{$X$};\\mbox{$w$})=(f(x_1;\\mbox{$w$}),\\dots ,f(x_N;\\mbox{$w$}))$ is the vector of model outputs calculated on all inputs $x_n$ .", "Furthermore, assume a Gaussian prior on the parameters $\\mbox{$w$}$ : $p(\\mbox{$w$}| \\alpha ) = {\\mathcal {N}}(\\mbox{$w$}|\\mbox{$0$},\\alpha ^{-1}\\mbox{$I$}_M) \\ .$ Our wish is to approximate the true posterior of the parameters $p(\\mbox{$w$}|\\mbox{$Y$},\\mbox{$X$},\\alpha ,\\beta )$ .", "We do not make any assumptions about the model having conjugate priors for the parameters $\\mbox{$w$}$ .", "Model $f$ may have a complex functional form that hinders exact Bayesian inference, or even the application of an approximate Bayesian scheme such as VBEM [5] with a factorised prior.", "However, we do have to make an assumption on the form of the posterior.", "We choose to postulate an approximate Gaussian posterior for the parameters $w$: $q(\\mbox{$w$}) = {\\mathcal {N}}(\\mbox{$w$}|\\mbox{$\\mu $},\\mbox{$\\Sigma $}) \\ .$ Parameters $\\mbox{$\\mu $}\\in \\mathbb {R}^{M}$ and $\\mbox{$\\Sigma $}\\in \\mathbb {R}^{M\\times M}$ of the posterior are called variational parameters.", "For reasons that will become obvious shortly, we choose to parametrise covariance 
matrix as $\\mbox{$\\Sigma $}=\\mbox{$L$}\\mbox{$L$}^T$ with $\\mbox{$L$}\\in \\mathbb {R}^{M\\times M}$ .", "The postulated posterior now reads: $q(\\mbox{$w$}) = {\\mathcal {N}}(\\mbox{$w$}|\\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T) \\ .$ Hence, the actual variational parameters are $\\mbox{$\\mu $}$ and $\\mbox{$L$}$ ." ], [ "Approximate Lower Bound", "The first step in introducing the proposed scheme is writing the marginal log-likelihood and lower-bounding it in the standard way using Jensen's inequality [5]: $\\log p(\\mbox{$Y$}|\\mbox{$X$},\\alpha , \\beta )&=& \\log \\int p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$}, \\beta ) p(\\mbox{$w$}|\\alpha )\\mbox{$dw$} \\\\&=& \\log \\int \\frac{q(\\mbox{$w$})}{q(\\mbox{$w$})} p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$}, \\beta ) p(\\mbox{$w$}|\\alpha )\\mbox{$dw$} \\\\&\\ge & \\int q(\\mbox{$w$}) \\log \\frac{p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$}, \\beta ) p(\\mbox{$w$}|\\alpha )}{q(\\mbox{$w$})}\\mbox{$dw$} \\\\&=& \\int q(\\mbox{$w$}) \\log p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$}, \\beta ) \\mbox{$dw$} \\\\& & + \\int q(\\mbox{$w$}) \\log \\frac{p(\\mbox{$w$}|\\alpha )}{q(\\mbox{$w$})}\\mbox{$dw$} \\\\&=& \\underbrace{ \\int q(\\mbox{$w$}) \\log p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$w$}, \\beta ) \\mbox{$dw$} }_{(1)} \\\\& & - \\underbrace{ \\int q(\\mbox{$w$}) \\log \\frac{q(\\mbox{$w$})}{p(\\mbox{$w$}|\\alpha )}\\mbox{$dw$} }_{(2)} \\\\&\\triangleq & \\mathcal {L}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha , \\beta ) \\ .$ An alternative motivation of the lower bound is provided in [21].", "Maximising the lower bound $\\mathcal {L}$ in Eq.", "(REF ) with respect to the free variational parameters $\\mu $ and $L$ of $q(\\mbox{$w$})$ results in the best Gaussian approximation to the true posterior.", "Term $(1)$ , the integrated likelihood in Eq.", "(REF ), is a potentially intractable integral.", "We approximate term $(1)$ using Monte Carlo sampling: $\\frac{1}{S} \\sum _{s=1}^S \\log p(\\mbox{$Y$}|\\mbox{$X$}, \\mbox{$w$}_{(s)}, \\beta ) 
\\ ,$ where we draw $S$ samples $\\mbox{$w$}_{(s)}$ from the postulated posterior $q(\\mbox{$w$})$ .", "Due to the sampling, however, the variational parameters no longer appear in the approximation Eq.", "(REF ).", "Nevertheless, it is possible to re-introduce them by rewriting the sampled weights $\\mbox{$w$}_{(s)}$ (this is where the parametrisation $\\mbox{$\\Sigma $}=\\mbox{$L$}\\mbox{$L$}^T$ becomes useful) as: $\\mbox{$w$}_{(s)} = \\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)} \\ ,$ where variables $\\mbox{$z$}_{(s)}$ are sampled from the standard normal $\\mbox{$z$} \\sim \\mathcal {N}(\\mbox{$0$},\\mbox{$I$}_M)$ .", "We summarise all samples $\\mbox{$z$}_{(s)}$ by $Z=\\lbrace \\mbox{$z$}_{(1)}, \\dots ,\\mbox{$z$}_{(S)}\\rbrace $ .", "Hence, we can rewrite Eq.", "(REF ) as: $\\frac{1}{S} \\sum _{s=1}^S \\log p(\\mbox{$Y$}|\\mbox{$X$},\\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)},\\beta ) \\ .$ Thus, the variational parameters $\\mu $ and $L$ are now made explicit in this approximation.", "We expand the approximation of term $(1)$ further: $& &\\frac{1}{S} \\sum _{s=1}^S \\log p(\\mbox{$Y$}|\\mbox{$X$} ,\\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}, \\beta ) = \\\\& & \\frac{1}{S} \\sum _{s=1}^S \\log \\mathcal {N}(\\mbox{$Y$}| f(\\mbox{$X$}; \\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}),\\beta ^{-1}\\mbox{$I$}_N) = \\\\& & \\frac{1}{S} \\sum _{s=1}^S \\frac{N}{2}\\log (\\beta ) - \\frac{\\beta }{2}\\Vert \\mbox{$Y$}-f(\\mbox{$X$}; \\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}) \\Vert ^2 \\\\& & +\\ const \\ .$ Term $(2)$ in Eq.", "(REF ) is simply the Kullback-Leibler divergence (KLD) between the two Gaussian densities $q(\\mbox{$w$})$ and $p(\\mbox{$w$}|\\alpha )$ , and can be calculated in closed form: $\\frac{1}{2}\\bigg ( tr(\\alpha \\mbox{$L$}^T\\mbox{$L$}) + \\alpha \\mbox{$\\mu $}^T \\mbox{$\\mu $} - M - \\ln | \\alpha \\mbox{$L$}\\mbox{$L$}^T | \\bigg )\\ .$ We can now put together the approximated term $(1)$ in Eq.", "(REF ) and the KLD 
term $(2)$ in Eq.", "(REF ), to formulate the following objective function ${\\mathcal {L}}_{(FS)}$ (the subscript $FS$ stands for finite sample).", "Discarding constants, the proposed approximate lower bound reads: $& & {\\mathcal {L}}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha , \\beta , Z) = \\\\& & \\frac{1}{S} \\sum _{s=1}^S \\frac{N}{2}\\log (\\beta ) - \\frac{\\beta }{2}\\Vert \\mbox{$Y$}-f(\\mbox{$X$}; \\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}) \\Vert ^2 \\\\& & - \\frac{1}{2}\\bigg ( tr(\\alpha \\mbox{$L$}\\mbox{$L$}^T) + \\alpha \\mbox{$\\mu $}^T \\mbox{$\\mu $} - \\ln | \\alpha \\mbox{$L$}\\mbox{$L$}^T | \\bigg ) \\ .$ Objective ${\\mathcal {L}}_{(FS)}$ is an approximation to the intractable lower bound ${\\mathcal {L}}$ in Eq.", "(REF ).", "It consists of two parts, the approximation to the integrated likelihood (term $(1)$ ) and the exact KLD (term $(2)$ ).", "The proposed lower bound ${\\mathcal {L}}_{(FS)}$ becomes more accurate when the number $S$ of samples $\\mbox{$z$}_{(s)}$ is large." 
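The objective ${\mathcal {L}}_{(FS)}$ above is straightforward to evaluate numerically. The following is a minimal NumPy sketch, not code from the paper: the helper name `lower_bound_fs` is ours, `f(X, w)` stands for an arbitrary user-supplied model, and constants are dropped exactly as in the text.

```python
import numpy as np

def lower_bound_fs(mu, L, alpha, beta, Z, X, Y, f):
    """Approximate lower bound L_FS (additive constants dropped).

    mu: (M,) variational mean; L: (M, M) factor with Sigma = L L^T;
    Z: (S, M) fixed standard-normal samples; f(X, w) -> (N,) model outputs.
    """
    S, M = Z.shape
    N = len(Y)
    # Term (1): Monte Carlo estimate of the expected log-likelihood,
    # with the variational parameters re-introduced via w = mu + L z.
    term1 = 0.0
    for z in Z:
        w = mu + L @ z
        resid = Y - f(X, w)
        term1 += 0.5 * N * np.log(beta) - 0.5 * beta * (resid @ resid)
    term1 /= S
    # Term (2): KLD between q(w) = N(mu, LL^T) and p(w) = N(0, I/alpha).
    Sigma = L @ L.T
    kl = 0.5 * (alpha * np.trace(Sigma) + alpha * (mu @ mu) - M
                - np.linalg.slogdet(alpha * Sigma)[1])
    return term1 - kl
```

Because the samples $z_{(s)}$ are drawn once and then held fixed, the returned value is a deterministic, differentiable function of $\mu$ and $L$, which is what makes gradient optimisation possible.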
], [ "Optimisation of Approximate Lower Bound", "Gradients of ${\\mathcal {L}}_{(FS)}$ can be calculated with respect to the variational parameters $\\mu $ and $L$ in order to find the best approximating posterior $q(\\mbox{$w$})$ : $& & \\nabla _{\\mbox{$\\mu $}} \\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha , \\beta , Z) = \\\\& & \\frac{1}{S} \\sum _{s=1}^S \\beta (\\nabla _{\\mbox{$w$}} f)_{(s)}^T (\\mbox{$Y$} - f (\\mbox{$X$};\\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}))- \\alpha \\mbox{$\\mu $} \\ ,$ $& & \\nabla _{\\mbox{$L$}} \\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha , \\beta , Z) = \\\\& & \\frac{1}{S} \\sum _{s=1}^S \\beta (\\nabla _{\\mbox{$w$}} f)_{(s)}^T (\\mbox{$Y$}-f (\\mbox{$X$};\\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)})) \\mbox{$z$}_{(s)}^T \\\\& & - \\alpha \\mbox{$L$} + {\\mbox{$L$}^+}^T \\ ,$ where $\\nabla _{\\mbox{$w$}} f$ denotes the Jacobian matrix of $f$ and $\\mbox{$L$}^+$ is the pseudo-inverse of $L$ , which arises from the derivative of the log-determinant (see derivative rule 55 in [15]) in Eq.", "(REF ).", "Analogous equations for the case of exact variational inference can be found in [5].", "Given the current posterior $q(\\mbox{$w$})$ , hyperparameters $\\alpha $ and $\\beta $ have analytical updates: $\\alpha = \\frac{M}{\\mbox{$\\mu $}^T \\mbox{$\\mu $} + \\mathrm {tr}(\\mbox{$L$}\\mbox{$L$}^T)} \\ ,$ $\\beta = \\frac{SN}{ \\sum _{s=1}^S \\Vert \\mbox{$Y$}- f (\\mbox{$X$} ; \\mbox{$\\mu $} + \\mbox{$L$}\\mbox{$z$}_{(s)}) \\Vert ^2 } \\ .$ Again, analogous equations for the above hyperparameter updates can be found in [21].", "The proposed scheme is summarised with the pseudocode in Algorithm REF .", "Convergence was established in the experiments by checking whether the difference in the objective function value ${\\mathcal {L}}_{(FS)}$ between two successive iterations is less than $10^{-4}$ .", "Gradient optimisation of $\\mu $ and $L$ was carried out using the scaled conjugate gradient algorithm 
[13].", "The outcome of the above scheme is the approximation ${\\mathcal {L}}_{(FS)}$ to the marginal log-likelihood $\\log p(\\mbox{$Y$}|\\mbox{$X$},\\alpha ,\\beta )$ (also called log-evidence).", "The scheme imparts us with the approximate Gaussian posterior $q(\\mbox{$w$})={\\mathcal {N}}(\\mbox{$w$}|\\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T)$ .", "[ht] Proposed approximate variational inference Initialisation: Initialise variational parameters e.g.", "$\\mbox{$\\mu $}\\sim \\mathcal {N}(\\mbox{$0$},\\mbox{$I$})$ and $\\mbox{$L$} = c\\mbox{$I$}$ .", "% e.g.", "$c=0.1$ Initialise hyperparameters e.g.", "$\\alpha =\\beta =0.1$ .", "Alternative to above, ML estimates may be useful as initial values.", "Draw $S$ samples $\\mbox{$z$} \\sim \\mathcal {N}(0,\\mbox{$I$})$ that remain fixed throughout the algorithm.", "Alternating optimisation of $\\mathcal {L}_{(FS)}$ : $\\mbox{iter}=1:\\mbox{MaxIter}$ % e.g.", "$MaxIter=1000$ Record $L_{prv} \\leftarrow \\mathcal {L}_{(FS)}$ .", "Optimise $\\mu $ for $J$ iterations using the gradient in Eq.", "(REF ).", "% e.g.", "$J=10$ Optimise $L$ for $J$ iterations using the gradient in Eq.", "(REF ).", "Update $\\alpha $ and $\\beta $ using Eqs.", "(REF ) and (REF ) respectively.", "Record $L_{new} \\leftarrow \\mathcal {L}_{(FS)}$ .", "Break if e.g.", "$L_{new} - L_{prv} <10^{-4}$ .", "Result: Lower bound $\\mathcal {L}_{(FS)}$ to marginal log-likelihood.", "Gaussian Posterior $\\mathcal {N}(\\mbox{$\\mu $},\\mbox{$LL$}^T)$ ." 
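The gradient expressions and the closed-form hyperparameter updates above can be sketched as follows. This is an illustrative NumPy translation under our own naming (`grads_and_updates`, `jac`), not the paper's implementation; the paper optimises $\mu$ and $L$ with scaled conjugate gradients, whereas this sketch only evaluates one pass of the quantities that such an optimiser would consume.

```python
import numpy as np

def grads_and_updates(mu, L, alpha, beta, Z, X, Y, f, jac):
    """Gradients of L_FS w.r.t. mu and L, plus the analytical alpha/beta
    updates.  jac(X, w) -> (N, M) Jacobian of the model outputs f."""
    S, M = Z.shape
    N = len(Y)
    g_mu = -alpha * mu
    g_L = -alpha * L + np.linalg.pinv(L).T  # (L^+)^T from the log-det term
    sq_err = 0.0
    for z in Z:
        w = mu + L @ z
        r = Y - f(X, w)                     # residual at this sample
        J = jac(X, w)
        g_mu += beta * (J.T @ r) / S
        g_L += beta * np.outer(J.T @ r, z) / S
        sq_err += r @ r
    alpha_new = M / (mu @ mu + np.trace(L @ L.T))
    beta_new = S * N / sq_err
    return g_mu, g_L, alpha_new, beta_new
```

For a linear model $f(X;w)=Xw$ the Jacobian is simply $X$, which gives an easy analytical check of the gradient.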
], [ "Monitoring Generalisation Performance", "For large values of $S$ the proposed lower bound $\\mathcal {L}_{(FS)}$ approximates the true bound $\\mathcal {L}$ in Eq.", "(REF ) closely.", "Therefore, we expect that optimising $\\mathcal {L}_{(FS)}$ will yield approximately the same optimal variational parameters $\\mbox{$\\mu $}, \\mbox{$L$}$ as the optimisation of the intractable true lower bound $\\mathcal {L}$ would.", "The proposed scheme exhibits some fluctuation as $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ is a function of the random set of samples $Z$ .", "Hence, if the algorithm, as summarised in Algorithm REF , is run again, a new set $Z^{(new)}$ will be drawn and a different function $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{(new)})$ will be optimised.", "However, for large enough $S$ the fluctuation due to $Z$ will be innocuous and approximately the same variational parameters will be found for any drawn $Z$ (discounting other sources of randomness like initialisation, etc.).", "If, on the other hand, we choose a small $S$ , then the variational parameters will overly depend on the small set of samples $Z$ that happened to be drawn at the beginning of the algorithm.", "As a consequence, $\\mathcal {L}_{(FS)}$ will approximate $\\mathcal {L}$ poorly, and the resulting posterior $q(\\mbox{$w$})$ will also be a poor approximation to the true posterior.", "Hence, the variational parameters will be overfitted to the small set of samples $Z$ that happened to be drawn.", "Naturally, the question arises of how to choose a large enough $S$ in order to avoid overfitting the variational parameters on $Z$ .", "A practical answer to this question is the following: at the beginning of the algorithm we draw a second independent set of samples $Z^{\\prime }=\\lbrace \\mbox{$z$}^{\\prime }_{(1)}, \\dots ,\\mbox{$z$}^{\\prime }_{(S^{\\prime })}\\rbrace $ where $S^{\\prime }$ is preferably a number larger than 
$S$ .", "At each (or every few) iteration(s) we monitor the quantity $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ on the independent sample set $Z^{\\prime }$ (we stress that $Z^{\\prime }$ is not used in training).", "If the variational parameters are not overfitting the drawn $Z$ , then we should see that as the lower bound $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ increases, the quantity $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ should also display a tendency to increase.", "If, on the other hand, the variational parameters are overfitting the drawn $Z$ , then even though $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ is increasing, we will notice that $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ is actually deteriorating.", "This is a clear sign that a larger $S$ is required.", "The described procedure is reminiscent of monitoring the generalisation performance of a learning algorithm on a validation set during training.", "A significant difference, however, is that while validation sets are typically of limited size, here we can set $S^{\\prime }$ arbitrarily large.", "For practical purposes, we found that $S^{\\prime }=5S$ was good enough to detect overfitting.", "An illustration of overfitting the variational parameters is provided in Sec.", "REF ." ], [ "Applications", "In this section we apply the proposed approach to a variety of applications, namely regression, classification, denoising and geophysical modelling.", "In particular, the geophysical example shows how the method can be applied beyond standard statistical models." 
], [ "Fitting Bivariate Posteriors", "We test our proposed scheme on some artificially constructed posteriors by using a flexible parametric class of densities, due to [3], which reads: $f(w_1,w_2) = 2{\\mathcal {N}}(w_1,w_2|0,\\mbox{$I$}_2) \\Phi (h(w_1,w_2)) \\ ,$ where $\\Phi $ is the cumulative distribution function of the standard normal distribution, and in general $h$ is a real-valued function such that $h(-w)=-h(w)$ .", "Here, we take $h$ to be the dot product of a row and column vector, $h(w_1,w_2)=(w_1,w_2,w_1 w_2^2,w_1^2 w_2,w_1^3,w_2^3)\\mbox{$a$}$ as in [3].", "The goal is to find the best Gaussian approximation to instances of Eq.", "(REF ) for different column vectors $a$.", "To that end, we tried to find the best Gaussian using the Laplace approximation and the proposed scheme.", "We used $S=50$ .", "The results are shown in Fig.", "REF .", "The Gaussian approximations are drawn as black dashed curves with their mean marked as a red cross.", "The goodness of each approximation has been measured as the Kullback-Leibler divergence, and it is noted in the respective captions.", "We note that the proposed scheme fares better than the Laplace approximation as the latter evidently focuses on the mode of the target density instead of where the volume of the density lies.", "The KLD in these examples was calculated numerically as there is no closed-form expression for the KLD between a Gaussian and a member of the densities in Eq.", "(REF ).", "Figure: Top $\\mbox{$a$}=(-3,1,-1,-1,-1,-1)^T$ , middle $\\mbox{$a$}=(0,-2,-4,-1,-3,0)^T$ , bottom $\\mbox{$a$}=(1,0,2,1,-1,0)^T$ .", "The KLD values (lower is better) reveal that the proposed scheme fares better than the Laplace approximation." 
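The skewed density $f(w_1,w_2)=2\mathcal{N}(w|0,I_2)\Phi(h(w))$ is easy to evaluate pointwise, which is all that is needed for the numerical KLD computation mentioned above. A small sketch (function name `skew_density` is ours; the oddness $h(-w)=-h(w)$ is what guarantees the density integrates to one):

```python
import math
import numpy as np

def skew_density(w1, w2, a):
    """Skewed bivariate density 2 * N(w|0,I) * Phi(h(w)) with
    h(w) = (w1, w2, w1*w2^2, w1^2*w2, w1^3, w2^3) . a  (note h(-w) = -h(w))."""
    h = np.dot([w1, w2, w1 * w2**2, w1**2 * w2, w1**3, w2**3], a)
    gauss = math.exp(-0.5 * (w1**2 + w2**2)) / (2.0 * math.pi)
    # Standard normal CDF via the error function.
    Phi = 0.5 * (1.0 + math.erf(h / math.sqrt(2.0)))
    return 2.0 * gauss * Phi
```

A numerical KL(target || Gaussian) can then be computed by summing on the same grid used to check normalisation, mirroring the numerical KLD evaluation described in the text.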
], [ "Bayesian Linear Regression", "Bayesian linear regression constitutes a useful example for corroborating that the proposed scheme works correctly as we can compare the obtained posterior $q(\\mbox{$w$})$ to the posterior obtained by exact Bayesian inference [6].", "We consider data targets $y_n$ generated by the equation $y = 2\\cos (x)\\sin (x)-0.1x^2 \\ ,$ with inputs $x_n$ uniformly drawn in the range $[-6,6]$ .", "We add white Gaussian noise to the data targets with a standard deviation of $\\sigma = 0.2$ .", "We calculate a set of radial basis functions on the data inputs $ \\mbox{$\\phi $}_n = [ \\phi (x_n;r,c_1) \\dots \\phi (x_n;r,c_{M-1}) \\ 1]^T $ where $\\phi (x_n;r,c_m) = \\exp (-\\frac{\\Vert x_n-c_m\\Vert ^2}{2r^2})$ .", "The last element 1 in $\\mbox{$\\phi $}_n$ serves as a bias term.", "We set $r=1$ , and adopt the linear model $y=\\mbox{$\\phi $}^T\\mbox{$w$}$ where $\\mbox{$w$}\\in \\mathbb {R}^{M}$ .", "We complete the model by choosing the following densities: Figure: Monitoring generalisation performance.", "Green solid line is $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ , blue dashed line is $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ .", "Overfitting of the variational parameters occurs when the number $S$ of samples $\\mbox{$z$}$ is not large enough.", "Likelihood: $p(\\mbox{$Y$}|\\mbox{$X$}, \\mbox{$w$}, \\beta ) = \\prod _{n=1}^N \\mathcal {N}(y_n | \\mbox{$\\phi $}_n^T \\mbox{$w$},\\beta ^{-1})$ .", "Prior: $p(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$} | \\mbox{$0$},\\alpha ^{-1}\\mbox{$I$}_M)$ .", "Postulated posterior: $q(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$} | \\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T)$ , where $\\mbox{$\\mu $}\\in \\mathbb {R}^{M}$ and $\\mbox{$L$}\\in \\mathbb {R}^{M\\times M}$ .", "We set the number of samples of variables $\\mbox{$z$}$ to $S=100$ .", "We inferred the Gaussian posterior of the weights $w$ using both 
exact Bayesian inference [6] and the proposed scheme.", "In Fig.", "REF we plot the mean predictions as obtained by the two schemes and note that they are very similar, especially in the areas where enough data items are present.", "Similarly, in Fig.", "REF we plot the covariance matrices found by the two schemes and note their close similarity.", "Hence, we conclude that the proposed scheme is in close agreement with the exact Bayesian solution.", "Finally, we demonstrate on the same dataset the effect of overfitting the variational parameters when $S$ is set too low.", "In Fig.", "REF we monitor the lower bounds $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ and $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ , see Sec.", "REF .", "On the left, we run the algorithm for $S=100$ and $S^{\\prime }=500$ : we see that as $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ increases with each iteration, so does $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ .", "This means that the fitted variational parameters $\\mbox{$\\mu $}, \\mbox{$L$}$ generalise well.", "On the right hand side, we run the algorithm for $S=10$ but keep $S^{\\prime }=500$ .", "Here we clearly see that while $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z)$ increases, the lower bound $\\mathcal {L}_{(FS)}(\\mbox{$\\mu $},\\mbox{$L$},\\alpha ,\\beta ,Z^{\\prime })$ is deteriorating.", "This is a clear sign that a larger $S$ is required and that the variational parameters are overfitted." 
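The exact posterior used as the reference in this comparison has the standard closed form for Bayesian linear regression: $S^{-1}=\alpha I + \beta\Phi^T\Phi$ and $m=\beta S\Phi^T Y$. A sketch of the reference computation (helper names `rbf_design` and `exact_posterior` are ours; centre placement and hyperparameter values below are illustrative, not the paper's exact settings):

```python
import numpy as np

def rbf_design(x, centres, r=1.0):
    """RBF features phi_n with a trailing bias term, as in the experiment."""
    Phi = np.exp(-(x[:, None] - centres[None, :])**2 / (2.0 * r**2))
    return np.hstack([Phi, np.ones((len(x), 1))])

def exact_posterior(Phi, Y, alpha, beta):
    """Exact Gaussian posterior N(w | m, S) for Bayesian linear regression;
    the fitted q(w) from the proposed scheme should closely match this."""
    M = Phi.shape[1]
    S_inv = alpha * np.eye(M) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ Y
    return m, S
```

Comparing the mean predictions $\Phi m$ and the covariances side by side is exactly the check performed in the figures of this subsection.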
], [ "Bayesian Logistic Regression", "In this section we apply the proposed scheme to Bayesian logistic regression and compare with the variational approach presented in [11].", "The data are input-label pairs $(\\mbox{$x$},y)$ with $y\\in \\lbrace 0,1\\rbrace $ .", "Again, like in Sec.", "REF , we calculate basis functions $\\mbox{$\\phi $}_n$ on the input data $\\mbox{$x$}_n$ and take $r=0.5$ .", "We set $S=200$ .", "We complete the model by choosing the following densities: $\\begin{fleqn}[0pt]\\begin{aligned}[t] \\mbox{Likelihood:\\ } & p(\\mbox{$Y$}|\\mbox{$X$}, \\mbox{$w$}) = \\\\ &\\prod _{n=1}^N \\sigma (\\mbox{$\\phi $}_n^T \\mbox{$w$})^{y_n} (1-\\sigma (\\mbox{$\\phi $}_n^T \\mbox{$w$}))^{1-y_n} \\ .", "\\end{aligned}\\end{fleqn}$ Prior: $p(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$} | \\mbox{$0$},\\alpha ^{-1}\\mbox{$I$}_M)$ .", "Postulated posterior: $q(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$} | \\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T)$ .", "We evaluated both schemes on datasets preprocessed by Rätsch et al. (http://www.raetschlab.org/Members/raetsch/benchmark).", "Each preprocessed dataset has been randomly partitioned into 100 non-overlapping training and testing sets.", "The performance of both schemes was evaluated as the accuracy on the test set, that is, the ratio of correctly classified test samples over all test samples.", "The predictive distribution for the proposed scheme was approximated using a Monte Carlo estimate.", "We drew 200 parameter samples from the fitted Gaussian posterior $q$ and measured performance on the testing set as the average accuracy under each sample of parameters.", "The results reported in Table REF (mean accuracy and standard deviation) show that both the proposed scheme and the variational bound in [11] perform virtually the same.", "We note that, as opposed to [11], which exploits the functional form of logistic regression in order to design a bespoke lower bound, the proposed method does not take into 
account any such knowledge and is still capable of delivering comparable performance.", "Hence, we find the results in this section encouraging.", "Table: Classification performance for Bayesian logistic regression (higher is better)." ], [ "Bayesian Multiclass Classification", "In this section we apply the proposed scheme to Bayesian multiclass classification.", "The data are input-label pairs $(\\mbox{$x$},\\mbox{$y$})$ .", "Vectors $\\mbox{$y$}$ are binary vectors encoding class labels using a 1-of-$K$ coding scheme, e.g.", "$[0\\ 1\\ 0]$ encodes class label 2 in a 3-class problem.", "A typical way of formulating multiclass classification is multiclass logistic regression (MLR); see [6] for more details.", "MLR models the probability $p(C_k|\\mbox{$\\phi $}_n)$ of the $n$ -th data item belonging to class $C_k$ via the softmax function $p(C_k|\\mbox{$\\phi $}_n)=\\frac{\\exp (\\mbox{$\\phi $}_n^T\\mbox{$w$}_k)}{\\sum _{\\ell =1}^K \\exp (\\mbox{$\\phi $}_n^T\\mbox{$w$}_\\ell )}$ .", "$K$ denotes the total number of classes, and each class $C_k$ is associated with a weight vector $\\mbox{$w$}_k$ .", "Similarly to logistic regression, MLR does not allow direct Bayesian inference as the use of the softmax function renders integrals over the likelihood term intractable.", "Thus, Bayesian MLR is a good candidate problem for the proposed approach.", "We specify the following model: $\\begin{fleqn}[0pt]\\begin{aligned}[t] \\mbox{Likelihood:\\ } & p(\\mbox{$Y$}|\\mbox{$X$}, \\mbox{$w$}_1,\\dots ,\\mbox{$w$}_K) = \\\\ & \\prod _{n=1}^N \\prod _{k=1}^K p(C_k|\\mbox{$\\phi $}_n)^{y_{nk}} \\ .", "\\end{aligned}\\end{fleqn}$ Prior: $\\prod _{k=1}^K p(\\mbox{$w$}_k) = \\mathcal {N}(\\mbox{$w$}_k | \\mbox{$0$},\\alpha ^{-1}\\mbox{$I$}_M)$ .", "Postulated posterior: $q(\\mbox{$w$}_1,\\dots ,\\mbox{$w$}_K) = \\prod _{k=1}^K q(\\mbox{$w$}_k)$ , with $q(\\mbox{$w$}_k) = \\mathcal {N}(\\mbox{$w$}_k | \\mbox{$\\mu $}_k,\\mbox{$L$}_k\\mbox{$L$}_k^T)$ .", "Table: Classification 
performance for Bayesian multiclass classification (higher is better).", "As a corroboration of the usefulness of our approximation to Bayesian MLR, we compare with the multiclass generalisation of the relevance vector machine (RVM) [19] presented in [16].", "We use the multiclass UCI datasets suggested therein.", "Amongst the two generalisations of the RVM suggested in [16], we use the $\\mbox{mRVM}_2$ version.", "We also follow the suggestion of [16] concerning the choice of kernels for the different datasets.", "We set $S=200$ .", "Table REF summarises the results of our numerical simulations along with details of the datasets.", "While $\\mbox{mRVM}_2$ relies on a refined probabilistic model in order to make probabilistic multiclass classification possible, the proposed scheme does not take into account any such knowledge and is still able to deliver competitive performance, in terms of predictive accuracy, as seen in Table REF .", "The good performance demonstrates both the usefulness and versatility of the proposed method." 
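The MLR likelihood and the Monte Carlo predictive used in these classification experiments are both one-liners around the softmax. A NumPy sketch under our own naming (the stacked-weight layout `W` of shape `(M, K)` is an assumption for compactness, not the paper's notation):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mlr_log_likelihood(Phi, Y, W):
    """log p(Y | X, w_1..w_K) for multiclass logistic regression.
    Phi: (N, M) features, Y: (N, K) one-hot labels, W: (M, K) weights."""
    P = softmax(Phi @ W)
    return np.sum(Y * np.log(P))

def mc_predictive(Phi, W_samples):
    """Monte Carlo predictive class probabilities, averaging the softmax
    over posterior weight samples (as done with 200 samples in the text)."""
    return np.mean([softmax(Phi @ W) for W in W_samples], axis=0)
```

Plugging `mlr_log_likelihood` into the generic finite-sample bound is all that the method requires; no softmax-specific bound needs to be derived.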
], [ "Probabilistic Image Denoising", "In this section we further demonstrate how the proposed method can take in its stride a change in the model likelihood that complicates computations.", "The model considered here is the ubiquitous probabilistic principal component analysis (PPCA) introduced in [20].", "PPCA assumes that the observed high-dimensional data $\\mbox{$y$}\\in \\mathbb {R}^{d}$ are manifestations of low-dimensional latent variables $\\mbox{$x$}\\in \\mathbb {R}^{q}$ , under a linear mapping expressed by a matrix $\\mbox{$W$}\\in \\mathbb {R}^{d\\times q}$ and an offset $\\mbox{$\\xi $}\\in \\mathbb {R}^{d}$ , corrupted by Gaussian noise $\\epsilon $: $\\mbox{$y$} = \\mbox{$W$}\\mbox{$x$} + \\mbox{$\\xi $} + \\mbox{$\\epsilon $} \\ .$ PPCA formulates a computationally amenable linear-Gaussian model which allows integrating out the latent variables $x$ and obtaining the marginal log-likelihood.", "Estimating $W$ and $\\xi $ follows by maximising the marginal log-likelihood [20].", "Various works extend PPCA by replacing the noise model with other choices, e.g.", "[2] uses the Student-t distribution, in order to deal with different types of noise.", "A recent interesting suggestion is the choice of the Cauchy density as the noise model [22], albeit in a non-probabilistic formulation.", "The Cauchy density with location parameter $x_0$ and scale parameter $\\gamma >0$ reads: $\\left(\\pi \\gamma \\left[1 + \\left(\\frac{x-x_0}{\\gamma }\\right)^2 \\right] \\right)^{-1} \\ .$ Choosing the Cauchy density as the noise model leads to a version of PPCA where the marginal log-likelihood is no longer tractable and so the latent variables $x$ cannot be integrated out.", "This is simply because the prior on $x$ is not conjugate to the Cauchy likelihood.", "However, the proposed method can be used to approximate this intractable marginal log-likelihood.", "Formally, we specify the following Cauchy-PPCA model: $\\begin{fleqn}[0pt]\\begin{aligned}[t] &\\mbox{Likelihood:\\ } 
p(\\mbox{$Y$}|\\mbox{$X$}, \\mbox{$W$},\\mbox{$\\xi $},\\gamma ) = \\\\&\\prod _{n=1}^N \\left(\\pi \\gamma \\left[1 + \\left(\\frac{\\mbox{$y$}_n-\\mbox{$W$}\\mbox{$x$}_n - \\mbox{$\\xi $}}{\\gamma }\\right)^2 \\right] \\right)^{-1} \\ .", "\\end{aligned}\\end{fleqn}$ Prior: $p(\\mbox{$X$}) = \\mathcal {N}(\\mbox{$X$}| \\mbox{$0$}, \\mbox{$I$}_N)$ .", "Postulated posterior: $q(\\mbox{$X$}) = \\mathcal {N}(\\mbox{$X$} | \\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T)$ .", "Parameters $\\mbox{$W$}$ , $\\mbox{$\\xi $}$ and $\\gamma $ are obtained by gradient optimisation of the proposed lower bound $\\mathcal {L}_{(FS)}$ .", "Figure: Columns from left to right: original image, corrupted image, reconstruction by PPCA and reconstruction by Cauchy-PPCA.", "Columns two to four: below each image we quote its distance to the original image in the first column (lower is better), i.e.", "the quality of reconstruction.We applied the original PPCA [20] and the proposed Cauchy-PPCA on a task concerning the denoising of images that have undergone pixel corruption.", "The aim here is to show the ease with which the proposed method can accommodate a change in the specification (i.e.", "change from Normal to Cauchy likelihood) and deliver a well-performing model.", "The data $\\mbox{$Y$}$ are $2,414$ face images of 38 individuals obtained from the Extended Yale B Database [10].", "There are 64 images per individual under 9 poses and 64 illumination conditions.", "The images are grayscale images whose pixels have values between 0 and 255.", "We rescale the images to $96\\times 84$ pixels.", "Hence $d=96\\times 84=8064$ and we set $q=2$ , i.e.", "both PCA schemes project the images to a latent space of dimension equal to 2.", "We corrupt $33.33\\%$ of the pixels in each image by drawing a new value uniformly in the range $[0,\\dots ,255]$ .", "For each individual, we use half of the corrupted images as the training set and the other half as the test set.", "Fig.", "REF presents results 
obtained on test images from 4 individuals.", "The figure shows from left to right the original and corrupted test image followed by the two reconstructions obtained by PPCA and Cauchy-PPCA respectively.", "In order to quantify the quality of reconstruction, we use the following measure between the original and reconstructed images: $\\Vert \\mbox{$y$}_{\\mbox{\\tiny orig}} - \\mbox{$y$}_{\\mbox{\\tiny rec}} \\Vert ^2 / \\Vert \\mbox{$y$}_{\\mbox{\\tiny orig}} \\Vert ^2$ .", "This measure is quoted below each image.", "The results in Fig.", "REF evidently show that Cauchy-PPCA achieves better denoising levels than PPCA.", "In actual fact, in our numerical experiments we found that Cauchy-PPCA outperformed PPCA on all 38 individuals.", "The present numerical experiment demonstrates the versatility of the proposed method in how it can easily extend PPCA to incorporate a Cauchy likelihood.", "This is achieved without exploiting any particular knowledge pertaining to the probabilistic specification of the model." 
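The robustness behind these results can be illustrated with a small numerical sketch (our own illustration, not the authors' code; `gauss_nll` and `cauchy_nll` are hypothetical helpers): the Gaussian negative log-likelihood penalises a corrupted pixel quadratically, while the Cauchy penalty implied by the density above grows only logarithmically, so a minority of corrupted pixels cannot dominate the Cauchy fit.

```python
import math
import random

def gauss_nll(residuals, sigma=1.0):
    # Negative log-likelihood of residuals under isotropic Gaussian noise.
    return sum(0.5 * (r / sigma) ** 2 + 0.5 * math.log(2 * math.pi * sigma ** 2)
               for r in residuals)

def cauchy_nll(residuals, gamma=1.0):
    # Negative log-likelihood under the Cauchy density quoted above:
    # -log p(r) = log(pi * gamma) + log(1 + (r / gamma)^2).
    return sum(math.log(math.pi * gamma) + math.log(1 + (r / gamma) ** 2)
               for r in residuals)

random.seed(0)
clean = [random.gauss(0, 1) for _ in range(90)]   # well-modelled pixels
outliers = [200.0] * 10                           # corrupted pixels
extra_g = gauss_nll(clean + outliers) - gauss_nll(clean)
extra_c = cauchy_nll(clean + outliers) - cauchy_nll(clean)
# Quadratic vs logarithmic growth: the outliers cost the Gaussian model
# orders of magnitude more than the Cauchy model.
print(extra_g > 100 * extra_c)  # True
```

This heavy-tail behaviour is exactly why the Cauchy reconstructions above are less disturbed by the 33% corrupted pixels.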
], [ "Bayesian Inference for the Stochastic Model by Boore", "In this section we apply the proposed scheme on a geophysical model called the stochastic model.", "The stochastic model, due to Boore [7], is used to predict ground motion at a given site of interest caused by an earthquake.", "Ground motion is simply the shaking of the earth and it is a fundamental quantity in estimating the seismic hazard of structures.", "From a physical point of view, the stochastic model describes the radiated source spectrum and its amplitude changes in the frequency domain due to wave propagation from the source to the site of interest.", "The inputs to the stochastic model are the distance $R$ of the site of interest to the seismic source, the magnitude $M_w$ of the earthquake, and the frequency $f$ of ground motion.", "The stochastic model, in its simple form, has a parameter associated with the seismic source known in the literature as stress parameter ($\\Delta \\sigma $ ), two parameters associated with the path attenuation called geometrical spreading ($\\eta $ ) and quality factor ($Q$ ), and one more parameter associated with the site called near-surface attenuation ($\\kappa _0$ ).", "All aforementioned parameters are bounded within a physical range.", "In the case of multiple seismic sources, each source is associated with its own distinct stress parameter.", "The scalar output of the model $y$ is the mean Fourier amplitude of the ground motion.", "The type of ground motion we consider here is acceleration.", "We denote the stochastic model as a function $y=g(M_w,R,f ; \\mbox{$w$})$ , where $\\mbox{$w$}=[\\Delta \\sigma _1,\\dots ,\\Delta \\sigma _E, \\eta , Q, \\kappa _0]$ , where $E$ is the number of seismic sources.", "This situation is depicted in Fig.", "REF .", "We refer the interested reader to [7] for more details.", "Estimating the posterior uncertainty of the model parameters is important in seismic hazard analysis as the propagation of uncertainty in the 
parameters can have an impact on the estimated hazard curve [17].", "A discussion of how these posteriors can be utilised in later stages of seismic hazard analysis is beyond the scope of this work.", "Figure: Physical setting of seismic wave propagation from the source to the site of interest.", "Recorded at the site is the signal Fourier amplitude against frequency of ground motion.", "We specify the model by choosing the following densities: $\\begin{fleqn}[0pt]\\begin{aligned}[t] \\mbox{Likelihood:\\ } & p(\\mbox{$Y$}|\\mathcal {D},\\mbox{$w$}) = \\\\ &\\prod _{n=1}^N \\mathcal {N}(y_n | g({M_w}_n,R_n,f_n;\\mbox{$w$}),\\sigma ^2) \\ .", "\\end{aligned}\\end{fleqn}$ Flat prior: $p(\\mbox{$w$}) \\propto 1$ .", "Postulated posterior: $q(\\mbox{$w$}) = \\mathcal {N}(\\mbox{$w$} | \\mbox{$\\mu $},\\mbox{$L$}\\mbox{$L$}^T)$ .", "In contrast to the previous applications, here we choose a very flat prior.", "Ground motion data inputs are denoted by $\\mathcal {D}$ and targets by $Y$.", "We performed experiments on a subset of the recently compiled RESORCE database [1], which is a pan-European database of strong-motion recordings.", "In particular we focused on data records originating from a station in the region of L'Aquila for $E=8$ seismic sources.", "Hence, the total number of free model parameters in $w$ is 11.", "We experimented with a varying number of data records, $N\\in \\lbrace 100,200,500,1000\\rbrace $ , in order to test the robustness of the Laplace and the proposed approximation in scenarios of limited data.", "Such situations arise in geophysical studies when data recordings are incomplete due to distortions in frequencies caused by instrumentation errors.", "The performance of Laplace and the proposed scheme was evaluated as the prediction error on test sets.", "Both schemes were run 10 times, and each run involved a new random realisation of the training and testing set.", "Parameter $S$ was set to 1000 for all experiments in this section.", "The predictive 
distribution for the proposed scheme was approximated using a Monte Carlo estimate.", "We drew 200 parameter samples from the Gaussian fitted posterior $q$ and estimated performance on the testing set as the average of the mean squared error under each parameter sample.", "The results are reported in Table REF .", "Table: Prediction error for ground motion problem (lower is better).The results show that the proposed approximation fares better than Laplace, although at $N=1000$ the performances are virtually identical.", "For lower $N$ , however, the Laplace approximation exhibits much higher variance than the proposed scheme." ], [ "Discussion and Conclusion", "We have presented a scheme for Bayesian variational inference that is applicable in cases where the likelihood function renders more standard approaches difficult.", "The scheme is conceptually simple as it relies on a simple Monte Carlo average of the intractable part of the variational lower bound, see Eq.", "(REF ), and the re-introduction of the variational parameters resulting in the objective of Eq.", "(REF ).", "The scheme can thus be generally applied to other models where variational inference is difficult requiring only the gradients of the log-likelihood function with respect to the parameters.", "In the numerical experiments we have shown that (a) the proposed scheme stands in close agreement with exact inference in Bayesian linear regression, (b) it performs up to par in classification tasks against methods that design bespoke model formulations, (c) it fares better than the Laplace approximation in a number of cases, and (d) it is very versatile and can be applied to a variety of problems.", "Future work will address the relationship of our approach to variational approaches [18], [9] that provide alternative ways to compute improved Gaussian approximations to intractable posteriors relative to the Laplace approximation.", "Another aspect concerns ways to cope with very large problems that 
would require a large number of samples $S$ to obtain a sufficiently accurate approximation in Eq.", "(REF ).", "A natural choice would be to turn the scheme in Algorithm REF into a recursive stochastic optimisation scheme [12] that employs small sample sets computed at each iteration, akin to stochastic gradient-based large-scale empirical risk minimisation [8].", "These two approaches should not be confused, however.", "The latter employs subsampling of the data $(\\mbox{$X$},\\mbox{$Y$})$ whereas our scheme generates samples based on current parameter estimates of the approximate posterior.", "Clearly, our scheme could incorporate sequential subsampling of the data as well.", "The problem of proving convergence of such an overall stochastic approximation approach in a suitable sense [12] seems to be open." ], [ "Acknowledgement", "The RESORCE database [1] was used in this work with the kind permission of the SIGMA project (http://www.projet-sigma.com).", "N. Gianniotis was partially funded by the BMBF project “Potsdam Research Cluster for Georisk Analysis, Environmental Change and Sustainability\".", "C. Molkenthin and S. S. Bora were funded by the graduate research school GeoSim of the Geo.X initiative (http://www.geo-x.net)." ] ]
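The Monte Carlo estimate of predictive performance used in the ground-motion experiment (drawing parameter samples from the fitted Gaussian posterior $q$ and averaging the test mean squared error) can be sketched as follows; the helper name and the one-parameter toy model are our own illustration, not the paper's code:

```python
import random

def mc_predictive_mse(mu, L, test_xy, g, S=200, seed=0):
    # Draw S parameter samples w ~ N(mu, L L^T) from the fitted Gaussian
    # posterior and average the test mean squared error under each sample.
    rng = random.Random(seed)
    d = len(mu)
    total = 0.0
    for _ in range(S):
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        # w = mu + L z is a draw from N(mu, L L^T).
        w = [mu[i] + sum(L[i][j] * z[j] for j in range(d)) for i in range(d)]
        total += sum((y - g(x, w)) ** 2 for x, y in test_xy) / len(test_xy)
    return total / S

# Toy stand-in for the ground-motion model g: a one-parameter linear map.
g = lambda x, w: w[0] * x
test_xy = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
# A tight posterior centred on the true parameter yields a small error.
err = mc_predictive_mse([2.0], [[0.01]], test_xy, g)
print(0.0 < err < 0.01)  # True
```

The same reparameterisation $w = \mu + Lz$ is what makes the posterior samples differentiable with respect to $\mu$ and $L$ during optimisation.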
1906.04507
[ [ "On singular Frobenius for linear differential equations of second and\n third order, part 1: ordinary differential equations" ], [ "Abstract We study second order and third order linear differential equations with analytic coefficients under the viewpoint of finding formal solutions and studying their convergence.", "We address some untouched aspects of Frobenius methods for second order, such as the convergence of formal solutions and the existence of Liouvillian solutions.", "A characterization of regular singularities is given in terms of the space of solutions.", "An analytic classification of such linear homogeneous ODEs is obtained.", "This is done by associating to such an ODE a Riccati differential equation and therefore a global holonomy group.", "This group is a computable group of Moebius maps.", "These techniques apply to classical equations such as the Bessel and Legendre equations.", "In the second part of this work we study third order equations.", "We prove a theorem similar to the classical Frobenius theorem, which describes all the possible cases and solutions to this type of ODE.", "Armed with this, we investigate the existence of solutions in the non-homogeneous case and also the existence of a convergence theorem along the same lines as for second order above.", "Our results are concrete and (computationally) constructive and are aimed at shedding new light on this important, useful and attractive field of science." 
], [ "Introduction", "Differential equations are among the most powerful tools in mathematics and physics ([1], [12], [21], [13], [11]).", "Roughly speaking, these are equations involving one or more functions and their derivatives.", "Their study has many aspects, from quantitative theory, i.e.", "the search for solutions, to qualitative theory.", "There are two main groups of differential equations: ordinary differential equations (ODEs for short) and partial differential equations (PDEs for short).", "The first consists of equations depending on a single variable (time, for instance).", "In turn, PDEs involve partial derivatives and depend on several variables.", "Since the first appearance of Newton's laws of motion ([25]), the study of ordinary differential equations has been associated with fundamental problems in physics and science in general.", "This has been reinforced by the work of many scientists (mathematicians, physicists, meteorologists, etc.) through their contributions to problems such as: universal gravitation and planetary dynamics, dynamics of particles under the action of a force field such as the electromagnetic field, thermodynamics, meteorology and weather forecast, the study of climate phenomena such as typhoons and hurricanes, aerodynamics and hydrodynamics, atomic models, etc.", "The list is as long as the possibilities of human scientific development.", "Thanks to the nature of Newton's laws and other laws such as Maxwell's equations or Faraday's and Kepler's laws ([10], [19]), most of the pioneering work is in ODEs.", "Furthermore, these classical equations are of first or second order (the highest order of the derivatives is not beyond two).", "Of special interest are the laws of oscillatory movement (the pendulum equation and Hill's lunar movement equation [17]) and Hooke's law (spring extension or compression).", "Let us not forget the classical fundamental solutions to problems such as heat conduction (heat equation), vibration (wave equation) and others (Laplace 
equation).", "Though these problems are modeled by partial differential equations, they may be solved with the aid of ordinary differential equations.", "This is, for instance, the idea of the method of separation of variables and the eventual use of Fourier series.", "All these classical equations above are, or have nice approximations by, linear equations.", "Among linear equations, the homogeneous case is a first and quite meaningful step.", "Thus, being able to solve classical linear homogeneous ordinary differential equations is an important subject of active study in mathematics.", "The arrival of tools like scientific computing breathes new life into the problem of solving a given ODE by looking for solutions via power series.", "This is of course in the real analytic framework, which is quite common in Nature ([15]).", "To this end, a classical and powerful method is due to Frobenius.", "The method of Frobenius can be summarized as follows: given a linear homogeneous second order differential equation $a(x) y^{\\prime \\prime } + b(x)y^\\prime + c(x)y=0$ for some real analytic functions $a(x), b(x), c(x)$ at some point $x_0\\in \\mathbb {R}$ , we look for solutions of the form $y(x)= \\sum \\limits _{n=0}^\\infty d_n (x-x_0)^{n+r}$ where $d_n$ and $r$ are constants.", "We shall not detail this method now, but we must say that it is based on Euler's equation $a x^2 y^{\\prime \\prime } + b x y ^\\prime + cy=0$ and the idea of looking for solutions of the form $y=x^r$ .", "Then $r$ must be a root of the so-called indicial equation.", "The main point is that the Frobenius method works pretty well in a suitable class of second order ODEs, the so-called regular singular ODEs around $x_0$ .", "Third-order differential equation models are used in a number of high-energy physics problems.", "One of the first that comes to mind is the problem of oscillations in a nuclear reactor ([33]).", "The deflection of a curved beam having a 
constant or a varying cross-section is another example.", "Other examples are three-layer beams, electromagnetic waves or gravity-driven flows.", "We also have Barenblatt's equation for diffusion in a porous fissured medium.", "In neurobiology there is, for example, the model of current flow in neurons with microstructure, $V_t = V_{xx} +gV_{txx} -V$ where $g$ is a constant (see [26]).", "More generally, third order ODEs appear in astrodynamics: the Clohessy-Wiltshire equations (relative motion about a circular orbit), the Tschauner-Hempel equations (relative motion about an ellipse), the two-body equation after the substitution $y=1/r$.", "There are other situations.", "For instance, a third order differential equation is the one for the temperature appearing in the heat transport theory of materials contradicting the “fading memory paradigm\".", "Finally, further physical examples are: the Abraham-Lorentz force (electron self-force) ([11], [30]) and the jerk for parabolic curves in roads.", "In this paper we study both second order and third order differential equations with analytic coefficients under the viewpoint of finding solutions and studying their convergence.", "In very few words, we study forgotten aspects of Frobenius methods for second order, such as the convergence of formal solutions and the existence of Liouvillian solutions.", "We also discuss the characterization of the so-called regular singularities in terms of the space of solutions.", "An analytic classification is obtained by associating to such an ODE a Riccati differential equation and therefore a global holonomy group.", "This group is a computable group of Moebius maps.", "Next we apply these techniques and results to classical equations such as the Bessel and Legendre equations.", "In the second part of this work we study third order linear differential equations.", "After presenting a model for the Euler equation and its corresponding indicial equation in this case, we introduce the notion of regular singular point for 
this class of equations.", "Then we prove a theorem similar to the classical Frobenius theorem, which describes all the possible cases and solutions to this type of ODE.", "Armed with this, we investigate the existence of solutions in the non-homogeneous case and also the existence of a convergence theorem along the same lines as for second order above.", "Our results are concrete and (computationally) constructive and are aimed at shedding new light on this important, useful and attractive field of science.", "Next we give a more detailed description of our results." ], [ "The classical method of Frobenius for second order", "The classical method of Frobenius is a very useful tool in finding solutions of homogeneous second order linear ordinary differential equations with analytic coefficients.", "These are equations that can be written in the form $a(x) y^{\\prime \\prime } + b(x)y^\\prime + c(x)y=0$ for some real analytic functions $a(x), b(x), c(x)$ at some point $x_0\\in \\mathbb {R}$ .", "It is well known that if $x_0$ is an ordinary point, i.e., $a(x_0)\\ne 0$ , then there are two linearly independent solutions $y_1(x), y_2(x)$ of the ODE, admitting power series expansions converging in some common neighborhood of $x_0$ .", "This is a consequence of the classical theory of ODEs and also shows that the solution space of this ODE has dimension two, i.e., any solution is of the form $c_1 y_1(x) + c_2 y_2(x)$ for some constants $c_1, c_2 \\in \\mathbb {R}$ .", "Second order linear homogeneous differential equations appear in many concrete problems in the natural sciences, such as physics, chemistry, meteorology and even biology.", "Thus solving such equations is an important task.", "The existence of solutions for the case of an ordinary point is not enough for most of the applications.", "Indeed, most of the relevant equations are connected to the singular (non-ordinary) case.", "We can mention the Bessel equation $x^2 y^{\\prime \\prime } + x y ^\\prime + (x^2 - \\nu ^2) y=0$ , 
whose range of applications goes from heat conduction to the model of the hydrogen atom ([2], [16]).", "This equation has the origin $x=0$ as a singular point.", "Another remarkable equation is the Laguerre equation $x y ^{\\prime \\prime } + (\\nu +1 -x) y^\\prime + \\lambda y=0$ where $\\lambda , \\nu \\in \\mathbb {R}$ are parameters.", "This equation is quite relevant in quantum mechanics, since it appears in the modern quantum mechanical description of the hydrogen atom.", "All these are examples of equations with a regular singular point.", "According to Frobenius, a singular point $x=x_0$ of the ODE $a(x) y^{\\prime \\prime } + b(x)y^\\prime + c(x)y=0$ is regular if $\\displaystyle \\lim _{x \\rightarrow x_0} (x-x_0)\\frac{b(x)}{a(x)}$ and $\\displaystyle \\lim _{x\\rightarrow x_0} (x-x_0)^2 \\frac{c(x)}{a(x)}$ are finite.", "We shall refer to this as follows: the ODE $y^{\\prime \\prime } + \\alpha (x) y ^\\prime + \\beta (x)y=0$ has a regular singular point at $x=x_0$ if $\\displaystyle \\lim _{x\\rightarrow x_0} (x-x_0) \\alpha (x)$ and $\\displaystyle \\lim _{x \\rightarrow x_0}(x-x_0)^2\\beta (x)$ admit extensions which are analytic at $x=x_0$ .", "In this case we have the following classical theorem of Frobenius: Theorem 1.1 (Frobenius theorem, [3], [14], [6]) Assume that the ODE $(x-x_0)^2y^{\\prime \\prime }+(x-x_0)b(x)y^\\prime +c(x)y=0$ has a regular singularity at $x=x_0$ , where the functions $b(x), c(x)$ are analytic with convergent power series in $|x-x_0|<R$ .", "Then there is at least one solution of the form $y(x)=|x-x_0|^r \\sum \\limits _{n=0}^\\infty d_n (x-x_0)^n$ where $r$ is a root of the indicial equation, $d_0=1$ , and the series converges for $|x-x_0|<R$ .", "The method of Frobenius (for this case of second order ODE) consists in associating to the original ODE an Euler equation, i.e., an equation of the form $ A(x-x_0)^2y^{\\prime \\prime } + B (x-x_0)y ^\\prime +Cy=0$ and looking for solutions (to this equation) of the form 
$y_0(x)=(x-x_0)^r$ .", "This gives an algebraic equation of degree two $A r (r-1) + B r + C=0$ , the so-called indicial equation, whose zeroes $r$ give solutions $y_0(x)=(x-x_0)^r$ of the Euler equation.", "The Euler equation associated to the original ODE with a regular singular point at $x=x_0$ is given by $(x-x_0)^2y^{\\prime \\prime } + p_0 (x-x_0)y ^\\prime +q_0y=0$ where $p_0=\\displaystyle \\lim _{x \\rightarrow x_0} (x-x_0)\\frac{b(x)}{a(x)}$ and $q_0=\\displaystyle \\lim _{x\\rightarrow x_0} (x-x_0)^2 \\frac{c(x)}{a(x)}$ .", "Then the Frobenius method consists in looking for solutions of the original ODE of the form $y_1(x)=|x-x_0|^r \\sum \\limits _{n=0}^\\infty d_n (x-x_0)^n$ where $r$ is the zero with greatest real part of the indicial equation given by the Euler equation as above.", "The question of whether there is a second linearly independent solution is related to the roots of the indicial equation.", "Indeed, there is some zoology: in general, the second solution is of the form $y_2(x)=|x-x_0|^{\\tilde{r}} \\sum \\limits _{n=0}^\\infty \\tilde{d}_n(x-x_0)^n$ in case there is a second root $\\tilde{r}$ of the indicial equation and this root is such that $r-\\tilde{r}\\notin \\mathbb {Z}$ .", "If $\\tilde{r} =r$ then there is a solution of the form $y_2(x)=y_1(x)\\log |x-x_0| + |x-x_0|^{r+1}\\sum \\limits _{n=0}^\\infty \\hat{d}_n (x-x_0)^n$ .", "Finally, if $0 \\ne r - \\tilde{r}\\in \\mathbb {N}$ then we have a second solution of the form $y_2(x)= ky_1(x)\\log |x-x_0| + |x-x_0|^{\\tilde{r}}\\sum \\limits _{n=0}^\\infty \\check{d}_n (x-x_0)^n$ .", "This brief description of the method of Frobenius already suggests that there may exist a higher order version of this result.", "For instance, for third order linear homogeneous ODEs." ], [ "Second order equations", "Section 2 is dedicated to the study of second order linear differential equations.", "We start with the following." 
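The case analysis just described can be sketched concretely (the helper names below are our own, not the paper's): the indicial equation $Ar(r-1)+Br+C=0$ is a quadratic in $r$, and the form of the second solution is decided by the difference of its roots.

```python
import cmath

def indicial_roots(A, B, C):
    # Roots of A r (r - 1) + B r + C = 0, the indicial equation of the
    # Euler equation A (x - x0)^2 y'' + B (x - x0) y' + C y = 0.
    a, b, c = A, B - A, C
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

def second_solution_case(r1, r2):
    # The "zoology" of the second solution depends on r1 - r2.
    diff = r1 - r2
    if abs(diff) < 1e-12:
        return "equal roots: log term always present"
    if abs(diff.imag) < 1e-12 and abs(diff.real - round(diff.real)) < 1e-12:
        return "roots differ by a nonzero integer: possible log term"
    return "two plain Frobenius series"

# Bessel's equation x^2 y'' + x y' + (x^2 - nu^2) y = 0 has the Euler part
# x^2 y'' + x y' - nu^2 y = 0, hence indicial equation r^2 - nu^2 = 0.
r1, r2 = indicial_roots(1, 1, -0.25)  # nu = 1/2: roots +1/2 and -1/2
print(second_solution_case(r1, r2))
```

For $\nu=1/2$ the roots differ by the integer 1, the borderline case in which a logarithmic term may, but need not, appear (for Bessel functions of half-integer order it in fact does not).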
], [ "Convergence of formal solutions for second order linear homogeneous ODEs", "We shall now discuss the problem of convergence of formal solutions for linear homogeneous ODEs of order two.", "We recall that there are examples of ODEs admitting a formal solution that is nowhere convergent (cf.", "Example REF ).", "Our next result may be seen as a version of a theorem due to Malgrange and also to Mattei-Moussu for holomorphic integrable systems of differential forms.", "By a formal solution centered at $x_0\\in \\mathbb {R}$ of an ODE we shall mean a formal power series $\\hat{y}(x)=\\sum \\limits _{n=0}^\\infty a_n (x-x_0)^n$ with complex coefficients $a_n \\in \\mathbb {C}$ .", "We prove: Theorem A (formal solutions order two) Consider a second order ordinary differential equation given by $a(x)y^{\\prime \\prime }+b(x)y^\\prime +c(x)y=0$ where $a,b,c$ are analytic functions at $x_0 \\in \\mathbb {R}$ .", "Suppose that there exist two linearly independent formal solutions $\\hat{y}_1(x)$ and $\\hat{y}_2(x)$ centered at $x_0$ of equation (REF ).", "Then $x_0$ is an ordinary point or a regular singular point of (REF ).", "Moreover, $\\hat{y}_1(x)$ and $\\hat{y}_2(x)$ are convergent.", "Theorem B (1 formal solution second order regular singularity) Consider a second order ordinary differential equation given by $a(x)y^{\\prime \\prime }+b(x)y^\\prime +c(x)y=0$ where $a,b,c$ are analytic functions at $x_0 \\in \\mathbb {R}$ .", "Suppose (REF ) has at $x_0$ an ordinary point or a regular singular point.", "Then any formal solution $\\hat{y}(x)$ of (REF ) is convergent." 
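To see what these statements rule out, a standard illustration (our choice; the paper's Example REF may differ) is $x^2y''+(3x-1)y'+y=0$, which has an irregular singular point at $x=0$: matching coefficients of $x^n$ in a formal series $\hat{y}(x)=\sum d_n x^n$ gives the recurrence $d_{n+1}=(n+1)d_n$, so $d_n=n!$ and the formal solution converges nowhere.

```python
from math import factorial

# For the ODE x^2 y'' + (3x - 1) y' + y = 0 the point x = 0 is an
# irregular singularity: x * (3x - 1)/x^2 has no finite limit at 0.
# Matching the coefficient of x^n in a formal series sum d_n x^n gives
#   n(n-1) d_n + 3n d_n + d_n - (n+1) d_{n+1} = 0,
# i.e. (n+1)^2 d_n = (n+1) d_{n+1}, so d_{n+1} = (n+1) d_n.
d = [1]
for n in range(20):
    d.append((n + 1) * d[-1])

assert all(d[n] == factorial(n) for n in range(21))
# The ratio d_{n+1}/d_n = n + 1 is unbounded, so the radius of
# convergence of the formal solution is zero: it converges nowhere,
# which is why convergence requires a regular singular point.
print(d[5], d[10])  # 120 3628800
```

At a regular singular point this kind of factorial growth of the coefficients cannot occur, in accordance with Theorems A and B.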
], [ "Characterization of regular singular points in order two", "We shall say that a function $u(x)$ for $x$ in a disc $|x|<R$ centered at the origin $0\\in \\mathbb {R}, \\mathbb {C}$ is an analytic combination of log and power (anclop for short) if it can be written as $u(x)=\\alpha (x) + \\log (x) \\beta (x) + \\gamma (x) x^r$ for some analytic functions $\\alpha (x), \\beta (x), \\gamma (x)$ defined in the disc $|x|<R$ and $r \\in \\mathbb {R}$ or $r \\in \\mathbb {C}$ .", "In the real case we assume that $x>0$ in case we have $\\beta \\not\\equiv 0$ or $\\gamma \\not\\equiv 0$ and a power $x^r$ with $r\\in \\mathbb {R} \\setminus \\mathbb {Q}$ .", "Definition 1.1 A one-variable complex function $u(z)$ considered in a domain $U\\subset \\mathbb {C}$ will be called analytic up to log type singularities (autlos for short) if: $u(z)$ is holomorphic in $U\\setminus \\sigma $ where $\\sigma \\subset U$ is a discrete set of points, called singularities.", "Given a singularity $p \\in \\sigma $ , either $p$ is a removable singularity of $u(z)$ or there is a germ of real analytic curve $\\gamma \\colon [0,\\epsilon ) \\rightarrow U$ such that $\\gamma (0)=p$ and $u(z)$ is holomorphic in $D\\setminus \\gamma [0,\\epsilon )$ for some disc $D\\subset U$ centered at $p$ .", "A one variable real function $u(x)$ defined in an interval $J \\subset \\mathbb {R}$ will be called analytic up to log type singularities (autlos for short) if, after complexification, the corresponding function $u_\\mathbb {C}(z)$ , which is defined in some neighborhood $J \\times \\lbrace 0\\rbrace \\subset U\\subset \\mathbb {C}$ is analytic up to log type singularities, as defined above.", "Theorem C (characterization of regular points) Consider a linear homogeneous ordinary differential equation of second order given by $a(x)y^{\\prime \\prime }+b(x)y^\\prime +c(x)y=0$ where $a,b,c$ are analytic functions at $x_0 \\in \\mathbb {R}$ .", "Then the following conditions are equivalent: The equation 
admits two linearly independent solutions $y_1(x), y_2(x)$ which are anclop (analytic combinations of log and power).", "The equation admits two solutions $y_1(x), y_2(x)$ which are autlos (analytic up to logarithmic singularities).", "The equation has an ordinary point or a regular singular point at $x_0$ ." ], [ "Riccati model and holonomy of a second order equation", "We start with a polynomial second order linear equation of the form $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ in the complex plane.", "By introducing the change of coordinates $t = u ^\\prime / u$ we obtain a first order Riccati equation which can be written as $\\frac{dt}{dz}=-\\frac{a(z)t^2+b(z)t+c(z)}{a(z)}.$ Definition 1.2 The Riccati differential equation above is called the Riccati model of the ODE     $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ .", "In turn, since the work of Paul Painlevé ([27]), a polynomial Riccati equation is studied from the point of view of its transversality with respect to the vertical fibers $z=constant$ , even at the points at infinity.", "With the advent of the theory of foliations, due to Ehresmann, the notion of holonomy was introduced as well as the notion of global holonomy of a foliation transverse to the fibers of a fibration.", "This is the case of a polynomial Riccati foliation once placed in the ruled surface $\\mathbb {P}^1(\\mathbb {C}) \\times \\mathbb {P}^1(\\mathbb {C})$ , where $\\mathbb {P}^1(\\mathbb {C})=\\mathbb {C} \\cup \\lbrace \\infty \\rbrace $ is the Riemann sphere.", "This allows us to introduce the notion of global holonomy of a second order linear equation as above.", "This permits the study of the equation from a group-theoretical point of view, since the global holonomy will be a group of Moebius maps of the form $t \\mapsto \\frac{\\alpha t + \\beta }{\\gamma t + \\delta }$ .", "We calculate this group in some special cases and reach some 
interesting consequences for the original ODE.", "Theorem D Consider a second order polynomial ODE given by $u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ where $b,c$ are complex polynomials of a variable $z$ .", "Then the equation above admits a general solution of the form $u_{\\ell ,k}(z)=k \\exp \\big (\\int _0^z \\frac{\\ell D(\\xi ) - B(\\xi )}{A(\\xi ) - \\ell C(\\xi )}d\\xi \\big ), \\, k , \\ell \\in \\mathbb {C}.$" ], [ "Liouvillian solutions", "One important class of solutions for ODEs is the class of Liouvillian solutions, a notion introduced by Liouville and developed by Rosenlicht and Ross among other authors.", "The question of whether a polynomial first order ODE admits a Liouvillian solution or first integral has been addressed by M. Singer in [32] and others.", "Recall the notion of a Liouvillian function in $n$ complex variables as introduced in [32].", "Such a function has (holomorphic) analytic branches in some Zariski dense open subset of $\\mathbb {C}^n$ .", "In particular we can ask whether an ODE admits such a solution.", "Question 1.1 What are the polynomial ODEs of the form $a(z) u^{\\prime \\prime } + b(z) u ^\\prime + c(z)u=0$ admitting a Liouvillian solution $u(z)$ ?", "Theorem E (characterization liouville) Consider a polynomial complex linear homogeneous ordinary differential equation of second order given by $L[u](z)=a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ where $a,b,c$ are complex polynomials.", "Then we have the following: If $L[u]=0$ admits a solution satisfying a Liouvillian relation then it has a Liouvillian first integral (cf.", "Corollary, pages 674-675 [32]).", "If $L[u]=0$ admits a Liouvillian solution then it has a Liouvillian first integral (cf.", "Corollary, pages 674-675 [32]).", "If $L[u]=0$ admits a Liouvillian first integral then its solutions are Liouvillian and given by one of the forms below: (a) $u(z)=\\exp \\big (-\\int ^z\\gamma (\\eta )d\\eta \\big )\\bigg [ k \\int ^z \\exp \\big (\\int ^{\\eta } \\frac{2\\gamma 
(\\xi )-b(\\xi )}{a(\\xi )}d\\xi \\big )d\\eta +\\ell \\bigg ]$ for constants $k,\\;\\ell \\in \\mathbb {C}$ , where $\\gamma (z)$ is a rational solution of the Riccati equation.", "(b) $u(z) = k_1 + k_2 \\int ^z \\exp \\big (-\\int ^\\eta \\frac{b(\\xi )}{a(\\xi )}d\\xi \\big )d\\eta $ , for constants $k_1,k_2 \\in \\mathbb {C}$ ." ], [ "Third order equations", "In § 3 we study third order linear ordinary differential equations in the homogeneous and non-homogeneous cases.", "To this end we first extend the notion of regular singular point above to ODEs of order $n \\ge 2$ ." ], [ "Frobenius method for third order ODEs", "Consider a linear ordinary differential equation with variable coefficients of the form $a_0(x)y^{(n)}+a_1(x)y^{(n-1)}+\\ldots +a_{n-1}(x)y^\\prime +a_n(x)y=0.$ We shall assume that the coefficients $a_0,a_1,\\ldots ,a_{n-1},a_n$ are analytic at some point $x_0$ , and we shall study the case where $a_0(x_0)=0$ .", "A point $x_0$ such that $a_0(x_0)=0$ is called a singular point of equation (REF ); otherwise it is an ordinary point.", "Definition 1.3 We shall say that $x_0$ is a regular singular point of (REF ) if the equation can be written as follows $(x-x_0)^ny^{(n)}+b_1(x)(x-x_0)^{n-1}y^{(n-1)}+\\ldots +b_{n-1}(x)(x-x_0)y^\\prime +b_n(x)y=0$ for $x$ close enough to $x_0$ , where the functions $b_1,\\ldots ,b_{n-1},b_n$ are analytic at $x_0$ .", "Remark 1.1 $\\;$ If the functions $b_1,\\ldots ,b_{n-1},b_n$ can be written as $b_k(x)=(x-x_0)^k\\beta _k(x)\\;\\;\\mbox{ for every }\\;k=1,\\ldots ,n,$ where $\\beta _1,\\ldots ,\\beta _{n-1},\\beta _n$ are analytic functions at $x_0$ , we see that (REF ) is transformed into equation $y^{(n)}+\\beta _1(x)y^{(n-1)}+\\ldots +\\beta _{n-1}(x)y^{\\prime }+\\beta _n(x)y=0.$ In this case, by classical theorems of ODEs, there are three linearly independent analytic solutions $y_j(x)=\\sum \\limits _{n=0}^\\infty a_n ^j x^n$ , converging for $|x|<R$ .", "Moreover, any solution is a linear combination of these 
solutions.", "An equation of the form $c_0(x)(x-x_0)^ny^{(n)}+c_1(x)(x-x_0)^{n-1}y^{(n-1)}+\ldots +c_{n-1}(x)y^\prime +c_n(x)y=0$ has a regular singular point at $x_0$ if $c_0,c_1,\ldots ,c_{n-1},c_n$ are analytic at $x_0$ , and $c_0(x_0)\ne 0$ .", "We first obtain an existence theorem like the Frobenius theorem above.", "This reads as: Theorem F (Existence of a first solution) Consider the equation $L(y):=x^3y^{\prime \prime \prime }+x^2a(x)y^{\prime \prime }+xb(x)y^\prime +c(x)y=0,$ with $a(x),b(x)$ and $c(x)$ real analytic functions defined for $|x|<R$ , $R>0$ .", "Let $r_1,\;r_2$ and $r_3$ be the roots of the indicial polynomial $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0),$ ordered in such a way that $\mbox{Re}(r_1)\ge \mbox{Re}(r_2)\ge \mbox{Re}(r_3)$ .", "Then for $0<|x|<R$ there is a solution $\varphi _1$ of equation (REF ) given by $\varphi _1(x)=|x|^{r_1}\sum ^{\infty }_{n=0}d_nx^n,\;\;\;d_0=1,$ where the series converges for $|x|<R$ .", "Moreover, if $r_1-r_2\notin \mathbb {Z}^+_0$ , $r_1-r_3\notin \mathbb {Z}^+_0$ and $r_2-r_3\notin \mathbb {Z}^+_0$ then there exist two other linearly independent solutions $\varphi _2$ and $\varphi _3$ defined in $0<|x|<R$ , given by: $\varphi _2(x)=|x|^{r_2}\sum ^{\infty }_{n=0}\tilde{d_n}x^n\;\;\;\;\tilde{d_0}=1$ and $\varphi _3(x)=|x|^{r_3}\sum ^{\infty }_{n=0}\hat{d_n}x^n\;\;\;\;\hat{d_0}=1$ where the series converge for $|x|<R$ .", "The coefficients $d_n$ , $\tilde{d_n}$ , $\hat{d_n}$ can be obtained by substituting the solutions into equation (REF ).", "We first consider the simplest case of an equation, not of type (REF ), that has a regular singular point.", "This is an Euler equation, that is, an equation of the form (REF ) with $b_1,\ldots ,b_n$ constants.", "In the case of a second order differential equation with a regular singular point $x^2y^{\prime \prime }+xb(x)y^\prime +c(x)y=0,\;x>0,$ where $b,c$ are analytic at 0, the classical method of Frobenius determines the
form of the solutions of (REF ) which are given by $\psi (x)=x^{r}\rho (x)+x^{s}\eta (x)\log x,$ where $r,s$ are constants and $\rho ,\eta $ are analytic at 0.", "In this work we shall study the general third order linear equation with a regular singular point, and describe the method of obtaining solutions in a neighborhood of the singular point.", "Motivated by the classical method of Frobenius and using some techniques introduced in ([6], Chapter IV Section 4.5) we prove the following complete result, which shows that Theorem REF remains valid in the case of complex ordinary differential equations.", "Theorem G (non exceptional case) Consider the equation $L(u):=z^3u^{\prime \prime \prime }+z^2a(z)u^{\prime \prime }+zb(z)u^\prime +c(z)u=0,$ with $a(z),b(z)$ and $c(z)$ analytic for $|z|<R$ , $R>0$ .", "Let $r_1,\;r_2$ and $r_3$ be the roots of the indicial polynomial $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0).$ Let us assume that $\mbox{Re}(r_1)\ge \mbox{Re}(r_2)\ge \mbox{Re}(r_3)$ .", "Then for $0<|z|<R$ there is a solution $\varphi _1$ of equation (REF ) given by $\varphi _1(z)=z^{r_1}\sum ^{\infty }_{n=0}d_nz^n,\;\;\;d_0=1,$ where the series converges for $|z|<R$ .", "Moreover, if $r_1-r_2\notin \mathbb {Z}^+_0$ , $r_1-r_3\notin \mathbb {Z}^+_0$ and $r_2-r_3\notin \mathbb {Z}^+_0$ then there exist two other linearly independent solutions $\varphi _2$ and $\varphi _3$ defined in $0<|z|<R$ , given by: $\varphi _2(z)=z^{r_2}\sum ^{\infty }_{n=0}\tilde{d_n}z^n\;\;\;\;\tilde{d_0}=1$ and $\varphi _3(z)=z^{r_3}\sum ^{\infty }_{n=0}\hat{d_n}z^n\;\;\;\;\hat{d_0}=1$ where the series converge for $|z|<R.$ The coefficients $d_n$ , $\tilde{d_n}$ , $\hat{d_n}$ can be obtained by substituting the solutions into equation (REF )."
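The non-exceptional case above can be sketched computationally. The following Python snippet uses an illustrative equation of our own choosing (the names `a`, `b`, `c`, `frobenius`, `residual` are ours, not from the text): for $z^3u'''+z^2(9/4+z)u''+z(3/8)u'+zu=0$ the indicial polynomial is $q(r)=r(r-1/4)(r-1/2)$, whose roots $0,1/4,1/2$ have no pairwise difference in $\mathbb{Z}^+_0$, so three Frobenius series exist; the recurrence $d_m\,q(m+r)=-\sum_{k=1}^m d_{m-k}\,[a_k s(s-1)+b_k s+c_k]$ with $s=m-k+r$ produces each series, and the truncated series annihilates the equation order by order (exact rational arithmetic).

```python
from fractions import Fraction as F

# Illustrative equation (our own choice, not from the text):
#   z^3 u''' + z^2 a(z) u'' + z b(z) u' + c(z) u = 0,
# with a(z) = 9/4 + z, b(z) = 3/8, c(z) = z, so that the indicial
# polynomial q(r) = r(r-1)(r-2) + a(0) r(r-1) + b(0) r + c(0)
# equals r (r - 1/4)(r - 1/2): the non-exceptional case.
a = {0: F(9, 4), 1: F(1)}   # Taylor coefficients of a(z)
b = {0: F(3, 8)}            # of b(z)
c = {1: F(1)}               # of c(z)

def p(k, s):
    """Contribution of the order-k Taylor coefficients applied to z^s."""
    return a.get(k, F(0)) * s * (s - 1) + b.get(k, F(0)) * s + c.get(k, F(0))

def q(r):
    """Indicial polynomial."""
    return r * (r - 1) * (r - 2) + p(0, r)

def frobenius(r, N):
    """d_0..d_N of u = z^r sum d_n z^n via d_m q(m+r) = -sum_{k>=1} d_{m-k} p(k, m-k+r)."""
    d = [F(1)]
    for m in range(1, N + 1):
        d.append(-sum(d[m - k] * p(k, m - k + r) for k in range(1, m + 1)) / q(m + r))
    return d

def residual(r, d):
    """Coefficients of z^{-r} L(u); they all vanish for a genuine solution."""
    return [d[m] * (m + r) * (m + r - 1) * (m + r - 2)
            + sum(d[m - k] * p(k, m - k + r) for k in range(0, m + 1))
            for m in range(len(d))]

for r in (F(0), F(1, 4), F(1, 2)):
    assert q(r) == 0                        # r is an indicial root
    d = frobenius(r, 12)
    assert all(t == 0 for t in residual(r, d))
```

The check is a consistency check of the recurrence rather than an independent proof, but it shows concretely how the three series of Theorem G are generated when no two indicial roots differ by a non-negative integer.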
], [ "Special cases:", "We divide the special cases into four groups, according to which of the following conditions the roots $r_1,r_2,r_3$ (always ordered such that $Re(r_1)\ge Re(r_2)\ge Re(r_3)$ ) of the indicial polynomial satisfy: (i) $r_1=r_2=r_3$ (ii) $r_1=r_2$ and $r_1-r_3\in \mathbb {Z}^+$ (iii) $r_2=r_3$ and $r_1-r_2\in \mathbb {Z}^+$ (iv) $r_1-r_2\in \mathbb {Z}^+$ and $r_2-r_3\in \mathbb {Z}^+$ We want to find solutions defined for $|x|<R$ .", "Theorem H (exceptional cases) Consider the equation $L(y):=x^3y^{\prime \prime \prime }+x^2a(x)y^{\prime \prime }+xb(x)y^\prime +c(x)y=0,$ with $a(x),b(x)$ and $c(x)$ analytic for $|x|<R$ , $R>0$ .", "Let $r_1,\;r_2$ and $r_3$ ($\mbox{Re}(r_1)\ge \mbox{Re}(r_2)\ge \mbox{Re}(r_3)$ ) be roots of the indicial polynomial $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0).$ (i) If $r_1=r_2=r_3$ there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ defined in $0<|x|<R$ , which have the following form: $\varphi _1(x)=|x|^{r_1}\sigma _1(x),\;\;\varphi _2(x)=|x|^{r_1+1}\sigma _2(x)+(\log |x|)\varphi _1(x)$ and $\varphi _3(x)=|x|^{r_1+1}\sigma _3(x)+2(\log |x|)\varphi _2(x)-(\log |x|)^2\varphi _1(x), $ where $\sigma _1,\sigma _2,\sigma _3$ are analytic in $|x|<R$ and $\sigma _1(0)\ne 0$ .", "(ii) If $r_1=r_2$ and $r_1-r_3\in \mathbb {Z}^+$ there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ defined in $0<|x|<R$ , which have the form: $\varphi _1(x)=|x|^{r_1}\sigma _1(x),\;\;\varphi _2(x)=|x|^{r_1+1}\sigma _2(x)+(\log |x|)\varphi _1(x)$ and $\varphi _3(x)=|x|^{r_3+2}\sigma _3(x)+c\;(\log |x|)\varphi _1(x), $ where $c$ is a constant, $\sigma _1,\sigma _2,\sigma _3$ are analytic in $|x|<R$ , $\sigma _1(0)\ne 0$ and $\sigma _3(0)\ne 0$ .", "(iii) If $r_2=r_3$ and $r_1-r_2\in \mathbb {Z}^+$ there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ defined in $0<|x|<R$ , which have the form: $\varphi
_2(x)=|x|^{r_2}\sigma _2(x),\;\;\varphi _3(x)=|x|^{r_2+1}\sigma _3(x)+(\log |x|)\varphi _2(x)$ and $\varphi _1(x)=|x|^{r_2+2}\sigma _1(x)+c\;(\log |x|)\varphi _2(x), $ where $c$ is a constant, $\sigma _1,\sigma _2,\sigma _3$ are analytic in $|x|<R$ , $\sigma _1(0)\ne 0$ and $\sigma _2(0)\ne 0$ .", "(iv) If $r_1-r_2\in \mathbb {Z}^+$ and $r_2-r_3\in \mathbb {Z}^+$ there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ defined in $0<|x|<R$ , which have the form: $\varphi _1(x)=|x|^{r_1}\sigma _1(x),\;\;\varphi _2(x)=|x|^{r_2}\sigma _2(x)+c\;(\log |x|)\varphi _1(x)$ and $\varphi _3(x)=|x|^{r_3}\sigma _3(x)+\tilde{c}\;(\log |x|)\varphi _2(x), $ where $c$ and $\tilde{c}$ are constants, $\sigma _1,\sigma _2,\sigma _3$ are analytic in $|x|<R$ , $\sigma _1(0)\ne 0$ , $\sigma _2(0)\ne 0$ and $\sigma _3(0)\ne 0$ .", "Theorem REF is valid in the case of complex ordinary differential equations.", "Theorem I (complex analytic ODEs) Consider the equation $L(u):=z^3u^{\prime \prime \prime }+z^2a(z)u^{\prime \prime }+zb(z)u^\prime +c(z)u=0,$ with $a(z),b(z)$ and $c(z)$ complex analytic for $|z|<R$ , $R>0$ .", "Let $r_1,\;r_2$ and $r_3$ be the roots of the indicial polynomial $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0),$ ordered such that $\mbox{Re}(r_1)\ge \mbox{Re}(r_2)\ge \mbox{Re}(r_3)$ .", "(i) If $r_1=r_2=r_3$ then there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ of the form: $\varphi _1(z)=z^{r_1}\sigma _1(z),\;\;\varphi _2(z)=z^{r_1+1}\sigma _2(z)+(\log z)\varphi _1(z)$ and $\varphi _3(z)=z^{r_1+1}\sigma _3(z)+2(\log z)\varphi _2(z)-(\log z)^2\varphi _1(z), $ where $\sigma _1,\sigma _2,\sigma _3$ are analytic in $|z|<R$ and $\sigma _1(0)\ne 0$ .", "(ii) If $r_1=r_2$ and $r_1-r_3\in \mathbb {Z}^+$ there exist three linearly independent solutions $\varphi _1,\varphi _2,\varphi _3$ of the form: $\varphi _1(z)=z^{r_1}\sigma
_1(z),\\;\\;\\varphi _2(z)=z^{r_1+1}\\sigma _2(z)+(\\log z)\\varphi _1(z)$ and $\\varphi _3(x)=z^{r_3+2}\\sigma _3(z)+c\\;(\\log z)\\varphi _1(z), $ where $c$ constant, $\\sigma _1,\\sigma _2,\\sigma _3$ are analytic in $|z|<R$ , $\\sigma _1(0)\\ne 0$ and $\\sigma _3(0)\\ne 0$ .", "(iii) If $r_2=r_3$ and $r_1-r_2\\in \\mathbb {Z}^+$ there exist three linearly independent solutions $\\varphi _1,\\varphi _2,\\varphi _3$ of the form: $\\varphi _2(z)=z^{r_2}\\sigma _2(z),\\;\\;\\varphi _3(z)=z^{r_2+1}\\sigma _3(z)+(\\log z)\\varphi _2(z)$ and $\\varphi _1(z)=z^{r_2+2}\\sigma _1(z)+c\\;(\\log z)\\varphi _2(z), $ where $c$ constant, $\\sigma _1,\\sigma _2,\\sigma _3$ are analytic in $|z|<R$ , $\\sigma _1(0)\\ne 0$ and $\\sigma _2(0)\\ne 0$ .", "(iv) If $r_1-r_2\\in \\mathbb {Z}^+$ and $r_2-r_3\\in \\mathbb {Z}^+$ there exist three linearly independent solutions $\\varphi _1,\\varphi _2,\\varphi _3$ of the form: $\\varphi _1(z)=z^{r_1}\\sigma _1(z),\\;\\;\\varphi _2(z)=z^{r_2}\\sigma _2(z)+c\\;(\\log z)\\varphi _1(z)$ and $\\varphi _3(z)=z^{r_3}\\sigma _3(z)+\\tilde{c}\\;(\\log z)\\varphi _2(z), $ where $c$ and $\\tilde{c}$ are constants, $\\sigma _1,\\sigma _2,\\sigma _3$ are analytic in $|z|<R$ , $\\sigma _1(0)\\ne 0$ , $\\sigma _2(0)\\ne 0$ and $\\sigma _3(0)\\ne 0$ .", "The relevance of this situation is explained by Example REF .", "Acknowledgement: Theorem REF or, more generally, Frobenius method for order $n$ linear homogeneous complex analytic equations can be found in [18] throughout Chapter XVI (§16.1 page 396 and on).", "Indeed, in §16.1 (page 396) the author proceeds in the classical way to prove the existence of a formal solution of the form $z^r \\sum \\limits _{n=0}^\\infty a_n z^n$ where $r$ is a (maybe complex) root of the indicial equation.", "Next, in §16.2 page 398 the same author makes use of Cauchy Integral Formula to prove, always in the case of a complex regular singular point, the existence of a first \"convergent\" solution of the form $z^r \\sum 
\limits _{n=0}^\infty a_n z^n$ where the power series $\sum \limits _{n=0}^\infty a_n z^n$ converges in some neighborhood of the singular point.", "Next, in §16.3, page 400, the author hints at the form of the other possible solutions according to the disposition of the roots of the indicial equation.", "Our Theorems REF and REF above give a solid confirmation of this statement.", "Indeed, we discuss more accurately the connection between the disposition of the roots of the indicial equation and the types of the solutions.", "Moreover, our proof of the convergence is more elementary, without the need of the Cauchy Integral Formula, and our estimates on the coefficients of the power series are clearer and may be used in a computing process in order to control the speed of the convergence, something which is fundamental in applications to engineering.", "Finally, it is not clear from the argumentation in [18] that the real case, i.e., the case of real ODEs, may be treated in the same way.", "Indeed, the roots of the indicial equation may be complex non-real and therefore the coefficients in the power series in the formal solution may be complex non-real, which would not be useful in the search of real solutions.", "Stronger evidence of this fact is given in Example REF , where a real ODE of third order gives rise to one real root and two complex conjugate roots for the indicial equation.", "This complexity spreads throughout the recurrence and gives complex coefficients for the power series of the solutions.", "This shows that the natural idea of starting from a real analytic ODE, considering its complexification, applying the Frobenius methods in [18] and then considering a sort of \"decomplexification\" of the solution, may not be a reasonable way of finding the (real) solutions of the original (real) ODE.", "Overcoming this difficulty is one of the gains of our §3."
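The phenomenon of a real equation with complex conjugate indicial roots can be sketched concretely. The following snippet uses an Euler-type equation of our own choosing (it is not the Example referred to in the text): for $x^3y'''+x^2y''+xy'=0$ the indicial polynomial is $q(r)=r(r^2-2r+2)$, with one real root $0$ and the conjugate pair $1\pm i$; the complex power $x^{1+i}$ solves the equation, and its real and imaginary parts $x\cos(\log x)$, $x\sin(\log x)$ are the real solutions carrying this complexity.

```python
import math

# Real Euler-type third order equation (our own choice):
#   x^3 y''' + x^2 y'' + x y' = 0,  x > 0.
# Substituting y = x^r gives the indicial polynomial
#   q(r) = r(r-1)(r-2) + r(r-1) + r = r(r^2 - 2r + 2),
# with roots 0 and 1 +- i.
def q(r):
    return r * (r - 1) * (r - 2) + r * (r - 1) + r

assert q(0) == 0
assert abs(q(1 + 1j)) < 1e-12 and abs(q(1 - 1j)) < 1e-12

# The complex power x^(1+i) is annihilated by the equation.
r = 1 + 1j
for x in (0.3, 1.7, 4.0):
    res = (x**3 * r * (r - 1) * (r - 2) * x**(r - 3)
           + x**2 * r * (r - 1) * x**(r - 2)
           + x * r * x**(r - 1))
    assert abs(res) < 1e-10

# The real solution y = x*cos(log x), with hand-computed derivatives
#   y'  = cos(log x) - sin(log x),
#   y'' = -(sin(log x) + cos(log x)) / x,
#   y''' = 2 sin(log x) / x^2,
# also satisfies the equation.
for x in (0.3, 1.7, 4.0):
    u = math.log(x)
    y1 = math.cos(u) - math.sin(u)
    y2 = -(math.sin(u) + math.cos(u)) / x
    y3 = 2 * math.sin(u) / x**2
    assert abs(x**3 * y3 + x**2 * y2 + x * y1) < 1e-12
```

This is only a toy illustration of how the complexity of the indicial roots resurfaces, through logarithms inside trigonometric factors, in the real solutions.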
], [ "Convergence of formal solutions in order three", "Similarly to Theorem REF we may prove, for third order ODEs, the following convergence theorem: Theorem J (1 formal solution third order regular singularity) Consider the third order ordinary differential equation given by $a(x)y^{\prime \prime \prime }+b(x)y^{\prime \prime }+c(x)y^\prime +d(x)y=0,$ where $a,b,c,d$ are analytic functions at the origin $0 \in \mathbb {R}$ .", "Assume that (REF ) has at $x=0$ an ordinary point or a regular singular point.", "Then a formal solution $\hat{y}(x) = \sum \limits _{n=0}^\infty a_n x^n$ is always convergent in some neighborhood $|x|<R$ of the origin.", "Indeed, this solution converges in the same maximal interval $]-R,R[\subset \mathbb {R}$ where the coefficients $a(x), b(x), c(x), d(x)$ are analytic.", "Conjecture 1.1 Consider a linear homogeneous ordinary differential equation of third order given by $a(x)y^{\prime \prime \prime }+b(x)y^{\prime \prime } +c(x)y^\prime + d(x) y=0$ where $a,b,c, d$ are analytic functions at $x_0 \in \mathbb {R}$ .", "Suppose that this equation admits three linearly independent formal solutions $y_1(x), y_2(x), y_3(x)$ .", "Then $x_0$ is an ordinary point or a regular singular point for the ODE and therefore the solutions are convergent analytic functions.", "Conjecture 1.2 Consider a linear homogeneous ordinary differential equation of third order given by $a(x)y^{\prime \prime \prime }+b(x)y^{\prime \prime }+c(x)y^\prime +d(x)y=0$ where $a,b,c, d$ are analytic functions at $x_0 \in \mathbb {R}$ .", "Suppose that this equation admits three linearly independent solutions $y_1(x), y_2(x), y_3(x)$ which are linear combinations of log type and real or complex power type with analytic functions as coefficients.", "Then $x_0$ is an ordinary point or a regular singular point for the ODE."
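The regularity hypothesis in the convergence theorem cannot be dropped. A counterexample of our own construction (classical in spirit, not from the text): the second order equation $x^2y''+(3x-1)y'+y=0$ has an irregular singular point at $0$ (here $b/a$ has a double pole) and admits the divergent formal solution $\hat y=\sum_{n\ge 0} n!\,x^n$; differentiating the equation once yields the third order equation $x^2y'''+(5x-1)y''+4y'=0$ with the same divergent formal solution. The sketch below verifies, by exact series arithmetic, that the formal residual vanishes identically while the coefficients grow factorially, so the radius of convergence is zero.

```python
from math import factorial

N = 20
a = [factorial(n) for n in range(N + 1)]          # candidate coefficients n!

def deriv(c):
    """Series of c'(x): coefficient of x^(n-1) is n*c[n]."""
    return [n * c[n] for n in range(1, len(c))]

def xmul(c, k):
    """Multiply a truncated series by x^k."""
    return [0] * k + c

def lincomb(terms):
    """Sum of (scalar, series) pairs, padding short series with 0."""
    L = max(len(s) for _, s in terms)
    return [sum(k * (s[n] if n < len(s) else 0) for k, s in terms)
            for n in range(L)]

y1, y2, y3 = deriv(a), deriv(deriv(a)), deriv(deriv(deriv(a)))

# residual of x^2 y'' + (3x - 1) y' + y     (irregular singular point at 0)
res2 = lincomb([(1, xmul(y2, 2)), (3, xmul(y1, 1)), (-1, y1), (1, a)])
# residual of x^2 y''' + (5x - 1) y'' + 4 y'  (its derivative, third order)
res3 = lincomb([(1, xmul(y3, 2)), (5, xmul(y2, 1)), (-1, y2), (4, y1)])

assert all(v == 0 for v in res2[:N - 1])   # formal residual vanishes (order 2)
assert all(v == 0 for v in res3[:N - 2])   # and for the order 3 equation
assert a[12] // a[11] == 12                # a_{n+1}/a_n = n+1 -> infinity: radius 0
```

So a formal solution can exist and yet diverge once the singular point fails to be regular, which is exactly what Theorem J and the conjectures above rule out at ordinary and regular singular points.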
], [ "Second order equations", "The classical second order Euler equation is $ax^2 y^{\prime \prime } + b x y ^\prime + c y=0$ where $a, b, c \in \mathbb {R}, \mathbb {C}$ are constants and $ a \ne 0$ .", "The classical method of solution is to look for solutions of the form $y=x^r$ and to obtain a second degree equation on $r$ of the form $ar(r-1) +br + c=0$ , called the associated indicial equation (see [3] Chapter 5).", "This is the basis of the Frobenius methods for solving suitable second order ODEs with non-constant coefficients, as is well known ([3], [6]).", "In this part of our work we push these techniques further by looking at the convergence of formal solutions, the characterization of the regular singularity case and a complete description of the cases where there are solutions of Liouvillian type.", "We also associate to a polynomial second order linear homogeneous complex ODE a first order Riccati ODE and consequently a group of Moebius maps, called the global holonomy of the ODE.", "The case where the group is trivial corresponds to an important class of ODEs which can be explicitly integrated."
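The classical method just recalled can be sketched on a concrete Euler equation of our own choosing: for $2x^2y''+3xy'-y=0$, substituting $y=x^r$ gives the indicial equation $2r(r-1)+3r-1=(2r-1)(r+1)=0$, so $r=1/2$ or $r=-1$ and, for $x>0$, the general solution is $y=C_1x^{1/2}+C_2x^{-1}$. The snippet below checks both roots and both power solutions numerically.

```python
# Illustrative Euler equation (our own choice): 2 x^2 y'' + 3 x y' - y = 0.
# y = x^r leads to the indicial equation a r(r-1) + b r + c = 0.
def indicial(a, b, c, r):
    return a * r * (r - 1) + b * r + c

A, B, C = 2, 3, -1
for r in (0.5, -1.0):                      # roots of (2r - 1)(r + 1) = 0
    assert abs(indicial(A, B, C, r)) < 1e-12
    for x in (0.25, 1.0, 3.0):             # x^r solves the ODE at sample points
        y = x**r
        y1 = r * x**(r - 1)
        y2 = r * (r - 1) * x**(r - 2)
        assert abs(A * x * x * y2 + B * x * y1 + C * y) < 1e-9
```

The same substitution $y=x^r$ is what the Frobenius method generalizes when the coefficients are analytic rather than constant.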
], [ "Second order Euler equations", "Let us first give a characterization of Euler equations in the complex plane, in terms of the regularity of their singular points.", "Theorem 2.1 Consider a second order differential equation $z^2u^{\prime \prime }+zb(z)u^\prime +c(z)u=0$ where $b,c$ are entire functions in the complex plane.", "Then (REF ) is an Euler equation if, and only if, the origin and the infinity are regular singular points of (REF ).", "It is a straightforward computation to check that an Euler equation has the origin and the infinity as regular singular points.", "Let us see the converse.", "Put $z=1/t$ and let $w(t)=u\big (\frac{1}{t}\big )$ .", "Then $u^\prime \big (\frac{1}{t}\big )=-t^2w^\prime (t), \, \,u^{\prime \prime }\big (\frac{1}{t}\big )=t^4w^{\prime \prime }(t)+2t^3w^\prime (t),$ hence $\frac{1}{t^2}u^{\prime \prime }\big (\frac{1}{t}\big )+\frac{1}{t}b\big (\frac{1}{t}\big )u^\prime \big (\frac{1}{t}\big )+c\big (\frac{1}{t}\big )u\big (\frac{1}{t}\big )=0$ $\frac{1}{t^2}[t^4w^{\prime \prime }(t)+2t^3w^\prime (t)]+\frac{1}{t}b\big (\frac{1}{t}\big )[-t^2w^\prime (t)]+c\big (\frac{1}{t}\big )w(t)=0$ $t^2w^{\prime \prime }(t)+t\big [2-b\big (\frac{1}{t}\big )\big ]w^\prime (t)+c\big (\frac{1}{t}\big )w(t)=0.$ Given that the infinity is a regular singular point of (REF ) we have that $t=0$ is a regular singular point of (REF ); consequently there exist the limits $\displaystyle \lim _{t\rightarrow 0}\frac{t^2\big [2-b\big (\frac{1}{t}\big )\big ]}{t^2}=\displaystyle \lim _{t\rightarrow 0}\big [2-b\big (\frac{1}{t}\big )\big ]=\displaystyle \lim _{t\rightarrow 0}\big [(2-b_0)-\frac{b_1}{t}-\frac{b_2}{t^2}-\ldots \big ]$ and $\displaystyle \lim _{t\rightarrow 0}\frac{t^2c\big (\frac{1}{t}\big )}{t^2}=\displaystyle \lim _{t\rightarrow 0}c\big (\frac{1}{t}\big )=\displaystyle \lim _{t\rightarrow 0}\big [c_0+\frac{c_1}{t}+\frac{c_2}{t^2}+\ldots \big ].$
Given that these limits exist we have that $0=b_1=b_2=\ldots $ and $0=c_1=c_2=\ldots $ .", "Hence equation (REF ) is of the form $z^2u^{\prime \prime }+zb_0u^\prime +c_0u=0$ which is an Euler equation.", "A far more general statement is found below: Theorem 2.2 Consider a second order differential equation $a(z)u^{\prime \prime }+b(z)u^\prime +c(z)u=0$ where $a,b,c$ are entire functions in the complex plane with $a^\prime (0)\ne 0$ .", "The origin and the infinity are regular singular points of (REF ) if and only if there exists $k\in \lbrace 0,1,2,\ldots \rbrace $ such that equation (REF ) is of the form $(A_1z+\ldots +A_{k+2}z^{k+2})u^{\prime \prime }+(B_0+B_1z+\ldots +B_{k+1}z^{k+1})u^\prime +(C_0+C_1z+\ldots +C_{k}z^{k})u=0$ where $A_1,\ldots ,A_{k+2},B_0,\ldots ,B_{k+1},C_0,\ldots ,C_{k}$ are constants such that $A_1,A_{k+2}\ne 0$ .", "Let us first see that (REF ) has the origin and the infinity as regular singular points.", "Clearly the origin is a singular point of (REF ) and since there exist the limits $\displaystyle \lim _{z\rightarrow 0}\frac{z(B_0+B_1z+\ldots +B_{k+1}z^{k+1})}{(A_1z+\ldots +A_{k+2}z^{k+2})}=\displaystyle \lim _{z\rightarrow 0}\frac{B_0+B_1z+\ldots +B_{k+1}z^{k+1}}{(A_1+\ldots +A_{k+2}z^{k+1})}=\frac{B_0}{A_1}$ and $\displaystyle \lim _{z\rightarrow 0}\frac{z^2(C_0+C_1z+\ldots +C_{k}z^{k})}{A_1z+\ldots +A_{k+2}z^{k+2}}=\displaystyle \lim _{z\rightarrow 0}\frac{z(C_0+C_1z+\ldots +C_{k}z^{k})}{A_1+\ldots +A_{k+2}z^{k+1}}=0$ we have that the origin is a regular singular point.", "Put $z=1/t$ and let $w(t)=u\big (\frac{1}{t}\big )$ .", "Then $u^\prime \big (\frac{1}{t}\big )=-t^2w^\prime (t), \, \, u^{\prime \prime }\big (\frac{1}{t}\big )=t^4w^{\prime \prime }(t)+2t^3w^\prime (t),$ hence $\big (\frac{A_1}{t}+\ldots +\frac{A_{k+2}}{t^{k+2}}\big )u^{\prime \prime }\big (\frac{1}{t}\big )+\big (B_0+\frac{B_1}{t}+\ldots +\frac{B_{k+1}}{t^{k+1}}\big )u^\prime \big
(\\frac{1}{t}\\big )+\\big (C_0+\\frac{C_1}{t}+\\ldots +\\frac{C_{k}}{t^{k}}\\big )u\\big (\\frac{1}{t}\\big )=0$ $\\begin{array}{c} \\big (\\frac{A_1}{t}+\\ldots +\\frac{A_{k+2}}{t^{k+2}}\\big )[t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t)]+\\big (B_0+\\frac{B_1}{t}+\\ldots +\\frac{B_{k+1}}{t^{k+1}}\\big )[-t^2w^\\prime (t)]\\\\\\\\+\\big (C_0+\\frac{C_1}{t}+\\ldots +\\frac{C_{k}}{t^{k}}\\big )w(t)=0\\end{array}$ $\\begin{array}{c}\\big (A_1t^{k+3}+\\ldots +A_{k+2}t^2\\big )w^{\\prime \\prime }(t)+\\big ((2A_1-B_0)t^{k+2}+\\ldots +(2A_{k+2}-B_{k+1})t\\big )w^\\prime (t)\\\\\\\\+\\big (C_0t^k+\\ldots +C_{k}\\big )w(t)=0.\\end{array}$ Observe that the origin is a singular point of (REF ) and since there exist the limits $\\begin{array}{c}\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t\\big ((2A_1-B_0)t^{k+2}+\\ldots +(2A_{k+2}-B_{k+1})t\\big )}{A_1t^{k+3}+\\ldots +A_{k+2}t^2}\\\\\\\\=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{(2A_1-B_0)t^{k+1}+\\ldots +(2A_{k+2}-B_{k+1})}{A_1t^{k+1}+\\ldots +A_{k+2} }=\\frac{2A_{k+2}-B_{k+1}}{A_{k+2}}\\end{array}$ and $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^2\\big (C_0t^k+\\ldots +C_{k}\\big )}{A_1t^{k+3}+\\ldots +A_{k+2}t^2}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{C_0t^k+\\ldots +C_{k}}{A_1t^{k+1}+\\ldots +A_{k+2}}=\\frac{C_k}{A_{k+2}}$ we have that o the origin is a regular singular point of (REF ) consequently the infinity is a regular singular point of (REF ).", "Conversely, assume that the origin and the infinity are regular singular points of (REF ).", "Hence by the change of coordinates $z=1/t$ and considering $v(t)=u\\big (\\frac{1}{t}\\big )$ $u^\\prime \\big (\\frac{1}{t}\\big )=-t^2v^\\prime (t), \\, \\, u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )=t^4v^{\\prime \\prime }(t)+2t^3v^\\prime (t),$ hence $a\\big (\\frac{1}{t}\\big )u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )+b\\big (\\frac{1}{t}\\big )u^\\prime \\big (\\frac{1}{t}\\big )+c\\big (\\frac{1}{t}\\big )u\\big (\\frac{1}{t}\\big )=0$ $a\\big 
(\\frac{1}{t}\\big )\\big [t^4v^{\\prime \\prime }(t)+2t^3v^\\prime (t)\\big ]+b\\big (\\frac{1}{t}\\big )\\big [-t^2v^\\prime (t)\\big ]+c\\big (\\frac{1}{t}\\big )u\\big (\\frac{1}{t}\\big )=0$ $t^4a\\big (\\frac{1}{t}\\big )w^{\\prime \\prime }(t)+\\big [2t^3a\\big (\\frac{1}{t}\\big )-t^2b\\big (\\frac{1}{t}\\big )\\big ]w^\\prime (t)+c\\big (\\frac{1}{t}\\big )w(t)=0.$ Given that the infinity is a regular singular point of (REF ) then the origin is a regular singular point of (REF ).", "Hence, there exist the limits $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t\\big [2t^3a\\big (\\frac{1}{t}\\big )-t^2b\\big (\\frac{1}{t}\\big )\\big ]}{t^4a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\big [2-\\frac{b\\big (\\frac{1}{t}\\big )}{ta\\big (\\frac{1}{t}\\big )}\\big ]=2-\\lim _{t\\rightarrow 0}\\frac{b_0+\\frac{b_1}{t}+\\frac{b_2}{t^2}+\\ldots }{a_1+\\frac{a_2}{t}+\\frac{a_3}{t^2}+\\ldots }$ and $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^2c\\big (\\frac{1}{t}\\big )}{t^4a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{c\\big (\\frac{1}{t}\\big )}{t^2a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{c_0+\\frac{c_1}{t}+\\frac{c_2}{t^2}+\\ldots }{a_1t+a_2+\\frac{a_3}{t}+\\ldots }$ hence we have there exists $k=0,1,2\\ldots $ in such a way that $a_{k+2}\\ne 0$ , $0=a_{k+3}=a_{k+4}=\\ldots $ , $0=b_{k+1}=b_{k+2}=\\ldots $ and $0=c_{k+1}=c_{k+2}=\\ldots $ .", "Given that the origin is a regular singular point of (REF ) we have that there exist the limits $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{zb(z)}{a(z)}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{b_0z+b_1z^2+\\ldots +b_{k+1}z^{k+2}}{a_1z+a_2z^2+\\ldots +a_{k+2}z^{k+2}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{b_0+b_1z+\\ldots +b_{k+1}z^{k+1}}{a_1+a_2z+\\ldots +a_{k+2}z^{k+1}}$ and $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^2c(z)}{a(z)}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{c_0z^2+c_1z^3+\\ldots 
+c_{k}z^{k+2}}{a_1z+a_2z^2+\ldots +a_{k+2}z^{k+2}}=\displaystyle \lim _{z\rightarrow 0}\frac{c_0z+c_1z^2+\ldots +c_{k}z^{k+1}}{a_1+a_2z+\ldots +a_{k+2}z^{k+1}}$ provided that $a_1\ne 0$ .", "Hence (REF ) is of the form (REF )." ], [ "Convergence of formal solutions", "Consider a second order ordinary differential equation given by $a(z)u^{\prime \prime }+b(z)u^\prime +c(z)u=0$ where $a,b,c$ are holomorphic in a neighborhood of the origin $0\in \mathbb {C}$ .", "We shall mainly address two questions: Question (i): Under what conditions can we ensure that the origin is an ordinary point or a regular singular point of the equation?", "Question (ii): Is a formal solution of the ODE always convergent?", "Theorem 2.3 (formal solutions order two) Consider a second order ordinary differential equation given by $a(x)y^{\prime \prime }+b(x)y^\prime +c(x)y=0$ where $a,b,c$ are real or complex analytic functions at $x_0 \in \mathbb {R}, \mathbb {C}$ .", "Suppose also that there exist two linearly independent formal solutions $\hat{y}_1(x)$ and $\hat{y}_2(x)$ centered at $x_0$ of equation (REF ).", "Then $x_0$ is an ordinary point or a regular singular point of (REF ).", "Moreover, $\hat{y}_1(x)$ and $\hat{y}_2(x)$ are convergent.", "Let us give a first proof of the convergence: First we consider the complex analytic case, i.e., $z$ is a complex variable and the coefficients $a(z), b(z), c(z)$ are complex analytic (holomorphic) functions in a neighborhood $|z-z_0|<R$ of $z_0\in \mathbb {C}$ .", "According to [28] there is an integrable complex analytic one-form $\Omega $ in $\mathbb {C}^3$ defined as follows $\Omega =-a(z)ydx + a(z)xdy +[a(z)y^2+b(z)xy +c(z)x^2]dz.$ This one-form is tangent to the vector field in $\mathbb {C}^3$ associated to the reduction of order of the ODE.", "Moreover, given two solutions $u_1(z)$ and $u_2(z)$ of the ODE the function $H=\frac{xu_1^\prime (z)-yu_1(z)}{xu_2^\prime (z)-yu_2(z)}$ is a first integral for
the form $\\Omega $ , ie., $dH\\wedge \\Omega =0$ .", "By hypothesis there exist two linearly independent formal solutions $\\hat{u}_1$ and $\\hat{u}_2$ of equation (REF ).", "Each solution writes as a formal complex power series $\\hat{u}_j(z)=\\sum \\limits _{n=0}^\\infty a_n ^j z^n\\in \\mathbb {C}\\lbrace \\lbrace z\\rbrace \\rbrace $ .", "According to the above, there exists a first integral purely formal $H=\\frac{x\\hat{u}_1^\\prime (z)-y\\hat{u}_1(z)}{x\\hat{u}_2^\\prime (z)-y\\hat{u}_2(z)}$ of the integrable one-form $\\Omega $ above.", "Now we recall the following convergence theorem: Theorem 2.4 (Cerveau-Mattei, [9], Theorem 1.1 page 106) Let $\\Omega $ be a germ at $0\\in \\mathbb {C}^n$ of an integrable holomorphic 1-form and $H=\\frac{f}{g}\\in \\hat{\\mathcal {M}}_n$ a purely formal meromorphic first integral of $\\Omega $ , i.e., $\\Omega \\wedge dH=0$ and $H,1/H\\notin \\hat{\\mathcal {O}}_n$ .", "Then $H$ converges, i.e., $H\\in \\mathcal {M}_n$ .", "From the above theorem there exist $f,g\\in \\mathcal {O}_3$ such that $H=\\frac{f}{g}$ .", "Putting $x=0$ and $y=y_0$ small enough and non-zero we have $\\frac{\\hat{u}_1(z)}{\\hat{u}_2(z)}=\\frac{f_1(z)}{g_1(z)}.$ Also, putting $y=0$ and $x=x_0$ small enough and non-zero we have $\\frac{\\hat{u}_1^\\prime (z)}{\\hat{u}_2^\\prime (z)}=\\frac{f_2(z)}{g_2(z)}$ .", "Hence we have $\\hat{u}_1=\\frac{f_1}{g_1}\\hat{u}_2$ and $\\hat{u}_1^\\prime =\\frac{f_2}{g_2}\\hat{u}_2^\\prime .$ Derivating (REF ) we have $\\hat{u}_1^\\prime =\\big (\\frac{f_1}{g_1}\\big )^\\prime \\hat{u}_2+\\frac{f_1}{g_1}\\hat{u}_2^\\prime .$ Replacing (REF ) in (REF ) we have $\\frac{f_2}{g_2}\\hat{u}_2^\\prime =\\big (\\frac{f_1}{g_1}\\big )^\\prime \\hat{u}_2+\\frac{f_1}{g_1}\\hat{u}_2^\\prime $ $\\big (\\frac{f_2}{g_2}-\\frac{f_1}{g_1}\\big ) \\hat{u}_2^\\prime =\\big (\\frac{f_1}{g_1}\\big )^\\prime \\hat{u}_2.$ Observe that $\\frac{f_2}{g_2}-\\frac{f_1}{g_1}\\ne 0$ since $\\hat{u}_1$ and $\\hat{u}_2$ are linearly 
independent.", "Therefore from (REF ) we obtain $\hat{u}_2^\prime =\frac{\big (\frac{f_1}{g_1}\big )^\prime }{\big (\frac{f_2}{g_2}-\frac{f_1}{g_1}\big )}\hat{u}_2 $ thus $\hat{u}_2(z)=\exp \big (\displaystyle \int ^z_{z_0}\frac{\big (\frac{f_1(w)}{g_1(w)}\big )^\prime }{\big (\frac{f_2(w)}{g_2(w)}-\frac{f_1(w)}{g_1(w)}\big )}dw \big )$ is convergent and according to (REF ) $\hat{u}_1$ is also convergent.", "This proves the convergence part in Theorem REF .", "We stress the fact that we are not assuming the ODE to be regular at $x_0$ ." ], [ "The wronskian I", "Consider the linear homogeneous second order ODE $a(x)y^{\prime \prime }+b(x)y^\prime +c(x)y=0$ where $a(x),b(x),c(x)$ are differentiable real or complex functions defined in some open subset $U\subset \mathbb {R}, \mathbb {C}$ .", "We may assume that $U$ is an open disc centered at the origin $0 \in \mathbb {R}, \mathbb {C}$ .", "We make no hypothesis on the nature of the point $x=0$ as a singular or ordinary point of (REF ).", "Given two solutions $y_1$ and $y_2$ of (REF ) their wronskian is defined by $W(y_1,y_2)(x)=y_1(x)y_2^\prime (x)-y_2(x)y_1^\prime (x)$ .", "Claim 2.1 The wronskian $W(y_1,y_2)$ satisfies the following first order ODE $a(x)w^\prime +b(x)w=0.$ This is a well-known fact; the proof is a straightforward computation and we shall not present it.", "Most importantly, the above fact allows us to introduce the notion of the wronskian of a general second order linear homogeneous ODE such as (REF ) as follows: Definition 2.1 The wronskian of (REF ) is defined as the general solution of (REF ).", "Hence, in general the wronskian is of the form $W(x)=K\exp \big (-\displaystyle \int ^x\frac{b(\eta )}{a(\eta )}d\eta \big )$ where $K$ is a constant.", "A well-known consequence of the above formula is the following: Lemma 2.1 Given solutions $y_1(x), y_2(x)$ the following conditions are equivalent: $W(y_1,y_2)(x)$ is identically zero.", "$W(y_1,y_2)(x)$
vanishes at some point $x=x_0$ .", "$y_1(x), y_2(x)$ are linearly dependent.", "Let us analyze the consequences of this form.", "We shall consider the origin as the center of our disc domain.", "In what follows the coefficients are analytic in a neighborhood of the origin.", "Case (1): If $\\frac{b}{a}$ has poles of order $r>1$ at the origin: In this case we can write $\\frac{b(x)}{a(x)}=\\frac{A_r}{x^r}+\\ldots +\\frac{A_2}{x^2}+\\frac{A_1}{x}+d(x)$ where $A_1,A_2,\\ldots ,A_r$ are constant, $A_r\\ne 0$ and $d$ is analytic at the origin.", "Thus we have $W(x)=K\\exp \\big (-\\displaystyle \\int ^x\\big (\\frac{A_r}{w^r}+\\ldots +\\frac{A_2}{w^2}+\\frac{A_1}{w}+d(w)\\big )dw\\big )$ $W(x)=K\\exp \\big (\\frac{A_r}{(r-1)x^{r-1}}+\\ldots +\\frac{A_2}{x}-A_1\\log |x|+\\tilde{d}(x)\\big )$ where $\\tilde{d}$ is analytic.", "Hence $W(x)=K|x|^{-A_1}\\exp \\big (\\frac{A_r}{(r-1)x^{r-1}}+\\ldots +\\frac{A_2}{x}\\big )\\exp \\big (\\tilde{d}(x)\\big ).$ Now observe that $\\exp \\big (\\frac{A_r}{(r-1)x^{r-1}}+\\ldots +\\frac{A_2}{x}\\big )$ is neither analytic nor formal.", "Therefore, in this case, $W$ is neither analytic nor formal.", "Case (2): $\\frac{b}{a}$ has poles of order $\\le 1$ at the origin.", "In this case $\\frac{b(x)}{a(x)}=\\frac{A_1}{x}+d(x)$ and $W(x)=K|x|^{-A_1}\\exp \\big (\\tilde{d}(x)\\big ).$ If $W$ is analytic or formal then we must have $A_1\\in \\lbrace 0,-1,-2,-3,\\ldots \\rbrace $ .", "Summarizing we have: Lemma 2.2 Assume that the wronskian $W$ of the ODE $a(x) y^{\\prime \\prime } + b(x) y^\\prime + c(x) y=0$ , with analytic coefficients, is analytic or formal.", "Then $\\frac{b}{a}$ has a pole of order $r\\le 1$ at the origin.", "Moreover, we must have $W(x)=K|x|^{-A}\\exp \\big (f(x)\\big )$ , where $A\\in \\lbrace 0,-1,-2,-3,\\ldots \\rbrace $ and $f$ is analytic.", "Now we are able to prove the remaining part of Theorem REF : We have already proved the first part.", "Let us now prove that the origin is an ordinary point or a regular 
singularity of the ODE.", "This is done by means of the following two claims: Claim 2.2 The quotient $\frac{b}{a}$ has poles of order $\le 1$ at the origin.", "Indeed, since by hypothesis there are two linearly independent formal solutions, the wronskian is formal.", "Thus, from the above discussion, the claim follows.", "The last part is done below.", "For simplicity we shall assume that $x=z\in \mathbb {C}$ and that the coefficients are complex analytic (holomorphic) functions.", "Claim 2.3 We have $\displaystyle \lim _{z\rightarrow 0} z^2 \frac{c(z)}{a(z)} \in \mathbb {C}$ .", "Write $a(z) u^{\prime \prime } + b(z) u ^\prime + c(z) u=0$ and $a(z)=z^k$ according to the local form of holomorphic functions.", "Since $\displaystyle \lim _{z \rightarrow 0}z\frac{b(z)}{a(z)}\in \mathbb {C}$ we must have $\frac{b(z)}{a(z)}=\frac{\tilde{b}(z)}{z}$ for some holomorphic function $\tilde{b}(z)$ at 0.", "Assume that the Claim is not true; then $\frac{c(z)}{a(z)}$ must have a pole of order $\ge 3$ at 0.", "Thus we may write $\frac{c(z)}{a(z)}=\frac{c(z)}{z^k}= \frac{\tilde{c}(z)}{z^{3+ \nu }}$ for some holomorphic function $\tilde{c}(z)$ at 0 and some $\nu \ge 0$ .", "Multiplying the ODE above by $z^{3+\nu }/a(z)$ we then obtain $z^{3+ \nu } u ^{\prime \prime } + z^{ 2 + \nu } \tilde{b}(z) u ^\prime + \tilde{c} (z) u=0.$ For the sake of simplicity we will assume that $\tilde{c}(0)=1$ and $\nu =0$ .", "This does not affect the argumentation below.", "We write $\tilde{b}(z)= b_0 + b_1 z + b_2 z^2 +\ldots $ and $\tilde{c}(z)= 1 +c_1 z + c_2 z^2 + \ldots $ in power series.", "Substituting this in the ODE we obtain $z^{3+\nu }u^{\prime \prime } + z^{2+\nu } (b_0+b_1z+b_2 z^2+\ldots ) u ^\prime +(1 +c_1 z + c_2 z^2 + \ldots ) u=0.$ Now we write $u(z)=\sum \limits _{n=0}^\infty a_n z^n$ in power series.", "We obtain $\sum \limits _{n=2}^\infty n(n-1)a_n z^{n+ 1 + \nu } + (b_0 + b_1 z +\ldots )\sum \limits _{n=1}^\infty na_n z^{n+1+\nu } + (1 + c_1 z+ c_2 z^2
+\\ldots ) \\sum \\limits _{n=0}^\\infty a_n z^n=0.$ Now we start by comparing the lower powers of $z$ on each term of the above expression.", "We have $\\sum \\limits _{n=2}^\\infty n(n-1) a_nz^{n+ 1 + \\nu }=2a_2z^{3 +\\nu } +6a_3 z^{4+ \\nu }+ \\ldots $ $(b_0 + b_1 z +\\ldots )\\sum \\limits _{n=1}^\\infty na_n z^{n+1+\\nu }= b_0 a_1 z^{ 2 + \\nu } +(2 b_0a_2 + b_1 a_1) z^{3+\\nu } + \\ldots $ and finally $(1 + c_1 z+ c_2 z^2 +\\ldots ) \\sum \\limits _{n=0}^\\infty a_n z^n= a_0 + (a_1 + c_1 a_0) z +(a_2 + c_1 a_1 + a_0 c_2) z^2 + \\ldots $ Starting now from the lowest powers of $z$ in the expression of the ODE above we obtain $a_0=0$ .", "Also we obtain $a_1 + c_1 a_0=0$ and therefore $a_1=0$ .", "Since $\\nu =0$ , comparing the coefficients of $z^2$ gives $b_0 a_1 + a_2 + c_1 a_1 + a_0 c_2=0$ and then $a_2=0$ .", "Now for the coefficient of $z^3$ we obtain $2a_2 + 2 b_0a_2 + b_1 a_1 + a_3=0$ and therefore $a_3=0$ .", "And so on we conclude that $a_n=0$ for all $n \\ge 0$ , i.e., $u=0$ is the only possible formal solution.", "This proves the claim by contradiction.", "The two claims above end the proof of Theorem REF .", "Next we present a result that also implies a simpler proof of Theorem REF .", "Theorem 2.5 (1 formal solution second order regular singularity) Consider a second order ordinary differential equation given by $a(x)y^{\\prime \\prime }+b(x)y^\\prime +c(x)y=0$ where $a,b,c$ are analytic functions at $x_0 \\in \\mathbb {R}$ .", "Suppose that (REF ) has at $x_0$ an ordinary point or a regular singular point.", "Then a formal solution $\\hat{y}(x) = \\sum \\limits _{n=0}^\\infty a_n (x-x_0)^n$ is always convergent in some neighborhood $|x-x_0|<R$ of the point $x_0$ .", "Indeed, this solution converges in the same disc type neighborhood where the coefficients $a(x), b(x), c(x)$ are analytic.", "First of all, we may assume that $x_0=0$ , so that the origin is an ordinary point or a regular singularity of the ODE.", "If it is an ordinary point, then by the classical 
existence theorem for ODEs there are two linearly independent analytic solutions and any solution, formal or convergent, will be a linear combination of these two solutions.", "Such a solution is therefore convergent.", "Thus we may assume that the origin is a regular singular point and write the ODE as $x^2 y^{\\prime \\prime } + xb(x)y^\\prime + c(x) y=0$ where the new coefficients $b(x)$ and $c(x)$ , obtained after renaming $xb(x)/a(x)$ and $x^2c(x)/a(x)$ conveniently, are analytic.", "Let us consider a formal solution $\\hat{y}(x)=\\sum \\limits _{n=0}^\\infty d_n x^n$ .", "We can write $\\hat{y}(x)=x^{r_1}(1+{\\varphi }(x))$ for some $r_1 \\ge 0$ and ${\\varphi }(x)$ a formal function with ${\\varphi }(0)=0$ .", "In other words, $r_1\\in \\lbrace 0,1,2,\\ldots \\rbrace $ is the order of $\\hat{y}(x)$ at the origin.", "Then we have $\\hat{y}^\\prime (x)= r_1 x^{r_1-1} ( 1 + {\\varphi }(x)) + x^{r_1} {\\varphi }^\\prime (x)$ and $\\hat{y}^{\\prime \\prime }(x)=r_1 (r_1-1) x^{r_1-2} (1 + {\\varphi }(x)) + 2 r_1 x^{r_1-1} {\\varphi }^\\prime (x)+x^{r_1}{\\varphi }^{\\prime \\prime }(x)$ .", "Substituting this in the ODE $x^2 \\hat{y}^{\\prime \\prime }(x) + xb(x)\\hat{y}^\\prime (x) + c(x) \\hat{y}(x)=0$ and dividing by $x^{r_1}$ we obtain $r_1(r_1-1)(1 +{\\varphi }(x)) + 2r_1 x {\\varphi }^\\prime (x) + x^2 {\\varphi }^{\\prime \\prime } (x) +r_1 b(x) (1 + {\\varphi }(x)) + x b(x) {\\varphi }^{\\prime }(x) + c(x)(1 + {\\varphi }(x))=0.$ For $x=0$ , since ${\\varphi }(0)=0$ , we then obtain the equation $r_1(r_1-1) + r_1 b(0) + c(0)=0.$ The above is exactly the indicial equation associated to the original ODE.", "We then conclude that the original ODE has an indicial equation with a root $r_1$ that belongs to the set of non-negative integers.", "Let now $r\\in \\mathbb {Z}$ be the other root of the indicial equation.", "There are two possibilities: (i) $r\\ge r_1$ .", "In this case, according to the classical theorem of Frobenius we conclude that there is at least one solution $y_r(x)=x^r \\sum \\limits _{n=0}^\\infty 
e_n x^n$ which is convergent.", "There are two possibilities: (i.1) $y_r(x)$ and $\\hat{y}(x)$ are linearly dependent: in this case, $y_r(x)=\\ell \\cdot \\hat{y}(x)$ for some constant $\\ell \\in \\mathbb {R}$ or $\\mathbb {C}$ .", "Then $r=r_1$ and therefore $y_r(x)$ is analytic and the same holds for $\\hat{y}(x)$ .", "More precisely, $\\hat{y}(x)$ is analytic in the same neighborhood $|x|<R$ where $b(x), c(x)$ are convergent.", "(i.2) $y_r(x)$ and $\\hat{y}(x)$ are linearly independent: Since $y_r(x)$ is analytic and seeing $y_r(x)$ as a formal solution, we have two linearly independent formal solutions.", "From what we have seen above in Theorem REF both solutions are convergent in the common disc domain of analyticity of the functions $b(x), c(x)$ .", "(ii) $r_1\\ge r$ .", "In this case, according to the classical theorem of Frobenius we conclude that there is at least one solution $\\tilde{y}_{r_1}(x)=x^{r_1}\\sum \\limits _{n=0}^\\infty f_n x^n$ , where the power series is convergent.", "There are two possibilities: (ii.1) $\\tilde{y}_{r_1}(x)$ and $\\hat{y}(x)$ are linearly dependent: in this case, $\\tilde{y}_{r_1}(x)=\\tilde{\\ell }\\cdot \\hat{y}(x)$ for some constant $\\tilde{\\ell }\\in \\mathbb {R}$ or $\\mathbb {C}$ .", "Since $r_1\\ge 0$ is an integer, $\\tilde{y}_{r_1}(x)$ is analytic and the same holds for $\\hat{y}(x)$ .", "More precisely, $\\hat{y}(x)$ is analytic in the same neighborhood $|x|<R$ where $b(x), c(x)$ are convergent.", "(ii.2) $\\tilde{y}_{r_1}(x)$ and $\\hat{y}(x)$ are linearly independent: in this case, $\\tilde{y}_{r_1}(x)$ is analytic and seeing $\\tilde{y}_{r_1}(x)$ as a formal solution, we have two linearly independent formal solutions.", "From what we have seen above in Theorem REF both solutions are convergent in the common disc domain of analyticity of the functions $b(x), c(x)$ .", "The above proof still makes use of the convergence part in Theorem REF , thus it cannot be used to give an alternative proof of Theorem REF .", "Let us work 
on a totally independent proof of Theorem REF based only on classical methods of Frobenius and ODEs.", "To this end we shall need a few lemmas." ], [ "The wronskian II", "We consider the ODE $x^2 y^{\\prime \\prime } + xb(x) y^\\prime + c(x) y=0$ with a regular singular point at the origin.", "Lemma 2.3 Let $\\hat{y}(x)$ be a formal solution of the ODE.", "Then we must have $\\hat{y}(x)=x^r (1+ \\sum \\limits _{n=1}^\\infty a_n x^n)$ where $r$ is a root of the indicial equation of the ODE.", "Indeed, from the last proof if we write $\\hat{y}(x)=x^r(1+{\\varphi }(x))$ for some $r \\ge 0$ and ${\\varphi }(x)$ a formal function with ${\\varphi }(0)=0$ then we have that $r(r-1) + r b(0) + c(0)=0,$ which is exactly the indicial equation associated to the original ODE.", "Remark 2.1 Let $r\\in \\lbrace 0,1,2,\\ldots \\rbrace $ be a root of the indicial equation and assume that we have two solutions $\\hat{y}_1(x)=x^r (1+ {\\varphi }_1(x))$ and $\\hat{y}_2(x)=x^r (1 + {\\varphi }_2(x))$ which are formal.", "The wronskian writes $W(\\hat{y}_1,\\hat{y}_2)(x)= \\hat{y}_1(x)\\hat{y}_2^\\prime (x)-\\hat{y}_1^\\prime (x)\\hat{y}_2(x)=x^r(1 +{\\varphi }_1(x))[r x^{r-1} (1+ {\\varphi }_2(x)) + x^r {\\varphi }_2 ^\\prime (x)] -x^r(1 +{\\varphi }_2(x))[r x^{r-1} (1+ {\\varphi }_1(x)) + x^r {\\varphi }_1 ^\\prime (x)] =x^{2r}[\\varphi ^\\prime _2(x)(1+\\varphi _1(x))-\\varphi ^\\prime _1(x)(1+\\varphi _2(x))]$ .", "Then we have two cases: (i) $r \\ge 1$ .", "In this case $W(\\hat{y}_1,\\hat{y}_2)(0)=0$ .", "In this situation we must have $W(\\hat{y}_1,\\hat{y}_2) (x)=0$ and therefore $\\hat{y}_1, \\hat{y}_2$ are linearly dependent.", "(ii) $r=0$ .", "In this case $W(\\hat{y}_1,\\hat{y}_2)(0)=\\varphi ^\\prime _2(0)-\\varphi ^\\prime _1(0)$ need not vanish, so no conclusion can be drawn from the wronskian alone.", "Let us proceed.", "We are assuming now that we have two formal solutions $\\hat{y}_1,\\hat{y}_2$ for the ODE above.", "We write $\\hat{y}_j(x)=x^{r_j}(1 + {\\varphi }_j(x))$ for some formal series ${\\varphi }_j(x)$ that satisfies ${\\varphi }_j(0)=0$ .", "The exponents $r_j$ are non-negative integers and from what we have seen above, 
these are roots of the indicial equation $r(r-1) + r b(0) + c(0)=0$ of the ODE.", "We may assume that $r_1 \\ge r_2$ .", "So we have the following possibilities: (i) $r_1=r_2$ .", "If this is the case we cannot a priori assert that the indicial equation has only the root $r=r_1=r_2$ .", "Anyway, if $r \\ne 0$ then from what we have seen above the formal solutions $\\hat{y}_1, \\hat{y}_2$ are linearly dependent.", "This is a contradiction.", "Thus we must have $r=0$ .", "If $r=0$ is the only root of the indicial equation then we have a basis of the solution space given by $y_1(x)=1+\\sum \\limits _{n=1}^\\infty e_n x^n$ and $y_2(x)=y_1(x) \\log |x| + \\sum \\limits _{n=1}^\\infty f_n x^n$ .", "If a linear combination $\\hat{y}(x)=c_1 y_1(x) + c_2 y_2(x)$ is a formal function then necessarily $c_2=0$ .", "Thus any two formal solutions are linearly dependent.", "Assume now that $r=0$ is not the only root of the indicial equation.", "Denote by $\\tilde{r}\\in \\mathbb {Z}^*$ the other root of the indicial equation.", "There are two possibilities: (a) If $\\tilde{r}>0$ , then there is a basis of solutions given by $y_1(x) = x^{\\tilde{r}} (1+ \\sum \\limits _{n=1}^\\infty g_n x^n)$ and $y_2(x) = a y_1(x)\\log |x| + |x|^0( 1 + \\sum \\limits _{n=1}^\\infty h_n x^n)$ .", "Let $y(x)=c_1 y_1(x) + c_2 y_2(x)$ be a formal power series.", "Then $y(x)= (c_1 + a c_2 \\log |x|)x^{\\tilde{r}} (1+ \\sum \\limits _{n=1}^\\infty g_n x^n) + c_2( 1 + \\sum \\limits _{n=1}^\\infty h_n x^n).$", "If $y(x)$ is a formal power series then we must have $a c_2=0$ and therefore $y(x)= c_1 x^{\\tilde{r}} (1+ \\sum \\limits _{n=1}^\\infty g_n x^n) + c_2( 1 + \\sum \\limits _{n=1}^\\infty h_n x^n)$ .", "In particular, since $\\tilde{r} \\in \\mathbb {N}$ , $y(x)$ is convergent.", "This shows that the formal solutions $\\hat{y}_1(x), \\hat{y}_2(x)$ are convergent and this is the only possible case where they can be linearly independent.", "(b) If $\\tilde{r}<0$ , then there is a basis of solutions given by 
$y_1(x)=1 + \\sum \\limits _{n=1}^\\infty p_n x^n$ and $y_2(x)=ay_1(x) \\log |x| + x^{\\tilde{r}} (1 + \\sum \\limits _{n=1}^\\infty q_n x^n)$ .", "Write $y(x)= c_1 y_1(x) + c_2 y_2(x)$ for a linear combination of $y_1(x)$ and $y_2(x)$ .", "Then $y(x)=(c_1 + a c_2 \\log |x|)(1 + \\sum \\limits _{n=1}^\\infty p_n x^n) + c_2 (x ^{\\tilde{r}} (1 + \\sum \\limits _{n=1}^\\infty q_n x^n))$ .", "If $y(x)$ is a formal series then necessarily $ac_2=0$ (because of the term $\\log |x|$ ) and also $c_2=0$ in this case because $\\tilde{r}<0$ .", "Thus we get $y(x) = c_1 y_1(x)$ which is convergent.", "This shows that again we must have that $\\hat{y}_1$ and $\\hat{y}_2$ are multiples of $y_1$ and therefore they are linearly dependent, contradiction again.", "(ii) $0< r_1 - r_2 =N \\in \\mathbb {N}$ .", "This case follows from facts already used above.", "Since $r_1 > r_2$ and since each $r_j$ is a root of the indicial equation, we conclude that these are the roots of the indicial equation.", "By Frobenius theorem there is a basis of the solutions given by $y_1(x)= x^{r_1}(1+ \\sum \\limits _{n=1}^\\infty s_n x^n) $ and $y_2(x) =a y_1(x) \\log |x| + |x|^{r_2}(1+\\sum \\limits _{n=1}^\\infty t_n x^n)$ .", "If $y(x)=c_1 y_1(x) + c_2 y_2(x)$ is a formal power series then we must have $ac_2=0$ and $y(x)=c_1x^{r_1}(1+ \\sum \\limits _{n=1}^\\infty s_n x^n) +c_2 x^{r_2}(1+\\sum \\limits _{n=1}^\\infty t_n x^n)$ which is convergent.", "This shows that $\\hat{y}_1, \\hat{y}_2$ must be convergent.", "We are now in a position to give a second proof of Theorem REF .", "Indeed, from the second part of the proof (which is based only on classical methods of Frobenius and ODEs) we know that the origin is an ordinary point or a regular singular point of the ODE.", "Given the two linearly independent formal solutions $\\hat{y}_j(x),\\;j=1,2$ , from the above discussion, the solutions $\\hat{y}_1(x),\\hat{y}_2(x)$ are analytic." 
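The role played above by the indicial equation $r(r-1)+rb(0)+c(0)=0$ can be illustrated numerically. The following sketch is our own illustration (not part of the argument): it solves the indicial equation with the quadratic formula, using Bessel's equation of order 2, $x^2y^{\prime\prime}+xy^\prime+(x^2-4)y=0$, as a sample, for which $b(0)=1$, $c(0)=-4$ and the roots are $\pm 2$.

```python
import cmath

def indicial_roots(b0, c0):
    # Roots of the indicial equation r(r-1) + r*b0 + c0 = 0,
    # i.e. r^2 + (b0 - 1) r + c0 = 0, via the quadratic formula.
    disc = cmath.sqrt((b0 - 1.0) ** 2 - 4.0 * c0)
    r1 = (-(b0 - 1.0) - disc) / 2.0
    r2 = (-(b0 - 1.0) + disc) / 2.0
    return sorted([r1, r2], key=lambda r: r.real)

# Bessel's equation of order 2: b(0) = 1, c(0) = -4, roots -2 and 2;
# only the root 2 is a candidate order for a formal power-series solution.
roots = indicial_roots(1.0, -4.0)
```

Only non-negative integer roots can occur as the order of a formal power-series solution, which is exactly the dichotomy exploited in the discussion above.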
], [ "The wronskian III: some examples", "The next couple of examples show that the information on the wronskian (whether it is convergent, formal, etc.) is not enough to infer about the nature of the solutions.", "Example 2.1 (convergent wronskian but no formal solution) This is an example of an ODE with a convergent wronskian but admitting no non-trivial formal solution.", "$x^3y^{\\prime \\prime }-x^2y^\\prime -y=0.$ The origin is a non-regular singular point for (REF ).", "From what we have observed above the wronskian $W$ of two linearly independent solutions of (REF ) satisfies the following first order ODE $x^3w^\\prime -x^2w=0$ whose solution is of the form $W(x)=K\\exp \\big (\\int ^x \\frac{\\eta ^2}{\\eta ^3}d\\eta \\big )=K\\exp (\\log x)=Kx$ for some constant $K$ .", "Let us now check that there are no formal solutions besides the trivial one.", "Indeed, assume that $y(x)=\\displaystyle \\sum ^\\infty _{n=0}a_nx^n$ is a formal solution of (REF ).", "Then we have $x^3\\big (\\sum ^{\\infty }_{n=2}n(n-1)a_nx^{n-2}\\big )-x^2\\big (\\sum ^\\infty _{n=1}na_nx^{n-1}\\big )-\\sum ^\\infty _{n=0}a_nx^n=0$ $\\sum ^{\\infty }_{n=2}\\big ([n(n-1)-n]a_n-a_{n+1}\\big )x^{n+1}-a_1x^{2}-a_2x^2-a_1x-a_0=0$ so that $a_0=a_1=a_2=0$ and $(n^2-2n)a_n=a_{n+1}$ for every $n=2,3,\\ldots $ .", "Thus $a_n=0$ for every $n=0,1,2,\\ldots $ .", "Example 2.2 (non-convergent wronskian, no formal solution) We shall now give an example of an ODE with non-convergent wronskian and admitting no formal solution but the trivial one.", "The ODE $x^3y^{\\prime \\prime }-xy^\\prime -y=0$ has a non-regular singular point at the origin.", "Indeed, its wronskian is a solution of the first order ODE $x^3w^\\prime -xw=0$ which has solutions of the form $W(x)=K\\exp \\big (\\int ^x \\frac{\\eta }{\\eta ^3}d\\eta \\big )=K\\exp \\big (-\\frac{1}{x}\\big )$ where $K$ is a constant.", "Let us now check that (REF ) admits no non-trivial formal solutions.", "Assume that $y(x)=\\sum ^\\infty _{n=0}a_nx^n$ is a formal 
solution of (REF ).", "Then we must have $x^3\\big (\\sum ^{\\infty }_{n=2}n(n-1)a_nx^{n-2}\\big )-x\\big (\\sum ^\\infty _{n=1}na_nx^{n-1}\\big )-\\sum ^\\infty _{n=0}a_nx^n=0$ $\\sum ^{\\infty }_{n=3}\\big ((n-1)(n-2)a_{n-1}-(n+1)a_{n}\\big )x^{n}-2a_2x^{2}-a_2x^2-a_1x-a_1x-a_0=0$ and then $a_0=a_1=a_2=0$ and $(n-1)(n-2)a_{n-1}=(n+1)a_{n}$ for all $n=3,4,\\ldots $ .", "Hence $a_n=0$ for all $n=0,1,2,\\ldots $ ." ], [ "Characterization of regular singular points: proof of Theorem ", "We shall now prove Theorem REF .", "We shall first consider the complex analytic case.", "We start then with a complex analytic ODE of the form $a(z) u^{\\prime \\prime } + b(z) u ^\\prime + c(z) u=0$ .", "Let us assume that this equation admits two linearly independent solutions $u_1(z), u_2(z)$ , which are of autlos type in some neighborhood of the origin $z=0\\in \\mathbb {C}$ .", "The wronskian $W(u_1,u_2)(z)$ satisfies the first order ODE $a(z)w^\\prime +b(z)w=0$ and since it is given by $W(u_1,u_2)(z)=u_1^\\prime (z)u_2(z)-u_1(z)u_2^\\prime (z)$ , it is also of autlos type in some neighborhood of the origin $z=0\\in \\mathbb {C}$ .", "Using then the above first order ODE and arguments similar to those in the proof of Lemma REF we conclude that $b(z)/a(z)$ must have a pole of order $\\le 1$ at the origin, otherwise $W(u_1,u_2)(z)$ would have an essential singularity at the origin.", "Following now a similar reasoning in the proof of Claim REF in the second part of the proof of Theorem REF we conclude that $c(z)/a(z)$ must have a pole of order $\\le 2$ at the origin.", "This shows that the singularity at the origin is regular, or the origin is an ordinary point.", "If we start with a real analytic ODE then we consider its complexification.", "The fact that there are two linearly independent solutions of autlos type for the original ODE implies that there are two linearly independent solutions for the corresponding complex ODE, by definition these solutions will be of autlos 
type.", "Once we have concluded that the complex ODE has a regular singularity or an ordinary point at the origin, the same holds for the original real analytic ODE.", "Thus $(2)\\Rightarrow (3)$ .", "The classical Frobenius theorem shows that $(3)\\Rightarrow (1)$ .", "Finally, it is clear from the definitions that $(1)\\Rightarrow (2)$ .", "The next examples show how sharp the statement of Theorem REF is.", "Example 2.3 Consider the equation $z^3u^{\\prime \\prime }-zu^\\prime +u=0.$ The origin $z_0=0$ is a singular point, but not a regular singular point: indeed, dividing equation (REF ) by $z$ we obtain $z^2u^{\\prime \\prime }-u^\\prime +\\frac{1}{z}u=0$ , and the coefficient $-1$ of $u^\\prime $ does not have the form $zb(z)$ with $b$ holomorphic at 0.", "It is easy to see that $u_1=z$ is a solution of equation (REF ).", "Making use of the method of reduction of order we can construct a second solution $u_2$ linearly independent from $u_1$ .", "Hence we have that $u_2(z)=z\\int ^z\\big (\\frac{\\exp \\big (-\\int ^w \\frac{-v}{v^3}dv\\big )}{w^2}\\big )dw=z\\int ^z\\big (\\frac{\\exp \\big (\\int ^w \\frac{1}{v^2}dv\\big )}{w^2}\\big )dw $ $u_2(z)=z\\int ^z\\big (\\frac{\\exp \\big (-\\frac{1}{w}\\big )}{w^2}\\big )dw =z\\exp \\big (-\\frac{1}{z}\\big ).$ Note that $u_2$ is not holomorphic.", "Remark 2.2 Consider a second order differential equation of the form $z^3a(z)u^{\\prime \\prime }+z^2b(z)u^\\prime +c(z)u=0$ where $a,b,c$ are holomorphic at the origin with $a(0)\\ne 0$ and $c(0)\\ne 0$ .", "We shall see that (REF ) admits no non-trivial formal solution.", "Indeed, assume that $u(z)=\\displaystyle \\sum ^\\infty _{n=0}d_nz^n$ is a formal solution of (REF ); hence $z^3\\big (\\sum ^\\infty _{n=0}a_nz^n\\big )\\big (\\sum ^{\\infty }_{n=2}n(n-1)d_nz^{n-2}\\big )+z^2\\big (\\sum ^\\infty _{n=0}b_nz^n\\big )\\big (\\sum ^\\infty _{n=1}nd_nz^{n-1}\\big )+\\big (\\sum ^\\infty _{n=0}c_nz^n\\big )\\big (\\sum ^\\infty _{n=0}d_nz^n\\big )=0$ 
$\\big (\\sum ^\\infty _{n=0}a_nz^n\\big )\\big (\\sum ^{\\infty }_{n=3}(n-1)(n-2)d_{n-1}z^{n}\\big )+\\big (\\sum ^\\infty _{n=0}b_nz^n\\big )\\big (\\sum ^\\infty _{n=2}(n-1)d_{n-1}z^{n}\\big )+\\big (\\sum ^\\infty _{n=0}c_nz^n\\big )\\big (\\sum ^\\infty _{n=0}d_nz^n\\big )=0$ $\\sum ^\\infty _{n=0}\\tilde{a}_nz^n+\\sum ^\\infty _{n=0}\\tilde{b}_nz^n+\\sum ^\\infty _{n=0}\\tilde{c}_nz^n=0$ where $\\tilde{a}_0=\\tilde{a}_1=\\tilde{a}_2=0,\\;\\tilde{a}_n=\\sum ^{n}_{k=3}a_{n-k}(k-1)(k-2)d_{k-1}\\;\\;\\mbox{ for }n\\ge 3,$ $\\tilde{b}_0=\\tilde{b}_1=0,\\;\\tilde{b}_n=\\sum ^{n}_{k=2}b_{n-k}(k-1)d_{k-1}\\mbox{ for }n\\ge 2$ and $\\tilde{c}_n=\\sum ^{n}_{k=0}c_{n-k}d_k\\;\\;\\mbox{ for }n\\ge 0.$ Hence we have $\\tilde{a}_n+\\tilde{b}_n+\\tilde{c}_n=0,\\;\\;\\mbox{ for all }n\\ge 0.$ For $n=0$ we have: $c_0d_0=0$ and since $c_0\\ne 0$ then $d_0=0$ .", "For $n=1$ we have: $c_1d_0+c_0d_1=0$ and then $c_0d_1=0$ and since $c_0\\ne 0$ then $d_1=0$ .", "For $n=2$ we have: $b_0d_1+c_2d_0+c_1d_1+c_0d_2=0$ and then $c_0d_2=0$ and since $c_0\\ne 0$ then $d_2=0$ .", "For $n=3$ we have: $2a_0d_2+b_1d_1+2b_0d_2+c_3d_0+c_2d_1+c_1d_2+c_0d_3=0$ and then $c_0d_3=0$ and since $c_0\\ne 0$ then $d_3=0$ .", "For $n\\ge 4$ we have: $c_0d_n=-\\sum ^n_{k=4}[(k-1)(k-2)a_{n-k}+(k-1)b_{n-k}+c_{n-k+1}]d_{k-1}$ and since $c_0\\ne 0$ then we have $d_n=0$ for all $n\\ge 0$ .", "Hence there exists no non-trivial formal solution.", "Observe that the limit $\\displaystyle \\lim _{x\\rightarrow 0}\\frac{x^3b(x)}{x^3a(x)}=\\displaystyle \\lim _{x\\rightarrow 0}\\frac{b_0+b_1x+\\ldots }{a_0+a_1x+\\ldots }=\\frac{b_0}{a_0}$ exists, while the limit $\\displaystyle \\lim _{x\\rightarrow 0}\\frac{x^2c(x)}{x^3a(x)}=\\displaystyle \\lim _{x\\rightarrow 0}\\frac{c_0+c_1x+\\ldots }{a_0x+a_1x^2+\\ldots }$ does not exist.", "Remark 2.3 Consider a second order differential equation of the form $z^2a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ where $a,b,c$ are holomorphic at the origin with $a(0)\\ne 0$ , 
$b(0)\\ne 0$ and $c(0)\\ne 0$ .", "We shall see that (REF ) always admits a non-trivial formal solution.", "Indeed, assume that $u(z)=\\displaystyle \\sum ^\\infty _{n=0}d_nz^n$ is a formal solution of (REF ); hence $z^2\\big (\\sum ^\\infty _{n=0}a_nz^n\\big )\\big (\\sum ^{\\infty }_{n=2}n(n-1)d_nz^{n-2}\\big )+\\big (\\sum ^\\infty _{n=0}b_nz^n\\big )\\big (\\sum ^\\infty _{n=1}nd_nz^{n-1}\\big )+\\big (\\sum ^\\infty _{n=0}c_nz^n\\big )\\big (\\sum ^\\infty _{n=0}d_nz^n\\big )=0$ $\\big (\\sum ^\\infty _{n=0}a_nz^n\\big )\\big (\\sum ^{\\infty }_{n=2}n(n-1)d_nz^n\\big )+\\big (\\sum ^\\infty _{n=0}b_nz^n\\big )\\big (\\sum ^\\infty _{n=0}(n+1)d_{n+1}z^{n}\\big )+\\big (\\sum ^\\infty _{n=0}c_nz^n\\big )\\big (\\sum ^\\infty _{n=0}d_nz^n\\big )=0$ $\\sum ^\\infty _{n=0}\\tilde{a}_nz^n+\\sum ^\\infty _{n=0}\\tilde{b}_nz^n+\\sum ^\\infty _{n=0}\\tilde{c}_nz^n=0$ where $\\tilde{a}_0=\\tilde{a}_1=0,\\;\\tilde{a}_n=\\sum ^{n}_{k=2}a_{n-k}k(k-1)d_k\\;\\;\\mbox{ for }n\\ge 2,$ $\\tilde{b}_n=\\sum ^{n}_{k=0}b_{n-k}(k+1)d_{k+1}\\mbox{ for }n\\ge 0$ and $\\tilde{c}_n=\\sum ^{n}_{k=0}c_{n-k}d_k\\;\\;\\mbox{ for }n\\ge 0.$ Hence we have $\\tilde{a}_n+\\tilde{b}_n+\\tilde{c}_n=0,\\;\\;\\mbox{ for all }n\\ge 0.$ For $n=0$ we have: $b_0d_1+c_0d_0=0$ and since $b_0\\ne 0$ then $d_1=-\\frac{c_0d_0}{b_0}$ .", "For $n=1$ we have: $b_1d_1+2b_0d_2+c_1d_0+c_0d_1=0$ and then $2b_0d_2=-(b_1+c_0)d_1-c_1d_0$ and since $b_0\\ne 0$ then $d_2=-\\big (\\frac{b_1+c_0}{2b_0}\\big )\\big (\\frac{-c_0d_0}{b_0}\\big )-\\frac{c_1d_0}{2b_0}=\\frac{(b_1c_0+c_0^2-c_1b_0)d_0}{2b_0^2}$ .", "For $n\\ge 2$ we have: $b_0(n+1)d_{n+1}=-c_nd_0-\\sum ^{n}_{k=1}[k(k-1)a_{n-k}+kb_{n-k+1}+c_{n-k}]d_k$ and since $b_0\\ne 0$ we obtain $d_{n+1}=\\frac{1}{b_0(n+1)}\\big (-c_nd_0-\\sum ^{n}_{k=1}[k(k-1)a_{n-k}+kb_{n-k+1}+c_{n-k}]d_k\\big ).$ Observe that the coefficients of the series depend on $d_0$ ; since we look for non-trivial formal solutions it suffices to choose $d_0\\ne 0$ .", "Hence, there exists a non-trivial 
formal solution.", "Also note that the limit $\\displaystyle \\lim _{x\\rightarrow 0}\\frac{x^2c(x)}{x^2a(x)}=\\displaystyle \\lim _{x\\rightarrow 0}\\frac{c_0+c_1x+\\ldots }{a_0+a_1x+\\ldots }=\\frac{c_0}{a_0}$ exists, while the following limit is not finite: $\\displaystyle \\lim _{x\\rightarrow 0}\\frac{xb(x)}{x^2a(x)}=\\displaystyle \\lim _{x\\rightarrow 0}\\frac{b_0+b_1x+\\ldots }{a_0x+a_1x^2+\\ldots }.$ Example 2.4 Consider a second order differential equation given by $z^2u^{\\prime \\prime }+bu^\\prime +cu=0$ where $b$ and $c$ are nonzero constants.", "Observe that the origin is a non-regular singular point of (REF ).", "Next we shall see that there exist non-trivial formal solutions for (REF ).", "Let us assume that $u(z)=\\sum ^\\infty _{n=0}a_nz^n$ is a non-trivial formal solution of (REF ).", "Hence we have $z^2\\big (\\sum ^{\\infty }_{n=2}n(n-1)a_nz^{n-2}\\big )+b\\big (\\sum ^\\infty _{n=1}na_nz^{n-1}\\big )+c\\sum ^\\infty _{n=0}a_nz^n=0$ $(ca_0+ba_1)+(ca_1+2ba_2)z+\\sum ^{\\infty }_{n=2}\\big ([n(n-1)+c]a_n+b(n+1)a_{n+1}\\big )z^{n}=0$ and then $ca_0+ba_1=0$ , $ca_1+2ba_2=0$ and $(n^2-n+c)a_n+b(n+1)a_{n+1}=0$ for all $n=2,3,\\ldots $ .", "Since $b\\ne 0$ we have $a_1=-\\frac{ca_0}{b}$ , $a_2=-\\frac{ca_1}{2b}=\\frac{c^2a_0}{2b^2}$ and $a_{n+1}=-\\frac{(n^2-n+c)a_n}{b(n+1)},\\;\\;\\mbox{ for all }n=2,3,\\ldots .$ Observe that the coefficients of the series depend on $a_0$ ; since we look for non-trivial formal solutions it suffices to choose $a_0\\ne 0$ .", "Hence, there exists a non-trivial formal solution.", "Observe now that this formal solution is not convergent.", "Applying the ratio test to the expressions (REF ) and (REF ), we have that $ \\big |\\frac{a_{n+1}z^{n+1}}{a_nz^n}\\big |=\\big |\\frac{n^2-n+c}{b(n+1)}\\big |\\cdot | z|\\rightarrow \\infty ,$ when $n\\rightarrow \\infty $ , whenever $|z|\\ne 0$ .", "Hence, the series converges only for $z=0$ ." 
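The divergence established in Example 2.4 can also be observed numerically. The following is a minimal sketch with the illustrative choices $b=c=1$ and $a_0=1$ (our choices; the example allows any nonzero constants): the recurrence produces coefficients whose consecutive ratios $|a_{n+1}/a_n|=(n^2-n+1)/(n+1)$ grow without bound, so the radius of convergence is 0.

```python
# Coefficients of the formal solution in Example 2.4:
#   a_1 = -c a_0 / b,  a_2 = -c a_1 / (2b),
#   a_{n+1} = -(n^2 - n + c) a_n / (b (n+1))  for n >= 2.
# Illustrative parameter choices b = c = 1, a_0 = 1 (ours).
def formal_coeffs(N, b=1.0, c=1.0, a0=1.0):
    a = [a0, -c * a0 / b]
    a.append(-c * a[1] / (2.0 * b))
    for n in range(2, N):
        a.append(-(n * n - n + c) * a[n] / (b * (n + 1)))
    return a

a = formal_coeffs(40)
# |a_{n+1}/a_n| = (n^2 - n + 1)/(n + 1) grows roughly like n,
# matching the ratio-test computation in the example.
ratios = [abs(a[n + 1] / a[n]) for n in range(3, 39)]
```

The strictly increasing, unbounded ratios are exactly what the ratio test in the example predicts.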
], [ "Riccati model for a second order linear ODE", "We shall now exhibit a method of associating to a homogeneous linear second order ODE a Riccati differential equation.", "Consider a second order ODE given by $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ where $a,b,c$ are analytic functions, real or complex, of a variable $z$ real or complex, defined in a domain $U\\subset \\mathbb {R}$ or $\\mathbb {C}$ .", "According to [28] there is an integrable one-form $\\Omega =-a(z)ydx + a(z)xdy +[a(z)y^2+b(z)xy +c(z)x^2]dz$ that vanishes on the vector field corresponding to the reduction of order of the ODE, i.e., $\\Omega (X)=0$ where $X(x,y,z)=y\\frac{\\partial }{\\partial x}-\\big (\\frac{b(z)}{a(z)}y+\\frac{c(z)}{a(z)}x\\big )\\frac{\\partial }{\\partial y}+\\frac{\\partial }{\\partial z}.$ As a consequence the orbits of $X$ are tangent to the foliation ${\\mathcal {F}}_\\Omega $ given by the Pfaff equation $\\Omega =0$ .", "First of all we remark that we can write $\\Omega $ as follows $\\frac{\\Omega }{x^2}=a(z)d\\big (\\frac{y}{x}\\big ) +\\big [a(z)\\big (\\frac{y}{x}\\big )^2+b(z)\\big (\\frac{y}{x}\\big )+c(z)\\big ]dz.$ Thus, by introducing the variable $t=\\frac{y}{x}$ we see that the same foliation ${\\mathcal {F}}_\\Omega $ can be defined by the one-form $\\omega $ below: $\\omega =a(z)dt+[a(z)t^2+b(z)t+c(z)]dz.$", "In its turn $\\omega =0$ defines a Riccati foliation which writes as $\\frac{dt}{dz}=-\\frac{a(z)t^2+b(z)t+c(z)}{a(z)}.$ Definition 2.2 The Riccati differential equation above is called the Riccati model of the ODE     $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ .", "Remark 2.4 The Riccati model can be obtained in a less geometrically clear way by setting $t=u^\\prime /u$ as a new variable.", "Sometimes it is also useful to consider the change of variable $w=u/u^\\prime $ which leads to the Riccati equation $\\frac{dw}{dz}=\\frac{c(z) w^2 + b(z) w + a(z)}{a(z)}$ .
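The change of variable $t=u^\prime/u$ in Remark 2.4 can be sanity-checked numerically. The sketch below is our own illustration with a sample equation not taken from the text: for $u^{\prime\prime}+u=0$ we have $a=1$, $b=0$, $c=1$, the function $u(z)=\cos z$ is a solution, and the Riccati model predicts $dt/dz=-(t^2+1)$ for $t=u^\prime/u=-\tan z$.

```python
import math

# For u'' + u = 0 (a = 1, b = 0, c = 1 -- our sample equation),
# u(z) = cos z solves the ODE, so t = u'/u = -tan z should satisfy
# the Riccati model dt/dz = -(a t^2 + b t + c)/a = -(t^2 + 1).
def t(z):
    return -math.tan(z)

z0, h = 0.3, 1e-6
numeric_dt = (t(z0 + h) - t(z0 - h)) / (2.0 * h)  # central difference for t'(z0)
predicted_dt = -(t(z0) ** 2 + 1.0)                # value predicted by the Riccati model
```

Indeed $-\tan^\prime z=-\sec^2 z=-(\tan^2 z+1)$, so the two quantities agree up to discretization error.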
], [ "Holonomy of a second order equation", "It is well-known that a complex rational Riccati differential equation $\\frac{dy}{dx}=\\frac{a(x)y^2 + b(x)y + c(x)}{p(x)}$ induces in the complex surface $\\mathbb {P}^1 \\times \\mathbb {P}^1$ a foliation ${\\mathcal {F}}$ with singularities, having the following characteristics: The foliation has a finite number of invariant vertical lines $\\lbrace x_0\\rbrace \\times \\mathbb {P}^1$ .", "These lines are given by the zeroes of $p(x)$ and possibly by the line $\\lbrace \\infty \\rbrace \\times \\mathbb {P}^1$ .", "For each non-invariant vertical line $\\lbrace x_0\\rbrace \\times \\mathbb {P}^1$ the foliation has its leaves transverse to this line.", "From Ehresmann we conclude that the restriction of ${\\mathcal {F}}$ to $(\\mathbb {P}^1\\setminus \\sigma )\\times \\mathbb {P}^1$ , where $\\sigma \\times \\mathbb {P}^1$ is the set of invariant vertical lines, is a foliation transverse to the fibers of the fiber space $\\mathbb {P}^1\\times \\mathbb {P}^1 \\rightarrow \\mathbb {P}^1$ with fiber $\\mathbb {P}^1$ and projection given by $\\pi (x,y)=x$ .", "The restriction $\\pi \\big |_{L}$ of the projection to each leaf $L$ of the Riccati foliation defines a covering map $L\\rightarrow \\mathbb {P}^1 \\setminus \\sigma $ .", "In particular, there is a global holonomy map which is defined as follows: choose any point $x_0 \\notin \\sigma $ as base point and consider the lifting of the closed paths $\\gamma \\in \\pi _1(\\mathbb {P}^1 \\setminus \\sigma )$ to each leaf $L\\in {\\mathcal {F}}$ by the restriction $\\pi \\big |_{L}$ above.", "Denote the lift of $\\gamma $ starting at the point $(x_0,z) \\in \\lbrace x_0\\rbrace \\times \\mathbb {P}^1$ by $\\tilde{\\gamma }_z$ .", "If the end point of $\\tilde{\\gamma }_z$ is denoted by $(x_0,h_\\gamma (z))$ then the map $z \\mapsto h_\\gamma (z)$ depends only on the homotopy class of $\\gamma \\in \\pi _1(\\mathbb {P}^1 \\setminus \\sigma )$ .", "Moreover, this defines a 
complex analytic diffeomorphism $h_{[\\gamma ]}\\in \\mbox{Diff}(\\mathbb {P}^1)$ and the map $\\pi _1(\\mathbb {P}^1\\setminus \\sigma ) \\rightarrow \\mbox{Diff}(\\mathbb {P}^1), \\, [\\gamma ] \\mapsto h_{[\\gamma ]}$ is a group homomorphism.", "The image is called the global holonomy of the Riccati equation.", "It is well known from the theory of foliations transverse to fiber spaces that the global holonomy classifies the foliation up to fibered conjugacy ([8]).", "This will be useful to us in what follows.", "Let us start by observing that $\\mbox{Diff}(\\mathbb {P}^1)$ as meant above is the projectivization of the special linear group, i.e., $\\mbox{Diff}(\\mathbb {P}^1) = \\mathbb {P}SL(2,\\mathbb {C})$ , meaning that every global holonomy map can be represented by a Moebius map $T(z)= \\frac{a_1 z + a_2}{a_3 z + a_4}$ where $a_1, a_2, a_3, a_4\\in \\mathbb {C}$ and $a_1 a_4 - a_2 a_3=1$ .", "Thus the global holonomy group of a Riccati foliation identifies with a group of Moebius maps.", "Definition 2.3 (holonomy of a second order ODE) Given a linear homogeneous second order ODE with complex polynomial coefficients $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ we call the holonomy of the ODE the global holonomy group of the corresponding Riccati model.", "Remark 2.5 As we have seen above we can also obtain a Riccati model by any of the changes of variables $t=u^\\prime /u$ or $w=u/u^\\prime $ .", "From the viewpoint of ODEs these models may seem distinct.", "Nevertheless, they differ only up to the change of coordinates $t=1/w$ .", "Moreover, both have the same global holonomy group, since the point at infinity is always considered in the definition of global holonomy group.", "Indeed, the ideal space for considering a Riccati equation from the geometrical viewpoint is the space $\\mathbb {C} \\times \\mathbb {C}$ .", "Next we see a concrete example of the global holonomy group of a second order ODE: Example 2.5 Consider the equation given by $z^2u^{\\prime 
\\prime }+u=0.$ From what we observed above, using the change of variable $t=u/u^\\prime $ from Remark REF , we have that for $a(z)=z^2$ , $b(z)=0$ and $c(z)=1$ there exists a Riccati equation given by $\\frac{dt}{dz}=\\frac{z^2+t^2}{z^2}.$ Observe that equation (REF ) is homogeneous, therefore by the change of coordinates $w=\\frac{t}{z}$ we have $z\\frac{dw}{dz}+w=1+w^2$ $\\frac{dw}{dz}=\\frac{w^2-w+1}{z}$ the last equation has separated variables and therefore we have $\\frac{dw}{w^2-w+1}=\\frac{dz}{z}$ $\\frac{i}{\\sqrt{3}}\\frac{dw}{w-\\frac{1-i\\sqrt{3}}{2}}-\\frac{i}{\\sqrt{3}}\\frac{dw}{w-\\frac{1+i\\sqrt{3}}{2}}=\\frac{dz}{z} $ $d\\big ( \\frac{i}{\\sqrt{3}} \\log \\big (\\frac{w-\\frac{1-i\\sqrt{3}}{2}}{w-\\frac{1+i\\sqrt{3}}{2}}\\big )\\big )=d(\\log z)$ and then $\\big (\\frac{w-\\frac{1-i\\sqrt{3}}{2}}{w-\\frac{1+i\\sqrt{3}}{2}}\\big )^{i/\\sqrt{3}}=Kz$ where $K$ is constant.", "Hence $\\frac{2t-(1-i\\sqrt{3})z}{2t-(1+i\\sqrt{3})z}=\\tilde{K}z^{-i\\sqrt{3}}$ where $\\tilde{K}$ is constant.", "Therefore $t=\\frac{(1-i\\sqrt{3})z-\\tilde{K}(1+i\\sqrt{3})z^{1-i\\sqrt{3}}}{2-2\\tilde{K}z^{-i\\sqrt{3}}}.$ Let us now compute the global holonomy at the base point $z_0\\ne 0$ .", "For this sake we take the loop $z(\\theta )=z_0e^{i\\theta }$ with $0\\le \\theta \\le 2\\pi $ .", "For $\\theta =0$ we have that $\\tilde{K}=z_0^{i\\sqrt{3}}\\frac{2t_0-(1-i\\sqrt{3})z_0}{2t_0-(1+i\\sqrt{3})z_0}$ and then we obtain that $h(t_0)=t(z(2\\pi ))=\\frac{(1-i\\sqrt{3})z_0-\\tilde{K}(1+i\\sqrt{3})z_0^{1-i\\sqrt{3}}e^{2\\pi \\sqrt{3}}}{2-2\\tilde{K}z_0^{-i\\sqrt{3}}e^{2\\pi \\sqrt{3}}}$ and replacing $\\tilde{K}$ we get $h(t_0)=t(z(2\\pi ))=z_0\\frac{t_0(1-i\\sqrt{3})-2z_0-\\big (t_0(1+i\\sqrt{3})-2z_0\\big )e^{2\\pi \\sqrt{3}}}{2t_0-(1+i\\sqrt{3})z_0-\\big (2t_0-(1-i\\sqrt{3})z_0\\big )e^{2\\pi \\sqrt{3}}}.$"
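The identification $\mbox{Diff}(\mathbb{P}^1)=\mathbb{P}SL(2,\mathbb{C})$ used above can be sketched in code: a Moebius map $T(z)=(a_1z+a_2)/(a_3z+a_4)$ corresponds to the matrix of its coefficients, and composition of maps corresponds to matrix multiplication, which is why the holonomy representation is a group homomorphism. The maps $S$ and $T$ below are arbitrary illustrative elements of our choosing, not holonomy maps computed from any ODE in the text.

```python
# Moebius map T(z) = (a1 z + a2)/(a3 z + a4) encoded as ((a1, a2), (a3, a4)).
def moebius(m, z):
    (a1, a2), (a3, a4) = m
    return (a1 * z + a2) / (a3 * z + a4)

# Composition of Moebius maps corresponds to multiplication of matrices.
def mat_mul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

S = ((1.0, 1.0), (0.0, 1.0))   # z -> z + 1, determinant 1
T = ((2.0, 0.0), (0.0, 0.5))   # z -> 4z,    determinant 1
z = 0.7 + 0.2j
composed = moebius(S, moebius(T, z))     # apply T first, then S
via_product = moebius(mat_mul(S, T), z)  # same map via the product matrix
```

Since a matrix and its negative define the same map, the group acting on $\mathbb{P}^1$ is $SL(2,\mathbb{C})$ modulo $\pm\mathrm{Id}$, i.e. $\mathbb{P}SL(2,\mathbb{C})$.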
point $x=\\infty , y=0$ .", "Then we may write ${\\mathcal {F}}$ as given by a polynomial differential equation $\\frac{dy}{dx}=a(x)y^2 + b(x)y + c(x)$ .", "The global holonomy of ${\\mathcal {F}}$ is given by a homomorphism $\\phi \\colon \\pi _1(\\mathbb {P}^1\\setminus \\sigma )\\rightarrow \\mbox{Diff}(\\mathbb {P}^1)$ .", "Since $\\sigma $ is a single point, $\\mathbb {P}^1\\setminus \\sigma =\\mathbb {C}$ is simply-connected and therefore the global holonomy is trivial.", "By the classification of foliations transverse to fibrations ([8] Chapter V) there is a fibered biholomorphic map $\\Phi \\colon \\mathbb {C} \\times \\mathbb {P}^1 \\rightarrow \\mathbb {C} \\times \\mathbb {P}^1$ that takes the foliation ${\\mathcal {F}}$ into the foliation $\\mathcal {H} $ given by the horizontal fibers $\\mathbb {C} \\times \\lbrace y\\rbrace , y \\in \\mathbb {P}^1$ .", "Lemma 2.4 A holomorphic diffeomorphism $\\Phi \\colon \\mathbb {C} \\times \\mathbb {P}^1 \\rightarrow \\mathbb {C} \\times \\mathbb {P}^1$ preserving the vertical fibration writes in affine coordinates $(x,y)\\in \\mathbb {C}^2 \\subset \\mathbb {C} \\times \\mathbb {P}^1$ as $\\Phi (x,y)=\\big ( Ax+B, \\frac{a(x)y+ b(x)}{c(x)y + d(x)}\\big )$ where $a,b,c,d $ are entire functions satisfying $ad-bc=1$ , $0 \\ne A,B \\in \\mathbb {C}$ .", "Picard's theorem and the fact that $\\Phi $ preserves the fibration $x=const$ show that it is of the form $\\Phi (x,y)=(f(x),g(x,y))$ where $f(x)=Ax+B$ is an affine map.", "Finally, for each fixed $x\\in \\mathbb {C}$ the map $\\mathbb {P}^1 \\ni y \\mapsto g(x,y) \\in \\mathbb {P}^1$ is a diffeomorphism so it must write as $g(x,y)=\\frac{a(x)y +b(x)}{c(x)y + d(x)}$ for some entire functions $a,b,c,d$ satisfying $ad - bc=1$ .", "In particular we conclude that the leaves of ${\\mathcal {F}}$ are diffeomorphic to $\\mathbb {C}$ (including the one contained in the invariant fiber $\\lbrace \\infty \\rbrace \\times \\mathbb {P}^1$ ), and ${\\mathcal {F}}$ 
admits a holomorphic first integral $g\\colon \\mathbb {C} \\times \\mathbb {P}^1 \\rightarrow \\mathbb {P}^1$ of the above form $g(x,y)=\\frac{a(x)y +b(x)}{c(x)y + d(x)}$ .", "Let us now apply this to our framework of second order linear ODEs.", "Beginning with the ODE $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ the Riccati model is $\\frac{dt}{dz}=-\\frac{a(z)t^2+b(z)t+c(z)}{a(z)}.$ Thus if we assume that $a(z)=1$ then we have for this Riccati equation that $\\sigma =\\lbrace \\infty \\rbrace $ as considered above.", "This implies that ${\\mathcal {F}}$ admits a holomorphic first integral $g\\colon \\mathbb {C} \\times \\mathbb {P}^1 \\rightarrow \\mathbb {P}^1$ of the above form $g(z,t)=\\frac{A(z)t +B(z)}{C(z)t + D(z)}$ .", "Given a leaf $L$ of the Riccati foliation there is a constant $\\ell \\in \\mathbb {P}^1$ such that $g(z,t)=\\ell $ for all $(t,z)\\in L$ .", "Hence $t=\\frac{\\ell D(z) - B(z)}{A(z) - \\ell C(z)}$ for all $(t,z)\\in L$ .", "This defines a meromorphic parametrization $z\\mapsto t(z)$ of the leaf.", "Since $t=\\frac{y}{x}=\\frac{u^\\prime }{u}$ , the function $u(z)=k \\exp \\big (\\int _0^z t(\\xi )d\\xi \\big )$ is a solution of the ODE, where $k \\in \\mathbb {C}$ is a constant.", "This gives $u_{\\ell ,k}(z)=k \\exp \\big (\\int _0^z \\frac{\\ell D(\\xi ) - B(\\xi )}{A(\\xi ) - \\ell C(\\xi )}d\\xi \\big ), \\, k , \\ell \\in \\mathbb {C};$ as the general solution of the original ODE.", "Notice that $\\frac{u^\\prime _{\\ell ,k}(z)}{u_{\\ell ,k}(z)}=\\frac{\\ell D(z) - B(z)}{A(z) - \\ell C(z)}$ so that if $\\ell _1 \\ne \\ell _2$ then the corresponding solutions $u_{\\ell _1,k_1}$ and $u_{\\ell _2,k_2}$ have nonzero Wronskian, and therefore they are linearly independent solutions for all $k_1 \\ne 0 \\ne k_2$ .", "Next we investigate the case where $\\sigma $ consists of two points.", "In this case the holonomy group of the ODE is cyclic, generated by a single Moebius map.", "A first (regular singularity type) example is given below:
Example 2.6 (Bessel equation) Consider the complex Bessel equation given by $z^2u^{\\prime \\prime }+zu^\\prime +(z^2-\\nu ^2)u=0$ where $z,\\nu \\in \\mathbb {C}$ .", "Since $a(z)=z^2$ , $b(z)=z$ and $c(z)=z^2-\\nu ^2$ the corresponding Riccati model is $\\frac{dt}{dz}=-\\frac{z^2t^2+zt+z^2-\\nu ^2}{z^2}$ If we change coordinates to $w=\\frac{1}{z}$ then we obtain a Riccati equation of the form $\\frac{dt}{dw}=\\frac{t^2+tw+1-\\nu ^2w^2}{w^2}.$ A non-regular singularity example is given below: Example 2.7 Let us consider the following polynomial ODE $z^n u^{\\prime \\prime } + b(z) u ^\\prime + c(z) u =0.$ If $n \\ge 2$ and $b(0)\\ne 0$ , or if $n \\ge 3$ and $c(0)\\ne 0$ or $b(0)\\cdot b^\\prime (0)\\ne 0$ , then $z=0$ is a non-regular singular point.", "Let us assume that this is the case.", "The corresponding Riccati equation is $\\frac{dt}{dz}=-\\frac{z^nt^2 + b(z)t + c(z)}{z^n}.$ Changing coordinates $w=1/z$ we obtain $\\frac{dt}{dw}=\\frac{ t^2 + w^n b(1/w) t + w^n c(1/w)}{w^2}=\\frac{w^kt^2 + \\tilde{b}(w) t + \\tilde{c}(w)}{w^{2 + k}}$ for some polynomials $\\tilde{b} (w), \\tilde{c}(w)$ and some $ k \\in \\mathbb {N}$ .", "This shows that the ramification set $\\sigma \\subset \\mathbb {P}^1$ consists of the points $z=0$ and $z=\\infty $ .", "The fundamental group of the base $\\mathbb {P}^1 \\setminus \\sigma $ is therefore cyclic, generated by a single homotopy class.", "The holonomy of the ODE is then generated by a single Moebius map.", "The following is an example with a holonomy group generated by two Moebius maps.", "Example 2.8 (Legendre equation) Consider the equation of Legendre given by $(1-z^2)u^{\\prime \\prime }-2zu^\\prime +\\alpha (\\alpha +1)u=0$ where $\\alpha \\in \\mathbb {C}$ .", "From what we observed above we have that for $a(z)=1-z^2$ , $b(z)=-2z$ and $c(z)=\\alpha (\\alpha +1)$ there exists a Riccati equation given by $\\frac{dt}{dz}=-\\frac{(1-z^2)t^2-2zt+\\alpha (\\alpha +1)}{1-z^2}.$ Putting $w=\\frac{1}{z}$ we have
$\\frac{dt}{dw}=\\frac{(w^2-1)t^2-2wt+\\alpha (\\alpha +1)w^2}{w^2(w^2-1)}.$ Example 2.9 (a Riccati equation with a formal solution) Let us consider the following Riccati equation $\\frac{du}{dz}=-\\frac{zu^2+u+z}{z}$ which was obtained from the Bessel equation (Example REF ) for $\\nu =0$ .", "Rewriting this equation we have $zu^\\prime (z)=-zu^2(z)-u(z)-z.$ Claim 2.4 Equation (REF ) admits a non-trivial formal solution.", "Indeed, let us assume that $u(z)=\\sum ^\\infty _{n=0}a_nz^n$ is a formal solution of (REF ).", "Hence $\\sum ^{\\infty }_{n=1}na_nz^{n}=-z\\big (\\sum ^\\infty _{n=0}c_nz^n\\big )-\\sum ^\\infty _{n=0}a_nz^n-z$ where $c_n=\\sum ^{n}_{j=0}a_{n-j}a_j$ .", "Thus we have $\\sum ^{\\infty }_{n=2}[(n+1)a_n+c_{n-1}]z^{n}+(2a_1+c_0+1)z+a_0=0$ then $a_0=0$ , $2a_1+c_0+1=0$ and $(n+1)a_n+c_{n-1}=0$ for all $n=2,3,\\ldots $ .", "Since $c_0=a_0^2=0$ we obtain $a_1=-\\frac{1}{2}$ , and the relations $(n+1)a_n=-c_{n-1}$ determine all the remaining coefficients recursively; hence (REF ) admits a unique formal solution, and it is non-trivial.", "The corresponding ODE may be written as $zu^{\\prime \\prime } + u^\\prime +zu=0$ and in the Euler form it is $z^2u^{\\prime \\prime } + zu ^\\prime + z^2u=0$ .", "This last has indicial equation $r(r-1) +r=0$ which gives $r=0$ as its only root.", "Then Frobenius theorem assures the existence of a solution of the form $u(z)=z^0 \\sum \\limits _{n=0}^\\infty a_n z^n$ .", "Since such a solution satisfies $u(0)\\ne 0$ , the quotient $u^\\prime /u$ is holomorphic at the origin and its Taylor series is precisely the formal solution obtained above.", "Remark 2.6 Consider a second order ordinary differential equation given by $a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ where $a,b,c$ are polynomials.", "We shall see that (REF ) is invariant under Moebius transformations.", "Indeed, consider the change of coordinates $z=\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }$ with $\\alpha \\delta -\\beta \\gamma =1$ and set $\\tilde{\\varphi }(w)=\\varphi \\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )$ where $\\varphi $ is a solution of (REF ).", "Taking derivatives we have $\\varphi ^\\prime \\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=(\\gamma w+\\delta )^2\\tilde{\\varphi }^\\prime (w)$ $\\varphi ^{\\prime \\prime }\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=(\\gamma w+\\delta )^4\\tilde{\\varphi }^{\\prime \\prime }(w)+2\\gamma (\\gamma w+\\delta )^3\\tilde{\\varphi }^\\prime (w).$ Since $\\varphi $ is a solution, by (REF ) $a\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )\\varphi ^{\\prime \\prime }\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )+b\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )\\varphi ^\\prime \\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )+c\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )\\varphi \\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=0.$ Given that $a,b,c$ are polynomials we have $a\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=\\frac{\\tilde{a}(w)}{(\\gamma w+\\delta )^n},\\;\\;b\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=\\frac{\\tilde{b}(w)}{(\\gamma w+\\delta )^m},\\;\\;c\\big (\\frac{\\alpha w+\\beta }{\\gamma w+\\delta }\\big )=\\frac{\\tilde{c}(w)}{(\\gamma w+\\delta )^p}$ where $\\tilde{a},\\tilde{b},\\tilde{c}$ are polynomials and $n,m,p\\in \\mathbb {N}$ .", "Hence back to equation (REF ) we obtain $(\\gamma w+\\delta )^{m+p+4}\\tilde{a}(w)\\tilde{\\varphi }^{\\prime \\prime }(w)+[2\\gamma (\\gamma w+\\delta )^{m+p+3} \\tilde{a}(w)+(\\gamma w+\\delta )^{n+p+2}\\tilde{b}(w)]\\tilde{\\varphi }^{\\prime }(w)+(\\gamma w+\\delta )^{m+n}\\tilde{c}(w)\\tilde{\\varphi }(w)=0 $ Hence $\\tilde{\\varphi }$ satisfies the equation $\\hat{a}(w)u^{\\prime \\prime }(w)+\\hat{b}(w)u^{\\prime }(w)+\\hat{c}(w)u(w)=0.$ where $\\hat{a}(w)=(\\gamma w+\\delta )^{m+p+4}\\tilde{a}(w),\\;\\;\\;\\hat{b}(w)=2\\gamma (\\gamma w+\\delta )^{m+p+3} \\tilde{a}(w)+(\\gamma w+\\delta )^{n+p+2}\\tilde{b}(w)$ and $\\hat{c}(w)=(\\gamma w+\\delta )^{m+n}\\tilde{c}(w)$ are polynomials.", "Hence equation (REF ) is transformed by a Moebius map into equation (REF ).", "Example 2.10 Consider the equation given by $u^{\\prime \\prime }-zu^\\prime -u=0.$ From what we have observed above we know that for
$a(z)=1$ , $b(z)=-z$ and $c(z)=-1$ there exists a Riccati equation given by $\\frac{dt}{dz}=1+zt-t^2.$ It is not difficult to see that $t=z$ is a solution of this Riccati equation.", "Writing $t=s+z$ for a solution of (REF ) we obtain the Bernoulli equation $\\frac{ds}{dz}=-sz-s^2$ and by the change of coordinates $v=s^{-1}$ in equation (REF ) we obtain the equation $\\frac{dv}{dz}=vz+1.$", "Thus $v$ is of the form $v=\\exp \\big (\\frac{z^2}{2}\\big )\\big (A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta \\big )$ where $A$ is a constant.", "Hence $ s=\\frac{\\exp \\big (-\\frac{z^2}{2}\\big )}{A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta }$ and consequently $t=\\frac{\\exp \\big (-\\frac{z^2}{2}\\big )}{A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta }+z.$ In the construction of the Riccati equation associated with (REF ) we have set $t=\\frac{u^\\prime }{u}$ where $u$ is a solution of (REF ).", "Hence we have $\\frac{u^\\prime }{u}=\\frac{\\exp \\big (-\\frac{z^2}{2}\\big )}{A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta }+z$ that may be written as $(\\log u)^\\prime =\\big (\\log \\big (A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta \\big )+\\frac{z^2}{2} \\big )^\\prime .$ Hence $u(z)=K\\exp \\big (\\frac{z^2}{2}\\big ) \\big (A+\\int ^z \\exp \\big (-\\frac{\\eta ^2}{2}\\big )d\\eta \\big )$ where $K$ is a constant.", "It is a straightforward computation to show that $u$ is a solution of (REF )."
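, "The computations in Example 2.10 can be checked symbolically; the following sketch (assuming the sympy library is available) verifies both that $t=z$ solves the Riccati model and that the closed-form $u$ solves the original equation:

```python
import sympy as sp

z, s, A, K = sp.symbols('z s A K')

# t = z solves the Riccati model dt/dz = 1 + z*t - t**2
t = z
assert sp.simplify(sp.diff(t, z) - (1 + z*t - t**2)) == 0

# u = K*exp(z**2/2)*(A + int_0^z exp(-s**2/2) ds) solves u'' - z*u' - u = 0
u = K*sp.exp(z**2/2)*(A + sp.Integral(sp.exp(-s**2/2), (s, 0, z)))
residual = sp.diff(u, z, 2) - z*sp.diff(u, z) - u
assert sp.simplify(sp.expand(residual)) == 0
```

Here the unevaluated integral is differentiated via the fundamental theorem of calculus, exactly as in the computation above."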
], [ "Examples and counterexamples", "We start with an example.", "Example 2.11 Consider the equation $x^2y^{\\prime \\prime }-y^\\prime -\\frac{1}{2}y=0.$ The origin $x_0=0$ is a singular point, but it is not a regular singular point, since the coefficient $-1$ of $y^\\prime $ does not have the form $xb(x)$ , where $b$ is analytic at 0.", "Nevertheless, we can formally solve this equation by power series $\\sum ^{\\infty }_{k=0} a_kx^k$ , where the coefficients $a_k$ satisfy the following recurrence formula $(k+1)a_{k+1}=\\big [k^2-k-\\frac{1}{2}\\big ]a_k,\\;\\;\\;\\mbox{ for every }k=0,1,2,\\ldots .$ If $a_0\\ne 0$ , applying the ratio test to this expression we have that $ \\big |\\frac{a_{k+1}x^{k+1}}{a_kx^k}\\big |=\\big |\\frac{k^2-k-\\frac{1}{2}}{k+1}\\big |\\cdot |x|\\rightarrow \\infty ,$ when $k\\rightarrow \\infty $ , provided that $|x|\\ne 0$ .", "Hence, the series converges only for $x=0$ , and therefore does not represent a function in a neighborhood of $x=0$ ." ], [ "Liouvillian solutions", "In this section we shall refer to the notion of Liouvillian function as introduced in [32].", "We stress the fact that the base field is the field of rational functions.", "Thus a Liouvillian function of $n$ complex variables $x_1,\\ldots ,x_n$ will be a function belonging to a Liouvillian tower of differential extensions $k_0\\subset k_1\\subset \\cdots \\subset k_r$ starting with the field $k_0$ of rational functions $k_0=\\mathbb {C}(x_1,\\ldots ,x_n)$ equipped with the partial derivatives $\\frac{\\partial }{\\partial x_j}$ .", "Recall that a Liouvillian function is always holomorphic in some Zariski open subset of the space $\\mathbb {C}^n$ .", "Nevertheless, it may have several branches.", "Let us denote by $Dom(F)\\subset \\mathbb {C}^n$ the domain of $F$ , i.e., the largest open subset where $F$ has local holomorphic branches.", "This allows the following definition: Definition 2.4 (Liouvillian solution, Liouvillian first integral, Liouvillian relation)
Given an equation $a(z) u^{\\prime \\prime } + b(z) u^\\prime + c(z) u =0$ , a Liouvillian function $u(z)$ of the variable $z$ will be called a solution of the ODE if we have $a(z) u^{\\prime \\prime } + b(z) u^\\prime + c(z) u=0$ in some nonempty open subset where $u(z)$ is holomorphic.", "A Liouvillian function $F(x_1,x_2,x_3)$ of three variables will be called a first integral of the ODE if given any local solution $u_0(z)$ of the ODE, defined for $z$ in a disc $D(z_0,r)\\subset \\mathbb {C}$ , we have that $F(z,u_0(z),u_0^\\prime (z))$ is constant for $|z-z_0|<r$ provided that $(z,u_0(z),u_0^\\prime (z))\\in Dom(F)$ for all $z \\in D(z_0,r)$ .", "Similarly we shall say that a solution $u_0(z)$ of the ODE, defined for $z \\in Dom(u_0)\\subset \\mathbb {C}$ , satisfies a Liouvillian relation if there is a Liouvillian function $F(x_1,x_2,x_3)$ such that $\\lbrace (z,u_0(z),u_0^\\prime (z))\\in \\mathbb {C}^3, z \\in Dom(u_0)\\rbrace \\cap Dom(F) \\ne \\emptyset $ and $F(z,u_0(z),u_0^\\prime (z))=0$ in some dense open subset of $Dom(u_0)$ .", "Let us recall a couple of classical results: Theorem 2.6 (Singer, [32]) Assume that the polynomial first order ODE $\\frac{dx}{dz}=P(x,y), \\, \\frac{dy}{dz}=Q(x,y)$ admits a Liouvillian first integral.", "Then there are rational functions $U(x,y), \\, V(x,y)$ such that $\\frac{\\partial U}{\\partial y}=\\frac{\\partial V}{\\partial x}$ and the differential form $Q(x,y)dx - P(x,y)dy$ admits the integrating factor $R(x,y)=\\exp \\big [\\int _{(x_0,y_0)}^{(x,y)}U(x,y) dx + V(x,y)dy\\big ]$ .", "Theorem 2.7 (Rosenlicht, Singer) Let $p(z), q(z)$ be Liouvillian functions and $L(y)=y^{\\prime \\prime } + p(z) y ^\\prime + q(z)y$ .", "If $L(y)=0$ has a Liouvillian first integral then all solutions are Liouvillian.", "If $L(y)=0$ has a nontrivial Liouvillian solution, then this equation has a Liouvillian first integral.", "Example 2.12 (Bernoulli ODEs) Recall that a Bernoulli differential equation of power 1 is one of the form
$\\frac{dy}{dx}=\\frac{a_1(x)y +a_2(x) y^2}{p(x)}$ .", "If we perform a change of variables as $(x,y) \\mapsto (x,y^{k})$ then we obtain an equation of the form $\\frac{dy}{dx}= \\frac{ y^{k+1} a(x) + y b(x)}{p(x)}$ which will be called a Bernoulli equation of power $k$ .", "We prove the existence of a Liouvillian first integral for the corresponding foliation $\\Omega =0$ , where $\\Omega = p(x)dy - (y^{k+1}a(x)+yb(x))dx$ .", "First we observe that $\\Omega =0$ can be given by $k \\frac{\\Omega }{p(x)y^{k+1}} = k \\frac{dy}{y^{k+1}} -k \\big (\\frac{a(x)}{p(x)} + \\frac{b(x)}{p(x)y^{k}}\\big )dx = 0.$ Let now $f(x)$ be such that $\\frac{f^\\prime (x)}{f(x)} = -k\\frac{b(x)}{p(x)}$ and let $g(x)$ be such that $g^\\prime (x) =-\\frac{a(x)}{p(x)f(x)}\\cdot k.$ Then $\\Omega =0$ can be given by $k\\frac{dy}{y^{k+1}} - k \\frac{a(x)}{p(x)}dx +\\frac{f^\\prime (x)}{y^{k}f(x)}\\, dx = 0.$ Therefore $F(x,y) = g(x)- \\frac{1}{f(x) y^{k}}$ defines a first integral for $\\Omega =0$ which is clearly of Liouvillian type.", "Before proving Theorem REF we shall need a lemma:", "Lemma 2.5 Let $\\frac{dy}{dx}=\\frac{c(x) y^2 + b(x) y + a(x)}{a(x)}$ be a rational Riccati ODE, where $a(x), b(x), c(x)$ are complex polynomials.", "Assume that there is a Liouvillian first integral.", "Then we have the following possibilities: The equation is linear of the form $a(x)y^\\prime - b(x)y=a(x)$ .", "Up to a rational change of coordinates of the form $Y= y-A(x)/B(x)$ , the equation is a Bernoulli equation $\\frac{dY}{dx}= \\frac{\\tilde{c}(x) Y^2 + \\tilde{b}(x) Y}{\\tilde{a}(x)}$ .", "Let $\\Omega =(c(x)y^2 + b(x)y + a(x))dx - a(x)dy$ .", "The ODE is equivalent to $\\Omega =0$ .", "According to Singer [32] (Theorem REF above) there is a rational 1-form $\\eta =U(x,y) dx + V(x,y)dy$ such that $d \\eta =(\\frac{\\partial U}{\\partial y}- \\frac{\\partial V}{\\partial x})dy \\wedge dx=0$ and $\\exp (\\int \\eta )$ is an integrating factor for $\\Omega $ .", "This means that $d (\\Omega /\\exp (\\int \\eta ))=0$ and therefore $d \\Omega = \\eta
\\wedge \\Omega $ .", "Case 1.", "$\\Omega =0$ admits some invariant algebraic curve which is not a vertical line $x=c\\in \\mathbb {C}$ .", "In this case we may choose an irreducible polynomial $f(x,y)$ such that $f(x,y)=0$ describes this non-vertical algebraic solution.", "Now we observe that the leaves of the Riccati foliation defined by $\\Omega =0$ on $\\mathbb {P}^1 \\times \\mathbb {P}^1$ are, except for those contained in the invariant vertical fibers, all transverse to the vertical fibers $\\lbrace x\\rbrace \\times \\mathbb {P}^1 \\subset \\mathbb {P}^1 \\times \\mathbb {P}^1$ .", "Thus we conclude that $\\frac{\\partial f}{\\partial y}(x,y)$ never vanishes for each $x$ such that $a(x) \\ne 0$ .", "Since $f(x,y)$ is polynomial, this implies that $f(x,y)=A(x) - B(x)y$ for some polynomials $A(x), B(x)$ : Look at the function $f_y=\\frac{\\partial f}{\\partial y}(x,y)$ .", "This function is constant along almost all the fibers of $x\\colon \\mathbb {C}^2 \\rightarrow \\mathbb {C}$ .", "Therefore we must have that $d f_y \\wedge dx=0$ almost everywhere.", "Thus $df_y \\wedge dx=0$ everywhere and this gives $f_{yy}=\\frac{\\partial ^2 f}{\\partial y^2}=0$ .", "Hence $f(x,y)=A(x)- B(x)y$ by standard integration.", "This shows that the non-vertical solution is a graph of the form $y(x)=\\frac{A(x)}{B(x)}$ .", "In this case we may perform a change of variables as follows: write $Y=y - y(x)$ to obtain: $\\frac{dY}{dx}=\\frac{c(x)Y^2 + (2c(x) y(x) + b(x))Y}{a(x)}=\\frac{B(x)c(x)Y^2 +(b(x)B(x)+2 c(x)A(x))Y}{a(x)B(x)}.$ This is a Bernoulli equation.", "Case 2.", "There is no invariant algebraic curve other than the vertical lines.", "Denote by $(\\eta )_\\infty $ the polar divisor of $\\eta $ in $\\mathbb {C}^2$ .", "Claim 2.5 We have $(\\eta )_\\infty =\\lbrace x \\in \\mathbb {C}, a(x)=0\\rbrace \\times \\mathbb {C}$ .", "First of all we recall that the polar set of $\\eta $ is invariant by $\\Omega =0$ ([5]).", "By the hypothesis we then conclude that $(\\eta )_\\infty
\\supseteq \\lbrace x \\in \\mathbb {C}, a(x)=0\\rbrace \\times \\mathbb {C}$ is a union of vertical invariant lines.", "Thus, from the integration lemma in [4] we have $\\eta =\\sum \\limits _{j=1}^r \\lambda _j \\frac{dx}{x-x_j} + d \\bigg (\\frac{g(x,y)}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}}\\bigg )$ where $g(x,y)$ is a polynomial function, $\\lambda _j\\in \\mathbb {C}$ and $n_j\\in \\mathbb {N}$ is the order of the poles of $\\Omega $ in the component $(x=x_j)$ of the polar set.", "Now we use the equation $d\\Omega = \\eta \\wedge \\Omega $ to obtain $-[a^\\prime (x) + 2y c(x) + b(x)] dx \\wedge dy =-a(x) \\sum \\limits _{j=1}^r \\frac{\\lambda _j}{x-x_j} dx \\wedge dy + d \\bigg (\\frac{g(x,y)}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}}\\bigg ) \\wedge \\Omega $ where $d \\bigg (\\frac{g(x,y)}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}}\\bigg ) \\wedge \\Omega = g d\\bigg (\\frac{1}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}}\\bigg )\\wedge a(x) dy +\\frac{dg}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}} \\wedge \\Omega .$ Notice that $\\frac{dg}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}} \\wedge \\Omega =\\frac{g_x}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}}a(x) dx \\wedge dy - \\frac{g_y}{\\prod \\limits _{j=1}^r (x-x_j)^{n_j -1}} (c(x) y^2 + b(x) y + a(x))dx\\wedge dy$ Notice that the left side $-[a^\\prime (x) + 2y c(x) + b(x)] dx \\wedge dy$ has no term in $y^2$ , so that we must have $g_y c(x)=0$ on the right side.", "If $g_y=0$ then $g=g(x)$ and from the left side we must have $c(x)=0$ .", "This shows that we must always have $c(x)=0$ .", "This implies that the original equation is a linear equation of the form $\\frac{dy}{dx}=\\frac{ b(x)y + a(x)}{a(x)}$ which can be written as $a(x) y^\\prime - b(x)y = a(x)$ .", "Let us prove the second part, i.e., the equivalence.", "We assume that $L[u](z)=a(z)u^{\\prime \\prime }+b(z)u^\\prime +c(z)u=0$ admits a Liouvillian first integral.", "We consider the change of coordinates $t=\\frac{u^\\prime }{u}$ which
gives the following Riccati model $\\mathcal {R}: \\frac{dt}{dz}=-\\frac{a(z) t^2 + b(z) t + c(z)}{a(z)}$ .", "We claim: Claim 2.6 The Riccati model $\\mathcal {R}$ also admits a Liouvillian first integral.", "By hypothesis the ODE $L[u]= a(z) u^{\\prime \\prime } + b(z) u ^\\prime + c(z) u=0$ has a Liouvillian first integral.", "By the Corollary in [32] page 674 all solutions of $L[u]=0$ are Liouvillian.", "This implies, by Theorem 1 in [32] page 674, that $\\mathcal {R}$ admits a Liouvillian first integral.", "From the above lemma we have then two possibilities: Case 1.", "There is a solution $\\gamma (z)=A(z)/B(z)$ for the Riccati equation, where $A, B$ are polynomials.", "In this case there is a rational change of coordinates of the form $T= t - A(z)/B(z)$ that takes the Riccati model $\\mathcal {R}$ into a Bernoulli foliation $\\mathcal {B}: \\frac{dT}{dz}=-\\frac{B(z)a(z)T^2 +(b(z)B(z)+2a(z) A(z))T}{a(z)B(z)}=-T^2 -\\tilde{b}(z)T$ where $\\tilde{b}(z)=\\frac{b(z)B(z)+2a(z) A(z)}{ a(z) B(z)}=\\frac{b(z)}{a(z)}+2 \\frac{A(z)}{B(z)}=\\frac{b(z)}{a(z)}+ 2\\gamma (z)$ .", "In this case the original ODE $L[u](z)=0$ becomes $\\tilde{L}[U](z)=U^{\\prime \\prime } + \\tilde{b} (z) U^\\prime =0$ after the change of unknown $U=\\exp (\\int ^z Td\\eta )=\\exp \\big (\\int ^z\\big (t - \\frac{A(\\eta )}{B(\\eta )}\\big )d\\eta \\big )=\\exp (\\int ^z td\\eta )\\exp \\big (-\\int ^z\\frac{A(\\eta )}{B(\\eta )}d\\eta \\big )=u\\exp \\big (-\\int ^z\\gamma (\\eta )d\\eta \\big ) $ where $\\gamma (z)=\\frac{A(z)}{B(z)}$ .", "This shows that we have Liouvillian solutions to the ODE which are given by $\\begin{array}{rl}\\exp \\big (-\\int ^z\\gamma (\\eta )d\\eta \\big ) u(z)&=U(z)=\\ell +k \\int ^z \\exp \\big (-\\int ^\\eta \\tilde{b}(\\xi )d\\xi \\big ) d\\eta \\\\&\\\\&=\\ell +k\\int ^z \\exp \\big (-\\int ^\\eta \\frac{b(\\xi )}{a(\\xi )}d\\xi \\big )\\cdot \\exp \\big (\\int ^\\eta -2\\gamma (\\xi )d\\xi \\big ) 
d\\eta \\end{array}$ for constants $k, \\ell \\in \\mathbb {C}$ .", "So that $u(z)=\\exp \\big (\\int ^z\\gamma (\\eta )d\\eta \\big )\\big [\\ell +k\\int ^z \\exp \\big (-\\int ^\\eta \\frac{b(\\xi )}{a(\\xi )}d\\xi \\big )\\cdot \\exp \\big (\\int ^\\eta -2\\gamma (\\xi )d\\xi \\big ) d\\eta \\big ]$ for constants $k, \\ell \\in \\mathbb {C}$ .", "Case 2.", "We have $c(z)=0$ and therefore the original ODE is of the form $L[u]=a(z) u^{\\prime \\prime } + b(z) u^\\prime =0$ .", "Thus the solutions are Liouvillian given by $u(z) =k_1 + k_2 \\int ^z \\exp (-\\int ^\\eta \\frac{b(\\xi )}{a(\\xi )}d\\xi )d\\eta $ for constants $k_1,k_2\\in \\mathbb {C}$ ." ], [ "Third order Euler equations", "Theorem 3.1 Consider the third order differential equation $z^3u^{\\prime \\prime \\prime }+z^2a(z)u^{\\prime \\prime }+zb(z)u^\\prime +c(z)u=0$ where $a,b,c$ are entire functions in the complex plane.", "Then the origin and the infinity are regular singular points of (REF ) if and only if equation (REF ) is an Euler equation.", "We already know that every Euler equation has the origin and the infinity as regular singular points.", "Let us see the converse.", "By the change of coordinates $z=1/t$ and considering $w(t)=u\\big (\\frac{1}{t}\\big )$ we have $u^\\prime \\big (\\frac{1}{t}\\big )=-t^2w^\\prime (t), \\, \\, u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )=t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t), \\, \\, u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )=-t^6w^{\\prime \\prime \\prime }(t)-6t^5w^{\\prime \\prime }(t)-6t^4w^\\prime (t),$ hence $\\frac{1}{t^3}u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )+\\frac{1}{t^2}a\\big (\\frac{1}{t}\\big )u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )+\\frac{1}{t}b\\big (\\frac{1}{t}\\big )u^\\prime \\big (\\frac{1}{t}\\big )+c\\big (\\frac{1}{t}\\big )u\\big (\\frac{1}{t}\\big )=0$ $\\frac{1}{t^3}[-t^6w^{\\prime \\prime \\prime }(t)-6t^5w^{\\prime \\prime }(t)-6t^4w^\\prime (t)]+\\frac{1}{t^2}a\\big (\\frac{1}{t}\\big 
)[t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t)]+\\frac{1}{t}b\\big (\\frac{1}{t}\\big )[-t^2w^\\prime (t)]+c\\big (\\frac{1}{t}\\big )w(t)=0$ $t^3w^{\\prime \\prime \\prime }(t)+t^2\\big [6-a\\big (\\frac{1}{t}\\big )\\big ]w^{\\prime \\prime }(t)+t\\big [6-2a\\big (\\frac{1}{t}\\big )+b\\big (\\frac{1}{t}\\big )\\big ]w^\\prime (t)-c\\big (\\frac{1}{t}\\big )w(t)=0.$ Given that the infinity is a regular singular point of (REF ) we have that the origin is a regular singular point of (REF ), consequently there exist the limits $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^3\\big [6-a\\big (\\frac{1}{t}\\big )\\big ]}{t^3}=\\displaystyle \\lim _{t\\rightarrow 0}\\big [6-a\\big (\\frac{1}{t}\\big )\\big ]=\\displaystyle \\lim _{t\\rightarrow 0}\\big [(6-a_0)-\\frac{a_1}{t}-\\frac{a_2}{t^2}-\\ldots \\big ],$ $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^3\\big [6-2a\\big (\\frac{1}{t}\\big )+b\\big (\\frac{1}{t}\\big )\\big ]}{t^3}=\\displaystyle \\lim _{t\\rightarrow 0}\\big [6-2a\\big (\\frac{1}{t}\\big )+b\\big (\\frac{1}{t}\\big )\\big ]=\\displaystyle \\lim _{t\\rightarrow 0}\\big [(6-2a_0+b_0)-\\frac{2a_1-b_1}{t}-\\frac{2a_2-b_2}{t^2}-\\ldots \\big ]$ and $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^3c\\big (\\frac{1}{t}\\big )}{t^3}=\\displaystyle \\lim _{t\\rightarrow 0}c\\big (\\frac{1}{t}\\big )=\\displaystyle \\lim _{t\\rightarrow 0}\\big [c_0+\\frac{c_1}{t}+\\frac{c_2}{t^2}+\\ldots \\big ].$ and then $0=a_1=a_2=\\ldots $ , $0=b_1=b_2=\\ldots $ and $0=c_1=c_2=\\ldots $ .", "Hence equation (REF ) is of the form $z^3u^{\\prime \\prime \\prime }(z)+a_0z^2u^{\\prime \\prime }+b_0zu^\\prime (z)+c_0u(z)=0$ which is an Euler equation.", "Theorem 3.2 Consider the third order differential equation $a(z)u^{\\prime \\prime \\prime }+b(z)u^{\\prime \\prime }+c(z)u^\\prime +d(z)u=0$ where $a,b,c,d$ are entire functions with $a^\\prime (0)\\ne 0$ .", "Then the origin and the infinity are regular singular points of (REF ) if and only if there exists $k=0,1,2,\\ldots
$ in such a way that equation (REF ) is of the form $\\begin{array}{c}(A_1z+\\ldots +A_{k+3}z^{k+3})u^{\\prime \\prime \\prime }(z)+(B_0+B_1z+\\ldots +B_{k+2}z^{k+2})u^{\\prime \\prime }(z)\\\\ \\\\+(C_0+C_1z+\\ldots +C_{k+1}z^{k+1})u^\\prime (z)+(D_0+D_1z+\\ldots +D_{k}z^k)u(z)=0\\end{array}$ where $A_1,\\ldots ,A_{k+3},B_0,\\ldots ,B_{k+2},C_0,\\ldots ,C_{k+1},D_0,\\ldots ,D_k$ are constants such that $A_1,A_{k+3}\\ne 0$ .", "Let us first see that (REF ) has the origin and the infinity as regular singular points.", "Clearly the origin is a singular point of (REF ) and since there exist the limits $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z(B_0+B_1z+\\ldots +B_{k+2}z^{k+2})}{(A_1z+\\ldots +A_{k+3}z^{k+3})}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{B_0+B_1z+\\ldots +B_{k+2}z^{k+2}}{(A_1+\\ldots +A_{k+3}z^{k+2})}=\\frac{B_0}{A_1},$ $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^2(C_0+C_1z+\\ldots +C_{k+1}z^{k+1})}{A_1z+\\ldots +A_{k+3}z^{k+3}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z(C_0+C_1z+\\ldots +C_{k+1}z^{k+1})}{A_1+\\ldots +A_{k+3}z^{k+2}}=0$ and $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^3(D_0+D_1z+\\ldots +D_{k}z^{k})}{A_1z+\\ldots +A_{k+3}z^{k+3}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^2(D_0+D_1z+\\ldots +D_{k}z^{k})}{A_1+\\ldots +A_{k+3}z^{k+2}}=0$ we have that the origin is a regular singular point.", "Putting $z=1/t$ and considering $w(t)=u\\big (\\frac{1}{t}\\big )$ we have $u^\\prime \\big (\\frac{1}{t}\\big )=-t^2w^\\prime (t), \\, \\, u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )=t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t), \\, \\, u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )=-t^6w^{\\prime \\prime \\prime }(t)-6t^5w^{\\prime \\prime }(t)-6t^4w^\\prime (t),$ hence $\\begin{array}{c}\\big (\\frac{A_1}{t}+\\ldots +\\frac{A_{k+3}}{t^{k+3}}\\big )u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )+\\big (B_0+\\frac{B_1}{t}+\\ldots +\\frac{B_{k+2}}{t^{k+2}}\\big )u^{\\prime \\prime }\\big 
(\\frac{1}{t}\\big )\\\\\\\\+\\big (C_0+\\frac{C_1}{t}+\\ldots +\\frac{C_{k+1}}{t^{k+1}}\\big )u^\\prime \\big (\\frac{1}{t}\\big )+\\big (D_0+\\frac{D_1}{t}+\\ldots +\\frac{D_{k}}{t^{k}}\\big )u\\big (\\frac{1}{t}\\big )=0\\end{array}$ $\\begin{array}{c}\\big (\\frac{A_1}{t}+\\ldots +\\frac{A_{k+3}}{t^{k+3}}\\big )[-t^6w^{\\prime \\prime \\prime }(t)-6t^5w^{\\prime \\prime }(t)-6t^4w^\\prime (t)]+\\big (B_0+\\frac{B_1}{t}+\\ldots +\\frac{B_{k+2}}{t^{k+2}}\\big )[t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t)]\\\\\\\\+\\big (C_0+\\frac{C_1}{t}+\\ldots +\\frac{C_{k+1}}{t^{k+1}}\\big )[-t^2w^\\prime (t)]+\\big (D_0+\\frac{D_1}{t}+\\ldots +\\frac{D_{k}}{t^{k}}\\big )w(t)=0\\end{array}$ $\\begin{array}{c}\\big (A_1t^{k+5}+\\ldots +A_{k+3}t^{3}\\big )w^{\\prime \\prime \\prime }(t)+\\big ((6A_1-B_0)t^{k+4}+\\ldots +(6A_{k+3}-B_{k+2})t^2\\big )w^{\\prime \\prime }(t)\\\\\\\\+\\big ( (6A_1-2B_0)t^{k+3}+(6A_2-2B_1+C_0)t^{k+2}+\\ldots +(6A_{k+3}-2B_{k+2}+C_{k+1})t\\big )w^\\prime (t)\\\\\\\\-\\big (D_0t^k+\\ldots +D_{k}\\big )w(t)=0\\end{array}$ Observe that the origin is a singular point of (REF ) and since there exist the limits $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t\\big ((6A_1-B_0)t^{k+4}+\\ldots +(6A_{k+3}-B_{k+2})t^2\\big )}{A_1t^{k+5}+\\ldots +A_{k+3}t^3}=\\frac{6A_{k+3}-B_{k+2}}{A_{k+3}}$ $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^2\\big ((6A_1-2B_0)t^{k+3}+\\ldots +(6A_{k+3}-2B_{k+2}+C_{k+1})t\\big )}{A_1t^{k+5}+\\ldots +A_{k+3}t^3}=\\frac{6A_{k+3}-2B_{k+2}+C_{k+1}}{A_{k+3}}$ and $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^3\\big (D_0t^k+\\ldots +D_{k}\\big )}{A_1t^{k+5}+\\ldots +A_{k+3}t^3}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{D_0t^k+\\ldots +D_{k}}{A_1t^{k+2}+\\ldots +A_{k+3}}=\\frac{D_k}{A_{k+3}}$ we have that the origin is a regular singular point of (REF ), consequently the infinity is a regular singular point of (REF ).", "Conversely, assume that the origin and the infinity are regular singular points of (REF ).", "Hence by the
change of coordinates $z=1/t$ and considering $v(t)=u\\big (\\frac{1}{t}\\big )$ $u^\\prime \\big (\\frac{1}{t}\\big )=-t^2v^\\prime (t),\\, \\, u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )=t^4v^{\\prime \\prime }(t)+2t^3v^\\prime (t), \\, \\, u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )=-t^6v^{\\prime \\prime \\prime }(t)-6t^5v^{\\prime \\prime }(t)-6t^4v^\\prime (t),$ hence $a\\big (\\frac{1}{t}\\big )u^{\\prime \\prime \\prime }\\big (\\frac{1}{t}\\big )+b\\big (\\frac{1}{t}\\big )u^{\\prime \\prime }\\big (\\frac{1}{t}\\big )+c\\big (\\frac{1}{t}\\big )u^\\prime \\big (\\frac{1}{t}\\big )+d\\big (\\frac{1}{t}\\big )u\\big (\\frac{1}{t}\\big )=0$ $a\\big (\\frac{1}{t}\\big )[-t^6v^{\\prime \\prime \\prime }(t)-6t^5v^{\\prime \\prime }(t)-6t^4v^\\prime (t)]+b\\big (\\frac{1}{t}\\big )[t^4v^{\\prime \\prime }(t)+2t^3v^\\prime (t)]+c\\big (\\frac{1}{t}\\big )[-t^2v^\\prime (t)]+d\\big (\\frac{1}{t}\\big )v(t)=0$ $t^6a\\big (\\frac{1}{t}\\big )v^{\\prime \\prime \\prime }(t)+\\big [6t^5a\\big (\\frac{1}{t}\\big )-t^4b\\big (\\frac{1}{t}\\big )\\big ]v^{\\prime \\prime }(t)+\\big [6t^4a\\big (\\frac{1}{t}\\big )-2t^3b\\big (\\frac{1}{t}\\big )+t^2c\\big (\\frac{1}{t}\\big )\\big ]v^\\prime (t)-d\\big (\\frac{1}{t}\\big )v(t)=0.$ Given that the infinity is a regular singular point of (REF ) then the origin is a regular singular point of (REF ).", "Hence, there exist the limits $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t\\big [6t^5a\\big (\\frac{1}{t}\\big )-t^4b\\big (\\frac{1}{t}\\big )\\big ]}{t^6a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\big [6-\\frac{b\\big (\\frac{1}{t}\\big )}{ta\\big (\\frac{1}{t}\\big )}\\big ]=6-\\lim _{t\\rightarrow 0}\\frac{b_0+\\frac{b_1}{t}+\\frac{b_2}{t^2}+\\ldots }{a_1+\\frac{a_2}{t}+\\frac{a_3}{t^2}+\\ldots }$ , $\\begin{array}{c l}\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^2\\big [6t^4a\\big (\\frac{1}{t}\\big )-2t^3b\\big (\\frac{1}{t}\\big )+t^2c\\big (\\frac{1}{t}\\big )\\big 
]}{t^6a\\big (\\frac{1}{t}\\big )}&=\\displaystyle \\lim _{t\\rightarrow 0}\\big [6-\\frac{2tb\\big (\\frac{1}{t}\\big )-c\\big (\\frac{1}{t}\\big )}{t^2a\\big (\\frac{1}{t}\\big )}\\big ]\\\\ &\\\\&=6-\\displaystyle \\lim _{t\\rightarrow 0}\\frac{2b_0t+(2b_1-c_0)+\\frac{2b_2-c_1}{t}+\\frac{2b_3-c_2}{t^2}+\\ldots }{a_1t+a_2+\\frac{a_3}{t}+\\frac{a_4}{t^2}+\\ldots }\\end{array}$ and $\\displaystyle \\lim _{t\\rightarrow 0}\\frac{t^3d\\big (\\frac{1}{t}\\big )}{t^6a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{d\\big (\\frac{1}{t}\\big )}{t^3a\\big (\\frac{1}{t}\\big )}=\\displaystyle \\lim _{t\\rightarrow 0}\\frac{d_0+\\frac{d_1}{t}+\\frac{d_2}{t^2}+\\frac{d_3}{t^3}+\\ldots }{a_1t^2+a_2t+a_3+\\frac{a_4}{t}+\\ldots }$ and then there exists $k=0,1,2,\\ldots $ such that $a_{k+3}\\ne 0$ , $0=a_{k+4}=a_{k+5}=\\ldots $ , $0=b_{k+3}=b_{k+4}=\\ldots $ , $0=c_{k+2}=c_{k+3}=\\ldots $ and $0=d_{k+1}=d_{k+2}=\\ldots $ .", "Given that the origin is a regular singular point of (REF ) we have that there exist the limits $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{zb(z)}{a(z)}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{b_0z+b_1z^2+\\ldots +b_{k+2}z^{k+3}}{a_1z+a_2z^2+\\ldots +a_{k+3}z^{k+3}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{b_0+b_1z+\\ldots +b_{k+2}z^{k+2}}{a_1+a_2z+\\ldots +a_{k+3}z^{k+2}}$ $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^2c(z)}{a(z)}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{c_0z^2+c_1z^3+\\ldots +c_{k+1}z^{k+3}}{a_1z+a_2z^2+\\ldots +a_{k+3}z^{k+3}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{c_0z+c_1z^2+\\ldots +c_{k+1}z^{k+2}}{a_1+a_2z+\\ldots +a_{k+3}z^{k+2}}$ and $\\displaystyle \\lim _{z\\rightarrow 0}\\frac{z^3d(z)}{a(z)}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{d_0z^3+d_1z^4+\\ldots +d_{k}z^{k+3}}{a_1z+a_2z^2+\\ldots +a_{k+3}z^{k+3}}=\\displaystyle \\lim _{z\\rightarrow 0}\\frac{d_0z^2+d_1z^3+\\ldots +d_{k}z^{k+2}}{a_1+a_2z+\\ldots +a_{k+3}z^{k+2}}$ provided that $a_1\\ne 0$ .", 
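"The change of coordinates $z=1/t$ used in the two proofs above rests on the identities $u^\\prime (1/t)=-t^2w^\\prime (t)$ , $u^{\\prime \\prime }(1/t)=t^4w^{\\prime \\prime }(t)+2t^3w^\\prime (t)$ and $u^{\\prime \\prime \\prime }(1/t)=-t^6w^{\\prime \\prime \\prime }(t)-6t^5w^{\\prime \\prime }(t)-6t^4w^\\prime (t)$ for $w(t)=u(1/t)$ ; as a sanity check (a sympy sketch, not part of the argument) one can verify them on the generic power $u(x)=x^r$ with symbolic exponent $r$ :

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r, x = sp.symbols('r x')

u = x**r                    # generic power u(x) = x**r with symbolic exponent r
w = u.subs(x, 1/t)          # w(t) = u(1/t)

u1 = sp.diff(u, x, 1).subs(x, 1/t)   # u'(1/t)
u2 = sp.diff(u, x, 2).subs(x, 1/t)   # u''(1/t)
u3 = sp.diff(u, x, 3).subs(x, 1/t)   # u'''(1/t)

w1, w2, w3 = (sp.diff(w, t, k) for k in (1, 2, 3))

assert sp.simplify(u1 + t**2*w1) == 0
assert sp.simplify(u2 - (t**4*w2 + 2*t**3*w1)) == 0
assert sp.simplify(u3 + t**6*w3 + 6*t**5*w2 + 6*t**4*w1) == 0
```

Since the identities are formal consequences of the chain rule, checking them on $x^r$ for symbolic $r$ exercises every coefficient.", 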
"Thence (REF ) is of the form (REF )." ], [ "Regular singularities in order three", "The simplest example of a third order equation that has a regular singular point at the origin is an Euler equation $L(y):=x^3y^{\\prime \\prime \\prime }+ax^2y^{\\prime \\prime }+bxy^\\prime +cy=0,$ where $a,b,c$ are constants.", "Let us consider $x>0$ .", "The idea is to change variables by $x=e^z$ .", "Suppose that $\\varphi $ is a solution of (REF ) and define $\\tilde{\\varphi }(z)=\\varphi (e^z)$ ; thus we have $\\frac{d\\varphi }{dx}\\big (e^z\\big )=e^{-z}\\frac{d\\tilde{\\varphi }}{dz}(z),$ $\\frac{d^2\\varphi }{dx^2}\\big (e^z\\big )=e^{-2z}\\big (\\frac{d^2\\tilde{\\varphi }}{dz^2}(z)-\\frac{d\\tilde{\\varphi }}{dz}(z)\\big ),$ $\\frac{d^3\\varphi }{dx^3}\\big (e^z\\big )=e^{-3z}\\big (\\frac{d^3\\tilde{\\varphi }}{dz^3}(z)-3\\frac{d^2\\tilde{\\varphi }}{dz^2}(z)+2\\frac{d\\tilde{\\varphi }}{dz}(z)\\big ).$ Since, from (REF ), $ e^{3z}\\frac{d^3\\varphi }{dx^3}\\big (e^z\\big )+ae^{2z}\\frac{d^2\\varphi }{dx^2}\\big (e^z\\big )+be^z\\frac{d\\varphi }{dx}\\big (e^z\\big )+c\\varphi \\big (e^z\\big )=0,$ we have: $\\tilde{\\varphi }^{\\prime \\prime \\prime }(z)+(a-3)\\tilde{\\varphi }^{\\prime \\prime }(z)+(2-a+b)\\tilde{\\varphi }^{\\prime }(z)+c\\tilde{\\varphi }(z)=0 $ Hence $\\tilde{\\varphi }$ satisfies the equation: $\\tilde{L}(y)=y^{\\prime \\prime \\prime }+(a-3)y^{\\prime \\prime }+(2-a+b)y^{\\prime }+cy=0.$ The characteristic polynomial associated with (REF ) is $r^3+(a-3)r^2+(2-a+b)r+c=0$ which can be written equivalently as $q(r)=r(r-1)(r-2)+ar(r-1)+br+c=0$ ; this polynomial is called the indicial polynomial associated with (REF ).", "Let $r_1,r_2,r_3$ be the roots of the polynomial (REF ).", "There are three cases: All roots are equal: $r_1=r_2=r_3$ .", "Hence $\\lbrace e^{r_1z},ze^{r_1z},z^2e^{r_1z}\\rbrace $ is a basis of the solution space of equation (REF ) and, considering the change of coordinate $x=e^z$ , we have that $\\lbrace x^{r_1},\\log x\\;x^{r_1},(\\log x)^2\\;x^{r_1}\\rbrace $ is a basis of the solution space of equation (REF ).", "Two roots equal and one different: $r_1=r_2\\ne r_3$ .", "Hence $\\lbrace e^{r_1z},ze^{r_1z},e^{r_3z}\\rbrace $ is a basis of the solution space of equation (REF ) and, performing the change of coordinate $x=e^z$ , we have that $\\lbrace x^{r_1},\\log x\\;x^{r_1},x^{r_3}\\rbrace $ is a basis of the solution space of equation (REF ).", "All roots different: $r_1\\ne r_2$ , $r_2\\ne r_3$ and $r_1\\ne r_3$ .", "Hence $\\lbrace e^{r_1z},e^{r_2z},e^{r_3z}\\rbrace $ is a basis of the solution space of equation (REF ) and, using the change of coordinate $x=e^z$ , we have that $\\lbrace x^{r_1},x^{r_2},x^{r_3}\\rbrace $ is a basis of the solution space of equation (REF )." ], [ "Third order equations with regular singular points", "An equation of third order with a regular singular point at $x_0$ has the form: $(x-x_0)^3y^{\\prime \\prime \\prime }+a(x)(x-x_0)^2y^{\\prime \\prime }+b(x)(x-x_0)y^\\prime +c(x)y=0,$ where $a,b,c$ are analytic at $x_0$ .", "Hence, $a,b,c$ can be written as power series of the following form: $a(x)=\\sum ^\\infty _{k=0}\\alpha _k(x-x_0)^k,\\;b(x)=\\sum ^\\infty _{k=0}\\beta _k(x-x_0)^k,\\;c(x)=\\sum ^\\infty _{k=0}\\gamma _k(x-x_0)^k,$ which are convergent in a certain interval $|x-x_0|<R_0$ , for some $R_0>0$ .", "We shall look for solutions of (REF ) in a neighborhood of $x_0$ .", "For the sake of simplicity of notation, we shall assume that $x_0=0$ .", "If $x_0\\ne 0$ , it is easy to transform (REF ) into an equivalent equation with a regular singular point at the origin.", "Put $t=x-x_0$ , and $\\tilde{a}(t)=a(x_0+t)=\\sum ^\\infty _{k=0}\\alpha _k t^k,\\;\\tilde{b}(t)=b(x_0+t)=\\sum ^\\infty _{k=0}\\beta _kt^k,\\;\\tilde{c}(t)=c(x_0+t)=\\sum ^\\infty _{k=0}\\gamma _kt^k.$ The power series for $\\tilde{a},\\tilde{b},\\tilde{c}$ converge in the interval $|t|<R_0$ centered at $t=0$ .", "Let $\\varphi $ be any solution of (REF ), and define $\\tilde{\\varphi }$ of the following form: 
$\\tilde{\\varphi }(t)=\\varphi (x_0+t).$ Then $\\frac{d\\tilde{\\varphi }}{dt}(t)=\\frac{d\\varphi }{dx}(x_0+t),\\;\\;\\frac{d^2\\tilde{\\varphi }}{dt^2}(t)=\\frac{d^2\\varphi }{dx^2}(x_0+t),\\;\\;\\;\\frac{d^3\\tilde{\\varphi }}{dt^3}(t)=\\frac{d^3\\varphi }{dx^3}(x_0+t),$ and therefore we see that $\\tilde{\\varphi }$ satisfies the equation: $t^3u^{\\prime \\prime \\prime }+\\tilde{a}(t)t^2u^{\\prime \\prime }+\\tilde{b}(t)tu^\\prime +\\tilde{c}(t)u=0,$ where now $u^\\prime =\\frac{du}{dt}$ .", "This is an equation with a regular singular point at $t=0$ .", "Conversely, if $\\tilde{\\varphi }$ satisfies (REF ), the function $\\varphi $ given by $\\varphi (x)=\\tilde{\\varphi }(x-x_0)$ satisfies (REF ).", "In this sense (REF ) is equivalent to (REF ).", "With $x_0=0$ in (REF ), we can write this equation in the following form: $L(y)=x^3y^{\\prime \\prime \\prime }+a(x)x^2y^{\\prime \\prime }+b(x)xy^\\prime +c(x)y=0,$ where $a,b,c$ are analytic at the origin and hence have power series expansions of the following form: $a(x)=\\sum ^\\infty _{k=0}\\alpha _kx^k,\\;b(x)=\\sum ^\\infty _{k=0}\\beta _kx^k,\\;c(x)=\\sum ^\\infty _{k=0}\\gamma _kx^k,$ which are convergent in an interval $|x|<R_0$ , $R_0>0$ .", "The Euler equation is a special case of (REF ), with $a,b,c$ constants.", "The effect of the higher order terms (those having $x$ as a factor) in the series (REF ) is to introduce infinite series into the solutions of (REF )."
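In the Euler case the discussion above amounts to the identity $L(x^r)=q(r)x^r$ , with $q(r)=r(r-1)(r-2)+ar(r-1)+br+c$ the indicial polynomial, so $y=x^r$ is a solution exactly when $r$ is an indicial root. A minimal numeric sanity check of this identity; the sample values of $a,b,c,r,x$ below are arbitrary illustrative choices, not taken from the text:

```python
# Sanity check (illustrative): for the third order Euler equation
#   x^3 y''' + a x^2 y'' + b x y' + c y = 0,
# plugging y = x^r gives L[x^r] = q(r) x^r, where
#   q(r) = r(r-1)(r-2) + a r(r-1) + b r + c.

def indicial(r, a, b, c):
    return r * (r - 1) * (r - 2) + a * r * (r - 1) + b * r + c

def euler_apply(r, x, a, b, c):
    """Evaluate L[x^r] at x > 0, using (x^r)^(k) = r(r-1)...(r-k+1) x^(r-k)."""
    y = x ** r
    yp = r * x ** (r - 1)
    ypp = r * (r - 1) * x ** (r - 2)
    yppp = r * (r - 1) * (r - 2) * x ** (r - 3)
    return x ** 3 * yppp + a * x ** 2 * ypp + b * x * yp + c * y

# identity L[x^r] = q(r) x^r for arbitrary sample values
a, b, c, r, x = 1.0, 2.0, 3.0, 2.5, 1.7
assert abs(euler_apply(r, x, a, b, c) - indicial(r, a, b, c) * x ** r) < 1e-9

# choosing c so that r = 2 is an indicial root makes y = x^2 an exact solution
a, b = 1.0, 1.0
c = -indicial(2.0, a, b, 0.0)  # forces q(2) = 0
assert abs(euler_apply(2.0, 1.7, a, b, c)) < 1e-9
```

The same check works for any sample exponent: whenever `indicial(r, a, b, c)` vanishes, `euler_apply(r, x, a, b, c)` vanishes for every $x>0$ up to floating-point error.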
], [ "Proof of theorem ", "Let $\\varphi $ be a solution of (REF ) of the form $\\varphi (x)=x^r\\sum ^{\\infty }_{n=0}d_nx^n$ where $d_0\\ne 0$ .", "Since $a(x),b(x)$ and $c(x)$ are analytic in $|x|<R$ we have that $a(x)=\\sum ^\\infty _{n=0}a_nx^n,\\;\\;\\;b(x)=\\sum ^\\infty _{n=0}b_nx^n\\;\\;\\;\\mbox{ and }\\;\\;\\;c(x)=\\sum ^\\infty _{n=0}c_nx^n.$ Then $\\varphi ^\\prime (x)=x^{r-1}\\sum ^{\\infty }_{n=0}(n+r)d_nx^n $ $\\varphi ^{\\prime \\prime }(x)=x^{r-2}\\sum ^{\\infty }_{n=0}(n+r)(n+r-1)d_nx^n$ $\\varphi ^{\\prime \\prime \\prime }(x)=x^{r-3}\\sum ^{\\infty }_{n=0}(n+r)(n+r-1)(n+r-2)d_nx^n$ and thus we have $\\begin{array}{l c l}c(x)\\varphi (x)&=&x^r\\big (\\sum ^\\infty _{n=0}d_nx^n\\big )\\big (\\sum ^\\infty _{n=0}c_nx^n\\big )=x^r\\sum ^\\infty _{n=0}\\tilde{c}_nx^n\\end{array}$ where $\\tilde{c}_n=\\sum ^{n}_{j=0}d_jc_{n-j}$ .", "$\\begin{array}{l c l}xb(x)\\varphi ^\\prime (x)&=&x^r\\big (\\sum ^\\infty _{n=0}(n+r)d_nx^n\\big )\\big (\\sum ^\\infty _{n=0}b_nx^n\\big )= x^r\\sum ^\\infty _{n=0}\\tilde{b}_nx^n\\end{array}$ where $\\tilde{b}_n=\\sum ^{n}_{j=0}(j+r)d_jb_{n-j}$ .", "$\\begin{array}{l c l}x^2a(x)\\varphi ^{\\prime \\prime }(x)&=&x^r\\big (\\sum ^\\infty _{n=0}(n+r)(n+r-1)d_nx^n\\big )\\big (\\sum ^\\infty _{n=0}a_nx^n\\big )=x^r\\sum ^\\infty _{n=0}\\tilde{a}_nx^n\\end{array}$ where $\\tilde{a}_n=\\sum ^{n}_{j=0}(j+r)(j+r-1)d_ja_{n-j}$ .", "$\\begin{array}{l c l}x^3\\varphi ^{\\prime \\prime \\prime }(x)&=&x^r\\sum ^{\\infty }_{n=0}(n+r)(n+r-1)(n+r-2)d_nx^n.\\end{array}$ Therefore $L(\\varphi )(x)=x^r\\sum ^\\infty _{n=0}\\big [(n+r)(n+r-1)(n+r-2)d_n+\\tilde{a}_n+\\tilde{b}_n+\\tilde{c}_n\\big ]x^n$ and according to (REF ) we have $[\\;\\;\\;]_n=(n+r)(n+r-1)(n+r-2)d_n+\\tilde{a}_n+\\tilde{b}_n+\\tilde{c}_n=0,\\;\\;\\;n=0,1,2,\\ldots $ Using the definitions of $\\tilde{a}_n,\\tilde{b}_n$ and $\\tilde{c}_n$ we can write $[\\;\\;\\;]_n$ as $\\begin{array}{lcl} [\\;\\;\\;]_n&=& (n+r)(n+r-1)(n+r-2)d_n+\\sum 
^{n}_{j=0}(j+r)(j+r-1)d_ja_{n-j}\\\\&&\\\\&&+\\sum ^{n}_{j=0}(j+r)d_jb_{n-j}+\\sum ^{n}_{j=0}d_jc_{n-j}\\\\&&\\\\&=&[(n+r)(n+r-1)(n+r-2)+(n+r)(n+r-1)a_0+(n+r)b_0+c_0]d_n\\\\&&\\\\&&+\\sum ^{n-1}_{j=0}[(j+r)(j+r-1)a_{n-j}+(j+r)b_{n-j}+c_{n-j}]d_j\\end{array}$ For $n=0$ we have $ r(r-1)(r-2)+r(r-1)a_0+rb_0+c_0=0$ provided that $d_0\\ne 0$ .", "We see that $[\\;\\;\\;]_n=q(n+r)d_n+e_n,\\;\\;\\;\\;\\;n=1,2,\\ldots $ where $e_n=\\sum ^{n-1}_{j=0}[(j+r)(j+r-1)a_{n-j}+(j+r)b_{n-j}+c_{n-j}]d_j,\\;\\;\\;\\;\\;n=1,2,\\ldots $ Note that $e_n$ is a linear combination of $d_0,d_1,\\ldots ,d_{n-1}$ , whose coefficients involve the functions $a,b,c$ and $r$ .", "Leaving $r$ and $d_0$ undetermined for the moment, we solve equations (REF ) and (REF ) in terms of $d_0$ and $r$ .", "We denote these solutions by $D_n(r)$ , and the corresponding $e_n$ by $E_n(r)$ .", "Hence $E_1(r)=(r(r-1)a_1+rb_1+c_1)d_0\\;\\;\\;\\;\\;\\;D_1(r)=-\\frac{E_1(r)}{q(1+r)},$ and in general: $E_n(r)=\\sum ^{n-1}_{j=0}[(j+r)(j+r-1)a_{n-j}+(j+r)b_{n-j}+c_{n-j}]D_j(r)$ $D_n(r)=-\\frac{E_n(r)}{q(n+r)},\\;\\;\\;\\;\\;n=1,2,\\ldots $ The terms $D_n$ thus obtained are rational functions of $r$ , whose only possible poles are the points $r$ for which $q(r+n)=0$ for some $n=1,2,\\ldots $ .", "Among these, only two such points are possible.", "Let us define $\\varphi $ as follows: $\\varphi (x,r)=d_0x^r+x^r\\sum ^{\\infty }_{n=1}D_n(r)x^n.$ If the series (REF ) converges for $0<x<R$ , then we have: $L(\\varphi )(x,r)=d_0q(r)x^r.$ Now we have the following situation: If $\\varphi $ given by (REF ) is a solution of (REF ), then $r$ must be a root of the indicial polynomial $q$ , and then the $d_n$ ($n\\ge 1$ ) are given exclusively in terms of $d_0$ and $r$ by the $D_n(r)$ of (REF ), provided that $q(n+r)\\ne 0$ , $n=1,2,\\ldots $ .", "Conversely, if $r$ is a root of $q$ and if the $D_n(r)$ are given (i.e., $q(n+r)\\ne 0$ for $n=1,2,\\ldots $ ) then the function $\\varphi $ given by $\\varphi (x)=\\varphi (x,r)$ 
is a solution of (REF ) for every choice of $d_0$ , provided that the series (REF ) is convergent.", "Let $r_1,r_2,r_3$ be the roots of $q$ , with $\\mbox{Re}(r_1)\\ge \\mbox{Re}(r_2)\\ge \\mbox{Re}(r_3)$ .", "Then $q(n+r_1)\\ne 0$ for every $n=1,2,\\ldots $ .", "Hence, $D_n(r_1)$ exists for every $n=1,2,\\ldots $ , and putting $d_0=D_0(r_1)=1$ we have that the function $\\varphi _1$ given by $\\varphi _1(x)=x^{r_1}\\sum ^\\infty _{n=0}D_n(r_1)x^n,\\;\\;\\;\\;D_0(r_1)=1,$ is a solution of (REF ), provided that the series converges.", "Since $r_1-r_2\\notin \\mathbb {Z}^+_0$ , $r_1-r_3\\notin \\mathbb {Z}^+_0$ and $r_2-r_3\\notin \\mathbb {Z}^+_0$ , we have $q(n+r_2)\\ne 0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;\\;q(n+r_3)\\ne 0 $ for every $n=1,2,\\ldots $ ; then $D_n(r_2)$ and $D_n(r_3)$ are well defined for $n=1,2,\\ldots $ , and hence the functions $\\varphi _2$ and $\\varphi _3$ are given by $\\varphi _2(x)=x^{r_2}\\sum ^\\infty _{n=0}D_n(r_2)x^n,\\;\\;\\;\\;D_0(r_2)=1,$ and $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}D_n(r_3)x^n,\\;\\;\\;\\;D_0(r_3)=1.$ Now let us show the convergence of the series $\\sum ^{\\infty }_{n=0}D_n(r)x^n$ where the $D_n(r)$ are given recursively by $\\begin{array}{c }D_0(r)=1,\\\\ \\\\ q(n+r)D_n(r)=-\\sum ^{n-1}_{j=0}[(j+r)(j+r-1)a_{n-j}+(j+r)b_{n-j}+c_{n-j}]D_j(r),\\;n=1,2,\\ldots \\end{array}$ see (REF ) and (REF ).", "We need to show that the series (REF ) converges for $|x|<R$ for $r=r_1$ , $r=r_2$ and $r=r_3$ , provided that $r_1-r_2\\notin \\mathbb {Z}^+_0$ , $r_1-r_3\\notin \\mathbb {Z}^+_0$ and $r_2-r_3\\notin \\mathbb {Z}^+_0$ .", "Note that $q(r)=(r-r_1)(r-r_2)(r-r_3),$ and therefore we have $\\begin{array}{c} q(n+r_1)=n(n+r_1-r_2)(n+r_1-r_3),\\\\ \\\\q(n+r_2)=n(n+r_2-r_1)(n+r_2-r_3),\\\\ \\\\q(n+r_3)=n(n+r_3-r_1)(n+r_3-r_2).\\end{array}$ Hence: $\\begin{array}{c} |q(n+r_1)|\\ge n(n-|r_1-r_2|)(n-|r_1-r_3|),\\\\ \\\\|q(n+r_2)|\\ge n(n-|r_1-r_2|)(n-|r_2-r_3|),\\\\ \\\\|q(n+r_3)|\\ge 
n(n-|r_1-r_3|)(n-|r_2-r_3|).\\end{array}$ Let now $\\rho $ be any number that satisfies the inequality $0<\\rho <R$ .", "Since the series defined in (REF ) are convergent for $|x|=\\rho $ , there exists a constant $M>0$ such that $|a_j|\\rho ^j\\le M,\\;\\;\\;\\;\\;|b_j|\\rho ^j\\le M\\;\\;\\;\\;\\;|c_j|\\rho ^j\\le M\\;\\;\\;\\;\\;j=0,1,2,\\ldots $ Replacing (REF ) and (REF ) in equation (REF ), we obtain: $\\begin{array}{l}n(n-|r_1-r_2|)(n-|r_1-r_3|)|D_n(r_1)|\\le \\\\ \\\\\\hspace{85.35826pt}M\\sum ^{n-1}_{j=0}( (j+|r_1|)(j+|r_1-1|)+(j+|r_1|)+1)\\rho ^{j-n}|D_j(r_1)|,\\end{array}$ for $n=1,2,\\ldots $ .", "Let $N_1$ be the integer that satisfies the inequality $N_1-1\\le |r_1-r_2|<N_1,$ and $N_2$ the integer that satisfies the inequality $N_2-1\\le |r_1-r_3|<N_2.$ ", "Set $N=\\max \\lbrace N_1,N_2\\rbrace $ and define $\\gamma _0,\\gamma _1,\\ldots $ as follows: $\\gamma _0=D_0(r_1)=1,\\;\\;\\;\\gamma _n=|D_n(r_1)|,\\;\\;\\;n=1,2,\\ldots ,N-1,$ and $n(n-|r_1-r_2|)(n-|r_1-r_3|)\\gamma _n=M\\sum ^{n-1}_{j=0}( (j+|r_1|)(j+|r_1-1|)+(j+|r_1|)+1)\\rho ^{j-n}\\gamma _j,$ for $n=N,N+1,\\ldots $ .", "Then, comparing the definition of $\\gamma _n$ with (REF ), we see that $|D_n(r_1)|\\le \\gamma _n,\\;\\;\\;\\;\\;\\;n=0,1,2,\\ldots $ We now show that the series $\\sum ^\\infty _{n=0}\\gamma _nx^n$ is convergent for $|x|<\\rho $ .", "Replacing $n$ by $n+1$ in (REF ) we have: $\\rho (n+1)(n+1-|r_1-r_2|)(n+1-|r_1-r_3|)\\gamma _{n+1}=$ $[n(n-|r_1-r_2|)(n-|r_1-r_3|)+ M((n+|r_1|)(n+|r_1-1|)+(n+|r_1|)+1)]\\gamma _n$ for $n\\ge N$ .", "Hence $\\big |\\frac{\\gamma _{n+1}x^{n+1}}{\\gamma _{n}x^{n}}\\big |=\\frac{[n(n-|r_1-r_2|)(n-|r_1-r_3|)+M((n+|r_1|)(n+|r_1-1|)+(n+|r_1|)+1)]}{\\rho (n+1)(n+1-|r_1-r_2|)(n+1-|r_1-r_3|)}|x|$ which converges to $|x|/\\rho $ as $n\\rightarrow \\infty $ .", "Hence, by the ratio test, the series (REF ) converges for $|x|<\\rho $ .", "Using (REF ) and according to the comparison test for series, we conclude 
that the series $\\sum ^\\infty _{n=0}D_n(r_1)x^n,\\;\\;\\;\\;D_0(r_1)=1,$ also converges for $|x|<\\rho $ .", "But since $\\rho $ is any number that satisfies the inequality $0<\\rho <R$ , this shows that the series converges for $|x|<R$ .", "Replacing $r_1$ by $r_2$ in all the above computations, we show that $\\sum ^\\infty _{n=0}D_n(r_2)x^n,\\;\\;\\;\\;D_0(r_2)=1,$ converges for $|x|<R$ provided that $r_1-r_2\\notin \\mathbb {Z}^+_0$ and $r_2-r_3\\notin \\mathbb {Z}^+_0$ .", "Also, replacing $r_1$ by $r_3$ in the above computations, we show that $\\sum ^\\infty _{n=0}D_n(r_3)x^n,\\;\\;\\;\\;D_0(r_3)=1,$ converges for $|x|<R$ provided that $r_1-r_3\\notin \\mathbb {Z}^+_0$ and $r_2-r_3\\notin \\mathbb {Z}^+_0$ .", "Remark 3.1 Note that in (REF ), (REF ) and (REF ) the coefficients $d_n,\\tilde{d_n},\\hat{d_n}$ that appear in the solutions $\\varphi _1,\\varphi _2,\\varphi _3$ of Theorem REF are given by $d_n=D_n(r_1),\\;\\;\\;\\tilde{d_n}=D_n(r_2)\\;\\;\\;\\mbox{ and }\\;\\;\\;\\hat{d_n}=D_n(r_3),\\;\\;\\;n=1,2,\\ldots ,$ where the $D_n(r)$ are solutions of equations (REF ) and (REF ) with $D_0(r)=1$ ."
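The recurrence $q(n+r)D_n(r)=-E_n(r)$ can be cross-checked numerically in exact rational arithmetic. The sketch below uses the sample equation $x^3y'''+x^2y''+xy'+x^3y=0$ (an illustrative choice satisfying the hypotheses: its indicial roots $1+i$ , $1-i$ , $0$ do not differ by nonnegative integers) at the root $r=0$ , where the recurrence reduces to $d_0=1$ , $d_1=d_2=0$ and $n((n-1)^2+1)d_n+d_{n-3}=0$ , and confirms that the truncated Frobenius series annihilates all low-order coefficients when substituted back:

```python
from fractions import Fraction

# Illustrative check of the Frobenius recurrence for the sample equation
#   x^3 y''' + x^2 y'' + x y' + x^3 y = 0,   indicial root r = 0:
#   d_0 = 1, d_1 = d_2 = 0,  n((n-1)^2 + 1) d_n + d_{n-3} = 0  for n >= 3.

N = 30  # truncation order (arbitrary choice)
d = [Fraction(0)] * (N + 1)
d[0] = Fraction(1)
for n in range(3, N + 1):
    d[n] = -d[n - 3] / (n * ((n - 1) ** 2 + 1))

def residual_coeff(m):
    """Coefficient of x^m after substituting the truncated series into the equation."""
    c = Fraction(0)
    if m <= N:
        # x^3 y''' + x^2 y'' + x y' contribute q(m) d_m at degree m
        c += d[m] * (m * (m - 1) * (m - 2) + m * (m - 1) + m)
    if 0 <= m - 3 <= N:
        c += d[m - 3]  # the x^3 y term shifts degrees up by 3
    return c

# every coefficient up to the truncation order cancels exactly
assert all(residual_coeff(m) == 0 for m in range(N + 1))
```

Using `Fraction` keeps the cancellation exact: the only nonzero residual coefficients live above degree $N$ , where the truncation cuts the series off.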
], [ "Proof of theorem ", "We shall use a formal method to find the form of the solutions.", "According to (REF ) and (REF ) we have $L(\\varphi )(x,r)=d_0q(r)x^r,$ where $\\varphi $ is given by $\\varphi (x,r)=d_0x^r+x^r\\sum ^\\infty _{n=1}D_n(r)x^n.$ The functions $D_n(r)$ are given recursively by the formulas: $\\begin{array}{c} D_0(r)=d_0\\ne 0\\\\ \\\\ q(n+r)D_n(r)=-E_n(r)\\\\ \\\\ E_n(r)=\\sum ^{n-1}_{j=0}[(j+r)(j+r-1)a_{n-j}+(j+r)b_{n-j}+c_{n-j}]D_j(r),\\;\\;\\;n=1,2,\\ldots ;\\end{array}$ see (REF ) and (REF ).", "$\\;$ We have $q(r_1)=0,\\;\\;\\;\\;q^\\prime (r_1)=0\\;\\;\\;\\;\\;q^{\\prime \\prime }(r_1)=0,$ and this clearly suggests differentiating (REF ) with respect to $r$ .", "We obtain $\\begin{array}{lcl} \\frac{\\partial }{\\partial r}L(\\varphi )(x,r)&=&L\\big (\\frac{\\partial \\varphi }{\\partial r}\\big )(x,r)=d_0[q^\\prime (r)+(\\log x)q(r)]x^r, \\end{array}$ and we see that if $r=r_1=r_2=r_3$ , $d_0=1$ we have $\\varphi _2(x)=\\frac{\\partial \\varphi }{\\partial r}(x,r_1)$ which will give us a solution of the equation, provided that the series converges.", "A straightforward computation from (REF ) gives: $\\begin{array}{lcl} \\varphi _2(x)&=& x^{r_1}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_1)x^n+(\\log x)x^{r_1}\\sum ^\\infty _{n=0}D_n(r_1)x^n\\\\&&\\\\&=&x^{r_1}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_1)x^n+(\\log x)\\varphi _1(x) \\end{array}$ where $\\varphi _1$ is the already obtained solution: $\\varphi _1(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n(r_1)x^n,\\;\\;\\;D_0(r_1)=1.$ Note that $D_n^\\prime (r_1)$ exists for every $n=0,1,2,\\ldots $ , since the $D_n$ are rational functions of $r$ whose denominator does not vanish at $r=r_1$ .", "Also $D_0(r)=1$ implies that $D_0^\\prime (r_1)=0$ , and therefore the series multiplying $x^{r_1}$ in $\\varphi _2$ starts with the first power of $x$ .", "In order to find another solution we take the derivative of (REF ) with respect to $r$ .", "We obtain 
$\\begin{array}{lcl} \\frac{\\partial ^2}{\\partial r^2}L(\\varphi )(x,r)&=&L\\big (\\frac{\\partial ^2\\varphi }{\\partial r^2}\\big )(x,r)\\\\&&\\\\&=&d_0[q^{\\prime \\prime }(r)+2(\\log x)q^\\prime (r)+(\\log x)^2q(r)]x^r, \\end{array}$ and we see that if $r=r_1=r_2=r_3$ , $d_0=1$ we have $\\varphi _3(x)=\\frac{\\partial ^2 \\varphi }{\\partial r^2}(x,r_1)$ which will give us a solution of the equation, provided that the series converges.", "A straightforward computation from (REF ) gives: $\\varphi _3(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^{\\prime \\prime }(r_1)x^n+(\\log x)x^{r_1}\\sum ^\\infty _{n=0}D_n^\\prime (r_1)x^n+(\\log x)[x^{r_1}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_1)x^n+(\\log x)x^{r_1}\\sum ^\\infty _{n=0}D_n(r_1)x^n]$ and therefore $\\varphi _3(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^{\\prime \\prime }(r_1)x^n+(\\log x)x^{r_1}\\sum ^\\infty _{n=0}D_n^\\prime (r_1)x^n+(\\log x)[x^{r_1}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_1)x^n+(\\log x)\\varphi _1(x)]$ $=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^{\\prime \\prime }(r_1)x^n+2(\\log x)x^{r_1}\\sum ^\\infty _{n=0}D_n^\\prime (r_1)x^n+(\\log x)^2\\varphi _1(x)$ $=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^{\\prime \\prime }(r_1)x^n+2(\\log x)\\big [x^{r_1}\\sum ^\\infty _{n=0}D_n^\\prime (r_1)x^n+(\\log x)\\varphi _1(x)\\big ]-(\\log x)^2\\varphi _1(x)$ $=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^{\\prime \\prime }(r_1)x^n+2(\\log x)\\varphi _2(x)-(\\log x)^2\\varphi _1(x).$ Note that $D_n^{\\prime \\prime }(r_1)$ also exists for every $n=0,1,2,\\ldots $ , since the $D_n^\\prime $ are rational functions of $r$ whose denominator does not vanish at $r=r_1$ .", "Also $D_0(r)=1$ implies that $D_0^{\\prime \\prime }(r_1)=D_0^{\\prime }(r_1)=0$ , and therefore the series multiplying $x^{r_1}$ in $\\varphi _3$ starts with the first power of $x$ .", "Since $r_1=r_2$ , we proceed as in item (i) and obtain: $\\varphi _2(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_1)x^n+(\\log x)\\varphi _1(x)$ where $\\varphi _1$ is the already 
obtained solution: $\\varphi _1(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n(r_1)x^n,\\;\\;\\;D_0(r_1)=1.$ Suppose that $r_1=r_3+m$ , where $m\\in \\mathbb {Z}^+$ .", "If $d_0$ is given, $D_1(r_3),\\cdots ,D_{m-1}(r_3)$ all exist and have finite values, but since $q(r+m)D_m(r)=-E_m(r),$ we encounter some difficulties in the computation of $D_m(r_3)$ .", "Now $q(r)=(r-r_1)^2(r-r_3),$ and therefore: $q(r+m)=(r-r_3)^2(r+m-r_3).$ If $E_m(r)$ has $(r-r_3)^2$ as a factor (i.e., $E_m(r_3)=E^\\prime _m(r_3)=0$ ) this implies that we can cancel the same factor in $q(r+m)$ , and then (REF ) gives $D_m(r_3)$ as a finite value.", "Then $D_{m+1}(r_3),\\;D_{m+2}(r_3),\\ldots $ all exist.", "In this special situation, we obtain a solution $\\varphi _3$ of the form $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}D_n(r_3)x^n,\\;\\;\\;D_0(r_3)=1.$ It is always possible to arrange that $\\tilde{E}_m(r_3)=0$ by choosing $\\tilde{D}_0(r)=(r-r_3)^2.$ Observing (REF ) we see that $\\tilde{E}_n(r)$ is linear and homogeneous in $\\tilde{D}_0(r),\\ldots ,\\tilde{D}_{n-1}(r)$ , and therefore $\\tilde{E}_n(r)$ has $\\tilde{D}_0(r)=(r-r_3)^2$ as a factor.", "Hence, $\\tilde{D}_m(r_3)$ will exist as a finite value.", "Putting $\\psi (x,r)=x^r\\sum ^{\\infty }_{n=0}\\tilde{D}_n(r)x^n,\\;\\;\\;\\;\\tilde{D}_0(r)=(r-r_3)^2,$ we find formally that $L(\\psi )(x,r)=(r-r_3)^2q(r)x^r.$ Putting $r=r_3$ we obtain formally a solution $\\psi $ given by $\\psi (x)=\\psi (x,r_3).$ However, $\\tilde{D}_0(r_3)=\\tilde{D}_1(r_3)=\\cdots =\\tilde{D}_{m-1}(r_3)=0$ .", "Hence, the series defining $\\psi $ actually starts with the $m$ -th power of $x$ , and so $\\psi $ has the form: $\\psi (x)=x^{r_3+m}\\sigma (x)=x^{r_1}\\sigma (x),$ where $\\sigma $ is a power series.", "It is not difficult to see that $\\psi $ is precisely a constant multiple of the solution $\\varphi _1$ that is already known.", "In order to find a solution genuinely associated with $r_3$ , we take the 
derivative of (REF ) with respect to $r$ , and we obtain: $\\begin{array}{lcl} \\frac{\\partial }{\\partial r}L(\\psi )(x,r)&=&L\\big (\\frac{\\partial \\psi }{\\partial r}\\big )(x,r)=[2(r-r_3)q(r)+(r-r_3)^2(q^\\prime (r)+(\\log x)q(r))]x^r, \\end{array}$ Now, putting $r=r_3$ , we find that the function $\\varphi _3$ given by $\\varphi _3(x)=\\frac{\\partial \\psi }{\\partial r}(x,r_3)$ is a solution, provided that the series involved are convergent.", "It has the form $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_3)x^n+(\\log x)x^{r_3}\\sum ^\\infty _{n=0}\\tilde{D}_n(r_3)x^n, $ where $\\tilde{D}_0(r)=(r-r_3)^2$ .", "Since $\\tilde{D}_0(r_3)=\\cdots =\\tilde{D}_{m-1}(r_3)=0,$ we can rewrite this in the following form: $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_3)x^n+c(\\log x)\\varphi _1(x), $ where $c$ is some constant.", "As in item (ii), we obtain: $\\varphi _3(x)=x^{r_2}\\sum ^{\\infty }_{n=0}D_n^\\prime (r_2)x^n+(\\log x)\\varphi _2(x)$ where $\\varphi _2$ is the already obtained solution: $\\varphi _2(x)=x^{r_2}\\sum ^{\\infty }_{n=0}D_n(r_2)x^n,\\;\\;\\;D_0(r_2)=1.$ Also, as in item (ii), we have that $r_1=r_2+m$ , where $m\\in \\mathbb {Z}^+$ .", "The other desired solution has the form $\\varphi _1(x)=x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_2)x^n+(\\log x)x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}_n(r_2)x^n, $ where $\\tilde{D}_0(r)=(r-r_2)^2$ .", "Since $\\tilde{D}_0(r_2)=\\cdots =\\tilde{D}_{m-1}(r_2)=0,$ we can write this in the following form: $\\varphi _1(x)=x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_2)x^n+c(\\log x)\\varphi _2(x), $ where $c$ is some constant.", "Suppose now that $r_1=r_2+m$ , where $m\\in \\mathbb {Z}^+$ , and $r_2=r_3+p$ , where $p\\in \\mathbb {Z}^+$ .", "The desired solution has the form $\\varphi _2(x)=x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_2)x^n+(\\log x)x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}_n(r_2)x^n, $ where $\\tilde{D}_0(r)=r-r_2$ .", 
"Since $\\tilde{D}_0(r_2)=\\cdots =\\tilde{D}_{m-1}(r_2)=0,$ we can write this in the following form: $\\varphi _2(x)=x^{r_2}\\sum ^\\infty _{n=0}\\tilde{D}^\\prime _n(r_2)x^n+c(\\log x)\\varphi _1(x),$ where $c$ is some constant and $\\varphi _1(x)=x^{r_1}\\sum ^{\\infty }_{n=0}D_n(r_1)x^n,\\;\\;\\;D_0(r_1)=1.$ Analogously another solution has the form $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}\\hat{D}^\\prime _n(r_3)x^n+(\\log x)x^{r_3}\\sum ^\\infty _{n=0}\\hat{D}_n(r_3)x^n, $ where $\\hat{D}_0(r)=r-r_3$ .", "Since $\\hat{D}_0(r_3)=\\cdots =\\hat{D}_{p-1}(r_3)=0,$ we can write this in the following form: $\\varphi _3(x)=x^{r_3}\\sum ^\\infty _{n=0}\\hat{D}^\\prime _n(r_3)x^n+\\tilde{c}(\\log x)\\varphi _2(x),$ where $\\tilde{c}$ is some constant.", "Remark 3.2 $\\;$ The method used above is based on the classical ideas of Frobenius and we shall refer to it as the method of Frobenius for order three.", "All the series obtained above converge for $|x|<R$ .", "The solutions for $x<0$ can be obtained by replacing $x^{r_1},x^{r_2},x^{r_3},\\log x$ in the expansions by $|x|^{r_1},|x|^{r_2},|x|^{r_3},\\log |x|$ , respectively."
], [ "Examples", "Example 3.1 Consider the equation $x^3y^{\\prime \\prime \\prime }+x^2y^{\\prime \\prime }+xy^\\prime +x^3y=0,\\;x>0.$ Note that in this case, $a(x)=1$ , $b(x)=1$ and $c(x)=x^3$ , which are analytic at 0.", "The indicial polynomial is given by $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0)=r(r-1)(r-2)+r(r-1)+r=r(r^2-2r+2).$ The roots are $r_1=1+i$ , $r_2=1-i$ and $r_3=0$ .", "Since $r_1-r_2=2i\\notin \\mathbb {Z}^+_0$ , $r_1-r_3=1+i\\notin \\mathbb {Z}^+_0$ and $r_2-r_3=1-i\\notin \\mathbb {Z}^+_0$ , according to Theorem REF we have three solutions of the form $\\varphi _1(x)=x^{1+i}\\sum ^{\\infty }_{n=0}a_nx^n\\;\\;\\;(a_0=1),\\;\\;\\;\\varphi _2(x)=x^{1-i}\\sum ^{\\infty }_{n=0}b_nx^n\\;\\;\\;(b_0=1)$ and $\\varphi _3(x)=x^{0}\\sum ^{\\infty }_{n=0}c_nx^n\\;\\;\\;(c_0=1).$ Substituting the solutions into equation (REF ) we obtain the following: For $\\varphi _3$ we have the following recurrence relation for the coefficients $c_1=0,c_2=0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;n((n-1)^2+1)c_{n}+c_{n-3}=0\\;\\mbox{ for every }n\\ge 3.$ Therefore, since $c_0=1$ we have that $\\varphi _3(x)=1+\\sum ^{\\infty }_{k=1}\\frac{(-1)^k}{3^{k}k!(2^2+1)(5^2+1)\\cdots ((3k-1)^2+1)}x^{3k}.$ For $\\varphi _1$ we have the following recurrence relation for the coefficients $a_1=0,a_2=0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;n\\big [\\big (n+\\frac{1+3i}{2}\\big )^2+\\frac{i}{2}\\big ]a_n+a_{n-3}=0\\;\\mbox{ for every }n\\ge 3.$ Denote $\\alpha _0=\\frac{1+3i}{2}$ , $\\beta _0=\\frac{i}{2}$ ; since $a_0=1$ we have that $\\varphi _1(x)=x^{1+i}+\\sum ^{\\infty }_{k=1}\\frac{(-1)^k}{3^k\\cdot k![(3+\\alpha _0)^2+\\beta _0][(6+\\alpha _0)^2+\\beta _0]\\cdots [(3k+\\alpha _0)^2+\\beta _0]}x^{3k+1+i}.$ For $\\varphi _2$ we have the following recurrence relation for the coefficients $b_1=0,\\;b_2=0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;n\\big [\\big (n+\\frac{1-3i}{2}\\big )^2-\\frac{i}{2}\\big ]b_n+b_{n-3}=0\\;\\mbox{ for every }n\\ge 3.$ Therefore, since $b_0=1$ we have that $\\varphi _2(x)=x^{1-i}+\\sum ^{\\infty }_{k=1}\\frac{(-1)^k}{3^k\\cdot k![(3+\\overline{\\alpha _0})^2+\\overline{\\beta _0}][(6+\\overline{\\alpha _0})^2+\\overline{\\beta _0}]\\cdots [(3k+\\overline{\\alpha _0})^2+\\overline{\\beta _0}]}x^{3k+1-i}.$ Remark 3.3 Recall from the theory of complex numbers that $x^{1\\pm i}=x\\cdot x^{\\pm i}=xe^{\\pm i\\log x}=x\\cos (\\log x)\\pm ix\\sin (\\log x).$ Example 3.2 (Bessel equation of order zero for third order ODEs) Consider the equation $x^3y^{\\prime \\prime \\prime }+3x^2y^{\\prime \\prime }+xy^\\prime +(x^3-\\alpha ^3)y=0,\\;x>0$ where $\\mbox{Re}(\\alpha )\\ge 0$ .", "Note that in this case, $a(x)=3$ , $b(x)=1$ and $c(x)=x^3-\\alpha ^3$ , which are analytic at 0.", "Also $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0)=r(r-1)(r-2)+3r(r-1)+r-\\alpha ^3=r^3-\\alpha ^3.$ Let us study the case $\\alpha =0$ ; in this case the equation is $x^3y^{\\prime \\prime \\prime }+3x^2y^{\\prime \\prime }+xy^\\prime +x^3y=0.$ The indicial polynomial in this case is $r^3=0$ .", "Hence the roots are $r_1=r_2=r_3=0$ and, according to Theorem REF , we have three solutions of the form $\\varphi _1(x)=x^{0}\\big (\\sum ^\\infty _{n=0}a_nx^n\\big ),\\;\\;\\varphi _2(x)=x^{1}\\big (\\sum ^\\infty _{n=0}b_nx^n\\big )+(\\log x)\\varphi _1(x)$ and $\\varphi _3(x)=x^{1}\\big (\\sum ^\\infty _{n=0}c_nx^n\\big )+2(\\log x)\\varphi _2(x)-(\\log x)^2\\varphi _1(x), $ where $a_0\\ne 0$ .", "Substituting the solutions into equation (REF ) we obtain the following: For $\\varphi _1$ we have the following recurrence relation for the coefficients $a_1=0,a_2=0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;n^3a_{n}+a_{n-3}=0\\;\\mbox{ for every }n\\ge 3.$ Therefore, if we choose $a_0=1$ we have that $\\varphi _1(x)=\\sum ^{\\infty }_{k=0}\\frac{(-1)^k}{3^{3k}(k!)^3}x^{3k}=\\sum ^{\\infty }_{k=0}\\frac{(-1)^k}{(k!)^3}\\big (\\frac{x}{3}\\big )^{3k}.$ For $\\varphi _2$ we have the following recurrence relation for the coefficients: 
$b_0=b_1=0,\\;b_2=\\frac{1}{3^3},$ $(3n-2)^3b_{3n-3}+b_{3n-6}=0\\;\\;\\mbox{ for every }n\\ge 2,\\;\\;\\;(3n-1)^3b_{3n-2}+b_{3n-5}=0\\;\\;\\mbox{ for every }n\\ge 2,$ and $(3n)^3b_{3n-1}+b_{3n-4}=\\frac{(-1)^{n+1}n^2}{(n!)^33^{3n-3}} \\;\\;\\mbox{ for every }n\\ge 2.$ Therefore, we have that $\\varphi _2(x)=\\sum ^{\\infty }_{n=1}\\frac{(-1)^{n+1}}{3^{3n}(n!)^3}\\big [1+\\frac{1}{2}+\\frac{1}{3}+\\cdots +\\frac{1}{n}\\big ]x^{3n}+(\\log x)\\varphi _1(x).$ For $\\varphi _3$ we have the following recurrence relation for the coefficients: $c_0=c_1=0,\\;\\;c_2=\\frac{2^3}{3^4},$ $(3n-2)^3c_{3n-3}+c_{3n-6}=0\\;\\;\\mbox{ for every }n\\ge 2,\\;\\;\\;(3n-1)^3c_{3n-2}+c_{3n-5}=0\\;\\;\\mbox{ for every }n\\ge 2,$ and $(3n)^3c_{3n-1}+c_{3n-4}=\\frac{(-1)^{n+1} (18n)}{3^{3n}(n!)^3}\\big [(3n)\\big (1+\\frac{1}{2}+\\frac{1}{3}+\\cdots +\\frac{1}{n}\\big )+1\\big ] \\;\\;\\mbox{ for every }n\\ge 2.$ Therefore, we have that $\\varphi _3(x)=x\\sum ^{\\infty }_{n=0}c_nx^n+2(\\log x)\\varphi _2(x)-(\\log x)^2\\varphi _1(x), $ where the $c_n$ are given by the recurrence above.", "Definition 3.1 Equation (REF ) will be called the Bessel equation of order zero for third order ODEs, due to the fact that the solutions (REF ) and (REF ) are similar to the Bessel functions of order zero for second order differential equations.", "Example 3.3 (Laguerre differential equation of third order) Consider the equation $x^2y^{\\prime \\prime \\prime }+3xy^{\\prime \\prime }+(1-x)y^\\prime +\\alpha y=0,\\;x>0$ where $\\alpha \\in \\mathbb {R}$ .", "Observe that equation (REF ) has 0 as a regular singular point, since multiplying both sides of equation (REF ) by $x$ we have $x^3y^{\\prime \\prime \\prime }+3x^2y^{\\prime \\prime }+(1-x)xy^\\prime +\\alpha x y=0.$ Note that in this case, $a(x)=3$ , $b(x)=1-x$ and $c(x)=\\alpha x$ , which are analytic at 0.", "Also the indicial polynomial has the form $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0)=r(r-1)(r-2)+3r(r-1)+r=r^3.$ Hence the roots are 
$r_1=r_2=r_3=0$ and, according to Theorem REF , a solution of (REF ) is of the form $\\varphi (x)=x^{0}\\big (\\sum ^\\infty _{n=0}a_nx^n\\big )$ where $a_0\\ne 0$ .", "Calculating the coefficients we obtain $a_1=-\\alpha a_0,a_2=\\frac{(1-\\alpha )(-\\alpha )}{2^3}\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;k^3a_k-((k-1)-\\alpha )a_{k-1}=0\\;\\mbox{ for every }k\\ge 3.$ Therefore, if we choose $a_0=1$ we have that $\\varphi (x)=1+\\sum ^{\\infty }_{k=1}\\frac{((k-1)-\\alpha )\\cdots (2-\\alpha )(1-\\alpha )(-\\alpha )}{(k!)^3}x^{k}.$ Definition 3.2 Equation (REF ) will be called the Laguerre differential equation of third order since, if $\\alpha =n-1$ where $n\\in \\mathbb {Z}^+$ , the solution (REF ) is a polynomial.", "Example 3.4 Consider the equation $x^3y^{\\prime \\prime \\prime }+x^2y^{\\prime \\prime }+x^2y^\\prime +xy=0,\\;x>0.$ Note that in this case, $a(x)=1$ , $b(x)=x$ and $c(x)=x$ , which are analytic at 0.", "The indicial polynomial is given by $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0)=r(r-1)(r-2)+r(r-1)=r^3-2r^2+r.$ The roots are $r_1=r_2=1$ and $r_3=0$ .", "Since $r_1-r_3=1\\in \\mathbb {Z}^+$ , according to Theorem REF there exist three solutions $\\varphi _1,\\varphi _2,\\varphi _3$ , which have the form: $\\varphi _1(x)=x\\big (\\sum ^\\infty _{n=0}a_nx^n\\big ),\\;\\;\\varphi _2(x)=x^{2}\\big (\\sum ^\\infty _{n=0}b_nx^n\\big )+(\\log x)\\varphi _1(x)$ and $\\varphi _3(x)=x^{2}\\big (\\sum ^\\infty _{n=0}c_nx^n\\big )+c\\;(\\log x)\\varphi _1(x), $ where $c$ is a constant, $a_0\\ne 0$ and $c_0\\ne 0$ .", "Substituting the solutions into equation (REF ) we obtain the following: For $\\varphi _1$ we have the following recurrence relation for the coefficients $a_1=-a_0\\;\\;\\;\\;\\mbox{ and }\\;\\;\\; n^2(n+1)a_n+(n+1)a_{n-1}=0\\;\\mbox{ for every }n\\ge 2.$ Therefore, if we choose $a_0=1$ we have that $\\varphi _1(x)=\\sum ^{\\infty }_{n=0}\\frac{(-1)^n}{(n!)^2}x^{n+1}.$ For $\\varphi _2$ we have the following recurrence relation for the coefficients: 
$b_0=2,\\;\\;\\;(n+1)^2b_n+b_{n-1}=\\frac{(-1)^{n}2(n+1)}{((n+1)!)^2}\\;\\;\\mbox{ for every }n\\ge 1.$ Therefore, we have that $\\varphi _2(x)=\\sum ^{\\infty }_{n=0}b_nx^{n+2}+(\\log x)\\varphi _1(x),$ where $b_n$ is given by the recurrence above.", "For $\\varphi _3$ we have the following recurrence relation for the coefficients: $c_0=2c,\\;\\;\\;(n+1)^2c_n+c_{n-1}=\\frac{(-1)^{n}2c(n+1)}{((n+1)!)^2}\\;\\;\\mbox{ for every }n\\ge 1.$ Therefore, we have that $\\varphi _3(x)=\\sum ^{\\infty }_{n=0}c_nx^{n+2}+(\\log x)\\varphi _1(x),$ where $c_n$ is given by the recurrence above.", "Example 3.5 Consider the equation $x^3y^{\\prime \\prime \\prime }+x^3y^{\\prime \\prime }+x^2y^\\prime -xy=0,\\;x>0.$ Note that in this case, $a(x)=x$ , $b(x)=x$ and $c(x)=-x$ , which are analytic at 0.", "The indicial polynomial is given by $q(r)=r(r-1)(r-2)+r(r-1)a(0)+rb(0)+c(0)=r(r-1)(r-2).$ The roots are $r_1=2,r_2=1$ and $r_3=0$ .", "Since $r_1-r_2=1\\in \\mathbb {Z}^+$ and $r_2-r_3=1\\in \\mathbb {Z}^+$ , according to Theorem REF there exist three solutions $\\varphi _1,\\varphi _2,\\varphi _3$ , which have the form: $\\varphi _1(x)=x^{2}\\big (\\sum ^\\infty _{n=0}a_nx^n\\big ),\\;\\;\\varphi _2(x)=x\\big (\\sum ^\\infty _{n=0}b_nx^n\\big )+c\\;(\\log x)\\varphi _1(x)$ and $\\varphi _3(x)=x^{0}\\big (\\sum ^\\infty _{n=0}c_nx^n\\big )+\\tilde{c}\\;(\\log x)\\varphi _1(x), $ where $c$ , $\\tilde{c}$ are constants, $a_0\\ne 0$ , $b_0\\ne 0$ and $c_0\\ne 0$ .", "Substituting the solutions into equation (REF ) we obtain the following: For $\\varphi _1$ we have the following recurrence relation for the coefficients $(n+2)a_{n+1}+a_{n}=0\\;\\mbox{ for every }n\\ge 0.$ Therefore, if we choose $a_0=1$ we have that $\\varphi _1(x)=\\sum ^{\\infty }_{n=0}\\frac{(-1)^n}{(n+1)!}x^{n+2}.$ For $\\varphi _2$ we have the following recurrence relation for the coefficients: $c=0\\;\\;\\;(n+1)nb_{n+1}+nb_n=0\\;\\;\\mbox{ for every }n\\ge 1;$ hence we have that $\\varphi _2(x)=\\sum ^{\\infty 
}_{n=0}b_nx^{n+1},$ where $b_n$ is given by the recurrence above.", "For $\\varphi _3$ we have the following recurrence relation for the coefficients: $\\tilde{c}=-c_0,\\;\\;\\;nc_{n+1}+c_{n}=\\frac{(-1)^{n}c_0}{n!}\\;\\;\\mbox{ for every }n\\ge 2.$ Therefore, we have that $\\varphi _3(x)=\\sum ^{\\infty }_{n=0}c_nx^{n}-c_0(\\log x)\\varphi _2(x),$ where $c_n$ is given by the recurrence above." ], [ "Regular singular points at infinity", "We now proceed to investigate the solutions of the equation $L(y):=y^{\\prime \\prime \\prime }+a_1(x)y^{\\prime \\prime }+a_2(x)y^\\prime +a_3(x)y=0$ for large values of $|x|$ .", "A simple way of doing this is through the change of variable $x=\\frac{1}{t}$ , and then to analyze, in a neighborhood of $t=0$ , the solutions of the resulting equation.", "Then we may apply the previous results about analytic equations and equations with a regular singular point at $t=0$ .", "If $\\varphi $ is a solution of (REF ) for $|x|>R_0$ , for some $R_0>0$ , we put: $\\tilde{\\varphi }(t)=\\varphi \\big (\\frac{1}{t}\\big ),\\;\\;\\;\\tilde{a}_1(t)=a_1\\big (\\frac{1}{t}\\big ),\\;\\;\\;\\tilde{a}_2(t)=a_2\\big (\\frac{1}{t}\\big ),\\;\\;\\;\\tilde{a}_3(t)=a_3\\big (\\frac{1}{t}\\big ).$ These functions are defined for $|t|<\\frac{1}{R_0}$ , and $\\frac{d\\varphi }{dx}\\big (\\frac{1}{t}\\big )=-t^2\\frac{d\\tilde{\\varphi }}{dt}(t),$ $\\frac{d^2\\varphi }{dx^2}\\big (\\frac{1}{t}\\big )=t^4\\frac{d^2\\tilde{\\varphi }}{dt^2}(t)+2t^3\\frac{d\\tilde{\\varphi }}{dt}(t),$ $\\frac{d^3\\varphi }{dx^3}\\big (\\frac{1}{t}\\big )=-t^6\\frac{d^3\\tilde{\\varphi }}{dt^3}(t)-6t^5\\frac{d^2\\tilde{\\varphi }}{dt^2}(t)-6t^4\\frac{d\\tilde{\\varphi }}{dt}(t).$ Since, according to (REF ) $ \\frac{d^3\\varphi }{dx^3}\\big (\\frac{1}{t}\\big )+a_1\\big (\\frac{1}{t}\\big )\\frac{d^2\\varphi }{dx^2}\\big (\\frac{1}{t}\\big )+a_2\\big (\\frac{1}{t}\\big )\\frac{d\\varphi }{dx}\\big (\\frac{1}{t}\\big )+a_3\\big (\\frac{1}{t}\\big 
)\\varphi \\big (\\frac{1}{t}\\big )=0,$ we have: $t^6\\tilde{\\varphi }^{\\prime \\prime \\prime }(t)+[6t^5-t^4\\tilde{a}_1(t)]\\tilde{\\varphi }^{\\prime \\prime }(t)+[6t^4-2t^3\\tilde{a}_1(t)+t^2\\tilde{a}_2(t)]\\tilde{\\varphi }^{\\prime }(t)-\\tilde{a}_3(t)\\tilde{\\varphi }(t)=0 $ Hence $\\tilde{\\varphi }$ satisfies the equation: $\\tilde{L}(y)=t^6y^{\\prime \\prime \\prime }+[6t^5-t^4\\tilde{a}_1(t)]y^{\\prime \\prime }+[6t^4-2t^3\\tilde{a}_1(t)+t^2\\tilde{a}_2(t)]y^{\\prime }-\\tilde{a}_3(t)y=0.$ Conversely, if $\\tilde{\\varphi }$ satisfies the equation $\\tilde{L}(y)=0$ , then the function $\\varphi (x)=\\tilde{\\varphi }\\big (\\frac{1}{x}\\big )$ satisfies the equation $L(y)=0$ .", "Equation (REF ) is called the induced equation associated with $L(y)=0$ and the substitution $x=1/t$ .", "We shall say that infinity is a regular singular point of (REF ) if the origin $t=0$ is a regular singular point of the induced equation (REF ).", "Writing equation (REF ) as: $t^3y^{\\prime \\prime \\prime }+\\big [6-\\frac{\\tilde{a}_1(t)}{t}\\big ]t^2y^{\\prime \\prime }+\\big [6-2\\frac{\\tilde{a}_1(t)}{t}+\\frac{\\tilde{a}_2(t)}{t^2}\\big ]ty^{\\prime }-\\frac{\\tilde{a}_3(t)}{t^3}y=0$ we see that $\\tilde{L}(y)=0$ has $t=0$ as a regular singular point if and only if $\\frac{\\tilde{a}_1}{t},\\frac{\\tilde{a}_2}{t^2}$ and $\\frac{\\tilde{a}_3}{t^3}$ are analytic at $t=0$ .", "This means that $\\tilde{a}_1(t)=t\\sum ^\\infty _{k=0}\\alpha _kt^k,\\;\\tilde{a}_2(t)=t^2\\sum ^\\infty _{k=0}\\beta _kt^k,\\; \\tilde{a}_3(t)=t^3\\sum ^\\infty _{k=0}\\gamma _kt^k,$ where the series converge for $|t|<\\frac{1}{R_0}$ , $R_0>0$ .", "Translated into a condition that involves $a_1,a_2,a_3$ , this means that $a_1(x)=\\frac{1}{x}\\sum ^\\infty _{k=0}\\frac{\\alpha _k}{x^k},\\;a_2(x)=\\frac{1}{x^2}\\sum ^\\infty _{k=0}\\frac{\\beta _k}{x^k},\\; a_3(x)=\\frac{1}{x^3}\\sum ^\\infty _{k=0}\\frac{\\gamma _k}{x^k},$ where these series converge for $|x|>R_0$ .", "Hence, infinity is a regular singular point for equation (REF 
), if and only if (REF ) can be written as $x^3y^{\\prime \\prime \\prime }+a(x)x^2y^{\\prime \\prime }+b(x)xy^{\\prime }+c(x)y=0,$ where $a,b,c$ have expansions in power series in $1/x$ , convergent for $|x|>R_0$ , for some $R_0>0$ .", "The simplest example of an equation with a regular singular point at infinity is $x^3y^{\\prime \\prime \\prime }+ax^2y^{\\prime \\prime }+bxy^\\prime +cy=0,$ where $a,b,c$ are constants, i.e., an Euler equation.", "Hence, this equation has the origin and infinity as regular singular points, and clearly there are no other singular points.", "Example 3.6 Consider the equation of Example REF ; we saw that there exists no solution in a neighborhood of the origin.", "Nevertheless, observe that equation (REF ) has infinity as a regular singular point, since after the change of variables $x=\\frac{1}{t}$ we obtain the following equation: $ t^3y^{\\prime \\prime \\prime }+7t^2y^{\\prime \\prime }+(8t-t^2)y^\\prime +\\frac{1}{2}y=0.$ An example of an equation having exactly three regular singular points is the hypergeometric equation of third order (see [31], Chapter II Section 2.6): $(x^2-x^3)y^{\\prime \\prime \\prime }+[\\delta +\\eta +1-(\\alpha +\\beta +\\gamma +3)x]xy^{\\prime \\prime }+[\\delta \\eta -((\\beta +\\gamma +1)\\alpha +(\\beta +1)(\\gamma +1))x]y^{\\prime }-\\alpha \\beta \\gamma y=0$ where $\\alpha , \\beta , \\gamma , \\delta ,\\eta $ are constants.", "We can easily verify that 0, 1 and $\\infty $ are regular singular points.", "Remark 3.4 Consider the equation $a(x)y^{\\prime \\prime \\prime }+b(x)y^{\\prime \\prime }+c(x)y^\\prime +d(x)y=0$ having the origin as a non-regular singular point.", "Then (REF ) has infinity as a regular singular point if and only if $ \\frac{b\\big (\\frac{1}{t}\\big )}{ta\\big (\\frac{1}{t}\\big )},\\;\\;\\;\\;\\frac{c\\big (\\frac{1}{t}\\big )}{t^2a\\big (\\frac{1}{t}\\big )}\\;\\;\\;\\mbox{ and }\\;\\;\\;\\frac{d\\big 
(\\frac{1}{t}\\big )}{t^3a\\big (\\frac{1}{t}\\big )}$ are analytic at $t=0$ .", "And in this case, there exist solutions of the Frobenius-Laurent type away from the origin." ], [ "Third order non-homogeneous equation", "A non-homogeneous equation of third order with non-constant coefficients is a differential equation of the form $a_{0}(x)y^{\\prime \\prime \\prime }+a_{1}(x)y^{\\prime \\prime }+a_2(x)y^{\\prime }+a_3(x)y=f(x)$ where $a_0,a_1,a_2,a_3, f$ are functions defined in an interval $I$ .", "Therefore the general solution will be of the form $y(x)=\\varphi _h(x)+\\varphi _p(x)$ where $\\varphi _h$ is the general solution of the homogeneous equation $a_{0}(x)y^{\\prime \\prime \\prime }+a_{1}(x)y^{\\prime \\prime }+a_2(x)y^{\\prime }+a_3(x)y=0$ and $\\varphi _p$ is a particular solution of (REF ).", "The problem is to find a particular solution of (REF ) and a general solution of the homogeneous equation (REF ).", "The technique that we use for finding a particular solution is variation of parameters."
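The superposition structure $y=\varphi _h+\varphi _p$ can be checked symbolically on a toy equation. A minimal sketch (sympy assumed available; the constant-coefficient equation $y^{\prime \prime \prime }+y=x$ is chosen here only for brevity and is not taken from the text), verifying that adding any homogeneous solution to a particular solution again solves the non-homogeneous equation:

```python
import sympy as sp

x, c = sp.symbols('x c')
# toy third order non-homogeneous equation: y''' + y = x
L = lambda y: sp.diff(y, x, 3) + y   # the differential operator
f = x                                # right-hand side
y_p = x                              # a particular solution (y_p''' = 0)
y_h = sp.exp(-x)                     # one solution of the homogeneous equation

assert sp.simplify(L(y_p) - f) == 0          # y_p solves L(y) = f
assert sp.simplify(L(y_h)) == 0              # y_h solves L(y) = 0
assert sp.simplify(L(y_p + c*y_h) - f) == 0  # superposition y = y_h + y_p
print("general solution structure y = varphi_h + varphi_p verified")
```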
], [ "Method of variation of parameters", "Let $\\lbrace \\varphi _1,\\varphi _2,\\varphi _3\\rbrace $ be a basis of the solution space of the homogeneous equation (REF ).", "We shall assume that $\\varphi _p(x)=C_1(x)\\varphi _1(x)+C_2(x)\\varphi _2(x)+C_3(x)\\varphi _3(x)$ where $C_1,\\;C_2,C_3$ are functions that verify $\\begin{array}{c c l}C_1^\\prime (x)\\varphi _1(x)+C_2^\\prime (x)\\varphi _2(x)+C_3^\\prime (x)\\varphi _3(x) &=&0\\\\\\\\C_1^\\prime (x)\\varphi _1^\\prime (x)+C_2^\\prime (x)\\varphi _2^\\prime (x)+C_3^\\prime (x)\\varphi _3^\\prime (x)&=&0\\\\\\\\C_1^\\prime (x)\\varphi _1^{\\prime \\prime }(x)+C_2^\\prime (x)\\varphi _2^{\\prime \\prime }(x)+C_3^\\prime (x)\\varphi _3^{\\prime \\prime }(x)&=&\\frac{f(x)}{a_0(x)}\\end{array}$ Since $\\lbrace \\varphi _1,\\varphi _2,\\varphi _3\\rbrace $ are linearly independent, the third order Wronskian of $\\varphi _1,\\varphi _2,\\varphi _3$ never vanishes, i.e., $W(\\varphi _1,\\varphi _2,\\varphi _3)(x)=\\det \\left(\\begin{array}{c c c}\\varphi _1(x)&\\varphi _2(x)&\\varphi _3(x)\\\\&&\\\\\\varphi _1^\\prime (x)&\\varphi _2^\\prime (x)&\\varphi _3^\\prime (x)\\\\&&\\\\\\varphi _1^{\\prime \\prime }(x)&\\varphi _2^{\\prime \\prime }(x)&\\varphi _3^{\\prime \\prime }(x)\\end{array}\\right)\\ne 0$ for every $x$ .", "Therefore the system (REF ) has a unique solution given by $\\begin{array}{c}C_1^\\prime (x)=\\frac{\\det \\left(\\begin{array}{c c c}0&\\varphi _2(x)&\\varphi _3(x)\\\\&&\\\\0&\\varphi _2^\\prime (x)&\\varphi _3^\\prime (x)\\\\&&\\\\\\frac{f(x)}{a_0(x)}&\\varphi _2^{\\prime \\prime }(x)&\\varphi _3^{\\prime \\prime }(x)\\end{array}\\right)}{W(\\varphi _1,\\varphi _2,\\varphi _3)(x)}\\\\\\\\C_2^\\prime (x)=\\frac{\\det \\left(\\begin{array}{c c c}\\varphi _1(x)&0&\\varphi _3(x)\\\\&&\\\\\\varphi _1^\\prime (x)&0&\\varphi _3^\\prime (x)\\\\&&\\\\\\varphi _1^{\\prime \\prime }(x)&\\frac{f(x)}{a_0(x)}&\\varphi _3^{\\prime \\prime }(x)\\end{array}\\right)}{W(\\varphi _1,\\varphi _2,\\varphi _3)(x)}\\\\ \\\\C_3^\\prime (x)=\\frac{\\det \\left(\\begin{array}{c c c}\\varphi _1(x)&\\varphi _2(x)&0\\\\&&\\\\\\varphi _1^\\prime (x)&\\varphi _2^\\prime (x)&0\\\\&&\\\\\\varphi _1^{\\prime \\prime }(x)&\\varphi _2^{\\prime \\prime }(x)&\\frac{f(x)}{a_0(x)}\\end{array}\\right)}{W(\\varphi _1,\\varphi _2,\\varphi _3)(x)}.\\end{array}$ We shall now show how to construct other linearly independent solutions from a known solution of the homogeneous equation (REF )." ], [ "Reduction of order", "Let us consider a homogeneous third order equation with non-constant coefficients (REF ).", "We shall apply the method of reduction of order, which consists in finding another solution of the differential equation from an already known solution, as we shall see.", "Suppose that we know a solution $\\varphi $ of equation (REF ).", "Let us consider $\\psi =u\\varphi $ where $u$ is a function.", "We shall assume that $\\psi $ is a solution of (REF ).", "The substitution leads to the following second order equation for $u^\\prime $ : $a_0(x)\\varphi (x)u^{\\prime \\prime \\prime }+(3a_0(x)\\varphi ^\\prime (x)+a_1(x)\\varphi (x))u^{\\prime \\prime }+ (3a_0(x)\\varphi ^{\\prime \\prime }(x)+2a_1(x)\\varphi ^\\prime (x)+a_2(x)\\varphi (x))u^\\prime =0.$ From two linearly independent solutions we can construct a third solution.", "The proof of this construction can be found in [24].", "We shall state this technique in what follows:" ], [ "Construction of a third solution from two linearly independent solutions", "Let $y_1,y_2$ be linearly independent solutions of (REF ).", "Denote by $W_{ij}=-W_{ji}=y_iy_j^\\prime -y_jy_i^\\prime ,\\;i\\ne j.$ The idea is to verify that $y_1,y_2$ are solutions of the following equation $ W_{12}(x)z^{\\prime \\prime }-W^\\prime _{12}(x)z^\\prime +(y_1^\\prime (x)y^{\\prime \\prime }_2(x)-y^\\prime _2(x)y^{\\prime 
\\prime }_1(x))z=W(x)$ where $W(s)=\\exp \\big (-\\displaystyle \\int ^s_{x_0}\\frac{a_1(t)}{a_0(t)}dt\\big )$ with $x_0\\in I$ .", "The particular solution of this equation is obtained by the method of variation of parameters (see [6], Chapter II Section 2.6) and is given by $y_3(x)=y_2(x)\\displaystyle \\int ^x_{x_0}\\frac{y_1(s)W(s)}{(W_{12}(s))^2}ds-y_1(x)\\displaystyle \\int ^x_{x_0}\\frac{y_2(s)W(s)}{(W_{12}(s))^2}ds.$ It is not difficult to see that $y_3$ is a solution of the homogeneous equation (REF ) and $W(y_1,y_2,y_3)(x)\\ne 0$ for every $x\\in I$ .", "Also we have: $\\begin{array}{c c l}y_1W_{23}+y_2W_{31}+y_3W_{12} &=&0\\\\\\\\y_1^\\prime W_{23}+y_2^\\prime W_{31}+y_3^\\prime W_{12}&=&0\\\\\\\\y_1^{\\prime \\prime }W_{23}+y_2^{\\prime \\prime }W_{31}+y_3^{\\prime \\prime }W_{12}&=&W.\\end{array}$ Now, for finding a particular solution of (REF ), we use the method of variation of parameters introduced in Section REF .", "Hence we consider $y_p(x)=C_1(x)y_1(x)+C_2(x)y_2(x)+C_3(x)y_3(x)$ where $C_1,\\;C_2,C_3$ are functions that verify $\\begin{array}{c c l}C_1^\\prime (x)y_1(x)+C_2^\\prime (x)y_2(x)+C_3^\\prime (x)y_3(x) &=&0\\\\\\\\C_1^\\prime (x)y_1^\\prime (x)+C_2^\\prime (x)y_2^\\prime (x)+C_3^\\prime (x)y_3^\\prime (x)&=&0\\\\\\\\C_1^\\prime (x)y_1^{\\prime \\prime }(x)+C_2^\\prime (x)y_2^{\\prime \\prime }(x)+C_3^\\prime (x)y_3^{\\prime \\prime }(x)&=&\\frac{f(x)}{a_0(x)}.\\end{array}$ Observe that according to (REF ) we have that $C_1^\\prime (x)=\\frac{f(x)W_{23}(x)}{a_0(x)W(x)},\\;\\;\\;C_2^\\prime (x)=\\frac{f(x)W_{31}(x)}{a_0(x)W(x)}\\;\\;\\;\\;\\mbox{ and }\\;\\;\\;C_3^\\prime (x)=\\frac{f(x)W_{12}(x)}{a_0(x)W(x)} $ are solutions of (REF ).", "Therefore a particular solution of (REF ) is given by $y_p(x)=y_1(x)\\displaystyle \\int ^{x}_{x_0}\\frac{W_{23}(s)f(s)}{W(s)a_0(s)}ds+y_2(x)\\displaystyle \\int ^{x}_{x_0}\\frac{W_{31}(s)f(s)}{W(s)a_0(s)}ds+y_3(x)\\displaystyle \\int ^{x}_{x_0}\\frac{W_{12}(s)f(s)}{W(s)a_0(s)}ds.$ On the other hand, 
according to Theorems REF and REF we have that, in the case of third order differential equations with a regular singular point at the origin, it is always possible to find three linearly independent solutions of the homogeneous equation, and making use of the method introduced in this section we find a general solution of the non-homogeneous equation, as follows: Corollary 3.1 Every solution of $L(y):=x^3y^{\\prime \\prime \\prime }+x^2a(x)y^{\\prime \\prime }+xb(x)y^\\prime +c(x)y=f(x),$ with $a(x),b(x), c(x)$ and $f(x)$ analytic for $|x|<R$ , $R>0$ , is of the form $\\begin{array}{l}y(x)=c_1y_1(x)+c_2y_2(x)+c_3y_3(x)+y_1(x)\\displaystyle \\int ^{x}_{x_0}\\frac{(y_2(s)y_3^\\prime (s)-y_3(s)y_2^\\prime (s))f(s)}{s^3\\exp \\big (-\\displaystyle \\int ^{s}_{x_0}\\frac{a(t)}{t}dt\\big )}ds\\\\\\\\\\;\\;\\;+y_2(x)\\displaystyle \\int ^{x}_{x_0}\\frac{(y_3(s)y_1^\\prime (s)-y_1(s)y_3^\\prime (s))f(s)}{s^3\\exp \\big (-\\displaystyle \\int ^{s}_{x_0}\\frac{a(t)}{t}dt\\big )}ds+y_3(x)\\displaystyle \\int ^{x}_{x_0}\\frac{(y_1(s)y_2^\\prime (s)-y_2(s)y_1^\\prime (s))f(s)}{s^3\\exp \\big (-\\displaystyle \\int ^{s}_{x_0}\\frac{a(t)}{t}dt\\big )}ds\\end{array},$ where $y_1,y_2,y_3$ are linearly independent solutions of $L(y)=0$ and $c_1,c_2,c_3$ are constants."
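The construction of $y_3$ from two known solutions can be exercised on the simplest possible third order equation, $y^{\prime \prime \prime }=0$ , with $y_1=1$ and $y_2=x$ ; here $a_1\equiv 0$ , so $W(s)\equiv 1$ and $W_{12}\equiv 1$ , and the formula should return a third independent solution. A sketch, with sympy assumed available:

```python
import sympy as sp

x, s = sp.symbols('x s')
# equation y''' = 0 (a0 = 1, a1 = 0), with known solutions y1 = 1, y2 = x
y1, y2 = sp.Integer(1), x
W12 = y1*sp.diff(y2, x) - y2*sp.diff(y1, x)   # W_{12} = y1*y2' - y2*y1' = 1
W = sp.Integer(1)                             # W(s) = exp(-int a1/a0 dt) = 1
# y3 = y2 * int(y1 W / W12^2) - y1 * int(y2 W / W12^2), from x0 = 0
y3 = (y2*sp.integrate(y1.subs(x, s)*W/W12.subs(x, s)**2, (s, 0, x))
      - y1*sp.integrate(y2.subs(x, s)*W/W12.subs(x, s)**2, (s, 0, x)))
assert sp.simplify(sp.diff(y3, x, 3)) == 0    # y3 solves y''' = 0
# the three solutions are independent: third order Wronskian never vanishes
Wr = sp.Matrix([[y1, y2, y3],
                [sp.diff(y1, x), sp.diff(y2, x), sp.diff(y3, x)],
                [sp.diff(y1, x, 2), sp.diff(y2, x, 2), sp.diff(y3, x, 2)]]).det()
assert sp.simplify(Wr) != 0
print("y3 =", sp.expand(y3))
```

The computation returns $y_3=x^2/2$ , so $\lbrace 1,x,x^2/2\rbrace $ is indeed a basis of solutions of $y^{\prime \prime \prime }=0$ .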
], [ "Convergence of formal solutions for third order ODEs", "Now we are in a position to prove Theorem REF : this is proved like Theorem REF , once we have the description of the solutions for order three given by Theorems REF and REF .", "Theorem REF above cannot be improved by simply removing the hypothesis of regularity on the singular point, as the following example shows: Example 3.7 Consider the equation $x^3y^{\\prime \\prime \\prime }-x^2y^{\\prime \\prime }-y^\\prime -\\frac{1}{2}y=0.$ The origin $x_0=0$ is a singular point, but it is not a regular singular point, since the coefficient $-1$ of $y^\\prime $ does not have the form $xb(x)$ , where $b$ is analytic at 0.", "Nevertheless, we can formally solve this equation by a power series $\\sum ^{\\infty }_{k=0} a_kx^k,$ where the coefficients $a_k$ satisfy the following recurrence formula $(k+1)a_{k+1}=\\big [k^3-4k^2+3k-\\frac{1}{2}\\big ]a_k,\\;\\;\\;\\mbox{ for every }k=0,1,2,\\ldots .$ If $a_0\\ne 0$ , applying the ratio test to expressions (REF ) and (REF ), we have that $ \\big |\\frac{a_{k+1}x^{k+1}}{a_kx^k}\\big |=\\big |\\frac{k^3-4k^2+3k-\\frac{1}{2}}{k+1}\\big |\\cdot |x|\\rightarrow \\infty ,$ when $k\\rightarrow \\infty $ , provided that $|x|\\ne 0$ .", "Hence, the series converges only for $x=0$ ."
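The divergence in Example 3.7 can also be observed numerically: iterating the recurrence with exact rational arithmetic, the ratios $|a_{k+1}/a_k|$ grow roughly like $k^2$ , so the formal power series has radius of convergence 0. A minimal sketch in Python:

```python
from fractions import Fraction

# recurrence (k+1) a_{k+1} = [k^3 - 4k^2 + 3k - 1/2] a_k, with a_0 = 1
a = Fraction(1)
ratios = []
for k in range(60):
    a_next = Fraction(2*k**3 - 8*k**2 + 6*k - 1, 2*(k + 1)) * a
    ratios.append(abs(a_next / a))   # a_k never vanishes along the way
    a = a_next
# |a_{k+1}/a_k| ~ k^2, which forces radius of convergence 0
print(float(ratios[10]), float(ratios[50]))
assert ratios[50] > ratios[10] > 1
```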
], [ "Further questions", "Other convergence aspects of solutions of third order ODEs as well as the characterization of those admitting regular singular points, in terms of the space of solutions, will be studied in a forthcoming work ([20]).", "We shall also discuss some more general notions of regularity for a singular point, under which we still have the existence of solutions.", "An interesting question is the classification of the third order ODEs which admit a Liouvillian solution or a Liouvillian first integral.", "Another intriguing problem is the search of a first or second order model for a third order ODE.", "This would probably lead to the introduction of a type of holonomy group for such ODEs.", "Finally, it seems reasonable to imagine that more general versions of Frobenius methods are valid for ODEs having coefficients in a function field $\\mathbb {K}(x)$ where $\\mathbb {K}$ is an algebraically closed ordered field, of characteristic $k\\ge 0$ .", "This is also treated in the continuation of our work." ] ]
1906.04277
[ [ "Dynamics of singular complex analytic vector fields with essential\n singularities II" ], [ "Abstract The singular complex analytic vector fields $X$ on the Riemann sphere $\\widehat{\\mathbb C}_z$ belonging to the family ${\\mathscr E}(r,d)=\\left\\{ X(z)=\\frac{1}{P(z)} e^{E(z)}\\frac{\\partial }{\\partial z}\\ \\Big\\vert \\ P, E\\in\\mathbb{C}[z]\\right\\}$, where $P$ is monic, $deg(P)=r$, $deg(E)=d$, $r+d\\geq 1$, have a finite number of poles on the complex plane and an isolated essential singularity at infinity (for $d\\geq 1$).", "Our aim is to describe geometrically $X$, particularly the singularity at infinity.", "We use the natural one to one correspondence between $X$, a global singular analytic distinguished parameter $\\Psi_X(z)=\\int^z P(\\zeta) e^{-E(\\zeta)}d\\zeta$, and the Riemann surface ${\\mathcal R}_X$ of this distinguished parameter.", "We introduce $(r,d)$-configuration trees which are weighted directed rooted trees.", "An $(r,d)$-configuration tree completely encodes the Riemann surface ${\\mathcal R}_X$ and the singular flat metric associated on ${\\mathcal R}_X$.", "The $(r,d)$-configuration trees provide \"parameters\" for the complex manifold ${\\mathscr E}(r,d)$, which give explicit geometrical and dynamical information; a valuable tool for the analytic description of $X\\in{\\mathscr E}(r,d)$.", "Furthermore, given $X$, the phase portrait of the associated real vector field $Re(X)$ on the Riemann sphere is decomposed into $Re(X)$-invariant components: half planes and finite height strips.", "The germ of $X$ at infinity is described as a combinatorial word (consisting of hyperbolic, elliptic, parabolic and entire angular sectors having the point at infinity of $\\widehat{\\mathbb C}_z$ as center).", "The structural stability, under perturbation in ${\\mathscr E}(r,d)$, of the phase portrait of $Re(X)$ is characterized by using the $(r,d)$-configuration trees.", "We provide explicit conditions, in terms of $r$ and $d$, as to when 
the number of topologically equivalent phase portraits of $Re(X)$ is unbounded." ], [ "Introduction", "Motivated by the nature of meromorphic and essential singularities of complex analytic vector fields on Riemann surfaces [17], [1], [2], we study the families ${E}(r,d)=\\Big \\lbrace X(z)=\\frac{1}{P(z)}{\\text{e}}^{E(z)}\\frac{\\partial }{\\partial z}\\ \\Big \\vert \\ P, E\\in {\\mathbb {C}}[z],\\ \\deg {P}=r, \\ \\deg {E}=d \\Big \\rbrace ,$ of 1–order $d$ vector fields on the Riemann sphere ${\\widehat{\\mathbb {C}}}$ , generically having an essential singularity at $\\infty $ and $r$ poles on ${\\mathbb {C}}$ .", "Each $X\\in {E}(r,d)$ is provided with a global singular analytic distinguished parameter $\\Psi _{X} (z)= \\int ^z P(\\zeta ) {\\text{e}}^{-E (\\zeta )} d\\zeta : {\\widehat{\\mathbb {C}}}_{z}\\longrightarrow {\\widehat{\\mathbb {C}}}_{t},$ which in turn has an associated Riemann surface ${\\mathcal {R}}_{X}=\\lbrace (z,\\Psi _{X}(z))\\rbrace .$ Thus there is a correspondence, for $r + d\\ge 1$ , between $X \\in {E}(r,d)\\ \\ \\longleftrightarrow \\ \\ \\left\\lbrace \\begin{array}{l}\\text{branched coverings }\\pi _{X,2}:{\\mathcal {R}}_X\\longrightarrow \\big ({\\widehat{\\mathbb {C}}}_{t},\\frac{\\partial }{\\partial t}\\big ) \\text{ having } \\\\d \\text{ logarithmic branch points over } \\infty , \\\\d \\text{ logarithmic branch points over } \\lbrace a_{\\sigma }\\rbrace \\subset {\\mathbb {C}}_{t}, \\\\r \\text{ ramified branch points over } \\lbrace \\widetilde{p}_{\\iota }\\rbrace \\subset {\\mathbb {C}}_{t}\\end{array} \\right\\rbrace ,$ where $\\pi _{X,2}$ is as in Diagram (REF ).", "See also Lemma REF , and [1], [2], [3].", "The existence of the biholomorphism $\\big ({\\widehat{\\mathbb {C}}}_{z},X\\big )\\cong \\big ({\\mathcal {R}}_{X},\\pi _{X,2}^{*}(\\frac{\\partial }{\\partial t})\\big )$ essentially provides a global flow box for $X$ , according to Lemma REF .", "The Riemann surface ${\\mathcal {R}}_{X}$ associated to $\\Psi 
_{X}$ can be naturally described by gluing half planes ${\\mathbb {H}}^2$ and finite height strips $\\lbrace 0 < {\\mathfrak {Im}\\left(z\\right)} < h\\rbrace $ , and it has its origin on the works of [19], [20], [22], [17], [18], [14], [23].", "Three natural cases arise for $X\\in {E}(r,d)$ : Case $X\\in {E}(r,0)$.", "$X$ has $r\\ge 1$ poles (counted with multiplicity) on ${\\mathbb {C}}_{z}$ and a zero of order $r+2$ at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ .", "$\\Psi _X$ is a polynomial map.", "See W. M. Boothby [8], [9] for pioneering work and S. K. Lando et al.", "[15] chapters 1 and 5 for advances in the combinatorial direction.", "Case $ X\\in {E}(0,d)$ .", "$X$ has an isolated essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ , no zeros or poles.", "$\\Psi _{X}$ is an infinitely ramified covering map and $\\lbrace \\widetilde{p}_{\\iota }\\rbrace =\\emptyset $ in (REF ).", "See the seminal works of R. Nevanlinna [19] chapter XI, M. Taniguchi [23]; and [1].", "Case $X\\in {E}(r,d)$.", "$X$ has $r\\ge 1$ poles (counted with multiplicity) on ${\\mathbb {C}}_{z}$ and an isolated essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ .", "$\\Psi _X$ is an infinitely ramified covering map as in (REF ).", "This is the main/generic case explored in this work.", "Obviously, ${E}(r,d)$ is an open complex submanifold of ${\\mathbb {C}}^{r+d+1}$ , see (REF ) and [2].", "However for the study of analytical, geometrical and topological aspects of $\\Psi _X$ and $X$ , suitable coordinates that shed light on these kind of problems are desirable (recall for instance the role of the critical value $\\lbrace c \\rbrace $ , as useful “coordinates”, in the dynamical study of the quadratic family $\\lbrace z \\mapsto z^2+c \\rbrace $ ).", "In particular, even though the map $\\big \\lbrace \\text{coefficients of }P(z), E(z)\\big \\rbrace \\rightarrow \\big \\lbrace \\text{critical and asymptotic values }\\lbrace \\widetilde{p}_\\iota 
\\rbrace \\cup \\lbrace a_\\sigma \\rbrace \\text{ of }\\Psi _{X}\\big \\rbrace $ is holomorphic, it is insufficient to completely describe the family ${E}(r,d)$ ; see Corollary REF and example 8.12.3, figures 11 (c), (d) in [1] for an instance in ${E}(0,3)$ .", "With this in mind, in §, we introduce $(r,d)$ –configuration trees $\\Lambda _X $ which are combinatorial objects that completely encode the branched Riemann surface ${\\mathcal {R}}_X$ , for $X\\in {E}(r,d)$ .", "Thus providing explicit “dynamical coordinates” for ${\\mathcal {R}}_{X}$ , which allows us to obtain a complete global analytical and geometrical classification for the family ${E}(r,d)$ .", "The vertices of $\\Lambda _{X}$ are the branch points in ${\\mathcal {R}}_{X}$ , as in (REF ), including their ramification index.", "The weighted edges of $\\Lambda _{X}$ provide us with two pieces of information: 1) each edge specifies which pair of branch points share the same sheet of ${\\mathcal {R}}_{X}$ , 2) the weight of the edge tells us the relative number of sheets of ${\\mathcal {R}}_{X}$ , we must go “up or down” on the surface in order to find another sheet containing other branch points.", "As a consequence we have: Main Theorem ($(r,d)$ –configuration trees as parameters for ${E}(r,d)$ ) There is an isomorphism, as complex manifolds of dimension $r+d+1$ , between ${E}(r,d)$ and equivalence classes of $(r,d)$ –configuration trees, i.e.", "${E}(r,d) \\cong \\left\\lbrace \\big [ \\Lambda _{X}\\big ] \\ \\big | \\ \\Lambda _{X}\\text{ is a }(r,d)\\text{--configuration tree}\\right\\rbrace .$ In § explicit examples of $\\Lambda _{X}$ as well as a digression on some of the difficulties encountered in the proof of the Main Theorem, are presented.", "The proof is presented in §, with the description of the equivalence relation and their classes $[\\, \\cdot \\, ]$ in §REF .", "The Main Theorem provides another characterization of the family ${E}(r,d)$ (see [1], [2] and [3]) and enhances the work of 
A. Speiser [21], R. Nevanlinna [19], [20] p. 291 and G. Elfving [11] on the classification, via line complexes, of (simply connected) Riemann surfaces ${\\mathcal {R}}_{X}$ related to meromorphic functions $\\Psi _{X}$ .", "Provided with the description of ${\\mathcal {R}}_X$ by means of $\\Lambda _X$ , we can now answer the following question: How can we describe the singularity of $X$ at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ , for $X\\in {E}(r,d)$ ?", "We ask for a topological/analytical classification of the germs $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ for $X \\in {E}(r,d)$ .", "A natural idea is to look at the germ and try to split into a finite union of angular sectors hyperbolic $H$ , elliptic $E$ , parabolic $P$ and entire sectors ${}_{}{E}_{}$ , this last based upon ${\\text{e}}^z \\frac{\\partial }{\\partial z}$ at infinity; see Figure REF .", "Thus obtaining a cyclic word $\\mathcal {W}_X$ .", "Of course this classical idea has its roots in the work of I. Bendixon, A.", "A. Andronov and F. Dumortier et al.", "; see [4] p. 304, [5] p. 
84 and theorem 5.1 in [1].", "The following theorem answers the question posed above, and provides the dynamical description of the phase portraits of ${\\mathfrak {Re}\\left(X\\right)}$ .", "Theorem (Dynamical applications) Let $X\\in {E}(r,d)$ .", "The cyclic word $\\mathcal {W}_{X}$ associated to $X$ at $\\infty $ is recognized as $\\big ( ({\\widehat{\\mathbb {C}}}_{z},\\infty ),X\\big ) \\longmapsto \\mathcal {W}_X=W_{1} W_{2} \\cdots W_{k},\\quad W_{\\iota }\\in \\lbrace H,E,P,{}_{}{E}_{} \\rbrace ,$ with exactly $2d$ letters $W_{\\iota }={}_{}{E}_{}$ .", "The word $\\mathcal {W}_X$ is a complete topological invariant of a germ $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ .", "Conversely, a germ of a singular complex analytic vector field $\\big (({\\mathbb {C}},0), Y \\big )$ is the restriction of a vector field $X \\in {E}(r,d)$ at $\\infty $ if and only if $\\bullet $ the point 0 is an isolated essential singularity of $Y$ and $\\bullet $ its admissible word $\\mathcal {W}_{Y}$ satisfies that the residue of the word $Res(\\mathcal {W}_{Y} ) = 0$ , the Poincaré–Hopf index of the word $PH(Y, 0 ) = 2 + r$ , it has exactly $2d$ entire sectors ${}_{}{E}_{}$ .", "The phase portrait of ${\\mathfrak {Re}\\left(X\\right)}$ is structurally stable (under perturbations in ${E}(r,d)$ ) if and only if $\\bullet \\ X$ has only simple poles and $\\bullet $ all edges of $\\Lambda _X$ have weights with a non–zero imaginary component.", "The number of non topologically equivalent phase portraits of ${\\mathfrak {Re}\\left(X\\right)}$ is infinite if and only if $(r,d)\\in \\big \\lbrace (r\\ge 2,1), (r\\ge 1,2),(r\\ge 0,d\\ge 3)\\big \\rbrace $ .", "For the precise statements and proofs, see Theorem REF , Theorem REF and Theorem REF respectively.", "A stronger version of the decomposition of the phase portrait into ${\\mathfrak {Re}\\left(X\\right)}$ –invariant components can be found as Theorem REF .", "In particular, for $X\\in {E}(r,d)$ the Riemann surface 
${\\mathcal {R}}_{X}$ admits an infinite number of half planes ${\\mathbb {H}}^2$ if and only if $d\\ge 1$ .", "However, Example REF provides a Riemann surface admitting a decomposition in an infinite number of half planes, where the corresponding vector field does not belong to any ${E}(r,d)$ .", "Moreover the topological classification of functions $\\Psi _X$ is coarser than the classification of phase portraits of vector fields ${\\mathfrak {Re}\\left(X\\right)}$ , for ${E}(r,d)$ , see Remark REF .", "Diagramatically, we have $X\\in {E}(r,d) \\longleftrightarrow [\\Lambda _{X}]\\longrightarrow \\underbrace{\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ),X(z) \\big ) }_{loc.\\ analytic\\ inv.", "}\\longrightarrow \\underbrace{\\mathcal {W}_{X}=W_{1} W_{2} \\cdots W_{k}}_{loc.\\ topological\\ inv.", "}.$ The Main Theorem provides the global, on ${\\widehat{\\mathbb {C}}}$ , analytic bijection.", "Moreover, the notion of local invariance makes sense, see §.", "For the essential singularity of $X$ at $\\infty $ the analytic/topological nature of the invariant is certainly a novel aspect.", "Some of the proofs presented are based upon technical results of [1], however the evidence and examples provided in this work allow for a self contained reading and understanding." 
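To make the correspondence $X\longleftrightarrow \Psi _X$ concrete, consider the simplest member of ${E}(0,1)$ , namely $X={\text{e}}^{z}\frac{\partial }{\partial z}$ (so $P\equiv 1$ , $E(z)=z$ ), whose distinguished parameter with base point $z_0=0$ is $\Psi _X(z)=\int _0^z {\text{e}}^{-\zeta }d\zeta =1-{\text{e}}^{-z}$ . A sketch in plain Python (the closed-form flow $z(t)=-\log ({\text{e}}^{-z_0}-t)$ , obtained by solving $\dot{z}={\text{e}}^{z}$ , is derived here and is not taken from the text) checking numerically that $\Psi _X$ conjugates the flow of $X$ to horizontal translation, i.e., that $\Psi _X$ is a global flow box:

```python
import cmath

# X = e^z d/dz in E(0,1): P(z) = 1, E(z) = z, hence
# Psi_X(z) = int_0^z e^{-w} dw = 1 - e^{-z}
def Psi(z):
    return 1 - cmath.exp(-z)

# solving z' = e^z gives d/dt e^{-z} = -1, so z(t) = -log(e^{-z0} - t)
def flow(z0, t):
    return -cmath.log(cmath.exp(-z0) - t)

z0 = 0.3 + 0.2j
for t in (0.1, 0.2 + 0.1j, -0.05j):
    # flow box property: Psi_X sends trajectories of X to translations by t
    assert abs(Psi(flow(z0, t)) - (Psi(z0) + t)) < 1e-12
print("Psi_X conjugates the flow of X to t -> Psi_X(z0) + t")
```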
], [ "Vector fields, differential forms,\norientable quadratic differentials, flat metrics, distinguished parameters, Riemann surfaces", "We consider the family ${E}(r,d)$ as in (REF ).", "Let $X\\in {E}(r,d)$ be a vector field, we denote by ${\\mathcal {P}}=\\lbrace p_{\\iota }\\rbrace $ the set of poles of $X$ .", "The associated singular analytic differential form $\\omega _{X}=P(z)\\, {\\text{e}}^{-E(z)} dz,$ is such that $\\omega _{X}(X)\\equiv 1$ .", "A singular analytic quadratic differential $\\mathcal {Q}$ on ${\\widehat{\\mathbb {C}}}_{z}$ is orientable if it is globally given as $\\mathcal {Q}=\\omega \\otimes \\omega $ , for some singular analytic differential form $\\omega $ on ${\\widehat{\\mathbb {C}}}_{z}$ .", "In particular, $\\mathcal {Q}_{X}=\\omega _{X}\\otimes \\omega _{X}= P^{2}(z)\\, {\\text{e}}^{-2E(z)} dz^{2} .$ The singular horizontal foliation of $\\mathcal {Q}_X$ on ${\\mathbb {C}}_{z}\\backslash {\\mathcal {P}}$ corresponds to the trajectories of the real vector field ${\\mathfrak {Re}\\left(X\\right)}$ , see for instance (2.2) of [1].", "Since $\\omega _{X}$ is holomorphic on ${\\mathbb {C}}_{z}$ , the local notion of distinguished parameter, see [22] p. 
20, can be extended as follows.", "Definition 2.1 The map $\\Psi _{X}(z)=\\int _{z_{0}}^{z} P(\\zeta )\\, {\\text{e}}^{-E(\\zeta )} d\\zeta :{\\mathbb {C}}_{z} \\longrightarrow {\\widehat{\\mathbb {C}}}_{t} $ is a global distinguished parameter for $X$ (note the dependence on $z_0\\in {\\mathbb {C}}_{z}$ ).", "A singular flat metric $g_{X}$ with singular set ${\\mathcal {P}}\\subset {\\mathbb {C}}_{z}$ is the flat Riemannian metric on ${\\mathbb {C}}_{z}\\backslash {\\mathcal {P}}$ defined as the pullback under $\\Psi _{X}:({\\mathbb {C}}_{z} , g_{X})\\rightarrow ({\\mathbb {C}}_{t},\\delta )$ , where $\\delta $ is the usual flat metric on ${\\mathbb {C}}_{t}$ .", "The singularities of $g_{X}$ at $p_{\\iota }\\in {\\mathcal {P}}$ are cone points with angle $(2\\mu _{\\iota }+2)\\pi $ , where $-\\mu _{\\iota }\\le -1$ is the order of the pole $p_{\\iota }$ of $X$ .", "Then the trajectories of ${\\mathfrak {Re}\\left(X\\right)}$ and ${\\mathfrak {Im}\\left(X\\right)}$ are unitary geodesics in $({\\mathbb {C}}_{z}\\backslash {\\mathcal {P}}, g_{X})$ .", "The graph of $\\Psi _{X}$ ${\\mathcal {R}}_{X}= \\lbrace (z,t) \\ \\vert \\ t=\\Psi _{X}(z) \\rbrace \\subset {\\mathbb {C}}_{z}\\times {\\widehat{\\mathbb {C}}}_{t}$ is a Riemann surface.", "The flat metric on $\\big ({\\mathcal {R}}_{X},\\pi _{X,2}^{*}(\\frac{\\partial }{\\partial t})\\big )$ is induced by the usual metric on $\\big ({\\widehat{\\mathbb {C}}},\\delta \\big )$ , equivalently $\\big ({\\widehat{\\mathbb {C}}}_{t},\\frac{\\partial }{\\partial t}\\big )$ , via the projection $\\pi _{X,2}$ , and coincides with $g_{X}=\\Psi _{X}^{*}(\\delta )$ since $\\pi _{X,1}$ is an isometry.", "Here $({\\widehat{\\mathbb {C}}}_{z}, X)$ denotes a pair, Riemann sphere and singular complex analytic vector field, while $\\big (({\\mathbb {C}}_{z}, z_0), X\\big )$ denotes a germ of a singular complex analytic vector field.", "Lemma 2.2 The following diagram commutes: $\\big ({\\widehat{\\mathbb {C}}}_{z},X\\big )\\overset{\\pi _{X,1}}{\\longleftarrow }\\big ({\\mathcal {R}}_X,\\pi ^*_{X,2}(\\frac{\\partial }{\\partial t})\\big )\\overset{\\pi _{X,2}}{\\longrightarrow }\\big ({\\widehat{\\mathbb {C}}}_t,\\frac{\\partial }{\\partial t}\\big ),\\qquad \\Psi _X=\\pi _{X,2}\\circ \\pi _{X,1}^{-1}.$ Moreover, $\\pi _{X,1}$ is a biholomorphism between $({\\mathcal {R}}_{X},\\pi ^*_{X,2}(\\frac{\\partial }{\\partial t}))$ and $({\\mathbb {C}}_{z},X)$ .", "$\\Box $ In contrast, for a rational vector field with simple zeros, the associated $\\pi _{X,1}$ is not a biholomorphism between $({\\mathcal {R}}_{X},\\pi ^*_{X,2}(\\frac{\\partial }{\\partial t}))$ and $({\\mathbb {C}}_{z},X)$ , since $\\Psi _{X}$ is multivalued.", "In what follows, unless explicitly stated, we shall use the abbreviated form ${\\mathcal {R}}_{X}$ instead of the more cumbersome $\\big ({\\mathcal {R}}_{X},\\pi ^*_{X,2}(\\frac{\\partial }{\\partial t})\\big )$ , see Figures REF and REF .", "In Diagram (REF ) we abuse notation slightly by saying that the domain of $\\Psi _{X}$ is ${\\widehat{\\mathbb {C}}}_{z}$ .", "This is a delicate issue, see Remark REF .1 following Proposition REF .", "Lemma 2.3 1.", "The map $\\Psi _X$ is a global flow box of $X$ , i.e.", "$(\\Psi _{X})_{*} X = \\frac{\\partial }{\\partial t} $ on the whole ${\\mathbb {C}}_z$ .", "2.", "For fixed $z_{0}\\in {\\mathbb {C}}_{z}\\backslash {\\mathcal {P}}$ , the maximal (under analytic continuation) time domain of the local flow of $X$ is $\\pi _{X,1}(\\cdot )=\\varphi ( z_{0} , \\cdot ): \\Omega _{X} = {\\mathcal {R}}_{X}\\backslash \\cup _{p_\\iota \\in {\\mathcal {P}}} \\big \\lbrace (p_{\\iota },\\widetilde{p}_{\\iota })\\big \\rbrace \\longrightarrow {\\mathbb {C}}_{z}\\backslash {\\mathcal {P}}$ .", "$\\Box $" ], [ "The singular complex analytic dictionary", "Proposition 1 (Dictionary between the singular analytic objects originating from $X\\in {E}(r,d)$ , [1]) The following diagram describes a canonical one–to–one correspondence between the following objects: $X(z)=\\frac{1}{P(z)}\\, {\\text{e}}^{E(z)}\\frac{\\partial }{\\partial z} $ , $\\omega _X (z)= P(z)\\, 
{\\text{e}}^{-E(z)}dz$$\\Psi _X (z)= \\int \\limits ^z P(\\zeta )\\, {\\text{e}}^{-E(\\zeta )}d\\zeta _X$$\\big (({\\mathbb {C}},g_X ), {\\mathfrak {Re}\\left(X\\right)}\\big )$ .$\\omega _X \\otimes \\omega _X (z)$$({\\mathcal {R}}_X, \\pi ^{*}_{X,2} (\\frac{\\partial }{\\partial t}))$$$ Remark 1 1.", "The choice of initial and end points $z_{0}, z$ for the integral defining $\\Psi _X$ can be relaxed to include $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ by integrating along asymptotic paths associated to asymptotic values of $\\Psi _{X}$ at the essential singularity $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ , see §.", "2.", "${\\mathcal {R}}_{X}$ are non compact translation surfaces, following [25] §3.3 and [16]." ], [ "Branch points of ${\\mathcal {R}}_{X}$ : local ramification data", "For $X\\in {E}(r,d)$ , the distinguished parameter $\\Psi _{X}$ belongs to the family $SF_{r,d}=\\left\\lbrace \\int _{0}^{z} P(\\zeta )\\, {\\text{e}}^{-E(\\zeta )} d\\zeta \\ + b \\ \\big {\\vert } \\ P, E\\in {\\mathbb {C}}[z], \\ \\deg {P}=r, \\ \\deg {E}=d\\right\\rbrace ,$ of structurally finite entire functions of type $(r,d)$.", "In order to determine the Riemann surface ${\\mathcal {R}}_{X}$ precisely, one needs the knowledge of the the branch points $\\lbrace (z_{{\\mathfrak {a}}},t _{{\\mathfrak {a}}})\\rbrace \\subset {\\mathcal {R}}_{X}$ under $\\pi _{X,2}$ , see [19] chap XI, [23], [24] and [1].", "Lemma 3.1 (The existence of logarithmic and finitely ramified branch points) Let $\\Psi _X:{\\mathbb {C}}_{z}\\rightarrow {\\widehat{\\mathbb {C}}}_{t}$ be a structurally finite entire function of type $(r,d)$ , with $d\\ge 1$ .", "Then $\\Psi _X$ has $r$ critical values $\\lbrace \\widetilde{p}_\\iota \\rbrace \\subset {\\mathbb {C}}_{t}$ (counted with multiplicity), $\\Psi ^{-1}_X$ has $d$ direct singularities corresponding to $d$ logarithmic branch points over $d$ finite asymptotic values $\\lbrace a_{\\sigma }\\rbrace \\subset {\\widehat{\\mathbb {C}}}_t$ , and 
$\Psi ^{-1}_X$ has $d$ direct singularities corresponding to $d$ logarithmic branch points over $\infty \in {\widehat{\mathbb {C}}}_{t}$ .", "Furthermore, $\Psi ^{-1}_X$ has no indirect singularities.", "Case $(r,0)$ is elementary.", "Case $(r,d)$ with $d\ge 1$ can be found as lemma 8.4 in [1] with a proof that relies heavily on the work of M. Taniguchi [23], [24].", "Remark 2 To be precise, the logarithmic branch points associated to the isolated singularity at $\infty \in {\widehat{\mathbb {C}}}_{z}$ are not in fact in ${\mathcal {R}}_{X}\subset {\widehat{\mathbb {C}}}_{z}\times {\widehat{\mathbb {C}}}_{t}$ .", "Instead, see for instance [7], they lie on the non–Hausdorff closure $\overline{{\mathbb {C}}}_{z}\times {\widehat{\mathbb {C}}}_{t}$ of ${\mathbb {C}}_{z}\times {\widehat{\mathbb {C}}}_{t}$ .", "Here $\overline{{\mathbb {C}}}_{z}:=\Big (\big ({\widehat{\mathbb {C}}}\times \lbrace 1\rbrace \big ) \sqcup \big ({\widehat{\mathbb {C}}}\times \lbrace 2\rbrace \big ) \sqcup \cdots \sqcup \big ({\widehat{\mathbb {C}}}\times \lbrace 2d\rbrace \big )\Big ) \slash \sim $ is the sphere with $2d$ infinities, that is the disjoint union of $2d$ copies of the Riemann sphere ${\widehat{\mathbb {C}}}$ with the equivalence relation $\sim $ , given by $(z,\sigma )\sim (z,\rho )$ for all $\sigma ,\rho \in \lbrace 1,\dots ,2d\rbrace $ if $z\ne \infty $ .", "We will denote the $2d$ distinct infinities, referred to in Lemma REF , by $\lbrace \infty _{\sigma }\rbrace _{\sigma = 1}^{2d}\subset \overline{{\mathbb {C}}}_{z}$ .", "Suitable coordinate pairs $(z_{\vartheta },t_{\vartheta })\in {\mathcal {R}}_{X}\subset \overline{{\mathbb {C}}}_{z}\times {\widehat{\mathbb {C}}}_{t}$ will be identified with the branch points of ${\mathcal {R}}_{X}$ .", "In what follows, the reader might find it helpful to follow along with Figures REF –REF in §REF .", "1) For $r\ge 1$ , $p_{\iota }\in {\mathbb 
{C}}_{z}$ is a pole of $X$ (zero of $\\omega _{X}$ ) if and only if its image $\\widetilde{p}_{\\iota }=\\Psi _{X}(p_{\\iota })\\in {\\mathbb {C}}_{t}$ is a critical value of $\\Psi _{X}$ .", "Moreover $(p_{\\iota },\\widetilde{p}_{\\iota })\\in {\\mathcal {R}}_{X}$ is a finitely ramified branch point (under $\\pi _{X,2}$ ) with ramification index $\\mu _{\\iota }+1\\ge 2$ , where $-\\mu _{\\iota } \\le -1$ is the order of the pole $p_{\\iota }$ .", "We enumerate the corresponding finitely ramified branch points in ${\\mathcal {R}}_{X}$ as $\\lbrace (p_{\\iota },\\widetilde{p}_{\\iota })\\rbrace _{\\iota =1}^{n}\\subset {\\mathcal {R}}_{X}, \\text{ with order }-\\mu _{\\iota }\\le -1 \\text{ and }\\sum _{\\iota =1}^{n} \\mu _{\\iota } = r.$ For $d\\ge 1$ , $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ is an isolated essential singularity of $X$.", "Lemma REF allows us to denote the distinct finite asymptotic values by $\\lbrace a_{j}\\rbrace _{j=1}^{m}\\subset {\\mathbb {C}}_{t}, \\text{ with multiplicities }\\lbrace \\nu _j \\rbrace _{j=1}^{m} \\text{ and }\\sum _{j=1}^{m} \\nu _{j}=d.$ Thus, $\\pi _{X,2}^{-1}(a_{j})$ should contain at least one logarithmic branch point of ${\\mathcal {R}}_{X}$ for each exponential tract associated to the finite asymptotic value $a_{j}$ ; see [7] p. 356 where exponential tracts are denoted $U(r)$ and [10] p. 
212.", "In other words, if $\\alpha (\\tau )$ is an asymptotic path approaching $\\infty _{\\sigma }\\in \\overline{{\\mathbb {C}}}_{z}$ associated to the finite asymptotic value $a_{j}$ then we may assume that $\\alpha (\\tau )$ is restricted to one exponential tract (the one containing $\\infty _{\\sigma }\\in \\overline{{\\mathbb {C}}}_{z}$ ) and $\\lim \\limits _{\\tau \\rightarrow \\infty }\\Psi _{X}(\\alpha (\\tau ))=a_{j}.$ Hence, the exponential tracts $\\lbrace \\alpha \\rbrace $ serve as indices for the accurate description of the $2d$ logarithmic branch points in ${\\mathcal {R}}_{X}$ .", "2) We will denote the corresponding logarithmic branch points over the finite asymptotic values $a_{j(\\sigma )}\\in {\\mathbb {C}}_{t}$ , by $(\\infty _{\\sigma },a_{j(\\sigma )})\\in {\\mathcal {R}}_{X},\\text{ for } \\sigma \\in \\lbrace 1,\\ldots ,d\\rbrace .$ 3) To be precise, the $d$ logarithmic branch points over $\\infty \\in {\\widehat{\\mathbb {C}}}_{t}$ will be denoted by $(\\infty _{\\sigma },\\infty )\\in {\\mathcal {R}}_{X}, \\text{ for } \\sigma \\in \\lbrace d+1,\\ldots ,2d\\rbrace .$ Recalling that the finite asymptotic value $a_{j}$ has multiplicity $\\nu _{j}$ , the correspondence between indices is given by $\\begin{array}{r}\\ \\ \\sigma \\in \\underbrace{1,\\, \\ldots \\, ,\\nu _{1}}\\ , \\ \\underbrace{\\nu _{1}+1\\ ,\\, \\ldots \\, ,\\nu _{1}+\\nu _{2}}\\ ,\\,\\ldots \\, ,\\ \\underbrace{d-\\nu _{m}+1, \\, \\ldots \\, , d}\\, ,\\\\\\end{array}\\ j=j(\\sigma ) \\in \\quad \\ \\ 1\\,\\qquad ,\\quad \\qquad \\quad \\ 2\\ \\qquad \\qquad \\, , \\, \\ldots \\, , \\quad \\quad \\quad \\ \\ m\\quad \\quad \\quad \\ \\ , \\\\$ where $\\sigma $ enumerates the logarithmic branch points $b_{\\sigma }\\in {\\mathcal {R}}_{X}$ and the exponential tracts $\\alpha =\\alpha (\\sigma ):=\\sigma $ , while $j=j(\\sigma )$ enumerates the distinct finite asymptotic values $a_{j}\\in {\\mathbb {C}}_t$ .", "Remark 3 We can assign a unique $\\mu _{\\vartheta }\\in 
{\\mathbb {N}}\\cup \\lbrace 0,\\infty \\rbrace $ which denotes the ramification index minus one of $b_{\\vartheta }=(z_{\\vartheta },t_{\\vartheta })\\in {\\mathcal {R}}_{X}$ .", "Therefore, the assignment $X\\longmapsto \\sum _{\\iota =1}^{n}(p_{\\iota },\\widetilde{p}_{\\iota }, -\\mu _{\\iota })+\\sum _{\\sigma =1}^{d} (\\infty _{\\sigma },a_{\\sigma },\\infty )\\doteq \\sum _{t\\in {\\widehat{\\mathbb {C}}}_{t}} \\sum _{\\vartheta } ( z_{\\vartheta },t_{\\vartheta },\\mu _{\\vartheta }),$ can be thought as an ad hoc notion of divisor of $\\pi _{X,2}$ .", "The above discussion can be summarized in Table REF .", "Table: Branch points of ℛ X {\\mathcal {R}}_{X}.Note that (REF ) does not appear in (REF ) or Table REF since it will not be needed.", "Lemma 3.2 Let $X\\in {E}(r,d)$ .", "The associated $\\Psi _{X}$ has exactly one finite asymptotic or finite critical value $t_{1}\\in {\\mathbb {C}}_{t}$ if and only if $(r,d)={\\left\\lbrace \\begin{array}{ll}(r\\ge 1,0) & \\text{and } X \\text{ has a unique pole of order }-r, \\\\& \\text{in which case } t_{1} \\text{ is the critical value,} \\\\(0,1) & \\text{and } X \\text{ has an isolated essential singularity at }\\infty \\in {\\widehat{\\mathbb {C}}}_{z}, \\\\& \\text{in which case }t_{1}\\text{ is the finite asymptotic value.}\\end{array}\\right.", "}$ $(\\Leftarrow )$ When $(r,d)=(0,1)$ , $\\Psi _{X}(z)=\\int ^z{\\text{e}}^{a\\zeta +b}d\\zeta $ , $t_{1}=a_{1}$ , see example 4.16 or equation (8.19) in [1].", "In the case $(r,d)=(r\\ge 1,0)$ , the required distinguished parameter is $\\Psi _{X}(z)=\\int ^z (\\zeta -p)^{r}\\,d\\zeta $ .", "$(\\Rightarrow )$ By Lemma REF , $\\Psi _{X}^{-1}$ has $d$ logarithmic branch points over $d$ finite asymptotic values, $d$ logarithmic branch points over $\\infty \\in {\\widehat{\\mathbb {C}}}_{t}$ and $r$ critical values (with multiplicity).", "Let $\\lbrace (z_{{\\mathfrak {a}}},t _{{\\mathfrak {a}}})\\rbrace \\subset \\pi _{X,2}^{-1}(t_{0})\\subset {\\mathcal {R}}_{X}$ be 
all the branch points over $t_{0}$ .", "The set $\lbrace (z_{{\mathfrak {a}}},t _{{\mathfrak {a}}})\rbrace $ consists of exactly $d$ logarithmic branch points over $t_{0}$ and $n\le r$ finitely ramified branch points $(p_{\iota }, t_{\iota })$ of ramification indices $\mu _{\iota }+1$ with $r=\sum _{\iota =1}^{n\le r}\mu _{\iota }$ .", "We proceed by contradiction: suppose that $d+n\ge 2$ .", "Then ${\mathcal {R}}_{X}$ has $d+n$ connected components: $d$ arising from the logarithmic branch points and $n$ arising from the finitely ramified branch points.", "However, ${\mathcal {R}}_{X}$ is biholomorphic to ${\widehat{\mathbb {C}}}$ , which of course consists of only one connected component.", "Thus $d+n=1$ , which immediately implies both cases." ], [ "The Riemann surface ${\mathcal {R}}_{X}$ described by glueing sheets", "Definition 3.3 Let $\lbrace t_{\tt k} \rbrace _{{\tt k}=1}^{\tt r} \subset {\mathbb {C}}_t$ be a finite set of different points.", "A sheet is a copy of ${\mathbb {C}}_{t}$ with ${\tt r}\ge 1$ branch cuts $L_{\tt k}$ ; i.e.", "${\mathbb {C}}_t$ is cut along horizontal right segments $L_{{\tt k}}= [t_{\tt k}, \infty )$ , remaining connected, but with $2{\tt r}$ horizontal boundaries (left there for further isometric glueing; as is usual in the isometric framework, for details see corollary 5.11 of [1])", "${\mathbb {C}}_{t}\backslash \lbrace L_{\tt k}\rbrace _{{\tt k} = 1}^{\tt r}\ \cong \ \left[ {\mathbb {C}}_{t}\backslash \big ( \cup _{{\tt k}=1}^{{\tt r}} [t_{\tt k},\infty ) \big ) \right]\cup _{{\tt k}=1}^{{\tt r}}\lbrace [t_{\tt k},\infty )_{+},[t_{\tt k},\infty )_{-}\rbrace ,$ where the subindices $\pm $ refer to the obvious upper or lower boundary using ${\mathfrak {Im}\left(t\right)}$ .", "We say that the height of the cut $L_{\tt k}$ is ${\mathfrak {Im}\left(t_{\tt k}\right)}$ .", "Note that cuts (and the corresponding boundaries) need not be to the right; they could 
be more general simple curves; however, for notational simplicity, (REF ) is written using right cuts $[t_{\tt k}, \infty )_{\pm }$ only.", "A diagonal of the sheet ${\mathbb {C}}_{t}\backslash \lbrace L_{\tt k} \rbrace _{{\tt k} = 1}^{\tt r}$ is an oriented straight line segment $\Delta _{\sigma \rho }=\overline{ t_{\sigma } t_{\rho } }\subset {\mathbb {C}}_{t}\backslash \lbrace L_{\tt k} \rbrace _{{\tt k} = 1}^{\tt r} ,$ starting at $t_{\sigma }$ and ending at $t_{\rho }$ ; here $\rho ,\sigma \in \lbrace 1,\ldots ,{\tt r}\rbrace $ .", "See Figures REF , REF and REF for examples of Riemann surfaces constructed as in Definition REF and Figures REF and REF for examples of diagonals.", "Noticing that sheets in turn can be decomposed further into elementary building blocks, we make the following definition.", "Definition 3.4 A (closed) half plane is the pair $\big ({\overline{{\mathbb {H}}}}^2_\pm ,\frac{\partial }{\partial t}\big )$ .", "A (closed) finite height horizontal strip is $\big (\lbrace 0\le {\mathfrak {Im}\left(t\right)}\le h\rbrace , \frac{\partial }{\partial t}\big )$ .", "Note that diagonals are directly related to finite height horizontal strips.", "Similarly, logarithmic and finitely ramified branch points in ${\mathcal {R}}_{X}$ give rise to the following non–elementary building blocks.", "A semi–infinite helicoid is $\Big ( \big ({\overline{{\mathbb {H}}}}^2_{\pm } \cup {\overline{{\mathbb {H}}}}^2_{\mp }\cup \ldots \big ),\frac{\partial }{\partial t} \Big )$ glued together along their boundaries as in the graph of $\Psi _{X}(z)= \exp (-z)$ , see Diagram (REF ).", "A finite helicoid is an even finite succession of half–planes $\Big ( \big ({\overline{{\mathbb {H}}}}^2_{\pm } \cup {\overline{{\mathbb {H}}}}^2_{\mp }\cup \ldots \cup \, {\overline{{\mathbb {H}}}}^2_{\mp } \big ),\frac{\partial }{\partial t}\Big ) $ ."
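The diagonal condition of Definition 3.3 can be tested numerically. The following is a minimal sketch (the helper names, the sample cut set, and the tolerance are our own illustrative assumptions, not part of the construction): a straight segment between two cut base points is a diagonal of a sheet exactly when its interior avoids every horizontal right cut $L_{\tt k}=[t_{\tt k},\infty )$.

```python
def segment_hits_cut(a, b, c, eps=1e-12):
    """True if the open segment from a to b (complex numbers in C_t)
    meets the horizontal right cut L = [c, infinity).  Touching the
    cut base point c at an endpoint of the segment is allowed,
    mirroring that a diagonal starts and ends at cut points t_k."""
    if abs(a.imag - b.imag) < eps:                 # horizontal segment
        if abs(a.imag - c.imag) > eps:
            return False                           # different heights
        lo, hi = sorted((a.real, b.real))
        return hi > c.real + eps                   # interior reaches the cut
    # the height Im(c) is crossed at parameter s of z(s) = a + s(b - a)
    s = (c.imag - a.imag) / (b.imag - a.imag)
    if s <= eps or s >= 1 - eps:
        return False                               # crossing only at an endpoint
    x = a.real + s * (b.real - a.real)
    return x >= c.real - eps                       # crossing lies on the cut

def is_diagonal(a, b, cuts):
    """Sketch of Definition 3.3: the oriented segment from the cut
    point a to the cut point b is a diagonal of the sheet
    C_t minus {L_k} iff it avoids every cut L_k = [t_k, infinity)."""
    return all(not segment_hits_cut(a, b, c) for c in cuts)
```

For instance, with cuts based at $0$ and $1+i$ the segment joining them is a diagonal, while a segment running along a cut, or crossing a third cut lying to its left, is not.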
], [ "Relative position of the branch points on ${\\mathcal {R}}_{X}$", "In order to completely describe ${\\mathcal {R}}_{X}$ we also require information of the relative position of the branch points $\\lbrace (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})\\rbrace $ on the surface.", "Definition 3.5 Let $t_{{\\mathfrak {a}}}, t_{{\\mathfrak {r}}} \\in \\lbrace a_{1},\\dots ,a_{m}, \\widetilde{p}_{1},\\dots ,\\widetilde{p}_{n} \\rbrace \\subset {\\mathbb {C}}_{t}$ be two distinct (finite) asymptotic or critical values of $\\Psi _{X}$ and consider the oriented straight line segment $\\overline{t_{{\\mathfrak {a}}}t_{{\\mathfrak {r}}}}\\subset {\\mathbb {C}}_{t}$ .", "The inverse image $\\pi _{X,2}^{-1}\\big (\\overline{t_{\\mathfrak {a}}t_{\\mathfrak {r}}}\\big )= \\lbrace \\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}\\rbrace \\subset {\\mathcal {R}}_{X}$ is a set consisting of a finite (when $m=0$ , equivalently $d=0$ ) or an infinite (when $m\\ge 1$ ) number of copies of $\\overline{t_{{\\mathfrak {a}}}t_{{\\mathfrak {r}}}}$ .", "For each segment $\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ , let $\\delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}=\\pi _{X,1}(\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}})\\subset \\overline{{\\mathbb {C}}}_{z}$ .", "1) A segment $\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}\\subset {\\mathcal {R}}_{X}$ is a diagonal of ${\\mathcal {R}}_{X}$ , when the interiorSince $\\delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ is a path homeomorphic to $[a,b]\\subset {\\mathbb {R}}$ , by the interior of $\\delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ we mean the preimage, under the homeomorphism, of $(a,b)$ .", "of $\\delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ is in ${\\mathbb {C}}_{z}$ and $\\delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ has its endpoints $z_{{\\mathfrak {a}}},z_{{\\mathfrak {r}}}\\in {\\mathcal {P}}\\cup \\lbrace \\infty _{1},\\cdots ,\\infty 
_{d}\\rbrace \\subset \\overline{{\\mathbb {C}}}_{z}$ .", "2) Moreover, for a given diagonal $\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ , the two endpoints $b_{{\\mathfrak {a}}}$ , $b_{{\\mathfrak {r}}}$ of $\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ share the same sheet ${\\mathbb {C}}_{\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}}\\backslash \\lbrace \\text{suitable branch cuts}\\rbrace $ in ${\\mathcal {R}}_{X}$.", "Remark 4 By notation if we drop the index $\\vartheta $ from $\\Delta _{\\vartheta {\\mathfrak {a}}{\\mathfrak {r}}}$ we are specifying a particular diagonal $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ .", "The diagonals $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ have endpoints as follows: $\\Delta _{\\iota \\kappa }$ has as endpoints two pole vertices $(p_{\\iota },\\widetilde{p}_{\\iota })$ and $(p_{\\kappa },\\widetilde{p}_{\\kappa })$ , $\\iota \\ne \\kappa $ .", "For an example see middle column of Figure REF in §REF .", "$\\Delta _{\\iota \\sigma }$ has as endpoints a pole vertex and an essential vertex $(p_{\\iota },\\widetilde{p}_{\\iota })$ and $(\\infty _{\\sigma },a_{\\sigma })$ , see equation (REF ) for the subindices.", "For an example see middle column of Figure REF in §REF .", "$\\Delta _{\\sigma \\rho }$ has as endpoints two essential vertices with finite asymptotic values $a_{\\sigma }$ and $a_{\\rho }$ with exponential tracts $\\alpha _\\sigma $ and $\\alpha _\\rho $ : $(\\infty _{\\sigma },a_{\\sigma })$ to $(\\infty _{\\rho },a_{\\rho })$ , $\\sigma \\ne \\rho $ , where the subindices are as in (REF ).", "For an example see right hand side of Figure REF in §REF .", "For $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ a diagonal associated to the finite asymptotic or critical values $t_{\\mathfrak {a}}$ and $t_{\\mathfrak {r}}$ , note that $\\pi _{X,1}(\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}})$ has its endpoints $z_{{\\mathfrak {a}}},z_{{\\mathfrak {r}}}\\in {\\mathcal {P}}\\cup \\lbrace 
\\infty _{1},\\cdots ,\\infty _{d}\\rbrace \\subset \\overline{{\\mathbb {C}}}_{z}$ and since $(z_{{\\mathfrak {r}}},t _{{\\mathfrak {r}}}), (z_{{\\mathfrak {a}}},t _{{\\mathfrak {a}}})\\in {\\mathcal {R}}_{X}$ , tha associated semi–residue is $S(\\omega _{X}, z_{{\\mathfrak {a}}}, z_{{\\mathfrak {r}}},\\delta _{{\\mathfrak {a}}{\\mathfrak {r}}})=\\int _{\\pi _{X,1}(\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}})}\\omega _{X}= t_{{\\mathfrak {r}}}-t_{{\\mathfrak {a}}}.$ In other words, an oriented straight line segment $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ in ${\\mathcal {R}}_{X}$ is equivalent to the number $t_{{\\mathfrak {r}}}-t_{{\\mathfrak {a}}}$ in ${\\mathbb {C}}^{*}$ .", "Lemma 3.6 (Existence of diagonals in ${\\mathcal {R}}_X$ ) Suppose that there are at least two branch points $\\lbrace (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})\\rbrace \\subset {\\mathcal {R}}_{X}$ , with $t_{{\\mathfrak {a}}}\\in {\\mathbb {C}}_{t}$ .", "Then every branch point $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})$ is an endpoint for at least one diagonal.", "Consider any branch point $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})\\in \\pi _{X,2}^{-1}(t_{{\\mathfrak {a}}})$ , with $t_{{\\mathfrak {a}}}\\in \\lbrace a_{1},\\dots ,a_{m}, \\widetilde{p}_{1},\\dots $ , $\\widetilde{p}_{n} \\rbrace \\subset {\\mathbb {C}}_{t}$ .", "Suppose that there is no diagonal $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ with endpoint $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}}) $ .", "This implies that $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})$ does not share a sheet, ${\\mathbb {C}}_{t}\\backslash \\lbrace $ suitable branch cuts$\\rbrace $ , with any other branch point $(z_{{\\mathfrak {r}}},t_{{\\mathfrak {r}}})\\in \\pi _{X,2}^{-1}(t_{{\\mathfrak {r}}})$ , for some finite asymptotic or critical value $t_{{\\mathfrak {r}}}\\ne t_{{\\mathfrak {a}}}$ (note that the existence of $t_{{\\mathfrak {r}}}$ is guaranteed by Lemma REF ).", "In other words the only sheets, ${\\mathbb 
{C}}_{t}\\backslash \\lbrace \\text{suitable branch cuts}\\rbrace $ , of ${\\mathcal {R}}_{X}$ containing the branch point $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})$ are of the form ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}}\\rbrace $ , for $L_{{\\mathfrak {a}}}=[t_{{\\mathfrak {a}}},\\infty )$ , hence by the same arguments as in Lemma REF , ${\\mathcal {R}}_{X}$ will have at least 2 connected components (one containing $(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}})$ and the other containing $(z_{{\\mathfrak {r}}},t_{{\\mathfrak {r}}})$ ), leading to a contradiction." ], [ "Combinatorial objects: $(r,d)$ –configuration trees", "Denote the universal cover of ${\\mathbb {C}}^{*}$ by $\\widetilde{{\\mathbb {C}}^{*}}=\\lbrace \\left|z\\right|{\\text{e}}^{i \\arg (z)} \\rbrace $ , where $\\arg (z)$ is the multivalued argument.", "For $r+d\\ge 1$ we have the following.", "Definition 4.1 A $(r,d)$ –configuration tree is a graph tree $\\Lambda =\\Big \\lbrace V; E \\Big \\rbrace $ with: $\\bullet \\ d+n$ vertices $V=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} =\\underbrace{\\big (p_{\\iota },\\widetilde{p}_{\\iota },-\\mu _{\\iota }\\big )}_{\\text{pole vertex}}\\Big \\rbrace _{\\iota =1}^{n}\\bigcup \\ \\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} =\\underbrace{\\big ( \\infty _{\\sigma },a_{\\sigma }, \\infty \\big )}_{\\text{essential vertex}}\\Big \\rbrace _{\\sigma =1}^{d}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big ) \\Big \\rbrace _{{\\mathfrak {a}}=1}^{d+n}$ where $z_{{\\mathfrak {a}}}\\in \\overline{{\\mathbb {C}}}_{z}$ , $t_{{\\mathfrak {a}}}\\in {\\widehat{\\mathbb {C}}}_{t}$ , $\\mu _{{\\mathfrak {a}}}\\in {\\mathbb {N}}\\cup \\lbrace \\infty \\rbrace $ , $\\sum 
\\limits _{\\iota =1}^{n}\\mu _{\\iota }=r$ ; and $\\bullet \\ d+n-1$ weighted edges $E=\\big \\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}},\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}) \\ \\vert \\ \\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}} \\text{ starts at } \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} \\text{ and ends at } \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} ,\\ \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\in \\widetilde{{\\mathbb {C}}^{*}} \\big \\rbrace .$ In addition, the following conditions on the number $d+n$ of vertices must be satisfied: If $\\Lambda $ consists of only one vertex, then the $(r,0)$ –configuration trees are $\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =\\big (p_{1},\\widetilde{p}_1, -r \\big );\\emptyset \\Big \\rbrace $ , the $(0,1)$ –configuration trees are $\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =\\big (\\infty _{1},a_{1},\\infty \\big );\\emptyset \\Big \\rbrace $ .", "If $\\Lambda $ has at least two vertices, then: Existence of edges.", "There are no edges between vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}}\\big )$ and $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} =\\big (z_{{\\mathfrak {r}}},t_{{\\mathfrak {r}}},\\mu _{{\\mathfrak {r}}}\\big )$ for $t_{{\\mathfrak {a}}}=t_{{\\mathfrak {r}}}$ .", "Weight of an edge.", "When an edge $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ exists its associated weight is $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}= (t_{{\\mathfrak {r}}} - t_{{\\mathfrak {a}}}) \\ {\\text{e}}^{i 2\\pi K({\\mathfrak 
{a}},{\\mathfrak {r}})}=\\left|t_{{\\mathfrak {r}}} - t_{{\\mathfrak {a}}}\\right| \\ {\\text{e}}^{i\\arg _{0}(t_{{\\mathfrak {r}}} - t_{{\\mathfrak {a}}}) + i2\\pi K({\\mathfrak {a}},{\\mathfrak {r}})}\\in \\widetilde{{\\mathbb {C}}^*},$ where $K({\\mathfrak {a}}, {\\mathfrak {r}})\\in {\\mathbb {Z}}$ .", "Minimality condition.", "There are at least two vertices, say $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =\\big (z_1,t_1,\\mu _1 \\big )$ and $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} =\\big (z_{\\mathfrak {z}},t_{\\mathfrak {z}},\\mu _{\\mathfrak {z}}\\big )$ with $t_1 \\ne t_{\\mathfrak {z}}$ , such that there is an edge $\\Delta _{1 {\\mathfrak {z}}}$ connecting them.", "The respective weightIn order to make it easier to describe the geometry of the Riemann surfaces ${\\mathcal {R}}_{X}$ , it will be convenient to sometimes use $\\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}}$ instead of $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}$ to emphasize that the argument lies in $[0,2\\pi )$ .", "satisfies $\\lambda _{1 {\\mathfrak {z}}}= t_{{\\mathfrak {z}}}-t_{1}\\in {\\mathbb {C}}^{*}, \\text{ {\\it i.e.}", "}K(1,{\\mathfrak {z}})=0.$ Preferred horizontal subtree.", "The edges $\\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}},\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}})\\rbrace $ with $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\in {\\mathbb {C}}^*$ form a finite set of connected horizontal subtrees $\\lbrace \\Lambda _H\\rbrace $ .", "On each horizontal subtree $\\Lambda _H=\\Big \\lbrace \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} \\rbrace ;\\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}},\\widetilde{\\lambda } _{{\\mathfrak {a}}{\\mathfrak {r}}})\\rbrace \\Big \\rbrace $ we require that $t_{\\mathfrak {z}}\\notin \\lbrace t\\in {\\mathbb 
{C}}\\ \\vert \\ {\\mathfrak {Im}\\left(t_{\\mathfrak {a}}\\right)} < {\\mathfrak {Im}\\left(t\\right)} < {\\mathfrak {Im}\\left(t_{\\mathfrak {r}}\\right)} \\rbrace $ for each vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} \\in \\Lambda _H$ not an endpoint of an edge $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}\\in \\Lambda _H$ not having as endpoints $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ (i.e.", "$ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} \\ne \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} $ ).", "Remark 5 When $r=0$ , this definition reduces to the definition of a $d$ –configuration tree presented in §8.3 of [1].", "The equivalence becomes explicit by observing that the essential vertices $\\big ( \\infty _{\\sigma }, a_{\\sigma },\\infty \\big )$ of $(0,d)$ –configuration trees correspond to the vertices $(\\infty _{\\sigma },a_{\\sigma })$ of $d$ –configuration trees.", "Remark 6 Note that $(r,d)$ –configuration trees are oriented, traversable trees.", "1.", "The orientation of the edge $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ coincides with the orientation of the line segment $\\overline{t_{\\mathfrak {a}}t_{\\mathfrak {r}}}$ .", "2.", "The vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ will be called the starting vertex for the traversal of the $(r,d)$ –configuration tree." 
], [ "Low degree significative examples", "Example 1 Consider the vector field $X(z)= \\frac{1}{(z-p_{1})^{\\mu _{1}} (z-p_{2})^{\\mu _{2}}} \\frac{\\partial }{\\partial z},\\ p_{1},p_{2}\\in {\\mathbb {C}}_{z}, \\ p_{1}\\ne p_{2}, \\ \\mu _{1}+\\mu _{2}=r, \\ \\mu _1,\\mu _2\\ge 1$ , and its distinguished parameter $\\Psi _{X}(z)=\\int _{z_{0}}^{z} (\\zeta -p_{1})^{\\mu _{1}} (\\zeta -p_{2})^{\\mu _{2}} d\\zeta $ .", "In this case the $(r,0)$ –configuration tree has two pole vertices and one edge $\\Lambda _{X}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =(p_{1},\\widetilde{p}_{1},-\\mu _{1}), \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} =(p_{2},\\widetilde{p}_{2},-\\mu _{2});(\\Delta _{1\\, 2},\\lambda _{1\\, 2}) \\Big \\rbrace ,$ where $\\widetilde{p}_{j}=\\Psi _{X}(p_{j})$ , for $j=1,2$ , are the critical values and the weight $\\lambda _{1\\, 2}$ is the semi–residue $S(\\omega _{X},p_{1},p_{2},\\gamma )=\\widetilde{p}_{2}-\\widetilde{p}_{1}$ , according to (REF ).", "See Figure REF .", "Figure: Vector field 1 (z-p 1 ) μ 1 (z-p 2 ) μ 2 ∂ ∂z\\frac{1}{(z-p_{1})^{\\mu _{1}} (z-p_{2})^{\\mu _{2}}} \\frac{\\partial }{\\partial z}with two poles p ι p_{\\iota } of order -μ ι -\\mu _{\\iota }.The diagonalΔ 12 ⊂ℛ X \\Delta _{1\\,2}\\subset {\\mathcal {R}}_{X}associated to the finitely ramified branch points and its projectionsvia π X,1 \\pi _{X,1} and π X,2 \\pi _{X,2} are coloured red.The phase portrait (left drawing) is the case withpoles of orders -μ 1 =5-\\mu _{1}=5 and -μ 2 =3-\\mu _{2}=3.See Example , and §for the drawing on the right.Example 2 Consider the vector field $X(z)=\\lambda ^{-1} {\\text{e}}^{z}\\frac{\\partial }{\\partial z},$ with $\\lambda \\in {\\mathbb {C}}^{*}$ , and its distinguished parameter $\\Psi _{X}(z)=\\int _{z_0}^{z} \\omega _{X}=\\lambda \\big ({\\text{e}}^{-z_0}-{\\text{e}}^{-z}\\big )$ .", "We then have an isolated essential singularity 
at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ with finite asymptotic value $a_{1}=\\Psi _{X}(\\infty )=\\lambda {\\text{e}}^{-z_0}$ .", "The $(0,1)$ –configuration tree consists of one essential vertex and no edges $\\Lambda _{X}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =(\\infty _{1},a_{1},\\infty );\\emptyset \\Big \\rbrace .$ See Figure REF .", "Figure: Vector field λ -1 e z ∂ ∂z\\lambda ^{-1} {\\text{e}}^{z}\\frac{\\partial }{\\partial z} with an essential singularity at ∞∈ℂ ^ z \\infty \\in {\\widehat{\\mathbb {C}}}_{z}.The surface ℛ X {\\mathcal {R}}_{X} isa logarithmic spiral formed by two semi–infinite helicoidsglued together.", "The soul, Definition , is shaded blue.See Example , and §for the right drawing.Example 3 Consider the vector field $X(z)=\\frac{{\\text{e}}^{z}}{\\lambda \\ (z-p_{1})}\\frac{\\partial }{\\partial z},$ with $\\lambda \\in {\\mathbb {C}}^{*}$ and $p_{1}\\in {\\mathbb {C}}_{z}$ , and its distinguished parameter $\\Psi _{X}(z)=\\int _{z_0}^{z} \\omega _{X}=\\lambda \\big ( {\\text{e}}^{-z_{0}}(z_0-p_1+1) -{\\text{e}}^{-z}(z-p_{1}+1)\\big )$ .", "Once again we have an isolated essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ with finite asymptotic value $a_{1}=\\Psi _{X}(\\infty )=\\lambda {\\text{e}}^{-z_0}(z_0-p_1+1)$ corresponding to the exponential tract $\\lbrace z\\in {\\mathbb {C}}_{z} \\ \\vert \\ {\\mathfrak {Re}\\left(z\\right)}>0\\rbrace $ , and the pole $p_{1}$ has an associated critical value $\\widetilde{p}_{1}=\\Psi _{X}(p_{1})=\\lambda \\big ( {\\text{e}}^{-z_0} (z_0-p_1+1) - {\\text{e}}^{-p_1} \\big ).$ The $(1,1)$ –configuration tree has an essential vertex, a pole vertex and one edge $\\Lambda _{X}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =(\\infty _{1},a_{1},\\infty ), \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} 
=(p_{1},\\widetilde{p}_{1},-1);(\\Delta _{1\\, 2}, \\lambda _{1\\, 2}) \\Big \\rbrace ,$ with weight $\\lambda _{1\\, 2}=\\widetilde{p}_{1}-a_{1}=-\\lambda {\\text{e}}^{-p_{1}}$ .", "See Figure REF .", "Figure: Vector fielde z λ(z-p 1 )∂ ∂z\\frac{{\\text{e}}^{z}}{\\lambda \\ (z-p_{1})}\\frac{\\partial }{\\partial z}with essential singularity at ∞\\infty and simple pole at p 1 p_{1}.The Riemann surface ℛ X {\\mathcal {R}}_{X} consists of two semi–infinite helicoids, and a cyclic helicoidwith 2 sheets;the two branch points are the endpoints of the diagonalΔ 12 ⊂ℛ X \\Delta _{1\\,2}\\subset {\\mathcal {R}}_{X}(coloured red) on the level 0 sheet.The soul, Definition , is shaded blue.See Example , and § for the right drawing.Example 4 Consider the vector field $X(z)=-\\frac{{\\text{e}}^{z^{3}}}{3 z^{2}}\\frac{\\partial }{\\partial z}$ .", "If $z_{0}=0$ the distinguished parameter is $\\Psi _{X}(z)={\\text{e}}^{-z^{3}}-1$ .", "Thus the pole $p_{1}=0$ has order $-\\mu _{1}=-2$ and critical value $\\widetilde{p}_{1}=0$ , while the essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ has finite asymptotic value $a_{1}=-1$ , with multiplicity 3, each corresponding to one of the following exponential tracts $\\begin{array}{cc}A_{1}=\\lbrace z\\in {\\mathbb {C}}\\, \\vert \\arg (z)\\in [-\\pi /6,\\pi /6]\\, \\rbrace , &A_{2}=\\lbrace z\\in {\\mathbb {C}}\\, \\vert \\arg (z)\\in [\\ \\pi /2,5\\pi /6]\\, \\rbrace , \\\\A_{3}=\\lbrace z\\in {\\mathbb {C}}\\, \\vert \\arg (z)\\in [7\\pi /6,3\\pi /2]\\, \\rbrace .\\end{array}$ That is $(\\infty _{1},-1), (\\infty _{2},-1), (\\infty _{3},-1)\\in {\\mathcal {R}}_{X}$ are 3 logarithmic branch points corresponding to the above exponential tracts as in Remark REF .", "The $(2,3)$ –configuration tree has three essential vertices, and one pole vertex, which we conveniently renumber as follows $\\begin{array}{rclrcl} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =(z_{1},t_{1},\\mu 
_{1}) &=& (\\infty _{1},a_{1},\\infty ), &\\quad \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} =(z_{2},t_{2},\\mu _{2}) &=& (p_{1},\\widetilde{p}_{1},-2),\\\\ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} =(z_{3},t_{3},\\mu _{3}) &=& (\\infty _{2},a_{1},\\infty ), &\\quad \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} =(z_{4},t_{4},\\mu _{4}) &=& (\\infty _{3},a_{1},\\infty ).\\end{array}$ In this way the $(2,3)$ –configuration tree is $\\Lambda _{X}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} ;(\\Delta _{1\\, 2}, \\lambda _{1\\, 2}), (\\Delta _{2\\, 3}, \\widetilde{\\lambda }_{2\\, 3}), (\\Delta _{2\\, 4}, \\widetilde{\\lambda }_{2\\, 4}) \\Big \\rbrace ,$ with weights given by $\\begin{array}{lr}\\lambda _{1\\, 2}=\\int _{\\infty _{1}}^{p_{1}} \\omega _{X}=\\widetilde{p}_{1}-a_{1}=1\\in {\\mathbb {C}}^{*}, &\\widetilde{\\lambda }_{2\\, 3}=\\int ^{\\infty _{2}}_{p_{1}} \\omega _{X}=-{\\text{e}}^{2\\pi i}\\in \\widetilde{{\\mathbb {C}}^{*}}, \\\\\\widetilde{\\lambda }_{2\\, 4}=\\int ^{\\infty _{3}}_{p_{1}} \\omega _{X}=-{\\text{e}}^{-2\\pi i}\\in \\widetilde{{\\mathbb {C}}^{*}},\\end{array}$ the difference in the phases arising from the fact that each exponential tract is on a different sheet on ${\\mathcal {R}}_{X}$ .", "See Figure REF and the left hand side of Figure REF .", "Figure: Vector field -e z 3 3z 2 ∂ ∂z-\\frac{{\\text{e}}^{z^{3}}}{3 z^{2}}\\frac{\\partial }{\\partial z} with an essential singularity at ∞\\infty and pole p 1 =0p_{1}=0 of order -2-2.The projection of the diagonals,Δ 12 \\Delta _{1\\,2}, Δ 23 \\Delta 
_{2\\,3} and Δ 24 \\Delta _{2\\,4},onto ℂ ^ t {\\widehat{\\mathbb {C}}}_{t} is shown in red.The Riemann surface ℛ X {\\mathcal {R}}_{X} is not drawn.See Example , and§ for the right drawing.Example 5 In a similar vein as the previous example consider the vector field $X(z)=\\frac{{\\text{e}}^{z^{3}}}{3z^{3}-1}\\frac{\\partial }{\\partial z},$ with simple poles at $p_{1}=\\frac{1}{\\@root 3 \\of {3}}$ , $p_{2}={\\text{e}}^{i 2\\pi /3}p_{1}$ , $p_{3}={\\text{e}}^{-i 2\\pi /3}p_{1}$ , and an essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ .", "Its distinguished parameter is $\\Psi _{X}(z)=\\int _{0}^{z} \\omega _{X} = -z {\\text{e}}^{-z^{3}}$ .", "Thus the critical values corresponding to the poles are $\\widetilde{p}_{1}=-\\frac{1}{\\@root 3 \\of {3{\\text{e}}}}$ , $\\widetilde{p}_{2}={\\text{e}}^{i 2\\pi /3} \\widetilde{p}_{1}$ and $\\widetilde{p}_{3}={\\text{e}}^{-i 2\\pi /3}\\widetilde{p}_{1}$ .", "The essential singularity at $\\infty $ has $a_{1}=0$ as its finite asymptotic value with multiplicity 3, once again with the same exponential tracts as the previous example, see equation (REF ), hence $(\\infty _{1},0), (\\infty _{2},0), (\\infty _{3},0)\\in {\\mathcal {R}}_{X}$ are the 3 logarithmic branch points corresponding to the mentioned exponential tracts.", "The $(3,3)$ –configuration tree has three essential vertices and three pole vertices, which we renumber conveniently as $\\begin{array}{rclrcl} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} =(z_{1},t_{1},\\mu _{1}) &=& (\\infty _{1},a_{1},\\infty ),&\\quad \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} =(z_{2},t_{2},\\mu _{2}) &=& (p_{1},\\widetilde{p}_{1},-1),\\\\ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} =(z_{3},t_{3},\\mu _{3}) &=& (p_{2},\\widetilde{p}_{2},-1),&\\quad \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} 
=(z_{4},t_{4},\\mu _{4}) &=& (p_{3},\\widetilde{p}_{3},-1),\\\\ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} =(z_{5},t_{5},\\mu _{5}) &=& (\\infty _{2},a_{1},\\infty ),&\\quad \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$6$};}} =(z_{6},t_{6},\\mu _{6}) &=& (\\infty _{3},a_{1},\\infty ).\\end{array}$ Thus the $(3,3)$ –configuration tree (see Figure REF ) is $\\Lambda _{X}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$6$};}} ;\\\\(\\Delta _{1\\, 2}, \\lambda _{1\\, 2}), (\\Delta _{2\\, 3}, \\lambda _{2\\, 3}), (\\Delta _{2\\, 4}, \\widetilde{\\lambda }_{2\\, 4})(\\Delta _{3\\, 5}, \\widetilde{\\lambda }_{3\\, 5}), (\\Delta _{4\\, 6}, \\widetilde{\\lambda }_{4\\, 6}) \\Big \\rbrace ,$ with weights given by $\\begin{array}{c}\\lambda _{1\\, 2} = \\int ^{p_{1}}_{\\infty _{1}} \\omega _{X} = \\widetilde{p}_{1} - a_{1} =\\widetilde{p}_{1}=-\\frac{1}{\\@root 3 \\of {3{\\text{e}}}} \\in {\\mathbb {C}}^{*},\\\\\\lambda _{2\\, 3} = \\int _{p_{1}}^{p_{2}} \\omega _{X} = (\\widetilde{p}_{2} - \\widetilde{p}_{1}) =\\Big (\\frac{1-{\\text{e}}^{i 2\\pi /3}}{\\@root 3 \\of {3{\\text{e}}}}\\Big ) \\in {\\mathbb {C}}^{*},\\\\\\widetilde{\\lambda }_{2\\, 4} = \\int _{p_{1}}^{p_{3}} \\omega _{X} = (\\widetilde{p}_{3} - \\widetilde{p}_{1})\\,{\\text{e}}^{i2\\pi } =\\Big (\\frac{1-{\\text{e}}^{-i 2\\pi /3}}{\\@root 3 \\of {3{\\text{e}}}}\\Big )\\,{\\text{e}}^{i2\\pi } \\in \\widetilde{{\\mathbb 
{C}}^{*}},\\\\\\widetilde{\\lambda }_{3\\, 5} = \\int _{p_{2}}^{\\infty _{2}} \\omega _{X} = (a_{1} - \\widetilde{p}_{2})\\,{\\text{e}}^{i2\\pi } =- \\widetilde{p}_{2}\\,{\\text{e}}^{i2\\pi } = \\Big (\\frac{{\\text{e}}^{ i 2\\pi /3}}{\\@root 3 \\of {3{\\text{e}}}}\\Big )\\,{\\text{e}}^{i2\\pi } \\in \\widetilde{{\\mathbb {C}}^{*}},\\\\\\widetilde{\\lambda }_{4\\, 6} = \\int _{p_{3}}^{\\infty _{3}} \\omega _{X} = (a_{1} - \\widetilde{p}_{3})\\,{\\text{e}}^{i2\\pi } =- \\widetilde{p}_{3}\\,{\\text{e}}^{i2\\pi } = \\Big (\\frac{{\\text{e}}^{- i 2\\pi /3}}{\\@root 3 \\of {3{\\text{e}}}}\\Big )\\,{\\text{e}}^{i2\\pi } \\in \\widetilde{{\\mathbb {C}}^{*}}.\\end{array}$ Figure: Vector field e z 3 3z 3 -1∂ ∂z\\frac{{\\text{e}}^{z^{3}}}{3z^{3}-1}\\frac{\\partial }{\\partial z} with an essential singularity at ∞\\infty and 3 simple poles p ι p_{\\iota }.The projection of the five diagonals onto ℂ ^ t {\\widehat{\\mathbb {C}}}_{t} are shown in red.The Riemann surface ℛ X {\\mathcal {R}}_{X} is not drawn.See Example , and § for the right drawing.Figure: Detail of vector fields in Examples and .The left hand side shows the vector field X(z)=-e z 3 3z 2 ∂ ∂zX(z)=-\\frac{{\\text{e}}^{z^{3}}}{3z^{2}}\\frac{\\partial }{\\partial z}, the right hand sidethe vector field X(z)=e z 3 3z 3 -1∂ ∂zX(z)=\\frac{{\\text{e}}^{z^{3}}}{3z^{3}-1}\\frac{\\partial }{\\partial z}.Each angular sector around the poles corresponds to a half plane on ℛ X {\\mathcal {R}}_{X}.Note that the dynamics of ℜ𝔢X{\\mathfrak {Re}\\left(X\\right)}in a neighbourhood of ∞∈ℂ ^\\infty \\in {\\widehat{\\mathbb {C}}} are different.The images contain the information needed to construct the corresponding (r,d)(r,d)–configurationtrees, as is explained in the text.It is instructive to examine in detail how these weights are calculated.", "The use of Figures REF and REF will facilitate the discussion.", "For the calculation of the first weight, $\\infty _{1}\\in \\overline{{\\mathbb {C}}}_{z}$ is the starting point for the 
integration of $\\omega _{X}$ , hence $\\lambda _{1\\, 2}= \\widetilde{p}_{1} - a_{1}\\in {\\mathbb {C}}^{*}$ .", "Now consider the calculation of the weight $\\lambda _{2\\, 3}$ : we seek the value of the integral from $p_{1}$ to $p_{2}$ , keeping in mind that we have just integrated from $\\infty _{1}$ to $p_{1}$ .", "The integration path, that goes from $\\infty _{1}$ through $p_{1}$ and then proceeds to $p_{2}$ , remains on only two adjacent angular sectors of the pole $p_{1}$ (going counterclockwise around the pole $p_{1}$ , see Figures REF and REF where one can trace the path of integration on the phase plane); which is equivalent to the fact that the image on ${\\mathcal {R}}_{X}$ of the integration path remains on the same sheet.", "Hence $\\lambda _{2\\, 3}=\\widetilde{p}_{2} - \\widetilde{p}_{1} \\in {\\mathbb {C}}^{*}$ .", "Continuing with the weight $\\widetilde{\\lambda }_{2\\, 4}$ , in this case the path of integration, starting from $\\infty _{1}$ passing through $p_{1}$ and ending at $p_{3}$ (going counterclockwise around the pole $p_{1}$ ), crosses three adjacent angular sectors of the pole $p_{1}$ ; this in turn is equivalent to the fact that the image of the integration path crosses three adjacent half–planes on ${\\mathcal {R}}_{X}$ , i.e.", "goes “up” on the ramified surface.", "Hence $\\widetilde{\\lambda }_{2\\, 4}=(\\widetilde{p}_{3}-\\widetilde{p}_{1})\\,{\\text{e}}^{i2\\pi } \\notin {\\mathbb {C}}^{*}$ .", "For the calculation of $\\widetilde{\\lambda }_{3\\, 5}$ , the integration path must take into account that the previous integration path was coming from $p_{1}$ , then the integration continues past $p_{2}$ and finally ends at $\\infty _{2}$ .", "Since the path crosses three adjacent angular sectors of $p_{2}$ then $\\widetilde{\\lambda }_{3\\, 5}= (a_{1} - \\widetilde{p}_{2})\\,{\\text{e}}^{i 2\\pi } \\notin {\\mathbb {C}}^{*}$ .", "The final calculation, $\\widetilde{\\lambda }_{4\\, 6}$ is the same as the previous one except with 
$p_{3}$ and $\\infty _{3}$ replacing $p_{2}$ and $\\infty _{2}$ respectively.", "Thus, once again, $\\widetilde{\\lambda }_{4\\, 6}= (a_{1} - \\widetilde{p}_{3})\\,{\\text{e}}^{i 2\\pi } \\notin {\\mathbb {C}}^{*}$ ." ], [ "Why is classification of ${E}(r,d)$ difficult?", "Let $X$ be in ${E}(r,d)$ .", "Recall from section §, that the graph of $\\Psi _{X}$ is the flat Riemann surface ${\\mathcal {R}}_{X}$ and in order to specify completely the function $\\Psi _{X}$ , it is necessary to not only specify the finite asymptotic and critical values in ${\\mathbb {C}}_{t}$ , but also the relative position of the corresponding branch points on ${\\mathcal {R}}_{X}$ .", "Remark 7 In order to get an accurate description, two combinatorial implicit obstacles are the following ones.", "No canonical order can be given to the finite asymptotic and critical values $\\lbrace t_{\\mathfrak {a}}\\rbrace \\subset {\\mathbb {C}}_{t}$ of $\\Psi _X$ .", "There is no preferred/canonical horizontal level 0, ${\\mathbb {C}}_{\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}} \\backslash \\lbrace \\text{suitable branch cuts}\\rbrace \\ \\ \\subset \\ \\ {\\mathcal {R}}_{X}$ , that is to be chosen to start the description of ${\\mathcal {R}}_{X}$ as a combinatorial object.", "In particular, note that condition (D.1) will have a repercussion on the enumeration of the vertices in Definition REF , while condition (D.2) is associated to the choice of vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ and $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ in the minimality condition of Definition REF .", "Moreover, these difficulties will also arise in the choice of arguments for the diagonals $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\in \\widetilde{{\\mathbb {C}}^*}$ , associated to pairs of finite asymptotic or critical values, as will be made explicit in Example REF ." 
], [ "Proof of Main Theorem: description of the family ${E}(r,d)$ via combinatorial scheme", "That ${E}(r,d)$ is a complex manifold of dimension $r+d+1$ is obvious; for a proof in greater generality see [2].", "For the bijection ${E}(r,d) \\cong \\left\\lbrace \\big [ \\Lambda _{X}\\big ] \\right\\rbrace $ : the containment ${E}(r,d)\\subset \\left\\lbrace \\big [ \\Lambda _{X}\\big ] \\right\\rbrace $ is proved in §REF , and the other containment in §REF .", "The classes of $(r,d)$ –configuration trees will be explained in §REF .", "From $X\\in {E}(r,d)$ to an $(r,d)$ –configuration tree $\\Lambda _{X}$ $\\bullet $ The trivial case: $\\Psi _X$ has exactly one finite asymptotic or critical value: From Lemma REF , only the following two cases are possible, $X(z)=\\frac{1}{\\lambda (z-p)^{r}} \\frac{\\partial }{\\partial z}$ , i.e.", "$(r,d)=(r,0)$ , or $X(z)=\\lambda ^{-1} {\\text{e}}^z \\frac{\\partial }{\\partial z}$ , i.e.", "$(r,d)=(0,1)$ , where $p\\in {\\mathbb {C}}_{z}$ and $\\lambda \\ne 0$ .", "Example REF provides the corresponding $\\Lambda _{X}$ for (2).", "$\\bullet $ The non–trivial case: $\\Psi _{X}$ has two or more finite asymptotic or critical values, i.e.", "$d+n\\ge 2$ : Considering the surface ${\\mathcal {R}}_{X}$ , recall its divisor (REF ).", "1.", "Vertices of $\\Lambda _{X}$.", "Let the vertices be the triads obtained from the divisor $V =\\Big \\lbrace \\big (p_{\\iota },\\widetilde{p}_{\\iota },-\\mu _{\\iota } \\big ) \\Big \\rbrace _{\\iota =1}^{n}\\cup \\Big \\lbrace \\big ( \\infty _{\\sigma },a_{\\sigma }, \\infty \\big ) \\Big \\rbrace _{\\sigma =1}^{d}=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big ) \\Big \\rbrace _{{\\mathfrak {a}}=1}^{d+n}.$ There are $d+n$ vertices.", "2.", "Edges of $\\Lambda _{X}$.", "From Definition REF , the diagonals, associated to different pairs $t_{{\\mathfrak
{a}}},t_{{\\mathfrak {r}}}$ of finite asymptotic or critical values, are oriented segments $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}=\\overline{(z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}}) (z_{{\\mathfrak {r}}},t_{{\\mathfrak {r}}})}$ in ${\\mathcal {R}}_{X}$ , whose endpoints project down, via $\\pi _{X,2}$ , to the finite asymptotic or critical values $t_{{\\mathfrak {a}}}$ , $t_{{\\mathfrak {r}}}$ .", "From Lemma REF it follows that there is at least one diagonal associated to each finite asymptotic or critical value.", "Hence the set of diagonals form the edges of a connected oriented graph.", "Note that if a cycle appears on the graph, the branch points corresponding to the vertices in the cycle all lie on the same sheet of ${\\mathcal {R}}_{X}$ .", "Such a subgraph will be called a horizontal subgraph.", "Moreover, each horizontal subgraph formed by the set of branch points sharing a same sheet of ${\\mathcal {R}}_X$ , say $\\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\ell $};}} =(z_\\ell ,t_\\ell ,\\mu _\\ell )\\rbrace _{\\ell =1}^s$ with $t_1\\ge t_2\\ge \\ldots \\ge t_s$ , together with the set of diagonals (edges) forms a complete digraph $K_s$ with $s(s-1)$ oriented edges.", "However, by eliminating the appropriate edges from $K_s$ we can always obtain an oriented, traversable, horizontal subtree such that $\\begin{array}{clc}&\\text{no branch point } \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\ell $};}} \\text{ is in theopen horizontal strip}&\\\\& \\pi _{X,2}^{-1}\\left(\\lbrace t\\in {\\mathbb {C}}\\ \\vert \\ {\\mathfrak {Im}\\left(t_j\\right)}<{\\mathfrak {Im}\\left(t\\right)}<{\\mathfrak {Im}\\left(t_k\\right)}\\rbrace \\right)\\subset {\\mathcal {R}}_X, &\\\\&\\text{defined by any edge } \\Delta _{j k},\\text{ in }K_s\\text{ whose endpoints}&\\\\&\\text{are not } \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\ell $};}} 
.&\\end{array}$ As will become clear, (REF ) will be the preferred horizontal subtree condition of Definition REF .", "On another note, by simple inspection (at least) one of the following cases occurs.", "$\\text{A) \\quad }\\Delta _{\\iota \\kappa } = \\overline{(p_{\\iota },\\widetilde{p}_{\\iota }) (p_{\\kappa },\\widetilde{p}_{\\kappa })},\\qquad \\pi _{X,2}\\big ((p_{\\iota },\\widetilde{p}_{\\iota })\\big )=\\widetilde{p}_{\\iota },\\ \\pi _{X,2}\\big ((p_{\\kappa },\\widetilde{p}_{\\kappa })\\big )=\\widetilde{p}_{\\kappa }, \\\\\\text{for some }\\iota , \\kappa \\in \\lbrace 1,\\dots ,n\\rbrace , \\quad \\widetilde{p}_{\\iota } \\ne \\widetilde{p}_{\\kappa }.$ $\\text{B) \\quad }\\Delta _{\\sigma \\iota } = \\overline{(\\infty _{\\sigma },a_{\\sigma }) (p_{\\iota },\\widetilde{p}_{\\iota })}\\text{ or }\\Delta _{\\iota \\sigma } = \\overline{(p_{\\iota },\\widetilde{p}_{\\iota }) (\\infty _{\\sigma },a_{\\sigma })},\\\\\\pi _{X,2}\\big ((\\infty _{\\sigma },a_{\\sigma })\\big )=a_{\\sigma },\\ \\pi _{X,2}\\big ((p_{\\iota },\\widetilde{p}_{\\iota })\\big )=\\widetilde{p}_{\\iota }, \\\\\\text{for some }\\sigma \\in \\lbrace 1,\\dots ,d\\rbrace , \\iota \\in \\lbrace 1,\\dots ,n\\rbrace , \\quad a_{\\sigma }\\ne \\widetilde{p}_{\\iota }.$ $\\text{C) \\quad }\\Delta _{\\sigma \\rho } = \\overline{(\\infty _{\\sigma },a_{\\sigma }) (\\infty _{\\rho },a_{\\rho })},\\qquad \\pi _{X,2}((\\infty _{\\sigma },a_{\\sigma }))=a_{\\sigma },\\ \\pi _{X,2}((\\infty _{\\rho },a_{\\rho }))=a_{\\rho }, \\\\\\text{for some }\\sigma ,\\rho \\in \\lbrace 1,\\ldots ,d\\rbrace , \\quad a_{\\sigma }\\ne a_{\\rho }.$ We thus obtain a non–weighted, oriented connected traversable tree $\\Big \\lbrace \\big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big )\\big \\rbrace _{{\\mathfrak {a}}= 1}^{d+n} ;\\ \\big \\lbrace \\Delta _{{\\mathfrak
{a}}{\\mathfrak {r}}} \\big \\rbrace \\Big \\rbrace .$ Without loss of generality, we assume that $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ is the starting leaf of the tree.", "3.", "Weights of $\\Lambda _{X}$.", "As an aid, the reader can follow the construction by considering Example REF .", "[We will include such references inside square brackets.]", "For the assignment of weights $\\lbrace \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\rbrace $ to the edges $\\lbrace \\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}\\rbrace $ we proceed to traverse the tree.", "We start to traverse the tree from the starting leaf $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ .", "The edge corresponding to the leaf is $\\Delta _{1{\\mathfrak {z}}}$ .", "We define the weight as $\\widetilde{\\lambda }_{1{\\mathfrak {z}}} := \\lambda _{1{\\mathfrak {z}}} = \\overline{(z_1,t_1) (z_{\\mathfrak {z}},t_{\\mathfrak {z}})}= t_{\\mathfrak {z}}-t_1\\in {\\mathbb {C}}^{*}$ .", "The branch points corresponding to $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ and $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ share the same sheet in ${\\mathcal {R}}_{X}$ .", "[Referring to Example REF , our first edge is $\\Delta _{1\\, 2}$ , and condition (B) is satisfied.]", "If we have only two vertices we have completed the construction of $\\Lambda _{X}$ .", "When there are at least 3 vertices, assume we are at the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ , we choose a vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ , with ${\\mathfrak {a}}\\ne 1,{\\mathfrak {z}}$ , such that the edge $\\Delta _{{\\mathfrak {z}}\\, {\\mathfrak {a}}}$ exists.", "The associated weight is 
defined as in (REF ) by $\\widetilde{\\lambda }_{{\\mathfrak {z}}\\, {\\mathfrak {a}}}=\\big (t_{\\mathfrak {a}}- t_{\\mathfrak {z}}\\big ) \\ {\\text{e}}^{i 2\\pi K({\\mathfrak {z}},{\\mathfrak {a}})}=\\left|t_{\\mathfrak {a}}- t_{\\mathfrak {z}}\\right| \\ {\\text{e}}^{i\\arg _{0}(t_{\\mathfrak {a}}- t_{\\mathfrak {z}})+ i2\\pi K({\\mathfrak {z}},{\\mathfrak {a}})},$ where $2\\pi K({\\mathfrak {z}},{\\mathfrak {a}})$ is the argument between the sheets containing $\\Delta _{1\\,{\\mathfrak {z}}}$ and $\\Delta _{{\\mathfrak {z}}{\\mathfrak {a}}}$ .", "Geometrically $K({\\mathfrak {z}},{\\mathfrak {a}}) \\in {\\mathbb {Z}}$ corresponds to the number of sheets in ${\\mathcal {R}}_{X}$ that separate the diagonals $\\Delta _{1 {\\mathfrak {z}}}$ and $\\Delta _{{\\mathfrak {z}}{\\mathfrak {a}}}$ .", "As is usual language, going around a branch point counterclockwise corresponds to going “upwards” on the ramified surface and hence the number that separates the sheets is positive.", "Similarly going around the branch point clockwise corresponds to going “downwards”.", "Furthermore going $K$ times around a finitely ramified branch point of ramification index $\\mu $ is equivalent to going around it $K\\,(\\bmod {\\mu })$ times.", "[Referring to Example REF , the weight $\\lambda _{2\\, 3}\\in {\\mathbb {C}}^{*}$ since on ${\\mathcal {R}}_{X}$ the diagonals $\\Delta _{1\\,2}$ and $\\Delta _{2\\,3}$ lie on the same sheet; however the weight $\\lambda _{2\\, 4}\\in \\widetilde{{\\mathbb {C}}^{*}}$ , since on ${\\mathcal {R}}_{X}$ the diagonals $\\Delta _{1\\,2}$ and $\\Delta _{2\\,4}$ lie on different sheets.]", "Continue the assignment of weights as in (b) for all the edges that contain the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ .", "This exhausts the edges containing the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {z}}$};}} $ .", "Continue traversing the
tree and assigning the weights as in the previous step until all the vertices are exhausted.", "[Referring to Example REF , the last edge to be considered is $e_{4\\, 6}$ with corresponding weight $\\widetilde{\\lambda }_{4\\, 6}=(a_{1}-\\widetilde{p}_{3})\\,{\\text{e}}^{i2\\pi }$ .]", "We have thus constructed an $(r,d)$ –configuration tree $\\Lambda _X=\\Big \\lbrace \\big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big )\\big \\rbrace _{{\\mathfrak {a}}= 1}^{d+n} ;\\ \\big \\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}},\\ \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}} ) \\big \\rbrace \\Big \\rbrace $ associated to $\\Psi _{X}$ .", "Remark 8 Non–uniqueness of $(r,d)$ –configuration trees associated to $\\Psi _{X}$ .", "1.", "There is no canonical way of choosing the non–weighted, oriented connected traversable tree given by (REF ).", "This will change the values of $K({\\mathfrak {a}},{\\mathfrak {r}})$ , and hence of the weights $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}$ of the edges of $\\Lambda _{X}$ .", "2.", "The choice of the weight (in particular the argument) when considering an edge that connects a pole vertex with any other type of vertex is not unique because of the modular arithmetic involved.", "For instance, if we have an edge $(\\Delta _{\\iota {\\mathfrak {r}}},\\lambda _{\\iota {\\mathfrak {r}}})$ connecting a pole vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} =(p_{\\iota },\\widetilde{p}_{\\iota },\\mu _{\\iota })$ to any other vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} $ , then changing $\\lambda _{\\iota {\\mathfrak {r}}}$ by a factor of ${\\text{e}}^{i2\\pi (\\mu _{\\iota }+1)\\ell }$ for $\\ell \\in {\\mathbb {Z}}$ , will give rise to a 
different $(r,d)$ –configuration tree associated to the same $\\Psi _{X}$ .", "These issues will be addressed in §REF .", "From an $(r,d)$ –configuration tree $\\Lambda _{X}$ to ${\\mathcal {R}}_{X}$ associated to $\\Psi _{X}$ In this direction of the proof, we abuse notation by using $\\Lambda _{X}$ and ${\\mathcal {R}}_{X}$ instead of $\\Lambda $ and ${\\mathcal {R}}_{\\Lambda }$ .", "Let $\\Lambda _X$ be an $(r,d)$ –configuration tree as in (REF ).", "The construction will proceed in three steps.", "We will first construct the $(r,d)$ –skeleton of $\\Lambda _{X}$ (a “blow–up” of $\\Lambda _{X}$ , see Definition REF ), describing the embedding of $\\Lambda _{X}$ in $\\overline{{\\mathbb {C}}}_{z}\\times {\\widehat{\\mathbb {C}}}_{t}$ .", "As a second step, from the $(r,d)$ –skeleton of $\\Lambda _{X}$ we will construct a connected Riemann surface with boundary, the soul of $\\Lambda _{X}$ (see Definition REF ).", "As the third and final step, we shall glue infinite helicoids on the boundaries of the soul to obtain the simply connected Riemann surface ${\\mathcal {R}}_X$ .", "Figure REF presents a particular example that will help the reader follow the construction.", "Figure: $(r,3)$ –configuration tree $\\Lambda _{X}$ , $\\mathbf {r=\\mu _{1}+\\mu _{2}}$ , and its $(r,3)$ –skeleton.To show the possible complexities that arise in the proof of the Main Theorem,we present an example with two poles and 3 finite asymptotic values corresponding to the essential singularity at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ .The vertices are \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $= (\\infty _1,a_{1},\\infty )$ , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} $= (\\infty _1,a_{2},\\infty )$ , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} $= (p_{1},\\widetilde{p}_{1}, -\\mu _{1} )$ , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} $= (p_{2},\\widetilde{p}_{2}, -\\mu _{2} )$ , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} $= (\\infty _2,a_{2},\\infty )$ .The starting vertex is \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} , with weight $\\lambda _{1\\, 2}=a_{2}-a_{1}\\in {\\mathbb {C}}^{*}$ , while $\\widetilde{\\lambda }_{2\\, 3},\\widetilde{\\lambda }_{2\\, 4},\\widetilde{\\lambda }_{3\\, 5}\\in \\widetilde{{\\mathbb {C}}^{*}}$ .The weights $\\lambda _{2\\, 3}=\\widetilde{p}_{1}-a_{2}$ , $\\lambda _{2\\, 4}=\\widetilde{p}_{2}-a_{2}$ , and $\\lambda _{3\\, 5}=a_{2}-\\widetilde{p}_{1}$ are also elements of ${\\mathbb {C}}^{*}$ : in the description of the $(r,3)$ –skeleton of $\\Lambda _{X}$ the information about how many sheets we have gone “up” or “down” the Riemann surface is now included.For instance $\\widetilde{\\lambda }_{2\\, 3}={\\text{e}}^{i2\\pi }\\lambda _{2\\, 3}$ and $\\widetilde{\\lambda }_{3\\, 5}={\\text{e}}^{i2\\pi s}\\lambda _{3\\, 5}$ , with $s=-2\\mod {(}\\mu _{1}+1)$ .", "1.", "Construction of the $(r,d)$ –skeleton of $\\Lambda _{X}$.", "The $(r,d)$ –skeleton of $\\Lambda _{X}$ will contain the same information as $\\Lambda _{X}$ ; it has the disadvantage of being more cumbersome to express, but the advantage that it will enable us to identify the equivalence classes of $\\Lambda _{X}$ in §REF .", "First recall that we have two possible types of vertices: essential vertices $ \\text{[baseline=(char.base)]{\\node
[shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} =(\\infty _{\\sigma }, a_{\\sigma }, \\infty )$ and pole vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} =(p_{\\iota },\\widetilde{p}_{\\iota },-\\mu _{\\iota })$ .", "Moreover for each weighted edge, $(e_{{\\mathfrak {a}}{\\mathfrak {r}}}, \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}})$ , that starts at $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ and ends at $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} $ , the weight can be expressed as $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}=\\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}}\\ {\\text{e}}^{i 2\\pi K({\\mathfrak {a}},{\\mathfrak {r}})},\\quad \\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}}\\in {\\mathbb {C}}^{*}, \\quad K({\\mathfrak {a}},{\\mathfrak {r}})\\in {\\mathbb {Z}}.$ For each essential vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} =(\\infty _{\\sigma }, a_{\\sigma }, \\infty )$ , of $\\Lambda _{X}$ , let $K_{\\max }=\\max \\limits _{{\\mathfrak {r}}}\\lbrace 0,K(\\sigma ,{\\mathfrak {r}})\\rbrace $    and    $K_{\\min }=\\min \\limits _{{\\mathfrak {r}}}\\lbrace 0,K(\\sigma ,{\\mathfrak {r}})\\rbrace $ , where the maximum and minimum are taken over all the edges that start at $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} $ and end at the respective $\\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} \\rbrace $ .", "Construct a vertical tower associated to $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} $; that is an oriented linear graph consisting of exactly $(K_{\\max } - K_{\\min } +1)$ copies of the vertex $ 
\\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\sigma $};}} $ joined by $(K_{\\max } - K_{\\min })$ vertical edges (without weights).", "We shall assign, consecutively, to each vertex of the vertical tower a level: an integer starting at $K_{\\min }$ and ending at $K_{\\max }$ .", "Call the increasing direction up and the decreasing direction down.", "The vertical tower will have vertices of valence 1 at the extreme levels $K_{\\min }$ and $K_{\\max }$ , otherwise of valence 2.", "[See vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} $ in Figure REF .]", "For each pole vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} =(p_{\\iota },\\widetilde{p}_{\\iota }, -\\mu _{\\iota })$ , of the original $\\Lambda _{X}$ , construct a vertical cycle of length $\\mu _\\iota +1$ associated to $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} $; that is, an oriented cyclic graph consisting of exactly $\\mu _\\iota +1$ copies of the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$\\iota $};}} $ joined by $\\mu _{\\iota }+1$ vertical edges (without weights).", "The vertices on the vertical cycle are also assigned a level: in this case arithmetic modulo $(\\mu _{\\iota }+1)$ is to be used.", "The vertical cycle of length $\\mu _{\\iota }+1$ will only have vertices of valence 2.", "Once again, call one direction of the vertical cycle up and the other direction down.", "[See vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} $ in Figure REF .]", "Definition 6.1 The $(r,d)$ –skeleton of $\\Lambda _{X}$ is the oriented graph obtained by: Replacing each essential or pole vertex with its associated vertical tower or vertical cycle, respectively.", "The edge, $(e_{{\\mathfrak {a}}{\\mathfrak {r}}}, \\widetilde{\\lambda
}_{{\\mathfrak {a}}{\\mathfrak {r}}})\\in \\Lambda _{X}$ , is to end at the level 0 vertex of the vertical tower or vertical cycle associated to $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {r}}$};}} $ .", "Furthermore it should start at the level $K({\\mathfrak {a}},{\\mathfrak {r}})$ vertex of the vertical tower or vertical cycle associated to $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ , noting that if $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ is a pole vertex, modular arithmetic is to be used.", "Finally replace the weights $\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}$ by $\\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}}$ .", "Remark 9 The $(r,d)$ –skeleton of $\\Lambda _{X}$ has the following properties (also see Diagram (REF )): 1.", "The edges of the $(r,d)$ –skeleton of $\\Lambda _{X}$ are divided in two sets: the vertical edges (alluded to in (I) and (II) above), and the horizontal edges of the form $(e_{{\\mathfrak {a}}{\\mathfrak {r}}}, \\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}})$ with $\\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}}=\\left|t_{{\\mathfrak {r}}} - t_{{\\mathfrak {a}}}\\right| \\ {\\text{e}}^{i\\arg _{0}(t_{{\\mathfrak {r}}} - t_{{\\mathfrak {a}}})}\\in {\\mathbb {C}}^{*}$ , see (REF ).", "2.", "Consider two horizontal edges $(e_{{\\mathfrak {z}}{\\mathfrak {a}}}, \\lambda _{{\\mathfrak {z}}{\\mathfrak {a}}})$ and $(e_{{\\mathfrak {a}}{\\mathfrak {r}}}, \\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}})$ in the $(r,d)$ –skeleton of $\\Lambda _{X}$ that share the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ in the original $(r,d)$ –configuration tree $\\Lambda _{X}$ .", "We shall say that: $\\bullet $ The horizontal edge $(e_{{\\mathfrak {a}}{\\mathfrak {r}}}, \\lambda _{{\\mathfrak {a}}{\\mathfrak 
{r}}})$ is $K({\\mathfrak {a}},{\\mathfrak {r}})$ levels upwards or downwards of the edge $(e_{{\\mathfrak {z}}{\\mathfrak {a}}}, \\lambda _{{\\mathfrak {z}}{\\mathfrak {a}}})$ in the $(r,d)$ –skeleton of $\\Lambda _{X}$ depending on whether $K({\\mathfrak {a}},{\\mathfrak {r}})$ is positive or negative respectively.", "$\\bullet $ The edges share the same horizontal level, when $K({\\mathfrak {a}},{\\mathfrak {r}})=0$ .", "Geometrically, $K({\\mathfrak {a}},{\\mathfrak {r}})$ can be recognized as a) the number of sheets in ${\\mathcal {R}}_{X}$ separating the diagonals $\\Delta _{{\\mathfrak {z}}{\\mathfrak {a}}}$ and $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ or equivalently b) the number of levels in the $(r,d)$ –skeleton of $\\Lambda _{X}$ separating the edges $(e_{{\\mathfrak {z}}{\\mathfrak {a}}},\\lambda _{{\\mathfrak {z}}{\\mathfrak {a}}})$ and $(e_{{\\mathfrak {a}}{\\mathfrak {r}}},\\lambda _{{\\mathfrak {a}}{\\mathfrak {r}}})$ .", "3.", "From the minimality condition the weight $\\lambda _{1\\,2}\\in {\\mathbb {C}}^{*}$ , hence the horizontal subtree containing $(e_{1\\,2}, \\lambda _{1\\,2})$ will be called the horizontal level 0 subtree of the $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "4.", "By collapsing the vertical edges of the $(r,d)$ –skeleton of $\\Lambda _{X}$ we (almost) recover the original $(r,d)$ –configuration tree $\\Lambda _{X}$ .", "The $(r,d)$ –configuration tree $\\Lambda _{X}$ is a blow–down of the $(r,d)$ –skeleton of $\\Lambda _{X}$ , see Diagram (REF ) and Figure REF for an example.", "2.", "Construction of the soul of $\\Lambda _X$ from the $(r,d)$ –skeleton of $\\Lambda _{X}$.", "It will be convenient to construct an intermediate Riemann surface with boundary, the soul of $\\Lambda _{X}$ (see definition below), before completing the construction of ${\\mathcal {R}}_{X}$ .", "On the $(r,d)$ –skeleton of $\\Lambda _{X}$ there are two types of vertices; those that do not share horizontal edges and those that share horizontal edges 
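The level bookkeeping for vertical towers and vertical cycles described earlier can be sketched in code (a minimal illustration of our own, not part of the paper's construction; the helper names are hypothetical, and we adopt the convention that tower levels run from $K_{\\min }$ to $K_{\\max }$ , so that the tower has $K_{\\max }-K_{\\min }+1$ vertices and level 0 is always present):

```python
# Hypothetical helpers sketching the level bookkeeping for vertical towers
# (essential vertices) and vertical cycles (pole vertices).

def tower_levels(k_values):
    """Levels of the vertical tower of an essential vertex.

    k_values: the integers K(sigma, r) over the edges leaving the vertex.
    Convention here: levels run from K_min to K_max (both bracket 0), so the
    tower has K_max - K_min + 1 vertices and level 0 always exists.
    """
    k_max = max([0] + list(k_values))
    k_min = min([0] + list(k_values))
    return list(range(k_min, k_max + 1))

def cycle_level(k, mu):
    """Attachment level on the vertical cycle of a pole vertex of order -mu:
    levels are taken modulo mu + 1."""
    return k % (mu + 1)
```

For instance, `tower_levels([2, -1])` yields the four levels `[-1, 0, 1, 2]`, and on a vertical cycle with $\\mu =2$ the level of $K=-1$ wraps around to 2.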
(the vertices that belong to a horizontal subtree).", "For simplicity, let us first assume that on any given horizontal subtree the asymptotic and critical values $t_{{\\mathfrak {r}}}$ in (REF ) (associated to these vertices) all lie on different horizontal trajectories of $({\\mathbb {C}}_{t},\\frac{\\partial }{\\partial t})$ .", "Remark 10 Combinatorial aspects of a sheet arising from the $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "We recall Definition REF .", "Case 1.", "From a vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ with only vertical edges attached to it (there are only two such vertical edges), we obtain a sheet ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}}\\rbrace $ with only one branch cut $L_{{\\mathfrak {a}}}$ .", "Note that the two boundaries $[t_{\\mathfrak {a}},\\infty )_{\\pm }$ , of the sheet ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}}\\rbrace $ , correspond to the vertical edges.", "Case 2.", "From a horizontal subtree in the $(r,d)$ –skeleton of $\\Lambda _{X}$ , say with vertices $\\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}_{\\ell }$};}} \\rbrace $ , we obtain a sheet ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}_{\\ell }} \\rbrace $ .", "Once again the edges $e_{{\\mathfrak {a}}_{\\ell } {\\mathfrak {r}}}$ correspond to the diagonals $\\Delta _{{\\mathfrak {a}}_{\\ell } {\\mathfrak {r}}}\\subset {\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}_{\\ell }} \\rbrace $ .", "We now start the construction of ${\\mathcal {R}}_X$ from the $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "Replace each vertex of the $(r,d)$ –skeleton of $\\Lambda _{X}$ that does not share a horizontal edge by a sheet ${\\mathbb {C}}_{t}\\backslash L_{{\\mathfrak {a}}}$ .", "(Recall that all the vertices of the $(r,d)$ –skeleton of $\\Lambda _{X}$ are either the original vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ of the original $(r,d)$ –configuration tree $\\Lambda _{X}$ , or copies of them; thus any vertex in the $(r,d)$ –skeleton of $\\Lambda _{X}$ projects to a unique vertex on $\\Lambda _{X}$ .)", "Given a horizontal subtree with $s$ vertices, say $\\lbrace v_\\ell \\rbrace _{\\ell =1}^{s}$ , denote by $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}_\\ell $};}} $ the vertex, of the original $(r,d)$ –configuration tree, to which the vertex $v_\\ell $ projects down.", "Replace the given horizontal subtree with a sheet ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}_\\ell } \\rbrace _{\\ell =1} ^{s} ,$ where each $L_{{\\mathfrak {a}}_\\ell } $ is the horizontal branch cut associated to the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}_\\ell $};}} $ .", "Since all the values $\\lbrace t_{{\\mathfrak {a}}_\\ell }\\rbrace $ lie on different horizontal trajectories of $\\frac{\\partial }{\\partial t}$ , none of the horizontal branch cuts $L_{{\\mathfrak {a}}_\\ell }$ intersect in ${\\mathbb {C}}_{t}$ .", "Continue this replacement process for every horizontal subtree.", "Note that we obtain stacked copies of ${\\mathbb {C}}_{t}\\backslash L_{{\\mathfrak {a}}}$ and ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{{\\mathfrak {a}}_\\ell } \\rbrace _{\\ell =1} ^{s}$ , but they retain their relative position with respect to the $(r,d)$ –skeleton of $\\Lambda _{X}$ , by the fact that we still have not removed the vertical edges of the $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "We now replace the vertical towers and vertical cycles in the $(r,d)$ –skeleton of $\\Lambda _{X}$ with finite helicoids or cyclic helicoids respectively (recall Definition REF ).", "On each vertical tower or vertical cycle, say the one associated to the vertex $ \\text{[baseline=(char.base)]{\\node
[shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ , glue together the horizontal branch cuts by alternating the boundaries of ${\\mathbb {C}}_{t}\\backslash L_{{\\mathfrak {a}}}$ , so as to form finite helicoids or cyclic helicoids over the vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ , making sure that all the finite helicoids go upwards when turning counter–clockwise around the vertex.", "In the case where a vertical tower is involved, the finite helicoid has two boundaries consisting of $[t_{{\\mathfrak {a}}},\\infty )_{+}$ and $[t_{{\\mathfrak {a}}},\\infty )_{-}$ ; in the case where a vertical cycle is involved we obtain a cyclic helicoid, that is a finite helicoid whose boundaries have been identified/glued.", "Definition 6.2 The soul of the $(r,d)$ –configuration tree $\\Lambda _{X}$ is the Riemann surface with boundary described by (a)–(c) above.", "Remark 11 The soul is a simply connected Riemann surface that has as boundary $d$ horizontal branch cuts $[a_\\sigma ,\\infty )_{-}$ $\\cup \\ [a_\\sigma ,\\infty )_{+}$ associated exclusively to the finite asymptotic values $\\lbrace a_{\\sigma }\\rbrace _{\\sigma =1}^{d}$ $\\subset {\\mathbb {C}}_{t}$ .", "In particular, for $X(z)=\\frac{1}{P(z)}{\\text{e}}^{E(z)}\\frac{\\partial }{\\partial z}\\in {E}(r,d)$ the soul of $\\Lambda _X$ is the Riemann surface ${\\mathcal {R}}_{X_0}$ of $X_0(z)=\\frac{1}{P(z)}\\frac{\\partial }{\\partial z} \\in {E}(r,0)$ with $d$ branch cuts $\\lbrace L_\\sigma \\rbrace $ at $(\\infty _{\\sigma }, a_{\\sigma })$ ; here $\\sigma $ enumerates the finite asymptotic values as in (REF ).", "Suppose now that on some horizontal subtree there are at least two asymptotic or critical values $\\lbrace t_{{\\mathfrak {a}}}\\rbrace _{{\\mathfrak {a}}=1}^{d+n}\\subset ({\\mathbb {C}},\\frac{\\partial }{\\partial t})$ , arising from the vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} $ , that lie on the same horizontal trajectory of $\\frac{\\partial }{\\partial t}$ .", "Then by Sard's theorem there is a small enough angle $\\theta >0$ such that the set of values $\\lbrace t_{{\\mathfrak {a}}}\\rbrace \\subset ({\\mathbb {C}},{\\text{e}}^{i\\theta }\\frac{\\partial }{\\partial t})$ lies on $d+n$ different trajectories of ${\\text{e}}^{i\\theta }\\frac{\\partial }{\\partial t}$ (in fact any small enough angle $\\theta \\ne 0$ will suffice).", "Proceed with the construction (a)–(e) as above, but using ${\\text{e}}^{i\\theta } L_{{\\mathfrak {a}}}$ instead of $L_{{\\mathfrak {a}}}$ .", "Note that for small enough $\\theta >0$ all the surfaces obtained are homeomorphic.", "Finally let $\\theta \\rightarrow 0^{+}$ and consider the limiting surface.", "Example 6 1) $X\\in {E}(r,0)$ , so $\\Psi _{X}$ is a polynomial, in which case the soul of $\\Lambda _{X}$ is ${\\mathcal {R}}_{X}$ .", "See Figure REF .", "The soul is shaded blue in all the figures.", "2) $X(z)={\\text{e}}^{z}\\frac{\\partial }{\\partial z}$ , so $\\Psi _{X}$ is an exponential, in which case the soul of $\\Lambda _{X}$ consists of ${\\mathbb {C}}_{t}\\backslash L_{1}$ , a single sheet with exactly one branch cut.", "See Figure REF and figure 11.a in [1].", "3) $X(z)={\\text{e}}^{z^{2}}\\frac{\\partial }{\\partial z}$ , so $\\Psi _{X}$ is the error function, in which case the soul of $\\Lambda _{X}$ consists of ${\\mathbb {C}}_{t}\\backslash (L_{1}\\cup L_{2})$ , a single sheet with exactly two branch cuts.", "See Figure REF and figure 11.b in [1].", "3.", "Construction of ${\\mathcal {R}}_{X}$ from the soul of $\\Lambda _X$ .", "To each of the $2d$ boundaries of the soul of $\\Lambda _{X}$ , glue a semi–infinite helicoid to obtain a simply connected Riemann surface ${\\mathcal {R}}_{X}$ .", "This surface has exactly $d$ logarithmic branch points over $d$ finite asymptotic values and $n$ finitely
ramified branch points with ramification indices that add up to $r+n$ .", "In fact, ${\\mathcal {R}}_{X}$ is realized via Maskit surgeries with $d$ exp–blocks and $r$ quadratic blocks, hence following M. Taniguchi [23], [24], there exist polynomials $E(z)$ of degree $d$ and $P(z)$ of degree $r$ arising from $\\Lambda _{X}$ , which characterize the function $\\Psi _{X}\\in SF_{r,d}=\\left\\lbrace \\int _{z_0}^{z}P(\\zeta )\\,{\\text{e}}^{-E(\\zeta )} d\\zeta \\ + b \\ \\big {\\vert } \\ P,E\\in {\\mathbb {C}}[z], \\ \\deg {P}=r, \\ \\deg {E}=d\\right\\rbrace .$ Finally assign to ${\\mathcal {R}}_{X}$ a flat metric $\\big ({\\mathcal {R}}_{X},\\pi _{X,2}^{*}(\\frac{\\partial }{\\partial t})\\big )$ induced by $\\pi _{X,2}$ .", "By Proposition REF , our sought after vector field is $X(z)=\\Psi _{X}^{*}(\\frac{\\partial }{\\partial t})(z)= \\frac{1}{P(z)}\\, {\\text{e}}^{E(z)} \\frac{\\partial }{\\partial z}\\in {E}(r,d)$ as required.", "Remark 12 An $(r,d)$ –configuration tree has all $K({\\mathfrak {a}},{\\mathfrak {r}})\\equiv 0$ if and only if on the corresponding Riemann surface ${\\mathcal {R}}_{X}$ all the diagonals share the same sheet ${\\mathbb {C}}_{t}\\backslash \\lbrace L_{\\mathfrak {a}}\\rbrace _{{\\mathfrak {a}}=1}^{d+n}$ .", "Remark 13 Note that the $(r,d)$ –configuration tree $\\Lambda _{X}$ is an abstract graph and, roughly speaking, the $(r,d)$ –skeleton of $\\Lambda _{X}$ , is a tree “embedded” in ${\\mathcal {R}}_{X}$ as a subset of $\\overline{{\\mathbb {C}}}_{z} \\times {\\widehat{\\mathbb {C}}}_{t}$ .", "It is not a genuine embedding since the branch points of ${\\mathcal {R}}_{X}$ are replaced by a vertical tower or vertical cycle during the blow–up process of $\\Lambda _{X}$ (the vertical edges of the $(r,d)$ –skeleton of $\\Lambda _{X}$ indicate how many sheets separate the diagonals).", "In this sense, both the $(r,d)$ –configuration tree $\\Lambda _{X}$ and the $(r,d)$ –skeleton of $\\Lambda _{X}$ project to a graph $\\pi _{X,2}(\\Lambda 
_{X})\\subset {\\mathbb {C}}_{t}$ .", "See Figures REF –REF and REF –REF , in particular $\\pi _{X,2}(\\Lambda _{X})$ need not be a tree as in Figures REF and REF .", "This is represented by the diagram: $\\overline{{\\mathbb {C}}}_{z}\\times {\\widehat{\\mathbb {C}}}_{t}\\ \\hookleftarrow \\ {\\mathcal {R}}_{X}$ “ $\\hookleftarrow $ ” $(r,d)$ –skeleton of $\\Lambda _{X}$ , where the $(r,d)$ –skeleton of $\\Lambda _{X}$ blows down to the $(r,d)$ –configuration tree $\\Lambda _{X}$ (and $\\Lambda _{X}$ blows up to the skeleton), while $\\pi _{X,2}$ projects both to $\\pi _{X,2}(\\Lambda _{X})\\subset {\\mathbb {C}}_{t}$ .", "The equivalence relation on $(r,d)$ –configuration trees Definition 6.3 Two $(r,d)$ –configuration trees are equivalent, $\\Lambda _{1}\\sim \\Lambda _{2},$ if their corresponding $(r,d)$ –skeletons are the same up to: (1) choice of the horizontal level 0 (see §REF .3.a); (2) relabelling of the vertices (see Remark REF .2); (3) choice of $K(\\iota ,{\\mathfrak {r}})$ on the weight of $\\widetilde{\\lambda }_{\\iota {\\mathfrak {r}}}$ associated to each vertical cycle (occurring when a pole vertex is present), the choice arising because of the modular arithmetic associated to the pole vertex (see Remark REF .3 and Footnote REF ); (4) choice of a representative for each horizontal subtree that satisfies the preferred horizontal subtree condition of Definition REF .", "This finishes the proof of the Main Theorem.", "The following example illustrates (1), (3) and (4) of the definition.", "Example 7 Choice of horizontal level 0 and of edge to remove when a horizontal cycle occurs.", "Let us consider Example REF once again.", "Notice that the branch points corresponding to the vertices $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ , $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} $ and $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} $ share the same sheet on ${\\mathcal {R}}_{X}$ , hence the corresponding diagonals form a triangle (a
horizontal cycle).", "Thus there is a choice to be made as to which two diagonals to include in the $(3,3)$ –configuration tree.", "In Example REF , Figure REF , the diagonals chosen are $\\overline{ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} }$ and $\\overline{ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} }$  ; if instead we choose $\\overline{ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} }$ and $\\overline{ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} }$ then we can not start to traverse the tree from vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ since $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ is not a leaf.", "This presents us with another choice: to start with vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} $ or vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$6$};}} $ .", "Choosing to start with vertex $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} $ we obtain the following $(3,3)$ –configuration tree $\\Lambda _{X}^{\\prime }=\\Big \\lbrace \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} , 
\\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$4$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} , \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$6$};}} ;\\\\(\\Delta _{5\\, 3}, \\lambda _{5\\, 3}), (\\Delta _{3\\, 1}, \\widetilde{\\lambda }_{3\\, 1}),(\\Delta _{1\\, 2}, \\lambda _{1\\, 2}), (\\Delta _{2\\, 4}, \\widetilde{\\lambda }_{2\\, 4}), (\\Delta _{4\\, 6}, \\widetilde{\\lambda }_{4\\, 6}) \\Big \\rbrace ,$ with the parameters given by (REF ) in Example REF and $\\lambda _{3\\, 1}=\\int ^{\\infty _{1}}_{p_2}\\omega _{X} = a_{1}-\\widetilde{p}_{2}=-\\widetilde{p}_{2}\\in {\\mathbb {C}}^{*},$ and $\\widetilde{\\lambda }_{3\\, 1}=\\lambda _{3\\, 1}{\\text{e}}^{i2\\pi }\\notin {\\mathbb {C}}^{*}$ , since the previous integration path was coming from $\\infty _{2}$ so the integration path crosses three adjacent angular sectors of $p_{2}$ .", "Note that since $\\lambda _{1\\,2}+\\lambda _{2\\,3}+\\lambda _{3\\,1}=0$ then even though the $(3,3)$ –configuration trees $\\Lambda _{X}^{\\prime }$ and $\\Lambda _{X}$ given by (REF ) and (REF ), respectively, are not the same, they give rise to the same Riemann surface ${\\mathcal {R}}_{X}$ .", "Compare Figures REF and REF .", "Figure: Essential singularity at $\\infty $ (of 1–order 3) and 3 simple poles revisited; Example revisited.", "The edge $(e_{2\\,3},\\lambda _{2\\,3})$ was replaced by the edge $(e_{3\\,1},\\widetilde{\\lambda }_{3\\,1})$ , so as to not produce a cycle $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} $ .", "Also we now start to traverse the tree from $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$5$};}} $ since $ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$1$};}} $ is no longer a leaf.", "The $(3,3)$ –configuration tree is not the same as that of Figure .", "Note that on $\\Big ({\\widehat{\\mathbb {C}}}_{t},\\frac{\\partial }{\\partial t}\\Big )$ the projection of the diagonal $\\Delta _{2\\,3}=\\overline{ \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$2$};}} \\text{[baseline=(char.base)]{\\node [shape=circle,draw,inner sep=1pt] (char) {$3$};}} }$ is not present anymore.", "Vieta's map generalized to transcendental functions Recall that Vieta's map provides a parametrization of the space of monic polynomials of degree $s\\ge 1$ by the roots $\\lbrace q_{i}\\rbrace _{i=1}^{s}$ , up to the action of the symmetric group of order $s$ , $\\mathcal {S}(s)$ .", "Hence by allowing non–monic polynomials $P(z)$ and $E(z)$ in the description of $X\\in {E}(r,d)$ , and recalling the local parameter description of the classes of $(r,d)$ –configuration trees $[\\Lambda _{X}]$ , we have: Proposition 2 ${E}(r,d)$ can be parametrized by: (1) The $r+d+2$ coefficients $\\lbrace (\\lambda , b_1,\\ldots ,b_r, c_1,\\ldots ,c_d) \\rbrace \\subset {\\mathbb {C}}^{2}\\times {\\mathbb {C}}_{coef}^{r+d}$ of the polynomials $P(z)$ and $E(z)$ .", "(2) The $r$ roots of $P(z)$ , the $d$ roots of $E(z)$ and the coefficient $\\lambda $ .", "(3) The $r+d+1$ local complex parameters $\\Big \\lbrace z_{0}, (z_1,t_1,\\mu _{1})$ , $\\big \\lbrace \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\big \\rbrace _{1}^{r+d-1} \\Big \\rbrace $ defining the classes $[\\Lambda _{X}]$ .", "For (1) and (2) see [2].", "On the other hand, for (3), note that there are $r+d+1$ local complex parameters $\\Big \\lbrace z_{0}, (z_1,t_1,\\mu _{1})$ , $\\big \\lbrace
\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\big \\rbrace _{1}^{r+d-1} \\Big \\rbrace $ defining the classes $[\\Lambda _{X}]$ .", "All are continuous, and because of the bijection $X\\longleftrightarrow [\\Lambda _{X}]$ , they form local charts for an atlas of ${E}(r,d)$ as a complex manifold of dimension $r+d+1$ .", "Corollary 1 There is a complex analytic dependence between the finite critical values and asymptotic values, the vertices of $\\Lambda _{X}$ , and the coefficients of the polynomials $P(z)$ and $E(z)$ .", "$\\Box $ As an example, in [1], §9.5, the complex analytic dependence for the cases $(r,d)=(0,1), (0,2)$ , $(0,3)$ are explicitly computed in terms of the exponential function, the error function and Airy's function respectively.", "Decomposition of the phase portraits into invariant components Theorem 8.1 The horizontal strip structure of $X\\in {E}(r,d)$ , into ${\\mathfrak {Re}\\left(X\\right)}$ –invariant components, is $\\big ({\\mathbb {C}}_{z},X\\big )=\\underbrace{\\left({\\overline{{\\mathbb {H}}}}^2_\\pm ,\\frac{\\partial }{\\partial z}\\right)\\cup \\ldots \\cup \\left({\\overline{{\\mathbb {H}}}}^2_\\mp ,\\frac{\\partial }{\\partial z}\\right)}_{4r\\ge N_p \\ge 2(r+1)}\\\\\\bigcup _{a_\\sigma }\\Biggl [\\left(\\Big \\lbrace 0\\le |{\\mathfrak {Im}\\left(z\\right)}|\\le 2\\pi K_\\sigma \\Big \\rbrace ,{\\text{e}}^z \\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma }}\\qquad \\qquad \\\\\\qquad \\cup \\left({\\overline{{\\mathbb {H}}}}_{\\pm }^2,{\\text{e}}^z\\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma },up}\\cup \\left({\\overline{{\\mathbb {H}}}}_{\\pm }^2,{\\text{e}}^z\\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma },low}\\Biggr ]\\\\\\bigcup _ {\\ell }^{M\\le \\infty }\\left( \\Big \\lbrace 0\\le {\\mathfrak {Im}\\left(z\\right)}\\le h_{\\ell } \\Big \\rbrace ,\\frac{\\partial }{\\partial z}\\right),$ where $\\lbrace a_\\sigma \\rbrace $ are the finite asymptotic values of $\\Psi _{X}$ .", "Moreover, there 
are an infinite number of half planes $\\big ({\\overline{{\\mathbb {H}}}}^2_\\pm ,\\frac{\\partial }{\\partial z}\\big )$ in the decomposition if and only if $d\\ge 1$ .", "Decomposition (REF ) follows by recalling Definition REF , the biholomorphism $\\pi _{X,1}$ presented in Diagram and the fine structure of the $(r,d)$ –skeleton of $\\Lambda _X$ .", "It is an accurate description of the phase portrait decomposition of ${\\mathfrak {Re}\\left(X\\right)}$ : The first row depicts the, at least $2(r+1)$ and at most $4r$ , half planes associated to the $r$ poles.", "On the second row are the $d$ finite helicoids arising from the $d$ finite asymptotic values $\\lbrace a_\\sigma \\rbrace $ , where it is to be noticed that this can be an empty collection.", "On the third row are the $2d$ semi–infinite helicoids.", "And on the fourth row, the finite height strips associated to the non–horizontal diagonals in ${\\mathcal {R}}_{X}$ .", "On the topology of ${\\mathfrak {Re}\\left(X\\right)}$ Consider the group of orientation preserving homeomorphisms $Homeo({\\mathbb {C}})^{+}=\\lbrace h:{\\widehat{\\mathbb {C}}}_{z}\\rightarrow {\\widehat{\\mathbb {C}}}_{z}\\ |\\ \\text{ preserving orientation and fixing } \\infty \\in {\\widehat{\\mathbb {C}}}\\rbrace .$ Definition 9.1 Let $X_{1}, X_{2}\\in {E}(r,d)$ be two singular analytic vector fields.", "They are topologically equivalent if there exists $h\\in Homeo({\\mathbb {C}})^{+}$ which takes the trajectories of ${\\mathfrak {Re}\\left(X_{1}\\right)}$ to trajectories of ${\\mathfrak {Re}\\left(X_{2}\\right)}$ , preserving real time orientation, but not necessarily the parametrization.", "A bifurcation for ${\\mathfrak {Re}\\left(X_{1}\\right)}$ occurs, when the topology of its phase portrait topologically changes under small deformation of $X_1$ in the family ${E}(r,d)$ , otherwise $X_1$ is structurally stable, in ${E}(r,d)$ .", "Let $\\Lambda _{X}=\\Big \\lbrace \\big \\lbrace \\text{[baseline=(char.base)]{\\node 
[shape=circle,draw,inner sep=1pt] (char) {${\\mathfrak {a}}$};}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big )\\big \\rbrace _{{\\mathfrak {a}} = 1}^{d+n} ;\\big \\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}, \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}})\\big \\rbrace \\Big \\rbrace $ be a $(r,d)$ –configuration tree.", "By simple inspection we have: Theorem 9.2 (Structural stability of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ ) The real vector field ${\\mathfrak {Re}\\left(X\\right)}$ is structurally stable in ${E}(r,d)$ if and only if $\\bullet \\ X$ has only simple poles and $\\bullet $ ${\\mathfrak {Im}\\left(\\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\right)}\\ne 0$ for all weighted edges $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ of $\\Lambda _{X}$ .", "$\\Box $ As a direct consequence of the structure of the $(r,d)$ –skeleton of $\\Lambda _X$ we obtain Theorem 9.3 (Number of topologies of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ ) Given a fixed pair $(r,d)$ : The number of topologies of ${\\mathfrak {Re}\\left(X\\right)}$ is infinite when $(r,d)\\in \\big \\lbrace (r\\ge 2,1), (r\\ge 1,2),(r\\ge 0,d\\ge 3)\\big \\rbrace $ .", "The number of topologies is one when $(r,d)=(0,1), (1,0)$ ; two when $(r,d)=(0,2)$ , $(1,1)$ ; bounded above by $3^{(r-1)} \\times (r-1) \\times r! \\times p(r), \\quad \\text{ when } (r,d)=(r\\ge 2,0),$ where $p(r)$ is the partition function of the integer $r$ .", "Let us recall that the phase portrait ${\\mathfrak {Re}\\left(X\\right)}$ on ${\\mathbb {C}}_{z}$ , as in (3) of the theorem, only has a finite number ($\\le r$ ) of multiple saddle points.", "These phase portraits were first studied by W. M.
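To make the size of the bound in (3) of the theorem concrete, the following small script (our own illustration, not from the paper; the function names are hypothetical) computes the partition function $p(r)$ by dynamic programming and evaluates the stated bound $3^{(r-1)} \\times (r-1) \\times r! \\times p(r)$ for small $r$ :

```python
from math import factorial

def partition_count(n):
    """Integer partition function p(n), via the standard coin-counting DP."""
    p = [1] + [0] * n
    for part in range(1, n + 1):        # allow parts of size `part`
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def topology_upper_bound(r):
    """The stated bound 3^(r-1) * (r-1) * r! * p(r), for (r, d) = (r >= 2, 0)."""
    return 3 ** (r - 1) * (r - 1) * factorial(r) * partition_count(r)
```

For $r=2,3,4$ (with $p(2)=2$ , $p(3)=3$ , $p(4)=5$ ) this evaluates to $12$ , $324$ and $9720$ respectively.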
Boothby [8], [9], showing that they appear as the real part of certain harmonic functions; in our framework, the imaginary part of $\\int \\omega _{X}$ .", "The number of topologies can be obtained by looking at the number of possible $(r,d)$ –skeletons of $\\Lambda _{X}$ associated to the $(r,d)$ –configuration trees.", "For each $(r,d)\\in \\big \\lbrace (r\\ge 2,1), (r\\ge 1,2),(r\\ge 0,d\\ge 3)\\big \\rbrace $ there will be at least one $(r,d)$ –skeleton of $\\Lambda _{X}$ with at least one vertical tower with two horizontal subgraphs attached to the same vertical tower.", "These horizontal subgraphs are vertically separated from each other by an integer number $K(\\sigma ,\\rho )$ , of degree 2 vertices on a vertical tower.", "Hence there are an infinite number of different ways, described by $\\lbrace K(\\sigma ,\\rho )\\ge 1\\rbrace $ , we can attach these two subgraphs to the vertical tower, each of which represents a different configuration in ${\\mathcal {R}}_{X}$ .", "The remaining cases are $(r,d)\\in \\big \\lbrace (0,1), (0,2), (1,1), (r\\ge 1,0)\\big \\rbrace $ .", "The cases $(0,1), (1,0)$ are trivial by Lemma REF .", "For cases $(0,2)$ and $(1,1)$ : ${\\mathcal {R}}_{X}$ has two branch points hence they must share the same sheet.", "Thus each one of these cases have exactly two topologies.", "Case (0,2) is illustrated in Figure REF .", "Case $(r\\ge 2,0)$ corresponds to $\\Psi _{X}$ being a polynomial, hence the number of topologies is finite.", "Since poles can have multiplicity, there are $p(r)$ ways of arranging $r$ poles (with multiplicity).", "Moreover, there are at most $r-1$ diagonals connecting the (at most) $r$ poles.", "For each diagonal we have at least two and at most three different topologies for ${\\mathfrak {Re}\\left(X\\right)}$ (characterized by the diagonal $\\Delta _{\\iota \\kappa }$ : ${\\mathfrak {Im}\\left(t_{\\iota }\\right)}={\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ , ${\\mathfrak {Im}\\left(t_{\\iota 
}\\right)}<{\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ , ${\\mathfrak {Im}\\left(t_{\\iota }\\right)}>{\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ ).", "Hence there are at most $3^{(r-1)} (r-1)!$ ways of placing the diagonals on the $(r,0)$ –configuration tree to obtain a different topology.", "Finally we must take into account that each of these diagonals, when viewed on the $(r,d)$ –skeleton of $\\Lambda _{X}$ , could be at a different level.", "Let $L(r)$ be the number of ways to place a diagonal on a $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "Since there are at most $r-1$ diagonals and at most $r$ levels, then a (very rough) upper bound for $L(r)$ is $r(r-1)$ .", "Table REF presents a summary of the possible topologies of ${\\mathfrak {Re}\\left(X\\right)}$ , for $X\\in {E}(r,d)$ , that arise for different pairs $(r,d)$ .", "Table: Topologies of ℜ𝔢X{\\mathfrak {Re}\\left(X\\right)} for different pairs (r,d)(r,d).", "Epilogue: the singularity at $\\infty $ Our naive question; how can we describe the singularity of $X$ at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ , for $X\\in {E}(r,d)$ ?, is answered in this section.", "In [1], §5, germs of singular analytic vector fields $X$ are studied in detail.", "Starting with a simple closed path $\\gamma $ enclosingWith the usual anticlockwise orientation.", "the singularity $z_{\\vartheta } \\in {\\widehat{\\mathbb {C}}}$ , the notion of an admissible cyclic word $\\mathcal {W}_{X}$ in the alphabet $\\lbrace H, E, P, {}_{}{E}_{}\\rbrace $ is well defined, $\\big ( ({\\widehat{\\mathbb {C}}}, z_{\\vartheta }),X(z) \\big ) \\longmapsto \\mathcal {W}_{X}.$ It is to be noted that the affine group $Aut({\\mathbb {C}})$ is the largest complex automorphism group that acts on ${E}(r,d)$ , $\\mathcal {A}: Aut({\\mathbb {C}}) \\times {E}(r,d) \\longrightarrow {E}(r,r),\\ \\ \\ (T, X) \\longmapsto T^*X$ , see [2].", "Hence the germ $\\big ( ({\\widehat{\\mathbb {C}}}, z_{\\vartheta }),X(z) \\big )$ is a local analytic 
invariant.", "Figure: Hyperbolic $H$ , elliptic $E$ , parabolic $P$ and entire ${}_{}{E}_{}$ sectors in ${\\widehat{\\mathbb {C}}}_{z}$ .", "The curve $\\gamma $ is shown in red.", "Note that ${}_{}{E}_{} H \\sim {}_{}{E}_{}$ is illustrated on the right.", "Figure: Two Riemann surfaces; (a) associated to the error function $\\Psi (z) = \\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ and (b) to $\\Psi (z) = \\frac{2i}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ .", "The red curves represent taut $\\widetilde{\\Gamma }$ 's that allow the recognition of the words.", "The global topologies of the corresponding ${\\mathfrak {Re}\\left(X\\right)}$ , $X \\in {E}(0,2)$ , are described in the third row of Table 2, and the germ of singularities at $\\infty $ in Example .", "The letters in the alphabet are the usual angular sectors for vector fields: hyperbolic $H$ , elliptic $E$ , parabolic $P$ (see [4] p. 304, [5] p. 86) and the new class 1 entire sector ${}_{}{E}_{}$ (see [1] p. 151); see Figure REF .", "Specific attributions encoded by the word $\\mathcal {W}_X$ in (REF ) are as follows.", "Equivalence classes.", "The word $\\mathcal {W}_X$ is well defined up to the relations $E{}_{}{E}_{}H \\sim {E}$ and $H{}_{}{E}_{}E \\sim {E}$ , according to [1] pp. 166–167.", "Under this equivalence the word becomes independent of the choice of the path $\\gamma $ enclosing the singularity.", "Poincaré–Hopf index.", "If the number of letters $H$ , $E$ and ${E}$ that appear in a word $\\mathcal {W}_{X}$ at $z_{\\vartheta }$ , is denoted by $h$ , $e$ and $\\varepsilon $ respectively, then the Poincaré–Hopf index formula is $PH(X, z_{\\vartheta }) = 1 + \\frac{e-h + \\varepsilon }{2}$ .", "Furthermore, in theorem A p.
130 and §6 of [1], the Poincaré–Hopf index theorem $\\chi ({\\widehat{\\mathbb {C}}}) = \\sum PH(X, z_{\\vartheta }) $ is extended to include germs of singular analytic vector fields $X$ that determine an admissible word.", "Displacement of parabolic sectors.", "As a matter of record, each parabolic sector $P_\\nu $ of $\\mathcal {W}_X$ has a displacement number $\\nu \\in {\\mathbb {C}}\\backslash {\\mathbb {R}}$ , see [1] pp. 149–150.", "The residue.", "In fact the residue of the word (the vector field germ) is $ Res(\\mathcal {W}_X ) \\doteq Res(X, z_{\\vartheta })=\\frac{1}{2\\pi i }\\int _\\gamma \\omega _X$ , recall [1] p. 167.", "Clearly for $X \\in {E}(r,d)$ all the residues are zero, since $\\omega _X$ is holomorphic on ${\\mathbb {C}}_z$ .", "Example 8 (Cyclic words at poles) For a pole $p_{\\iota }\\in {\\mathbb {C}}_{z}$ of order $-\\mu _{\\iota }$ , the cyclic word $\\mathcal {W}_{X}$ consists of exactly $2(\\mu _{\\iota }+1)$ hyperbolic sectors $H$ : $\\Big ( ({\\mathbb {C}}_{z},p_{\\iota }),X(z)=(z-p_{\\iota })^{-\\mu _{\\iota }} \\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=\\underbrace{HH\\cdots HH}_{2(\\mu _{\\iota }+1)}.$ The Poincaré–Hopf index of $X$ at $p_\\iota $ is $- \\mu _\\iota $ .", "Example 9 (A cyclic word at $\\infty $ ) Recall the rational vector field in Example REF , in our language the description of the singularity at infinity is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=(z-p_{1})^{-\\mu _{1}} (z-p_{2})^{-\\mu _{2}}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=\\underbrace{EE\\cdots EE}_{2(\\mu _1+ \\mu _2 +1) }.$ The Poincaré–Hopf index of $X$ at $\\infty $ is $\\mu _1+ \\mu _2 +2 $ .", "Example 10 (Cyclic words at $\\infty $ having entire sectors) Recall the exponential vector field in Example REF , this basic object produces $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\lambda ^{-1} {\\text{e}}^z\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto
\\mathcal {W}_X=E {}_{}{E}_{} H {}_{}{E}_{} \\sim {}_{}{E}_{} {}_{}{E}_{}.$ The Poincaré–Hopf index of $X$ at $\\infty $ is 2.", "Example 11 (The error function) The vector field $X(z)= \\lambda \\frac{\\sqrt{\\pi } }{4} {\\text{e}}^{z^ 2} \\frac{\\partial }{\\partial z}$ , $\\lambda \\in {\\mathbb {C}}^*,$ has the associated error function $\\Psi (z) = \\lambda ^{-1}\\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ .", "For $\\lambda = 1$ , the logarithmic branch points are $\\lbrace (\\infty _1, -1), (\\infty _2, 1),(\\infty _3, \\infty ), (\\infty _4, \\infty ) \\rbrace $ , using the notation in equations (REF ), and the ${\\mathfrak {Re}\\left(X\\right)}$ –invariant decomposition is $({\\widehat{\\mathbb {C}}}, X) = \\bigcup _{\\sigma =1}^\\infty \\big ( \\overline{{\\mathbb {H}}}^2_\\sigma , \\frac{\\partial }{\\partial z} \\big ) .$", "The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ \\sqrt{ \\pi } }{4} {\\text{e}}^{z^2}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=E {}_{}{E}_{} H H {}_{}{E}_{} E {}_{}{E}_{} HH {}_{}{E}_{}.$ See Figure REF .", "For $\\lambda = i$ , the logarithmic branch points are $\\lbrace (\\infty _1, -i), (\\infty _2, i),(\\infty _3, \\infty ), (\\infty _4, \\infty ) \\rbrace $ , and the ${\\mathfrak {Re}\\left(X\\right)}$ –invariant decomposition is $({\\widehat{\\mathbb {C}}}, X) = \\Big (\\bigcup _{\\sigma =1}^\\infty \\big ( \\overline{{\\mathbb {H}}}^2_\\sigma , \\frac{\\partial }{\\partial z} \\big )\\Big ) \\ \\cup \\ \\big ( \\lbrace -1 \\le {\\mathfrak {Im}\\left(z\\right)} \\le 1 \\rbrace , \\frac{\\partial }{\\partial z} \\big ).$ The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ i \\sqrt{\\pi } }{4} {\\text{e}}^{z^2}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=E {}_{}{E}_{} H H {}_{}{E}_{} P_{2i} E {}_{}{E}_{} HH {}_{}{E}_{} P_{-2i},$ note that the appearance of two opposite parabolic sectors having
displacements $\\pm 2i$ is due to the horizontal strip in the decomposition.", "See Figure REF .", "In both cases the Poincaré–Hopf index of $X$ at $\\infty $ is 2.", "For the essential singularity we now have: Theorem 10.1 Let $X\\in {E}(r,d)$ ; the cyclic word $\\mathcal {W}_{X}$ at $\\infty $ is recognized as $\\big ( ({\\widehat{\\mathbb {C}}}_{z},\\infty ),X\\big ) \\longmapsto \\mathcal {W}_X=W_{1} W_{2} \\cdots W_{k},\\quad W_{\\iota }\\in \\lbrace H,E,P,{}_{}{E}_{} \\rbrace ,$ with exactly $\\varepsilon =2d$ letters $W_{\\iota }={}_{}{E}_{}$ .", "Moreover, $h-e=2(d-r-1)$ .", "The word $\\mathcal {W}_X$ is a complete topological invariant of a germ $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ .", "Conversely, a germ of a singular complex analytic vector field $\\big (({\\mathbb {C}},0), Y \\big )$ is the restriction of an $X \\in {E}(r,d)$ at $\\infty $ if and only if the point 0 is an isolated essential singularity of $Y$ and its admissible word $\\mathcal {W}_{Y}$ satisfies that i) the residue of the word $Res(\\mathcal {W}_{Y} ) = 0$ , ii) the Poincaré–Hopf index of the word $PH(Y, 0 ) = 2 + r$ , iii) it has exactly $2d$ entire sectors ${}_{}{E}_{}$ .", "The proof of the first statement follows the arguments in §5, §9 and §10 of [1].", "Step 1: Take a simple path $\\gamma \\subset ({\\widehat{\\mathbb {C}}}_z, \\infty )$ enclosing only $\\infty $ ($\\gamma $ does not enclose any poles of $X$ ).", "Step 2: Lift $\\gamma $ to $\\Gamma $ in ${\\mathcal {R}}_{X}\\subset {\\widehat{\\mathbb {C}}}_{z}\\times {\\widehat{\\mathbb {C}}}_{t}$ .", "Note that $\\Gamma $ lies completely in the soul of ${\\mathcal {R}}_{X}$ , recall Definition REF .", "Step 3: The singularity at $\\infty $ of $X$ has a certain self–similarity (as the examples in § show), hence in order to recognize a simple word describing it, a suitable deformation of $\\Gamma $ is required.", "That is, we deform $\\Gamma $ to a taut deformation $\\widetilde{\\Gamma }$ in the soul
of ${\\mathcal {R}}_{X}$ .", "For examples of a taut deformation $\\widetilde{\\Gamma }$ see Figures REF and REF .", "For the appropriate technical definitions and another example see pp. 211–212 of [1], in particular figure 17.", "The taut deformation $\\widetilde{\\Gamma }$ recognizes letters $W_{\\iota }$ at $\\infty $ as follows: letters $P$ when $\\widetilde{\\Gamma }$ crosses finite height strip flows, letters $H$ when $\\widetilde{\\Gamma }$ makes a half circle around a branch point of ${\\mathcal {R}}_{X}$ , letters $E$ when $\\widetilde{\\Gamma }$ makes a half circle around (the branch point at) $\\infty $ on a sheet of ${\\mathcal {R}}_{X}$ , letters ${}_{}{E}_{}$ when $\\widetilde{\\Gamma }$ bounces off the boundaries of the soul of ${\\mathcal {R}}_{X}$ .", "As for the difference $h-e$ between the number of sectors $H$ and $E$ appearing in the cyclic word $\\mathcal {W}_{X}$ at $\\infty $ , we shall use the Poincaré–Hopf index theory extended to these kinds of singularities (theorem A in §6 of [1] with $M={\\widehat{\\mathbb {C}}}_{z}$ ).", "From the fact that $X\\in {E}(r,d)$ has exactly $r$ poles (counted with multiplicity) in ${\\mathbb {C}}_{z}$ and since $PH(X,p_{\\iota })=-\\mu _{\\iota }$ for a pole $p_{\\iota }$ of order $-\\mu _{\\iota }$ , (6.6) of [1] gives us $2=\\chi ({\\widehat{\\mathbb {C}}})=PH(X,\\infty )+\\sum _{p_{\\iota }\\in {\\mathcal {P}}} PH(X,p_{\\iota }) \\\\=PH(X,\\infty )-r.$ On the other hand from (6.5) of [1] $PH(X,\\infty )=1+\\frac{e-h+2d}{2}$ , and the result follows.", "Assertion (2) follows by simple inspection.", "For assertion (3), use a slight modification of corollary 10.1 of [1].", "The only change arises from the fact that $X\\in {E}(r,d)$ has exactly $r$ poles (counted with multiplicity) in ${\\mathbb {C}}_{z}$ .", "Once again, by (REF ) the result follows.", "Example 12 (Cyclic words at $\\infty $ ) 1.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty
),X(z)=\\frac{{\\text{e}}^z}{ \\lambda (z-p_1)} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= E P_{\\nu } E {}_{}{E}_{} H H {}_{}{E}_{} P_{-\\nu } E E \\\\\\sim E P_{\\nu } {}_{}{E}_{} H {}_{}{E}_{} P_{-\\nu } E E,$ where $\\nu =\\widetilde{p}_{1}-a_{1}=-\\lambda {\\text{e}}^{-p_{1}}$ .", "Note that if $\\nu \\in {\\mathbb {R}}$ then $P_{\\pm \\nu }$ do not appear as letters in $\\mathcal {W}_X$ and the word reduces to $\\mathcal {W}_X= {}_{}{E}_{} {}_{}{E}_{} E E$ .", "The Poincaré–Hopf index of $X$ at $\\infty $ is 3.", "2.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{-{\\text{e}}^{z^3}}{ 3z^2} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{}.$ The Poincaré–Hopf index of $X$ at $\\infty $ is 4.", "3.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{{\\text{e}}^{z^3}}{ 3z^3-1} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= {}_{}{E}_{} EE {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{}.$ The Poincaré–Hopf index of $X$ at $\\infty $ is 5.", "The case that all critical and asymptotic values are real Recall the following result.", "Theorem (Eremenko et al., [12], [13]) If all critical points of a rational function $f$ are real, then $f$ is equivalent to a real rational function.", "This immediately implies that for such a rational function all the critical values are also real.", "Motivated by the above, we have:", "Corollary 2 (Real critical and asymptotic values) If all critical and asymptotic values of $\\Psi _X$ for $X \\in {E}(r,d)$ are in ${\\mathbb {R}}$ , then the following assertions hold.", "${\\mathcal {R}}_X$ , as in (REF ), is the union of half planes.", "$\\Psi _X : U \\subset {\\widehat{\\mathbb {C}}}\\longrightarrow {\\mathbb {H}}^2$ is a Schwarz–Christoffel map for each half plane $U$ .",
"$X$ is unstable in ${E}(r,d)$ .", "The critical and asymptotic values are in ${\\mathbb {R}}$ if and only if the family of rotated vector fields ${\\mathfrak {Re}\\left( {\\text{e}}^{i \\theta } X\\right)}$ bifurcates at $\\theta =n\\pi $ for $n\\in {\\mathbb {Z}}$ .", "$\\Box $ Relation to Belyĭ's functions A rational function is Belyĭ if it has only three critical values $\\lbrace 0,1, \\infty \\rbrace $ , see [6].", "We discuss the analogous notion considering asymptotic values.", "By Lemma REF , we have that for $X\\in {E}(r,d)$ , the distinguished parameter $\\Psi _X$ has an even number $2d$ of asymptotic values (counted with multiplicity).", "The construction of a $\\Psi _X(z)$ having three asymptotic values, say at $\\lbrace 0, 1, \\infty \\rbrace $ as a set, as in Belyĭ's theory, is possible for $X\\in {E}(r,d)$ .", "Example 13 The vector field $X(z)= \\frac{\\sqrt{\\pi } }{4} {\\text{e}}^{z^ 2} \\frac{\\partial }{\\partial z}$ from Example REF has the associated error function $\\Psi (z) = \\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ , with logarithmic branch points $\\lbrace (\\infty _1, -1), (\\infty _2, 1),(\\infty _3, \\infty ), (\\infty _4, \\infty ) \\rbrace $ , using the notation in Equations (REF ) and (REF ).", "In set–theoretic language, its asymptotic values are $\\lbrace -1 , 1 , \\infty \\rbrace $ .", "Example 14 A transcendental Belyĭ function.", "With the present techniques, we can describe the following example of a vector field arising from a transcendental Belyĭ function as in [19] p.
292.", "Let ${\\mathcal {R}}$ be the Riemann surface that consists of half a Riemann sphere (cut along the extended real line ${\\mathbb {R}}\\cup \\lbrace \\infty \\rbrace \\subset {\\widehat{\\mathbb {C}}}$ ) glued to three semi–infinite towers of copies of ${\\widehat{\\mathbb {C}}}\\backslash (a,b]$ where $(a,b]\\in \\lbrace (-\\infty ,0], (0,1],(1,\\infty ] \\rbrace $ , as in Figure REF .", "The general version of the dictionary ([1] Lemma 2.6) shows that a transcendental function $\\Upsilon (z) :{\\mathbb {C}}_z \\longrightarrow {\\widehat{\\mathbb {C}}}_{t}$ and a vector field $X(z)= \\frac{1}{\\Upsilon ^\\prime (z)} \\frac{\\partial }{\\partial z}$ are associated to ${\\mathcal {R}}$ .", "The (logarithmic) branch points of $\\Upsilon (z)$ are $\\lbrace (\\infty _1, 0), (\\infty _2, 1),(\\infty _3, \\infty ) \\rbrace $ .", "Of course there is only one such possible Riemann surface (up to Möbius transformation).", "Compare also with the line complex description as in p. 292 of [19].", "The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ 1 }{ \\Upsilon ^\\prime (z)}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=H {}_{}{E}_{} E {}_{}{E}_{} H \\mathcal {T},$ note the appearance of a new letter $\\mathcal {T}$ : an angular sector (the phase portrait of ${\\mathfrak {Re}\\left(X\\right)}$ is obtained by considering the pullback of ${\\mathfrak {Re}\\left(\\frac{\\partial }{\\partial t}\\right)}$ via $\\Upsilon $ ) having an accumulation point of double zeros of $X$ , see Figure REF .", "The 1–order of $X$ is finite and at least 1.", "Figure: Riemann surface corresponding to a transcendental Belyĭ function $\\Upsilon $ . The path $\\widetilde{\\Gamma }$ is the taut deformation of $\\Gamma = (\\Psi _X \\circ \\gamma )$ originated by a $\\gamma $ bounding the singularity $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X_\\Upsilon \\big )$ . Note that topologically this is the only possible surface with exactly
three logarithmic branch points.", "Figure: The cyclic words (a)–(b) appearing in Examples and (c) in . Numerical models for (a)–(b) appeared as figures 15 and 16 in .", "Future work Topological classification of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ As suggested by the results of §, a careful study of the $(r,d)$ –skeleton of $\\Lambda _X$ allows for a complete topological classification of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ , in terms of the placement of the critical and asymptotic values.", "This will be the subject of future work.", "Dynamical coordinates for other families of vector fields As Example REF suggests, there are other families of vector fields where the construction of the dynamical coordinates $\\Lambda _{X}$ is certainly possible.", "For instance, when considering the family ${E}(s,r,d)=\\left\\lbrace X(z)=\\frac{Q(z)}{P(z)}\\ {\\text{e}}^{E(z)}\\frac{\\partial }{\\partial z} \\ \\Big \\vert \\ \\begin{array}{l}Q,\\ P,\\ E\\in {\\mathbb {C}}[z], \\\\\\deg {Q}=s,\\ \\deg {P}=r,\\ \\deg {E}=d\\end{array}\\right\\rbrace ,$ as in [2], we are presented with two intrinsically different cases: 1) If $\\Psi _X$ is univalued (this is equivalent to requiring that the associated 1–form $\\omega _X$ have all its residues zero), then vertices of the form $(q_\\iota ,\\infty ,\\nu _\\iota )$ , corresponding to the zeros ${\\mathcal {Z}}=\\lbrace q_\\iota \\rbrace _{\\iota =1}^s$ of $X$ , need to be added to the description of $\\Lambda _X$ .", "2) If $\\Psi _X$ is multivalued, then extra structure will be required, because of the appearance of logarithmic singularities over those $q_\\iota \\in {\\mathbb {C}}_z$ where the associated 1–form has non–zero residue.", "On cyclic words Cyclic words as topological or analytical invariants for germs.", "The word $\\mathcal {W}_X$ (as in Theorem REF ) is a complete topological invariant of a germ $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ , $X \\in {E}(r,d)$ .",
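"For instance (a consistency check of ours, using only data already stated in Theorem 10.1 and Example 12): for $X(z)=\\frac{{\\text{e}}^z}{ \\lambda (z-p_1)} \\frac{\\partial }{\\partial z}\\in {E}(1,1)$ , the unreduced word $\\mathcal {W}_X= E P_{\\nu } E {}_{}{E}_{} H H {}_{}{E}_{} P_{-\\nu } E E$ of Example 12(1) has $h=2$ , $e=4$ , $\\varepsilon =2$ ; hence $\\varepsilon =2d$ , $h-e=-2=2(d-r-1)$ and $PH(X,\\infty )=1+\\frac{4-2+2}{2}=3=2+r$ , exactly as the theorem requires.",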
"Moreover, the word $\\mathcal {W}_X$ , in general, is not a global topological invariant of $X \\in {E}(r, d)$ .", "For example all the vector fields $X\\in {E}(r, 0)$ , $r \\ge 3$ , with all critical and asymptotic values in ${\\mathbb {R}}$ , have the same word $\\mathcal {W}_X = \\underbrace{EE \\cdots EE}_{2r +2}$ at $\\infty $ .", "However, it is possible to modify the definitions of angular sectors $P_\\nu $ , $E$ and ${}_{}{E}_{}$ so that in fact the corresponding $\\mathcal {W}_X$ is a global analytic invariant of $X$ modulo $Aut({\\mathbb {C}})$ .", "This is left for a future project.", "Other angular sectors as letters for cyclic words.", "As shown in Example REF and in examples 5.9, 5.12 and figures 2, 5 of [1], there are certainly other possible angular sectors that can be used as letters for cyclic words.", "In this context and considering the above examples, it is clear that there are an infinite number of topologically different angular sectors (letters) that can appear in a cyclic word associated to an essential singularity for a vector field $X$ .", "However, it is not immediately clear how many topologically different letters there are when we specify the $p$ –order of $X$ , that is, the coarse analytic invariant of functions and vector fields.", "For instance, by once again considering Example REF , $X(z)=\\Upsilon ^{*}(\\frac{\\partial }{\\partial t})(z)=\\frac{1}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}$ .", "However, we may also consider $Y(z)=\\Upsilon ^{*}(\\lambda t\\frac{\\partial }{\\partial t})(z)=\\lambda \\frac{\\Upsilon (z)}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}$ which provides a (very) different vector field.", "Remark 14 As a side note, this shows that the topological classification of functions is coarser than the topological classification of phase portraits of vector fields, even for $\\Psi _X$ and $X$ in ${E}(r,d)$ ."
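, "A minimal instance of this contrast (our illustration, taking $\\Upsilon (z)={\\text{e}}^z$ instead of a Belyĭ function): $\\Upsilon ^{*}(\\frac{\\partial }{\\partial t})(z)=\\frac{1}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}={\\text{e}}^{-z}\\frac{\\partial }{\\partial z}\\in {E}(0,1)$ , whereas $\\Upsilon ^{*}(t\\frac{\\partial }{\\partial t})(z)=\\frac{\\Upsilon (z)}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}=\\frac{\\partial }{\\partial z}$ , whose phase portrait is trivial; the same function $\\Upsilon $ thus carries vector fields with entirely different phase portraits."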
], [ "Vieta's map generalized to transcendental functions", "Recall that Vieta's map provides a parametrization of the space of monic polynomials of degree $s\\ge 1$ by the roots $\\lbrace q_{i}\\rbrace _{i=1}^{s}$ , up to the action of the symmetric group $\\mathcal {S}(s)$ on $s$ letters.", "Hence by allowing non–monic polynomials $P(z)$ and $E(z)$ in the description of $X\\in {E}(r,d)$ , and recalling the local parameter description of the classes of $(r,d)$ –configuration trees $[\\Lambda _{X}]$ , we have:", "Proposition 2 ${E}(r,d)$ can be parametrized by: The $r+d+2$ coefficients $\\lbrace (\\lambda , b_1,\\ldots ,b_r, c_1,\\ldots ,c_d) \\rbrace \\subset {\\mathbb {C}}^{2}\\times {\\mathbb {C}}_{coef}^{r+d}$ of the polynomials $P(z)$ and $E(z)$ .", "The $r$ roots of $P(z)$ , $d$ roots of $E(z)$ and the coefficient $\\lambda $ .", "The $r+d+1$ local complex parameters $\\Big \\lbrace z_{0}, (z_1,t_1,\\mu _{1})$ , $\\big \\lbrace \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\big \\rbrace _{1}^{r+d-1} \\Big \\rbrace $ defining the classes $[\\Lambda _{X}]$ .", "For (1) and (2) see [2].", "On the other hand, for (3), note that there are $r+d+1$ local complex parameters $\\Big \\lbrace z_{0}, (z_1,t_1,\\mu _{1})$ , $\\big \\lbrace \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}}\\big \\rbrace _{1}^{r+d-1} \\Big \\rbrace $ defining the classes $[\\Lambda _{X}]$ .", "All are continuous, and because of the bijection $X\\longleftrightarrow [\\Lambda _{X}]$ , they form local charts for an atlas of ${E}(r,d)$ as a complex manifold of dimension $r+d+1$ .", "Corollary 1 There is a complex analytic dependence between the finite critical values and asymptotic values, the vertices of $\\Lambda _{X}$ , and the coefficients of the polynomials $P(z)$ and $E(z)$ .", "$\\Box $ As an example, in [1], §9.5, the complex analytic dependence for the cases $(r,d)=(0,1), (0,2)$ , $(0,3)$ is explicitly computed in terms of the exponential function, the error
function and Airy's function respectively." ], [ "Decomposition of the phase portraits into invariant components", "Theorem 8.1 The horizontal strip structure of $X\\in {E}(r,d)$ , into ${\\mathfrak {Re}\\left(X\\right)}$ –invariant components, is $\\big ({\\mathbb {C}}_{z},X\\big )=\\underbrace{\\left({\\overline{{\\mathbb {H}}}}^2_\\pm ,\\frac{\\partial }{\\partial z}\\right)\\cup \\ldots \\cup \\left({\\overline{{\\mathbb {H}}}}^2_\\mp ,\\frac{\\partial }{\\partial z}\\right)}_{4r\\ge N_p \\ge 2(r+1)}\\\\\\bigcup _{a_\\sigma }\\Biggl [\\left(\\Big \\lbrace 0\\le |{\\mathfrak {Im}\\left(z\\right)}|\\le 2\\pi K_\\sigma \\Big \\rbrace ,{\\text{e}}^z \\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma }}\\qquad \\qquad \\\\\\qquad \\cup \\left({\\overline{{\\mathbb {H}}}}_{\\pm }^2,{\\text{e}}^z\\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma },up}\\cup \\left({\\overline{{\\mathbb {H}}}}_{\\pm }^2,{\\text{e}}^z\\frac{\\partial }{\\partial z}\\right)_{a_{\\sigma },low}\\Biggr ]\\\\\\bigcup _ {\\ell }^{M\\le \\infty }\\left( \\Big \\lbrace 0\\le {\\mathfrak {Im}\\left(z\\right)}\\le h_{\\ell } \\Big \\rbrace ,\\frac{\\partial }{\\partial z}\\right),$ where $\\lbrace a_\\sigma \\rbrace $ are the finite asymptotic values of $\\Psi _{X}$ .", "Moreover, there are an infinite number of half planes $\\big ({\\overline{{\\mathbb {H}}}}^2_\\pm ,\\frac{\\partial }{\\partial z}\\big )$ in the decomposition if and only if $d\\ge 1$ .", "Decomposition (REF ) follows by recalling Definition REF , the biholomorphism $\\pi _{X,1}$ presented in Diagram and the fine structure of the $(r,d)$ –skeleton of $\\Lambda _X$ .", "It is an accurate description of the phase portrait decomposition of ${\\mathfrak {Re}\\left(X\\right)}$ : The first row depicts the (at least $2(r+1)$ and at most $4r$ ) half planes associated to the $r$ poles.", "On the second row are the $d$ finite helicoids arising from the $d$ finite asymptotic values $\\lbrace a_\\sigma \\rbrace $ , where it is to be
noticed that this can be an empty collection.", "On the third row are the $2d$ semi–infinite helicoids.", "And on the fourth row, the finite height strips associated to the non–horizontal diagonals in ${\\mathcal {R}}_{X}$ ." ], [ "On the topology of ${\\mathfrak {Re}\\left(X\\right)}$", "Consider the group of orientation preserving homeomorphisms $Homeo({\\mathbb {C}})^{+}=\\lbrace h:{\\widehat{\\mathbb {C}}}_{z}\\rightarrow {\\widehat{\\mathbb {C}}}_{z}\\ |\\ \\text{ preserving orientation and fixing } \\infty \\in {\\widehat{\\mathbb {C}}}\\rbrace .$", "Definition 9.1 Let $X_{1}, X_{2}\\in {E}(r,d)$ be two singular analytic vector fields.", "They are topologically equivalent if there exists $h\\in Homeo({\\mathbb {C}})^{+}$ which takes the trajectories of ${\\mathfrak {Re}\\left(X_{1}\\right)}$ to trajectories of ${\\mathfrak {Re}\\left(X_{2}\\right)}$ , preserving real time orientation, but not necessarily the parametrization.", "A bifurcation for ${\\mathfrak {Re}\\left(X_{1}\\right)}$ occurs when the topology of its phase portrait changes under a small deformation of $X_1$ in the family ${E}(r,d)$ ; otherwise $X_1$ is structurally stable in ${E}(r,d)$ .", "Let $\\Lambda _{X}=\\Big \\lbrace \\big \\lbrace {\\mathfrak {a}} =\\big (z_{{\\mathfrak {a}}},t_{{\\mathfrak {a}}},\\mu _{{\\mathfrak {a}}} \\big )\\big \\rbrace _{\\sigma = 1}^{d+n} ;\\big \\lbrace (\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}, \\widetilde{\\lambda }_{{\\mathfrak {a}}{\\mathfrak {r}}})\\big \\rbrace \\Big \\rbrace $ be a $(r,d)$ –configuration tree.", "By simple inspection we have Theorem 9.2 (Structural stability of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ ) The real vector field ${\\mathfrak {Re}\\left(X\\right)}$ is structurally stable in ${E}(r,d)$ if and only if $\\bullet \\ X$ has only simple poles and $\\bullet $ ${\\mathfrak {Im}\\left(\\widetilde{\\lambda }_{{\\mathfrak
{a}}{\\mathfrak {r}}}\\right)}\\ne 0$ for all weighted edges $\\Delta _{{\\mathfrak {a}}{\\mathfrak {r}}}$ of $\\Lambda _{X}$ .", "$\\Box $ As a direct consequence of the structure of the $(r,d)$ –skeleton of $\\Lambda _X$ we obtain Theorem 9.3 (Number of topologies of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ ) Given a fixed pair $(r,d)$ : The number of topologies of ${\\mathfrak {Re}\\left(X\\right)}$ is infinite when $(r,d)\\in \\big \\lbrace (r\\ge 2,1), (r\\ge 1,2),(r\\ge 0,d\\ge 3)\\big \\rbrace $ .", "The number of topologies is one when $(r,d)=(0,1), (1,0)$ ; two when $(r,d)=(0,2)$ , $(1,1)$ ; bounded above by $3^{(r-1)} \\times (r-1) \\times r! \\times p(r), \\quad \\text{ when } (r,d)=(r\\ge 2,0),$ where $p(r)$ is the partition function of the integer $r$ .", "Let us recall that the phase portrait ${\\mathfrak {Re}\\left(X\\right)}$ on ${\\mathbb {C}}_{z}$ , as in (3) of the theorem, only has a finite number ($\\le r$ ) of multiple saddle points.", "These phase portraits were first studied by W. M.
Boothby [8], [9], showing that they appear as the real part of certain harmonic functions; in our framework, the imaginary part of $\\int \\omega _{X}$ .", "The number of topologies can be obtained by looking at the number of possible $(r,d)$ –skeletons of $\\Lambda _{X}$ associated to the $(r,d)$ –configuration trees.", "For each $(r,d)\\in \\big \\lbrace (r\\ge 2,1), (r\\ge 1,2),(r\\ge 0,d\\ge 3)\\big \\rbrace $ there will be at least one $(r,d)$ –skeleton of $\\Lambda _{X}$ with at least one vertical tower having two horizontal subgraphs attached to it.", "These horizontal subgraphs are vertically separated from each other by an integer number $K(\\sigma ,\\rho )$ of degree 2 vertices on a vertical tower.", "Hence there are an infinite number of different ways, described by $\\lbrace K(\\sigma ,\\rho )\\ge 1\\rbrace $ , in which we can attach these two subgraphs to the vertical tower, each of which represents a different configuration in ${\\mathcal {R}}_{X}$ .", "The remaining cases are $(r,d)\\in \\big \\lbrace (0,1), (0,2), (1,1), (r\\ge 1,0)\\big \\rbrace $ .", "The cases $(0,1), (1,0)$ are trivial by Lemma REF .", "For cases $(0,2)$ and $(1,1)$ : ${\\mathcal {R}}_{X}$ has two branch points hence they must share the same sheet.", "Thus each one of these cases has exactly two topologies.", "Case (0,2) is illustrated in Figure REF .", "Case $(r\\ge 2,0)$ corresponds to $\\Psi _{X}$ being a polynomial, hence the number of topologies is finite.", "Since poles can have multiplicity, there are $p(r)$ ways of arranging $r$ poles (with multiplicity).", "Moreover, there are at most $r-1$ diagonals connecting the (at most) $r$ poles.", "For each diagonal we have at least two and at most three different topologies for ${\\mathfrak {Re}\\left(X\\right)}$ (characterized by the diagonal $\\Delta _{\\iota \\kappa }$ : ${\\mathfrak {Im}\\left(t_{\\iota }\\right)}={\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ , ${\\mathfrak {Im}\\left(t_{\\iota
}\\right)}<{\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ , ${\\mathfrak {Im}\\left(t_{\\iota }\\right)}>{\\mathfrak {Im}\\left(t_{\\kappa }\\right)}$ ).", "Hence there are at most $3^{(r-1)} (r-1)!$ ways of placing the diagonals on the $(r,0)$ –configuration tree to obtain a different topology.", "Finally we must take into account that each of these diagonals, when viewed on the $(r,d)$ –skeleton of $\\Lambda _{X}$ , could be at a different level.", "Let $L(r)$ be the number of ways to place a diagonal on a $(r,d)$ –skeleton of $\\Lambda _{X}$ .", "Since there are at most $r-1$ diagonals and at most $r$ levels, a (very rough) upper bound for $L(r)$ is $r(r-1)$ .", "Table REF presents a summary of the possible topologies of ${\\mathfrak {Re}\\left(X\\right)}$ , for $X\\in {E}(r,d)$ , that arise for different pairs $(r,d)$ .", "Table: Topologies of ${\\mathfrak {Re}\\left(X\\right)}$ for different pairs $(r,d)$ ." ], [ "Epilogue: the singularity at $\\infty $", "Our naive question, namely how to describe the singularity of $X$ at $\\infty \\in {\\widehat{\\mathbb {C}}}_{z}$ for $X\\in {E}(r,d)$ , is answered in this section.", "In [1], §5, germs of singular analytic vector fields $X$ are studied in detail.", "Starting with a simple closed path $\\gamma $ (with the usual anticlockwise orientation) enclosing the singularity $z_{\\vartheta } \\in {\\widehat{\\mathbb {C}}}$ , the notion of an admissible cyclic word $\\mathcal {W}_{X}$ in the alphabet $\\lbrace H, E, P, {}_{}{E}_{}\\rbrace $ is well defined, $\\big ( ({\\widehat{\\mathbb {C}}}, z_{\\vartheta }),X(z) \\big ) \\longmapsto \\mathcal {W}_{X}.$ It is to be noted that the affine group $Aut({\\mathbb {C}})$ is the largest complex automorphism group that acts on ${E}(r,d)$ , $\\mathcal {A}: Aut({\\mathbb {C}}) \\times {E}(r,d) \\longrightarrow {E}(r,d),\\ \\ \\ (T, X) \\longmapsto T^*X$ , see [2].", "Hence the germ $\\big ( ({\\widehat{\\mathbb {C}}}, z_{\\vartheta }),X(z) \\big )$ is a local analytic
invariant.", "Figure: Hyperbolic $H$ , elliptic $E$ , parabolic $P$ and entire ${}_{}{E}_{}$ sectors in ${\\widehat{\\mathbb {C}}}_{z}$ . The curve $\\gamma $ is shown in red. Note that $E {}_{}{E}_{} H \\sim {}_{}{E}_{}$ is illustrated on the right.", "Figure: Two Riemann surfaces; (a) associated to the error function $\\Psi (z) = \\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ and (b) to $\\Psi (z) = \\frac{2i}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ . The red curves represent taut $\\widetilde{\\Gamma }$ 's that allow the recognition of the words. The global topologies of the corresponding ${\\mathfrak {Re}\\left(X\\right)}$ , $X \\in {E}(0,2)$ , are described in the third row of Table 2, and the germ of singularities at $\\infty $ in Example.", "The letters in the alphabet are the usual angular sectors for vector fields: hyperbolic $H$ , elliptic $E$ , parabolic $P$ (see [4] p. 304, [5] p. 86) and the new class 1 entire sector ${}_{}{E}_{}$ (see [1] p. 151); see Figure REF .", "Specific attributes encoded by the word $\\mathcal {W}_X$ in (REF ) are as follows.", "Equivalence classes.", "The word $\\mathcal {W}_X$ is well defined up to the relations $E{}_{}{E}_{}H \\sim {}_{}{E}_{}$ and $H{}_{}{E}_{}E \\sim {}_{}{E}_{}$ , according to [1] pp. 166–167.", "Under this equivalence the word becomes independent of the choice of the path $\\gamma $ enclosing the singularity.", "Poincaré–Hopf index.", "If the number of letters $H$ , $E$ and ${}_{}{E}_{}$ that appear in a word $\\mathcal {W}_{X}$ at $z_{\\vartheta }$ is denoted by $h$ , $e$ and $\\varepsilon $ respectively, then the Poincaré–Hopf index formula is $PH(X, z_{\\vartheta }) = 1 + \\frac{e-h + \\varepsilon }{2}$ .", "Furthermore, in theorem A p.
130 and §6 of [1], the Poincaré–Hopf index theorem $\\chi ({\\widehat{\\mathbb {C}}}) = \\sum PH(X, z_{\\vartheta }) $ is extended to include germs of singular analytic vector fields $X$ that determine an admissible word.", "Displacement of parabolic sectors.", "As a matter of record, each parabolic sector $P_\\nu $ of $\\mathcal {W}_X$ has a displacement number $\\nu \\in {\\mathbb {C}}\\backslash {\\mathbb {R}}$ , see [1] pp. 149–150.", "The residue.", "In fact the residue of the word (the vector field germ) is $ Res(\\mathcal {W}_X ) \\doteq Res(X, z_{\\vartheta })=\\frac{1}{2\\pi i }\\int _\\gamma \\omega _X$ , recall [1] p. 167.", "Clearly for $X \\in {E}(r,d)$ all the residues are zero, since $\\omega _X$ is holomorphic on ${\\mathbb {C}}_z$ .", "Example 8 (Cyclic words at poles) For a pole $p_{\\iota }\\in {\\mathbb {C}}_{z}$ of order $-\\mu _{\\iota }$ , the cyclic word $\\mathcal {W}_{X}$ consists of exactly $2(\\mu _{\\iota }+1)$ hyperbolic sectors $H$ : $\\Big ( ({\\mathbb {C}}_{z},p_{\\iota }),X(z)=(z-p_{\\iota })^{-\\mu _{\\iota }} \\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=\\underbrace{HH\\cdots HH}_{2(\\mu _{\\iota }+1)}.$ The Poincaré–Hopf index of $X$ at $p_\\iota $ is $- \\mu _\\iota $ .", "Example 9 (A cyclic word at $\\infty $ ) Recall the rational vector field in Example REF , in our language the description of the singularity at infinity is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=(z-p_{1})^{-\\mu _{1}} (z-p_{2})^{-\\mu _{2}}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=\\underbrace{EE\\cdots EE}_{2(\\mu _1+ \\mu _2 +1) }.$ The Poincaré–Hopf index of $X$ at $\\infty $ is $\\mu _1+ \\mu _2 +2 $ .", "Example 10 (Cyclic words at $\\infty $ having entire sectors) Recall the exponential vector field in Example REF , this basic object produces $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\lambda ^{-1} {\\text{e}}^z\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto
\\mathcal {W}_X=E {}_{}{E}_{} H {}_{}{E}_{} \\sim {}_{}{E}_{} {}_{}{E}_{}.$ The Poincaré–Hopf index of $X$ at $\\infty $ is 2.", "Example 11 (The error function) The vector field $X(z)= \\lambda \\frac{\\sqrt{\\pi } }{4} {\\text{e}}^{z^ 2} \\frac{\\partial }{\\partial z}$ , $\\lambda \\in {\\mathbb {C}}^*,$ has associated the error function $\\Psi (z) = \\lambda ^{-1}\\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ .", "Case $\\lambda = 1$ , the logarithmic branch points are $\\lbrace (\\infty _1, -1), (\\infty _2, 1),(\\infty _3, \\infty ), (\\infty _4, \\infty ) \\rbrace $ , using the notation in equations (REF ), and the ${\\mathfrak {Re}\\left(X\\right)}$ –invariant decomposition is $({\\widehat{\\mathbb {C}}}, X) = \\bigcup _{\\sigma =1}^\\infty \\big ( \\overline{{\\mathbb {H}}}^2_\\sigma , \\frac{\\partial }{\\partial z} \\big ) .", "$ The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ \\sqrt{ \\pi } }{4} {\\text{e}}^{z^2}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=E {}_{}{E}_{} H H {}_{}{E}_{} E {}_{}{E}_{} HH {}_{}{E}_{}.$ See Figure REF .", "Case $\\lambda = i$ , the logarithmic branch points are $\\lbrace (\\infty _1, -i), (\\infty _2, i),(\\infty _3, \\infty ), (\\infty _4, \\infty ) \\rbrace $ , and the ${\\mathfrak {Re}\\left(X\\right)}$ –invariant decomposition is $({\\widehat{\\mathbb {C}}}, X) = \\Big (\\bigcup _{\\sigma =1}^\\infty \\big ( \\overline{{\\mathbb {H}}}^2_\\sigma , \\frac{\\partial }{\\partial z} \\big )\\Big ) \\ \\cup \\ \\big ( \\lbrace -1 \\le {\\mathfrak {Im}\\left(z\\right)} \\le 1 \\rbrace , \\frac{\\partial }{\\partial z} \\big ).$ The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ i \\sqrt{\\pi } }{4} {\\text{e}}^{z^2}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=E {}_{}{E}_{} H H {}_{}{E}_{} P_{2i} E {}_{}{E}_{} HH {}_{}{E}_{} P_{-2i},$ note that the appearance of two opposite parabolic sectors having 
displacements $\\pm 2i$ is due to the horizontal strip in the decomposition.", "See Figure REF .", "In both cases the Poincaré–Hopf index of $X$ at $\\infty $ is 2.", "We now have that for the essential singularity: Theorem 10.1 Let $X\\in {E}(r,d)$ ; the cyclic word $\\mathcal {W}_{X}$ at $\\infty $ is recognized as $\\big ( ({\\widehat{\\mathbb {C}}}_{z},\\infty ),X\\big ) \\longmapsto \\mathcal {W}_X=W_{1} W_{2} \\cdots W_{k},\\quad W_{\\iota }\\in \\lbrace H,E,P,{}_{}{E}_{} \\rbrace ,$ with exactly $\\varepsilon =2d$ letters $W_{\\iota }={}_{}{E}_{}$ .", "Moreover, $h-e=2(d-r-1)$ .", "The word $\\mathcal {W}_X$ is a complete topological invariant of a germ $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ .", "Conversely, a germ of a singular complex analytic vector field $\\big (({\\mathbb {C}},0), Y \\big )$ is the restriction of an $X \\in {E}(r,d)$ at $\\infty $ if and only if the point 0 is an isolated essential singularity of $Y$ and its admissible word $\\mathcal {W}_{Y}$ satisfies that i) the residue of the word $Res(\\mathcal {W}_{Y} ) = 0$ , ii) the Poincaré–Hopf index of the word $PH(Y, 0 ) = 2 + r$ , iii) it has exactly $2d$ entire sectors ${}_{}{E}_{}$ .", "The proof of the first statement follows the arguments in §5, §9 and §10 of [1].", "Step 1: Take a simple path $\\gamma \\subset ({\\widehat{\\mathbb {C}}}_z, \\infty )$ enclosing only $\\infty $ ($\\gamma $ does not enclose any poles of $X$ ).", "Step 2: Lift $\\gamma $ to $\\Gamma $ in ${\\mathcal {R}}_{X}\\subset {\\widehat{\\mathbb {C}}}_{z}\\times {\\widehat{\\mathbb {C}}}_{t}$ .", "Note that $\\Gamma $ lies completely in the soul of ${\\mathcal {R}}_{X}$ , recall Definition REF .", "Step 3: The singularity at $\\infty $ of $X$ has a certain self–similarity (as the examples in § show), hence in order to recognize a simple word describing it, a suitable deformation of $\\Gamma $ is required.", "That is, we deform $\\Gamma $ to a taut deformation $\\widetilde{\\Gamma }$ in the soul 
of ${\\mathcal {R}}_{X}$ .", "For examples of a taut deformation $\\widetilde{\\Gamma }$ see Figures REF and REF .", "For the appropriate technical definitions and another example see pp.", "211–212 of [1], in particular figure 17.", "The taut deformation $\\widetilde{\\Gamma }$ recognizes letters $W_{\\iota }$ at $\\infty $ as follows: letters $P$ when $\\widetilde{\\Gamma }$ crosses finite height strip flows, letters $H$ when $\\widetilde{\\Gamma }$ makes a half circle around a branch point of ${\\mathcal {R}}_{X}$ , letters $E$ when $\\widetilde{\\Gamma }$ makes a half circle around (the branch point at) $\\infty $ on a sheet of ${\\mathcal {R}}_{X}$ , letters ${}_{}{E}_{}$ when $\\widetilde{\\Gamma }$ bounces off the boundaries of the soul of ${\\mathcal {R}}_{X}$ .", "As for the difference $h-e$ between the number of sectors $H$ and $E$ appearing in the cyclic word $\\mathcal {W}_{X}$ at $\\infty $ , we shall use the Poincaré–Hopf index theory extended to these kinds of singularities (theorem A in §6 of [1] with $M={\\widehat{\\mathbb {C}}}_{z}$ ).", "From the fact that $X\\in {E}(r,d)$ has exactly $r$ poles (counted with multiplicity) in ${\\mathbb {C}}_{z}$ and since $PH(X,p_{\\iota })=-\\mu _{\\iota }$ for a pole $p_{\\iota }$ of order $-\\mu _{\\iota }$ , (6.6) of [1] gives us $2=\\chi ({\\widehat{\\mathbb {C}}})=PH(X,\\infty )+\\sum _{p_{\\iota }\\in {\\mathcal {P}}} PH(X,p_{\\iota }) \\\\=PH(X,\\infty )-r.$ On the other hand from (6.5) of [1] $PH(X,\\infty )=1+\\frac{e-h+2d}{2}$ , and the result follows.", "Assertion (2) follows by simple inspection.", "For assertion (3), use a slight modification of corollary 10.1 of [1].", "The only change arises from the fact that $X\\in {E}(r,d)$ has exactly $r$ poles (counted with multiplicity) in ${\\mathbb {C}}_{z}$ .", "Once again, by (REF ) the result follows.", "Example 12 (Cyclic words at $\\infty $ ) 1.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty 
),X(z)=\\frac{{\\text{e}}^z}{ \\lambda (z-p_1)} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= E P_{\\nu } E {}_{}{E}_{} H H {}_{}{E}_{} P_{-\\nu } E E \\\\\\sim E P_{\\nu } {}_{}{E}_{} H {}_{}{E}_{} P_{-\\nu } E E,$ where $\\nu =\\widetilde{p}_{1}-a_{1}=-\\lambda {\\text{e}}^{-p_{1}}$ .", "Note that if $\\nu \\in {\\mathbb {R}}$ then $P_{\\pm \\nu }$ do not appear as letters in $\\mathcal {W}_X$ and the word reduces to $\\mathcal {W}_X= {}_{}{E}_{} {}_{}{E}_{} E E$ .", "The Poincaré–Hopf index of $X$ at $\\infty $ is 3.", "2.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{-{\\text{e}}^{z^3}}{ 3z^2} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{}{}_{}{E}_{} {}_{}{E}_{}$ The Poincaré–Hopf index of $X$ at $\\infty $ is 4.", "3.", "Recall the vector field in Example REF , $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{{\\text{e}}^{z^3}}{ 3z^3-1} \\frac{\\partial }{\\partial z}\\Big )\\longmapsto \\mathcal {W}_X= {}_{}{E}_{} EE {}_{}{E}_{} {}_{}{E}_{} {}_{}{E}_{}{}_{}{E}_{} {}_{}{E}_{}.$ The Poincaré–Hopf index of $X$ at $\\infty $ is 5." 
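The index bookkeeping behind Theorem 10.1 and the examples above can be spelled out explicitly; the following display only rearranges equations (6.5) and (6.6) of [1] quoted in the proof.

```latex
% From (6.6): 2 = \chi(\widehat{\mathbb{C}}) = PH(X,\infty) - r, hence PH(X,\infty) = r + 2.
% From (6.5): PH(X,\infty) = 1 + \frac{e - h + 2d}{2}.
\begin{align*}
  r + 2 \;=\; 1 + \frac{e - h + 2d}{2}
  \;\Longrightarrow\; e - h = 2(r + 1 - d)
  \;\Longrightarrow\; h - e = 2(d - r - 1).
\end{align*}
% Check against Example 12.3 (r = 3, d = 3): PH(X,\infty) = 3 + 2 = 5, and the
% word there has h = 0, e = 2, so h - e = -2 = 2(3 - 3 - 1), with 2d = 6 entire sectors.
```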
], [ "The case that all critical and asymptotic\nvalues are real", "Recall the following result.", "Theorem (Eremenko et al., [12], [13]) If all critical points of a rational function $f$ are real, then $f$ is equivalent to a real rational function.", "This immediately implies that for such a rational function all the critical values are also real.", "Motivated by the above, we have the following.", "Corollary 2 (Real critical and asymptotic values) If all critical and asymptotic values of $\\Psi _X$ for $X \\in {E}(r,d)$ are in ${\\mathbb {R}}$ , then the following assertions hold.", "${\\mathcal {R}}_X$ , as in (REF ), is the union of half planes.", "$\\Psi _X : U \\subset {\\widehat{\\mathbb {C}}}\\longrightarrow {\\mathbb {H}}^2$ is a Schwarz–Christoffel map, for each half plane $U$ .", "$X$ is unstable in ${E}(r,d)$ .", "The critical and asymptotic values are in ${\\mathbb {R}}$ if and only if the family of rotated vector fields ${\\mathfrak {Re}\\left( {\\text{e}}^{i \\theta } X\\right)}$ bifurcates at $\\theta =n\\pi $ for $n\\in {\\mathbb {Z}}$ .", "$\\Box $" ], [ "Relation to Belyĭ's functions", "A rational function is Belyĭ if it has only three critical values $\\lbrace 0,1, \\infty \\rbrace $ , see [6].", "We discuss the analogous notion considering asymptotic values.", "By Lemma REF , we have that for $X\\in {E}(r,d)$ , the distinguished parameter $\\Psi _X$ has an even number $2d$ of asymptotic values (counted with multiplicity).", "The construction of a $\\Psi _X(z)$ having three asymptotic values, say at $\\lbrace 0, 1, \\infty \\rbrace $ set theoretically, as in Belyĭ's theory, is possible for $X\\in {E}(r,d)$ .", "Example 13 The vector field $X(z)= \\frac{\\sqrt{\\pi } }{4} {\\text{e}}^{z^ 2} \\frac{\\partial }{\\partial z}$ from Example REF has associated the error function $\\Psi (z) = \\frac{2}{ \\sqrt{\\pi } }\\int _0 ^z {\\text{e}}^{-\\zeta ^2} d\\zeta $ with logarithmic branch points $\\lbrace (\\infty _1, -1), (\\infty _2, 1),(\\infty _3, \\infty 
), (\\infty _4, \\infty ) \\rbrace $ , using the notation in Equations (REF ) and (REF ).", "In set-theoretic language, its asymptotic values are $\\lbrace -1 , 1 , \\infty \\rbrace $ .", "Example 14 A transcendental Belyĭ function.", "With the present techniques, we can describe the following example of a vector field arising from a transcendental Belyĭ function as in [19] p. 292.", "Let ${\\mathcal {R}}$ be the Riemann surface that consists of half a Riemann sphere (cut along the extended real line ${\\mathbb {R}}\\cup \\lbrace \\infty \\rbrace \\subset {\\widehat{\\mathbb {C}}}$ ) glued to three semi–infinite towers of copies of ${\\widehat{\\mathbb {C}}}\\backslash (a,b]$ where $(a,b]\\in \\lbrace (-\\infty ,0], (0,1],(1,\\infty ] \\rbrace $ , as in Figure REF .", "The general version of the dictionary ([1] Lemma 2.6) shows that a transcendental function $\\Upsilon (z) :{\\mathbb {C}}_z \\longrightarrow {\\widehat{\\mathbb {C}}}_{t}$ and a vector field $X(z)= \\frac{1}{\\Upsilon ^\\prime (z)} \\frac{\\partial }{\\partial z}$ are associated to ${\\mathcal {R}}$ .", "The (logarithmic) branch points of $\\Upsilon (z)$ are $\\lbrace (\\infty _1, 0), (\\infty _2, 1),(\\infty _3, \\infty ) \\rbrace $ .", "Of course there is only one such possible Riemann surface (up to Möbius transformation).", "Compare also with the line complex description as in p. 
292 of [19].", "The word is $\\Big ( ({\\widehat{\\mathbb {C}}}_{z}, \\infty ),X(z)=\\frac{ 1 }{ \\Upsilon ^\\prime (z)}\\frac{\\partial }{\\partial z}\\Big ) \\longmapsto \\mathcal {W}_X=H {}_{}{E}_{} E {}_{}{E}_{} H \\mathcal {T},$ note the appearance of a new word $\\mathcal {T}$ that is an angular sector having an accumulation point of double zeros of $X$ , see Figure REF .", "(The phase portrait of ${\\mathfrak {Re}\\left(X\\right)}$ is obtained by considering the pullback of ${\\mathfrak {Re}\\left(\\frac{\\partial }{\\partial t}\\right)}$ via $\\Upsilon $ .)", "The 1–order of $X$ is finite and at least 1.", "Figure: Riemann surface corresponding to a transcendental Belyĭ function $\\Upsilon $ .", "The path $\\widetilde{\\Gamma }$ is the taut deformation of $\\Gamma = (\\Psi _X \\circ \\gamma )$ originated by a $\\gamma $ bounding the singularity $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X_\\Upsilon \\big )$ .", "Note that topologically this is the only possible surface with exactly three logarithmic branch points.", "Figure: The cyclic words (a)–(b) appearing in Examples and (c) in .", "Numerical models for (a)–(b) appeared as figures 15 and 16 in ." ], [ "Topological classification of ${\\mathfrak {Re}\\left(X\\right)}$ for {{formula:c26ceb54-de86-4962-a4a8-391441ec47d6}}", "As suggested by the results of §, a careful study of the $(r,d)$ –skeleton of $\\Lambda _X$ allows for a complete topological classification of ${\\mathfrak {Re}\\left(X\\right)}$ for $X\\in {E}(r,d)$ , in terms of the placement of the critical and asymptotic values.", "This will be the subject of future work."
], [ "Dynamical coordinates for other families of vector fields", "As Example REF suggests, there are other families of vector fields where the construction of the dynamical coordinates $\\Lambda _{X}$ is certainly possible.", "For instance, when considering the family ${E}(s,r,d)=\\left\\lbrace X(z)=\\frac{Q(z)}{P(z)}\\ {\\text{e}}^{E(z)}\\frac{\\partial }{\\partial z} \\ \\Big \\vert \\ \\begin{array}{l}Q,\\ P,\\ E\\in {\\mathbb {C}}[z], \\\\\\deg {Q}=s,\\ \\deg {P}=r,\\ \\deg {E}=d\\end{array}\\right\\rbrace ,$ as in [2], we are presented with two intrinsically different cases: 1) If $\\Psi _X$ is univalued (this is equivalent to requiring that the associated 1–form $\\omega _X$ have all its residues zero), then vertices of the form $(q_\\iota ,\\infty ,\\nu _\\iota )$ , corresponding to the zeros ${\\mathcal {Z}}=\\lbrace q_\\iota \\rbrace _{\\iota =1}^s$ of $X$ , need to be added to the description of $\\Lambda _X$ .", "2) If $\\Psi _X$ is multivalued, then extra structure will be required, because of the appearance of logarithmic singularities over those $q_\\iota \\in {\\mathbb {C}}_z$ where the associated 1–form has non–zero residue."
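A minimal instance of case 2) (our own illustration, not an example taken from [2]): take $X(z)=z\,\frac{\partial }{\partial z}\in {E}(1,0,0)$ , i.e., $Q(z)=z$ and $P$ , $E$ constant. Then

```latex
\omega_X \;=\; \frac{dz}{z}, \qquad
Res(X,0) \;=\; \frac{1}{2\pi i}\int_{|z|=1}\frac{dz}{z} \;=\; 1 \;\neq\; 0, \qquad
\Psi_X(z) \;=\; \int^{z}\frac{d\zeta}{\zeta} \;=\; \log z,
```

so $\Psi _X$ is multivalued and a logarithmic singularity appears over the zero $q_1=0$ , exactly the situation described above.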
], [ "On cyclic words", "Cyclic words as topological or analytical invariants for germs.", "The word $\\mathcal {W}_X$ (as in Theorem REF ) is a complete topological invariant of a germ $\\big ( ({\\widehat{\\mathbb {C}}}, \\infty ), X \\big )$ , $X \\in {E}(r,d)$ .", "Moreover, the word $\\mathcal {W}_X$ , in general, is not a global topological invariant of $X \\in {E}(r, d)$ .", "For example, all the vector fields $X\\in {E}(r, 0)$ , $r \\ge 3$ , with all critical and asymptotic values in ${\\mathbb {R}}$ , have the same word $\\mathcal {W}_X = \\underbrace{EE \\cdots EE}_{2r +2}$ at $\\infty $ .", "However, it is possible to modify the definitions of angular sectors $P_\\nu $ , $E$ and ${}_{}{E}_{}$ so that in fact the corresponding $\\mathcal {W}_X$ is a global analytic invariant of $X$ modulo $Aut({\\mathbb {C}})$ .", "This is left for a future project.", "Other angular sectors as letters for cyclic words.", "As shown in Example REF and in examples 5.9, 5.12 and figures 2, 5 of [1], there are certainly other possible angular sectors that can be used as letters for cyclic words.", "In this context and considering the above examples, it is clear that there are an infinite number of topologically different angular sectors (letters) that can appear in a cyclic word associated to an essential singularity for a vector field $X$ .", "However, it is not immediately clear how many topologically different letters there are when we specify the $p$ –order of $X$ , that is, the coarse analytic invariant of functions and vector fields.", "For instance, by once again considering Example REF , $X(z)=\\Upsilon ^{*}(\\frac{\\partial }{\\partial t})(z)=\\frac{1}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}$ .", "However, we may also consider $Y(z)=\\Upsilon ^{*}(\\lambda t\\frac{\\partial }{\\partial t})(z)=\\lambda \\frac{\\Upsilon (z)}{\\Upsilon ^{\\prime }(z)}\\frac{\\partial }{\\partial z}$ which provides a (very) different vector field.", "Remark 14 As a side note 
this shows that the topological classification of functions is coarser than the topological classification of phase portraits of vector fields, even for $\\Psi _X$ and $X$ in ${E}(r,d)$ ." ] ]
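Both germs in the last paragraph follow from the standard pullback rule for vector fields under $\Upsilon : {\mathbb {C}}_z \longrightarrow {\widehat{\mathbb {C}}}_t$ (a well-known identity, recalled here for convenience):

```latex
\Upsilon^{*}\Big( f(t)\,\frac{\partial}{\partial t} \Big)(z)
  \;=\; \frac{f(\Upsilon(z))}{\Upsilon'(z)}\,\frac{\partial}{\partial z},
\qquad\text{so that}\qquad
\Upsilon^{*}\Big(\frac{\partial}{\partial t}\Big)
  = \frac{1}{\Upsilon'(z)}\frac{\partial}{\partial z},
\quad
\Upsilon^{*}\Big(\lambda t\,\frac{\partial}{\partial t}\Big)
  = \lambda\,\frac{\Upsilon(z)}{\Upsilon'(z)}\frac{\partial}{\partial z}.
```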
1906.04207
[ [ "Future Data Helps Training: Modeling Future Contexts for Session-based\n Recommendation" ], [ "Abstract Session-based recommender systems have attracted much attention recently.", "To capture the sequential dependencies, existing methods resort either to data augmentation techniques or left-to-right style autoregressive training.", "Since these methods aim to model the sequential nature of user behaviors, they ignore the future data of a target interaction when constructing the prediction model for it.", "However, we argue that the future interactions after a target interaction, which are also available during training, provide valuable signal on user preference and can be used to enhance the recommendation quality.", "Properly integrating future data into model training, however, is non-trivial to achieve, since it disobeys machine learning principles and can easily cause data leakage.", "To this end, we propose a new encoder-decoder framework named Gap-filling based Recommender (GRec), which trains the encoder and decoder by a gap-filling mechanism.", "Specifically, the encoder takes a partially-complete session sequence (where some items are masked on purpose) as input, and the decoder predicts these masked items conditioned on the encoded representation.", "We instantiate the general GRec framework using a convolutional neural network with sparse kernels, giving consideration to both accuracy and efficiency.", "We conduct experiments on two real-world datasets covering short-, medium-, and long-range user sessions, showing that GRec significantly outperforms the state-of-the-art sequential recommendation methods.", "More empirical studies verify the high utility of modeling future contexts under our GRec framework."
], [ "Introduction", "Session-based Recommender System (SRS), which aims to predict the next item based on an ordered history of interacted items within a user session, has become an emerging topic in the recommendation domain.", "While recent advances in deep neural networks [7], [17], [27], [26] are effective in modeling user short-term interest transition, it remains a fundamental challenge to capture the sequential dependencies in long-range sessions [35], [25], [14].", "In practice, long-range user sessions widely exist in scenarios such as micro-video and news recommendations.", "For example, users on TikTok (https://www.tiktok.com) may watch 100 micro-videos in 30 minutes as the average playing time of each video takes only 15 seconds.", "Generally speaking, there are two popular strategies to train recommender models from sequential data: data augmentation [24], [27], [13], [26], [4] and autoregressive training [35], [11].", "Specifically, the data augmentation approach, such as the improved GRU4Rec [24], performs data preprocessing and generates new training sub-sessions by using prefixes of the target sequence, and the recommender then predicts the last item in the sequence.", "The autoregressive approach models the distribution of an entire sequence in an end-to-end manner, rather than only the last item.", "This idea results in a typical left-to-right style unidirectional generative model, referred to as NextItNet [35].", "The two strategies share similar intuition in that when constructing the prediction function for a target interaction, only its past user behaviors (which we also term “contexts” in this paper) are taken into account.", "In standard sequential data prediction, it is a straightforward and reasonable choice to predict a target entry based on the past entries [28], [10].", "However, in sequential recommendation, we argue that such a choice may limit the model's ability.", "The key reason is that although user behaviors are in the 
form of sequence data, the sequential dependency may not strictly hold.", "For example, after a user purchases a phone, she may click phone case, earphone, and screen protector in the session, but there is no sequential dependency among the three items — in other words, it is likely that the user clicks the three items in any order.", "As such, it is not compulsory to model a user session as a strict sequence.", "Secondly, the objective of recommendation is to accurately estimate a user's preference, and using more data is beneficial to the preference estimation.", "As the future data after a target interaction also evidences the user's preference, it is reasonable to believe that modeling the future data can help build a better prediction model for the target interaction.", "Figure: Examples of ED architecture to model sequential data (encoder as yellow and decoder as blue).", "(a) is a standard ED architecture where the input $x$ and output $z$ are from two different domains.", "E.g., in English-to-Chinese machine translation, $x$ and $z$ represent English and Chinese words respectively.", "(b) is a direct application of ED on SRS with future data modeled.", "As the items predicted by the decoder (e.g., $x_5$ with red color) can be observed from the encoder's input, it causes data leakage in training.", "Nevertheless, it is challenging to model the future data well, since it disobeys machine learning principles and can cause data leakage if not handled properly.", "Consider the encoder-decoder (ED) neural architecture as an example, which has been extensively used in sequential data modeling [23], [2], [9].", "As illustrated in Figure REF (a), in machine translation, when predicting a target word in a sequence (i.e., sentence), the encoder takes the words from both sides as the input source.", "Since the source and target words are from different domains, there is no issue of data leakage.", "However, if we apply the same ED architecture to user session modeling, 
as illustrated in Figure REF (b), the data leakage issue arises inevitably.", "This is because the source and target entries are from the same domain, such that a target entry (to be predicted by the decoder) exactly occurs in the input of the encoder.", "To address the above issues, we propose a new SRS method that models the future contexts: Gap-filling based encoder-decoder framework for sequential Recommendation, or GRec for short.", "GRec revises the ED design by tailoring it for future data modeling without data leakage: the encoder and decoder are jointly trained by a gap-filling mechanism [19], which is inspired by the recent development of pretrained language models [3].", "Specifically, a portion of the items in a user session are replaced by gap symbols (e.g., \"$\\_\\_$ \").", "The encoder takes the partially-complete sequence as the input, and the decoder predicts the items of these gaps conditioned on the encoded representation.", "In this way, GRec can force the encoder to be aware of the general user preference, represented by unmasked actions, and simultaneously force the decoder to perform next item generation conditioned on both the past contexts and the encoded general user preference.", "The contributions of the work are listed as follows: We highlight the necessity of modeling future contexts in session-based recommender systems, and develop a general neural network framework GRec that works without data leakage.", "We specify GRec using a convolutional neural network with sparse kernels [35], unifying the advantages of both the autoregressive mechanism for sequence generation and two-side contexts for encoding.", "We propose a projector neural network with an inverted bottleneck architecture in the decoder, which can enhance the representational bandwidth between the encoder and the decoder.", "We conduct extensive experiments on two real-world datasets, justifying the effectiveness of GRec in leveraging future contexts for 
session-based recommender system.", "The paper is organized as follows.", "In Section 2, we review recent advancements in using sequential neural network models for SRS.", "Particularly, we recapitulate two widely used unidirectional training approaches.", "In Section 3, we first investigate straightforward ways to model bidirectional contexts within a user session, and point out their drawbacks for the item recommendation task.", "After that, we describe in detail the framework and architecture of our proposed GRec.", "In Section 4, we conduct experiments and ablation tests to verify the effectiveness of GRec in the SRS task.", "In Section 5, we draw conclusions and discuss future work." ], [ "Preliminaries", "In this section, we first define the problem of session-based recommendations.", "Then, we recapitulate two state-of-the-art left-to-right style sequential recommendation methods.", "Finally, we review previous work on SRS." ], [ "Top-$N$ Session-based Recommendation", "The formulation of top-$N$ session-based recommendation in this paper closely follows that in [26], [24], [35].", "In SRS, the concept “session” is defined as a collection of items (referring to any objects, e.g., videos, songs or queries) that happened at one time or in a certain period of time [13], [31].", "For instance, both a list of browsed webpages and a collection of watched videos consumed in an hour or a day can be regarded as a session.", "Formally, let $\\lbrace x_1,...,x_{t-1},x_t\\rbrace $ be a user session with items in chronological order, where $x_i \\in \\mathbb {R}^n$ $(1\\le i \\le t)$ denotes the index of a clicked item out of a total number of $n$ items in the session.", "The task of SRS is to train a model so that for given prefix session data $x=\\lbrace x_1,...,x_{i}\\rbrace $ , it can generate the distribution $\\mathbf {\\mathit {\\hat{y}}}$ for items which will occur in the future, where $\\mathbf {\\mathit {\\hat{y}}} =[ \\hat{y}_1,...,\\hat{y}_n ] \\in \\mathbb 
{R}^n $ .", "$\\hat{y}_j$ represents the probability that item $j$ occurs in the next (i.e., the $(i+1)$ -th) clicking event.", "In practice, SRS typically makes more than one recommendation by selecting the top-$N$ (e.g., $N=10$ ) items from $\\mathbf {\\mathit {\\hat{y}}}$ , referred to as the top-$N$ session-based recommendations." ], [ "The Left-to-Right-style Algorithms", "In this section, we mainly review the sequential recommendation models that follow the left-to-right fashion, including but not limited to Improved GRU4Rec [24] (short for IGRU4Rec), Caser [26], and NextItNet [35].", "Among these models, IGRU4Rec and Caser fall in the line of data augmentation methods, as shown in Figure REF  (a), while NextItNet is a typical AR-based generative model, as shown in Figure REF  (b).", "Note that GRU4Rec and NextItNet can be trained by both DA and AR methods." ], [ "The authors in [24] proposed a generic data augmentation method to improve the recommendation quality of SRS, which has been further applied in a majority of subsequent work, such as [27], [13], [24], [26].", "The basic idea of DA in SRS is to treat all prefixes in the user session as new training sequences [7].", "Specifically, for a given user session $\\lbrace x_1,...,x_{t-1},x_t\\rbrace $ , the DA method will generate a collection of sequences and target labels {$(x_2|x_1)$ , $(x_3|x_1,x_2)$ ,..., $(x_{t}|x_1,x_2,...,x_{t-1})$ } as illustrated in Figure REF (a).", "Following this processing, the sequential model is able to learn all conditional dependencies rather than only the last item $x_{t}$ and the prefix sequence $\\lbrace x_1,x_2,...,x_{t-1}\\rbrace $ .", "Due to the extra information learned from additional subsessions, data augmentation becomes an effective way to reduce the overfitting problem especially when user sessions are long and the user-item matrix is sparse.", "Even though the data augmentation method has been successfully applied in numerous SRS works, it may break the integrity of the entire user 
session and significantly increase training times [35].", "Figure: Two techniques to train sequential recommendation models.", "The numbers represent observed itemIDs in each user session.", "\"0\" is the padding token.", "The red token represents the items to be predicted by SRS.", "(a) The typical data augmentation approach with a number of new subsessions created by splitting the original input session.", "(b) The typical left-to-right style autoregressive approach.", "The item that is being predicted is only determined by its previous timesteps, i.e., $p(x_t)=p(x_t|x_1,...,x_{t-1})$ .", "For instance, item “4” is predicted by “1, 2, 3”, which achieves the same effect as session-1 in (a).", "The overall training objectives in (b) can be regarded as the sum of the separate objectives of all subsessions in (a)." ], [ "The AR-style learning methods [35], [11] propose optimizing all positions of the original input sequence rather than only the final one.", "Specifically, the generative model takes $\\lbrace x_1,...,x_{t-1}\\rbrace $ (or $x_{1:{t-1}}$ ) as the input and outputs probabilities (i.e., softmax) over $x_{2:{t}}$ in a seq2seq (sequence-to-sequence) manner.", "Mathematically, the joint distribution of a user session $\\lbrace x_1,...,x_{t-1},x_t\\rbrace $ can be factorized as a product of conditional distributions following the chain rule: $p(x)=\\prod _{i=1}^{t}p(x_i|x_1,...,x_{i-1}; {\\Theta })$ where $p(x_i|x_1,...,x_{i-1})$ denotes the probability of the $i$ -th item $x_i$ conditioned on its entire prefix $x_{1:{i-1}}$ , and ${\\Theta }$ denotes the model parameters.", "With this formulation, each predicted item can be conditioned on all items that are clicked earlier.", "Correspondingly, the AR method does not rely on the data augmentation technique any more.", "As mentioned, both the data augmentation and AR approaches train the user session in an order from left to right.", "Though it conforms to the generation law of sequential data with natural 
orders, the way of modeling inevitably neglects many useful future contexts associated with the target interaction.", "Particularly in the field of recommendation, user behaviors in the sequence may not obey rigid order relations.", "Hence, these methods may limit the ability of sequential recommendation models.", "Moreover, leveraging the additional future contexts can also be regarded as a way of data augmentation that helps models alleviate the sparsity problem in SRS.", "Motivated by this, we believe that it is crucial to investigate the impact on sequential recommendation models of taking into account both directional contexts." ], [ "Related Work", "Recently, the powerful deep neural network based sequential models have almost dominated the field of session-based recommender systems (SRS).", "Among these models, GRU4Rec [7] is regarded as the pioneering work that employs the recurrent neural network (RNN) to model the evolution of user preference.", "Inspired by the success, a class of RNN-based models has been developed.", "For example, an improved RNN variant in [24] showed promising improvements over standard RNN models by proposing data augmentation techniques.", "Hidasi et al. [6] further proposed a family of alternative ranking objective functions along with effective sampling tricks to improve the cross-entropy and pairwise losses.", "[17] proposed personalized SRS, while  [5], [20] explored how to use content and context features to enhance the recommendation accuracy.", "Another line of research work is based on convolutional neural networks (CNN) and attention mechanisms.", "The main reason is that RNN-based sequential models seriously depend on a hidden state from all the past interactions and thus cannot fully utilize the parallel processing power of GPUs [35].", "As a result, their speeds are limited in both training and evaluation.", "Instead, CNN and purely attention based models are inherently easier to parallelize since all timesteps in the 
user session are known during training.", "The most typical CNN model for SRS is Caser [26], which treats the item embedding matrix as an image and then performs 2D convolution on it.", "In NextItNet [35], the authors argued that the standard CNN architecture and max pooling operation of Caser were not well-suited to model long-range user sequences.", "Correspondingly, they proposed using stacked dilated CNNs to increase the receptive field of higher layer neurons.", "Moreover, the authors claimed that the data augmentation techniques widely used in previous work could be simply omitted by developing a seq2seq style objective function.", "They showed that the autoregressive NextItNet is more powerful than Caser and more efficient than RNN models for the top-$N$ session-based recommendation task.", "Inspired by the success of Caser and NextItNet, several extended works, e.g., [32], [33], were proposed by employing (or improving) the 1D dilated CNN or 2D CNN to model user-item interaction sequences.", "Meanwhile, transformer-based self-attention [11], [37], [22] models also demonstrated promising results in the area of SRS.", "However, it is known that the self-attention mechanism is computationally more expensive than the stacked dilated CNN structure since calculating self-attention of all timesteps requires quadratic complexity.", "More recently, [14], [25] introduced gating networks to improve SRS by capturing both short- and long-term sequential patterns.", "The above-mentioned sequential recommenders are built on either an encoder or a decoder architecture.", "Jointly training an encoder and decoder to model two directional contexts while maintaining the autoregressive generative mechanism has not been explored in the existing recommendation literature.", "A relatively relevant work to this paper is NARM [13], which proposed an attention-based `ED mechanism' for SRS.", "However, NARM is, in fact, a sequence-to-one architecture rather than the typical seq2seq manner in its 
decoder network.", "In other words, NARM decodes the distribution only for the final item, whereas the standard ED model decodes distributions for a complete sequence.", "By contrast, our proposed GRec is a pseq2pseq (partial-sequence-to-partial-sequence) ED paradigm whose encoder & decoder focus on encoding and decoding incomplete sequences.", "With this design, GRec combines the advantages of both the autoregressive mechanism for sequence generation and two-side contexts for encoding.", "Before introducing the final solution, we first need to investigate some conventional ways to incorporate future contexts.", "Then, we shed light on the potential drawbacks of these methods when applying them to the generation task.", "Motivated by this analysis, we present the gap-filling (or fill-in-the-blank) based encoder-decoder generative framework, namely, GRec.", "In the following, we instantiate the proposed methods using the dilated convolutional neural network from NextItNet, giving consideration to both accuracy and efficiency." 
], [ "Two-way Data Augmentation", "A straightforward approach to take advantage of future data is to reverse the original user input sequence and train the recommendation model on both the original and the reversed sequences.", "This type of two-way data augmentation has been shown effective in several NLP tasks [23].", "Recommendation models based on both data augmentation and AR methods can be applied directly without any modification.", "For instance, we show this method using NextItNet (denoted by NextItNet+), as illustrated below.", "$\begin{aligned}NextItNet+: &\underbrace{ \lbrace x_1,...,x_{t-1}\rbrace }_{input}\Rightarrow \underbrace{\lbrace x_2,...,x_{t}\rbrace }_{output}\\&\underbrace{ \lbrace x_t,...,x_{2}\rbrace }_{input}\Rightarrow \underbrace{\lbrace x_{t-1},...,x_1\rbrace }_{output}\end{aligned}$ Issues: The above two-way data augmentation has two potential drawbacks when used for the item generation task: (1) the left and right contexts of item $x_{i}$ are modeled by the same set of parameters, i.e., the same convolutional kernels of NextItNet.", "In practice, however, the impact of the left and right contexts on $x_{i}$ can be very different.", "That is, sharing the same parameter representation is not accurate from this perspective.", "(2) The separate training processes for the left and right contexts easily result in suboptimal performance since the parameters learned for the left contexts may be largely modified when the model is trained on the right contexts.", "In view of this, a better solution is one where (1) a single optimization objective covers both the left and right contexts simultaneously, and (2) the left and right contexts are represented by different sets of model parameters." 
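The two training directions of NextItNet+ above can be sketched in a few lines. This is an illustrative pure-Python sketch; the function name `two_way_pairs` is our own, not from the paper's implementation.

```python
def two_way_pairs(session):
    """Build the forward and reversed (input, target) training pairs used by
    the NextItNet+ style two-way data augmentation.

    `session` is a list of item ids [x_1, ..., x_t]; each pair shifts the
    sequence by one step so that every position predicts its successor.
    """
    forward = (session[:-1], session[1:])   # {x_1..x_{t-1}} => {x_2..x_t}
    rev = session[::-1]                     # {x_t..x_1}
    backward = (rev[:-1], rev[1:])          # {x_t..x_2} => {x_{t-1}..x_1}
    return [forward, backward]
```

For example, `two_way_pairs([1, 2, 3, 4])` yields the pair `([1, 2, 3], [2, 3, 4])` and its reverse `([4, 3, 2], [3, 2, 1])`; note that both directions share the same model parameters, which is exactly the first drawback discussed above.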
], [ "Two-way NextItNets (tNextItNets)", "Here, we introduce two-way NextItNets that model the past contexts in the forward direction and the future contexts in the backward direction.", "Similar to the forward NextItNet, the backward NextItNet runs over a user session in reverse, predicting the previous item conditioned on the future contexts.", "The setup here differs from [35], where both the predicted item and its future contexts need to be masked: we only guarantee that the item being predicted will not be accessed by higher-layer neurons.", "The formulation of the backward NextItNet is $p(x)=\prod _{i=1}^{t}p(x_i|x_t,x_{t-1},...,x_{i+1}; \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{\Theta }}})$ .", "Both the forward and backward NextItNets produce a hidden matrix for a user session in each convolutional layer.", "Let $\vec{\mathbf {\mathit {h}}}_{x_i}$ and $\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{\mathbf {\mathit {h}}}}}_{x_i}$ be the hidden vectors of item $x_i$ calculated by the top-layer NextItNet in the forward and backward directions, respectively.", "To form the two-way NextItNets, we concatenate $\vec{\mathbf {\mathit {h}}}_{x_i}$ and $\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{\mathbf {\mathit {h}}}}}_{x_i}$ , i.e., $\mathbf {\mathit {h}}_{x_i}=[\vec{\mathbf {\mathit {h}}}_{x_i}; \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{\mathbf {\mathit {h}}}}}_{x_i}]$ .", "To combine both directions in the objective function, we maximize the joint log likelihood of both directions.", "$\begin{aligned}p(x)=&\prod _{i=1}^{t}p(x_i|x_1,x_2,...,x_{i-1};\Theta _e,\vec{\Theta }_{NextItNet},\Theta _s) \\ &\cdot \, p(x_i|x_t,x_{t-1},...,x_{i+1}; \Theta _e,\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{\Theta }}}_{NextItNet},\Theta _s)\end{aligned}$ The parameters $\Theta $ consist of four parts: the bottom-layer item embedding $\Theta _e$ , the convolutional kernels of NextItNet $ \vec{\Theta }_{NextItNet}$ & $ \scalebox 
{-1}[1]{\\vec{\\scalebox {-1}[1]{\\Theta }}}_{NextItNet}$ and weights of softmax layer $\\Theta _s$ .", "The idea here has similar spirit with the recent deep contextualized word representation (ELMo) model [16] with the exception that ELMo was designed for word understanding or feature extraction tasks via a Bi-RNN encoder, while we apply the two-way NextItNets to solve the generating task.", "Issues: Though tNextItNets can address the training issues mentioned in Section REF , the future contexts are actually unapproachable during the generating phase.", "That is, the backward NextItNet is useless when it is used for inference.", "The discrepancies between training and predicting may seriously hurt the final recommendation performance since the optimal parameters learned for the two-way NextItNets may be largely suboptimal for the unidirectional NextItNet.", "Another downside is that two-way NextItNets are essentially a shallow concatenation of independently trained left-to-right and right-to-left models, which have limited expresiveness in modeling complex contextual representations.", "So, it is unknown whether the proposed two-way NextItNets perform better or not than NextItNet, even though it utilize more contexts.", "Figure: The graphical illustration of GRec with two convolutional layers.", "The decoder (grean neurons) is stacked on top of the encoder (yellow neurons).", "The light blue & green areas are the receptive field of x 6 x_6.", "Note the first position is not considered for masking." ], [ "Gap-filling Based ED framework", "In this subsection, we first present the general framework and neural architecture of GRec.", "Then, we discuss the relation between GRec and other popular sequential models." 
], [ " Seq2seq for SRS", "First, we introduce the basic concepts of seq2seq learning for SRS.", "We denote $(x,z)\in (\mathcal {X},\mathcal {Z})$ as a sequence pair, where $x =\lbrace x_1,...,x_{t}\rbrace \in \mathcal {X}$ represents the user input session sequence with $t$ items, $z =\lbrace z_2,...,z_{g+1}\rbrace \in \mathcal {Z}$ represents the output sequence, and $(\mathcal {X},\mathcal {Z})$ are regarded as the source and target domains.", "Unlike the standard seq2seq scenario (i.e., Figure REF (a)), we have the following special relations in the SRS task (see Figure REF  (b)): (1) $g=t$ ; (2) $\lbrace z_1,...,z_{g}\rbrace $ =$\lbrace x_1,...,x_{t}\rbrace $ .", "The goal of a seq2seq model is to learn a set of parameters $\Theta $ that describes the conditional probability $P (z|x, \Theta )$ , usually by employing the log likelihood as the objective function [23], [21]: $G(\mathcal {X},\mathcal {Z}; \Theta )=\sum _{(x,z)\in (\mathcal {X},\mathcal {Z})}\log p(z|x; \Theta )$ .", "Following the chain-rule decomposition, the probability can be further expressed in an autoregressive manner: $P (z|x, \Theta )=\prod _{i=2}^{g}P(z_i|z_{1:i-1},x;\Theta )=\prod _{i=2}^{t}P(x_i|x_{1:i-1},x;\Theta )$" ], [ "General Framework of Pseq2pseq", "As can be seen, it is non-trivial to design a seq2seq learning model using Eq.", "(REF ) since the item being predicted, e.g., $x_i$ , could be indirectly seen through the encoder network via $x$ .", "To address this issue, we present masked-convolution operations by applying the idea of gap-filling (originally designed for language tasks [19]) in the ED architecture.", "Here, we treat items in a user session as words in a sentence.", "Correspondingly, we randomly replace some tokens in the sequence with the gap symbol “__”.", "The goal of gap-filling is to predict the ground truth of these missing tokens.", "GRec consists of a modified version of encoder & decoder, and a projector 
module which is injected into the decoder network.", "Both the encoder and decoder are implemented using the dilated convolutional neural network, although they can simply be replaced with recurrent [7] or attention [11] networks.", "The main difference between the encoder and decoder is that the encoder network is built upon a deep bidirectional CNN, while the decoder network is built upon a deep causal CNN.", "To enhance the bandwidth between the encoder and decoder, we place the decoder on top of the representation computed by the encoder, and inject a projector network between them.", "This is in contrast to models that compress the encoder representation to a fixed-length vector [23] or align them via an attention mechanism [2], [21].", "(We did not find that the basic attention mechanisms introduced in [2], [29] helped GRec yield any better results.)", "Formally, given a user session sequence $x =\lbrace x_1,...,x_{t}\rbrace \in \mathcal {X}$ , we denote $\tilde{x}$ as a partial $x$ , where portions of the items, i.e., $x_\triangle =\lbrace x_{\triangle _1 },...,x_{\triangle _m }\rbrace $ ($1 \le m < t$ ), are randomly replaced with blank mask symbols (“__\").", "GRec optimizes a pseq2pseq model by predicting $x_\triangle $ in each user session, taking the modified item sequence $\tilde{x}$ as the input sequence.", "The objective function $G(\mathcal {X};\Theta )$ of GRec is defined as $\begin{aligned}G(\mathcal {X};\Theta )=&\sum _{x\in \mathcal {X}}\log p(x_{\triangle }|\tilde{x}; \Theta )\\=&\sum _{x\in \mathcal {X}}\log \prod _{i=1}^{m} p(x_{\triangle _i}|x_{1:\triangle _{i-1}},\tilde{x}; \Theta )\end{aligned}$ where $\Theta $ consists of the item embeddings of encoder $\Theta _{en}$ and decoder $\Theta _{de}$ , the convolution weights of encoder $\Theta _{cnn}$ and decoder $\vec{\Theta }_{cnn}$ , the weights of the projector module $\Theta _{p}$ and softmax layer $\Theta _s$ .", "One may note that there is overlapping data 
between $\\tilde{x}$ and $x_{1:\\triangle _{i-1}}$ .", "In fact, since the item embeddings of encoder and decoder are not shared, the overlapped tokens in the encoder and decoder can represent different meanings.", "We show the graphical example of Eq.", "(REF ) using Figure REF .", "The decoder of GRec will predict items (i.e., $x_\\triangle $ ) that are masked in the encoder part.", "As shown in Figure REF , GRec takes an input sequence “$x_1, x_3, x_7, x_8$ ” and produces “$x_2, x_4, x_5, x_6, x_9$ ” as the output sequence.", "Taking the generation of item “$x_6$ ” as an example, when it is predicted, GRec can leverage the causal relations of the partial sequence “$x_1, x_2, x_3, x_4, x_5$ ”, and meanwhile leverage the representations of item “$x_3, x_7, x_8$ ” via the encoder, where “ $x_7, x_8$ ” are the future contexts of “$x_6$ ”.", "For clarity, we show the comparison of NextItNet (seq2seq) and GRec (pseq2pseq) in terms of model generation as below: $\\begin{aligned}&NextItNet: \\underbrace{\\lbrace x_1,x_2,x_3,...,x_7,x_8\\rbrace }_{decoder \\ input}\\Rightarrow \\underbrace{\\lbrace x_2,x_3,x_4,...,x_8,x_9\\rbrace }_{decoder \\ output}\\\\&GRec: \\underbrace{ \\lbrace x_1, \\_\\_, x_3, \\_\\_, \\_\\_, \\_\\_, x_7, x_8, \\_\\_,\\rbrace }_{encoder \\ input}+\\underbrace{ \\lbrace x_1,x_2,x_3,...,x_9\\rbrace }_{decoder \\ input} \\\\&\\Rightarrow \\underbrace{\\lbrace x_2,x_4,x_5,x_6,x_9\\rbrace }_{decoder \\ output}\\end{aligned}$ With this design, GRec can take advantage of both the past and future contexts without causing data leakage." 
], [ " GRec Architecture", "In the following, we describe the components of GRec: the embedding layers, the encoder, the decoder, the projector and the softmax layer.", "Embedding Layers.", "The proposed GRec has two distinct embedding layers, namely, the encoder embedding matrix $\mathbf {\mathit {\widetilde{E}}} \in \mathbb {R}^{n\times d}$ and the decoder embedding matrix $\mathbf {\mathit {\widehat{E}}}\in \mathbb {R}^{(n-1)\times d}$ , where $n-1$ is the number of items and $d$ is the embedding dimension.", "Specifically, the encoder of GRec embeds the masked user input sequence $\tilde{x}$ via a look-up table from $\mathbf {\mathit {\widetilde{E}}} $ , denoted by $\mathbf {\mathit {\widetilde{E}}}^{\tilde{x}} \in \mathbb {R}^{t\times d}$ , while the decoder embeds the original input sequence $x$ from $\mathbf {\mathit {\widehat{E}}}$ , denoted by $\mathbf {\mathit {\widehat{E}}}^x\in \mathbb {R}^{t\times d}$ .", "After the embedding look-up operation, we denote the embeddings of the encoder and decoder as below: $\begin{aligned}\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}} =\begin{bmatrix} \mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_1} & \mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_0} & \mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_3} & \mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_0} & \cdots & \mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_0} \end{bmatrix}\\\mathbf {\mathit {\widehat{E}}}_L^{{x}} =\begin{bmatrix} \mathbf {\mathit {\widehat{E}}}_L^{{x}_1} & \mathbf {\mathit {\widehat{E}}}_L^{{x}_2} &\mathbf {\mathit {\widehat{E}}}_L^{{x}_3} & \mathbf {\mathit {\widehat{E}}}_L^{{x}_4} & \cdots &\mathbf {\mathit {\widehat{E}}}_L^{{x}_t} \end{bmatrix}\end{aligned}$ where $L$ represents the $L$ -th user sequence, and $\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}_0}$ represents the embedding vector of the blank symbol, i.e., `$\_\_$ ', in the encoder embedding.", 
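A minimal sketch of the two distinct look-up tables described above, with one extra row of the encoder table reserved for the blank symbol. All sizes, names, and the convention that id 0 denotes the blank are illustrative assumptions, not taken from the paper's implementation.

```python
import random

rng = random.Random(42)
n, d = 6, 4  # n rows for the encoder (n-1 items plus the blank), d = embedding size

# Encoder table has an extra row (index 0) for the blank symbol "__";
# the decoder table only embeds real items and is NOT shared with the encoder.
enc_table = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
dec_table = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n - 1)]

def embed(ids, table):
    """Plain embedding look-up: one d-dimensional row per id."""
    return [table[i] for i in ids]

E_enc = embed([1, 0, 3, 0], enc_table)  # masked encoder input, 0 = "__"
E_dec = embed([i - 1 for i in [1, 2, 3, 4]], dec_table)  # full decoder input (no blank row)
```

Because the tables are separate, the same item can carry different representations in the encoder and decoder, which is exactly why the overlap between $\tilde{x}$ and the decoder input is unproblematic.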
"Encoder: Deep Bidirectional CNNs by Gap-filling.", "We implement the encoder network with a series of stacked 1D dilated convolutional layers inspired by NextItNet.", "To alleviate gradient vanishing issues, we wrap every two dilated layers in a residual block.", "Unlike NextItNet, the convolutional operations of the encoder are not causal.", "Each higher-layer neuron can see both its left and right contexts.", "With the gap-filling design, these neurons are forced to understand the unmasked contexts in the sequence.", "It is also worth mentioning that the proposed gap-filling mechanism is dynamic and random, masking different portions of the item sequence in different training batches.", "Formally, we define the output of the encoder network with two stacked layers in Figure REF as: $\begin{aligned}\mathcal {F}_{encoder}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}})=\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}}+\mathcal {F}_{non\_cauCNN}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}})\end{aligned}$ where $\mathcal {F}_{non\_cauCNN}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}})$ denotes the block function of non-causal CNNs defined as $\begin{aligned}\mathcal {F}_{non\_cauCNN}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}})=RELU(\mathcal {L}_n(\psi _2 (RELU(\mathcal {L}_n(\psi _1 (\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}}))))))\end{aligned}$ where $RELU$ and $\mathcal {L}_n$ denote the non-linear activation function [15] and layer normalization [1], and $\psi _1$ and $\psi _2$ are non-causal CNNs with 1-dilated and 2-dilated filters respectively.", "In practice, one can repeat the basic encoder structure several times to capture long-term and complex dependencies.", "Decoder: Deep Causal CNNs by Gap-predicting.", "The decoder is composed of the embedding layer $\mathbf {\mathit {\widehat{E}}}_L^{{x}}$ , the projector and the causal CNN modules, in which each position can only attend leftward.", "The CNN 
component strictly follows NextItNet with the exception that it is allowed to estimate the probabilities of only the masked items in the encoder, rather than of the entire sequence as in NextItNet.", "Meanwhile, before performing the causal CNN operations, we need to aggregate the final output matrix of the encoder and the embedding matrix of the decoder, and then pass them into the projector network, which is described later.", "Formally, the final hidden layer (before the softmax layer) of the decoder can be represented as $\begin{aligned}\mathcal {F}_{decoder}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}} )=\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}}) +\mathcal {F}_{cauCNN}(\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}}))\end{aligned}$ where $\begin{aligned}&\mathcal {F}_{cauCNN}(\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}}))\\&=RELU(\mathcal {L}_n(\phi _2 (RELU(\mathcal {L}_n(\phi _1 (\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}})))))))\end{aligned}$ where $\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}})$ and $\mathcal {F}_{cauCNN}(\mathcal {F}_{PR}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}}))$ are the outputs of the projection layers and the causal CNN layers respectively, and $\phi _1$ and $\phi _2$ are causal CNNs with 1-dilated and 2-dilated filters respectively.", "Projector: Connecting Encoder & Decoder.", "Although the output hidden layer of the encoder, i.e., $\mathcal {F}_{encoder}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}})$ and the input embedding layer of the decoder, i.e., $\mathbf {\mathit 
{\\widetilde{E}}}_L^{\\tilde{x}_0} $ have the same tensor shape, we empirically find that directly placing the decoder on top of the encoder by element-wise addition may not offer the best results.", "To maximize the representational brandwith between the encoder and decoder, we propose an additional projection network (or projector in short) in the decoder.", "Specifically, the projector is an inverted bottleneck residual architecture, which consists of the projection-up layer, the activation function layer, the projection-down layer and a skip connection between the projection-up and projection-down layers.", "The projector first projects the original $d$ -dimensional channels into a larger dimension with the $1\\times 1 \\times d \\times f$ ($f=2d$ in this paper) convolutional operations.", "Following by the non-linearity, it then projects the $f$ channels back to the original $d$ dimensions with the $1\\times 1 \\times f \\times d$ convolutional operations.", "The output of the projector is given as $\\begin{aligned}\\mathcal {F}_{PR}(\\mathbf {\\mathit {\\widetilde{E}}}_L^{\\tilde{x}},\\mathbf {\\mathit {\\widehat{E}}}_L^{{x}})=\\mathcal {F}_{agg}(\\mathbf {\\mathit {\\widetilde{E}}}_L^{\\tilde{x}},\\mathbf {\\mathit {\\widehat{E}}}_L^{{x}}) +\\phi _{down}(RELU(\\phi _{up}(\\mathcal {F}_{agg}(\\mathbf {\\mathit {\\widetilde{E}}}_L^{\\tilde{x}},\\mathbf {\\mathit {\\widehat{E}}}_L^{{x}}))))\\end{aligned}$ where $\\begin{aligned}\\mathcal {F}_{agg}(\\mathbf {\\mathit {\\widetilde{E}}}_L^{\\tilde{x}},\\mathbf {\\mathit {\\widehat{E}}}_L^{{x}})=\\mathcal {F}_{encoder}(\\mathbf {\\mathit {\\widetilde{E}}}_L^{\\tilde{x}})+\\mathbf {\\mathit {\\widehat{E}}}_L^{{x}}\\end{aligned}$ where $\\phi _{up}$ and $\\phi _{down}$ represent the projection-up and projectin-down operations.", "Model Training & Generating.", "As mentioned in Eq.", "(REF ), GRec only takes the masked positions into consideration rather than the complete sequence.", "Hence, we first perform the 
look-up table by retrieving the hidden vectors of the masked positions from $\mathcal {F}_{decoder}(\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}})$ , denoted by $\mathcal {F}_{decoder}^{x_\triangle } (\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}})$ .", "Then, we feed these vectors into a fully-connected neural network layer which projects them from the $d$ -dimensional latent space to the $n$ -dimensional softmax space.", "The calculated probabilities of the masked items $x_{\triangle }$ are given as $\begin{aligned}p(x_\triangle |\tilde{x}; \Theta )=softmax (\mathcal {F}_{decoder}^{x_\triangle } (\mathbf {\mathit {\widetilde{E}}}_L^{\tilde{x}},\mathbf {\mathit {\widehat{E}}}_L^{{x}})\mathbf {\mathit {W}}+\mathbf {\mathit {b}})\end{aligned}$ where $\mathbf {\mathit {W}}\in \mathbb {R}^{d\times n }$ and $\mathbf {\mathit {b}} \in \mathbb {R}^n$ are the weight matrix and the corresponding bias term.", "Finally, we are able to optimize Eq.", "(REF ) by gradient ascent (or gradient descent on the negative of Eq.", "(REF )).", "Since GRec only estimates a portion of the items in each batch, it needs more training steps to converge compared to NextItNet-style models (which estimate the entire sequence), but much fewer steps compared to the left-to-right data augmentation based models (e.g., Caser), which estimate only the last item.", "Once the model is well trained, we can use it for item generation.", "Unlike the training phase, during which the encoder has to mask a certain percentage of items, GRec is able to directly compute the softmax of the final position of the final layer at inference time without performing mask operations." 
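The training objective above, a softmax taken only over the masked positions, can be sketched as follows. The function name `grec_log_likelihood` is ours, and the plain matrix-vector product stands in for the paper's softmax layer with weights $\mathbf{W}$ and bias $\mathbf{b}$; this is an illustrative pure-Python sketch, not the actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def grec_log_likelihood(dec_hidden, masked_pos, targets, W, b):
    """Sum of log-probabilities of the masked items only (GRec's objective).

    dec_hidden: one d-dimensional hidden vector per sequence position.
    W: d x n weight matrix, b: n-dimensional bias of the softmax layer.
    Only the positions listed in `masked_pos` contribute to the loss.
    """
    total = 0.0
    for pos, item in zip(masked_pos, targets):
        h = dec_hidden[pos]
        logits = [sum(h[k] * W[k][j] for k in range(len(h))) + b[j]
                  for j in range(len(b))]
        total += math.log(softmax(logits)[item])
    return total
```

Maximizing this quantity by gradient ascent (equivalently, minimizing its negative by gradient descent) is exactly the partial-sequence objective described above; positions that were not masked simply never enter the sum, which is why GRec needs somewhat more steps per epoch to cover all items.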
], [ "Connection to Existing Models", "Our work is closely related to NextItNet and the well-known bidirectional language model BERT [3], [22].", "In this subsection, we show the connections of GRec to NextItNet-style and BERT-style models.", "For clarity, we omit the projection layers during the discussion.", "As shown in Figure REF (a), when $m=1$ , the encoder masks only one position, i.e., $x_5$ , from the input sequence, and correspondingly the decoder only predicts this masked token, conditioned on all other items in this sequence.", "Mathematically, we have $p(x_{\triangle }|\tilde{x})$ = $p(x_{5}|x \backslash x_5)$ .", "If we further mask the input of the decoder with `$\_\_$ ', GRec reduces to a standard encoder with one softmax output, and is very similar to the well-known bidirectional language model BERT, with the only exception that BERT applies the Transformer [29] architecture while GRec uses stacked 1D dilated CNNs.", "In this simple case, GRec reduces to a sequence-to-one model and loses its autoregressive property.", "In fact, the DA-based recommendation models, such as Caser, IGRU4Rec and NARM, can also be seen as sequence-to-one models which simply apply different neural network architectures.", "Figure: GRec variants by changing the gap-filling strategy.", "When $m=t-1$ , all items (except the first position) in the encoder will be masked, and the decoder will predict these masked items from $x_2$ to $x_t$ , as illustrated in Figure REF (b).", "In this case, the encoder of GRec becomes almost ineffective.", "If we remove the encoder, GRec becomes exactly NextItNet.", "Note that GRec with $m=t-1$ is very likely to perform worse than NextItNet.", "This is because in this case the encoder of GRec introduces much additional noise, which makes the decoder much harder to optimize.", "In summary, our proposed GRec can be seen as a pseq2pseq model that jointly trains the encoder and decoder for sequential recommendation tasks.", "In contrast 
to NextItNet-style models, GRec is able to model both the past and future contexts.", "In contrast to BERT-style models, GRec is more suitable for the generation task due to its autoregressive process.", "In contrast to the standard seq2seq encoder-decoder models, GRec does not have the data leakage problem when the encoder and decoder are fed the same input sequence.", "As the key contribution of this work is to improve the existing left-to-right style learning algorithms for SRS, we evaluate GRec on real-world datasets with short-, medium- and long-range sessions, and conduct extensive ablation studies to answer the following research questions: RQ1: Do the three proposed approaches perform better than the existing left-to-right style models?", "Which one performs best?", "RQ2: How does GRec perform with different gap-filling strategies?", "RQ3: What are the effects of other key modules of GRec?", "For example, does it benefit from the proposed projector module?", "RQ4: Is GRec a general framework, i.e., does it still work well when replacing the encoder and decoder with other types of neural networks?" 
], [ "We conduct experiments on two real-world datasets with three different session lengths.", "Table: Statistics of the datasets.", "“M” is short for million.", "Table: Accuracy comparison.", "MostPop returns item lists ranked by popularity.", "For each measure, the best result is indicated in bold.", "ML-latest (http://files.grouplens.org/datasets/movielens/).", "This dataset was created on September 26, 2018 by MovieLens.", "Since the original dataset contains cold items, we perform basic preprocessing by filtering out items that appear fewer than 20 times, similar to [26].", "We then generate the interaction sequence of each user in chronological order.", "We split each sequence into subsequences of $k$ movies.", "If the length of a subsequence is less than $k$ , we pad it with zeros at the beginning to reach length $k$ .", "Subsequences with length less than $l$ are simply removed in our experiments.", "In our experiments, we set $k$ =30 with $l$ =10 and $k$ =100 with $l$ =20, which results in two datasets, namely, ML30 and ML100.", "TW10 (https://weishi.qq.com).", "This is a private dataset created in October 2018 by the Weishi Team at Tencent Inc.", "TW10 is a short video dataset, in which the average playing time of each video is less than 30 seconds.", "Since cold users and items have been trimmed by the official provider, we do not need to consider the cold-start problem.", "Each user sequence contains 10 items at maximum.", "Table REF summarizes the statistics of the datasets evaluated in this work." 
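The ML30/ML100 preprocessing described above, splitting each user's chronological sequence into length-$k$ subsequences, zero-padding short ones at the front, and dropping those shorter than $l$, can be sketched as follows; the function name `split_sessions` and padding id 0 are our own illustrative choices.

```python
def split_sessions(interactions, k, l, pad=0):
    """Split one user's chronological item sequence into subsequences of
    length k, left-padding short ones with `pad` and dropping any that
    are shorter than l (the preprocessing described for ML30/ML100)."""
    out = []
    for start in range(0, len(interactions), k):
        sub = interactions[start:start + k]
        if len(sub) < l:
            continue  # too short to be a useful training session
        out.append([pad] * (k - len(sub)) + sub)
    return out
```

For the paper's settings this would be called as `split_sessions(seq, k=30, l=10)` for ML30 and `split_sessions(seq, k=100, l=20)` for ML100.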
], [ "We randomly split all user sequences into training (80%), validation (10%) and testing (10%) sets.", "We evaluate all models by three popular top-$N$ metrics, namely MRR@$N$ (Mean Reciprocal Rank), HR@$N$ (Hit Ratio) and NDCG@$N$ (Normalized Discounted Cumulative Gain) [36], [34], [35].", "$N$ is set to 5 and 20 for comparison.", "HR intuitively measures whether the ground truth item is in the top-$N$ list, while NDCG & MRR account for the hit position by rewarding higher scores to hits at top ranks.", "For each user session in the testing sets, we evaluate the accuracy of the last (i.e., next) item following [35], [11]." ], [ "We compare the proposed augmentation methods with three typical sequential recommendation baselines, namely, GRU4Rec [7], Caser [26] and NextItNet [35], particularly with NextItNet since GRec can be seen as an extension of NextItNet.", "We train Caser using the data augmentation method, and train GRU4Rec and NextItNet using the AR method.", "For fair comparisons, all methods use the cross-entropy loss function.", "Figure: Convergence behaviors of GRec and NextItNet.", "All hyper-parameters are kept the same for the two models.", "One training epoch on the x-axis is 10000 $\times$ 128, 10000 $\times$ 256, and 3000 $\times$ 256 sequences on TW10, ML30 and ML100 respectively, where 128 and 256 are the batch sizes.", "Note that we perform early stopping on TW10 after 20 epochs when NextItNet fully converges, and plot the same results in the following epochs, as shown in (a).", "Figure: Performance trend of GRec when tuning the percentage of masked items in the input sequence.", "All other hyper-parameters are kept unchanged." 
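The three metrics above can be computed per test session as follows; this is the standard single-ground-truth formulation (one next item per session), with function and key names of our own choosing.

```python
import math

def topn_metrics(ranked_items, target, N):
    """HR@N, MRR@N and NDCG@N for one test session with a single
    ground-truth (next) item, given the model's ranked item list."""
    try:
        rank = ranked_items[:N].index(target)  # 0-based position in the top-N
    except ValueError:
        return {"HR": 0.0, "MRR": 0.0, "NDCG": 0.0}  # miss: all metrics are zero
    return {"HR": 1.0,                        # hit anywhere in the top-N
            "MRR": 1.0 / (rank + 1),          # reciprocal of the 1-based rank
            "NDCG": 1.0 / math.log2(rank + 2)}  # DCG of a single relevant item
```

Averaging these dictionaries over all test sessions gives the reported table values; note that with one relevant item the ideal DCG is 1, so NDCG reduces to the discounted gain itself.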
], [ "For comparison purposes, we follow the common practice in [11], [14], [18], [30] by setting the embedding dimension $d$ to 64 for all models.", "The hidden dimensions are set to the same value as the embedding dimension $d$ .", "Though methods with other $d$ (e.g., $d=16, 256, 512$ ) yield different results, the performance trend remains similar.", "The learning rate is set to 0.001 in this paper.", "Other hyper-parameters of the baseline methods are empirically tuned according to performance on the validation sets.", "NextItNet+, tNextItNets and GRec use exactly the same hyper-parameters ($q=128$ for TW10, $q=256$ for ML30 and ML100) as NextItNet since they can be regarded as variants of NextItNet.", "The dilated convolution kernel size for both the encoder and decoder is set to 3.", "The dilated convolutional layers are stacked using dilation factors $\lbrace 1,2,2,4,1,2,2,4,1,2,2,4\rbrace $ (6 residual blocks with 12 CNN layers), $\lbrace 1,2,4,8,1,2,4,8\rbrace $ (4 blocks with 8 CNN layers), and $\lbrace 1,2,4,8,1,2,4,8,1,2,4,8\rbrace $ (6 blocks with 12 CNN layers) on TW10, ML30 and ML100 respectively.", "We perform sampled softmax [8] on TW10 and full softmax on ML30 and ML100 for NextItNet, NextItNet+, tNextItNets and GRec throughout this paper.", "All models use the Adam [12] optimizer.", "All results of GRec use $\gamma =50\%$ as the gap-filling percentage unless otherwise mentioned." 
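As a sanity check on the dilation stacks above, the receptive field of a stack of kernel-size-3 dilated 1D convolutions can be computed with the standard formula $RF = 1 + \sum_r (k-1)\cdot r$; this helper is illustrative and not part of the paper's code.

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field (in positions) of a stack of 1D dilated convolutions:
    each layer with dilation r extends the field by (kernel_size - 1) * r."""
    return 1 + sum((kernel_size - 1) * r for r in dilations)
```

With the paper's settings this gives 55 positions for the TW10 stack {1,2,2,4}x3, 61 for the ML30 stack {1,2,4,8}x2, and 91 for the ML100 stack {1,2,4,8}x3, showing how much of each session a top-layer neuron can see.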
], [ "Performance Comparison (RQ1)", "Table REF presents the results of all methods on three datasets, namely, the short-range session dataset TW10, medium-range ML30, and long-range ML100.", "We first observe that NextItNet achieves significantly better results than Caser and GRU4Rec on all three datasets.", "This is consistent with the observation in [35] since (1) in a fair comparison setting, the AR-based optimization method is usually more effective than the data augmentation based method for the sequence generation task; (2) the stacked dilated residual block architecture in NextItNet is capable of capturing more complex and longer sequential dependencies, while the max-pooling operations and shallow structure in Caser inevitably lose many important temporal signals and are far from optimal [25], particularly for modeling long-range sequences.", "In what follows, we focus on comparing our proposed methods with NextItNet as they use similar neural network modules and the same hyper-parameters.", "First, among the three proposed methods, NextItNet+ and tNextItNets yield consistently worse results than NextItNet, whereas GRec outperforms NextItNet by a large margin.", "The results of NextItNet+ and tNextItNets indicate that the trivial two-way augmentation methods are not enough to guarantee better recommendation accuracy compared with the unidirectional model, although they are trained with more data or more parameters.", "The results are predictable since, as we mentioned before, the parameters learned from the right contexts in NextItNet+ may be incompatible with those learned from the left contexts using the same convolutional filters.", "Even though tNextItNets applies two independent networks, the discrepancies between the training and inference phases are very harmful to the recommendation accuracy.", "Table: Impact of the projector module regarding MRR@5.", "NextItNetP represents NextItNet with projector.", "GRecN represents GRec without 
projector.", "Second, we observe that GRec with the pseq2pseq structure significantly exceeds NextItNet, as demonstrated in Table REF .", "The results indicate that an appropriate way of modeling additional (i.e., future) contextual features does improve the recommendation accuracy over the unidirectional model, which attends to only the past contexts.", "Moreover, we plot the convergence comparison of GRec and NextItNet in Figure REF .", "The results show that NextItNet converges a bit faster and better than GRec in the first several epochs, but shows poorer results than GRec after more training epochs.", "The slightly slower convergence behavior is because the loss function of GRec only considers a partial sequence, whereas NextItNet leverages the loss of the complete sequence during training (also refer to Eq.", "(REF )).", "Obviously, though, the performance gains of GRec far outweigh the marginally increased training cost." ], [ "Impact of the Gap-filling Strategies (RQ2)", "Table REF shows the performance change of GRec with different gap-filling percentages.", "We fix all other hyper-parameters while tuning $\gamma $ .", "As clearly shown, a too large or too small $\gamma $ typically yields suboptimal performance.", "The highest recommendation accuracy is obtained when $\gamma $ is between $30\%$ and $50\%$ .", "This is because masking too large a percentage of items in the user session is very likely to (1) discard important future contexts; (2) introduce noise due to the masked tokens; and (3) bring more discrepancies between the training and inference phases, as explained in Section REF .", "E.g., when $\gamma =1.0$ , no future contexts are leveraged, and the encoder of GRec becomes a neural network with only noise.", "In this case, GRec performs even worse than the standard NextItNet.", "On the other hand, GRec will lose its autoregressive advantage when $\gamma $ is smaller, and becomes a simple encoder network or a sequence-to-one model when 
only one item is masked.", "With this setting, the sequential and recurrent patterns will not be captured any more.", "Hence, there is a clear trade-off for GRec between taking advantage of future contexts and making use of the autoregressive property.", "Table: GRec vs. its encoder regarding MRR@5.", "$\gamma $ is set to 0.5 for both GRec and its encoder.", "Table: GRec vs. NextItNet with $d=512$ regarding MRR@5." ], [ "Ablation Studies (RQ3)", "In this subsection, we first investigate the effectiveness of the projector module.", "One may argue that the improved performance of GRec relative to NextItNet may come from the additional projector module.", "To clear up the uncertainty, we perform a fair ablation study by removing the projector from GRec as well as injecting it into NextItNet.", "We have the following observations according to Table REF : (1) the projector indeed helps GRec achieve better performance, by comparing GRec & GRecN; (2) NextItNet is inferior to GRec even with the projector, by comparing GRec & NextItNetP; (3) GRec still exceeds NextItNet even without the projector, by comparing GRecN & NextItNet.", "Second, we also report results of GRec with only the encoder network, since the encoder itself is also able to leverage two directional contexts.", "To do so, we remove the decoder of GRec and place the softmax layer on the encoder during training.", "At the generation phase, we just need to replace the last item by "$\_\_$ ", and retrieve the top-$N$ scored items for comparison.", "In this special case, GRec reduces to a BERT-like bidirectional encoder model.", "We report the MRR@5 in Table REF .", "As can be seen, GRec largely exceeds its encoder on all three datasets.", "The findings confirm our previous analysis, since the bidirectional encoder network is not autoregressive and fails to explicitly model the sequential patterns of previous interactions over time.", "In addition, some of the left contexts are missing because of the gap-filling 
mechanism, which also results in unsatisfactory performance.", "In addition, we also report NextItNet & GRec with a very large $d$ in Table REF (along with much longer training time and larger memory consumption).", "As shown, GRec obtains significant improvements relative to NextItNet, similar to the observations in Table REF .", "Table: Recurrent variants of GRec regarding MRR@5.", "Table: GRec variants with recurrent encoder or decoder." ], [ "GRec Variants (RQ4)", "Since GRec is a general ED framework, one can simply replace the original dilated convolutional neural network with other types of neural networks, such as RNNs.", "For the sake of completeness, we demonstrate two GRec variants in Figure REF .", "ReCd represents GRec with a recurrent encoder network (Bi-GRU) and a convolutional decoder network, while CeRd represents GRec with a convolutional encoder network and a recurrent decoder network (GRU).", "We report results in Table REF and make three observations: (1) GRec still exceeds NextItNet even when it utilizes Bi-GRU as the encoder, by comparing ReCd & NextItNet; (2) GRec outperforms GRU when it utilizes the typical GRU as the decoder, by comparing CeRd & GRU; (3) in general, the GRec framework using stacks of convolutional blocks for both its encoder and decoder performs better than its variants using an RNN for either the encoder or the decoder.", "The above observations further verify the generality and flexibility of GRec for processing future contexts.", "In this paper, we perform studies on how to incorporate future contexts into the typical left-to-right style learning algorithms in the task of SRS.", "The motivation is that the architectures of autoregressive-based sequential recommendation models fail to model the past and future contexts simultaneously.", "To maintain the autoregressive property as well as utilize two directional contexts, we present GRec, a novel pseq2pseq encoder-decoder neural network recommendation framework with a gap-filling based optimization objective.", 
"GRec is general and flexible: it jointly trains the encoder and decoder on the same user action sequence without causing the data leakage issue.", "Through ablations and controlled experiments, we demonstrate that GRec is more powerful than the traditional unidirectional models.", "For future work, we are interested in studying whether the right contexts or GRec can improve the recommendation diversity for SRS." ], [ "Acknowledgement", "This work is supported by the National Natural Science Foundation of China (61972372, U19A2079)." ] ]
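The gap-filling objective at the heart of GRec (RQ2: mask a fraction $\gamma$ of the items in a session, feed the masked sequence to the encoder, and let the decoder predict the masked positions) can be sketched as follows. This is a hypothetical illustration, not the authors' code; reserving token 0 as the mask symbol is our assumption.

```python
import random

def gap_fill(session, gamma=0.4, mask_token=0, seed=None):
    """Mask a fraction gamma of the items in a user session, as in the
    gap-filling objective: returns the masked sequence (encoder input)
    and a dict {position: original_item} (decoder targets).
    Hypothetical helper; token 0 is assumed reserved as the mask symbol."""
    rng = random.Random(seed)
    n_mask = max(1, round(gamma * len(session)))
    positions = rng.sample(range(len(session)), n_mask)
    masked, targets = list(session), {}
    for p in positions:
        targets[p] = masked[p]
        masked[p] = mask_token
    return masked, targets

# With gamma = 1.0 every item is masked (no future context survives);
# with a single masked item the task degenerates to sequence-to-one
# prediction, matching the trade-off described in the RQ2 discussion.
```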
1906.04473
[ [ "Evolutionary Trigger Set Generation for DNN Black-Box Watermarking" ], [ "Abstract The commercialization of deep learning creates a compelling need for intellectual property (IP) protection.", "Deep neural network (DNN) watermarking has been proposed as a promising tool to help model owners prove ownership and fight piracy.", "A popular approach to watermarking is to train a DNN to recognize images with certain \textit{trigger} patterns.", "In this paper, we propose a novel evolutionary algorithm-based method to generate and optimize trigger patterns.", "Our method brings a significant reduction in false positive rates, leading to compelling proof of ownership.", "At the same time, it maintains the robustness of the watermark against attacks.", "We compare our method with the prior art and demonstrate its effectiveness on popular models and datasets." ], [ "Introduction", "Since the success of large scale neural networks in the early 2010s, we have witnessed an explosive growth in the field.", "The popularity grew not only in academia but in the industry as well.", "Deep neural networks (DNNs) have become the de facto solution to many complex computer vision, speech recognition, and natural language processing problems.", "Popular as deep learning may be, building DNNs to solve real-world problems remains an arduous task.", "It requires a vast amount of high-quality labeled data and heavy use of computational resources and human expertise.", "It goes without saying that DNNs are invaluable technological assets that potentially have huge commercial impacts.", "Over the past few years, myriads of companies have joined the AI arms race.", "Just among AI startups, investment from the venture capital market reached a record high $9.3 billion in 2018 [1].", "Among the companies, many provide offerings ranging from commercial libraries for embedded systems and cloud machine learning APIs to private corporate clouds for AI, spanning across industries like transportation, 
manufacturing, healthcare, finance, and consumer electronics.", "Figure: Workflow of the trigger pattern-based black-box DNN watermarking.", "While DNNs are fueling commercial successes in the AI market, a void in IP protection for DNN models may hinder the progress.", "When a model owner sells a service to a customer, she should have a reliable way to prevent the customer from illegally distributing or reselling it.", "To achieve that goal, the owner not only needs to identify her own model when it is distributed, but also prove the ownership to a trusted arbitrator.", "Recently, several researchers proposed watermarking as a viable solution to the IP protection problem in deep learning [2][3][4] [5][6][7].", "Digital watermarking originally refers to the process of covertly embedding information in multimedia content.", "The concept has since been extended to cover software [8], circuits [9] as well as DNNs.", "White-box watermarking embeds the owner's information in the weights of a DNN.", "Black-box watermarking, on the other hand, embeds the watermark in the input-output behavior of the model.", "The set of inputs used to trigger that behavior is called the trigger set.", "For the popular task of image classification, a common approach is to assign a random label to trigger images and train the model to classify accordingly.", "The non-triviality of ownership of a watermarked model rests on the extremely small probability of any other model exhibiting the same behavior [4].", "Through detecting the watermark in a DNN model, the owner will be able to both identify and prove her ownership.", "Based on the characteristics of their trigger sets, existing black-box watermarking methods can be split into two categories.", "The first category of methods curates a finite set of special trigger images.", "The special images can be completely random [4], samples derived from unused hidden space [5], or adversarial examples [3].", "Another category of methods 
maintains trigger patterns and adds them to natural input images to create trigger sets.", "The trigger patterns are usually meaningful patterns that can serve as a proof of the owner's identity, such as logos [6] and color-coded keys [7].", "Figure REF describes the workflow of the method.", "Some sample trigger images are shown in Figure REF .", "The motivation for designing the trigger sets is different between those two categories of methods.", "The first is focused on the functionality of the model.", "They aim to create trigger sets such that watermarking is as orthogonal to the normal functionality of the model as possible.", "The second, on the other hand, is more focused on the watermark extraction procedure.", "Associating the owner's identity with the trigger set makes detection and proof of ownership much more straightforward.", "The evident drawback of the first category is the difficulty of establishing a connection between the trigger set and the owner.", "To solve that problem, researchers went as far as to use complex cryptographic tools [4].", "Further, the limited size of the trigger set weakens the proof of ownership.", "The drawback of the second category lies in an inevitable trade-off between the robustness of the watermark and the potential for false positive watermark detection.", "If a trigger pattern is too prominent, then it risks triggering false positives in other neural networks.", "On the other hand, if a pattern is too inconspicuous, it may be easily removed during fine-tune attacks.", "In this paper, we aim to bridge the gap between the different trigger set generation methods.", "We propose a differential evolution-based framework to determine how any given trigger pattern should be added to the image such that false positive detections are reduced while the robustness of the watermark is maintained.", "With our framework, trigger pattern-based watermarking brings the model's functionality into the equation, while still keeping ownership 
proofs simple.", "The contributions of our paper are as follows: (1) we propose an evolutionary algorithm-based framework to optimize trigger patterns in order to facilitate robust and low-false-positive black-box watermarking; (2) we survey and compare existing trigger set generation methods and present our analysis; (3) we implement our method with popular DNN models and datasets and evaluate its performance.", "The rest of the paper is organized as follows: Section describes the watermarking problem in more detail and defines the problem.", "Section presents our algorithm.", "Section evaluates the performance of the proposed algorithm.", "Figure: Examples of trigger inputs.", "(a) a random out-of-distribution image, (b) a regular image with a logo, (c) a regular image with a color-coded key not perceptible by the eye." ], [ "Preliminaries", "In this section, we will first introduce the background of watermarking.", "Then we will define our problem.", "Similar to most of the security-related literature, we use the Alice / Bob narrative to describe the scenario.", "Alice will be the model owner.", "Bob will be the customer who buys the model from Alice, and also the malicious attacker who tries to infringe on Alice's IP rights." 
], [ "DNN Watermarking", "We define DNN watermarking as the process of covertly embedding information in the DNN in order to verify and prove an owner's ownership.", "We focus on black-box watermarking for image classification, which achieves the aforementioned goal by embedding special input-output patterns in DNNs.", "To embed the watermark, Alice will train a DNN with both the regular dataset and the trigger set with specific output labels.", "To detect the watermark, Alice will use a subset of the trigger set as the input to the DNN and observe the output.", "There will be a positive detection if certain probability requirements are met.", "A successful watermarking method has to meet several criteria regarding its effectiveness, fidelity, false positive rate, and robustness.", "A detailed description of the criteria is presented in Table REF .", "First, the effectiveness criterion states that a watermark has to ensure successful and consistent detection.", "The fidelity criterion states that watermarking cannot have a significant negative impact on the regular functionality of the model.", "False positive rate and robustness will be discussed separately in the following subsections." 
], [ "Proof of Ownership", "A watermarking method's ability to prove ownership mainly relies on its low false positive rate.", "Suppose that Alice decides that a watermark detection is positive if there are at most $\Delta $ misclassifications among $N$ trigger images.", "Then the probability of the detection can be calculated as follows [7][5], assuming independence between the classifications of each trigger image.", "$\rho $ represents the accuracy of the model on the trigger set.", "$\Pr =\sum _{\delta =0}^\Delta \binom{N}{\delta }\rho ^{N-\delta }(1 - \rho )^\delta $ The ownership is established based on the fact that $\Pr $ is disproportionately small for a non-watermarked neural network.", "If a watermarking method incurs a high false positive rate (a high $\rho $ for non-watermarked models), then $\Pr $ is no longer small and the proof of ownership will be inconclusive at best." ], [ "Threat Model", "In this subsection, we introduce our notion of robustness by defining our threat model.", "We assume that Bob has white-box access to the model, but does not have access to the training set.", "Instead, Bob has access to some proprietary test data (i.e.", "a subset of the test set).", "We argue that proprietary data is one of the most important competitive advantages of Alice, and an IP pirate Bob by no means should have access to it.", "Otherwise, with the training data and the model, he might as well train a new model on his own, especially when he has the ability to carry out sophisticated attacks such as fine-tuning.", "On the other hand, it is a reasonable assumption that the attacker may have white-box access to the model architecture and parameters.", "In the case of a cloud ML service, Bob can be a malicious service platform.", "In the case of software ML libraries, Bob can be a hacker.", "In both cases, Bob would have full white-box access to the model.", "We also assume that Alice only has black-box access to the model.", 
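The binomial detection probability from the Proof of Ownership subsection can be evaluated directly; a minimal sketch (the parameter values in the comments are illustrative, not the paper's):

```python
from math import comb

def detection_probability(rho, n, delta_max):
    """Probability of a positive watermark detection: at most delta_max
    misclassifications among n trigger images, where each trigger image
    is classified to its assigned label independently with probability
    rho (the binomial sum from the text)."""
    return sum(comb(n, d) * rho ** (n - d) * (1 - rho) ** d
               for d in range(delta_max + 1))

# A watermarked model (rho close to 1) triggers a detection almost surely,
# while for a non-watermarked model (small rho) the probability is
# negligible, which is what makes detection a meaningful ownership proof.
```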
"In addition, Alice will have direct access to the input of the model.", "There are no preprocessing stages between Alice's input and the input of the model.", "With some test data and the model, Bob may fine-tune the model to produce a slightly different version of it.", "That is called the fine-tune attack.", "After the fine-tune attack, Alice's watermark should still exist.", "Some researchers also discussed overwrite attacks, where Bob tries to embed his own watermark using the same procedure on Alice's model.", "It is indeed a very reasonable attack scenario.", "In our experiments, we found that embedding a new watermark using Bob's limited amount of data would adversely affect the model's performance, rendering the model much less valuable.", "Thus we rule out the possibility of Bob carrying out overwrite attacks." ], [ "Problem Definition", "A DNN for classification is a function $f: \mathbb {R}^d \rightarrow [0, 1]^L $ .", "Given an input $X \in \mathbb {R}^d$ , it is desired that the function classifies it correctly to its label $y$ , $f(X) = y$ .", "A pattern $P \in \mathbb {R}^d$ has the same dimensions as $X$ , but is much more sparse.", "In its image form, $P$ 's non-zero entries can be considered as a set of $K$ pixels with explicitly designed values and coordinates $\lbrace v_k, c_k^x, c_k^y\rbrace ^K$ .", "$P$ is tightly coupled with the identity of the model owner.", "The absolute and relative coordinates of the pixels may or may not contribute to $P$ 's ability to carry information.", "The pattern can be embedded on any input $X$ from the intended data distribution to convert it into a trigger input through a function $g(X, P)$ For convenience, we sometimes write the function as $g(X_i, \lbrace v_k, c_k^x, c_k^y\rbrace ^K)$ .
A watermarked DNN will be trained to classify $g(X_i, P)$ to $y_i^{\prime } \ne y_i$ .", "The fact that a DNN model classifies trigger inputs correctly at a disproportionately high rate can serve as a unique proof of the owner's identity.", "We consider two alternative approaches to create trigger patterns.", "In the method proposed by Guo et al.", "(shown in Figure REF ), a color-coded key serves as the trigger pattern [7].", "They embed the pattern by offsetting the pixel values of the input, $g(X, P) = X + P$ .", "Since the information is mainly ingrained in the pixel values, we consider the pixel locations ${c_k^x, c_k^y}$ to be flexible.", "We use Key throughout the paper to denote this trigger pattern.", "The second approach we consider is proposed by Zhang et al., and shown in Figure REF [6].", "The information is obviously contained in the geometrical shape of the logo, so the pixels have to remain relatively fixed with respect to each other.", "Thus its location can be represented by its top left corner ${c^x, c^y}$ .", "The authors did not explicitly say how they embed the logo, but we interpret it as blending with the input, $g(X, P) =(1 - \alpha ) X + \alpha P$ .", "We denote the second type of trigger pattern as Logo.", "Our main goal is to find the $P =\lbrace c_k^x, c_k^y\rbrace ^K$ such that the probability of a non-watermarked DNN $f_0$ classifying a trigger input to its original label is maximized.", "The main motivation behind the goal is to minimize false positive watermark detection.", "Empirically, given a dataset $\mathcal {D}$ , the goal can be expressed as follows.", "$\operatornamewithlimits{argmax}_{\lbrace c_k^x, c_k^y\rbrace ^K} |\lbrace X | f_0(g(X, \lbrace v_k, c_k^x, c_k^y\rbrace ^K)) = y, X, y \in \mathcal {D} \rbrace |$ We have found that larger $v_k$ leads to more robust watermarking, although it also leads to higher false positives.", "In the Key-related experiments, $v_k$ is given.", "But we can also integrate $v_k$ into the 
optimization landscape as follows.", "The Logo-related experiments use this objective function.", "$\begin{split}\operatornamewithlimits{argmax}_{\lbrace v_k, c_k^x, c_k^y\rbrace ^K} & |\lbrace X | f_0(g(X, \lbrace v_k, c_k^x, c_k^y\rbrace ^K)) = y, \\& X, y \in \mathcal {D} \rbrace | +\delta \sum _k v_k\end{split}$ " ], [ "Method", "In this section, we first provide a high-level overview of why we chose the DE framework and how it works.", "Then we delve deeper to provide some algorithmic details that are crucial to the convergence of DE." ], [ "Differential Evolution", "Differential Evolution Input: dataset $\mathcal {D}$ , non-watermarked DNN model $f_0$ , population $N$ , number of generations $G$ Output: best candidate $P$ after evolution [1] Randomly initialize, $i$ th candidate $P_i=\lbrace v_{ik}, c_{ik}^x, c_{ik}^y\rbrace ^K$ generation $g=1, \ldots , G$ each candidate $P_i$ Randomly pick $0 \le j, k, l < N$ where $j \ne k \ne l$ $P_i^{\prime } \leftarrow $ evolve($P_j$ , $P_k$ , $P_l$ ) fitness($P_i^{\prime }, f_0, \mathcal {D}$ ) $>$ fitness($P_i, f_0, \mathcal {D}$ ) $P_i \leftarrow P_i^{\prime }$ $\operatornamewithlimits{argmax}_{P_i} \textsc {fitness}(P_i, f_0, \mathcal {D})$ To find the pattern $P$ , the first methods that came to mind were the gradient-based methods commonly used for finding adversarial samples [10] [11] [12].", "A key difference between our problem and theirs is that our pattern $P$ is universal.", "Therefore, finding the gradient of individual inputs hardly helps our situation.", "The family of evolutionary algorithms is among the most prominent non-gradient-based optimization methods.", "We initially relied on the generic evolutionary algorithm (EA), but we were unable to find a reasonable set of parameters to make the algorithm converge.", "That is when differential evolution (DE) presented itself as an alternative.", "DE is a metaheuristic search algorithm that optimizes a given objective by 
evolving a population of candidates in parallel [13].", "It follows the concept of generic EAs, where a population of candidates evolves and the fittest candidates survive in each iteration.", "However, DE is simpler, and it is known to facilitate faster convergence to the global optimum.", "Instead of using mutations and crossover between two parents, candidates in DE evolve over a triplet.", "A new candidate is created by adding a weighted difference between two candidates to the third.", "$P=P_0 + F \times (P_1 - P_2)$ Algorithm REF presents the high-level procedure of using DE to solve our problem.", "The main idea is to generate (1) new pixel coordinates and (2) new pixel values of $P$ using the differential variation operation described in Equation REF .", "The fitness function can be either of the two objective functions described in Section REF .", "The evolve function, on the other hand, is more complex.", "We describe more details of the function in the next subsection." ], [ "Optimizations for DE", "[ht] Evolve with Closest Triplet Input: 3 candidates, each containing $K$ pixel coordinates: $\lbrace c_{1k}^x, c_{1k}^y\rbrace ^K$ , $\lbrace c_{2k}^x, c_{2k}^y\rbrace ^K$ , $\lbrace c_{3k}^x, c_{3k}^y\rbrace ^K$ , differential weight $F$ Output: pixel coordinates of a new candidate, $\lbrace c_{k}^x, c_{k}^y\rbrace ^K$ [1] Pair$\lbrace c_{1k}^x, c_{1k}^y\rbrace ^K$ , $\lbrace c_{2k}^x, c_{2k}^y\rbrace ^K$ Initialize $heap$ each $\lbrace c_{1i}^x, c_{1i}^y\rbrace $ Calculate pairwise distances each $\lbrace c_{2j}^x, c_{2j}^y\rbrace $ Push distance($\lbrace c_{1i}^x, c_{1i}^y\rbrace $ , $\lbrace c_{2j}^x, c_{2j}^y\rbrace $ ) in $heap$ $heap$ is not empty Pair by distance distance, $\lbrace c_{1i}^x, c_{1i}^y\rbrace $ , $\lbrace c_{2j}^x, c_{2j}^y\rbrace $ $\leftarrow $ Pop from $heap$ Neither $\lbrace c_{1i}^x, c_{1i}^y\rbrace $ nor $\lbrace c_{2j}^x, c_{2j}^y\rbrace $ is paired Pair ($\lbrace c_{1i}^x, c_{1i}^y\rbrace $ with 
$\lbrace c_{2j}^x, c_{2j}^y\rbrace $ ) Pair($\lbrace c_{1k}^x, c_{1k}^y\rbrace ^K$ , $\lbrace c_{2k}^x, c_{2k}^y\rbrace ^K$ ) Pair($\lbrace c_{1k}^x, c_{1k}^y\rbrace ^K$ , $\lbrace c_{3k}^x, c_{3k}^y\rbrace ^K$ ) each $\lbrace c_{2i}^x, c_{2i}^y\rbrace ^K$ $\lbrace c_{3j}^x, c_{3j}^y\rbrace ^K$ paired with $\lbrace c_{1k}^x, c_{1k}^y\rbrace ^K$ $c_k^x \leftarrow c_{1k}^x + F \times ( c_{2i}^x - c_{3j}^x)$ $c_k^y \leftarrow c_{1k}^y + F \times ( c_{2i}^y - c_{3j}^y)$ $\lbrace c_{k}^x, c_{k}^y\rbrace ^K$ We use two different variants of evolve functions for the two existing trigger patterns, Key and Logo.", "For Logo, the evolve function is straightforward.", "We use the top left pixel of the logo as the anchor, and each candidate can be represented by a simple triplet $(v, c^x, c^y)$ .", "We use DE to evolve and select the location of the logo as well as its pixel values.", "We use a different approach for Key.", "Since pixel locations are all flexible, the candidate will be an array of $K$ tuples $\lbrace c_k^x, c_k^y\rbrace ^K$ .", "When we evolve using three candidates, each with $K$ pixels, which pixels should pair up and evolve becomes an important question.", "If pixels are randomly paired up, it is likely that the pixels will engage in a Brownian motion-like movement across different generations.", "Consequently, as we empirically show later, the evolution will not converge.", "If our goal is to evolve the pixels into optimal locations, then it makes sense to induce the evolution in such a way that pixels nearest to an optimal location will move toward that location.", "To that end, we propose an algorithm to pair the closest pixels together to evolve.", "The most efficient implementation is to store all pairwise distances in heaps and always pair available pixels with the smallest distances.", "Algorithm REF describes the implementation in more detail.", "The time complexity of the algorithm is $O(K^2 \log K)$ , where $K$ is the number of 
pixels.", "Figure: Best fitness of the population across the generations.", "Figure REF shows the fitness of the best candidate over the different generations.", "The fitness function is simply the accuracy on the subset.", "The proposed method, the closest triplet evolve function, converged much faster than the evolve function where pixels are randomly paired together.", "In fact, in the latter case, the fitness plateaued at around 0.89 and it is unclear whether it will converge at all." ], [ "Evaluation", "In this section, we report the performance evaluations of our method.", "We first describe implementation details of the watermarking procedure and the DE algorithm.", "Then we evaluate the effectiveness, fidelity, false positive rate and robustness of the watermark in the following subsections.", "Since neither Logo nor Key is an original idea from this paper, we omit many repetitive experiments for brevity.", "The key is to demonstrate the ability of our DE algorithm to reduce false positives while maintaining the robustness of the watermark.", "It is worth noting that all of our trigger sets are built from test data that has not been used during training." 
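The differential variation $P = P_0 + F \times (P_1 - P_2)$ combined with the closest-triplet pairing described above can be sketched as follows. This is an illustrative re-implementation (greedy nearest-pair matching under Manhattan distance is our assumption), not the authors' exact code:

```python
def evolve_closest(p1, p2, p3, F=0.5):
    """Evolve a new candidate from a triplet of parents, each a list of K
    (x, y) pixel coordinates.  Each pixel of p1 is greedily paired with
    its nearest unpaired pixel in p2 and in p3 (Manhattan distance, an
    assumption), then moved by the weighted difference F * (p2 - p3),
    following the DE rule P = P0 + F * (P1 - P2)."""
    def pair(base, other):
        dists = sorted((abs(a[0] - b[0]) + abs(a[1] - b[1]), i, j)
                       for i, a in enumerate(base)
                       for j, b in enumerate(other))
        match, used_i, used_j = {}, set(), set()
        for _, i, j in dists:                 # smallest distances first
            if i not in used_i and j not in used_j:
                match[i] = j
                used_i.add(i)
                used_j.add(j)
        return match
    m2, m3 = pair(p1, p2), pair(p1, p3)
    return [(x + F * (p2[m2[k]][0] - p3[m3[k]][0]),
             y + F * (p2[m2[k]][1] - p3[m3[k]][1]))
            for k, (x, y) in enumerate(p1)]
```

As a quick sanity check on the pairing: when the second and third parents coincide, the differential term vanishes and the candidate is returned unchanged.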
], [ "Differential Evolution", "We applied our DE-based approach to both Logo and Key trigger pattern generation.", "We use Equation REF as the fitness function for Logo, where both pixel locations and values are optimized.", "The weight $\beta $ is set such that a maximum $v$ constitutes 5% of the fitness score.", "With the Key pattern, we only optimize the pixel locations.", "In the fitness functions of both DE algorithms, we evaluate the accuracy of candidates on a random set of 640 training images.", "The model and dataset are described in the next subsection.", "We carried out experiments on both the MNIST dataset and the CIFAR-10 dataset, and the trigger patterns and trigger set images output by the DE algorithm are shown in Figure REF .", "To create Logo patterns on the CIFAR-10 dataset, we replicated the experiments by Zhang et al.", "[6].", "Surprisingly, the optimal location to put the logo isn't at one of the 4 corners as one would intuitively think.", "In DE, we set $v_k=255$ and searched for the blending coefficient $\alpha $ instead, which yielded an optimal value of 0.4019.", "Like Guo et al.", "[7], we encoded the message in the pixel values of the Key trigger pattern.", "The CIFAR-10 variant includes 64 pixels and encodes 128 bits of information.", "Every pixel in the RGB color space with $v_k=\pm 100$ encodes 2 bits, and the message can be decoded by reading the pixels from left to right, top to bottom.", "The MNIST variant includes 192 pixels and encodes 192-bit information, with $v_k=100, 200$ to encode 0 and 1 respectively.", "In both cases, pixels in the Key pattern gravitate towards the edge of the images.", "Clearly, the DE algorithm is rewarding pixel locations that do not overlap with the objects, which tend to occupy the center of the image.", "The new pixel locations form patterns in a way that has minimal impact on the classification of an object.", "Because of that, even when we added patterns with large pixel values, the resulting images still 
didn't trigger regular models.", "To test the capacity of our algorithm, we intentionally used patterns that have a lot more complexity in our experiments on the MNIST dataset.", "Images in the MNIST dataset have a lot more empty space to take advantage of, while pixels that blindly accumulate at the edge may cause misclassification.", "Through our algorithm, the probability $\Pr (f_0(g(X_i, P))=y_i)$ increased from as low as 83.30% (during random initialization) to 99.27%.", "The probability is measured over the entire trigger set, and the classification accuracy on the regular test set with the exact same images is 99.46%.", "It clearly shows the algorithm's ability to learn to reduce false positive detections of the watermark.", "To test our DE algorithm's ability to converge, we repeated the MNIST/CIFAR-10 \"key\" experiments on 8 different parameter sets (number of pixels in the pattern, etc.", "), 5 experiments per set.", "Each set converged to solutions that produce extremely similar fitness scores, with an average standard deviation of 0.0060." ], [ "Effectiveness and Fidelity", "It has been demonstrated in all previous works that DNNs can be trained to successfully recognize the trigger sets.", "In addition, Adi et al.", "also showed the significance of starting training from scratch in creating a robust watermark [4].", "We followed the same procedure.", "Table REF shows the classification accuracy of both non-watermarked and watermarked models on the regular test set and the trigger set.", "In light of the fidelity criterion, the classification accuracy of the watermarked model on the regular test set is slightly lower compared to the regular non-watermarked model.", "It is expected as watermarking makes the classification problem much harder.", "In light of the effectiveness criterion, the ability of the watermarked model to recognize the trigger set is as good as its ability to classify regular images." 
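The two embedding functions from the problem definition, the additive Key offset $g(X, P) = X + P$ and the blended Logo $g(X, P) = (1 - \alpha) X + \alpha P$, can be sketched for single-channel images as follows (a simplified illustration; clipping to [0, 255] and rounding are our assumptions):

```python
def embed_key(image, key_pixels):
    """Additive key embedding g(X, P) = X + P: offset selected pixel
    values.  key_pixels is a list of (v, x, y) triples; results are
    clipped to the valid range [0, 255]."""
    out = [row[:] for row in image]
    for v, x, y in key_pixels:
        out[y][x] = max(0, min(255, out[y][x] + v))
    return out

def embed_logo(image, logo, top_left, alpha=0.4):
    """Blended logo embedding g(X, P) = (1 - alpha) X + alpha P over the
    logo's bounding box anchored at top_left = (x, y)."""
    out = [row[:] for row in image]
    x0, y0 = top_left
    for dy, logo_row in enumerate(logo):
        for dx, p in enumerate(logo_row):
            blended = (1 - alpha) * out[y0 + dy][x0 + dx] + alpha * p
            out[y0 + dy][x0 + dx] = max(0, min(255, round(blended)))
    return out
```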
], [ "False Positives", "Figure REF shows the false positive rate of different trigger patterns.", "The false positive rate here is measured by the probability that a non-watermarked model classifies a trigger image into its re-assigned class $\Pr (f_0(g(X_i, P))=y_i^{\prime })$ .", "We used four different non-watermarked DNNs trained on the regular CIFAR-10 dataset: ResNet-18, ResNet-50, DenseNet-121, VGG-16.", "The fitness function in DE used to obtain the trigger pattern only involves the ResNet-18 model.", "The results show that what our DE algorithm learns from one model generalizes well to other models as well.", "We tested the generalizability further using the same pattern on 5 newly trained VGG-13s, and obtained a 95% confidence interval of $1.15\% \pm 0.06\%$ .", "We used two baselines for comparison.", "To compare with the DE-based Key pattern, we used a Key pattern with random $\lbrace c_k^x, c_k^y\rbrace ^K$ but the same $v_k$ .", "We see drastic improvements with up to a 10$\times $ reduction in the false positive rate.", "Note that the trigger pattern proposed by Guo et al.", "is also based on random locations [7].", "But they explicitly selected $v_k$ such that the pattern is imperceptible, resulting in a lower false positive rate.", "But as we see later, they achieved that at the cost of robustness.", "To compare with the DE-based Logo, we use the Logo trigger pattern used by Zhang et al.", "[6].", "We achieve about a 2$\times $ improvement in the false positive rate.", "Putting it in the perspective of Equation REF , even 2$\times $ translates to an over $25000\times $ lower probability ($\Delta =5, N=20$ )." 
], [ "Robustness", "We measure the robustness of the watermarking methods through their resistance against fine-tune attacks.", "Table REF reports the trigger set classification accuracy loss after we fine-tuned a watermarked model.", "Unlike some of the earlier approaches that based their attack on the training set, we used 1000 test images and applied various data augmentation techniques.", "The model watermarked using the original key method suffered a significant drop in accuracy.", "The accuracy drop was almost entirely eliminated when we switched to our key method.", "Both of the logo methods were resilient against the fine-tune attack.", "Images superimposed with our pattern are sufficiently different from the normal input distribution.", "Because of that, a watermarked model's ability to recognize those patterns is largely orthogonal to its ability to classify objects and is, therefore, harder to remove during a fine-tune attack." ], [ "Discussions", "Gradient-free methods do exist in the world of adversarial learning, most notably in the subproblem of black-box attacks, where the attackers don't have access to the gradient information [14][15][16][17].", "Those methods again focus on individual samples and are essentially solving a different problem than ours.", "It is worth noting that the work from Moosavi-Dezfooli et al.", "aims at creating universal adversarial perturbations [18].", "Their proposal to reduce the search space to a subset of the input provided invaluable insights.", "Due to the limited scope of this paper, the parameter $v_k$ isn't systematically studied.", "It is more of a heuristic and manually selected in many situations.", "It would be valuable to study how it systematically affects the robustness of the watermarking methods.", "Table: Classification accuracy of watermarked models on corresponding CIFAR-10 trigger sets after fine-tune attacks.", "The less the accuracy loss, the more robust the method is." 
], [ "Conclusion", "Black-box DNN watermarking has emerged as a viable solution to IP protection in the context of MLaaS.", "Adding owner identity-based trigger patterns to natural input images is a popular method to create effective trigger sets that establish strong ownership proofs.", "In this paper, we propose a novel differential evolution-based framework to optimize the generation of such trigger patterns.", "Compared to the prior art, our method demonstrates significant improvement in false positive rate and robustness in experiments on popular models and datasets." ] ]
1906.04411
[ [ "On the explicit constructions of certain unitary $t$-designs" ], [ "Abstract Unitary $t$-designs are `good' finite subsets of the unitary group $U(d)$ that approximate the whole unitary group $U(d)$ well.", "Unitary $t$-designs have been applied in randomized benchmarking, tomography, quantum cryptography and many other areas of quantum information science.", "If a unitary $t$-design itself is a group, then it is called a unitary $t$-group.", "Although it is known that unitary $t$-designs in $U(d)$ exist for any $t$ and $d$, unitary $t$-groups do not exist for $t\\geq 4$ if $d\\geq 3$, as shown by Guralnick-Tiep (2005) and Bannai-Navarro-Rizo-Tiep (BNRT, 2018).", "Explicit constructions of exact unitary $t$-designs in $U(d)$ are not easy in general.", "In particular, the explicit construction of unitary $4$-designs in $U(4)$ has been an open problem in quantum information theory.", "We prove that some exact unitary $(t+1)$-designs in the unitary group $U(d)$ are constructed from unitary $t$-groups in $U(d)$ that satisfy certain specific conditions.", "Based on this result, we specifically construct exact unitary $3$-designs in $U(3)$ from the unitary $2$-group $SL(3,2)$ in $U(3),$ and also unitary $4$-designs in $U(4)$ from the unitary $3$-group $Sp(4,3)$ in $U(4)$ numerically.", "We also discuss some related problems." 
], [ "Introduction", "The basic idea of “design theory” is to approximate a given space $M$ by a good finite subset $X$ of $M$ .", "The spherical $t$ -designs are those finite subsets $X$ of the unit sphere $M=S^{n-1}$ such that for any polynomial $f$ of degree up to $t$ , the spherical integral of $f$ on the sphere is given by the average value of $f$ at the finitely many points of $X$ in $S^{n-1}$ [12].", "So is the concept of combinatorial $t$ -designs ($t$ -$(v,k,\\lambda )$ designs) for $M={\\binom{V}{k}}$ , the set of all the $k$ -element subsets of a set $V$ of cardinality $v$ .", "The space $M$ has the structure of an association scheme called the Johnson association scheme $J(v,k)$ .", "This concept of $t$ -design was generalized further to the concept of $t$ -designs in $Q$ -polynomial association schemes by Delsarte [11].", "There are many different kinds of design theories and there is a vast literature on them.", "We would like to refer the readers, in particular, to the following two papers [3], [4] for the review of the developments of design theory including many generalizations of the concept of $t$ -designs, from the viewpoint of algebraic combinatorics.", "The microscopic world is described by quantum physics, where the time-evolution of a closed system is expressed by a unitary transformation.", "Accordingly, the study of unitary transformations, or unitary matrices if the system is finite-dimensional, is essential to understand the quantum world.", "Needless to say, unitary transformations play central roles in quantum computing and quantum information theory.", "So, it is natural for us to approximate the whole unitary group $\\operatorname{U}(d)$ by a finite subset $X$ of $M=\\operatorname{U}(d)$ .", "This led physicists and mathematicians to formulate the concept of unitary $t$ -designs [15], [29].", "A systematic study of unitary $t$ -designs from a mathematical viewpoint is given by Roy-Scott [28] and we use their paper as a 
basic reference on unitary $t$ -designs.", "There are many further developments on the theory of unitary $t$ -designs, including the so-called approximate unitary $t$ -designs.", "Those unitary $t$ -designs which satisfy eqn:biaxal in def:magnum in Section II are called exact unitary $t$ -designs.", "Approximate unitary $t$ -designs have also been considered and studied mainly in physics.", "A unitary $t$ -design $X$ of $\\operatorname{U}(d)$ is called a unitary $t$ -group if $X$ is a subgroup of $\\operatorname{U}(d)$ as well.", "In physics, cf. [39], [36], [41], some unitary 3-groups have been known, e.g. Clifford groups and some sporadic examples, but the difficulty of finding unitary 4-groups (except for the case of $d=2$ , cf. [5]) has been noticed.", "Actually, the non-existence of unitary 4-groups was known for $d\\ge 5$ in a disguised form in finite group theory, in a very deep paper of Guralnick-Tiep [16] that uses the classification of finite simple groups.", "This was recently pointed out by BNRT [5] and the complete classification of unitary $t$ -groups on $\\operatorname{U}(d)$ for all $t\\ge 2$ and $d\\ge 2$ was obtained therein.", "Although unitary 4-groups on $\\operatorname{U}(d)$ do not exist for $d\\ge 3$ at all, unitary $t$ -designs exist for all $t$ and $d$ as was proved in Seymour-Zaslavsky [32].", "However, the explicit constructions of unitary $t$ -designs are challenging in general, similarly to the case of the explicit constructions of spherical $t$ -designs.", "In particular, while the existence of unitary 4-designs in $\\operatorname{U}(4)$ has been known, their explicit constructions had not been obtained so far to our knowledge [25].", "Explicit constructions of unitary $t$ -designs are essential in many areas of quantum information processing such as efficient randomized benchmarking of quantum channels [20], [23], [24], [35], [21], [37], quantum process tomography [29], [22], quantum state tomography [19], [30], [40], [38], 
decoupling [34], [27], quantum cryptography [1] and data hiding [13], among others.", "Their efficient implementation in terms of the number of local gates has been actively studied [8], [17], [6], [26].", "The main purpose of this paper is to give explicit constructions of unitary 3-designs in $\\operatorname{U}(3)$ and unitary 4-designs in $\\operatorname{U}(4)$ numerically.", "In order to do that, we first obtain the following purely mathematical theorem that explains how we can construct unitary $(t+1)$ -designs from a certain unitary $t$ -group $G$ explicitly.", "Namely, we obtain the following theorem: Theorem 1 Let ${G}$ be a finite subgroup of $\\operatorname{U}(d)$ , and let $\\chi : \\operatorname{U}(d) \\rightarrow \\operatorname{U}(d)$ be the natural (fundamental) unitary representation of $\\operatorname{U}(d)$ .", "We abuse the notation by considering $\\chi : G \\hookrightarrow \\operatorname{U}(d)$ as the natural embedding of $G$ .", "Suppose that ${G}$ is a unitary $t$ -group in ${\\operatorname{U}(d)}$ .", "Let $\\chi ^{t+1}$ be the $(t+1)$ -fold tensor product of the fundamental representation $\\chi $ .", "Suppose ${(\\chi ^{t+1} , \\chi ^{t+1})_G=(\\chi ^{t+1}, \\chi ^{t+1} )_{\\operatorname{U}(d)}+1.}$ Then there exists a non-zero $G \\times G$ -invariant homogeneous polynomial $f \\in \\operatorname{Hom}(\\operatorname{U}(d), t+1, t+1)$ , unique up to scalar multiplication, such that $\\int _{\\operatorname{U}(d)} f(U) \\,\\mathrm {d}U = 0$ .", "Let ${U_0\\in \\operatorname{U}(d)}$ be a zero of ${f.}$ Then the orbit ${X} = GU_0G$ of ${U_0}$ under the action ${G\\times G}$ on ${\\operatorname{U}(d)}$ becomes a unitary $(t+1)$ -design in ${\\operatorname{U}(d)}$ .", "Here we defined the inner product of two representations $\\rho _1$ and $\\rho _2$ of the group $\\operatorname{U}(d)$ by $(\\rho _1 , \\rho _2)_{\\operatorname{U}(d)} = \\int _{\\operatorname{U}(d)} \\operatorname{Tr}\\rho _1(U) \\overline{\\operatorname{Tr}\\rho _2(U)}\\,\\mathrm 
{d}U$ and $(\\rho _1 , \\rho _2)_{G} = \\frac{1}{|G|} \\sum _{x \\in G} \\operatorname{Tr}\\rho _1(x) \\overline{\\operatorname{Tr}\\rho _2(x)}$ for a finite subgroup $G \\subset \\operatorname{U}(d)$ .", "The Haar measure is normalized as $\\int _{\\operatorname{U}(d)} \\,\\mathrm {d}U = 1$ .", "This theorem guarantees that if there is such $G$ satisfying the conditions of thm:Khazar, then there is a non-trivial homogeneous polynomial $f$ in $\\operatorname{Hom}(\\operatorname{U}(d), t+1,t+1)$ that is invariant under the action of $G\\times G.$ Take any zero $U_0$ of $f$ on $\\operatorname{U}(d)$ , then the orbit of $U_0$ under the action of $G\\times G$ , say $X=GU_0G$ , gives a unitary $(t+1)$ -design on $\\operatorname{U}(d).$ In sec:Ino, we apply this Theorem in particular for the two cases $d=3, G=\\operatorname{SL}(3,2), t=2$ $d=4, G=\\operatorname{Sp}(4,3), t=3$ to construct the explicit unitary $(t+1)$ -designs in $\\operatorname{U}(d)$ numerically.", "This technique also works for other $G$ satisfying the conditions of thm:Khazar, but the large order of the group so far prevented us from getting the explicit examples for other cases.", "They should be manageable if we have more computational resources.", "thm:Khazar claims $X$ is a unitary $(t+1)$ -design, although it does not rule out the possibility that $X$ is also a unitary $(t+2)$ -design.", "We have the following theorem to bound the strength of the design.", "Theorem 2 Let $G$ be a finite subgroup of $\\operatorname{U}(d)$ .", "Let $X_1 = G U_1 G$ and $X_2 = G U_2 G$ be two orbits of the natural action $G \\times G$ on $\\operatorname{U}(d)$ .", "Suppose $X_i$ is a unitary $t_i$ -design but not a unitary $(t_i+1)$ -design where $i = 1,2$ .", "Then $t_1 \\le 2 t_2 +1$ and $t_2 \\le 2 t_1 + 1$ .", "This theorem is motivated by [2] which proves a similar result for spherical designs.", "We will conclude our paper by giving some discussions." 
], [ "Unitary $t$ -designs and unitary $t$ -groups", "Let us recall the definition of unitary $t$ -designs in $\\operatorname{U}(d).$ Definition 3 A finite subset ${X}$ of the unitary group ${\\operatorname{U}(d)}$ is called a unitary ${t}$ -design, if $ {\\int _{\\operatorname{U}(d)}f(U) \\,\\mathrm {d}U=\\frac{1}{|X|}\\sum _{U\\in X}f(U)}$ for any ${f(U)\\in \\operatorname{Hom}(\\operatorname{U}(d), t,t)}$ .", "Here ${\\operatorname{Hom}(\\operatorname{U}(d),r,s)}$ is the space of polynomials that are homogeneous of degree ${r}$ in the matrix entries of ${U}$ , and homogeneous of degree ${s}$ in the matrix entries of the Hermitian conjugate ${U^{\\dagger }}$ of $U$ .", "Those satisfying the condition (REF ) above are called exact unitary $t$ -designs in some literature.", "In this paper, we consider only these unitary $t$ -designs.", "Those for which the condition (REF ) is replaced by the requirement that the difference between the two sides be very small are called approximate unitary $t$ -designs.", "Of course exact unitary $t$ -designs are approximate unitary $t$ -designs, and both types of unitary $t$ -designs are studied extensively in physics [17], [6], [26], [10], [18].", "It is known that there are many equivalent characterizations of unitary $t$ -designs in $\\operatorname{U}(d)$ .", "(cf. Roy-Scott [28], Zhu-Kueng-Grassl-Gross [41].)", "Here, we will use some of the equivalent conditions later in our paper.", "One equivalent definition is as follows [28]: A finite subset $X$ in $\\operatorname{U}(d)$ is a unitary $t$ -design, if and only if for any $f\\in \\operatorname{Hom}(\\operatorname{U}(d), t,t)$ , $(1,f)_{\\operatorname{U}(d)}=(1,f)_X,$ where $(1,f)_{\\operatorname{U}(d)}=\\int _{\\operatorname{U}(d)}\\overline{f}\\,\\mathrm {d}U$ and $(1,f)_{X}=\\frac{1}{|X|}\\sum _{x\\in X}\\overline{f(x)}.$ There are several different characterizations of unitary $t$ -groups [28], [41].", "Let ${\\chi }$ be the natural 
(fundamental) representation of ${G \\hookrightarrow \\operatorname{U}(d)}$ as well as the natural representation of $\\operatorname{U}(d)$ , $\\chi : \\operatorname{U}(d) \\hookrightarrow \\operatorname{U}(d)$ .", "The notation $\\chi ^t$ is shorthand for the $t$ -fold tensor product $\\chi \\otimes \\cdots \\otimes \\chi $ .", "A finite subgroup ${G}$ is a unitary ${t}$ -group in ${\\operatorname{U}(d)}$ , if and only if ${\\frac{1}{|G|}\\sum _{g\\in G}|\\operatorname{Tr}\\chi (g)|^{2t}=\\int _{U\\in \\operatorname{U}(d)}|\\operatorname{Tr}\\chi (U)|^{2t} \\,\\mathrm {d}U}.$", "A finite subgroup $G$ is a unitary $t$ -group if and only if the decomposition of $\\chi ^{\\otimes t}$ into the irreducible representations of ${\\operatorname{U}(d)}$ is the same as its decomposition into the irreducible representations of ${G}$ in the sense of both dimensions and multiplicities.", "A finite subgroup ${G\\subset \\operatorname{U}(d)}$ is a unitary ${t}$ -group, if and only if ${M_{2t}(G, V)=M_{2t}(\\operatorname{U}(d),V)}$ , where the LHS ${M_{2t}(G, V) :=(\\chi ^t, \\chi ^t)_G=\\frac{1}{|G|}\\sum _{g\\in G}{\\chi ^t}(g)\\overline{{\\chi ^t}(g)} = \\frac{1}{|G|}\\sum _{g\\in G}|\\operatorname{Tr}\\chi (g)|^{2t}}$ and the RHS ${M_{2t}(\\operatorname{U}(d),V)}$ is the corresponding inner product ${M_{2t}(\\operatorname{U}(d),V) := (\\chi ^t, \\chi ^t)_{\\operatorname{U}(d)}=\\int _{U\\in \\operatorname{U}(d)} |\\operatorname{Tr}\\chi (U)|^{2t} \\,\\mathrm {d}U}$ .", "Let us recall that unitary $t$ -groups in $\\operatorname{U}(d)$ are completely classified for all $t\\ge 2$ and $d\\ge 2.$ (Cf. Guralnick-Tiep [16] and BNRT [5].)", "The main purpose of this paper is to prove thm:Khazar given in Section II and construct new unitary designs accordingly." 
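As a concrete illustration of the moment criterion $M_{2t}(G,V)=M_{2t}(\operatorname{U}(d),V)$: the single-qubit Pauli group is a well-known unitary 1-group in $\operatorname{U}(2)$ but not a unitary 2-group, and the criterion detects this from traces alone, since $\int _{\operatorname{U}(2)}|\operatorname{Tr}U|^{2}\,\mathrm {d}U=1$ while $\int _{\operatorname{U}(2)}|\operatorname{Tr}U|^{4}\,\mathrm {d}U=2$. A minimal check (our own example, not taken from the paper):

```python
import numpy as np

# Single-qubit Pauli matrices; the full Pauli group adds phases {+-1, +-i},
# which leave |Tr g| unchanged, so four representatives suffice here.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

m2 = np.mean([abs(np.trace(P)) ** 2 for P in paulis])  # 1 = 1!  -> unitary 1-group
m4 = np.mean([abs(np.trace(P)) ** 4 for P in paulis])  # 4 != 2! -> not a 2-group
```

Only the identity has non-zero trace, so $m_2 = 4/4 = 1$ matches the Haar value, while $m_4 = 16/4 = 4$ exceeds $2! = 2$, certifying that the Pauli group fails to be a unitary 2-group.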
], [ "Proofs", "It is known that the irreducible representations of $\\operatorname{U}(d)$ appearing in $\\chi ^{t+1} \\otimes \\overline{\\chi }^{t+1}$ are parametrized by the non-increasing integer sequence $\\mu =(\\mu _1, \\mu _2, \\ldots , \\mu _d).$ The irreducible representation of $\\operatorname{U}(d)$ corresponding to the sequence $\\mu $ is denoted by $\\rho _{\\mu }.$ (cf.", "[28]).", "Let $\\Phi $ be the set of $\\mu $ with $\\rho _{\\mu }$ appearing in the representation $\\chi ^{t+1} \\otimes \\overline{\\chi }^{t+1}$ .", "Such $\\mu $ is characterized by $\\mu _+ = - \\mu _- \\le t+1$ .", "Here, $\\mu _+$ is the sum of all positive $\\mu _i$ 's and $\\mu _-$ is the sum of all negative $\\mu _i$ 's.", "Let $G$ be a subgroup of $\\operatorname{U}(d)$ , and let $\\chi $ be the natural embedding of $G$ into $\\operatorname{U}(d)$ .", "Suppose that $d\\ge t+1$ .", "Suppose that $(\\chi ^{t+1} \\overline{\\chi }^{t+1}, 1)_G = (t+1)!+1$ .", "First we prove the following proposition: Proposition 4 With the notation given above, there is a unique non-trivial irreducible representation $\\rho _{\\widetilde{\\mu }}$ such that $(\\rho _{\\widetilde{\\mu }}, 1)_G=1$ , where $\\widetilde{\\mu } \\in \\Phi $ .", "We write $\\displaystyle \\chi ^{t+1} =\\bigoplus _\\lambda H_{\\lambda }=\\bigoplus _\\lambda W_{\\lambda }\\otimes S_{\\lambda }$ as given in [41].", "Here $W_{\\lambda }$ is the Weyl module carrying the irreducible representation of $\\operatorname{U}(d)$ associated with the partition $\\lambda $ while $S_{\\lambda }$ is the Specht module on which the symmetric group $S_{t+1}$ acts irreducibly.", "By our assumption $(\\chi ^{t+1} \\overline{\\chi }^{t+1},1)_G=\\sum _{\\lambda , \\tau } d_{\\lambda } d_{\\tau }(W_{\\lambda }\\overline{W_{\\tau }}, 1)_G =(t+1)!+1$ .", "Here $\\lambda $ and $\\tau $ are non-increasing partitions of $t+1$ into no more than $d$ parts and $d_{\\lambda }$ is the degree of the Specht module $S_\\lambda $ .", "Note that $(\\chi ^{t+1} 
\\overline{\\chi }^{t+1},1)_{\\operatorname{U}(d)}=\\sum _{\\lambda , \\tau } d_{\\lambda } d_{\\tau } (W_{\\lambda }\\overline{W_{\\tau }}, 1)_{\\operatorname{U}(d)} = (t+1)!", "=\\sum {d_{\\lambda }}^2$ .", "On the other hand, the irreducible representations $\\rho _{\\mu }$ of $\\operatorname{U}(d)$ appearing in $\\chi ^{t+1} \\overline{\\chi }^{t+1}$ are characterized in [28].", "We do not know the exact multiplicities in which each irreducible representation $\\rho _{\\mu }$ (of $\\operatorname{U}(d)$ ) appearing in $\\chi ^{t+1} \\overline{\\chi }^{t+1}$ .", "However, we know that the multiplicity of $\\rho _{(0,...,0)}$ is $(t+1)!$ .", "Since $(\\rho _{\\mu },1)_{\\operatorname{U}(d)}= 0$ for $\\mu \\ne {(0,...,0)}$ , we conclude that there is exactly one $\\widetilde{\\mu } \\ne {(0,...,0)}$ such that $(\\rho _{\\tilde{\\mu }},1)_G=1$ .", "Next we introduce the concept of unitary $\\rho $ -design for later proof.", "Definition 5 Let $(\\rho ,V)$ be a unitary representation of $\\operatorname{U}(d)$ .", "Let $X$ be a finite subset of $\\operatorname{U}(d)$ .", "Then $X$ is called a unitary $\\rho $ -design if $ \\frac{1}{{X}} \\sum _{U \\in X} \\rho (U) = \\int _{\\operatorname{U}(d)} \\rho (U) \\,\\mathrm {d}U.$ Obviously we have another characterization of unitary $t$ -design.", "Theorem 6 $X$ is a unitary $t$ -design if and only if $X$ is a unitary $\\rho $ -design for every irreducible representation $\\rho $ appearing in $U^{\\otimes t} \\otimes (U^\\dagger )^{\\otimes t}$ .", "We mimic the proof in [29] to get an equivalent definition of unitary $\\rho $ -design, whose condition is easy to confirm.", "Theorem 7 For any finite $X \\subset \\operatorname{U}(d)$ , $ \\frac{1}{{X}^2} \\sum _{U,V \\in X} \\operatorname{Tr}\\rho (U^\\dagger V) \\ge \\int _{\\operatorname{U}(d)} \\operatorname{Tr}\\rho (U) \\,\\mathrm {d}U.$ with equality if and only if $X$ is a unitary $\\rho $ -design.", "Corollary 8 If $X$ is a unitary $\\rho $ -design, then $UX$ is also a 
unitary $\\rho $ -design for every $U \\in \\operatorname{U}(d)$ .", "[Proof of thm:consortion] Let $S := \\frac{1}{{X}} \\sum _{U \\in X} \\rho (U) - \\int _{\\operatorname{U}(d)} \\rho (U) \\,\\mathrm {d}U$ .", "Then $0 \\le \\operatorname{Tr}(S^\\dagger S) &= \\frac{1}{{X}^2} \\sum _{U,V \\in X} \\operatorname{Tr}\\rho (U^\\dagger V) - 2 \\times \\frac{1}{{X}} \\sum _{U \\in X} \\int _{\\operatorname{U}(d)} \\operatorname{Tr}\\rho (U^\\dagger V) \\,\\mathrm {d}V \\\\&+ \\int _{\\operatorname{U}(d)} \\int _{\\operatorname{U}(d)} \\operatorname{Tr}\\rho (U^\\dagger V) \\,\\mathrm {d}U \\,\\mathrm {d}V \\\\&= \\frac{1}{{X}^2} \\sum _{U,V \\in X} \\operatorname{Tr}\\rho (U^\\dagger V) - \\int _{\\operatorname{U}(d)} \\operatorname{Tr}\\rho (U) \\,\\mathrm {d}U.", "$ Now we are able to prove thm:Khazar.", "[Proof of thm:Khazar] First we recall the fact that all the matrix coefficient functions of $\\rho _\\mu $ where $\\mu \\in \\Phi $ form a basis of $\\operatorname{Hom}(\\operatorname{U}(d), t+1,t+1)$ [28].", "By prop:Kalmarian, there is a unique non-trivial irreducible representation $\\rho _{\\widetilde{\\mu }}$ such that $(\\rho _{\\widetilde{\\mu }},1)_G = 1$ .", "By symmetry $\\widetilde{\\mu } = - \\widetilde{\\mu }$ .", "Here $- \\widetilde{\\mu }$ is obtained by negating entries of $\\widetilde{\\mu }$ and put them in reverse order.", "For every non-trivial $\\mu \\ne \\widetilde{\\mu }$ , we have shown that $(\\rho _\\mu ,1)_{G} = 0$ .", "Therefore $\\frac{1}{{G}^2} \\sum _{U,V \\in G} \\operatorname{Tr}\\rho _\\mu (U^\\dagger V) = 0$ .", "By thm:consortion, $G$ is a unitary $\\rho _\\mu $ -design.", "It implies that $\\frac{1}{{G}} \\sum _{U \\in G} \\rho _\\mu (U) = 0$ .", "Hence every matrix coefficient function of $\\rho _\\mu $ becomes 0 after $G \\times G$ averaging.", "For $\\mu = \\widetilde{\\mu }$ , let us consider its matrix coefficient functions.", "Let $A = \\frac{1}{{G}} \\sum _{U \\in G} \\rho _{\\widetilde{\\mu }}(U)$ and $M = M(U) = 
\\frac{1}{{G}^2} \\sum _{g_1,g_2 \\in G} \\rho _{\\widetilde{\\mu }}(g_1^\\dagger U g_2)$ .", "Note that $\\rho _{\\widetilde{\\mu }}(g) M = M \\rho _{\\widetilde{\\mu }}(g) =M$ for every $g \\in G$ and hence $AM=MA=M$ .", "Suppose $\\rho _{\\widetilde{\\mu }}$ decomposes into irreducible representations $(\\rho _\\eta , V_\\eta )$ of $G$ by $\\rho _{\\widetilde{\\mu }} = \\bigoplus _{\\eta \\in \\Gamma } m_\\eta \\rho _\\eta $ .", "Then $M$ is a block diagonal matrix with blocks corresponding to these $\\rho _\\eta $ .", "By prop:Kalmarian, one of the $\\rho _\\eta $ 's is the trivial representation and its multiplicity is one.", "For every other $\\eta $ , since $(\\rho _\\eta ,1)_G = 0$ , we have $A|_{V_\\eta } = 0_{V_\\eta }$ .", "Therefore $M|_{V_\\eta } = 0_{V_\\eta }$ .", "Hence the matrix coefficient functions in the block corresponding to $\\rho _\\eta $ becomes 0 as well after $G \\times G$ averaging.", "Note that the trivial representation in $\\rho _{\\widetilde{\\mu }}$ is of dimension 1 and of multiplicity 1, therefore besides the trivial constant function only one matrix coefficient is non-zero after $G \\times G$ averaging.", "In fact, its $G \\times G$ averaging is equal to the polynomial $f(U) = \\frac{1}{{G}^2} \\sum _{g_1, g_2 \\in G} \\chi _{\\widetilde{\\mu }}(g_1^\\dagger U g_2)$ .", "Note that $(\\rho _{\\widetilde{\\mu }},1)_{\\operatorname{U}(d)} = 0$ implies that $\\int _{\\operatorname{U}(d)} f(U) \\,\\mathrm {d}U = 0$ .", "So far, we have shown the existence and uniqueness of the non-zero $G \\times G$ -invariant homogeneous polynomial $f \\in \\operatorname{Hom}(\\operatorname{U}(d), t+1, t+1)$ such that $\\int _{\\operatorname{U}(d)} f(U) \\,\\mathrm {d}U = 0$ .", "Now we take a zero $U_0$ of the polynomial $f(U) = \\frac{1}{{G}^2} \\sum _{g_1, g_2 \\in G} \\chi _{\\widetilde{\\mu }}(g_1^\\dagger Ug_2)$ .", "Let $X$ be the orbit of $U_0$ under the action of $G \\times G$ on $\\operatorname{U}(d)$ .", "For every non-trivial $\\mu \\ne 
\\widetilde{\\mu }$ , we have shown that $G$ is a unitary $\\rho _\\mu $ -design.", "By coro:benevolent and the additivity of unitary $\\rho $ -design, $X = GU_0G$ is a unitary $\\rho _\\mu $ -design.", "For $\\mu = \\widetilde{\\mu }$ , since $U_0$ is a zero of $f$ , we get $\\operatorname{Tr}M(U_0) = 0$ .", "Combined with the argument in the last paragraph, $M(U_0)$ is indeed the zero matrix.", "Hence $X$ is a unitary $\\rho _{\\widetilde{\\mu }}$ -design.", "Finally by thm:walkmiller, we conclude that $X$ is a unitary $(t+1)$ -design.", "[Proof of thm:lentiform] Without loss of generality, let us assume that $t_1 \\le t_2$ .", "Since $X_1 = G U_1 G$ is not a unitary $(t_1 + 1)$ -design, there exists a $G \\times G$ -invariant homogeneous polynomial $h \\in \\operatorname{Hom}(\\operatorname{U}(d),t_1+1,t_1+1)$ such that $h(U_1) \\ne 0$ and $\\int _{\\operatorname{U}(d)} h(U) \\,\\mathrm {d}U = 0$ .", "Now let us consider the $G \\times G$ -invariant homogeneous polynomial $h\\overline{h} \\in \\operatorname{Hom}(\\operatorname{U}(d), 2t_1 + 2, 2t_1 +2)$ .", "Note that $c := \\int _{\\operatorname{U}(d)} (h\\overline{h})(U) \\,\\mathrm {d}U > 0$ .", "Let $f := h \\overline{h} - c \\in \\operatorname{Hom}(\\operatorname{U}(d), 2t_1 + 2, 2t_1 +2)$ , then $\\int _{\\operatorname{U}(d)} f(U) \\,\\mathrm {d}U = 0$ .", "Suppose $X_2$ is a unitary $(2t_1 +2)$ -design, then we must have $f(U_2) = 0$ .", "Note that $2t_1 + 2 > t_1 +1$ , so $h(U_2) = 0$ .", "Therefore $f(U_2) = h(U_2)\\overline{h}(U_2) - c = -c < 0$ , contradiction.", "Hence $t_2 \\le 2t_1 + 1$ ." 
], [ "Examples of unitary $t$ -groups $G$ in $\\operatorname{U}(d)$ satisfying the conditions of Theorem 1", "The following are some examples of ${G\\subset \\operatorname{U}(d)}$ that satisfy the conditions in thm:Khazar.", "Here, we basically use the notation of An Atlas of Finite Groups [9].", "Also, see Guralnick-Tiep [16] and BNRT [5].", "For ${t=3}$ (we assume $d\\ge 3$ ): ${d=4, G=\\operatorname{Sp}(4,3)}$ ; ${d=6, G=6_1.\\operatorname{U}_4(3)}$ ; ${d=12, G=6\\operatorname{Suz}}$ .", "For ${t=2}$ (we assume $d\\ge 3$ ): ${d=3, G=\\operatorname{SL}(3,2)=\\operatorname{PSL}(2,7)}$ ; ${d=10, G=\\operatorname{M}_{22}}$ ; ${d=28, G=\\operatorname{Ru}}$ ; ${d=(3^m\\pm 1)/2, G=\\operatorname{PSp}(2m,3), \\operatorname{Sp}(2m,3)}$ .", "(See [5] for the details of the Weil representations in this last case.)", "The above list might exhaust all such examples, although we will not try to give a rigorous proof of this claim." 
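For the entry $d=3$, $G=\operatorname{SL}(3,2)$, $t=2$, the hypothesis of thm:Khazar can be verified directly from the character table of $\operatorname{SL}(3,2)\cong \operatorname{PSL}(2,7)$: its 3-dimensional representation takes the values $3,-1,0,1,\alpha ,\overline{\alpha }$ with $\alpha =(-1+i\sqrt{7})/2$ on conjugacy classes of sizes $1,21,56,42,24,24$. The fourth trace moment then equals $2!=2$ (so $G$ is a unitary 2-group), and the sixth equals $3!+1=7$, which is exactly the condition of Theorem 1 with $t=2$. A sketch of this arithmetic (character data quoted from the standard ATLAS table, not computed in the paper this way):

```python
from math import sqrt

# Character of the 3-dimensional irreducible representation of
# SL(3,2) = PSL(2,7) on its six conjugacy classes (sizes and values
# quoted from the standard ATLAS character table).
alpha = (-1 + 1j * sqrt(7)) / 2
sizes = [1, 21, 56, 42, 24, 24]
chi = [3, -1, 0, 1, alpha, alpha.conjugate()]
order = sum(sizes)  # |G| = 168

def moment(k):
    """(1/|G|) * sum_g |Tr chi(g)|^k, i.e. M_k(G, V) for k = 2t."""
    return sum(n * abs(c) ** k for n, c in zip(sizes, chi)) / order

# moment(4) == 2 = 2!      -> SL(3,2) is a unitary 2-group in U(3)
# moment(6) == 7 = 3! + 1  -> the hypothesis of Theorem 1 holds with t = 2
```

Here $|\alpha |^2 = 2$, so the sums are $(81+21+42+96+96)/168 = 2$ and $(729+21+42+192+192)/168 = 7$, as claimed.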
], [ "The unitary representation of $\\operatorname{SL}(3,2)$ and $\\operatorname{Sp}(4,3)$", "We aim to construct some unitary $(t+1)$ -designs based on certain unitary $t$ -groups.", "This requires us first to find explicit unitary representations of these groups.", "We adopt the notation $E(n)$ for the primitive $n$ -th root of unity $e^{2\\pi i/n}$ from the mathematical software GAP [14].", "The following two constructions are taken from [33].", "Example 9 Let $a := -(E(7)^4+E(7)^2+E(7))$ .", "Let $\\mathcal {M}$ be the matrix group generated by the following three matrices.", "$M_1 = \\begin{bmatrix}1 & & \\\\& & 1 \\\\& 1 &\\end{bmatrix},\\ M_2 = \\begin{bmatrix}1 & & \\\\& 1 & \\\\& & -1\\end{bmatrix},\\ M_3 = \\begin{bmatrix}1/2 & -1/2 & -a/2 \\\\-1/2 & 1/2 & -a/2 \\\\-\\overline{a}/2 & -\\overline{a}/2 & 0\\end{bmatrix}.$ Then $G=\\mathcal {M}^{(1)}$ , the commutator subgroup of $\\mathcal {M}$ , is isomorphic to $\\operatorname{SL}(3,2)$ and is embedded in $\\operatorname{U}(3)$ .", "Example 10 Let $\\omega := E(3)$ .", "Let $\\mathcal {M}$ be the matrix group generated by the following four matrices.", "$M_1 = \\begin{bmatrix}1 & & & \\\\& 1 & & \\\\& & \\omega ^2 & \\\\& & & 1\\end{bmatrix}, &\\ M_2 = \\frac{-i}{\\sqrt{3}} \\begin{bmatrix}\\omega ^{\\phantom{1}} & \\omega ^2 & \\omega ^2 & 0 \\\\\\omega ^2 & \\omega ^{\\phantom{1}} & \\omega ^2 & 0 \\\\\\omega ^2 & \\omega ^2 & \\omega ^{\\phantom{1}} & 0 \\\\0 & 0 & 0 & i\\sqrt{3}\\end{bmatrix}, \\\\M_3 = \\begin{bmatrix}1 & & & \\\\& \\omega ^2 & & \\\\& & 1 & \\\\& & & 1\\end{bmatrix}, &\\ M_4 = \\frac{-i}{\\sqrt{3}} \\begin{bmatrix}\\phantom{-}\\omega ^{\\phantom{1}} & -\\omega ^2 & 0 & -\\omega ^2 \\\\-\\omega ^2 & \\phantom{-}\\omega ^{\\phantom{1}} & 0 & \\phantom{-}\\omega ^2 \\\\0 & 0 & i\\sqrt{3} & 0 \\\\-\\omega ^2 & \\phantom{-}\\omega ^2 & 0 & \\phantom{-}\\omega ^{\\phantom{1}}\\end{bmatrix}.$ Then $G=\\mathcal {M}^{(1)}$ , the commutator subgroup of $\\mathcal {M}$ , is isomorphic to 
$\\operatorname{Sp}(4,3)$ and is embedded in $\\operatorname{U}(4)$ ." ], [ "The $G \\times G$ -invariant polynomial", "The construction of the $G \\times G$ -invariant polynomial $f$ in $\\operatorname{Hom}(\\operatorname{U}(d),t+1,t+1)$ is based on the irreducible characters of $\\operatorname{U}(d)$ .", "Suppose $\\chi _\\mu $ is the character of an irreducible representation $(\\rho _\\mu , V_\\mu )$ of the unitary group $\\operatorname{U}(d)$ .", "It naturally induces a $G \\times G$ -invariant function on $\\operatorname{U}(d)$ , namely $ f(U) = \\frac{1}{|G|^2} \\sum _{(g_1,g_2) \\in G \\times G} \\chi _\\mu (g_1^\\dagger U g_2).$ A closed form of $\\chi _\\mu (\\Lambda )$ can be given as a symmetric polynomial in the spectrum of the unitary matrix $\\Lambda $ .", "Note that if $\\mu = -\\mu $ , then $\\chi _\\mu $ , and thus $f$ , is a real function.", "Theorem 11 ([28] or [7]) Let $V_\\mu $ be the irreducible representation of $\\operatorname{U}(d)$ indexed by the non-increasing integer sequence $\\mu $ .", "If $\\mu _d = 0$ , then the character of $V_\\mu $ is $\\chi _\\mu (\\Lambda ) = s_\\mu (\\lambda _1, \\ldots , \\lambda _d),$ where $s_\\mu $ is the Schur polynomial, and ${\\lambda _1, \\ldots , \\lambda _d}$ are the eigenvalues of $\\Lambda $ .", "If $\\mu _d \\ne 0$ , then the character of $V_\\mu $ is $\\chi _\\mu (\\Lambda ) = \\det (\\Lambda )^{\\mu _d} \\chi _{\\mu ^{\\prime }}(\\Lambda ),$ where $\\mu ^{\\prime } = (\\mu _1 - \\mu _d, \\ldots , \\mu _{d-1} - \\mu _d, 0)$ .", "For numerical computation, finding the eigenvalues of a matrix takes considerable time and loses accuracy.", "Therefore we prefer to express $\\chi _\\mu $ by $\\operatorname{Tr}(\\Lambda ^k)$ and $\\overline{\\operatorname{Tr}({\\Lambda }^k)}$ where $0 \\le k \\le d$ .", "This can be done by the Newton–Girard formulae [31].", "Example 12 By thm:inhibitor, we have $& \\chi _{(3,0,-3)}(\\Lambda ) = \\det (\\Lambda )^{-3} s_{(6,3,0)}(\\lambda _1,\\lambda _2,\\lambda _3) 
\\\\& = \\frac{1}{(\\lambda _1 \\lambda _2 \\lambda _3)^3} \\bigg (-2 \\lambda _1 \\lambda _2 \\lambda _3 \\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right) \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right){}^4 \\\\& +\\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right){}^3 \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right){}^3+2 \\lambda _1^2 \\lambda _2^2 \\lambda _3^2 \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right){}^3 \\\\& +3 \\lambda _1 \\lambda _2 \\lambda _3 \\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right){}^2 \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right){}^2-2 \\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right){}^4 \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right) \\\\& -5 \\lambda _1^2 \\lambda _2^2 \\lambda _3^2 \\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right) \\left(\\lambda _1+\\lambda _2+\\lambda _3\\right)+\\lambda _1^3 \\lambda _2^3 \\lambda _3^3+2 \\lambda _1 \\lambda _2 \\lambda _3 \\left(\\lambda _1 \\lambda _2+\\lambda _3 \\lambda _2+\\lambda _1 \\lambda _3\\right){}^3\\bigg )$ Note that $\\frac{1}{\\lambda _i} = \\overline{\\lambda _i}$ and $\\operatorname{Tr}(\\Lambda ^k) = \\sum _i {\\lambda _i^k}$ .", "We can simplify the above expression by Newton-Girard formulae of symmetric polynomials.", "$ &\\chi _{(3,0,-3)}(\\Lambda ) =\\operatorname{Tr}(\\Lambda ^2)\\overline{\\operatorname{Tr}(\\Lambda )}^2+ \\operatorname{Tr}(\\Lambda ^3)\\overline{\\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^2)}+ \\operatorname{Tr}(\\Lambda )^2 \\overline{\\operatorname{Tr}(\\Lambda ^2)}\\\\&- 2 \\operatorname{Tr}(\\Lambda ^2) \\overline{\\operatorname{Tr}(\\Lambda ^2)}+ \\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^2) \\overline{\\operatorname{Tr}(\\Lambda ^3)}- \\operatorname{Tr}(\\Lambda ^3) \\overline{\\operatorname{Tr}(\\Lambda ^3)}- 3 
\\operatorname{Tr}(\\Lambda ) \\overline{\\operatorname{Tr}(\\Lambda )}+ 10 \\nonumber $ Example 13 $ &\\chi _{(4,0,0,-4)}(\\Lambda ) = \\bigg (*{18 \\operatorname{Tr}(\\Lambda ^4) - 12 \\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^3) - 6 \\operatorname{Tr}(\\Lambda ^2)^2 + 4 \\operatorname{Tr}(\\Lambda ^2) \\operatorname{Tr}(\\Lambda )^2}^2 \\nonumber \\\\&+ 48 \\Re *{(2 \\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^3) + \\operatorname{Tr}(\\Lambda ^2)^2) \\overline{\\operatorname{Tr}(\\Lambda ^2) \\operatorname{Tr}(\\Lambda )^2}} \\nonumber \\\\&- 16 *{\\operatorname{Tr}(\\Lambda ^2) \\operatorname{Tr}(\\Lambda )^2}^2 + *{24 \\operatorname{Tr}(\\Lambda ^3) - 27 \\operatorname{Tr}(\\Lambda ^2) \\operatorname{Tr}(\\Lambda ) + 3 \\operatorname{Tr}(\\Lambda )^3}^2 \\\\&- *{3 \\operatorname{Tr}(\\Lambda )^3 - 27 \\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^2)}^2 + 360 *{\\operatorname{Tr}(\\Lambda ^2)}^2 + 216 *{\\operatorname{Tr}(\\Lambda )^2}^2 \\nonumber \\\\&- 1296 \\Re *{\\overline{\\operatorname{Tr}(\\Lambda ^2)} \\operatorname{Tr}(\\Lambda )^2} + 432 *{\\operatorname{Tr}(\\Lambda ) \\operatorname{Tr}(\\Lambda ^2)}^2 - 720 *{\\operatorname{Tr}(\\Lambda )}^2 - 5040\\bigg ) / 144 \\nonumber $" ], [ "The approximation algorithm", "Now our goal is reduced to the following problem.", "Problem 14 Given a continuous real function $f$ defined on a connected Lie group, find a zero of this function (numerically).", "In particular, the function $f$ is a non-trivial $G \\times G$ -invariant polynomial on a unitary group $\\operatorname{U}(d)$ .", "The unitary group $\\operatorname{U}(d)$ is connected, and the existence of zero is guaranteed because the integration of $f$ on $\\operatorname{U}(d)$ is 0.", "Suppose $f(L) < 0$ and $f(R) > 0$ where $L, R$ are two matrices representing the elements of the Lie group.", "By intermediate value theorem, there exists at least one matrix $Z$ on a path connecting $L$ and $R$ such that $f(Z) = 0$ 
.", "There are infinitely many such paths, and we will choose some special paths in the following.", "It is natural to use the bisection method or the false position method to approximate the zero to arbitrary precision.", "The trouble here is that the function is defined on a manifold rather than on Euclidean space.", "For Lie groups, there is a canonical atlas given by the exponential map from the Lie algebra to the Lie group.", "We take advantage of this property to define the mid-point and the false position.", "The mid-point of $L$ and $R$ is defined to be $\\exp \\left((\\log L + \\log R)/2 \\right)$ , and the false position between $L$ and $R$ is defined to be $\\exp \\left(\\frac{f(R)\\log L - f(L)\\log R}{f(R)-f(L)} \\right)$ .", "The false position method usually converges faster than the bisection method.", "Nevertheless we use the bisection method when $L$ and $R$ are far apart, for the sake of robustness.", "One may consider other iterative methods to speed up the convergence.", "We did not use them because evaluating the function is the dominant part of the computation.", "The initial values of $L$ and $R$ are obtained by sampling unitary matrices at random until both of them are found.", "Algorithm FindZeroOnUnitaryGroup$(f,L,R,\\epsilon )$ : while $\\Vert L-R\\Vert > \\epsilon $ , set $M \\leftarrow \\exp \\left((\\log L + \\log R)/2 \\right)$ (bisection) or $M \\leftarrow \\exp \\left(\\frac{f(R)\\log L - f(L)\\log R}{f(R)-f(L)} \\right)$ (false position).", "If $f(M) < 0$ , set $L \\leftarrow M$ ; otherwise set $R \\leftarrow M$ ; upon termination return $M$ .", "We are ready to construct the unitary designs, but let us put a further constraint on the solution for the moment.", "Problem 15 Find a zero $U_0$ with a good property, namely that the size of the orbit $GU_0G$ is as small as possible.", "Suppose $GU_0G$ is an orbit whose size is smaller than $|G|^2$ ; then there must exist $g_1,g_2,g_3,g_4 \\in G$ such that $g_1^\\dagger U_0g_2=g_3^\\dagger U_0g_4$ .", "Therefore $U_0^\\dagger g_i U_0 = g_j$ , where $g_i = g_3 g_1^\\dagger $ and $g_j = g_4 g_2^\\dagger $ are also
elements of $G$ .", "This implies that $g_i$ and $g_j$ have the same spectrum.", "If $g_i$ has distinct eigenvalues, then $U_0$ is on a submanifold isomorphic to $\\operatorname{U}(1) \\times \\operatorname{U}(1) \\times \\cdots \\times \\operatorname{U}(1)$ .", "If the eigenvalues of $g_i$ are not simple, then $U_0$ is on a submanifold isomorphic to $\\operatorname{U}(m_1) \\times \\operatorname{U}(m_2) \\times \\cdots \\times \\operatorname{U}(m_k)$ , where $m_1, m_2, \\ldots , m_k$ are the multiplicities of the eigenvalues.", "Note that there is no guarantee that a zero exists on the submanifold.", "Though it does not solve prob:septomarginal completely, we have the clue to find them." ], [ "Solutions", "For $G \\cong \\operatorname{SL}(3,2) \\hookrightarrow \\operatorname{U}(3)$ , we find a zero on the submanifold $\\operatorname{U}(1) \\times \\operatorname{U}(1) \\times \\operatorname{U}(1)$ , namely the diagonal unitary matrices.", "The size of the orbit is at most ${G}^2/4 = 7056$ .", "Example 16 Let $G \\cong \\operatorname{SL}(3,2)$ be the matrix group in eg:floatboard, and let $f$ be the $G \\times G$ -invariant polynomial induced by the irreducible character $\\chi _{(3,0,-3)}$ in eqn:amphitriaene.", "Then $U_0 = \\operatorname{diag}(u_{11}, u_{22},u_{33})$ is a zero of $f$ , where $u_{11} = 1$ , $u_{22} \\approx 0.6480674529649858 - 0.7615829412529393 i$ and $u_{33} \\approx -0.3307476956662597 - 0.9437192176762438 i$ .", "The error bound in alg:Pocket is $\\epsilon = 10^{-15}$ .", "Hence the orbit $GU_0G$ is a unitary 3-design on $\\operatorname{U}(3)$ .", "The size of this orbit is 7056.", "Moreover, we can characterize all the diagonal unitary matrices in $\\operatorname{U}(3)$ which make $G U_0 G$ a unitary 3-design.", "Let $u, v, t$ be real numbers and let $U_0 = \\operatorname{diag}\\left(e^{it}, e^{i(t + \\frac{u+v}{2})}, e^{i(t + \\frac{u-v}{2})}\\right)$ .", "Then $G U_0 G$ is a unitary 3-design if and only if $u$ and $v$ satisfy $ & \\cos (u) 
[281838 \\cos (v) - 156 \\cos (2 v)-158]+ \\sqrt{7} \\sin (u) \\left[24 \\cos (v)+6 \\cos (2 v)+2 \\right] \\nonumber \\\\& + [28125 -181 \\cos (v)+140901 \\cos (2 v)-65 \\cos (3 v)] = 0.$ The solution of this equation is shown in fig:Beethovenish.", "Figure: Solution of eqn:brachiocyllosis for $u,v \\in [0,2\\pi ]$ . For $G \\cong \\operatorname{Sp}(4,3) \\hookrightarrow \\operatorname{U}(4)$ , we find a zero on the submanifold $\\operatorname{U}(2)\\times \\operatorname{U}(2)$ .", "The size of the orbit is at most $|G|^2/6 = 447897600$ .", "Example 17 Let $G \\cong \\operatorname{Sp}(4,3)$ be the matrix group in eg:zoon, and let $f$ be the $G \\times G$ -invariant polynomial induced by the irreducible character $\\chi _{(4,0,0,-4)}$ in eqn:northwestern.", "Then $U_0 = \\begin{bmatrix}A & 0 \\\\0 & B\\end{bmatrix} $ is a zero of $f$ , where $A \\approx \\begin{bmatrix}-0.106632-0.973877 i & 0.0621677\\, -0.190601 i \\\\0.197341\\, +0.0353545 i & -0.807683-0.554486 i\\end{bmatrix}$ and $B \\approx \\begin{bmatrix}-0.596879-0.434093 i & -0.562033-0.373388 i \\\\0.372766\\, -0.562445 i & -0.381284+0.631921 i\\end{bmatrix}$ .", "The error bound in alg:Pocket is $\\epsilon = 10^{-6}$ .", "Hence the orbit $GU_0G$ is a unitary 4-design on $\\operatorname{U}(4)$ .", "The size of this orbit is 447897600.", "Remark 18 The existence of exact $t$ -designs is guaranteed by thm:Khazar, which is different from finding approximate unitary $t$ -designs.", "alg:Pocket can approximate such a unitary design to arbitrary precision if one has enough time and computational resources.", "The time complexity of evaluating eqn:trifledom is $O(t p(t) d^3 |G|^2)$ , where $p(n) \\sim \\frac{1}{4n \\sqrt{3}} e^{\\pi \\sqrt{2n/3}}$ , the partition function, is equal to the number of partitions of $n$ .", "The error $\\epsilon $ is ideally halved after each iteration.", "So it takes about $\\log _2 10 \\approx 3.3$ iterations to gain one more significant digit.", "For eg:pentahalide, our
program (written in Mathematica) ran on a PC equipped with a Core i7-6700 CPU and 8GB RAM, and it took about half a day for each iteration." ], [ "Discussion", "It would be interesting to classify those unitary $t$ -groups $G\\subset U(d)$ that satisfy $(\\chi ^{t+1},\\chi ^{t+1})_G=(\\chi ^{t+1},\\chi ^{t+1})_{U(d)}+1=(t+1)!+1$ , which is the condition of thm:Khazar.", "This should certainly be possible for $t\\ge 2$ , as such $G$ are among those already classified.", "The problem would be interesting for $t=1$ as well.", "We expect many such examples of unitary 2-designs to arise from the method described in this paper.", "Such a classification may be obtained by extending the method of Guralnick-Tiep [16], although actually doing so would not be trivial at all.", "This would lead to explicit constructions of many families of unitary 2-designs.", "We believe this is an independently interesting open problem from the viewpoint of finite group theory.", "Concerning eg:cattlegate,eg:pentahalide, it would be interesting to determine the smallest sizes of unitary 3-designs (respectively, 4-designs) that can be obtained by our method.", "This may be done by discussing the possible submanifolds which contain the orbit.", "If the function attains zero on a submanifold, we can still apply alg:Pocket.", "On the other hand, it is not easy to show the non-existence of zeros on a submanifold." ], [ "Acknowledgment", "The authors thank TGMRC (Three Gorges Mathematical Research Center) of China Three Gorges University in Yichang, Hubei, China, for supporting the visits of the authors to work on this research project in October 2018.", "The work is supported in part by NSFC Grant 11671258.", "The work of M. N. is partly supported by KAKENHI from JSPS Grant-in-Aid for Scientific Research (KAKENHI Grant No. 17K05554).", "Y. Z. is supported by NSFC Grant No. 11801353 and China Postdoctoral Science Foundation No. 2018M632078." ] ]
1906.04583
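The exponential-chart bisection of the preceding entry (FindZeroOnUnitaryGroup) can be sketched in a few lines. The toy below restricts to the simplest connected unitary group, $\operatorname{U}(1)$, where matrix log/exp reduce to complex `cmath.log`/`cmath.exp`; the function `f` and all names here are illustrative assumptions, not objects from the paper.

```python
import cmath

def lie_midpoint(L, R):
    # Exponential-chart midpoint exp((log L + log R)/2): the Lie-group
    # analogue of the Euclidean midpoint used by the bisection method.
    return cmath.exp((cmath.log(L) + cmath.log(R)) / 2)

def false_position_point(L, R, fL, fR):
    # False position exp((f(R) log L - f(L) log R)/(f(R) - f(L))),
    # the paper's faster alternative when L and R are already close.
    return cmath.exp((fR * cmath.log(L) - fL * cmath.log(R)) / (fR - fL))

def find_zero_on_u1(f, L, R, eps=1e-12, max_iter=200):
    # Bisection on U(1), assuming f(L) < 0 < f(R); mirrors the update rule:
    # replace whichever endpoint shares the sign of f at the midpoint.
    M = lie_midpoint(L, R)
    for _ in range(max_iter):
        if abs(L - R) <= eps:
            break
        M = lie_midpoint(L, R)
        if f(M) < 0:
            L = M
        else:
            R = M
    return M

# Toy continuous function on the unit circle, with zeros at e^{+-i*pi/3}.
f = lambda u: u.real - 0.5
L = cmath.exp(1.5j)   # f(L) < 0
R = cmath.exp(0.1j)   # f(R) > 0
zero = find_zero_on_u1(f, L, R)   # approximately e^{i*pi/3}
```

Since the gap between the endpoints roughly halves each iteration, about $\log_2 10 \approx 3.3$ iterations buy one significant digit, matching the cost estimate in Remark 18.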
[ [ "Bayesian Evaluation of Incomplete Fission Yields" ], [ "Abstract Fission product yields are key infrastructure data for nuclear applications in many aspects.", "It is a challenge both experimentally and theoretically to obtain accurate and complete energy-dependent fission yields.", "We apply the Bayesian neural network (BNN) approach to learn existing fission yields and predict unknowns with uncertainty quantification.", "We demonstrated that BNN is particularly useful for evaluations of fission yields when incomplete experimental data are available.", "The BNN results are quite satisfactory regarding distribution positions and energy dependencies of fission yields." ], [ "Bayesian Evaluation of Incomplete Fission Yields Zi-Ao Wang State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China Junchen Pei [email protected] State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China Yue Liu State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China Yu Qiang State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China Fission product yields are key infrastructure data for nuclear applications in many aspects.", "It is a challenge both experimentally and theoretically to obtain accurate and complete energy-dependent fission yields.", "We apply the Bayesian neural network (BNN) approach to learn existing fission yields and predict unknowns with uncertainty quantification.", "We demonstrated that BNN is particularly useful for evaluations of fission yields when incomplete experimental data are available.", "The BNN results are quite satisfactory regarding distribution positions and energy dependencies of fission yields.", "Introduction.— Nuclear fission data is the key ingredient in many nuclear applications [1] such as nuclear energy, radiation shielding,
management of nuclear wastes, and production of rare isotopes.", "The role of fission is also essential in synthesizing superheavy elements [2], [3], and in understanding reactor neutrinos [4] and r-process nucleosynthesis in neutron star mergers [5].", "In particular, high-precision and reliable neutron-induced fission product yield (FPY) distributions of actinides are very valuable.", "Systematic analysis of FPY has provided interesting insights into the evolution of nuclear structures and dynamics [6], [7], [8].", "However, experimental measurement of FPY over continuous incident neutron energies is extremely difficult, and the available data are insufficient.", "In major nuclear data libraries (ENDF [9], JENDL [10], JEFF [11], CENDL [12], etc.), complete evaluations of FPY are only available for neutron incident energies around thermal energies, 0.5 MeV and 14 MeV.", "Therefore, the prediction and evaluation of incomplete FPY at other energies for fast reactors are highly anticipated.", "The theoretical description of fission observables is well known as one of the most challenging tasks in nuclear physics [13].", "The recent developments of fully microscopic nuclear fission models, such as Time-dependent Hartree-Fock-Bogoliubov [14] and the Time-dependent Generator-Coordinate Method [15], [16], are promising but are not yet ready for accurate quantitative applications.", "There are phenomenological and semi-microscopic fission models which can describe existing data well in some regions but suffer from limited predictive power as fission modes evolve [17].", "Indeed, the details of fission observables mainly depend on the multi-dimensional potential energy surfaces, which are rather complex [13].", "The fission of compound nuclei involves the fading of quantum effects as excitation energies increase [3], [18].", "It is even more demanding to describe the energy dependence of FPY [18], [19] and post-neutron FPY (independent FPY) [20].", "Furthermore, the uncertainty quantification of nuclear models has
become a pressing issue in recent years [21].", "Machine learning is a very powerful tool for learning complex big data and then making predictions.", "In this respect, Bayesian neural networks can naturally solve ill-posed inverse regression problems with uncertainty quantification [22].", "There are a large number of experimental measurements of fragment distributions for different nuclei and excitation energies.", "To this end, the BNN is ideal for describing complex fission observables and capturing statistical properties of the fission of compound nuclei.", "BNN has several applications in nuclear physics for predictions of binding energies [22], [23], [24], [25], which are useful for the inference of unknown nuclear masses near drip lines [26].", "It has also been used in other problems such as simulating nuclear reaction cross sections [27] and the equation of state [28].", "The Gaussian process can also solve regression problems but focuses on local correlations [22].", "The conventional evaluation of fission yields mainly relies on least-squares adjustments of the parameters of various phenomenological models [29], such as the Brosa model [31].", "This kind of evaluation is not applicable when very few experimental data are available.", "The main objective of this work is to evaluate incomplete independent FPY based on BNN with uncertainty quantification.", "The Talys [30] code, which includes the Brosa [31] and GEF [32] models, has been extensively used for evaluations of fission data.", "We also build BNN+Talys as an attempt to improve the descriptions of fission yields.", "It is possible that prediction and evaluation by learning from existing data can capture underlying correlations beyond theoretical fission models.", "The models.— The BNN approach performs posterior inference by treating network weights as random parameters [33].", "With given data and finite training steps, the BNN can offer full uncertainty quantification of parameters as well as
inferences through the confidence interval (CI), which includes the true value with a given probability.", "The prior distribution allows one to encode problem-specific beliefs as well as general properties about weights, which leads to penalized functions that avoid overfitting problems.", "In contrast, frequentist inference aims to find the exact parameters in the limit of infinite samplings.", "We adopt a feed-forward neural network defined as $f(x,\\theta ) = a + \\sum _{j=1}^{H} b_j \\tanh (c_j+\\sum _{i=1}^{l} d_{ij}x_i)$ where $H$ denotes the number of neurons in the hidden layer, $l$ denotes the number of inputs, and the model parameters (or “connection weights\") are $\\theta $ =$\\lbrace a, b_j, c_j, d_{ij} \\rbrace $ .", "The choice of the number of neurons defines the complexity of the network and depends on the size of the dataset.", "The inputs of the network are given by $x_i$ =$\\lbrace Z_i, N_i, A_i, E_i\\rbrace $ , which include the charge number $Z_i$ and neutron number $N_i$ of the fission nucleus, the mass number $A_i$ of the fission fragment, and the excitation energy of the compound nucleus $E_i$ =$e_i+S_i$ ($e_i$ and $S_i$ are the neutron incident energy and neutron separation energy, respectively).", "The likelihood function $p(D|\\theta )$ and objective function $\\chi ^2(\\theta )$ are given by $p(D|\\theta ) = \\exp (-\\chi ^2/2),~~\\chi ^2 = \\sum _i (t_i-f(x_i,\\theta ))^2/\\Delta t_i^2$ where the data is given as $D$ =$\\lbrace x_i,t_i\\rbrace $ , in which $x_i$ are the inputs and $t_i$ is the output fission yield.", "The posterior distribution $p(\\theta |D)$ is obtained by $p(\\theta |D)=\\frac{ p(D|\\theta ) p(\\theta ) }{ \\int p(D|\\theta ) p(\\theta ) d\\theta }$ where $p(\\theta )$ is the assumed prior distribution of parameters and usually adopts a Gaussian function with a width as described in Ref.[33].", "The denominator of the integral is called the “evidence\", which can be used for model comparisons.", "The parameter learning is actually a posterior-maximization process via
Stochastic Gradient Langevin Dynamics [33].", "The prediction on new inputs $x_n$ can be obtained with the net function $f(x_n,\\theta )$ .", "The averaged prediction of BNN involves a high-dimensional integral and can be obtained by Hybrid Markov Chain Monte Carlo integration [33] over $\\theta $ , $\\langle f(x_n,\\theta )\\rangle =\\int f(x_n,\\theta )p(\\theta |D)d\\theta $ When new observed data $D_n$ are added, the posterior can be updated as $p(\\theta |D,D_n)= \\frac{ p(D_n|\\theta )p(\\theta |D)}{ \\int p(D_n|\\theta )p(\\theta |D)d \\theta }$ The inference can be continuously improved with the updated posterior.", "Single fission— Firstly, to test the performance of the BNN approach, the learning of independent mass distributions of the neutron-induced fission of $n$ +$^{235}$ U at energy of 0.5 MeV is studied.", "The training dataset is taken from JENDL [10].", "In this case, the network input has only one variable $x_i$ =$A_i$ .", "In Fig.REF (a), the results with 6 neurons for $N$ =107 points are satisfactory with a total $\\chi _N^2$ of 2.25$\\times $ 10$^{-6}$ , after 100 000 BNN samplings, where $\\chi _N^2$ is defined as $\\sum \\limits _i (t_i-f(x_i))^2/N$ .", "The largest deviations appear at mass numbers 98 and 100.", "It is known that global optimization and overfitting are the most challenging problems for the neural-network approach.", "The BNN with a complex network can have problems in numerical convergence [22].", "To improve the performance of the network, we resample the points which have large deviations in the learning set and repeat the training for several cycles, in analogy to reinforcement learning.", "The results are shown in Fig.REF (b) with $\\chi _N^2$ of 2.05$\\times $ 10$^{-6}$ .", "It can be seen that the learning in Fig.REF (b) has been much improved compared to Fig.REF (a).", "The learning performance with resampling is slightly better than that of BNN with 8 neurons, for which $\\chi _N^2$ is 2.10$\\times $ 10$^{-6}$ .", "BNN needs
sufficient samplings to get numerical convergence, and this is very time consuming [22].", "We show that reinforcement learning is helpful for efficiently reaching global optima.", "The associated CI with a 95$\\%$ probability is small and can reasonably reflect the BNN performance.", "Figure: (Color online) The BNN learning of fission yields of $n$ +$^{235}$ U at energy of 0.5 MeV from JENDL, (a) using 6 neurons and (b) using 6 neurons and resampling. The shadow region corresponds to the CI of BNN estimated at 95$\\%$ .", "Figure: (Color online) The BNN+Talys learning of neutron-induced fission yields from the JENDL library. The Talys results with default parameters are also given for comparison. The shadow region corresponds to the CI estimated at 95$\\%$ .", "Figure: (Color online) The BNN prediction of fission yields of $n$ +$^{235}$ U compared with JENDL, with neutron incident energy at (a) 0.5 MeV and (b) 14 MeV. The shadow region corresponds to the CI estimated at 95$\\%$ .", "Table: The learning and validation errors $\\chi _N^2$ ($\\times 10^{-5}$ ) of various models, in which BNN-32 denotes that 32 neurons have been adopted in the network, and Talys(pre-n) denotes the calculated pre-neutron FPY using the GEF model in the Talys-1.9 code with default parameters.", "Figure: (Color online) The BNN evaluation of fission yields of $n$ +$^{235}$ U at energies of 1.37 MeV (a) and 14.8 MeV (b), after learning the available experimental data. The dashed line denotes the BNN prediction without learning the experimental data. The shadow region corresponds to the CI estimated at 95$\\%$ .", "Figure: (Color online) Similar to Fig. but for fission yields at energies of 4.49 MeV (a) and 8.9 MeV (b).", "Figure: (Color online) The compiled evaluations of fission yields of $n$ +$^{235}$ U at different incident energies.", "The evaluations at 0.5 and 14 MeV are taken from JENDL, and BNN evaluations at other energies are taken from Fig. and Fig.", "Validation— We next include the evaluated experimental
neutron-induced independent FPY of 30 nuclei ($^{227,229,232}$ Th, $^{231}$ Pa, $^{232,233,234,235,236,237,238}$ U, $^{237,238}$ Np, $^{238,239,240,241,242}$ Pu, $^{241,243}$ Am, $^{242,243,244,245,246,248}$ Cm, $^{249, 251}$ Cf, $^{254}$ Es, $^{255}$ Fm ) from JENDL [10] in the learning set.", "To validate the predictive power of BNN, the data of $^{235}$ U have been excluded from the learning set.", "To provide some physics guidance, the widely used GEF model for pre-neutron FPY in the Talys-1.9 code [30] has been adopted to obtain initial fission yields.", "BNN is used to learn the residuals.", "The combined BNN+Talys approach is similar to the hybrid BNN approach used for nuclear mass predictions [22], [23], [25].", "The network adopts 32 neurons for a total of about 5029 data points and additional resampled points.", "The dataset is much larger than that used in BNN for nuclear mass studies [22], [23], [25].", "Some training results are shown in Fig.REF .", "It is shown that the Talys code with default parameters has large discrepancies compared to evaluated data.", "The BNN+Talys can remarkably reproduce the overall evaluated fission yields.", "The crucial test is the validation of the BNN approach for $n$ +$^{235}$ U, as shown in Fig.REF .", "We see that the predictions of BNN+Talys are satisfactory for $n$ +$^{235}$ U at energies of 0.5 MeV and 14 MeV.", "The energy dependence and the position of distributions can be well described by BNN.", "This success is mainly because the learning set has included neighboring U and Pu isotopes.", "Note that the predictions are less satisfactory around $^{227}$ Th and $^{255}$ Fm, where neighboring nuclei in the learning set are not sufficient.", "The validation of $^{235}$ U is not as good as the learning in Fig.REF .", "This is also reflected by the larger CI in Fig.REF compared to Fig.REF .", "The performances of different models in learning (29 nuclei) and validation ($^{235}$ U) are compared in Table REF , with listed errors
$\\chi _N^2$ .", "For the pure BNN approach, the best performance is BNN-32-resample with 32 neurons and resampling.", "We demonstrated that resampling is helpful in both learning and validation.", "The BNN approach is not always improved by increasing the number of neurons, and local optima are more likely to occur.", "For the BNN+Talys approach, the best performance is BNN-40.", "For BNN-32, combining with Talys does not improve performance, due to its large discrepancies.", "Note that the default Talys calculations are not as good as the updated GEF model in Ref. [32].", "In contrast, BNN plus microscopic nuclear mass models are very successful [22], [23], [25].", "It is expected that BNN plus microscopic fission models [15] can further improve descriptions of fission yields.", "Evaluation—The key motivation of our BNN approach is to evaluate incomplete experimental FPY based on the information learned from complete evaluations of other nuclei.", "Fig.REF shows the BNN-32-resample results for fission yields of n+$^{235}$ U at energies of 1.37 and 14.8 MeV after learning the JENDL library.", "There is no complete evaluation in existing libraries, and only a few experimental data are available [34].", "We see that after taking into account the experimental data, the BNN can give a rather reasonable evaluation of fission yields.", "The BNN predictions without learning the experimental data are also satisfactory.", "This good performance is not a surprise, because the energies are close to 0.5 MeV and 14 MeV, which are included in the learning set.", "The evaluations of n+$^{235}$ U at energies of 4.49 and 8.9 MeV are shown in Fig.REF .", "Evaluations at energies between 0.5 and 14 MeV are more challenging for BNN.", "Consequently the CI in Fig.REF is much larger than that of Fig.REF .", "We see that the BNN predictions have unreasonable negative FPY around mass number 110 at energies of 1.37 and 4.49 MeV.", "The BNN evaluation by taking into
account the experimental data can avoid the negative values.", "Based on Fig.REF and Fig.REF , the fission yields from BNN at different energies are shown in Fig.REF .", "It can be seen that around the valley (around mass number 110$\\sim $ 120) fission yields increase monotonically as energies increase from 1.37, 4.49, 8.9, to 14.8 MeV.", "The two peaks corresponding to asymmetric fission modes decrease.", "It is known that the symmetric fission mode will play a role as excitation energies increase [18], [19].", "We demonstrated that the features of energy-dependent fission yields can be successfully described by the BNN evaluation.", "The BNN evaluation can give reasonable CIs as uncertainty warnings.", "Generally, our approach is reliable for the distribution position but less accurate for detailed peak structures, although it can be very accurate for a single fission, as shown in Fig.REF .", "Usually fission yields are evaluated by tuning phenomenological models, which is not applicable when only a few experimental data are available.", "In this respect, the BNN approach is superior to phenomenological evaluations.", "Further improvement of our approach is possible by adding more measured data to the learning set and adopting a specialized reinforcement learning scheme.", "Other auxiliary variables could be adopted for improvements.", "For example, the odd-even effect in charge distributions is significant [7].", "Physics guidance on priors from microscopic fission models is also anticipated.", "Summary— We applied BNN to learn and predict independent fission yields of actinide nuclei for the first time.", "In many cases, the experimental distributions of neutron-induced fission yields are rather incomplete.", "The BNN evaluation of the incomplete fission yields based on learned information is very valuable.", "The BNN results are quite satisfactory regarding the distribution positions and energy dependencies of fission yields, while phenomenological
evaluations are not applicable when very few experimental data are available.", "The BNN with resampling can improve both learning and prediction, indicating that specialized reinforcement learning is needed.", "The associated confidence interval can reasonably estimate the evaluation uncertainty.", "Further improvement of the BNN approach is appealing towards the modeling of reliable and quantitative nuclear fission data for practical nuclear applications.", "We thank W. Nazarewicz for useful comments.", "This work was supported by the National Key R$\\&$ D Program of China (Contract No. 2018YFA0404403) and the National Natural Science Foundation of China under Grants No. 11790325 and No. 11835001.", "We also acknowledge that computations in this work were performed on Tianhe-1A, located in Tianjin, and Tianhe-2, located in Guangzhou." ] ]
1906.04485
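The network form $f(x,\theta)=a+\sum_j b_j\tanh(c_j+\sum_i d_{ij}x_i)$ and the $\chi^2$ likelihood of the preceding entry can be sketched directly. This is a minimal illustration only: the random weight initialization and the sample $(Z,N,A,E)$ input are assumptions for demonstration, not values from the paper, and no Langevin sampling of the posterior is performed here.

```python
import math
import random

def network(x, theta):
    # f(x, theta) = a + sum_j b_j * tanh(c_j + sum_i d_ij * x_i):
    # the single-hidden-layer form whose weights the BNN treats as random.
    a, b, c, d = theta
    return a + sum(
        bj * math.tanh(cj + sum(dij * xi for dij, xi in zip(dj, x)))
        for bj, cj, dj in zip(b, c, d)
    )

def chi2(data, theta):
    # chi^2 = sum_i (t_i - f(x_i, theta))^2 / (Delta t_i)^2, so that the
    # likelihood is p(D | theta) = exp(-chi^2 / 2).
    return sum((t - network(x, theta)) ** 2 / dt ** 2 for x, t, dt in data)

random.seed(0)
H, l = 6, 4  # 6 hidden neurons; l = 4 inputs (Z, N, A, E), as in the text
theta = (
    random.gauss(0, 1),                                          # a
    [random.gauss(0, 1) for _ in range(H)],                      # b_j
    [random.gauss(0, 1) for _ in range(H)],                      # c_j
    [[random.gauss(0, 1) for _ in range(l)] for _ in range(H)],  # d_ij
)
x = [92.0, 143.0, 140.0, 7.0]   # hypothetical (Z, N, A, E) input
y = network(x, theta)           # one evaluation of f(x, theta)
# Likelihood of a pseudo-datum offset from y by 0.01, with Delta t = 0.02.
likelihood = math.exp(-chi2([(x, y + 0.01, 0.02)], theta) / 2)
```

In the full BNN, `theta` would be sampled from the posterior (e.g., via Stochastic Gradient Langevin Dynamics) and predictions averaged over those samples, so the spread of `network(x, theta)` across samples yields the confidence intervals quoted in the paper.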
[ [ "The K\"ahler-Ricci flow with Log Canonical Singularities" ], [ "Abstract We establish the existence of the K\"ahler-Ricci flow on projective varieties with log canonical singularities.", "This generalizes some of the existence results of Song-Tian \\cite{ST3} in case of projective varieties with klt singularities.", "We also prove that the normalized K\"ahler-Ricci flow will converge to the \\ka-Einstein metric with negative Ricci curvature on semi-log canonical models in the sense of currents.", "Finally we also construct K\"ahler-Ricci flow solutions performing divisorial contractions and flips with log canonical singularities." ], [ "Introduction", "In the decades of 1980 and 1990, Mori first proposed the minimal model program for high dimensional algebraic varieties, which has become an active field in algebraic geometry (cf.", "[21], [23]).", "The main target of this program is to give a complete birational classification of algebraic varieties according to the birational classification of their minimal models.", "A variety $X$ is called minimal if its canonical line bundle $K_{X}$ is nef, i.e., it holds that $K_{X}\\cdot C\\ge 0$ for any algebraic curve $C.$ An important procedure in this program is to perform successive birational surgeries, such as blow-downs and flips, to an algebraic variety until the so called minimal model is reached.", "Later, an essential breakthrough by Birkar-Cascini-Hacon-McKernan in [4] asserts the minimal model is indeed attained by this procedure for varieties with klt singularities.", "On the other hand, finding a canonical metric on a Kähler  manifold or a variety has long been a central problem in Kähler geometry.", "Since Yau's solution to Calabi's conjecture [43] there have been a lot of developments in this direction.", "When the first Chern class $c_{1}(X)$ has definite sign the canonical metrics are Kähler-Einstein metrics which have been studied systematically.", "In particular, the Kähler-Ricci flow can also be 
used to study the canonical metrics.", "In [8] Cao gave a parabolic proof of the existence of Kähler-Einstein metrics in the negative and zero first Chern class cases, and showed that starting from some $\\omega _0$ (belonging to $-c_1(X)$ in the negative case), the Kähler-Ricci flow $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\omega &=-Ric(\\omega )\\\\\\omega (0) &= \\omega _0,\\end{array}\\right.$ has a long-time solution converging (after normalizing in the negative case) to the Kähler-Einstein metric on $X$ .", "However, in general the first Chern class will not be zero or have a definite sign, and in these general situations the minimal model program provides ideas for finding so-called generalized Kähler-Einstein metrics as canonical metrics on these varieties.", "In [42] Tsuji used the Kähler-Ricci flow to study the existence of the generalized Kähler-Einstein metric on minimal projective manifolds of general type.", "In [41] Tian-Zhang established the general existence result of the Kähler-Ricci flow.", "In particular, they gave a general existence-time criterion for the Kähler-Ricci flow on Kähler manifolds and studied the long-time behavior on projective manifolds.", "Furthermore, in a sequence of works [32], [33], [34], Song-Tian initiated the study of the analytic minimal model program, which proposes to find the canonical metrics on general projective varieties via the Kähler-Ricci flow.", "They gave more precise descriptions of the long-time behavior of the Kähler-Ricci flow on projective manifolds and began the study of the Kähler-Ricci flow on singular projective varieties.", "Let us briefly recall the minimal model program of Mori and discuss some corresponding works on the analytic aspect of this program.", "By the classical theory [21], [23], algebraic varieties can be classified by their Kodaira dimensions, where the Kodaira dimension of an $n$ -dimensional variety $X$ is defined by $\\kappa (X)=Kod(X):=\\sup \\lbrace \\kappa |\\liminf _{l\\rightarrow
+\\infty }\\frac{\\dim H^{0}(X,lK_{X})}{l^{\\kappa }}>0\\rbrace .$ When $X$ is not minimal ($K_{X}$ is not nef), by the cone theorem and the base point free theorem there exists a contraction map $\\varphi : X\\rightarrow X^{\\prime }$ determined by the extremal ray which has negative intersection number with $K_{X}.$ If $\\dim X^{\\prime }<n$ , then $\\varphi $ is of fiber type and $X$ is a uniruled Mori fiber space, which implies $\\kappa (X)=-\\infty $ ([21], [23]), and we can continue to consider the structure of the lower-dimensional variety $X^{\\prime }.$ When $\\dim X^{\\prime }=n,$ we can consider the exceptional set $Exc(\\varphi )$ of $\\varphi ,$ which is the complement of the set in $X$ where $\\varphi $ is an isomorphism.", "If $Exc(\\varphi )$ has codimension 1, $\\varphi $ is a divisorial contraction such that the Picard number satisfies $\\rho (X^{\\prime })=\\rho (X)-1.$ If $Exc(\\varphi )$ has codimension greater than 1, it was conjectured that there exists a flip morphism $X\\rightarrow X^{+}$ with associated $\\varphi ^{+}:X^{+}\\rightarrow X^{\\prime }$ such that $-K_{X}$ is $\\varphi $ -ample while $K_{X^{+}}$ is $\\varphi ^{+}$ -ample.", "The existence of the flip on normal varieties with klt singularities was confirmed in [4].", "In these contexts, in [34], Song-Tian first defined the weak Kähler-Ricci flow on $\\mathbb {Q}$ -factorial projective varieties with klt singularities and generalized Tian-Zhang's maximal-time existence of the flow solution ([41]) to singular settings.", "In particular, it was determined that when $X$ is not minimal ($K_{X}$ is not nef), then in finite time the flow will encounter an analytic singularity corresponding to algebraic surgeries characterized by $\\varphi $ , and that the Kähler-Ricci flow could be extended through these singularities in the sense of currents.", "Furthermore, if the surgeries at the singular time are divisorial contractions, Song-Weinkove in [35], [36] proved the geometric convergence
of the flow solution to the variety generated by the divisorial contractions.", "Song and Yuan also found several examples of metric flips corresponding to algebraic flips in [30], [37].", "For divisorial contractions we have $\\rho (X^{\\prime })=\\rho (X)-1$ , so there can be at most finitely many divisorial contractions encountered by successive contractions of a non-minimal variety $X$ as above before a minimal variety is reached.", "In general, however, it is still unknown whether the number of flips encountered is necessarily finite or not.", "On certain varieties of dimension 3 or 4 it is known that only finitely many flips are encountered [21].", "We will assume that only finitely many successive contractions of $X$ are required before reaching a minimal variety, which we still denote by $X.$ We will also assume that the abundance conjecture is true, which asserts that if $K_{X}$ is nef then it is semi-ample, i.e., the canonical ring $\\text{R}:=\\bigoplus _{l\\ge 0}H^{0}(X,lK_{X})$ is finitely generated.", "Thus there exists a natural morphism $\\Phi _{|lK_{X}|}:X\\rightarrow X_{can}=\\text{Proj R}$ where $X_{can}$ is called the canonical model of $X.$ We then consider three separate cases in terms of the Kodaira dimension $\\kappa (X)$ of $X$ : $\\kappa =0$ ; $0<\\kappa <n$ ; $\\kappa =n$ .", "If $\\kappa (X)=0$ then the abundance conjecture holds while $K_{X}$ is numerically trivial.", "In this case, if $X$ is smooth, Cao [8] proved that the Kähler-Ricci flow converges smoothly to the Calabi-Yau metric on $X$ , while if $X$ has klt singularities, Song-Yuan [38] proved that the Kähler-Ricci flow converges to the singular Calabi-Yau metric in the current sense.", "If $\\kappa (X)=n$ , i.e., $K_{X}$ is big or $X$ is of general type, then the abundance conjecture also holds, and the morphism $\\Phi _{|lK_{X}|}$ is a birational morphism from $X$ to $X_{can}.$ By [41], if $X$ is nonsingular, the normalized Kähler-Ricci flow with any initial Kähler metric $\\omega _{0}$
$\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\omega } &=-Ric(\\tilde{\\omega })-\\tilde{\\omega } \\\\\\tilde{\\omega }(0) &= \\tilde{\\omega }_{0},\\end{array}\\right.$ has a unique solution on $[0,+\\infty )$ and converges to the generalized Kähler-Einstein metric on $X_{can}$ in the current sense, where the generalized Kähler-Einstein metric is a smooth Kähler-Einstein metric on the regular part of $X_{can}.$ In the case that $X$ is of general type with klt singularities but not minimal, in [34] Song-Tian proved that given a suitable initial metric, one can continue the flow through its singularities, and that if only finitely many such singularities are encountered, the weak flow (REF ) will go through the surgeries and finally weakly converge to the generalized Kähler-Einstein metric on $X_{can}.$ If $0<\\kappa (X)<n,$ assuming the abundance conjecture holds, the morphism $\\Phi _{|lK_{X}|}$ will induce a fibration of $X$ over a minimal $\\kappa (X)$ -dimensional variety $X_{can}:=\\text{Proj R}$ where the canonical class $K_{X_{\\eta }}$ of the generic fiber $X_{\\eta }$ is numerically trivial.", "As a special case, in [32] Song-Tian considered the case when $n=2,\\kappa (X)=1,$ i.e., when $X$ is a nonsingular surface which is a torus fibration over an elliptic curve, the latter of which can be thought of as $X_{can}.$ They proved that given any initial metric $\\omega _{0}$ , the normalized Kähler-Ricci flow (REF ) has a unique solution on $[0,+\\infty )$ and converges to the generalized Kähler-Einstein metric on the regular part of $X_{can}$ in the $C^{1,1}$ -sense, where the generalized Kähler-Einstein metric $\\omega _{GKE}$ on the regular part of $X_{can}$ satisfies $Ric(\\omega _{GKE})=-\\omega _{GKE}+\\omega _{WP}$ with a Weil-Petersson metric $\\omega _{WP}$ which is generated by the deformation of the torus fibers.", "Furthermore, in [33] they generalized their work to $n$ -dimensional nonsingular projective varieties with any $\\kappa (X)\\in
(0,n)$ , one difference being that the convergence will only be in the current sense.", "If $\\kappa (X)=1,$ recently in [40] Tian-Z.L.Zhang proved that the convergence is in fact in the Gromov-Hausdorff sense.", "In summary, we observe that even starting from a nonsingular variety, singularities of the Kähler-Ricci flow may still develop in finite time, at which point algebraic surgeries are encountered.", "To study the analytic minimal model program, we need to define the Kähler-Ricci flow on singular varieties.", "As in [2], [13], [34], a reasonable background metric must first be defined on such singular varieties.", "On a normal variety $X$ , a local reference metric can always be defined, as in [2], since any neighbourhood can be considered as an analytic set in an ambient complex Euclidean space, on which the restriction of an ambient Kähler metric can be taken.", "On a $\\mathbb {Q}$ -factorial projective variety $X$ with a big and semi-ample divisor $H$ , a global metric always exists as in [34], since we have a birational morphism $\\Phi _{|mH|}: X\\rightarrow \\mathbb {CP}^{N_{m}}$ for a sufficiently large integer $m$ , which in turn induces a current $\\omega _{0}:=\\frac{1}{m}\\Phi _{|mH|}^{*}\\omega _{FS}\\in [H]$ on $X$ where $\\omega _{FS}$ is the canonical Fubini-Study metric on $\\mathbb {CP}^{N_{m}}.$ As $H$ is big and semi-ample, by the properties of birational morphisms, $\\omega _{0}$ is a positive current, smooth and non-degenerate in a Zariski open subset of $X.$ So we can think of $\\omega _{0}$ as a metric on the projective variety $X$ , and we can define the class of $\\omega _{0}$ -PSH functions on $X:$ $PSH(X,\\omega _{0}):=\\lbrace \\varphi \\in [-\\infty ,+\\infty )|\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi \\ge 0\\rbrace .$ To define the Kähler-Ricci flow of $\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi $ on the possibly singular variety $X$ for $\\varphi \\in PSH(X,\\omega _{0})$ , we must pull
back and work on a smooth resolution of $X$ as follows.", "By Hironaka's resolution (cf.", "[21], [23]) we have a nonsingular projective variety $X^{\\prime }$ with a birational morphism $\\pi :X^{\\prime }\\rightarrow X$ where the canonical classes of $X$ and $X^{\\prime }$ are related by the adjunction formula: $K_{X^{\\prime }}=\\pi ^{*}K_{X}+\\sum _{j}a_{j}E_{j},$ where $E_{j}$ is an exceptional divisor and $a_{j}$ is the corresponding discrepancy.", "We may then consider the Kähler-Ricci flow (REF ) on $X^{\\prime }$ of the possibly degenerate pullback metric $\\pi ^{*}(\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi )$ on $X^{\\prime }.$ A solution $\\omega (t)$ to this could then be considered as a solution to (REF ) on $X$ provided $\\omega (t)$ restricts to be zero along the fibers of $\\pi $ , and thus “descends\" down to $X$ .", "The above study leads in general to degenerate elliptic or parabolic complex Monge-Ampère equations on the non-singular variety $X^{\\prime }.$ In [13] Eyssidieux-Guedj-Zeriahi generalized Kolodziej's $L^{\\infty }$ -estimate [24] for the complex Monge-Ampère equation to the degenerate case and established the existence of singular Kähler-Einstein metrics with zero or negative Ricci curvature on $\\mathbb {Q}$ -factorial projective varieties with klt singularities.", "In [34] Song-Tian also made use of this crucial estimate to establish the existence of solutions to the Kähler-Ricci flow on $\\mathbb {Q}$ -factorial projective varieties with klt singularities.", "A critical point of those works is that klt singularities, for which every exceptional divisor $E_{j}$ satisfies $a_{j}>-1,$ only result in an $L^{p}$ -integrable volume form in the degenerate Monge-Ampère equation on $X^{\\prime }$ for some $p>1$ , for which the crucial $L^{\\infty }$ -estimate of the potential in [24] holds.", "However, we will see that this integrability fails in the case of log canonical singularities, where $a_{j}\\ge -1.$ As indicated in
[21], [23], in the minimal model program we are concerned mainly with singular varieties with at worst log canonical singularities, as the classification according to discrepancies is invariant under different resolutions, i.e., the properties that all $a_{j}>-1$ and all $a_{j}\\ge -1$ are independent of the resolutions.", "Following [4] on varieties with klt singularities, Birkar [5], Hacon-Xu [20] and Fujino [15], [16] established the minimal model theory for log canonical pairs.", "In particular they established the existence of birational surgeries including blow-downs and flips for log canonical pairs.", "In the analytic aspect, by [2], $\\mathbb {Q}$ -Fano varieties admit at worst klt singularities, so log canonical singularities cannot appear on $\\mathbb {Q}$ -Fano varieties.", "In [6], Berman-Guenancia proved the existence of a Kähler-Einstein metric with negative Ricci curvature on a stable semi-log canonical pair $(X,D)$ .", "Here semi-log canonical means that the twisted canonical class $K_{X}+D$ is ample and $\\mathbb {Q}$ -Cartier, $X$ has only ordinary nodes in codimension 1, and any resolution of this pair satisfies the log canonical condition.", "As in the log canonical case, the $L^{\\infty }$ -estimate in [13] does not hold, and they used instead the variational method developed in [3] to establish the existence of a weak solution.", "In [31], Song also derived the existence of the Kähler-Einstein metric on semi-log canonical pairs by purely PDE methods, where he also proved that the semi-log canonical model can be the limit in the moduli space of negative Kähler-Einstein metrics.", "In this work, we will generalize the results on existence of solutions to the Kähler-Ricci flow in [34] to the case of $\\mathbb {Q}$ -factorial projective varieties with log canonical singularities.", "Our first result is the following: Theorem 1.1 Let $X$ be a $\\mathbb {Q}$ -factorial projective variety with log canonical singularities and $H$ be a big and semi-ample
$\\mathbb {Q}$ -Cartier divisor on $X.$ Suppose $\\Phi _{|mH|}$ defines a birational morphism $X\\rightarrow \\mathbb {CP}^{N_{m}}$ for some large integer $m$ and $\\omega _{0}:=\\frac{1}{m}\\Phi _{|mH|}^{*}\\omega _{FS}\\in [H]$ is a semi-positive current on $X.$ Then given an initial potential function $\\varphi _{0}\\in PSH(X,\\omega _{0})\\bigcap L^{\\infty }_{loc}(X\\setminus X_{lc})$ such that $\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{0}$ is a current with zero Lelong number, there exists a unique maximal weak solution $\\omega (t)$ to the Kähler-Ricci flow (REF ): $\\nonumber \\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\omega &=-Ric(\\omega )\\\\\\omega (0) &= \\omega _0+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{0},\\end{array}\\right.$ on $X\\times [0,T_{0}),$ where $T_{0}:=\\sup \\lbrace t>0|H+tK_{X}\\;is\\; nef\\rbrace .$ In particular, $\\omega (t)$ is a current on $X^{\\prime }$ with zero Lelong number for all $t\\in [0,T_{0})$ , solves (REF ) smoothly on $X_{reg}\\times (0,T_{0})$ and converges to the initial current $\\omega (0)$ in the current sense.", "Remark 1.2 We refer to Definition REF for the precise definition of weak and maximal solutions to (REF ) in the context of Theorem REF above.", "Let us briefly describe the strategy of the proof.", "As in [2], [34] and as described above, we pull back and study the Kähler-Ricci flow on the resolution $X^{\\prime }$ of $X$ and derive a degenerate complex Monge-Ampère flow equation for a family of singular potentials $\\varphi (t)$ on $X^{\\prime }.$ As indicated in [13], we cannot restrict the initial potential $\\varphi (0)$ to be in $L^{\\infty }(X^{\\prime })$ , and more generally must consider currents of zero Lelong number.", "The main difficulty here arises from the exceptional divisors on $X^{\\prime }$ with discrepancy $-1,$ precluding a treatment as in [13], [34].", "We overcome this difficulty as in [11], [12], [17] by
first regularizing $\\varphi (0)$ by a family of smooth bounded potentials on $X^{\\prime }$ , and also by regularizing the degenerate Monge-Ampère flow equation on $X^{\\prime }$ while introducing a new family of background forms which combine Carlson-Griffiths forms [9] and the regularized conical forms of Guenancia-Păun [19] on $X^{\\prime }$ .", "In particular, these background forms will provide a family of smooth complete Kähler metrics with bounded curvature on the open complement of some divisor on $X^{\\prime }$ .", "We will then derive a uniform upper bound for solutions of the regularized equation as well as local lower bounds in the regular region, after which we establish successive local higher order estimates.", "For the uniqueness and maximality of the solution, we will adapt the arguments in [17] to prove the continuity of the solution at time zero in the $L^{1}$ -sense, which will imply the maximality of the weak solution to the weak Kähler-Ricci flow (REF ).", "By Theorem REF , when $X$ is a semi-log canonical model, so that $K_{X}$ is an ample $\\mathbb {Q}$ -Cartier divisor, the solution $\\omega (t)$ to the Kähler-Ricci flow (REF ) exists for $t\\in [0,+\\infty ).$ In fact, the normalized Kähler-Ricci flow (REF ) will also have a longtime solution in this case, and a natural problem is to study the limit behaviour of the normalized flow.", "From [10], [26], in the complete smooth case the normalized Kähler-Ricci flow converges to the complete Kähler-Einstein metric with negative Ricci curvature.", "In the next theorem we show that similar limit behaviour also holds for such semi-log canonical models: Theorem 1.3 Suppose that in Theorem REF we have that $X$ is a semi-log canonical model.", "Then the normalized Kähler-Ricci flow (REF ) has a unique maximal weak solution on $[0,+\\infty )$ with initial condition $\\omega (0)=\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{0}$ .", "Moreover as $t$ tends to infinity $\\omega (t)$ converges to the Kähler-Einstein
current $\\omega _{KE}$ in both the current and $C^{\\infty }_{loc}(X_{reg})$ -senses.", "Moreover $\\omega _{KE}$ is a smooth Kähler-Einstein metric on $X_{reg},$ and a current with zero Lelong number and bounded local potential away from $X_{lc}.$ Remark 1.4 We refer to §5 for the definition of semi-log canonical models.", "Theorem REF can be viewed as a parabolic version of the existence proof of the Kähler-Einstein current on semi-log canonical varieties in [6], [31].", "The next important problem is the behaviour of the Kähler-Ricci flow when $K_{X}$ is not nef.", "In this case the flow will arrive at a singularity at the finite time $T_0$ in Theorem REF .", "As indicated by [5], [15], [16], [20], birational surgeries will occur at time $T_0$ .", "In the context of the analytic minimal model program, as Song-Tian did in the klt case [34], we have the following result asserting that the Kähler-Ricci flow extends through the birational surgeries in the log canonical case: Theorem 1.5 Suppose that in Theorem REF , we have that $T_{0}<\\infty $ and $H_{T_{0}}=H+T_{0}K_{X}$ is $\\mathbb {Q}$ -semi-ample, such that for some large integer $m$ the linear system $|mH_{T_{0}}|$ induces a morphism $\\pi :X\\rightarrow Y.$ Then: If $\\pi :X\\rightarrow Y$ is a divisorial contraction, then there exists a closed semi-positive (1, 1)-current $\\omega _{Y}$ with zero Lelong number on $Y$ such that: The weak Kähler-Ricci flow can be uniquely continued on $Y$ starting with $\\omega _{Y}$ at $t=T_{0}.$ $\\omega (t,\\cdot )$ converges to $\\pi ^{*}\\omega _{Y}$ in $C^{\\infty }(X_{reg}\\setminus Exc(\\pi ))$ as $t\\rightarrow T_{0}^{-}.$ Still denoting the Kähler-Ricci flow starting on $Y$ at $t=T_{0}$ with the initial metric $\\omega _{Y}$ by $\\omega (t,\\cdot ),$ we have that $\\omega (t,\\cdot )$ converges to $\\omega _{Y}$ in $C^{\\infty }(Y_{reg}\\setminus \\pi (Exc(\\pi )))$ as $t\\rightarrow T_{0}^{+}.$ If $\\pi :X\\rightarrow Y$ is a small contraction, i.e.,
$Exc(\\pi )$ has codimension greater than 1, and there exists a flip $\\xymatrix{X \\ar@{.>}[rr]^{\\bar{\\pi }^{-1}} \\ar[dr]_{\\pi }& & X^{+} \\ar[dl]^{\\pi ^{+}} \\\\& Y }$ with the property that $X^{+}_{lc}\\bigcap Exc(\\pi ^{+})=\\varnothing ,$ then there exists a closed semi-positive (1, 1)-current $\\omega _{Y}$ with zero Lelong number on $Y$ such that: $\\omega (t,\\cdot )$ converges to $\\pi ^{*}\\omega _{Y}$ in $C^{\\infty }(X_{reg}\\setminus Exc(\\pi ))$ as $t\\rightarrow T_{0}^{-}.$ The weak Kähler-Ricci flow can be uniquely continued on $X^{+}$ starting with $\\pi ^{+*}\\omega _{Y}$ at $t=T_{0}.$ Denoting the solution still by $\\omega (t,\\cdot ),$ we have that $\\omega (t,\\cdot )$ converges to $\\pi ^{+*}\\omega _{Y}$ in $C^{\\infty }(X^{+}_{reg}\\setminus Exc(\\pi ^{+}))$ as $t\\rightarrow T_{0}^{+}.$ In summary, the weak Kähler-Ricci flow can be uniquely extended through the divisorial contractions and flips on $\\mathbb {Q}$ -factorial projective varieties with log canonical singularities.", "Compared to [34], our main difficulty is the case when the exceptional locus has nonempty intersection with the log canonical locus $X_{lc},$ where the local potential is $-\\infty .$ In fact, we will show that in that case the local potential on the exceptional locus will also attain $-\\infty $ with zero Lelong number.", "After the birational surgeries the Kähler-Ricci flow will be continued with the new log canonical locus, where the initial potential is $-\\infty $ with zero Lelong number.", "The paper is organized as follows.", "In section 2 we establish some necessary preliminaries and definitions needed throughout the paper, and in particular, to consider the Kähler-Ricci flow of a metric $\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{0}$ on a log canonical variety $X$ as in the above theorems.", "In section 3 we formulate the corresponding main degenerate Monge-Ampère flow equation to be solved on the smooth resolution $X^{\\prime }$ of $X$ .", "We then formulate a
regularization of this degenerate flow equation and derive corresponding a priori estimates.", "Then in sections 4, 5, 6 we prove Theorems REF , REF and REF respectively.", "Acknowledgment.", "The authors want to thank Professor Gang Tian for his interest in this work and much encouragement.", "They also want to thank Professor Jian Song for his careful reading of the draft and his helpful advice.", "The last author wants to thank Professor Chenyang Xu, Chi Li and Yuchen Liu for discussions in algebraic geometry.", "He also wants to thank Professor Yuan Yuan for careful discussions on the proof details.", "Finally the second and last authors want to thank BICMR for its hospitality, where part of this work was done during the summer of 2018." ], [ "Divisors and singularities in the minimal model program", "Let us collect some necessary background material for this paper, which mainly comes from [21], [23], [34].", "First we recall the basic Definition 2.1 Given a $\\mathbb {Q}$ -Cartier divisor $D$ on a projective variety $X$ we say $D$ is ample if there exists a positive integer $m$ such that the linear system $|mD|$ induces an embedding of $X$ into $\\mathbb {CP}^{N_{m}};$ we say $D$ is semi-ample if there exists a positive integer $m$ such that the linear system $|mD|$ induces a morphism of $X$ into $\\mathbb {CP}^{N_{m}};$ we say $D$ is effective if $D=\\sum _{i=1}^{k}n_{i}D_{i}$ where the integers $n_{i}\\ge 0$ and the $D_{i}$ are prime divisors; we say $D$ is nef if $D\\cdot C\\ge 0$ for any curve $C$ on $X;$ we say $D$ is big if $dim\\;H^{0}(X,mD)\\sim m^{n}$ as the positive integer $m\\rightarrow +\\infty .$ We will restrict to considering $\\mathbb {Q}$ -$factorial$ projective varieties, which are defined as Definition 2.2 An $n$ -dimensional projective variety $X$ is called $\\mathbb {Q}$ -$factorial$ if every $\\mathbb {Q}$ -divisor is in fact a $\\mathbb {Q}$ -Cartier divisor and $X$ is normal (i.e., $dim(X_{sing})\\le n-2$ ).", "Let $X$ be a $\\mathbb {Q}$ -factorial
projective variety.", "Then Hironaka's resolution theorem (cf.", "[21], [23]) provides a smooth projective variety $X^{\\prime }$ and a birational morphism $\\pi :X^{\\prime }\\rightarrow X$ with the adjunction formula (REF ): $K_{X^{\\prime }}=\\pi ^{*}K_{X}+\\sum _{j}a_{j}E_{j},$ where the $E_{j}$ are exceptional divisors belonging to the exceptional locus $Exc(\\pi )$ with simple normal crossings and the $a_{j}$ are the unique corresponding rational discrepancies.", "In particular, $\\pi (\\bigcup _j E_j )=X_{sing}$ and $\\pi : X^{\\prime }\\setminus (\\bigcup _j E_j) \\rightarrow X_{reg}$ is a biholomorphic map.", "Definition 2.3 Let $X$ be a $\\mathbb {Q}$ -factorial projective variety with a resolution $\\pi :X^{\\prime }\\rightarrow X$ as above.", "We say that $X$ has log canonical singularities if $a_{j}\\ge -1$ for all $j$ ; log terminal singularities if $a_{j}> -1$ for all $j$ ; canonical singularities if $a_{j}\\ge 0$ for all $j$ ; terminal singularities if $a_{j}>0$ for all $j$ .", "For any divisor $E_j$ in (REF ) we say $E_j$ is a log canonical (lc) divisor if $a_{j}=-1$ , $E_j$ is a log terminal (lt) divisor if $-1<a_{j}<0$ , and $E_j$ is a canonical divisor if $a_{j}\\ge 0$ .", "According to [21], given two different resolutions of $X$ as above with log canonical singularities, the classification above is independent of the choice of resolution in the sense that the log canonical divisors of two different resolutions are in one-to-one correspondence, and likewise for the log terminal and canonical divisors.", "In particular, we may make the following Definition 2.4 Let $X$ be a $\\mathbb {Q}$ -factorial projective variety.", "We say $X$ has log canonical singularities if $a_{j}\\ge -1$ for all $j$ in (REF ) and we define the log canonical locus as $X_{lc}:=\\pi (\\bigcup _{a_{i}=-1}E_{i})$ .", "As in [21], [34], we have the following special case of Kodaira's Lemma, which plays a crucial role as in [34], [41], [42]: Lemma 2.5
Given a $\\mathbb {Q}$ -factorial projective variety $X$ with a semi-ample and big $\\mathbb {Q}$ -divisor $H,$ for any resolution $\\pi :X^{\\prime }\\rightarrow X,$ there exists an effective divisor $E$ on $X^{\\prime }$ whose support is contained in $Exc(\\pi )$ , and $d>0$ such that $\\pi ^{*}H-\\delta E$ is ample for any rational number $0<\\delta <d.$" ], [ "PSH functions", "Let $X$ be a $\\mathbb {Q}$ -factorial projective variety with a resolution $\\pi :X^{\\prime }\\rightarrow X$ as above.", "Now we define a global semi-positive (1,1)-form $\\omega _0$ on $X$ so that $\\pi ^* \\omega _0$ is smooth on $X^{\\prime }$ .", "As in [34] there exists a big and semi-ample $\\mathbb {Q}$ -Cartier divisor $H$ on $X$ .", "Thus we obtain a birational morphism $\\Phi _{|mH|}: X\\rightarrow \\mathbb {CP}^{N_{m}}$ for some large integer $m$ and some $N_m$ .", "We define $\\omega _{0}:=\\frac{1}{m}\\Phi _{|mH|}^{*}\\omega _{FS}\\in [H]$ , which is a semi-positive current on $X$ , where $\\omega _{FS}$ is the Fubini-Study metric on $\\mathbb {CP}^{N_{m}}$ .", "In particular, $\\Phi _{|mH|}$ is a holomorphic map on $X_{reg}$ while $\\pi ^* \\omega _{0}$ is a smooth semi-positive closed (1,1)-form on the smooth variety $X^{\\prime }$ .", "We may conveniently define plurisubharmonic functions on $X$ relative to any such form on $X$ as follows Definition 2.6 Let $\\omega $ be a closed (1,1)-form on $X_{reg}$ such that $\\pi ^*\\omega $ extends smoothly to $X^{\\prime }$ .", "We say a function $\\varphi : X\\rightarrow [-\\infty ,+\\infty )$ is $\\omega $ -PSH on $X$ if $u+\\varphi \\circ \\pi $ is a classical plurisubharmonic function in local holomorphic coordinates for any local potential $u$ of $\\pi ^{*}\\omega $ (i.e., $\\sqrt{-1}\\partial \\bar{\\partial }u =\\pi ^{*}\\omega $ ).", "As we mentioned in the introduction, unlike [13], [24], [34], we need to deal with currents with unbounded local potentials.", "On the other hand, we will restrict to considering PSH functions on $X$ with so-called zero
Lelong number as in the following Definition 2.7 Suppose $\\varphi $ is an $\\omega $ -PSH potential function on $X$ and let $E$ be a divisor on $X^{\\prime }$ .", "We say $\\varphi $ , or equivalently $\\omega +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi $ , has zero Lelong numbers along $\\pi (E)$ if for any $\\epsilon >0$ there exists a constant $C_{\\epsilon }$ such that the pull-back $\\pi ^{*}\\varphi $ on $X^{\\prime }$ satisfies $\\pi ^{*}\\varphi \\ge \\epsilon \\log |S|^{2}+C_{\\epsilon }$ where $S$ is a holomorphic section of the holomorphic line bundle associated to $E$ and $|\\cdot |$ is the norm of a Hermitian metric on this line bundle.", "We may now define weak solutions to (REF ) as follows Definition 2.8 Let $\\varphi _{0}$ be an $\\omega _0$ -PSH function on $X$ for some closed (1,1)-form $\\omega _0$ on $X$ which is also smooth on $X_{reg}$ .", "We say a family of closed (1,1)-forms $\\omega (t)$ is a weak solution to the Kähler-Ricci flow $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\omega &=-Ric(\\omega )\\\\\\omega (0) &= \\omega _0+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{0},\\end{array}\\right.$ on $X\\times [0, T)$ if $ \\omega (t)$ restricts to a smooth solution to (REF ) on $X_{reg} \\times (0, T)$ .", "$\\omega (t)=\\omega _0 -t \\eta +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi (t)$ on $X \\times (0,T)$ where $\\eta \\in [K_X]\\bigcap C^{\\infty }(X_{reg})$ and $\\varphi (t)$ is an $(\\omega _0 -t \\eta )$ -PSH function on $X$ .", "$\\varphi (t)\\rightarrow \\varphi _0$ in $L^1(X)$ as $t\\rightarrow 0$ .", "A weak solution $\\omega (t)=\\omega _0 -t \\eta +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi (t)$ to the Kähler-Ricci flow on $X\\times [0, T)$ above is called $maximal$ if given any other weak solution $\\omega ^{\\prime }(t)=\\omega _0 -t \\eta +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }(t)$ with $\\varphi (0)=\\varphi ^{\\prime }(0)$ , we have $\\varphi (t)\\ge \\varphi ^{\\prime }(t)$ on
$X\\times [0, T)$ .", "We could likewise define a weak solution on $X$ in terms of the resolution $\\pi :X^{\\prime }\\rightarrow X$ as follows.", "In conditions (1)-(3) above we replace $\\omega _0, \\omega (t), X, X_{reg}$ and $\\eta $ respectively with $\\pi ^*\\omega _0, \\omega ^{\\prime }(t), X^{\\prime }, X^{\\prime }\\setminus Exc(\\pi )$ and any smooth representative $\\eta ^{\\prime }$ of $\\pi ^*K_{X}$ on $X^{\\prime }$ , provided we then require the solution $\\omega ^{\\prime }(t)$ to “descend\" to $X$ in the sense that: for each $p\\in X$ , if $\\pi ^*\\omega _0 -t \\eta ^{\\prime } =0$ on $\\pi ^{-1}(p)$ then $\\varphi $ is constant on $\\pi ^{-1}(p)$ .", "With the above definitions and results, we may now summarize, once and for all, the main assumptions and notations we will adopt throughout the paper.", "Assumption 2.9 Let $X$ be a $\\mathbb {Q}$ -factorial projective variety with log canonical singularities and let $H$ be a big and semi-ample $\\mathbb {Q}$ -Cartier divisor on $X$ .", "Consider the maps $X^{\\prime } \\xrightarrow{\\pi } X \\xrightarrow{\\Phi } \\mathbb {CP}^{N}$ where $\\pi $ is a resolution of $X$ and $\\Phi $ is a birational morphism, induced by $H,$ for some $N$ .", "In particular, the map $\\Phi \\circ \\pi $ is holomorphic and non-degenerate away from the exceptional locus $Exc(\\pi )$ .", "Let $\\theta $ be a smooth Kähler form and $\\Omega ^{\\prime }$ be a smooth volume form on $X^{\\prime }$ .", "Then we make the following assumptions and definitions: $Exc(\\pi )$ is the union of simple normal crossing log canonical, log terminal and canonical divisors on $X^{\\prime }$ which we respectively denote by $D_i, E_j, F_k$ .", "In particular, we have $\\pi ^* K_X= K_{X^{\\prime }}+\\sum _{i}D_{i}+\\sum _{j}b_{j}E_{j}-\\sum _{k}a_{k}F_{k},$ where $0\\le a_{k}$ and $0<b_{j}<1$ for all $k, j$ .", "$\\widetilde{E}$ and $d>0$ are as in Lemma REF .", "In particular, the support of $\\widetilde{E}$ is contained in $Exc(\\pi )$ and for
all $0<\\delta <d$ we have $\\pi ^* \\omega _0 +\\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log |\\widetilde{S}|^2 \\ge c_{\\delta } \\theta $ for some $c_{\\delta } >0$ .", "$S_i, S_j, S_k, \\tilde{S}$ will respectively denote holomorphic sections of the line bundles associated with $D_i, E_j, F_k, \\tilde{E}$ .", "$|S_i|, |S_j| , |S_k|, |\\widetilde{S}|$ will respectively denote lengths relative to Hermitian metrics $h_i, h_j, h_k, \\tilde{h}$ .", "$\\Theta _i, \\Theta _j, \\Theta _k, \\tilde{\\Theta }$ will respectively denote the curvature forms of $h_i, h_j, h_k, \\tilde{h}$ .", "$T_{0}:=\\sup \\lbrace t>0| \\pi ^* H+t(K_{X^{\\prime }} +\\sum _{i}\\Theta _{i}+\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k} ) \\;is\\; nef\\rbrace =\\sup \\lbrace t>0| H+tK_{X} \\;is\\; nef\\rbrace $ $\\omega _{0}:=\\Phi ^{*}\\omega _{FS}\\in [H]$ on $X$ where $\\omega _{FS}$ is the Fubini-Study metric on $\\mathbb {CP}^{N}$ and the smooth semi-positive form $\\pi ^* \\omega _{0}\\in [\\pi ^{*}H] $ satisfies $\\pi ^* \\omega _{0}\\ge |\\widetilde{S}|^{c} \\theta $ for some $c>0$ ."
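The dichotomy in Assumption 2.9 between the log terminal coefficients $0<b_{j}<1$ and the log canonical coefficient 1 can be made concrete by the following elementary local integrability computation, a sketch in a single coordinate chart recorded here for the reader's convenience: near a divisor $\\lbrace z_{1}=0\\rbrace $ with discrepancy $a$ , the relevant volume form behaves like $|z_{1}|^{2a}$ .

```latex
% Polar coordinates z_1 = r e^{i\vartheta}, so dV \sim r\,dr\,d\vartheta:
\int_{|z_{1}|<1}|z_{1}|^{2a}\,dV
   =2\pi \int_{0}^{1}r^{2a+1}\,dr<\infty
   \quad \Longleftrightarrow \quad a>-1.
% klt case (a > -1): choosing p > 1 with ap > -1 even gives
% L^p-integrability, as required for the estimates of [13], [24].
% lc case (a = -1): \int_0^1 r^{-1}\,dr diverges, but the Poincare-type
% correction factor \log^2 |z_1|^2 restores integrability:
\int_{|z_{1}|<1/2}\frac{dV}{|z_{1}|^{2}\log ^{2}|z_{1}|^{2}}
   =2\pi \int_{0}^{1/2}\frac{dr}{r\log ^{2}r^{2}}
   =\frac{\pi }{2\log 2}<\infty .
```

This is precisely why the factors $|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}$ attached to the lc divisors, and the Carlson-Griffiths forms built from them, appear in the regularized equation of the next section.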
], [ "A degenerate parabolic Monge-Ampère equation", "To transform the Kähler-Ricci flow (REF ) on $X$ to a complex Monge-Ampère flow equation, we consider a corresponding degenerate complex Monge-Ampère flow equation on the resolution $X^{\\prime }$ as in [2], [34].", "Define the following smooth family of closed (1,1)-forms on $X^{\\prime }$ $\\omega _{t}:=\\pi ^{*}\\omega _{0}+t\\chi =\\pi ^{*}\\omega _{0}+t(-Ric(\\Omega ^{\\prime })+\\sum _{i}\\Theta _{i}+\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k})$ Now for a given family of $\\omega _{t}$ -plurisubharmonic functions $\\varphi (t)$ on $X^{\\prime } \\times [0, T)$ , define the family of forms $\\omega (t):=\\omega _{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi $ on $X^{\\prime } \\times [0, T)$ .", "By a straightforward computation using the Poincaré-Lelong formula, it follows that $\\omega (t)$ is a weak solution to the Kähler-Ricci flow as in Definition REF on $(X^{\\prime }\\setminus Exc(\\pi )) \\times (0, T)$ provided $\\varphi $ is a smooth solution to the equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi &=\\displaystyle \\log \\frac{(\\omega _{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi )^{n}\\prod _{i}|S_{i}|_{i}^{2}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}\\\\\\varphi (0) &= \\pi ^{*}\\varphi _{0},\\end{array}\\right.$ on $(X^{\\prime }\\setminus Exc(\\pi )) \\times (0, T)$ with $\\varphi (t)\\rightarrow \\pi ^* \\varphi _0$ in $L^1(X^{\\prime })$ as $t\\rightarrow 0$ .", "In particular, if $\\varphi $ solves (REF ) in the above sense and $\\omega (t)$ descends to $X$ , then we obtain a weak solution to (REF ) on $X \\times [0, T)$ as in Definition REF .", "Note that as $T_{0}:=\\sup \\lbrace t>0| \\pi ^* H+t\\pi ^{*}K_{X}\\;is\\; nef\\rbrace $ as in Assumption REF , the adjunction formula in Assumption REF (1) implies that the smooth background form $\\omega _{t}$ is nef on $X^{\\prime }$ for all
$t\\in [0, T_0)$ , and it follows that for any $T^{\\prime }<T_0$ , we may find $\\psi _{T^{\\prime }}\\in C^{\\infty }(X^{\\prime })$ such that $\\omega _{t}+ \\sqrt{-1}\\partial \\bar{\\partial }\\psi _{T^{\\prime }} \\ge 0$ .", "As mentioned in the introduction however, due to the existence of lc divisors in the resolution (REF ), we cannot make direct use of the estimates in [13], [34] to construct solutions to (REF ).", "We will instead establish the a priori estimates for (REF ) through an approximation process in the next section, involving the use of both approximate conical Kähler metrics and Carlson-Griffiths metrics on $X^{\\prime }$ .", "Solving (REF ) for a zero Lelong number solution $\\varphi (t)$ on $(X^{\\prime }\\setminus E) \\times [0, T_0)$ can be regarded as the chief analytic goal of this paper." ], [ "An approximate equation (existence)", "To study solutions to (REF ) on $X^{\\prime },$ we need to overcome the singularities in the equation corresponding to the lc divisors $D_{i}$ and the lt divisors $E_{j}$ .", "We will do this by perturbing these singular terms in (REF ) to arrive at an approximate equation which is known to have a solution on $X^{\\prime }\\times [0, T_0)$ .", "We begin with the following lemmas, which will be used in this perturbative process.", "The following approximation lemma will be used to deal with the singularities in the initial potential $\\pi ^* \\varphi _0$ .", "Lemma 3.1 ([12], [7]) Given a smooth semi-positive $(1,1)$ -form $\\omega _{0}$ on $X^{\\prime }$ , for any $\\omega _{0}$ -PSH function $\\varphi ,$ there exists a decreasing sequence of $\\omega _{0}$ -PSH functions $\\varphi _{l}$ which are smooth on $X^{\\prime }$ and satisfy $\\varphi _{l}\\searrow \\varphi $ and $\\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{l}\\rightarrow \\omega _{0}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi $ in the current sense on $X^{\\prime }$ .", "We will use the following approximate conical Kähler forms to handle the
singularities in (REF ) around the log terminal divisors $E_j$ .", "Lemma 3.2 ([19]) Define the function $\mathcal {F}(t,\beta ,\epsilon ):=\frac{1}{\beta }\int _{0}^{t}\frac{(r+\epsilon )^{\beta }-\epsilon ^{\beta }}{r}dr.$ Given a Kähler form $\theta $ on $X^{\prime }$ and a Hermitian metric $h_{j}$ on the line bundle associated to $E_{j}$ , there exists $\eta >0$ such that for all $\epsilon _j$ sufficiently small, $\theta + \eta \sqrt{-1}\partial \bar{\partial }\mathcal {F}(|S_{j}|_{j}^{2},\beta ,\epsilon _{j}^{2})$ is a Kähler form on $X^{\prime }$ and is uniformly (over $\epsilon _j$ ) equivalent to the local model $\sqrt{-1} \sum _{j=2}^{n} dz^j \wedge dz^{\bar{j}} +\sqrt{-1}\frac{dz_{1}\wedge d\bar{z}_{1}}{(|z_{1}|^{2}+\epsilon _{j}^{2})^{1-\beta }}$ in local holomorphic coordinates where $E_j=\lbrace z_1=0\rbrace $ .", "The singularities in (REF ) around the log canonical divisors $D_{i}$ will be dealt with through the use of Carlson-Griffiths forms, for which we have the following lemma (see [9] or [18]).", "Lemma 3.3 ([9]) Given a Kähler form $\theta $ on $X^{\prime }$ and Hermitian metrics $h_{i}$ on the line bundles associated to the $D_{i}$ , we may scale the $h_{i}$ so that the Carlson-Griffiths type form $\begin{split}\widehat{\omega }_{\theta , h}:=&\theta - \sqrt{-1}\partial \bar{\partial }(\log \log ^2 \Vert S\Vert ^{2}_{h})\\&=\theta -2\frac{\sqrt{-1}\partial \bar{\partial }\log \Vert S\Vert ^{2}_{h}}{\log \Vert S\Vert ^{2}_{h}}+2\sqrt{-1} \frac{\partial \log \Vert S\Vert ^{2}_{h}}{\log \Vert S\Vert ^{2}_{h}}\wedge \frac{\bar{\partial } \log \Vert S\Vert ^{2}_{h}}{\log \Vert S\Vert ^{2}_{h}}\\\end{split}$ satisfies: $\hat{ \omega }_{\theta , h}$ is a complete Kähler metric on $X^{\prime } \setminus \bigcup _i D_i $ , and for any $D_{i}$ it is equivalent to the local model $ \sqrt{-1}\sum _{j=2}^{n} dz^j \wedge dz^{\bar{j}}+\sqrt{-1} \displaystyle \frac{ dz^1 \wedge dz^{\bar{1}}}{|z^1|^2 \log ^2 |z^1|^2} $ in local holomorphic coordinates 
around any point $p\in D_{i}$ , where $D_{i}=\lbrace z^1=0\rbrace $ .", "$\hat{ \omega }_{\theta , h}$ has bounded geometry of infinite order.", "$-\log \log ^2 \Vert S\Vert ^{2}_{h}$ is bounded above and in $L^1(X^{\prime })$ .", "$\log \frac{\widehat{\omega }_{\theta , h}^{n}\Vert S\Vert _{h}^{2}\log ^{2}\Vert S\Vert _{h}^{2}}{\theta ^n}$ is bounded on $X^{\prime }\setminus \bigcup _i D_i $ .", "In particular, since $-\log \log ^2 \Vert S\Vert ^{2}_{h}$ is bounded above and in $L^1(X^{\prime })$ , $\widehat{\omega }_{\theta , h}$ is a well-defined current on $X^{\prime }$ (see for example [26] (§8, example 8.15)).", "Now we may write our approximation of (REF ).", "For any positive integer $l$ , and real numbers $\eta , u, v, \epsilon _j, \epsilon _k >0$ , consider the equation: $\left\lbrace \begin{array}{ll}\displaystyle \frac{\partial }{\partial t}\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }} &=\displaystyle \log \frac{(\omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j}}+\sqrt{-1}\partial \bar{\partial }\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }})^{n}\prod _{i}|S_{i}|_{i}^{2}\log ^{2}|S_{i}|_{i}^{2}\prod _{j}(|S_{j}|_{j}^{2}+\epsilon _{j}^{2})^{b_{j}}}{\Omega ^{\prime }\prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}}\\\;\\\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}(0) &= \varphi _{l,0}-\eta \sum _{j}\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\epsilon _{j}^{2}).\end{array}\right.$ where $ \nonumber \omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j}}:&=\pi ^{*}\omega _{0}+u\theta +t\chi -(t+v)\sum _{i}\sqrt{-1}\partial \bar{\partial }\log \log ^{2}|S_{i}|_{i}^{2}\nonumber \\\nonumber &+\eta \sum _{j}\sqrt{-1}\partial \bar{\partial }\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\epsilon _{j}^{2}),$ Here $\varphi _{l,0}$ is the non-increasing sequence of $\omega _{0}$ -PSH functions given by Lemma REF , where $\varphi _0$ is the initial condition for (REF ) and $\eta >0$ is chosen as in Lemma REF relative to $\theta $ .", "We suppress the dependence of 
$\\varphi $ on $\\eta $ as this parameter will be fixed at some point in our arguments, and we will not need let $\\eta $ pass to any limits.", "This is partially related to the fact that the initial form $\\omega ^{\\prime }_{0,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }} $ is independent of $\\eta $ and the following remark which explains the sense in which equation (REF ) is a perturbation of (REF ).", "Remark 3.4 If a family of solutions $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ to (REF ) converges locally smoothly to a limit $\\varphi ^{\\prime }\\in C^{\\infty }(X^{\\prime } \\setminus \\widetilde{E} \\times [0, T_0))$ , as we let $l\\rightarrow \\infty $ and $u,v,\\tilde{\\epsilon }, \\rightarrow 0$ , then $\\varphi ^{\\prime }$ will solve the equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi ^{\\prime } &=\\displaystyle \\log \\frac{(\\omega ^{\\prime }_{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime })^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2})^{a_{k}}}\\\\\\;\\\\\\varphi ^{\\prime }(0) &= \\varphi _{l,0}-\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).\\end{array}\\right.$ on $(X^{\\prime } \\setminus \\widetilde{E} \\times [0, T_0))$ where $ \\nonumber \\omega ^{\\prime }_{t}:&=\\pi ^{*}\\omega _{0}+t\\chi -t\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}),$ In particular, $\\varphi =\\varphi ^{\\prime } +\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t \\log \\log ^{2}|S_{i}|_{i}^{2}$ will solve (REF ) on $X^{\\prime } \\setminus \\widetilde{E} \\times [0, T_0)) $ .", "Roughly speaking, we have perturbed so that the corresponding background form $\\omega ^{\\prime 
}_{t,u,v,\\tilde{\\epsilon }_{j}}$ will be equivalent to a Carlson-Grifiths form around the log canonical divisors $D_j$ , and will be approximately conical near the log terminal divisors $E_j$ .", "This will essentially allow us to combine the techniques from [6], [11], [13], [26] with those in [34] in our study of (REF ).", "In particular, we have the following existence theorem essentially due to [26].", "Theorem 3.5 Suppose $\\omega ^{\\prime }_{0,u,v,\\tilde{\\epsilon }_{j}} + \\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0)$ is Kähler  on $X^{\\prime }$ .", "Then (REF ) has a solution $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\in C^{\\infty }(X^{\\prime } \\setminus \\bigcup _{i}D_{i}\\times [0, T_0)) \\bigcap L^{\\infty }(X^{\\prime } \\setminus \\bigcup _{i}D_{i}\\times [0, T_0)) $ .", "Moreover, the family of Kähler metrics $\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ is equivalent to a Carlson-Grifiths form on $X^{\\prime } \\setminus \\bigcup _{i}D_{i}$ , and has bounded curvature, for all $t\\in [0, T_0)$ .", "Consider the Carlson Grifiths metric on $X^{\\prime }$ given by $\\begin{split}\\widetilde{\\omega } &=\\omega ^{\\prime }_{0,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j}}(0)\\\\&= \\pi ^{*}\\omega _{0}+u\\theta -v\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l, 0}\\\\\\end{split}$ Then $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j}}$ solves (REF ) exactly when $\\psi $ solves $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\psi &=\\displaystyle \\log \\frac{(\\sigma _t+\\sqrt{-1}\\partial 
\\bar{\\partial }\\psi )^{n}}{\\widetilde{\\omega }^n} \\\\\\;\\\\\\psi (0) &= 0\\end{array}\\right.$ where $ \\nonumber \\sigma _t:= \\widetilde{\\omega }-t Rc( \\widetilde{\\omega }) +t\\left(\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k}-\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{\\prod _j (|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_j}}{\\prod _k (|S_{k}|_{k}^{2}+\\epsilon _k^2)^{a_k} }\\right)$ Now as $\\widetilde{\\omega }$ is complete on $X^{\\prime }$ with bounded covariant derivatives of curvature, the proof of the main theorem in [26] implies that (REF ) has a solution on $X^{\\prime }\\setminus {\\bigcup _i D_i} \\times [0, T_0)$ such that $\\sigma _t+\\sqrt{-1}\\partial \\bar{\\partial }\\psi $ is a Carlson-Grifiths metric on $X^{\\prime }$ for each $t$ , provided that for every $T<T_0$ we have $\\sigma _T \\ge c\\widetilde{\\omega } + \\sqrt{-1}\\partial \\bar{\\partial }F$ for some $c$ and smooth $F$ which is bounded on $X^{\\prime }$ along with all covariant derivatives relative to $\\widetilde{\\omega }$ .", "Now for any $T$ we may use Lemma 8.6 in [26] to calculate the Ricci form of a Carlson-Griffiths metric as in the following $\\begin{split}\\sigma _T=& \\widetilde{\\omega }-T Rc( \\widetilde{\\omega })\\\\ &+T\\left(\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k}-\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{\\prod _j (|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_j}}{\\prod _k (|S_{k}|_{k}^{2}+\\epsilon _k^2)^{a_k} }\\right)\\\\= & \\widetilde{\\omega }-T (Rc(\\theta )+ \\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+\\sum _{i}\\Theta _{i} + \\sqrt{-1}\\partial \\bar{\\partial }H) \\\\&+T\\left(\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k}-\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{\\prod _j (|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_j}}{\\prod _k (|S_{k}|_{k}^{2}+\\epsilon _k^2)^{a_k} }\\right)\\\\= & \\widetilde{\\omega }-T \\left(Rc(\\theta )+\\sum _{i}\\Theta _{i}+ \\sum _{j}b_{j}\\Theta 
_{j}-\\sum _{k}a_{k}\\Theta _{k}\\right)-T \\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2} \\\\\\end{split}$ $\\nonumber \\begin{split}&-T\\sqrt{-1}\\partial \\bar{\\partial }\\left(\\log \\frac{\\prod _j (|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_j}}{\\prod _k (|S_{k}|_{k}^{2}+\\epsilon _k^2)^{a_k} } + H\\right)\\\\=:& I + II + III+ IV\\end{split}$ where $H$ is bounded on $X^{\\prime }$ along with all covariant derivatives relative to $\\widetilde{\\omega }$ .", "It follows from the definition of $T_0$ in Assumption REF and (REF ) that if $T<T_0$ , then $I+II \\ge c\\widetilde{\\omega } + \\sqrt{-1}\\partial \\bar{\\partial }F$ for some $c$ and smooth $F$ which is bounded on $X^{\\prime }$ along with all covariant derivatives relative to $\\widetilde{\\omega }$ .", "Thus in turn, by the above we conclude that $\\sigma _T\\ge c^{\\prime }\\widetilde{\\omega } + \\sqrt{-1}\\partial \\bar{\\partial }F^{\\prime }$ for some $c^{\\prime }$ and smooth $F^{\\prime }$ which is bounded on $X^{\\prime }$ along with all covariant derivatives relative to $\\widetilde{\\omega }$ .", "We conclude that (REF ), and thus (REF ) has a solution on $X^{\\prime }\\setminus {\\bigcup _i D_i} \\times [0, T_0)$ with the properties stated in the Theorem." 
], [ "An approximate equation (a priori estimates)", "In this subsection we fix Hermitian metrics $h_i, h_j, h_k$ on $D_i, E_j, F_k$ and a solution $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\in C^{\\infty }(X^{\\prime } \\setminus \\bigcup _{i}D_{i}\\times [0, T_0))$ to (REF ) as in Theorem REF for some values of the parameters $l,u,v,\\tilde{\\epsilon }>0$ and $\\eta >0$ .", "Our goal in this subsection will be to derive a priori estimates for $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ which are local in $C^{\\infty }(X^{\\prime } \\setminus \\widetilde{E}\\times (0, T_0))$ , and independent of the parameters $l,u,v,\\tilde{\\epsilon }$ .", "For the remainder of the subsection we will fix some $T^{\\prime }\\in (0, T_0)$ .", "A simple consequence of Assumption REF (2) and (4) is that there exists $c>0$ such that the class $[\\pi ^{*}\\omega _{0}+u\\theta +t\\chi +\\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log |S_{\\widetilde{E}}|^{2}]$ is Kähler on $X^{\\prime }$ for all $t\\in [0, T^{\\prime }]$ and $\\delta \\le c$ .", "We will also let $\\delta \\in (0, c/(T^{\\prime }+1))$ be fixed throughout the subsection.", "We now make the following assumption throughout the subsection which will be needed in some of our estimates.", "Assumption 3.6 Given the fixed constants $T^{\\prime }, \\delta $ above, the forms $\\pi ^{*}\\omega _{0}+u\\theta +t\\chi + t \\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log ^{2}|S_{\\widetilde{E}}|^{2}$ $\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}} + \\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log ^{2}|S_{\\widetilde{E}}|^{2}$ are Kähler  on $X^{\\prime }$ and $X^{\\prime }\\setminus \\widetilde{E}$ respectively for all $t\\in [0, T^{\\prime }]$ .", "In particular, for any fixed $i, j$ the second form above is equivalent to the local model $\\begin{split}\\sum _{k\\ge 3}dz^k\\wedge d\\overline{z}^k + (t+v)\\sqrt{-1}(\\frac{dz^1\\wedge d\\overline{z}^1}{|z^1|_{i}^{2}\\log ^{2}|z^1|_{i}^{2}}+\\frac{\\Theta 
_{i}}{\\log |z^1|_{i}^{2}})+\\eta \\sqrt{-1}\\frac{dz_2\\wedge d\\bar{z}_2}{(|z^2|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}\\\\\\end{split}$ unifromly (over $t, l, u,v,\\tilde{\\epsilon }_{j}$ ) in some local holomorphic coordinate around any point in which $D_i=\\lbrace z^1=0\\rbrace $ and $E_j=\\lbrace z_2=0\\rbrace $ .", "By the remark above Assumption REF and Lemma REF , the conditions in the assumption will hold given some choice of the Hermitian metrics $h_i, h_j, h_k$ and the constant $\\eta $ .", "Thus for the purpose of deriving estimates for $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ which are local in $X^{\\prime } \\setminus \\widetilde{E}\\times (0, T_0)$ , we may make the above assumption without any loss of generality in view of the following remarks (see also remark REF ).", "Remark 3.7 Changing from $\\eta =a$ to $\\eta =b$ in (REF ), while fixing $l, u,v, \\tilde{\\epsilon }$ , corresponds simply to adding $(a-b)\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})$ to the solution.", "Remark 3.8 Given Hermitian metrics $h_i, h_j, h_k$ on $D_i, E_j, F_k$ respectively, a smooth volume form $\\Omega ^{\\prime }$ on $X^{\\prime }$ , denote by $\\varphi ^{\\prime h_i, h_j, h_k, \\Omega ^{\\prime }, \\eta }$ the corresponding solution to (REF ) (for some fixed set of parameters $l, u,v, \\tilde{\\epsilon }, \\eta $ which we suppress in the notation).", "Then using this notation, for another set of Hermitian metrics $\\widetilde{h}_i,\\widetilde{h}_j, \\widetilde{h}_k$ we have $\\varphi ^{\\prime h_i, h_j, h_k, \\Omega ^{\\prime }}=\\varphi ^{\\prime h_i, \\widetilde{h}_j, \\widetilde{h}_k, \\widetilde{\\Omega }^{\\prime }}=\\varphi ^{\\prime \\widetilde{h}_i, \\widetilde{h}_j, \\widetilde{h}_k, \\widetilde{\\Omega }^{\\prime }}+\\psi $ for some smooth volume form $\\widetilde{\\Omega }^{\\prime }$ on $X^{\\prime }$ and smooth bounded function $\\psi $ on $X^{\\prime }\\setminus \\widetilde{E} \\times [0, T_0)$ ." 
], [ "Upper bound on $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$", "We begin with Lemma 3.9 there exists a constant $C$ depending only on $T^{\\prime }$ such that on $X^{\\prime } \\setminus (\\bigcup _i D_i) \\times [0, T^{\\prime }]$ we have $\\sup \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le C-t\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}).$ Consider the function $\\phi _{l,u,v,\\tilde{\\epsilon }}(t):=\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(t)+t\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})$ which satisfies the equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\phi _{l,u,v,\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{(\\omega ^{\\prime \\prime }_{t,u,v,\\tilde{\\epsilon }}+\\sqrt{-1}\\partial \\bar{\\partial }\\phi _{l,u,v,\\tilde{\\epsilon }})^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }}\\\\\\;\\\\\\phi _{l,u,v,\\tilde{\\epsilon }}(0) &= \\varphi _{l,0}-\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}),\\end{array}\\right.$ where $\\omega ^{\\prime \\prime }_{t,u,v,\\tilde{\\epsilon }}=&\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-t\\sum _{k}a_{k}\\sqrt{-1}\\partial \\bar{\\partial }\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})\\\\=&\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+t\\sum _{k}a_{k}\\sqrt{-1}\\left(\\frac{|S_{k}|_{k}^{2}\\Theta _{k}}{|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}}-\\frac{DS_{k}\\wedge \\overline{DS}_{k}}{(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{2}}\\right)\\\\\\le &\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+Ct\\theta ,$ for some uniform constant $C>0$ depending only on $T^{\\prime }$ .", "Here the inequality holds because all the curvature forms of the exceptional divisors are smooth on $X^{\\prime }$ .", "By (REF ) and applying Omori-Yau's maximum principle to (REF ) we have the upper bound $\\sup \\phi 
_{l,u,v,\\tilde{\\epsilon }}\\le \\phi _{l,u,v,\\tilde{\\epsilon }}(0)+C\\max _{n_{1},n_{2}}\\int _{0}^{t}\\log (v+s)^{n_{1}}(1+s)^{n_{2}}ds\\le C(t),$ where $n_{1}\\in [0,n]$ represents the number of irreducible log canonical divisors at any point of $X^{\\prime }$ which may not be a constant and $n_{2}=n-n_{1}.$ This estimate implies the upper bound in the Lemma.", "Using Lemma REF we now derive a uniform upper bound as in the following Theorem 3.10 There exists a constant $C$ depending only on $T^{\\prime }$ such that on $X^{\\prime } \\setminus (\\bigcup _i D_i) \\times [0, T^{\\prime }]$ we have $\\sup \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le C.$ First we observe from (REF ) that it suffices to conclude an upper bound in a neighbourhood of the canonical divisors $\\bigcup _k F_{k}.$ Let us cover $\\bigcup _k F_{k}.$ by finitely many coordinate charts $V_{\\alpha }$ such that the exceptional divisors correspond to coordinate hyperplanes in each $V_{\\alpha }$ .", "We will derive a uniform upper bound for $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ in each compliment $V_{\\alpha }\\setminus \\bigcup _i D_{i}.$ We may suitably shrink the charts such that in each $V_{\\alpha }$ we have $\\omega ^{\\prime }_{u,t}:=\\pi ^{*}\\omega _{0}+u\\theta +t\\chi =\\sqrt{-1}\\partial \\bar{\\partial }\\Phi _{\\alpha }$ smooth function $\\Phi _{\\alpha }$ which is uniformly bounded by a constant $C_{0}$ depending only on $T^{\\prime }$ .", "Moreover we may require that the boundary of each chart has a shape of the product of the polydisk $\\mathbb {D}^{n^{\\prime }}:=\\lbrace |z_{k_{1}}|\\le r_{1},|z_{k_{2}}|\\le r_{2},\\cdots ,|z_{k_{n^{\\prime }}}|\\le r_{n^{\\prime }}\\rbrace $ and a bounded region $U^{n-n^{\\prime }}$ in $\\mathbb {C}^{n-n^{\\prime }},$ where $z_{k_{i}}$ correspond to those canonical divisors $F_{k_{i}}$ which has a simple normal crossing inside the chart.", "Fix one such chart $V_{\\alpha }$ .", "It suffices to prove the following 
claim: $\\sup _{V_{\\alpha }}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le \\sup _{\\mathbb {T}^{n^{\\prime }}\\times U^{n-n^{\\prime }}}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+2n^{\\prime }C_{0},$ where $\\mathbb {T}^{n^{\\prime }}:=\\lbrace |z_{k_{1}}|=r_{1},|z_{k_{2}}|=r_{2},\\cdots ,|z_{k_{n^{\\prime }}}|=r_{n^{\\prime }}\\rbrace $ is the boundary torus around $\\mathbb {D}^{n^{\\prime }}.$ Suppose the claim is true.", "Now as $F_{k_{1}},\\cdots ,F_{k_{n^{\\prime }}}$ are the only canonical divisors intersecting $V_{\\alpha }$ by assumption, it follows the RHS of (REF ) is uniformly bounded from above by Lemma REF , in which case we see that the theorem follows by the finiteness of the covering $\\lbrace V_{\\alpha }\\rbrace $ of $\\bigcup _k F_{k}.$ Let us prove the claim (REF ) by induction.", "For any point $p\\in V_{\\alpha }\\setminus \\bigcup _{i}D_{i}$ with local coordinates $(z_{k_{1}},\\cdots ,z_{k_{n^{\\prime }}},z^{\\prime }_{n-n^{\\prime }}),$ where $|z_{k_{i}}|\\le r_{i}$ and $z_{i}\\ne 0,$ fix all coordinates except for $z_{k_{1}}$ and apply the maximum principle to the local potential of the metric $\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ on one-dimensional disk $\\mathbb {D}^{1}:=\\lbrace |z_{k_{1}}|\\le r_{1}\\rbrace \\times \\lbrace z^{\\prime }_{n-1}\\rbrace ,$ it follows that $&(\\Phi _{\\alpha }-(t+v)\\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})+\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }})(p)\\\\\\le &\\sup _{\\mathbb {T}^{1}}(\\Phi _{\\alpha }-(t+v)\\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})+\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}),$ where $\\mathbb {T}^{1}$ is the boundary of $\\mathbb {D}^{1}.$ As $z^{\\prime }_{n-1}$ is fixed, it follows by plurisubharmonicity that 
$\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(p)\\le \\sup _{\\mathbb {T}^{1}}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+2C_{0}.$ Similarly for $i$ -torus $\\mathbb {T}^{i}:=\\lbrace |z_{k_{1}}|=r_{1},\\cdots ,|z_{k_{i}}|=r_{i}\\rbrace \\times \\lbrace z^{\\prime }_{n-i}\\rbrace ,$ with $i<n^{\\prime },$ it follows that $\\sup _{\\mathbb {T}^{i}}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le \\sup _{\\mathbb {T}^{i+1}}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+2C_{0}.$ By induction the claim follows." ], [ "Lower bound on $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$", "The uniform upper bound established above already makes the family $\\lbrace \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\rbrace $ a pre-compact set in the class of quasi-PSH functions.", "However we still need a lower bound which not only guarantees the solution will not tend to $-\\infty $ on the whole space, but also controls the behaviour of the solution near the exceptional divisors.", "We will follow the ideas in [11] combined with Tsuji's trick of applying Kodaira's Lemma [34], [41], [42](or Lemma REF in section 2).", "By Definition REF , we can see that the initial potential in (REF ) satisfies $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0)\\ge \\delta (\\sum _{i}\\log |S_{i}|_{i}^{2}+\\sum _{j}\\log |S_{j}|_{j}^{2}+\\sum _{k}\\log |S_{k}|_{k}^{2})+C_{\\delta }.$ for some constant $C_{\\delta }$ .", "Theorem 3.11 There exists a constant $C$ depending on $\\delta $ and $T^{\\prime }$ such that on $(X^{\\prime }\\setminus \\tilde{E})\\times [0,T^{\\prime }],$ we have $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\ge \\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}+C_{\\delta }.$ We adapt the idea from [11].", "By the definition of $\\tilde{E}$ in Assumption REF (REF ) implies $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0)\\ge \\frac{\\delta }{2}\\log |\\tilde{S}_{\\tilde{E}}|^{2}+C_{1}$ for some constant $C_{1}$ depending on $\\delta .$ Now we write: 
$&\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\nonumber \\\\=&\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2} +\\sqrt{-1}\\partial \\bar{\\partial }(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})\\nonumber \\\\:=\\;&\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }+\\sqrt{-1}\\partial \\bar{\\partial }(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}).$ As $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ is a bounded solution to (REF ), we see that for any $l\\ge 1,u,v,\\tilde{\\epsilon }>0$ and $t\\in [0,T^{\\prime }]$ the function $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2} (x, t)\\rightarrow \\infty $ as $x\\rightarrow \\tilde{E}$ .", "Thus its infimum at each time slice can only be attained away from $\\tilde{E},$ hence by the maximum principle it follows that $\\sqrt{-1}\\partial \\bar{\\partial }(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})\\ge 0.$ Put (REF ) into (REF ) and make use of the maximum principle, it follows that $\\frac{\\partial }{\\partial t}\\inf (\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})&\\ge \\log \\frac{\\omega ^{\\prime n}_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}}\\nonumber \\\\&\\ge \\log \\frac{\\omega ^{\\prime n}_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }}-C^{\\prime },$ as $\\prod 
_{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}$ is uniformly bounded from above.", "On the other hand, by Assumption REF we have $\\omega ^{\\prime n}_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }\\ge \\frac{C(c_{\\delta },\\eta )(v+t)^{n}\\Omega ^{\\prime }}{\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}.$ Combining (REF ) and (REF ) gives $\\frac{\\partial }{\\partial t}\\inf (\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})\\ge -C_{1}(c_{\\delta },\\eta )+n\\log (v+t).$ The theorem then follows from integration of the above inequality and the initial lower bound (REF ).", "This theorem informs us that $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ is uniformly locally bounded away from the non-ample locus $\\tilde{E}.$ In the uniqueness section furthermore we will show that the corresponding solution to (REF ) is uniformly locally bounded away from the log canonical locus.", "To establish the existence of the solution to (REF ), or equivalently (REF ), from the compactness of solutions to (REF ), we still need to derive a uniform high order estimates for solutions to (REF ) which we do in the following sub subsections.", "The main idea will be similar to [11], [34]." 
], [ "Upper and lower bounds on $\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ ", "We begin with the following estimate for the time derivative of $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}:$ Lemma 3.12 There exist $C_{1\\delta },C_{2\\delta }>0$ depending only on $\\delta , T^{\\prime }$ such that on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime }],$ we have $n\\log t+\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-C_{1\\delta }\\le \\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le n+\\frac{C_{2\\delta }-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}}{t}.$ Denote the evolving metric in (REF ) as $\\omega (t)$ for simplicity and take the time derivative of (REF ), it follows that $\\frac{\\partial }{\\partial t}\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}=\\Delta _{\\omega }\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+tr_{\\omega }(\\chi -\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}).$ Thus we have $(\\frac{\\partial }{\\partial t}-\\Delta _{\\omega })(t\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-nt)=-tr_{\\omega }(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}(0)-\\delta \\Theta _{\\tilde{E}})\\le 0.$ where the last inequality holds by Assumption REF .", "Note that $H^{+}(p,t):=t\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-nt\\rightarrow -\\infty $ when $p\\in X^{\\prime }\\setminus \\tilde{E}$ approaches $\\tilde{E},$ and thus by the maximum principle we have $\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le n+\\frac{\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}+C_{\\delta }}{t}\\le n+\\frac{C_{2\\delta }-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}}{t},$ where we have used Theorem REF -REF .", "For the 
lower bound, by (REF ) and the facts (REF ) and (REF ), for $A\gg 1$ we have the following inequality $&(\frac{\partial }{\partial t}-\Delta _{\omega })\left(\dot{\varphi }^{\prime }_{l,u,v,\tilde{\epsilon }}+A(\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}-\delta \log |\tilde{S}_{\tilde{E}}|^{2})-n\log t\right)\nonumber \\=\;&tr_{\omega }(\chi -\sum _{i}\sqrt{-1}\partial \bar{\partial }\log \log ^{2}|S_{i}|_{i}^{2})+A\log \frac{\omega ^{n}\prod _{i}|S_{i}|_{i}^{2}\log ^{2}|S_{i}|_{i}^{2}\prod _{j}(|S_{j}|_{j}^{2}+\epsilon _{j}^{2})^{b_{j}}}{\Omega ^{\prime }\prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}}\nonumber \\&+\;Atr_{\omega }(\omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j}}-\delta \Theta _{\tilde{E}})-An-\frac{n}{t}\nonumber \\\ge \;&tr_{\omega }(\chi -\sum _{i}\sqrt{-1}\partial \bar{\partial }\log \log ^{2}|S_{i}|_{i}^{2})-A\sum _{k}a_{k}\log (|S_{k}|_{k}^{2}+\epsilon _{k}^{2})-An-\frac{n}{t}\nonumber \\&+\;A\log \frac{C(c_{\delta },\eta )(v+t)^{n}\omega ^{n}}{\omega ^{\prime n}_{t,u,v,\tilde{\epsilon }_{j},\delta }}+Atr_{\omega }\omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j},\delta }\nonumber \\\ge \;&tr_{\omega }\left((A-1)\omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j},\delta }+(\chi -\sum _{i}\sqrt{-1}\partial \bar{\partial }\log \log ^{2}|S_{i}|_{i}^{2})\right)-\frac{C(A,c_{\delta },\eta )}{t}\nonumber \\\ge \;&\frac{A}{2}tr_{\omega }\omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j},\delta }-\frac{C(A,c_{\delta },\eta )}{t}\ge \frac{An}{2}\left(\frac{\omega ^{\prime n}_{t,u,v,\tilde{\epsilon }_{j},\delta }}{\omega ^{n}}\right)^{1/n}-\frac{C(A,c_{\delta },\eta )}{t},$ where the second inequality follows from elementary properties of the logarithm, and we have also used Assumption REF .", "Recall that $\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}$ itself is a smooth and bounded solution to (REF ), thus 
when $t\\rightarrow 0$ or the point $p\\in X^{\\prime }\\setminus \\tilde{E}$ approaches $\\tilde{E},$ the function $H^{-}(p,t):=\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+A(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-n\\log t\\rightarrow +\\infty .$ Thus its infimum over $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime }]$ will be attained at some $(p_{0},t_{0})$ where $t_0\\ne 0$ and $p_0\\notin \\tilde{E},$ and by the maximum principle it follows from (REF ) that at $(p_{0},t_{0})$ we have $\\omega ^{n}\\ge C^{\\prime }(A,c_{\\delta },\\eta )t^{n}\\omega ^{\\prime n}_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }.$ Thus we have $&\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+A(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-n\\log t\\nonumber \\\\\\ge \\;&\\inf H^{-}(p,t)=H^{-}(p_{0},t_{0})\\nonumber \\\\\\ge \\;&\\log \\frac{\\omega ^{n}}{\\omega ^{\\prime n}_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }}(p_{0},t_{0})+n\\log (v+t_{0})-n\\log t_{0}+C^{\\prime }_{\\delta }\\ge \\; C^{\\prime \\prime }_{\\delta }.$ By Theorem REF -REF we conclude the lower bound estimate of $\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}.$" ], [ "Upper bound on Laplacian of $ \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$", "Next we will conclude the following Laplacian estimate for the solutions: Theorem 3.13 There exist constants $c_{\\delta },C_{\\delta }>0$ depending only on $\\delta $ and $T^{\\prime }$ such that on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime }]$ we have $.c_{\\delta }|\\tilde{S}_{\\tilde{E}}|^{\\frac{2\\delta }{t}}\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}e^{-\\frac{C_{\\delta }}{t}}\\le tr_{\\hat{\\omega }}\\omega \\le \\frac{1}{|\\tilde{S}_{\\tilde{E}}|^{2\\delta /t}}e^{\\frac{C_{\\delta }}{t}}.$ where $\\omega :=\\;\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial 
}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ $\\hat{\\omega }:=\\theta -\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).$ We begin with the following Lemma on the background form $\\hat{\\omega }$ Lemma 3.14 There exist a sufficiently small constant $\\rho \\in (0,1)$ and constants $C_{1},C_{2}>0$ such that $Bisec(\\hat{\\omega })\\ge -(C_{1}\\sqrt{-1}\\partial \\bar{\\partial }\\Psi _{\\rho }+C_{2}\\hat{\\omega })\\otimes \\hat{\\omega },$ where $\\Psi _{\\rho }:=\\sum _{j}\\mathcal {F}(|S_{j}|^{2},\\rho ,\\epsilon _{j}^{2})$ is PSH with $\\mathcal {F}$ defined in (REF ).", "The proof of this lemma is almost identical to [19].", "The only difference is that in [19] the term $-\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}$ is omitted from $\\hat{\\omega }$ thus making it smooth across the log canonical divisor $\\bigcup _i D_i$ , while in our case $\\hat{\\omega }$ above has cusp like singularities at $\\bigcup _i D_i$ .", "However, as $\\hat{\\omega }$ is a standard Carlson-Griffiths form as in Lemma REF , there exist quasi-coordinates near those divisors $D_i$ such that the metric has bounded geometry in these coordinates [22], [39], and making use of the quasi-coordinates we can adapt the construction in [19] to our case and conclude the lemma.", "This estimate highly depends on the properties of $\\hat{\\omega }$ .", "This is similar to the role played by the approximate conic metrics from [19] in the conical Kähler-Ricci flow [25], [28].", "Recall that the approximate Monge-Ampèreflow equation (REF ) corresponds to the twisted Kähler-Ricci flow: $\\frac{\\partial }{\\partial t}\\omega =&-Ric(\\omega )+2\\pi \\sum _{i}[D_{i}]+\\sum _{j}b_{j}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{|S_{j}|_{j}^{2}+\\epsilon _{j}^{2}}{h_{j}}\\nonumber \\\\&-\\sum _{k}a_{k}\\sqrt{-1}\\partial 
\\bar{\\partial }\\log \\frac{|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}}{h_{k}}.$ Thus at any point $p$ , after choosing holomorphic coordinates in which $\\hat{g}_{i\\bar{j}}|_{p}=\\delta _{ij}$ and $g_{i\\bar{j}}|_{p}=\\lambda _{i}\\delta _{ij}$ (correspond to $\\hat{\\omega }$ and $\\omega $ ) the corresponding parabolic Aubin-Yau Inequality (the elliptic version appeared in [1], [44]) takes the form $&(\\frac{\\partial }{\\partial t}-\\Delta )\\log tr_{\\hat{\\omega }}\\omega \\nonumber \\\\ \\le &\\frac{1}{tr_{\\hat{\\omega }}\\omega }\\left(-\\sum _{p,q}\\frac{\\lambda _{p}}{\\lambda _{q}}R_{p\\bar{p}q\\bar{q}}(\\hat{\\omega })+tr_{\\hat{\\omega }}(\\sum _{j}b_{j}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{|S_{j}|_{j}^{2}+\\epsilon _{j}^{2}}{h_{j}}-\\sum _{k}a_{k}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}}{h_{k}})\\right)\\nonumber \\\\=&-\\frac{1}{tr_{\\hat{\\omega }}\\omega }\\sum _{p<q}(\\frac{\\lambda _{p}}{\\lambda _{q}}+\\frac{\\lambda _{q}}{\\lambda _{p}}-2)R_{p\\bar{p}q\\bar{q}}(\\hat{\\omega })\\nonumber \\\\&+\\frac{1}{tr_{\\hat{\\omega }}\\omega }tr_{\\hat{\\omega }}(\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}\\hat{\\omega }^{n}}{\\Omega ^{\\prime }}-Ric(\\Omega ^{\\prime })+\\sum _{j}b_{j}\\Theta _{j})\\nonumber \\\\&-\\frac{1}{tr_{\\hat{\\omega }}\\omega }\\sum _{k}a_{k}tr_{\\hat{\\omega }}(\\frac{\\epsilon _{k}^{2}}{|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}}\\Theta _{k}+\\frac{\\epsilon _{k}^{2}DS_{k}\\wedge \\overline{DS}_{k}}{(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{2}}),$ where $\\Delta $ represents the Laplacian with respect to the evolving metric $\\omega .$ Note that from [19] although the bisectional curvature of $\\hat{\\omega }$ is not uniformly bounded from below, we can still construct a bounded auxiliary function to derive the trace estimate.", "By Lemma REF , the first term in (REF ) can be controlled as following: 
$&-\\frac{1}{tr_{\\hat{\\omega }}\\omega }\\sum _{p<q}(\\frac{\\lambda _{p}}{\\lambda _{q}}+\\frac{\\lambda _{q}}{\\lambda _{p}}-2)R_{p\\bar{p}q\\bar{q}}(\\hat{\\omega })\\nonumber \\\\\\le &\\frac{1}{\\sum _{p}\\lambda _{p}}\\sum _{p,q}\\left(\\frac{\\lambda _{p}}{\\lambda _{q}}(C_{1}\\Psi _{\\rho ,q\\bar{q}}+C_{2})+\\frac{\\lambda _{q}}{\\lambda _{p}}(C_{1}\\Psi _{\\rho ,p\\bar{p}}+C_{2})\\right)\\le C^{\\prime }_{1}\\Delta \\Psi _{\\rho }+C^{\\prime }_{2}tr_{\\omega }\\hat{\\omega }.$ For the second term, we first note that $-Ric(\\Omega ^{\\prime })+\\sum _{j}b_{j}\\Theta _{j}$ is uniformly bounded with respect to $\\hat{\\omega }.$ Next, note that $\\hat{\\omega }$ has similar expansion as (REF ), which implies that $\\sqrt{-1}\\partial \\bar{\\partial }\\log \\frac{\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}\\hat{\\omega }^{n}}{\\Omega ^{\\prime }}&=-\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }(\\log |S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2})+\\sqrt{-1}\\partial \\bar{\\partial }\\log H\\nonumber \\\\&=\\sum _{i}(\\Theta _{i}-\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})+\\sqrt{-1}\\partial \\bar{\\partial }\\log H,$ where by direct computations $H=h_{0}+h_{1}(\\sum _{i}|S_{i}|_{i}^{2}\\log |S_{i}|_{i}^{2}+\\sum _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}})+\\cdots $ where $h_{0}>0,$ and $h_{0},h_{1},\\cdots $ are bounded and smooth.", "Thus the second term of (REF ) is also uniformly bounded with respect to $\\hat{\\omega }.$ Finally, as $\\Theta _{k}$ is uniformly bounded with respect to $\\hat{\\omega },$ the last term obviously is bounded from above with respect to $\\hat{\\omega }.$ Combine those arguments with (REF ) we have $(\\frac{\\partial }{\\partial t}-\\Delta )\\log tr_{\\hat{\\omega }}\\omega \\le C^{\\prime }_{1}\\Delta \\Psi _{\\rho }+C^{\\prime }_{2}tr_{\\omega }\\hat{\\omega }+\\frac{C^{\\prime \\prime }_{2}}{tr_{\\hat{\\omega }}\\omega }\\le C^{\\prime }_{1}\\Delta \\Psi _{\\rho }+C_{3}tr_{\\omega 
}\\hat{\\omega }.$ Now we consider the function $G(p,t):=t\\log tr_{\\hat{\\omega }}\\omega -A(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})+B\\Psi _{\\rho },$ it follows that $(\\frac{\\partial }{\\partial t}-\\Delta )G\\le &\\log tr_{\\hat{\\omega }}\\omega +C_{3}t\\;tr_{\\omega }\\hat{\\omega }+A(-\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+n-tr_{\\omega }(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\delta \\Theta _{\\tilde{E}}))\\nonumber \\\\&+(C^{\\prime }_{1}t-B)\\Delta \\Psi _{\\rho }\\nonumber \\\\\\le &\\log tr_{\\hat{\\omega }}\\omega -A\\log \\frac{\\omega ^{n}}{\\hat{\\omega }^{n}}+A\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})+(C^{\\prime }_{1}t-B)\\Delta \\Psi _{\\rho }\\nonumber \\\\&+tr_{\\omega }(C_{3}t\\hat{\\omega }-A\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta })+C\\nonumber \\\\\\le &tr_{\\omega }((1+C_{3}t)\\hat{\\omega }-A\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta })+(C^{\\prime }_{1}t-B)\\Delta \\Psi _{\\rho }+C(A)$ where the last inequality follows from the property of logarithmic function again.", "Considering that $\\Psi _{\\rho }$ is PSH, for $t\\le T^{\\prime }$ we can choose $B\\ge C^{\\prime }_{1}T^{\\prime }$ so that $(C^{\\prime }_{1}t-B)\\Delta \\Psi _{\\rho }\\le 0.$ On the other hand, by Assumption REF it follows that $\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }\\ge c_{\\delta }\\hat{\\omega }$ for some $c_{\\delta }>0.$ Thus by choosing large enough $A$ we have $(\\frac{\\partial }{\\partial t}-\\Delta )G\\le -tr_{\\omega }\\hat{\\omega }+C_{\\delta }.$ Note that $G$ is bounded from above at $t=0$ by assumptions and tends to $-\\infty $ near $\\tilde{E}$ so we may assume the supremum of $G$ is attained at $(p_{0},t_{0})$ for $p_{0}\\in X^{\\prime }\\setminus \\tilde{E}$ and $t_{0}>0,$ thus by the maximum principle it follows from above that $tr_{\\omega }\\hat{\\omega }(p_{0},t_{0})\\le C_{\\delta }.$ 
Then at $(p_{0},t_{0})$ it follows that $G(p_{0},t_{0})&\le t_{0}\log (tr_{\omega }\hat{\omega })^{n-1}(\frac{\omega ^{n}}{\hat{\omega }^{n}})-A(\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}-\delta \log |\tilde{S}_{\tilde{E}}|^{2})+B\Psi _{\rho }\nonumber \\&\le C_{\delta }+t_{0}(\dot{\varphi }^{\prime }_{l,u,v,\tilde{\epsilon }}+\sum _{k}a_{k}\log (|S_{k}|_{k}^{2}+\epsilon _{k}^{2})+C)\le C_{\delta },$ where the last inequality follows from Lemma REF .", "Thus for general $(p,t)\in (X^{\prime }\setminus \tilde{E})\times (0,T^{\prime }]$ it follows that $G(p,t):=t\log tr_{\hat{\omega }}\omega -A(\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}-\delta \log |\tilde{S}_{\tilde{E}}|^{2})+B\Psi _{\rho }\le C_{\delta },$ which implies that $tr_{\hat{\omega }}\omega \le \frac{1}{|\tilde{S}_{\tilde{E}}|^{2\delta /t}}e^{\frac{C_{\delta }}{t}}.$ On the other hand, as $\frac{\omega ^{n}}{\hat{\omega }^{n}}\ge \prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}e^{\dot{\varphi }^{\prime }_{l,u,v,\tilde{\epsilon }}-C},$ it follows from (REF ) and (REF ) that $tr_{\hat{\omega }}\omega \ge c_{\delta }|\tilde{S}_{\tilde{E}}|^{\frac{2\delta }{t}}\prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}e^{-\frac{C_{\delta }}{t}},$ which completes the Laplacian estimate (REF ).", "Remark 3.15 It is possible to eliminate the factor $\prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}$ in (REF ) if we can replace the conic approximation method in [19] by another approximation using an upper bound on the bisectional curvature.", "Then we may use the Chern-Lu inequality to get rid of that factor.", "We leave this approximation to future work."
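For reference, the Chern-Lu inequality invoked in Remark REF can be stated schematically as follows (a sketch only, in one common normalization — the precise constants depend on the conventions used): if $Bisec(\hat{\omega })\le K$ and $Ric(\omega )\ge -C_{1}\omega -C_{2}\hat{\omega }$ , then

```latex
\Delta_{\omega}\log tr_{\omega}\hat{\omega} \;\ge\; -C_{1}-(C_{2}+2K)\,tr_{\omega}\hat{\omega}.
```

In particular, only an upper bound on the bisectional curvature of $\hat{\omega }$ enters here, which is what would allow the factor $\prod _{k}(|S_{k}|_{k}^{2}+\epsilon _{k}^{2})^{a_{k}}$ to be removed, as suggested in the remark.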
], [ "Higher order derivative estimates on $ \varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}$", "By the bounds in Lemma REF and Theorem REF we have the following: Theorem 3.16 For any compact set $K\subset X^{\prime }\setminus \tilde{E}$ and $t\in (0,T^{\prime }]$ there exist constants $C(m,K,T^{\prime })$ such that on $K\times (0,T^{\prime }]$ we have $C(m,K,T^{\prime })^{-1}\theta \le \omega ^{\prime }_{t,u,v,\tilde{\epsilon }_{j}}+\sqrt{-1}\partial \bar{\partial }\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }} \le C(m,K,T^{\prime }) \theta .$ Finally, by standard Evans-Krylov estimates (cf.", "[34], [43]), we can establish the local higher order estimates in any compact set $K\subset X^{\prime }\setminus \tilde{E}:$ Theorem 3.17 For any compact set $K\subset (X^{\prime }\setminus \tilde{E})\times (0,T^{\prime }]$ there exist constants $C(m,K,T^{\prime })$ such that $|\varphi ^{\prime }_{l,u,v,\tilde{\epsilon }}|_{C^{m}(K)}\le C(m,K,T^{\prime }).$" ], [ "Estimates when $\varphi _{0}\in C^{\infty }(X^{\prime })$", "In the following subsubsection we assume in addition that $\varphi _{0}\in C^{\infty }(X^{\prime })$ .", "Note that by absorbing the smooth form $\sqrt{-1}\partial \bar{\partial }\pi ^* \varphi _{0}$ into $\pi ^* \omega _0$ , we will assume, without loss of generality, that in fact $\varphi _{0}=0$ .", "Thus we simply take $\varphi _{0, l}=0$ for all $l$ in the previous subsections, and denote $\varphi ^{\prime }_{l, u,v,\tilde{\epsilon }}$ accordingly by $\varphi ^{\prime }_{u,v,\tilde{\epsilon }}$ .", "We will establish estimates for $\varphi ^{\prime }_{u,v,\tilde{\epsilon }}$ which are uniform on $K\times [0, T^{\prime }]$ for any $K\subset \subset M$ .", "Lemma 3.18 [modified Lemma REF ] There exists a constant $C_1$ depending only on $T^{\prime }$ such that $\sup \varphi ^{\prime }_{u,v,\tilde{\epsilon }}\le t(C_1-\sum _{k}a_{k}\log (|S_{k}|_{k}^{2}+\epsilon _{k}^{2}))$ on $(X^{\prime 
}\\setminus \\tilde{E}) \\times [0,T^{\\prime }]$ .", "This basically follows from a slight variant on the proof of Lemma REF .", "Notice that the RHS in (REF ) is bounded above uniformly on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ .", "On the other hand, since $\\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}(0)=0$ we may also have $\\sup _M \\phi _{u,v,\\tilde{\\epsilon }}(0) \\le 0$ and it follows from the maximum principle that $\\phi _{u,v,\\tilde{\\epsilon }}\\le Ct$ there.", "The Lemma then follows from the fact that $\\varphi _{l,u,v,\\tilde{\\epsilon }}:=\\phi _{l,u,v,\\tilde{\\epsilon }}-t \\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})$ as in the proof of Lemma REF .", "Next we want to modify the upper and lower bound in Lemma REF .", "We begin with the upper bound Lemma 3.19 [modification of upper bound in Lemma REF ] There exist $C>0$ depending only on $\\delta , T^{\\prime }$ such that we have $\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}\\le (C-\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}))+n-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0,T^{\\prime }]$ .", "Define $\\psi _{u,v,\\tilde{\\epsilon }}&:= \\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})\\\\ \\sigma _{t, u,v,\\tilde{\\epsilon }}&:=\\omega ^{\\prime }_{t, u,v,\\tilde{\\epsilon }}- \\eta \\sqrt{-1}\\partial \\bar{\\partial }\\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})\\nonumber $ Then $\\psi _{u,v,\\tilde{\\epsilon }}$ solves (REF ) but with initial condition $\\psi _{u,v,\\tilde{\\epsilon }}(0)=0$ and with $\\omega ^{\\prime }_{t, u,v,\\tilde{\\epsilon }}$ replaced with $\\sigma _{t, u,v,\\tilde{\\epsilon }}$ which we will denote below simply as $\\sigma $ .", "We have $\\frac{\\partial }{\\partial t}\\dot{\\psi }^{\\prime }_{u,v,\\tilde{\\epsilon }}=\\Delta _{\\sigma }\\dot{\\psi }^{\\prime 
}_{u,v,\\tilde{\\epsilon }}+tr_{\\sigma }(\\chi -\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}).$ Thus we have the following on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ assuming $v$ is sufficiently small and where $C_1$ is to be chosen below.", "$\\begin{split}(\\frac{\\partial }{\\partial t}-\\Delta _{\\sigma })(t\\dot{\\psi }^{\\prime }_{u,v,\\tilde{\\epsilon }}&-(\\psi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-t\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-nt)\\\\&=-tr_{\\sigma }(\\sigma _{t,u,v,\\tilde{\\epsilon }_{j}}-t \\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log |\\tilde{S}_{\\tilde{E}}|^{2})+\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2} \\le 0.\\end{split}$ where the last inequality follows by Assumption REF and the assumption $\\log |\\tilde{S}_{\\tilde{E}}|^{2} <0$ .", "Now note that the quantity $Q$ being evolved on the LHS above is bounded above by 0 at $t=0$ .", "Also, the quantity approaches $-\\infty $ towards the divisors at all positive times.", "Thus by the maximum principle we may conclude that $Q(x, t) \\le 0$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ and thus $\\dot{\\psi }^{\\prime }_{u,v,\\tilde{\\epsilon }}\\le \\frac{(\\psi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-t\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})+nt)}{t} \\le (C_2-\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}))+n-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}$ where in the last inequality we have used Lemma REF together with the definition of $\\psi _{u,v,\\tilde{\\epsilon }}$ .", "The Lemma then follows as $\\dot{\\psi }_{u,v,\\tilde{\\epsilon }}=\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}$ .", "Next we modify the lower bound in Lemma REF .", "Lemma 3.20 [modification of lower bound in Lemma REF ] There exist $C>0$ depending only on $de, T^{\\prime }$ such that on $M \\times [0,T^{\\prime }]$ it holds that $\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-C \\le \\dot{\\varphi }^{\\prime 
}_{u,v,\\tilde{\\epsilon }}$ The proof is basically the same as the proof of Lemma REF except that we evolve instead the quantity $Q:=\\left(\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}+A(\\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})\\right)$ .", "Note that by (3.8) we have $Q(x, 0) \\ge C_\\delta $ on $\\tilde{X}$ for some constant $C_{\\delta }$ .", "We also see that for all $t\\in [0, T^{\\prime }]$ we have $Q(x, t) \\rightarrow \\infty $ as $x$ approaches the divisor $\\tilde{E}$ .", "Thus $Q$ attains a minimum on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ at some point $(x_0, t_0)$ .", "If $t_0=0$ then the Lemma follows from the previous observation.", "If $t_0>0$ then similarly to the proof of Lemma REF , we have the following at $(x_0, t_0)$ where $A$ is a sufficiently large constant depending only on $T^{\\prime }$ and $\\delta $ and $C_i$ 's are some constants depending only on $T^{\\prime }, \\delta $ and $\\eta $ and where $\\hat{\\omega }$ is as in the statement of Theorem 3.7: $0\\ge &(\\frac{\\partial }{\\partial t}-\\Delta _{\\omega })\\left(\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}+A(\\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})\\right)\\nonumber \\\\=\\;&tr_{\\omega }(\\chi -\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})+A\\log \\frac{\\omega ^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}}\\nonumber \\\\&+\\;Atr_{\\omega }(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\delta \\Theta _{\\tilde{E}})-An\\nonumber \\\\\\ge \\;&tr_{\\omega }(\\chi -\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+A(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\delta \\Theta _{\\tilde{E}}))+A\\log \\frac{C(c_{\\delta },\\eta 
)\\omega ^{n}}{\\hat{\\omega }^n}\\nonumber \\\\&+\\;-An-A\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})\\nonumber \\\\\\ge \\;&C_1tr_{\\omega }(\\hat{\\omega })+A\\log \\frac{C(c_{\\delta },\\eta )\\omega ^{n}}{\\hat{\\omega }^n}-C_2\\nonumber \\\\\\ge \\;&C_3 \\left( \\frac{\\hat{\\omega }^n}{\\omega ^n} \\right)^{1/n} -C_2\\nonumber \\\\$ Thus we get $\\hat{\\omega }^n \\le C_4 \\omega ^n$ at $(x_0, t_0)$ and it follows from (3.8) and the definition of $\\hat{\\omega }$ in Theorem 3.7, that $\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}(x_0, t_0)$ and thus $Q(x_0, t_0)$ is bounded below by some constant $C_5$ by Theorem REF .", "The lemma then follows from Theorem REF .", "Lemma 3.21 [modification of upper bound in Theorem 3.7] There exist a function $F(x)$ depending only on $T^{\\prime }, \\delta $ such that in $M \\times [0,T^{\\prime }]$ it holds that $tr_{\\hat{\\omega }}\\omega \\le F(x)$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0,T^{\\prime }]$ where $\\hat{\\omega }$ is as in Theorem 3.7.", "The proof is basically as the upper bound proof in Theorem 3.7 except that we consider instead the quantity $G(p,t):=\\log tr_{\\hat{\\omega }}\\omega -A(\\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})+B\\Psi _{\\rho }$ for uniform constants $A, B$ to be determined.", "In the following, a uniform constant refers to a constant depending only on the constants given in the hypothesis of the Lemma, in particular, independent of the parameters $u,v,\\tilde{\\epsilon }$ .", "In particular, $G(p,t)$ is bounded above on $M$ for each $t$ .", "Moreover, by the hypothesis and Theorem 3.5 for sufficiently large $A$ we have $G(p,0)\\le C(A)$ on $M$ .", "By the same calculations in Lemma REF we may choose $A$ and $B$ sufficiently large, so that the following holds where $C_3, C_1^{\\prime }$ are the same constants as in the proof 
Lemma REF and the $C^{\\prime }$ denote uniform constants which may differ from line to line.", "$(\\frac{\\partial }{\\partial t}-\\Delta )G\\le &C_{3}\\;tr_{\\omega }\\hat{\\omega }+A(-\\dot{\\varphi }^{\\prime }_{u,v,\\tilde{\\epsilon }}+n-tr_{\\omega }(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\delta \\Theta _{\\tilde{E}})-\\sqrt{-1}\\partial \\bar{\\partial }\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})\\nonumber \\\\&+(C^{\\prime }_{1}-B)\\Delta \\Psi _{\\rho }\\nonumber \\\\\\le &-A\\log \\frac{\\omega ^{n}}{\\hat{\\omega }^{n}}+A\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})+(C^{\\prime }_{1}-B)\\Delta \\Psi _{\\rho }\\nonumber \\\\&+tr_{\\omega }(C_{3}\\hat{\\omega }-A(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\delta \\Theta _{\\tilde{E}}-\\sqrt{-1}\\partial \\bar{\\partial }\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}))\\nonumber \\\\&+(C^{\\prime }_{1}-B)\\Delta \\Psi _{\\rho }+C\\nonumber \\\\\\le &-tr_{\\omega }\\hat{\\omega }+C\\nonumber \\\\$ where in the last inequality we have used Assumption REF and the definition of $\\hat{\\omega }$ in Theorem 3.7.", "Note that for all $t\\in [0, T^{\\prime }]$ we have $G(x, t) \\rightarrow -\\infty $ as $x$ approaches $\\tilde{E}$ .", "Thus $\\max _{(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]} G(x, t) = G(p_0, t_0)>0$ for some $(p_0, t_0)\\in (X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ .", "Now if $t_0=0$ the Lemma holds.", "If $t_0 >0$ we have $tr_{\\omega }\\hat{\\omega }\\le C$ at $(p_0, t_0)$ by (REF ), and thus as in the proof of Lemma REF we conclude the following at $(p_0, t_0)$ $G(p_{0},t_{0})&\\le \\log (tr_{\\omega }\\hat{\\omega })^{n-1}(\\frac{\\omega ^{n}}{\\hat{\\omega }^{n}})-A(\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})+B\\Psi 
_{\\rho }\\nonumber \\\\&\\le C(\\dot{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}))\\nonumber \\\\&-[A(\\varphi ^{\\prime }_{u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2})+B\\Psi _{\\rho }]\\\\&=: H_1(p_0, t_0) + H_2(x_0, t_0)\\nonumber $ where by Lemma REF we have $H_1(p, t) \\le -(\\delta /3) \\log |\\tilde{S}_{\\tilde{E}}|^{2} +C_{1, \\delta }$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ for some $C_{1, \\delta }$ and by Theorem 3.5 we have $H_2(p, t) \\le (2\\delta /3) \\log |\\tilde{S}_{\\tilde{E}}|^{2} +C_{2, \\delta }$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ for some $C_{2, \\delta }$ and thus we may continue the estimate in (REF ) as $H_1(p_0, t_0) + H_2(p_0, t_0)\\le (\\delta /3) \\log |\\tilde{S}_{\\tilde{E}}|^{2} +C_{1, \\delta }+C_{2, \\delta } \\le C_{3, \\delta }$ Combining (REF ) and (REF ) we conclude that $G\\le C_{3, \\delta }$ on $(X^{\\prime }\\setminus \\tilde{E}) \\times [0, T^{\\prime })$ and the Lemma follows from the definition of $G$ and Lemma REF , or alternately Lemma REF .", "As in the previous section, we may deduce from Lemmas REF and REF that for any compact set $K\\subset X^{\\prime }\\setminus \\tilde{E},\\;t\\in [0,T^{\\prime }]$ we have the equivalence $C^{1-}\\theta \\le \\omega _{t, u, v, \\tilde{\\epsilon }}+\\sqrt{-1}\\partial \\bar{\\partial }\\phi ^{\\prime }_{u, v, \\tilde{\\epsilon }}\\le C \\theta $ where $C>0$ depends only on $T^{\\prime }, K$ .", "Thus by Evans-Krylov local theory (cf.", "[34], [43]) we can establish the local high order estimates as in Theorem 3.22 Suppose $\\varphi _{0}\\in C^{\\infty }(X^{\\prime })$ .", "For any compact set $K\\subset X^{\\prime }\\setminus \\tilde{E}\\times [0,T^{\\prime }]$ there exist constants $C(m,K,T^{\\prime })$ such that $|\\varphi ^{\\prime }_{l, 
u,v,\\tilde{\\epsilon }}|_{C^{m}(K\\times [0,T^{\\prime }])}\\le C(m,K,T^{\\prime }).$" ], [ "Proof of Theorem ", "In the following sections we will prove Theorem REF .", "We begin by fix some choice of Hermitian metrics $h_i, h_j, h_k$ and a constants $\\eta >0$ .", "Now given any $l, u, \\tilde{\\epsilon }$ , the hypothesis of Theorem REF will be satisfied provided $v$ is sufficiently small depending only on $u$ , and thus we have a solution $\\varphi ^{\\prime }_{l, u, v, \\tilde{\\epsilon }}(t)$ to (REF ) on $(X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0)$ .", "Moreover, given any $T^{\\prime }\\in (0, T_0)$ and $\\delta $ as in the beginning of §3.2, we may assume without loss of generality that Assumption REF there is satisfied, in view of the remarks following the assumption, and thus on $(X^{\\prime }\\setminus \\tilde{E})\\times [0,T^{\\prime }]$ our solution $\\varphi ^{\\prime }_{l, u, v, \\tilde{\\epsilon }}(t)$ satisfies a uniform global upper bound by Theorem REF , a uniform local lower bound estimate as in Theorem REF , and also uniform local higher order estimates as in Theorem REF where the uniformity is over the parameters $l, u, v, \\tilde{\\epsilon }$ ." 
], [ "Existence of a solution", "Consider the family of solutions $\varphi ^{\prime }_{l, u, v, \tilde{\epsilon }}(t)$ to (REF ).", "It follows from the above estimates and the Arzela-Ascoli theorem that we may let the parameters approach their limits, along appropriate subsequences, to obtain a smooth local limit $\varphi ^{\prime }_{l, u, v, \tilde{\epsilon }}\overset{C^{\infty }((X^{\prime }\setminus \tilde{E})\times [0,T_0))}{\longrightarrow }\varphi ^{\prime } $ solving (REF ).", "In addition, given any $t\in [0, T_0)$ , the family $\varphi ^{\prime }_{l, u, v, \tilde{\epsilon }}(t)-t\sum _{i}\log \log ^{2}|S_{i}|_{i}^{2}$ extends to a family of $C\theta $ plurisubharmonic functions on $X^{\prime }$ for some $C$ independent of the parameters, while also satisfying a uniform global upper bound and a uniform local lower bound away from $\tilde{E}$ .", "It follows from the classical theory of plurisubharmonic functions (see [12]) that $\varphi ^{\prime }$ extends to a $C\theta $ plurisubharmonic function on $X^{\prime }$ and that we may also have the convergence $\varphi ^{\prime }_{l, u, v, \tilde{\epsilon }}(t) \overset{L^1(X^{\prime })}{\longrightarrow } \varphi ^{\prime }(t)$ and thus in the sense of currents on $X^{\prime }$ .", "In particular, $\varphi ^{\prime }(t)$ itself defines a current on $X^{\prime }$ with zero Lelong number.", "Consequently, $\varphi =\varphi ^{\prime } +\eta \sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t\sum _{i}\log \log ^{2}|S_{i}|_{i}^{2}$ will solve (REF ) smoothly on $(X^{\prime } \setminus \tilde{E}) \times [0, T_0)$ (see Remark REF ) and defines a current on $X^{\prime }$ with zero Lelong number.", "In the following, we show $\varphi ^{\prime }$ can in fact be obtained as a monotone limit as the above parameters approach their limits in turn, similar to the approach taken in [11], [17], [34].", "We first note that if we replace the term 
$-(v+t)\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}$ by $v\\theta -(v+t)\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}$ in the background forms $\\omega _{t, u, v, \\tilde{\\epsilon }}$ in (REF ), this will not affect the statements of any of the results in §3.1, 3.2 and we could still obtain a limit solution $\\varphi $ to (REF ) exactly as above.", "On the other hand, this assumption will provide the monotonicity of the functions $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ in the parameter $v$ which is the starting point of the monotonicity Lemma below.", "By a slight abuse of notation, and for simplicity of notation, we will make this replacement of terms implicitly by simply assuming $\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}\\ge 0$ below.", "The following Lemma establishes the monotonicity of the family of solutions $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ with respect to each of its parameters.", "Lemma 4.1 Consider the family of solutions $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ to (REF ) on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T_0)$ .", "We have the following monotonicity in the parameters $l,u,v,\\tilde{\\epsilon }$ .", "for all $0<v^{\\prime }\\le v,$ $\\varphi ^{\\prime }_{l,u,v^{\\prime },\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}\\le \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}$ for all $0<\\epsilon ^{\\prime }_{j}\\le \\epsilon _{j},$ $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }^{\\prime }_{j},\\tilde{\\epsilon }_{k}}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{\\prime 2})\\le \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2});$ for all $0<\\epsilon ^{\\prime }_{k}\\le \\epsilon _{k},$ $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }^{\\prime }_{k}}\\ge 
\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}};$ for all $0<u^{\\prime }\\le u,$ $\\varphi ^{\\prime }_{l,u^{\\prime },v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}\\le \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}};$ for all $l^{\\prime }\\le l,$ $\\varphi ^{\\prime }_{l^{\\prime },u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}\\ge \\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}.$ For (1), consider the difference $\\phi _{v^{\\prime },v}:=\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}-\\varphi ^{\\prime }_{l,u,v^{\\prime },\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}$ which satisfies $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\phi _{v^{\\prime },v} &=\\displaystyle \\log \\frac{(\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v^{\\prime },\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}+\\sqrt{-1}\\partial \\bar{\\partial }\\phi _{v,v^{\\prime }})^{n}}{(\\omega ^{\\prime }_{t,u,v^{\\prime },\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u,v^{\\prime },\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}})^{n}}\\\\\\;\\\\\\phi _{v^{\\prime },v}(0) &= 0.\\end{array}\\right.$ Now $\\omega ^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}\\ge \\omega ^{\\prime }_{t,u,v^{\\prime },\\tilde{\\epsilon }_{j}}$ and it follows by the maximum principle that $\\phi _{v^{\\prime },v}\\ge 0,$ thus (1) follows.", "For (2) we set $\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}:=\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}+\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})$ and $\\overline{\\omega }_{t,u,v}:&=\\pi ^{*}\\omega _{0}+u\\theta +t\\chi -(t+v)\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}\\\\&=\\omega 
^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\eta \\sum _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}),$ then $\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}$ solves the equation $\\nonumber \\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}} &=\\displaystyle \\log \\frac{(\\overline{\\omega }_{t,u,v}+\\sqrt{-1}\\partial \\bar{\\partial }\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}})^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}}\\\\\\;\\\\\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}(0) &= \\varphi _{l,0}.\\end{array}\\right.$ Now (2) follows directly from the maximum principle applied to the difference $\\phi _{l,u,v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}-\\phi _{l,u,v,\\tilde{\\epsilon }^{\\prime }_{j},\\tilde{\\epsilon }_{k}}$ as in the proof of (1).", "The inequalities in (3) and (4) follow from (REF ) and applying the maximum principle to the corresponding differences as in the proof of (1).", "Meanwhile, (5) follows from the comparison principle in [11], [17], which implies the monotonicity of the initial decreasing potential sequence is preserved under the complex Monge-Ampère flow (REF ).", "We may now obtain a stepwise limit from the family $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ as follows.", "By Lemma REF (1) we have a monotone limit $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\searrow \\varphi ^{\\prime }_{l,u,\\tilde{\\epsilon }}$ in $C^{\\infty }(X^{\\prime }\\setminus \\bigcup _{i}D_{i})\\times [0,T_0)$ as $v\\searrow 0$ where each $\\varphi ^{\\prime }_{l,u,\\tilde{\\epsilon }}$ solves (REF ) for $v=0$ .", "Also, we may have $\\varphi ^{\\prime }_{l,u,\\tilde{\\epsilon }}\\in L^{\\infty 
}(X^{\\prime }\\setminus \\bigcup _{i}D_{i})$ by Theorem 1.1 of [11], where the bounds are in fact independent of $\\tilde{\\epsilon }_{j}$ by [11], [28].", "In particular, as explained above the limit also holds in the sense of currents on $X^{\\prime }$ for each $t$ .", "By Lemma REF (2) and the fact that $\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})\\rightarrow \\frac{|S_{j}|_{j}^{2(1-b_{j})}}{(1-b_{j})^{2}}$ (which are uniformly bounded) by Lemma REF , we have a monotone limit $\\varphi ^{\\prime }_{l,u,\\tilde{\\epsilon }_{j}, \\tilde{\\epsilon }_{k}}\\searrow \\varphi ^{\\prime }_{l,u, \\tilde{\\epsilon }_{k}}$ in $C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0)$ as all $\\epsilon _{j}\\searrow 0$ .", "Moreover, the limit satisfies a global upper bound and local lower bound for each $t$ , and thus as explained above the limit also holds in the sense of currents on $X^{\\prime }$ .", "By Lemma REF (3) we have a monotone limit $\\varphi ^{\\prime }_{l,u, \\tilde{\\epsilon }_{k}} \\searrow \\varphi ^{\\prime }_{l,u}$ in $C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0)$ as all $\\epsilon _{k}\\searrow 0$ .", "Moreover, the limit satisfies a global upper bound and local lower bound for each $t$ , and thus as explained above the limit also holds in the sense of currents on $X^{\\prime }$ .", "Finally, by Lemma REF (4), (5) we have a monotone limit $\\varphi ^{\\prime }_{l,l^{-1}} \\searrow \\varphi ^{\\prime }$ in $C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0)$ as $l\\rightarrow \\infty $ .", "In particular, for each $t\\in [0, T_0)$ the sequence $\\varphi ^{\\prime }_{l,l^{-1}}$ is a non-increasing sequence of $\\omega ^{\\prime }_{t,u}:=\\omega ^{\\prime }_{t,u,0,0}$ plurisubharmonic functions on $X^{\\prime }$ satisfying a global upper bound and local lower bound.", "Hence as explained above, by the theory of plurisubharmonic functions, the convergence holds in the sense of 
currents on $X^{\\prime }$ .", "Thus $\\varphi ^{\\prime }$ solves (REF ) in the current sense and gives rise to a current with zero Lelong number and with locally finite potential away from $\\tilde{E}$ on $[0,T^{\\prime }].$ By Theorem REF $\\varphi ^{\\prime }$ is in fact smooth on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime }]$ and solves (REF ) there." ], [ "Weak continuity at $t=0$", "Consider the solution $\\varphi ^{\\prime }(t)$ to (REF ) constructed in the previous section and the corresponding solution $\\varphi =\\varphi ^{\\prime } +\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t \\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2}$ to (REF ).", "In particular, we constructed $\\varphi ^{\\prime }$ as the limit $\\begin{split}\\varphi ^{\\prime }_{l,l^{-1}}&\\searrow \\varphi ^{\\prime }\\\\\\varphi ^{\\prime }_{l,l^{-1}}&= \\lim _{\\tilde{\\epsilon }_{k}\\rightarrow 0}\\lim _{\\tilde{\\epsilon }_{j}\\rightarrow 0} \\lim _{v\\rightarrow 0}\\varphi ^{\\prime }_{l,l^{-1},v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}.\\\\\\end{split}$ We have the following weak convergence as $t\\rightarrow 0$ .", "Theorem 4.2 We have the convergence $\\lim \\limits _{t\\searrow 0}\\varphi (t)=\\varphi (0)$ in $L^1(X^{\\prime })$ , thus in the sense of currents on $X^{\\prime }$ .", "It will suffice to prove $\\lim \\limits _{t\\searrow 0}\\varphi ^{\\prime }(t)=\\varphi ^{\\prime }(0)$ in $L^1(X^{\\prime })$ .", "Fix some $0<T^{\\prime }<T_0$ .", "We may then assume, as explained above, that Assumption REF and the estimates in §3.2 apply to each $\\varphi ^{\\prime }_{l,l^{-1},v,\\tilde{\\epsilon }_{j},\\tilde{\\epsilon }_{k}}$ and thus to $\\varphi ^{\\prime }$ on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime })$ .", "Fix some $c >0$ .", "Then for all $t<T^{\\prime }$ the family of functions $\\varphi ^{\\prime }(t)$ are $C\\theta $ plurisubharmonic on $X^{\\prime }$ for some $C$ independent of $t$ while the family is also uniformly bounded above on $X^{\\prime 
}$ by Theorem REF , and locally bounded below on $X^{\\prime }\\setminus \\tilde{E}$ by Theorem REF .", "Again, from the classical theory of plurisubharmonic functions, given any sequence $t_m\\rightarrow 0$ , there exists a subsequence, which we will continue to denote by $t_m$ , so that $\\varphi ^{\\prime }(t_{m})\\rightarrow \\psi $ in $L^1(X^{\\prime })$ for some $C\\theta $ plurisubharmonic function $\\psi $ on $X^{\\prime }$ .", "To prove the Theorem we only need to prove that $\\psi =\\varphi ^{\\prime }(0)$ almost everywhere on $X^{\\prime }$ .", "Claim $\\lim \\limits _{t\\searrow 0}\\varphi ^{\\prime }_{l, l^{-1}}(t)=\\varphi ^{\\prime }_{l, l^{-1}}(0)$ in $L^1(X^{\\prime })$ , thus in the sense of currents on $X^{\\prime }$ .", "Note that $\\varphi ^{\\prime }_{l, l^{-1}} \\in C^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0))$ for each $l$ by Theorem REF , and the fact that $\\varphi _{l, 0}\\in C^{\\infty }(X^{\\prime })$ for every $l$ .", "Thus the limit in the claim holds at every $x\\in X^{\\prime }\\setminus \\tilde{E}$ .", "On the other hand, the family of functions $\\varphi ^{\\prime }(t)$ are $C\\theta $ plurisubharmonic on $X^{\\prime }$ for some $C$ independent of $t<T^{\\prime }$ , satisfy a global upper bound on $X^{\\prime }$ and a local lower bound on $X^{\\prime }\\setminus \\tilde{E}$ uniformly over $t<T^{\\prime }$ .", "The claim then follows by the classical theory of plurisubharmonic functions.", "Using the claim and the fact, from Lemma REF (4), (5), that the sequence $\\varphi ^{\\prime }_{l,l^{-1}}$ is non-increasing on $X^{\\prime }$ , we get $\\psi (x)=\\lim _{m}\\varphi ^{\\prime }(t_{m},x)\\le \\lim _{l\\rightarrow \\infty } \\lim _{m}\\varphi ^{\\prime }_{l,l^{-1}}(t_{m},x)=\\lim _{l\\rightarrow \\infty } \\varphi ^{\\prime }_{l,l^{-1}}(0,x)=\\varphi ^{\\prime }(0,x)$ for almost all $x\\in X^{\\prime }\\setminus \\tilde{E}$ .", "On the other hand, by the time derivative estimate in Lemma REF , for any $\\delta >0$ it follows that $\\varphi ^{\\prime 
}_{l,l^{-1}}(t_{m},x)-\\varphi ^{\\prime }_{l,l^{-1}}(0,x)\\ge \\int _{0}^{t_{m}}(n\\log t+\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-C_{1\\delta })dt.$ Thus the limit $\\lim _{m}\\varphi ^{\\prime }_{l,l^{-1}}(t_{m},x)\\ge \\varphi ^{\\prime }_{l,l^{-1}}(0,x)$ holds for almost all $x\\in X^{\\prime }\\setminus \\tilde{E}$ and by taking $l\\rightarrow +\\infty $ we get $\\psi (x)=\\lim _{m}\\varphi ^{\\prime }(t_{m},x)\\ge \\varphi ^{\\prime }(0,x)$ .", "We thus conclude that $\\psi (x)=\\varphi ^{\\prime }(0,x)$ for almost all $x\\in X^{\\prime }\\setminus \\tilde{E}$ , which completes the proof of the Theorem." ], [ "Uniqueness and maximality", "We now follow the argument in [17] to prove uniqueness and maximality of the solution to (REF ) constructed in §4.1.", "First we show that when the approximation parameters $v,\\epsilon _{j},\\epsilon _{k}=0$ the corresponding solutions $\\varphi ^{\\prime }_{l,u}$ are unique in the category of bounded functions with the corresponding continuity at $t=0$ , as in the following lemma.", "Lemma 4.3 There exists a unique solution $\\varphi ^{\\prime }_{l,u}\\in L^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times [0,T^{\\prime }])\\bigcap C^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times [0,T^{\\prime }])$ which solves the following equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi ^{\\prime }_{l,u} &=\\displaystyle \\log \\frac{(\\omega ^{\\prime }_{t,u}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u})^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}\\\\\\;\\\\\\varphi ^{\\prime }_{l,u}(0) &= \\varphi _{l,0}-\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})},\\end{array}\\right.$ and satisfies that $\\lim \\limits _{t\\searrow 0}\\varphi ^{\\prime }_{l,u}(t)=\\varphi ^{\\prime }_{l,u}(0)$ almost everywhere and in the current sense.", "Here $\\varphi ^{\\prime }_{l,u}:=\\varphi 
^{\\prime }_{l,u,0,0}.$ We can prove this Lemma using the trick from [34].", "As we showed before, there exists one solution $\\varphi ^{\\prime }_{l,u}$ satisfying all the requirements.", "Suppose there exists another solution $\\psi ^{\\prime }_{l,u}$ satisfying all the same conditions in this Lemma.", "Without loss of generality assume that $|\\tilde{S}_{\\tilde{E}}|\\le 1$ everywhere, and consider the function $D_{+}:=\\psi ^{\\prime }_{l,u}-\\varphi ^{\\prime }_{l,u}+\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}$ which satisfies the following equation: $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}D_{+} &=\\displaystyle \\log \\frac{(\\omega ^{\\prime }_{t,u}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u}+\\delta \\Theta _{\\tilde{E}}+\\sqrt{-1}\\partial \\bar{\\partial }D_{+})^{n}}{(\\omega ^{\\prime }_{t,u}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u})^{n}}\\\\\\;\\\\D_{+}(0) &= \\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}.\\end{array}\\right.$ For any time slice $t\\in [0,T^{\\prime }]$ , as $D_{+}\\rightarrow -\\infty $ near $\\tilde{E},$ the supremum of $D_{+}$ will always be attained away from $\\tilde{E}$ where $\\psi ^{\\prime }_{l,u},\\varphi ^{\\prime }_{l,u}$ are smooth.", "Also note that in this Lemma $\\omega ^{\\prime }_{t,u}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime }_{l,u}$ is non-degenerate, thus for small enough $\\delta >0,$ by the maximum principle, it follows that $\\sup D_{+}(t)\\le \\sup D_{+}(0)+C(\\delta ,T^{\\prime })\\le C(\\delta ,T^{\\prime }),$ where $C(\\delta ,T^{\\prime })\\rightarrow 0$ as $\\delta \\rightarrow 0.$ Letting $\\delta \\rightarrow 0,$ it follows that $\\psi ^{\\prime }_{l,u}\\le \\varphi ^{\\prime }_{l,u}.$ Similarly, by taking $D_{-}:=\\psi ^{\\prime }_{l,u}-\\varphi ^{\\prime }_{l,u}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}$ we can show that $\\psi ^{\\prime }_{l,u}\\ge \\varphi ^{\\prime }_{l,u},$ which completes the proof of this 
Lemma.", "We now follow the argument in [17] to prove that the solution $\\varphi $ to (REF ) constructed in §4.1 is the unique maximal solution to (REF ) in the sense of the following: Theorem 4.4 Let $\\psi \\in L_{loc}^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0))\\bigcap C^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times (0,T_0))$ solve (REF ) on $(X^{\\prime }\\setminus \\tilde{E})\\times (0,T_0)$ satisfying $\\lim \\limits _{t\\searrow 0} \\psi (t)=\\varphi _0$ in $L^1(X^{\\prime })$ .", "Then $\\psi (t)\\le \\varphi (t)$ where $\\varphi (t)$ is the solution to (REF ) constructed above.", "Recall that $\\varphi =\\varphi ^{\\prime } +\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t \\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2}$ where in turn $\\varphi ^{\\prime }_{l,l^{-1}}(t)\\searrow \\varphi ^{\\prime }$ and each $\\varphi ^{\\prime }_{l,l^{-1}}$ solves $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi ^{\\prime } &=\\displaystyle \\log \\frac{(\\omega ^{\\prime }_{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi ^{\\prime })^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2})^{a_{k}}}\\\\\\;\\\\\\varphi ^{\\prime }(0) &= \\varphi _{l,0}-\\eta \\sum _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).\\end{array}\\right.$ where $ \\nonumber \\omega ^{\\prime }_{t}:&=\\pi ^{*}\\omega _{0}+t\\chi -t\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}+\\eta \\sum _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).$ By the construction of $\\varphi (t)$ above it suffices to prove $\\psi ^{\\prime }(t)\\le \\varphi ^{\\prime }_{l,l^{-1}}(t)$ for any $l>0$ where $\\psi ^{\\prime }$ is another solution to (REF ).", "Fix some $(x, t)\\in (X^{\\prime }\\setminus \\tilde{E})\\times [0,T_0)$ .", "Then for any $0<t_{\\epsilon }<t$ the function $\\psi ^{\\prime 
}(t,x)-\\varphi ^{\\prime }_{l,l^{-1}}$ attains its maximum on $(X^{\\prime }\\setminus \\tilde{E})\\times [t_{\\epsilon },T_0)$ at some $(t_{\\epsilon },x_{\\epsilon })$ .", "This can be shown by applying the maximum principle to the function $\\psi ^{\\prime }-\\varphi ^{\\prime }_{l,l^{-1}}+\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}$ as in the proof of Lemma REF for some $\\delta > 0$ and then letting $\\delta \\rightarrow 0$ .", "Thus for each $l>0$ we have $\\psi ^{\\prime }(t,x)-\\varphi ^{\\prime }_{l,l^{-1}}(t,x)\\le \\sup _{X^{\\prime }}(\\psi ^{\\prime }(t_{\\epsilon })-\\varphi ^{\\prime }_{l,l^{-1}}(t_{\\epsilon }))\\le \\sup _{X^{\\prime }}(\\psi ^{\\prime }(t_{\\epsilon })-\\varphi ^{\\prime }_{l,l^{-1}}(0)) + \\sup _{X^{\\prime }}(\\varphi ^{\\prime }_{l,l^{-1}}(0)-\\varphi ^{\\prime }_{l,l^{-1}}(t_{\\epsilon }))$ Now by the same proof as that of Lemma REF , we have $n\\log t - C_l\\le \\dot{\\varphi }^{\\prime }_{l,l^{-1}}$ on $X^{\\prime }\\times [0, t_{\\epsilon }]$ for some $C_l >0$ depending on $l$ and $t_{\\epsilon }$ .", "Integrating in time gives $ \\sup _{X^{\\prime }}(\\varphi ^{\\prime }_{l,l^{-1}}(0)-\\varphi ^{\\prime }_{l,l^{-1}}(t_{\\epsilon })) \\rightarrow 0$ as $t_{\\epsilon }\\rightarrow 0$ .", "Meanwhile, Hartogs' Lemma and the continuity of $\\varphi ^{\\prime }_{l,l^{-1}}(0)$ on $X^{\\prime }$ give $\\sup _{X^{\\prime }}(\\psi ^{\\prime }(t_{\\epsilon })-\\varphi ^{\\prime }_{l,l^{-1}}(0))\\rightarrow \\sup _{X^{\\prime }}(\\psi ^{\\prime }(0)-\\varphi ^{\\prime }_{l,l^{-1}}(0))\\le 0$ as $t_{\\epsilon }\\rightarrow 0$ , where we have used the fact that $\\varphi ^{\\prime }_{l,l^{-1}}(0)$ is non-increasing by construction.", "By letting $t_{\\epsilon }\\rightarrow 0$ in the inequality above we conclude that $\\psi ^{\\prime }(t,x)-\\varphi ^{\\prime }_{l,l^{-1}}(t,x)\\le 0$ for all $l$ , which in turn implies $\\psi ^{\\prime }(t,x)-\\varphi ^{\\prime }(t,x)\\le 0$ .", "This proves the Theorem."
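The integration step used in the proof above can be recorded explicitly. The following is a sketch only, using the bound $n\log t - C_l \le \dot{\varphi }^{\prime }_{l,l^{-1}}$ on $X^{\prime }\times [0, t_{\epsilon }]$ quoted from Lemma REF:

```latex
% Integrating the time-derivative bound over [0, t_\epsilon], uniformly in the space variable:
\varphi ^{\prime }_{l,l^{-1}}(t_{\epsilon })-\varphi ^{\prime }_{l,l^{-1}}(0)
  \ge \int _{0}^{t_{\epsilon }}\big (n\log t - C_{l}\big )\,dt
  = n\,t_{\epsilon }\big (\log t_{\epsilon }-1\big )-C_{l}\,t_{\epsilon },
% hence
\sup _{X^{\prime }}\big (\varphi ^{\prime }_{l,l^{-1}}(0)-\varphi ^{\prime }_{l,l^{-1}}(t_{\epsilon })\big )
  \le -n\,t_{\epsilon }\big (\log t_{\epsilon }-1\big )+C_{l}\,t_{\epsilon }.
```

Since $t_{\epsilon }\log t_{\epsilon }\rightarrow 0$ as $t_{\epsilon }\rightarrow 0$, the right-hand side tends to $0$, which is the convergence $\sup _{X^{\prime }}(\varphi ^{\prime }_{l,l^{-1}}(0)-\varphi ^{\prime }_{l,l^{-1}}(t_{\epsilon }))\rightarrow 0$ used in the proof.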
], [ "Improved lower bounds", "The main goal of this section is to prove the following stronger lower bound for the solution $\\varphi (t)$ to (REF ) constructed in §4.1.", "Theorem 4.5 If $\\varphi (0)\\in L_{loc}^{\\infty }(X^{\\prime }\\setminus \\pi ^{-1}(X_{lc}))$ and gives rise to a current with zero Lelong number, then $\\varphi (t)$ belongs to $L_{loc}^{\\infty }((X^{\\prime }\\setminus \\pi ^{-1}(X_{lc}))\\times [0,T^{\\prime }])\\bigcap C^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times (0,T_0))$ and also gives rise to a current with zero Lelong number.", "We will apply the comparison principle associated with the $L^{\\infty }$ -estimate in [13], [17], [34].", "Assume that $\\pi ^{-1}(X_{lc})=\\bigcup _{i}D_{i}\\bigcup _{j^{\\prime }}D_{j^{\\prime }}\\bigcup _{k^{\\prime }}D_{k^{\\prime }},$ where $D_{j^{\\prime }},D_{k^{\\prime }}$ represent those lt divisors and canonical divisors which have non-empty intersections with the lc divisors $D_{i}.$ By the assumption of this theorem, for any $\\delta >0$ there exists a constant $C_{\\delta }>0$ such that $\\varphi (0)\\ge \\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log |S_{i}|_{i}^{2}+\\log |S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\log |S_{k^{\\prime }}|_{k^{\\prime }}^{2})-C_{\\delta }.$ Consider the following equation approximating (REF ): $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi _{\\tilde{\\epsilon }_{i}, l} &=\\displaystyle \\log \\frac{(\\omega _{t}+l^{-1}\\theta + \\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{\\tilde{\\epsilon }_{i}, l})^{n}\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}\\\\\\varphi _{\\tilde{\\epsilon }_{i}, l}(0) &=\\varphi _{l, 0},\\end{array}\\right.$ for approximation parameters $\\epsilon _{i}>0.$ Fix $T^{\\prime }<T_0$ .", "As $\\varphi _{l, 0}$ is smooth on $X^{\\prime }$ , by the results in section 7 of [17], for each set of $\\epsilon 
_{i}>0$ there exists a unique solution $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)\\in L^{\\infty }(X^{\\prime }\\times (0,T^{\\prime }))\\bigcap C^{\\infty }_{loc}((X^{\\prime }\\setminus \\tilde{E})\\times (0,T^{\\prime })).$ Moreover, by a maximum principle argument as in §4.3, we may conclude that the family of functions $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)$ are non-increasing as $\\epsilon _{i}\\searrow 0$ and $l\\rightarrow \\infty $ separately, and also that $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)\\ge \\varphi _{l, l^{-1}}(t)$ for all $\\epsilon _i, l$ .", "In particular, the zero Lelong number solution $\\varphi (t)$ to (REF ) constructed in §4.1 provides a uniform lower barrier for the family of solutions $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)$ and it follows from this, and the proof of Theorem 7.5 in [17], that $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)$ satisfies local higher order estimates on $(X^{\\prime } \\setminus \\tilde{E}) \\times [0, T^{\\prime }]$ as in Theorem REF , where the estimates are independent of $\\tilde{\\epsilon }_{i}, l$ .", "In particular, we conclude the monotone convergence of $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)$ , as $\\epsilon _{i}\\searrow 0$ and $l\\rightarrow \\infty $ , to a smooth limit solution $\\psi (t)$ to (REF ) on $(X^{\\prime }\\setminus \\tilde{E})\\times [0, T^{\\prime })$ satisfying the conditions of Theorem REF , thus giving $\\psi (t)\\le \\varphi (t)$ .", "On the other hand, we have $\\psi (t)\\ge \\varphi (t)$ by construction, and we conclude that $\\psi (t)=\\varphi (t)$ .", "We will suppress the index $l$ in $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)$ below.", "For small constants $\\delta ^{\\prime },\\epsilon _{i},\\epsilon _{j^{\\prime }},\\epsilon _{k^{\\prime }}>0$ set $\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(t):&=\\varphi _{\\tilde{\\epsilon }_{i}}(t)-\\delta ^{\\prime }t\\sum _{i}\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\\\&-\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log 
(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\nonumber $ and comparing with (REF ), it follows that $\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(t)$ satisfies $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{(\\omega _{t,\\delta ^{\\prime },\\tilde{\\epsilon }, l}+\\sqrt{-1}\\partial \\bar{\\partial }\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }})^{n}\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}\\\\ \\;\\\\\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(0)&= \\varphi _{l,0}-\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2})),\\end{array}\\right.$ on $X^{\\prime }\\setminus {\\tilde{E}}$ where $\\omega _{t,\\delta ^{\\prime },\\tilde{\\epsilon }, l}:&=\\omega _{t}+l^{-1}\\theta + \\delta ^{\\prime }t\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\\\&+\\delta \\sum _{i,j^{\\prime },k^{\\prime }}\\sqrt{-1}\\partial \\bar{\\partial }(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\\\&\\ge \\omega _{t}+l^{-1}\\theta -\\delta ^{\\prime }t\\sum _{i}\\frac{|S_{i}|_{i}^{2}\\Theta _{i}}{|S_{i}|_{i}^{2}+\\epsilon _{i}^{2}}-\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\frac{|S_{i}|_{i}^{2}\\Theta _{i}}{|S_{i}|_{i}^{2}+\\epsilon _{i}^{2}}+\\frac{|S_{j^{\\prime }}|_{j^{\\prime }}^{2}\\Theta _{j^{\\prime }}}{|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon 
_{j^{\\prime }}^{2}}+\\frac{|S_{k^{\\prime }}|_{k^{\\prime }}^{2}\\Theta _{k^{\\prime }}}{|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}})$ Note that, as $D_{i},D_{j^{\\prime }},D_{k^{\\prime }}$ are all exceptional divisors arising from blow-ups and thus carry hermitian metrics with non-positive curvature, by choosing small enough $\\delta ,\\delta ^{\\prime }>0$ and using Assumption REF (4) and Remark REF we may assume that $\\omega _{t,\\delta ^{\\prime },\\tilde{\\epsilon }, l}$ is Kähler  on $X^{\\prime }$ for all $t\\in [0,T^{\\prime \\prime }].$ We then choose another fixed Kähler  form $\\kappa $ on $X^{\\prime }$ satisfying $\\kappa \\le \\frac{\\omega _{t,\\delta ^{\\prime },\\tilde{\\epsilon }}}{2}$ for all $t\\in [0,T^{\\prime \\prime }]$ and consider the complex Monge-Ampère equation: $(\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})^{n}=\\frac{C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\prod _{k}|S_{k}|_{k}^{2a_{k}}\\Omega ^{\\prime }}{\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}$ where $C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ is chosen such that $[\\kappa ]^{n}=C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\int _{X^{\\prime }}\\frac{\\prod _{k}|S_{k}|_{k}^{2a_{k}}\\Omega ^{\\prime }}{\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}$ and moreover the $C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ are uniformly bounded positive constants whose bounds are independent of $\\epsilon _{i}.$ By the $L^{\\infty }$ -estimates in [13] there exists a unique solution $\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ which is uniformly bounded independently of $\\epsilon _{i}.$ Similarly to [34], setting $\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(t):=\\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(t)-\\psi _{\\delta ^{\\prime 
},\\tilde{\\epsilon }_{i}}$ it follows that $\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(t)$ satisfies the following equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{((\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})+(\\omega _{t,\\delta ^{\\prime },\\tilde{\\epsilon }}-\\kappa )+\\sqrt{-1}\\partial \\bar{\\partial }\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }})^{n}}{(\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})^{n}}+\\log C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\\\ \\;\\\\\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(0)&= \\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(0)-\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}},\\end{array}\\right.$ and we may conclude by a maximum principle argument as in §4.3 that $\\xi _{\\delta ^{\\prime },\\tilde{\\epsilon }}(t)\\ge \\phi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(0)-\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}+t\\log C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}.$ Combining this with (REF ), (REF ) and the fact that $\\log C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\ge c_{\\delta ^{\\prime }},\\;|\\psi _{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}|\\le C_{\\delta ^{\\prime }},$ it follows that $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)&\\ge -C(\\delta ,\\delta ^{\\prime },T^{\\prime \\prime })+\\delta ^{\\prime }t\\sum _{i}\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\nonumber \\\\&+\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\nonumber \\\\&\\ge -C(\\delta ,\\delta ^{\\prime },T^{\\prime \\prime })+\\delta ^{\\prime }t\\sum _{i}\\log |S_{i}|_{i}^{2}+\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log 
|S_{i}|_{i}^{2}+\\log |S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\log |S_{k^{\\prime }}|_{k^{\\prime }}^{2})$ on $(X^{\\prime }\\setminus \\tilde{E})\\times [0,T^{\\prime }]$ for all $\\tilde{\\epsilon }_{i}, l$ , and we may extend this inequality to hold on $(X^{\\prime }\\setminus \\pi ^{-1}(X_{lc}))\\times [0,T^{\\prime }].$ As we have shown that $\\varphi _{\\tilde{\\epsilon }_{i}, l}(t)\\searrow \\varphi (t)$ , it follows that $\\varphi (t)$ satisfies the same lower bound above and the proof of this theorem is complete." ], [ "Completion of proof of Theorem ", "In §4.1 we have constructed a smooth solution $\\varphi (t)$ to (REF ) on $(X^{\\prime }\\setminus \\widetilde{E}) \\times [0, T_0)$ such that $\\omega (t)=\\pi ^* \\omega - t \\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi (t)$ extends to a current on $X^{\\prime }$ with zero Lelong number.", "In view of the properties of $\\varphi $ already established in §4.2 and 4.3, it remains only to show that the solution $\\omega (t)$ descends from $X^{\\prime }$ to $X$ in the following sense: if $\\pi ^* \\omega - t \\chi =0$ along any fiber $\\pi ^{-1}(x)$ of $\\pi $ , then $\\varphi (t)$ is constant along $\\pi ^{-1}(x)$ .", "We argue as in [34].", "Note that $\\varphi (t)$ is an $\\omega _{t}=\\pi ^* \\omega - t \\chi $ plurisubharmonic function along any fiber $\\pi ^{-1}(x)$ .", "On the other hand, we may always choose hermitian metrics in $\\chi $ so that $\\pi ^* \\omega - t \\chi =0$ on $\\pi ^{-1}(x)$ and it follows that $\\varphi =c$ on $\\pi ^{-1}(x)$ for some $c\\in [-\\infty , \\infty )$ .", "Thus $\\omega (t)$ descends to a current on $X$ having zero Lelong number.", "Moreover, if $\\pi ^{-1}(x) \\bigcap X_{lc}$ above is empty, then $c\\ne -\\infty $ by Theorem REF .", "On the other hand, if $\\pi ^{-1}(x)\\bigcap X_{lc}$ is nonempty then, as $\\varphi =\\varphi ^{\\prime }+\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t\\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2}$ and $\\varphi ^{\\prime }$ is uniformly 
bounded from above by Theorem REF , it follows that $c=-\\infty $ .", "This completes the proof of Theorem REF ." ], [ "Semi-log canonical models and basic settings", "In this section we will prove Theorem REF .", "First we briefly recall the definition of semi-log canonical models (cf. [6], [23], [31]): Definition 5.1 A reduced projective variety $X$ with $\\dim _{\\mathbb {C}}X=n$ which is $\\mathbb {Q}$ -Gorenstein and satisfies Serre's $S_{2}$ condition is said to be a semi-log canonical model if (1) $K_{X}$ is an ample $\\mathbb {Q}$ -Cartier divisor, (2) $X$ has only ordinary nodes in codimension 1, (3) $X$ has log canonical singularities, i.e., for any resolution $\\pi :X^{\\prime }\\rightarrow X,$ in the adjunction formula (REF ) it holds that all $a_{i}\\ge -1.$ Note that, by (2), semi-log canonical models may in general fail to be normal varieties.", "By [31], in this case the standard normalization $\\nu :X^{\\nu }\\rightarrow X$ satisfies $K_{X^{\\nu }}=\\nu ^{*}K_{X}-cond(\\nu ),$ where $cond(\\nu )$ is an effective reduced divisor which comes from the inverse image of the codimension 1 nodes.", "Combined with the resolution $\\pi ^{\\nu }:X^{\\prime }\\rightarrow X^{\\nu },$ we can consider the resolution $\\pi :=\\nu \\circ \\pi ^{\\nu }:X^{\\prime }\\rightarrow X^{\\nu }\\rightarrow X$ which satisfies the condition of log canonical singularities.", "We may thus consider ourselves to be in the situation of Theorem REF , though with the additional condition that, in terms of the above resolution, $\\chi =(-Ric(\\Omega ^{\\prime })+\\sum _{i}\\Theta _{i}+\\sum _{j}b_{j}\\Theta _{j}-\\sum _{k}a_{k}\\Theta _{k})$ is itself semi-ample and big.", "In particular, this gives $T_0=\\infty $ in Assumption REF (4) while in Assumption REF (2) we have the stronger statement that $\\lambda \\pi ^{*}\\tilde{\\omega }_{0}+(1-\\lambda )\\chi +\\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log |\\tilde{S}_{\\tilde{E}}|^{2}\\ge c_{\\delta }\\theta $ on $X^{\\prime }$ for some $c_{\\delta 
}>0$ and all $\\delta < d$ and $0\\le \\lambda \\le 1$ .", "Under the above conditions we will show that the longtime solution to (REF ) given by Theorem REF can be transformed to a longtime solution to the normalized Kähler-Ricci flow (REF ) converging to a negative Kähler-Einstein current on $X^{\\prime }$ in the current sense and in the $C^{\\infty }_{loc}$ sense on $X^{\\prime }\\setminus \\tilde{E}$ .", "We point out that the existence of Kähler-Einstein currents on such semi-log canonical models was established earlier by [6], [31] using elliptic methods.", "By Theorem REF it follows that (REF ) has a longtime weak solution $\\omega (t)=\\pi ^{*}\\tilde{\\omega }_{0}+t \\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi (t)$ on $(X^{\\prime }\\setminus \\widetilde{E}) \\times [0, \\infty )$ where $\\varphi (t)$ solves (REF ) on $(X^{\\prime }\\setminus \\widetilde{E}) \\times (0, \\infty )$ .", "We also recall that $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ solves the approximate equation (REF ) on $(X^{\\prime }\\setminus \\widetilde{E}) \\times [0, \\infty )$ and that by construction we have $\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t \\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2} \\rightarrow \\varphi $ on $(X^{\\prime }\\setminus \\widetilde{E}) \\times (0, \\infty )$ as the parameters $l,u,v,\\tilde{\\epsilon }$ approach their limits along appropriate subsequences.", "From the above, it is straightforward to check that $\\tilde{\\omega }(t):=e^{-t}\\omega (e^t -1)$ is a longtime weak solution to the normalized Kähler  Ricci flow $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\omega } &=-Ric(\\tilde{\\omega })-\\tilde{\\omega } \\\\\\tilde{\\omega }(0) &= \\pi ^{*}{\\omega }_{0} +\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _0\\end{array}\\right.$ on $(X^{\\prime }\\setminus \\widetilde{E}) \\times [0, \\infty )$ where $\\begin{split}\\tilde{\\omega 
}(t)&=e^{-t} \\pi ^{*}\\omega _{0}+(1-e^{-t}) \\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }(t)\\\\\\tilde{\\varphi }(t)&:=e^{-t}\\varphi (e^t-1).\\\\\\end{split}$ Also, $\\tilde{\\varphi }(t)$ solves the normalized Monge-Ampère flow: $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\varphi } &=\\displaystyle \\log \\frac{(\\tilde{\\omega }_{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi })^{n}\\prod _{i}|S_{i}|_{i}^{2}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}-\\tilde{\\varphi }+nt\\\\\\tilde{\\varphi }(0) &= \\pi ^{*}\\tilde{\\varphi }_{0}.\\end{array}\\right.$ Moreover, we have the convergence $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+e^{-t}\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-(1-e^{-t}) \\sum _{i}\\log \\log ^{2}|S_{i}|_{i}^{2} \\rightarrow \\tilde{\\varphi }$ on $(X^{\\prime }\\setminus \\widetilde{E}) \\times (0, \\infty )$ where $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(t):=e^{-t}\\varphi ^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(e^t -1)$ solves the normalized approximate Monge-Ampère flow $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{(\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }})^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}}\\\\ \\;\\\\ &\\quad -\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+nt\\\\\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0) &= \\tilde{\\varphi }_{l,0}-\\eta \\sum \\limits _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}),\\end{array}\\right.$ where $\\tilde{\\omega }^{\\prime 
}_{t,u,v,\\tilde{\\epsilon }_{j}}:=&e^{-t}\\pi ^{*}\\tilde{\\omega }_{0}+(1-e^{-t})\\chi +u\\theta -(1+v-e^{-t})\\sum \\limits _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log \\log ^{2}|S_{i}|_{i}^{2}\\\\&+\\eta \\sum \\limits _{j}\\sqrt{-1}\\partial \\bar{\\partial }\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).$ In particular, we have the convergence $\\tilde{\\omega }^{\\prime }_{l, u,v,\\tilde{\\epsilon }_{j}}(t) \\rightarrow \\tilde{\\omega }(t)$ on $(X^{\\prime }\\setminus \\widetilde{E}) \\times (0, \\infty )$ where for each set of parameter values, $\\tilde{\\omega }^{\\prime }_{l, u,v,\\tilde{\\epsilon }_{j}}(t)= \\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(t)$ is a family of complete bounded curvature Kähler  metrics on $X^{\\prime }\\setminus \\widetilde{E}$ equivalent to a Carlson-Griffiths metric." ], [ "Uniform estimates and convergence", "Note that, by adding the same function of time only, $\\tilde{\\varphi }(t)$ and $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(t)$ will solve the same equations as (REF ) and (REF ) (respectively), but without the term $nt$ on the RHS.", "We will make this assumption in this subsection, and observe that this does not affect $\\tilde{\\omega }(t)$ in (REF ).", "Under this assumption, we will prove that $\\tilde{\\varphi }(t)$ converges smoothly locally uniformly on $(X^{\\prime }\\setminus \\widetilde{E})$ to a limit as $t\\rightarrow \\infty $ .", "This, combined with (REF ), will imply that $\\tilde{\\omega }(t)$ converges to a limit smoothly locally uniformly on $(X^{\\prime }\\setminus \\widetilde{E})$ as $t\\rightarrow \\infty $ , which by (REF ) will be Kähler  Einstein with negative scalar curvature.", "By (REF ) the background form in (REF ) satisfies $c^{-1}_{\\delta }\\hat{\\omega }\\le \\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\delta 
\\sqrt{-1}\\partial \\bar{\\partial }\\log |\\tilde{S}_{\\tilde{E}}|^{2}\\le c_{\\delta }\\hat{\\omega }$ on $(X^{\\prime }\\setminus \\widetilde{E})\\times [0, \\infty )$ for all $\\delta $ sufficiently small and some $c_{\\delta }>0$ where $\\hat{\\omega }$ is the Carlson-Griffiths metric on $X^{\\prime }\\setminus \\widetilde{E}$ defined in Theorem REF .", "By this uniform equivalence, the arguments in §3.2 can be adapted to the case of (REF ) in a straightforward manner to obtain uniform estimates for $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ and its derivatives which are local in space yet global in time (see Lemma REF ).", "We will illustrate this in detail below for the $C^{0}$ -estimates for $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ and also prove an upper bound on $\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ showing that $ \\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ is essentially non-increasing in time.", "Altogether this will imply the smooth local convergence of $\\tilde{\\varphi }(t)$ on $(X^{\\prime }\\setminus \\widetilde{E})$ to a limit as $t\\rightarrow \\infty $ .", "We begin with the following upper bound estimate: Lemma 5.2 $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le C_{0}$ for a uniform constant $C_{0}.$ As in Lemma REF , define $\\tilde{\\omega }^{\\prime \\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}:=\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-(1-e^{-t})\\sum _{k}a_{k}\\sqrt{-1}\\partial \\bar{\\partial }\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})$ and $\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}:=\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+(1-e^{-t})\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2});$ it follows that $\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ satisfies the following equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\phi }^{\\prime 
}_{l,u,v,\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{(\\tilde{\\omega }^{\\prime \\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }})^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }}\\\\ \\;\\\\ &\\quad -\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\eta \\sum \\limits _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})\\\\\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0) &= \\tilde{\\varphi }_{l,0}-\\eta \\sum \\limits _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2}).\\end{array}\\right.$ By (REF ) we have $\\tilde{\\omega }^{\\prime \\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}\\le \\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+c(1-e^{-t})\\theta \\le C\\hat{\\omega }$ where $\\hat{\\omega }$ was defined in Theorem REF .", "Thus by the maximum principle, it follows that $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}=\\tilde{\\phi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-(1-e^{-t})\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})\\le C-(1-e^{-t})\\sum _{k}a_{k}\\log (|S_{k}|_{k}^{2}+\\epsilon _{k}^{2}).$ Next, using a similar argument as in the proof of Theorem REF , we can show that $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le C_{0}$ for a uniform constant $C_{0}$ .", "Next we have the following lower bound estimate: Lemma 5.3 For any $\\delta >0$ there exists a constant $C_{\\delta }$ which only depends on $\\delta $ such that on $(X^{\\prime }\\setminus \\tilde{E})\\times [0,+\\infty ),$ it holds that $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\ge \\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}-C_{\\delta }.$ By (REF ) we have $\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }:=\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\delta \\sqrt{-1}\\partial \\bar{\\partial }\\log 
|\\tilde{S}_{\\tilde{E}}|^{2}\\ge c_{\\delta }\\hat{\\omega }$ where $c_{\\delta }>0$ is independent of time.", "We can rewrite (REF ) as follows: $&\\frac{\\partial }{\\partial t}(\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}) \\\\ =&\\log \\frac{(\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j},\\delta }+\\sqrt{-1}\\partial \\bar{\\partial }(\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}))^{n}\\prod _{i}|S_{i}|_{i}^{2}\\log ^{2}|S_{i}|_{i}^{2}\\prod _{j}(|S_{j}|_{j}^{2}+\\epsilon _{j}^{2})^{b_{j}}}{\\Omega ^{\\prime }\\prod _{k}(|S_{k}|_{k}^{2}+\\epsilon _{k}^{2})^{a_{k}}}\\\\ &-(\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2})-(\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}+\\eta \\sum \\limits _{j}\\mathcal {F}(|S_{j}|_{j}^{2},1-b_{j},\\epsilon _{j}^{2})),$ where $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(0)-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}\\ge -C^{\\prime }_{\\delta }$ by the assumption of zero Lelong number.", "By the maximum principle again it follows that $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}(t)-\\delta \\log |\\tilde{S}_{\\tilde{E}}|^{2}\\ge -e^{-t}C^{\\prime }_{\\delta }-C^{\\prime \\prime }_{\\delta }(1-e^{-t})\\ge -C_{\\delta },$ which concludes the proof of the Lemma.", "Next, we need the following upper bound for the time derivative for large time $t>T_{1}\\gg 1:$ Lemma 5.4 There exists a uniform constant $C>0$ such that $\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}\\le Cte^{-t}.$ We proceed as in [41].", "First, set $\\tilde{\\omega }:=\\tilde{\\omega }^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}.$ By direct computation, we have $\\frac{\\partial }{\\partial t}\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}=\\Delta 
_{\\tilde{\\omega }}\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}+tr_{\\tilde{\\omega }}\\dot{\\tilde{\\omega }}^{\\prime }_{t,u,v,\\tilde{\\epsilon }_{j}}-\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}.$ From (REF ), it follows that $&(\\frac{\\partial }{\\partial t}-\\Delta _{\\tilde{\\omega }})((e^{t}-1)\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}-C^{\\prime }t)\\\\ \\le &-tr_{\\tilde{\\omega }}(\\tilde{\\omega }^{\\prime }_{0,u,v,\\tilde{\\epsilon }})+n-C^{\\prime } \\le 0,$ where $C^{\\prime }>0$ is chosen to be large enough.", "Combined with Lemma REF the upper bound for $\\dot{\\tilde{\\varphi }}^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ follows by the maximum principle.", "Similarly to the proofs of the above Lemmas, we may continue to adapt the arguments in the proofs of Theorem REF and Theorem REF , using (REF ), to obtain the Laplacian estimate and higher order estimates for $\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}$ : Lemma 5.5 For any compact set $K\\subset X^{\\prime }\\setminus \\tilde{E}$ and any $m$ there exist constants $C(m,K)$ such that $|\\tilde{\\varphi }^{\\prime }_{l,u,v,\\tilde{\\epsilon }}|_{C^{m}(K\\times (0,+\\infty ))}\\le C(m,K).$ From Lemma REF , we may let the parameters $l,u,v,\\tilde{\\epsilon }$ approach their limits along appropriate sequences and obtain a limit $\\tilde{\\varphi }^{\\prime }(t)\\in C^{\\infty }((X^{\\prime }\\setminus \\tilde{E})\\times (0, \\infty ))$ having zero Lelong number for all $t$ .", "Moreover, by Lemmas REF and REF it follows that for a sufficiently large constant $K$ the function $\\tilde{\\varphi }^{\\prime }(x, t)+K(t+1) e^{-t}$ is non-increasing in time while bounded below by some function of $x$ only.", "We conclude the convergence $\\tilde{\\varphi }^{\\prime }(x, t) \\rightarrow \\tilde{\\varphi }^{\\prime }_{\\infty }(x)\\in C^{\\infty }(X^{\\prime 
}\\setminus \\tilde{E})$ as $t\\rightarrow \\infty $ where $\\tilde{\\varphi }^{\\prime }_{\\infty }(x)$ will also have zero Lelong number.", "Thus the function $\\tilde{\\varphi }(t)=\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-(1-e^{-t}) \\log \\log ^{2}|S_{i}|_{i}^{2} + \\tilde{\\varphi }^{\\prime }(t)$ will solve (REF ) and converge to a limit $\\tilde{\\varphi }_{\\infty }\\in C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})$ as $t\\rightarrow \\infty $ , where $\\tilde{\\varphi }_{\\infty }$ will also have zero Lelong number and will satisfy $\\log \\frac{(\\tilde{\\omega }^{\\prime }_{\\infty }+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty })^{n}\\prod _{i}|S_{i}|_{i}^{2}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}-\\tilde{\\varphi }_{\\infty }=0$ in the sense of currents and in the $C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})$ sense.", "We have thus established the convergence $\\tilde{\\omega }(t)\\rightarrow \\tilde{\\omega }^{\\prime }_{\\infty }+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty }=\\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty }$ as $t\\rightarrow \\infty $ in the sense of currents on $X^{\\prime }$ and in the $C^{\\infty }(X^{\\prime }\\setminus \\tilde{E})$ sense, where by (REF ) the limit metric satisfies $&Ric(\\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty })\\\\=&(Ric(\\Omega ^{\\prime })-\\sum _{i}\\Theta _{i}-\\sum _{j}b_{j}\\Theta _{j}+\\sum _{k}a_{k}\\Theta _{k})+2\\pi (\\sum _{i}[D_{i}]+\\sum _{j}b_{j}[E_{j}]-\\sum _{k}a_{k}[F_{k}])\\\\&-\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty }\\\\=&-(\\chi +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\infty })+2\\pi (\\sum _{i}[D_{i}]+\\sum _{j}b_{j}[E_{j}]-\\sum _{k}a_{k}[F_{k}])\\\\$ in the current sense.", "By the resolution $\\pi :X^{\\prime }\\rightarrow X,$ the normalized Kähler-Ricci flow on $X$ will converge to the unique Kähler-Einstein current in the current and 
$C^{\\infty }(X_{reg})$ -sense." ], [ "Improved lower bounds", "To show that $\\omega _{KE}$ has bounded local potential away from $X_{lc},$ we can modify our proof of Theorem REF .", "As in the previous section, we will again assume without loss of generality that $\\tilde{\\varphi }$ solves (REF ), but without the term $nt$ on the RHS.", "We first perturb this equation to the following: $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l} &=\\displaystyle \\log \\frac{(\\tilde{\\omega }_{t}+l^{-1}\\theta +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l})^{n}\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}-\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}\\\\\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}(0) &=\\varphi _{l, 0}\\end{array}\\right.$ As in the beginning of the proof of Theorem REF , for each $\\tilde{\\epsilon }_{i}, l$ there exists a unique solution $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}$ which is smooth away from $\\tilde{E}$ and moreover $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i},l}$ is uniformly bounded on $X^{\\prime }\\setminus \\tilde{E}$ with bound depending on $\\epsilon _{i}, l$ .", "As we can move $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}}$ to the left hand side of (REF ) and write $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}}+\\dot{\\tilde{\\varphi }}_{\\tilde{\\epsilon }_{i}}=e^{-t}\\frac{\\partial }{\\partial t}(e^{t}\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}})$ , we can argue as in the beginning of the proof of Theorem REF that $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}$ is non-increasing as each $\\epsilon _{i}\\searrow 0$ and $l\\rightarrow \\infty $ separately, and that by comparing (REF ) and (REF ) we have $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}\\searrow \\tilde{\\varphi }$ as $\\epsilon _{i}\\searrow 0$ and $l \\rightarrow 
\\infty $ .", "Next, we will suppress the index $l$ in the following, and similarly to Theorem REF , for any small enough constants $\\delta ,\\delta ^{\\prime },\\epsilon _{i},\\epsilon _{j^{\\prime }},\\epsilon _{k^{\\prime }}>0$ we set $\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}(t):&=\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}}(t)-\\delta ^{\\prime }(1-e^{-t})\\sum _{i}\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\\\&-\\delta e^{-t}\\sum _{i,j^{\\prime },k^{\\prime }}(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\nonumber $ Thus $\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}(t)$ satisfies $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{(\\tilde{\\omega }_{t,\\delta ^{\\prime },\\tilde{\\epsilon }} +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }})^{n}\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}{\\Omega ^{\\prime }\\prod _{k}|S_{k}|_{k}^{2a_{k}}}-\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}\\\\ \\;\\\\\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}(0)&= \\varphi _{l, 0}-\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2})),\\end{array}\\right.$ on $X^{\\prime }\\setminus {\\tilde{E}}$ where $\\tilde{\\omega }_{t,\\delta ^{\\prime },\\tilde{\\epsilon }}:&=\\tilde{\\omega }_{t}+l^{-1}\\theta +\\delta ^{\\prime }(1-e^{-t})\\sum _{i}\\sqrt{-1}\\partial \\bar{\\partial }\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\\\&+\\delta e^{-t}\\sum _{i,j^{\\prime },k^{\\prime 
}}\\sqrt{-1}\\partial \\bar{\\partial }(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\\\&\\ge e^{-t}(\\pi ^{*}\\omega _{0}-\\delta \\sum _{i,j^{\\prime },k^{\\prime }}(\\frac{|S_{i}|_{i}^{2}\\Theta _{i}}{|S_{i}|_{i}^{2}+\\epsilon _{i}^{2}}+\\frac{|S_{j^{\\prime }}|_{j^{\\prime }}^{2}\\Theta _{j^{\\prime }}}{|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2}}+\\frac{|S_{k^{\\prime }}|_{k^{\\prime }}^{2}\\Theta _{k^{\\prime }}}{|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}}))+l^{-1}\\theta \\\\&+(1-e^{-t})(\\chi -\\delta ^{\\prime }\\sum _{i}\\frac{|S_{i}|_{i}^{2}\\Theta _{i}}{|S_{i}|_{i}^{2}+\\epsilon _{i}^{2}})$ and by Assumption REF (4) and Remark REF we may thus assume $\\delta , \\delta ^{\\prime }$ are sufficiently small so that $\\tilde{\\omega }_{t,\\delta ^{\\prime },\\tilde{\\epsilon }}$ is uniformly positive on $X^{\\prime }$ for all $t\\in [0,+\\infty )$ , as $D_{i},D_{j^{\\prime }},D_{k^{\\prime }}$ are exceptional divisors.", "Similarly to the proof of Theorem REF , we choose a Kähler form $\\kappa $ on $X^{\\prime }$ so that $\\kappa \\le \\frac{\\tilde{\\omega }_{t,\\delta ^{\\prime },\\tilde{\\epsilon }}}{2}$ for $t\\in [0,+\\infty )$ and consider the following complex Monge-Ampère equation $(\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})^{n}=\\frac{C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}e^{\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}}\\prod _{k}|S_{k}|_{k}^{2a_{k}}\\Omega ^{\\prime }}{\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}$ where $C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ is chosen such that $[\\kappa ]^{n}=C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\int _{X^{\\prime }}\\frac{\\prod 
_{k}|S_{k}|_{k}^{2a_{k}}\\Omega ^{\\prime }}{\\prod _{i}(|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})^{1-\\delta ^{\\prime }}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}$ and moreover $C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ are uniformly bounded positive constants whose bounds are independent of $\\epsilon _{i}.$ By the $L^{\\infty }$ -estimates in [13] there exists a unique solution $\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ which is uniformly bounded independently of $\\epsilon _{i}.$ Similarly, setting $\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(t):=\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(t)-\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}$ , it follows that $\\tilde{\\xi }(t)$ satisfies the following equation $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }} &=\\displaystyle \\log \\frac{((\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})+(\\tilde{\\omega }_{t,\\delta ^{\\prime },\\tilde{\\epsilon }}-\\kappa )+\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }})^{n}}{(\\kappa +\\sqrt{-1}\\partial \\bar{\\partial }\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})^{n}}-\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}+\\log C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\\\ \\;\\\\\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}(0)&= \\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(0)-\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}.\\end{array}\\right.$ and we may conclude by a maximum principle argument as in §4.3 that $\\tilde{\\xi }_{\\delta ^{\\prime },\\tilde{\\epsilon }}(t)\\ge e^{-t}(\\tilde{\\phi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}(0)-\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}})+(1-e^{-t})\\log C_{\\delta ^{\\prime 
},\\tilde{\\epsilon }_{i}}.$ Combining this with the same initial bound (REF ), (REF ) and the similar fact that $\\log C_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}\\ge c_{\\delta ^{\\prime }},\\;|\\tilde{\\psi }_{\\delta ^{\\prime },\\tilde{\\epsilon }_{i}}|\\le C_{\\delta ^{\\prime }},$ it follows that $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}(t)&\\ge -C(\\delta ,\\delta ^{\\prime })+\\delta ^{\\prime }(1-e^{-t})\\sum _{i}\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})\\nonumber \\\\&+\\delta e^{-t}\\sum _{i,j^{\\prime },k^{\\prime }}(\\log (|S_{i}|_{i}^{2}+\\epsilon _{i}^{2})+\\log (|S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\epsilon _{j^{\\prime }}^{2})+\\log (|S_{k^{\\prime }}|_{k^{\\prime }}^{2}+\\epsilon _{k^{\\prime }}^{2}))\\nonumber \\\\&\\ge -C(\\delta ,\\delta ^{\\prime })+\\delta ^{\\prime }(1-e^{-t})\\sum _{i}\\log |S_{i}|_{i}^{2}+\\delta e^{-t}\\sum _{i,j^{\\prime },k^{\\prime }}(\\log |S_{i}|_{i}^{2}+\\log |S_{j^{\\prime }}|_{j^{\\prime }}^{2}+\\log |S_{k^{\\prime }}|_{k^{\\prime }}^{2}).$ Recall that $\\tilde{\\varphi }_{\\tilde{\\epsilon }_{i}, l}\\searrow \\tilde{\\varphi }$ as $\\epsilon _{i}\\searrow 0$ and $l\\uparrow \\infty $ .", "In particular $\\tilde{\\varphi }$ satisfies the same lower bound estimate above on $(X^{\\prime }\\setminus \\tilde{E})\\times [0, \\infty )$ and thus extends to be in $L^{\\infty }_{loc}(X^{\\prime }\\setminus \\tilde{E})$ for each $t$ .", "Moreover, by the fact that $\\varphi =\\varphi ^{\\prime } +\\eta \\sum _{j}|S_{j}|_{j}^{2(1-b_{j})}-t \\log \\log ^{2}|S_{i}|_{i}^{2}$ and Lemma REF , $\\tilde{\\varphi }=-\\infty $ on the lc divisors $D_{i}.$ This completes the proof of Theorem REF ."
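For the reader's convenience, the limiting lower bound can be recorded explicitly; this is just the $\epsilon _{i}\searrow 0$ , $l\rightarrow \infty $ limit of the display above (with the time factor $(1-e^{-t})$ matching the first line of that display), not a new estimate:

```latex
\tilde{\varphi}(t)\;\ge\; -C(\delta,\delta')
   +\delta'(1-e^{-t})\sum_{i}\log|S_{i}|_{i}^{2}
   +\delta e^{-t}\sum_{i,j',k'}\Bigl(\log|S_{i}|_{i}^{2}
     +\log|S_{j'}|_{j'}^{2}+\log|S_{k'}|_{k'}^{2}\Bigr)
\qquad \text{on } (X'\setminus\tilde{E})\times[0,\infty).
```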
], [ "Kähler-Ricci flow through birational surgeries with lc singularities (proof of Theorem ", "In this section we will discuss the behaviour of the Kähler-Ricci flow with log canonical singularities when birational surgeries happen.", "As in [34], we will relate the Kähler-Ricci flow with lc singularities to the minimal model program with scaling.", "First we briefly recall some related background material on the MMP with scaling and recommend [4], [34] for more details.", "For a $\\mathbb {Q}$ -factorial projective variety $X$ with log terminal singularities, when $K_{X}$ is not nef, there exist extremal rays generated by algebraic curves which have negative intersection number with $K_{X}$ by the cone theorem.", "Let $H$ be a $\\mathbb {Q}$ -semi-ample and big divisor; by the rationality theorem $\\lambda _{0}:=\\inf \\lbrace \\lambda >0|\\lambda H+K_{X}\\;is\\;nef\\rbrace $ is a positive rational number.", "By the Kawamata base point free theorem, the divisor $\\lambda _{0}H+K_{X}$ is semi-ample and induces a morphism $\\pi :=\\Phi _{|m(\\lambda _{0}H+K_{X})|}: X\\rightarrow Y$ for sufficiently large $m\\in \\mathbb {N}$ which contracts all curves $C$ satisfying $(\\lambda _{0}H+K_{X})\\cdot C=0$ to points.", "Now considering the image of this contraction morphism, there are several different cases: If $dim Y<dim X,$ $X$ is called a Mori fiber space where all fibers are Fano varieties.", "If $dim Y=dim X,$ i.e., $\\pi $ is a birational morphism, depending on the dimension of the exceptional locus in $X,$ there are two different cases: If $dim Exc(\\pi )=dim Y-1,$ $\\pi $ is a divisorial contraction and we replace $X$ by $Y$ and let $H_{Y}$ be the strict transform of $\\lambda _{0}H+K_{X}$ by $\\pi .$ Then continue the process to $(Y,H_{Y}).$ If $dim Exc(\\pi )<dim Y-1,$ $\\pi $ is a small contraction and there exists a flip $X\\rightarrow X^{+}$ as in (REF ).", "Let $H_{X^{+}}$ be the strict transform of $\\lambda _{0}H+K_{X}$ and continue the process to $(X^{+},H_{X^{+}}).$ 
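The threshold $\lambda _{0}$ above also controls the maximal existence time of the associated Kähler-Ricci flow; as a one-line check of the relation $T_{0}=1/\lambda _{0}$ (our remark, using only that nefness is invariant under scaling by $t>0$ and that, since $H$ is nef, the nef values of $\lambda $ form the closed ray $[\lambda _{0},\infty )$ ):

```latex
H+tK_{X}=t\Bigl(\tfrac{1}{t}H+K_{X}\Bigr)\ \text{is nef}
\iff \tfrac{1}{t}\ge \lambda_{0}
\iff t\le \tfrac{1}{\lambda_{0}},
\qquad\text{so}\qquad
\sup \lbrace t>0 \,|\, H+tK_{X}\ \text{is nef}\rbrace =\frac{1}{\lambda_{0}}.
```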
By [5], [15], [20], this program exists for $\\mathbb {Q}$ -factorial varieties with log canonical singularities, and birational surgeries including divisorial contractions and flips also exist when there exist extremal rays and the induced contraction morphism is birational.", "As in [34], we can relate the above to the Kähler-Ricci flow as follows.", "Note that in Theorem REF , the solution $\\omega (t)$ to the Kähler-Ricci flow exists up to the rational time $T_{0}:=\\sup \\lbrace t>0|H+tK_{X}\\;is\\;nef\\rbrace =\\frac{1}{\\lambda _{0}}$ .", "Also, $H+T_0 K_{X}$ is semi-ample and thus for a sufficiently large $m\\in \\mathbb {N}$ there exists a morphism $\\pi :=\\Phi _{|m(H+T_{0}K_{X})|}: X\\rightarrow Y.$ Our goal here will be to show there exists a limit current $\\omega (T_0)$ which pushes down to $Y$ such that we can continue to flow the pushed-down limit by the Kähler-Ricci flow on $Y$ using Theorem REF .", "For this, we will assume that $H+T_{0}K_{X}$ is in fact big and nef, and that $\\pi $ is birational and induces divisorial contractions or flips as above for the projective variety $X$ with log canonical singularities.", "Resolving the singularities of $Y$ and the image of $Exc(\\pi )$ , we obtain $\\mu :\\tilde{X}\\rightarrow X\\rightarrow Y$ which satisfies that $\\tilde{X}$ is smooth and $\\pi ^{-1}\\circ \\mu : \\tilde{X}\\rightarrow X$ is a resolution of singularities of $X$ .", "There exists an effective divisor $E_{Y}$ on $\\tilde{X}$ such that $(\\pi ^{-1}\\circ \\mu )^{*}[H+T_{0}K_{X}]-\\delta [E_{Y}]$ is ample for any positive $\\delta \\ll 1$ and $supp(E_{Y})$ coincides with $Exc(\\pi ^{-1}\\circ \\mu ).$ Recall that in proving Theorem REF we established the solution to the complex Monge-Ampère flow (REF ) on $\\tilde{X}\\times [0, T_0)$ which pushes down to a solution to the following complex Monge-Ampère flow equation: $\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{\\partial }{\\partial t}\\varphi &=\\displaystyle \\log \\frac{(\\hat{\\omega 
}_{t}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi )^{n}}{\\Omega }\\\\\\varphi (0) &= \\varphi _{0},\\end{array}\\right.$ on $X\\times [0,T_{0})$ where $\\hat{\\omega }(t):=\\omega _{0}-tRic(\\Omega )$ and $\\Omega $ is the so-called adapted measure on $X$ (from [34]) satisfying $\\pi ^{*}\\Omega =\\frac{\\prod _{k}|S_{k}|_{k}^{2a_{k}}}{\\prod _{i}|S_{i}|_{i}^{2}\\prod _{j}|S_{j}|_{j}^{2b_{j}}}\\tilde{\\Omega }$ for the resolution $\\mu $ .", "Moreover, the solution $\\varphi (t)$ is smooth on $X_{reg}\\times (0,T_{0})$ and gives rise to a current with zero Lelong number.", "As in [34], we have the following estimates: Lemma 6.1 Suppose $\\varphi \\in L^{\\infty }_{loc}((X\\setminus X_{lc})\\times [0,T_{0}))\\bigcap L^{\\infty }(X_{reg}\\times (0,T_{0}))$ solves (REF ) in the sense of Theorem REF , then $|\\varphi |_{L^{\\infty }}(K\\times [0,T_{0}))\\le C_{K}$ for any $K\\subset \\subset (X\\setminus \\pi ^{-1}(Y_{lc}));$ $|\\varphi |_{C^{k}}(K\\times [0,T_{0}))\\le C_{K,k}$ for any $K\\subset \\subset (X_{reg}\\setminus Exc(\\pi ))$ and $k\\in \\mathbb {N}.$ This follows from the proof of Theorem REF , which constructed a solution $\\tilde{\\varphi }$ to the lifted Monge-Ampère flow equation (REF ) on $\\tilde{X}\\times [0,T_{0})$ .", "In particular, as $H+T_{0}K_{X}$ is big and semi-ample in this case, the local $L^{\\infty }$ -estimates there hold on $(\\tilde{X}\\setminus \\mu ^{-1}(Y_{lc}))\\times [0,T_{0}]$ and moreover $\\tilde{\\varphi }$ gives rise to a current with zero Lelong number on $\\tilde{X}\\times [0,T_{0}].$ Thus the first conclusion follows.", "Also the higher order estimates for $\\tilde{\\varphi }$ hold on $(\\tilde{X}\\setminus E_{Y})\\times [0,T_{0}].$ As $\\mu (E_{Y})\\subset (\\pi (Exc(\\pi )\\bigcup (X\\setminus X_{reg}))),$ the second conclusion follows.", "Now we have the following corollary which describes the limit behaviour of the Kähler-Ricci flow at the singular time $t=T_{0}:$ Corollary 6.2 Let $\\varphi _{T_{0}}:=\\lim \\limits _{t\\nearrow 
T_{0}}\\varphi (t)$ on $X,$ then the limit current $\\omega (T_{0})=\\hat{\\omega }_{T_{0}}+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi _{T_{0}}$ descends to a semi-positive current with zero Lelong number on $Y.$ From Lemma REF , $\\varphi _{T_{0}}\\in PSH(X,\\hat{\\omega }_{T_{0}})\\bigcap L^{\\infty }_{loc}(X\\setminus \\pi ^{-1}(Y_{lc}))$ is also smooth on $X_{reg}\\setminus Exc(\\pi )$ and gives rise to a current with zero Lelong number.", "Note by definition we have $ [\\hat{\\omega }(T_0)]=[H+T_0 K_X]$ .", "On the other hand, from the birational morphism $\\pi :=\\Phi _{|m(H+T_{0}K_{X})|}: X\\rightarrow Y$ we see that $[H+T_0 K_X]$ also contains the pullback $\\pi ^{*}\\hat{\\omega }_{Y}$ where $\\hat{\\omega }_{Y}$ is the restriction to $Y$ of a multiple of the Fubini-Study metric on $\\mathbb {CP}^{N_{m}}$ .", "Thus on any fiber of $\\pi $ we must have $[\\hat{\\omega }(T_0)]=[\\pi ^{*}\\hat{\\omega }_{Y}]=0$ and just as in the proof of Theorem 1.1, we may conclude that $ \\hat{\\omega }(T_0)+\\sqrt{-1}\\partial \\bar{\\partial }\\varphi (T_0)$ descends to $Y$ as in the Corollary while $\\varphi _{T_{0}}$ descends to a potential in $L^{\\infty }_{loc}(Y\\setminus Y_{lc}).$ Remark 6.3 We note here that at the singular time $t=T_{0}$ the set where the local potential is $-\\infty $ could be larger than $X_{lc}.$ The newly generated locus may come from the contraction $\\pi .$ The local potential will be $-\\infty $ along any fiber of $\\pi $ which has nonempty intersection with $X_{lc}.$ To show that the Kähler-Ricci flow can be extended through the divisorial contractions or flips, similarly to [34], we only need to check whether the new initial metric satisfies the conditions of Theorem REF on the new variety.", "In particular, we have Theorem 6.4 Given a $\\mathbb {Q}$ -factorial projective variety $X$ with log canonical singularities and a $\\mathbb {Q}$ -semi-ample divisor $H,$ let $\\omega (t)$ be the solution to the Kähler-Ricci flow on $X\\times 
[0,T_{0})$ with $\\omega (0)\\in [H]$ and $T_{0}:=\\sup \\lbrace t>0|H+tK_{X}\\;is\\;nef\\rbrace $ as in Theorem REF .", "Suppose $H+T_{0}K_{X}$ is big and semi-ample and induces a birational morphism $\\pi :X\\rightarrow Y$ .", "Let $\\omega (T_0)$ descend to the semi-positive current $\\omega _{Y}$ on $Y$ as in Corollary REF .", "Then if $\\pi $ is a divisorial contraction, there exists a solution to the Kähler-Ricci flow on $Y$ starting with $\\omega _{Y};$ if $\\pi $ is a small contraction and there exists a flip $\\bar{\\pi }=\\pi ^{+}\\circ \\pi :X^{+}\\rightarrow X$ defined as in (REF ) in Theorem REF with the property that $X^{+}_{lc}\\bigcap Exc(\\pi ^{+})=\\varnothing ,$ there exists a solution $\\omega ^{+}(t)$ to the Kähler-Ricci flow on $X^{+}$ starting with $\\pi ^{+*}\\omega _{Y}$ and $\\omega ^{+}(t)$ converges to $\\pi ^{+*}\\omega _{Y}$ as $t\\searrow T_{0}$ in both current and $C^{\\infty }(X^{+}_{reg}\\setminus Exc(\\pi ^{+}))$ -senses.", "In [34], to continue the Kähler-Ricci flow after birational surgeries, they need to verify the $L^{p}$ -integrability condition of the new measure for $p>1.$ However, by Theorem REF we only need to verify that the new initial metric is a current with zero Lelong number and local potential in $L^{\\infty }_{loc}$ away from the log canonical locus.", "In the divisorial contraction case, by Lemma REF and Corollary REF those conditions are satisfied automatically on $Y$ so the Kähler-Ricci flow can be continued on $Y$ directly.", "In the flip case, the assumption $X^{+}_{lc}\\bigcap Exc(\\pi ^{+})=\\varnothing $ guarantees that $\\pi ^{+*}\\omega _{Y}$ satisfies the initial condition of Theorem REF and all the conclusions follow from Theorem REF and continuity properties at the initial time in §4.", "Remark 6.5 The assumption $X^{+}_{lc}\\bigcap Exc(\\pi ^{+})=\\varnothing $ can be dropped.", "In fact, when the flip exists for a $\\mathbb {Q}$ -factorial variety with log canonical singularities, the log canonical 
loci of $X,Y,X^{+}$ are essentially isomorphic.", "We thank Professor Chenyang Xu for pointing out this property.", "As in [34], we can also find a good initial semi-ample divisor $H$ such that at each singular time the induced contraction only contracts exactly one extremal ray and performs the corresponding birational surgery.", "This process will terminate in finitely many steps when we arrive at a minimal model or a Mori fiber space with log canonical singularities." ], [ "Further Discussions", "This paper is only a starting point for studying the MMP with log canonical singularities via the Kähler-Ricci flow.", "We will briefly discuss some further problems here.", "First, instead of a single variety, pairs of the form $(X,D)$ are more frequently studied in the MMP, where $X$ is the variety we studied in this paper and $D$ is an effective simple normal crossing divisor on $X$ of the form $D=\\sum _{i}a_{i}D_{i}$ , where $D_{i}$ are irreducible and $a_{i}\\in (0,1]$ are rational numbers.", "In this case, as the twisted canonical line bundle is $[K_{X}+D],$ we can design a twisted Kähler-Ricci flow with conical and cusp singularities: $\\frac{\\partial }{\\partial t}\\omega (t)=-Ric(\\omega (t))+\\sum _{i}2\\pi a_{i}[D_{i}].$ Such twisted Kähler-Ricci flows in the manifold case have been studied in [11], [26], [25], [28].", "By the log resolution $\\pi :X^{\\prime }\\rightarrow X$ of the pair $(X,D),$ we have the general adjunction formula $K_{X^{\\prime }}=\\pi ^{*}(K_{X}+\\sum _{i}a_{i}D_{i})+\\sum _{j}b_{j}E_{j},$ where some $E_{j}$ are strict transforms of the $D_{i}$ and some are exceptional divisors.", "Combining the techniques in Theorem REF and [11], [28], we can show that the twisted Kähler-Ricci flow exists whenever the corresponding cohomology class of the evolving metric is nef.", "Moreover, we can show that along $D_{i}\\bigcap X_{reg}$ the evolving metric simultaneously has conical singularities with angle $2\\pi (1-a_{i})$ when $a_{i}\\in (0,1)$ 
and cusp singularities when $a_{i}=1.$ In section 5 we showed the convergence of the Kähler-Ricci flow on semi-log canonical models.", "One problem is the long time behaviour of the Kähler-Ricci flow on general minimal varieties with log canonical singularities.", "As [2] showed, the Kodaira dimension of such varieties cannot be $-\\infty $ ; what, then, are the long time behaviours for nonnegative Kodaira dimension?", "For the smooth minimal manifold case this problem has been studied in [32], [33], and for varieties with log terminal singularities in [14], [34], [38].", "Moreover, the geometric convergence problem with singularities is still quite challenging; see the last section of [34] for a discussion in the smooth case." ] ]
1906.04343
[ [ "Multiscale Nakagami parametric imaging for improved liver tumor\n localization" ], [ "Abstract Effective ultrasound tissue characterization is usually hindered by complex tissue structures.", "The interlacing of speckle patterns complicates the correct estimation of backscatter distribution parameters.", "Nakagami parametric imaging based on localized shape parameter mapping can model different backscattering conditions.", "However, performance of the constructed Nakagami image depends on the sensitivity of the estimation method to the backscattered statistics and scale of analysis.", "Using a fixed focal region of interest in estimating the Nakagami parametric image would increase estimation variance.", "In this work, localized Nakagami parameters are estimated adaptively by means of maximum likelihood estimation on a multiscale basis.", "The varying size kernel integrates the goodness-of-fit of the backscattering distribution parameters at multiple scales for more stable parameter estimation.", "Results show improved quantitative visualization of changes in tissue specular reflections, suggesting a potential approach for improving tumor localization in low contrast ultrasound images." 
], [ "Introduction", "Ultrasound parametric imaging is gaining increased interest as an effective way for quantitative tumor characterization.", "Changes in properties of soft tissue texture, e.g.", "liver parenchyma, can be reflected in the radio-frequency (RF) backscattered statistics as different Rayleigh distributions [1].", "However, parametric estimation is not a straightforward process and is generally faced with increased estimation variance, especially for complex speckle patterns.", "This may obscure abnormal tissue structures, e.g.", "tumors and fibrosis, which are deemed important for early diagnosis [2].", "The analytical simplicity of the bi-parametric Nakagami distribution model, along with its goodness-of-fit with the envelope histogram of the ultrasound-backscattered signal [3], makes it attractive for tissue characterization [1], [4], [5].", "The shape of the Nakagami distribution is specified by the $\mu $ parameter corresponding to the local concentration of scatterers, and the amount of spread (i.e.", "the local backscattered energy) is represented by the scale parameter $\omega $ .", "Different conditions of the RF envelope statistics can be modeled by varying the $\mu $ parameter.", "Values of $\mu $ in the interval $(0, 1]$ yield pre-Rayleigh and Rayleigh distributions.", "The Rayleigh distribution case $\left(\mu = 1\right)$ corresponds to having a large number of randomly distributed scatterers, and in the case of a high degree of variance the distribution conforms to pre-Rayleigh $\left(\mu < 1\right)$ .", "For a mixture of random and periodically located scatterers, the RF envelope statistics become a post-Rayleigh distribution $\left(\mu > 1\right)$ .", "The map of local $\mu $ parameter values – which correspond to tissue properties – is normally considered in constructing the Nakagami parametric image.", "The estimated Nakagami parameters as a function of the backscattered envelope statistics have been shown to be a reliable tool for 
quantitative visualization of tissue structure changes [4], [5], [6].", "Previous work has improved local window-based $\mu $ parameter estimation to generate the Nakagami parameter map from envelopes of raw ultrasound signals [7], [8], [9].", "Examples include using a gamma kernel density estimate to achieve a smooth estimate of the distribution from small fixed-size windows [7], using windows with a size 3 times the pulse length of the ultrasound [8], or summing and averaging multiple Nakagami parametric images generated using different sliding square window sizes (7-10 times the transducer pulse length) [9].", "However, challenges persist with fixed-size window approaches.", "A focal region of interest (i.e.", "using small windows) enables enhanced resolution of the Nakagami parametric image, but large tissue structures require a large spatial scale to achieve stable parameter estimation.", "Therefore parameter smoothing might not suffice when prominent parts of the examined tissue structure are truncated or located outside the window borders.", "On the other hand, using large window sizes for summation and averaging may affect the results when compounded with smaller-sized windows, hence affecting the reliability and resolution of the constructed parametric image.", "In this work an alternative approach is proposed that employs a multiscale kernel-based technique to model the backscattering distribution statistics.", "The focal region of interest should be large enough to capture sufficient tissue variation, while also being small enough to avoid including irrelevant textures from nearby regions.", "The backscattered envelope from tissue was estimated voxel-by-voxel via the Nakagami distribution and subsequently used to generate optimized local parametric images for improved liver tumor detection.", "The assumption is that tumor regions tend to have a different backscattered distribution than normal tissue [10], and a localized approach based 
on a varying-size kernel can assist in better identifying the tumor speckle patterns." ], [ "Methodology", "The speckle pattern is dependent on the ultrasound wavelength and the underlying tissue structure.", "As the former factor is fixed in this case and the latter varies across the ultrasound image, a multiscale approach that can localize the different tissue structures would best suit the modeling of the backscattered envelope.", "A non-linear kernel is applied adaptively to characterize the tissue speckle patterns.", "The estimation of the best parametric Nakagami maps from varying-size kernels is described as follows:" ], [ "Multiscale kernel localization", "Let $V$ be an ordered set of constructed envelope-detected RF images $I_{i}\left(x, y\right)$ , where $i$ is a certain slice in the acquired volume, and $P_{\mu ,\omega }(x,y)$ are the corresponding $\mu $ and $\omega $ parametric images.", "The RF images are calculated from the envelope of the ultrasound-backscattered signal before any intensity mapping and post-processing filtering are performed.", "This representation preserves the ultrasound data unaltered while providing better quantitative analysis, i.e.", "without the risk of losing information due to RF signal shaping.", "Then a set of varying-size kernels ${K}$ can be defined for each $I_{i}\left(x, y\right)$ , where ${K} =\left\lbrace v_{1},v_{2},\ldots ,v_{k}\right\rbrace , v_{j} \in V$ and $k = 1/8$ of the size of $I_{i}\left(x, y\right)$ .", "Different focal regions are investigated in a multiscale manner as in (1) by varying two non-negative integer variables $a$ and $b$ , which are used to center each localized kernel $v_{j}(s, t)$ with a different size of $m \times n$ on each voxel $l$ in $I_{i}\left(x, y\right)$ of size $M \times N$ .", "$P_{\mu ,\omega }(x,y)= \sum _{s=-a}^{a}\sum _{t=-b}^{b}v_{j}\left(s,t\right)I_{i}\left(x+s,y+t\right)\left(\frac{k}{j}\right)^2$ where $a={\frac{m+2}{2}}$ , 
$b={\\frac{n+2}{2}}$ , and $m$ , $n = 1,2,\\dots ,k$ ." ], [ "Modeling backscattered statistics", "The Nakagami distribution $N(x)$ is a gamma related distribution which is known for its analytical simplicity [5], and has been proposed as a general model for ultrasonic backscattering under different scattering conditions and scatterer densities [3].", "This distribution has the density function $N(x|\\mu ,\\omega )=2\\left(\\frac{\\mu }{\\omega }\\right)^{\\mu }\\frac{1}{\\Gamma \\left(\\mu \\right)} x^{\\left(2\\mu -1\\right)} e^{-\\frac{\\mu }{\\omega }x^{2}} , \\quad \\forall x \\in \\mathbb {R} \\ge 0$ where $x$ is the envelope of the RF signal and $\\Gamma \\left(\\cdot \\right)$ is the gamma function.", "If $x$ has a Nakagami distribution $N(x)$ with parameters $\\mu $ and $\\omega $ , then $x^2$ has a gamma distribution $\\Gamma $ with shape $\\mu $ and scale (energy) parameter $\\omega $ /$\\mu $ .", "Although there are other distributions exist in the literature for modeling ultrasonic backscattering, the Nakagami probabilistic distribution was chosen for its simplicity and ability to characterize different scattering conditions ranging from pre- to post-Rayleigh [6].", "Each voxel $l$ in $I_i$ is adaptively transformed via ${K}$ at different scales to its corresponding parametric Nakagami parameters by means of maximum likelihood estimation (MLE) forming a set of parametric vectors for each voxel $l$ .", "The MLE $\\hat{\\theta } \\left(v\\right)$ for a density function $f\\left(v^l_{1},\\ldots ,v^l_{k} | \\theta \\right)$ when $\\theta $ is a vector of parameters for the Nakagami distribution family $\\Theta $ , estimates the most probable parameters $\\hat{\\theta }\\left(v\\right) = arg max_\\theta \\: D\\left(\\theta |v^l_{1},\\ldots ,v^l_{k}\\right)$ , where $D\\left(\\theta |v\\right) = f\\left(v|\\theta \\right), \\!", "\\theta \\in \\Theta $ is the score function.", "Finally the goodness-of-fit is estimated via root mean square error for the 
calculated Nakagami parameters $\theta _m$ at different scales, giving the localized parametric Nakagami images $P_{\mu ,\omega }$ as summarized in Algorithm 1.", "The shape parametric image $P_{\mu }$ is used for subsequent tissue characterization.", "[!ht] Multiscale kernel localization Set of ultrasound backscattered envelope images $I_i = \left\lbrace \left(x_1,y_1,\ldots ,x_{j},y_{j} \right)\right\rbrace $ Localized ultrasound Nakagami shape and scale parametric images $P_{\mu }, P_{\omega }$ voxels $l$ in $I_i$ localized kernels $\nu _{1}^{l} \rightarrow \nu _{k}^{l}$ Step1 // Fit with a Nakagami distribution $N(x|\mu ,\omega )=2\left(\frac{\mu }{\omega }\right)^{\mu }\frac{1}{\Gamma \left(\mu \right)} x^{\left(2\mu -1\right)}e^{-\frac{\mu }{\omega }x^{2}}$ Step2 // Calculate Nakagami shape $\mu $ and scale $\omega $ parameters using maximum likelihood estimation as: $\hat{\theta }\left(\nu \right) = \arg \max _\theta \: D\left(\theta |\nu ^l_{1},\ldots ,\nu ^l_{k}\right)$ where $\theta $ is a vector of parameters for the Nakagami distribution family $f\left(\nu ^l_{1},\ldots ,\nu ^l_{k}|\theta \right)$ Step3 // Estimate goodness-of-fit of the determined Nakagami parameters $\theta _{m}$ with the average RF signal $\theta _{\alpha }$ within the set of localized kernels ${K}$ $\left(P_{\mu _{1},\omega _{1}},\dots , P_{\mu _{j},\omega _{j}}\right) = \operatorname{\arg \!\min }\left\lbrace \sqrt{\frac{\sum \limits _{s=2}^n \left(\theta _{m} - \theta _{\alpha }\right)^2}{n}}\right\rbrace $ $P_{\mu }, P_{\omega }$ Figure: Samples of simulated ultrasound speckle images representing: [left-right] fine texture (dense scatterers), coarse texture (sparse scatterers), heterogeneous texture (random scatterers), and homogeneous texture (periodic scatterers), referring to, respectively, phantoms D57, D30, D5, D37 in Table .Table: Mean absolute difference comparison of estimated Nakagami shape 
parametric images against ground-truth." ], [ "Simulated ultrasound speckle images", "Simulation experiments were performed on 11 different ultrasound speckle images generated from corresponding texture images adapted from the Brodatz texture album [11].", "The ultrasound speckle images were synthesized with the given textures as the initial point scatterer image, giving clinical echo-like images that resemble tissue scatterers in appearance.", "Various specular reflection conditions of tissue texture boundaries were synthesized, ranging from fine to coarse (i.e.", "high-density to low-density scatterers) and from heterogeneous to homogeneous (i.e.", "random to periodic scatterer alignment), cf.", "Fig.", "REF .", "The window-based $\mu $ parameter estimation methods – gamma kernel function (GKF) [7] and windows-modulated compounding (WMC) [9] – and the proposed multiscale kernel localization (MKL) method were applied to the simulated ultrasound speckle images, and their performance was quantitatively compared with the $\mu $ parameters estimated from the original synthetic texture images (ground-truth), as shown in Table REF .", "Results show that the MKL method gives more stable $\mu $ parameter estimation in nearly all cases." 
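The window-based $\mu $ estimation compared above can be sketched in a few lines of Python. This is our own illustrative reimplementation, not the authors' code: the fixed window size and the moment-based estimator (standing in here for the MLE used in the paper) are assumptions.

```python
import numpy as np


def nakagami_shape(envelope):
    """Estimate the Nakagami shape parameter mu from envelope samples.

    Uses the inverse normalized variance of the intensity I = envelope**2
    (a moment-based stand-in for the MLE discussed in the text):
    mu_hat = E[I]^2 / Var[I]; mu = 1 for fully developed (Rayleigh) speckle.
    """
    intensity = np.asarray(envelope, dtype=float) ** 2
    return intensity.mean() ** 2 / intensity.var()


def mu_parameter_map(image, window=7):
    """Fixed-window sliding estimate of the local mu map (the baseline that
    the multiscale kernel approach improves on; sizes are illustrative)."""
    half = window // 2
    rows, cols = image.shape
    mu_map = np.full((rows, cols), np.nan)  # borders left undefined
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            patch = image[r - half:r + half + 1, c - half:c + half + 1]
            mu_map[r, c] = nakagami_shape(patch.ravel())
    return mu_map


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fully developed speckle has a Rayleigh envelope, so mu should be near 1.
    envelope = rng.rayleigh(scale=1.0, size=20000)
    print(f"estimated mu for Rayleigh speckle: {nakagami_shape(envelope):.3f}")
```

With a 7x7 window each estimate pools only 49 samples, which makes the per-voxel variance visibly larger than the global estimate; this is exactly the resolution/stability trade-off the multiscale kernel is designed to address.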
], [ "Liver tumor detection", "In order to quantitatively evaluate the robustness of the 3 different Nakagami parametric image estimation methods, they were applied to real ultrasound liver tumor images obtained using a diagnostic ultrasound system (z.one, Zonare Medical Systems, Mountain View, CA, USA) with a 4 MHz curvilinear transducer and 11 MHz sampling.", "The whole RF ultrasound image (without log-compression and filtering) was used in generating the Nakagami parametric images, so that the sensitivity of the methods to various tissue scatterers could be investigated.", "Fig.", "REF shows an ultrasound liver tumor image and the corresponding Nakagami parametric images via the 3 different methods.", "Tumor tissue specular reflections tend to appear more prominent against the background tissue using the MKL approach as compared to the GKF and WMC window-based $\mu $ parameter estimation methods.", "The different kernel sizes used in generating the Nakagami parametric image using the MKL method are shown in Fig.", "REF .", "Figure: (a) Clinical ultrasound B-mode image showing a liver tumor (indicated by a yellow arrow), and corresponding Nakagami shape parametric image using (b) WMC, (c) GKF, and (d) MKL methods.Figure: Localized adaptive kernel sizes for Fig.", "(d)." 
], [ "DISCUSSION", "Tumor texture tends to be more heterogeneous as compared to normal tissue [12].", "This property has been reported to be useful in tumor grading [12], [13], [14], [15] and in assessing aggressiveness [16].", "However, tumor spatial and contrast resolution in ultrasound images is low as compared to other modalities.", "Modeling the RF backscattered envelope from liver tissue requires an adaptive method that can effectively investigate tissue heterogeneity while reliably estimating the distribution parameters.", "Different spatial variations exist in speckle patterns across the ultrasound image due to the Rayleigh scattering behavior [1], and many tissue structures are prone to low spatial contrast and displacement during successive image acquisition.", "This makes the use of constant focal regions in estimating the backscattering distribution parameters very limiting.", "Such an approach may result in missing parts of the analyzed speckle pattern if the focal region is too small, or in interlacing irrelevant patterns from surrounding regions if the focal region is too large.", "Thereby subtle tissue structures (e.g.", "tumor regions in their early stages) could be obscured due to the presence of a mixture of patterns.", "Different window sizes have diverse effects on the formation of the Nakagami parametric image.", "The experiments on simulated and real ultrasound images demonstrated the need for an adaptive approach that can enhance image resolution without degrading smoothness, i.e.", "having stable parameter estimation.", "Stable performance was achieved using the MKL method when applied to diverse speckle patterns simulating different soft tissue conditions in clinical practice.", "The exceptional case of D57 in Table REF – which had a fine tissue structure – did not give the most stable $\mu $ estimation.", "The uniform speckle pattern appearance across the D57 image texture would reduce the sensitivity to texture variations of the adaptive 
approach employed by MKL.", "Thus a varying spatial resolution throughout the entire imaging field of view would not be best for this particular case [17].", "However, in clinical practice, liver tissue characterization involves analyzing the whole ultrasound image before the tumor is localized (cf.", "Fig.", "REF (a)), which means encountering regions with different tissue characteristics; thus a non-varying window size may reduce the reliability of Nakagami imaging.", "The ability to rapidly and accurately identify tumor location in ultrasound images is limited due to inherently low contrast.", "Therefore ultrasound parametric imaging is normally applied to analyze the RF envelope statistics to give an indication of the properties of tissue scatterers.", "Fig.", "REF (d) shows visual improvement in the contrast between the specular reflections of tissue boundaries (cf.", "Fig.", "REF (b) and Fig.", "REF (c)), with a stronger parametric response in the localized tumor region compared to the surrounding tissue.", "This could be attributed to the adaptive approach of the MKL method, which integrates the goodness-of-fit of the backscattering distribution parameters at multiple scales before parameter estimation.", "Examining how the focal regions vary in size as shown in Fig.", "REF , the MKL method allows for the aggregation of sufficient voxels to better represent the envelope statistics and thereby highlight differences in tissue properties.", "Such a localized multiscale neighborhood around each voxel contributes to the best resolution and improved smoothness.", "Finally, a number of challenges may arise with the employment of Nakagami imaging in tumor segmentation, such as the presence of blood vessels, ducts and other connective tissues.", "Although these small areas might give signs of liver inflammation, they would rather degrade the local image resolution and hence affect the smoothness of the parameter estimation.", "Also tumor spatial contrast varies according to depth 
and level of speckle artifacts.", "Addressing such challenges is left as future work on improving the accurate segmentation of tumor boundaries in Nakagami parametric images." ], [ "CONCLUSION", "Nakagami parametric imaging based on localized shape parameter maps can model different backscattering conditions.", "Results show more stable estimation of the backscattering distribution parameters within a varying-size kernel by means of MLE.", "Moreover, improved highlighting of tumor tissue specular reflections in ultrasound images was achieved.", "The proposed technique could serve as a decision support tool to model the statistical distribution of ultrasound backscatter signals for improved detection of liver tumors." ] ]
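The envelope-to-gamma relationship stated in the Methodology (if $x$ is Nakagami$(\mu ,\omega )$ then $x^{2}$ is gamma-distributed with shape $\mu $ and scale $\omega /\mu $) gives a convenient route to the MLE used throughout the paper. The following sketch is our own illustration, not the authors' implementation: SciPy's gamma fit with the location pinned at zero plays the role of the MLE, and the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

mu_true, omega_true = 1.5, 2.0  # illustrative shape and scale (energy)
rng = np.random.default_rng(1)

# Synthesize a Nakagami(mu, omega) envelope via the gamma relationship:
# the intensity x^2 is Gamma(shape=mu, scale=omega/mu).
intensity_true = rng.gamma(shape=mu_true, scale=omega_true / mu_true,
                           size=50_000)
envelope = np.sqrt(intensity_true)

# MLE of (mu, omega): fit a gamma law to the intensity with loc fixed at 0.
intensity = envelope ** 2
shape, loc, scale = stats.gamma.fit(intensity, floc=0)
mu_hat, omega_hat = shape, shape * scale
print(f"mu_hat = {mu_hat:.3f}, omega_hat = {omega_hat:.3f}")
```

Since the gamma MLE satisfies `shape * scale = mean(intensity)`, the recovered $\omega $ is essentially the local backscattered energy, matching the interpretation of the scale parameter given in the Introduction.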
1906.04333
[ [ "Variational symmetries and pluri-Lagrangian structures for integrable\n hierarchies of PDEs" ], [ "Abstract We investigate the relation between pluri-Lagrangian hierarchies of $2$-dimensional partial differential equations and their variational symmetries.", "The aim is to generalize to the case of partial differential equations the recent findings in [Petrera, Suris.", "J. Nonlinear Math.", "Phys.", "24:sup1, 121--145 (2017)] for ordinary differential equations.", "We consider hierarchies of $2$-dimensional Lagrangian PDEs (many of which have a natural $(1+1)$-dimensional space-time interpretation) and show that if the flow of each PDE is a variational symmetry of all others, then there exists a pluri-Lagrangian 2-form for the hierarchy.", "The corresponding multi-time Euler-Lagrange equations coincide with the original system supplied with commuting evolutionary flows induced by the variational symmetries." ], [ "Introduction", "In the last decade a variational perspective on integrable systems has emerged under the name of pluri-Lagrangian systems (or Lagrangian multiform systems).", "The theory was initiated in [13] in the discrete setting, more specifically in the context of multidimensionally consistent lattice equations on a quadrilateral stencil, called quad equations.", "Multidimensional consistency means that the equation can be imposed on all elementary squares in a higher-dimensional lattice without leading to contradictions.", "Analogous to commutativity of differential equations, multidimensional consistency is a key feature of integrability for difference equations.", "In [13] it was shown that the property of multi-dimensional consistency can be combined with a variational formulation for quad equations.", "Solutions of integrable quad equations are critical points of an action functional obtained by integrating a suitable discrete Lagrangian 2-form over an arbitrary 2-dimensional surface in a higher-dimensional lattice.", "If the 2-dimensional 
surface is a plane, we recover a traditional variational principle for a 2-dimensional difference equation where the action is the sum over a plane of evaluations of the Lagrange function.", "The pluri-Lagrangian property requires the action to be critical also when this plane is replaced by any other 2-dimensional discrete surface in a higher-dimensional lattice.", "This remarkable property has been considered as a defining feature of integrability of 2-dimensional discrete equations [13], [14], [15], [2], [32], [4], [6], [9] as well as in the 1-dimensional [33], [5], [7] and 3-dimensional [16], [8] cases.", "The pluri-Lagrangian property can also be formulated in the continuous case, where solutions of (hierarchies of) integrable 2-dimensional partial differential equations (PDEs) are critical points of an action functional obtained by integrating a differential 2-form over an arbitrary 2-dimensional surface in a higher-dimensional space.", "This variational principle has been proposed as a Lagrangian analogue of the existence of Poisson-commuting Hamilton functions [27], [28], [13], [32].", "As in the discrete case, it is not limited to Lagrangian 2-forms describing 2-dimensional PDEs.", "The corresponding variational principle where a Lagrangian 1-form is integrated over curves applies to integrable ordinary differential equations [26], [22], [33].", "It is conjectured that also for $d > 2$ , hierarchies of integrable $d$ -dimensional PDEs can be described by pluri-Lagrangian $d$ -forms.", "Thanks to these investigations, a quite suggestive scenario has emerged: the pluri-Lagrangian structure is closely related (or even equivalent) to the integrability of the underlying system.", "This novel characterization of integrability applies to both ordinary differential (or difference) equations and partial differential (or difference) equations.", "In the recent paper [22], a connection between the notions of pluri-Lagrangian structures and variational 
symmetries was proved in the context of classical mechanics.", "In particular, it was shown that the existence of commuting variational symmetries for a system of variational ordinary differential equations leads to a natural pluri-Lagrangian 1-form, whose multi-time Euler-Lagrange equations consist of the original system and commuting flows corresponding to the variational symmetries.", "These findings confirmed, in the framework of classical mechanics, that a pluri-Lagrangian structure is hidden behind the existence of a sufficient number of variational symmetries (i.e., of integrals of motion, thanks to Noether's theorem).", "In the present work we extend the above idea to the case of variational 2-dimensional PDEs, thus generalizing the results of [22] to the context of Lagrangian field theory with two independent variables.", "We consider hierarchies of variational PDEs where the flow of each PDE is a variational symmetry of the Lagrange functions of all other members of the hierarchy.", "Under this assumption, we show that there exists a pluri-Lagrangian 2-form for the hierarchy.", "The paper is organized as follows.", "In Section we give a short overview of Lagrangian field theory, recalling some classical notions and definitions.", "In particular, we provide a formulation of the celebrated Noether's theorem, which establishes the relation between conservation laws and variational symmetries.", "In Section we review the notion of continuous 2-dimensional pluri-Lagrangian systems.", "Section is devoted to new results.", "It will be proved that from a family of variational symmetries one can construct a pluri-Lagrangian structure.", "The final Section contains three examples which illustrate the theoretical results obtained in Section ." 
], [ "A short review of Lagrangian field theory", "An exhaustive reference on classical Lagrangian field theory is the book of P.J.", "Olver [20].", "The scope of the present Section is to recall the main definitions and concepts needed for a self-contained presentation of our results in the next Sections." ], [ "Euler-Lagrange equations", "Since we will work in a multi-time setting we do not restrict our presentation here to fields depending on only two independent variables.", "Therefore we start by considering a smooth field $u: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ depending on $N$ real independent variables $t_1,\\dots , t_N$ .", "We will use the multi-index notation for partial derivatives.", "For any multi-index $I=(i_1,\\ldots ,i_N) \\in \\mathbb {N}^N$ we set $u_I = \\frac{\\partial ^{|I|} u}{(\\partial t_1)^{i_1} \\ldots (\\partial t_N)^{i_N}},$ where $|I| = i_1 + \\ldots + i_N$ and $u = u(t_1,\\ldots ,t_N)$ .", "The notations $It_k$ and $It_k^\\alpha $ will represent the multi-indices $(i_1,\\ldots ,i_k + 1, \\ldots i_N)$ and $(i_1,\\ldots ,i_k + \\alpha , \\ldots i_N)$ respectively.", "We will write $k \\notin I$ if $i_k = 0$ and $k \\in I$ if $i_k \\ne 0$ .", "We will denote by $i$ the total derivative with respect to the coordinate direction $t_i$ , $i = \\sum _{I \\in \\mathbb {N}^N} u_{I t_i} \\frac{\\partial {}}{\\partial {u_I}}$ and by $I = {1}^{i_1} \\ldots {N}^{i_N}$ the corresponding higher order derivatives.", "The field $u$ can be considered as a section of the trivial bundle $\\mathbb {R}^N \\times \\mathbb {R}$ .", "The partial derivatives of $u$ of any order span the infinite jet bundle associated with $\\mathbb {R}^N \\times \\mathbb {R}$ .", "We will denote the fiber of the infinite jet bundle by $\\mathcal {J}^\\infty $ and the fiber coordinates by $[u]=(u,u_{t_i},u_{t_it_j},\\ldots )_{i,j,\\ldots \\in \\lbrace 1,\\ldots ,N\\rbrace }.$ A variational problem for a smooth field $u: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ is 
described by a Lagrangian $L: \mathcal {J}^\infty \rightarrow \mathbb {R}$ and consists in finding the critical points of the action functional $S = \int _\Gamma L[u] \, \mathrm {d}t_1 \wedge \cdots \wedge \mathrm {d}t_N ,$ where $\Gamma \subset \mathbb {R}^N$ is some bounded region.", "In other words, we look for fields $u$ such that, for all fields $v$ for which $v$ and its derivatives vanish at the boundary of $\Gamma $ , there holds $ \frac{\mathrm {d}}{\mathrm {d}\varepsilon }\bigg |_{\varepsilon = 0} \int _\Gamma L[u + \varepsilon v]\, \mathrm {d}t_1 \wedge \cdots \wedge \mathrm {d}t_N = 0.", "$ Concretely, we will be interested in variational problems for fields $u: \mathbb {R}^2 \rightarrow \mathbb {R}$ .", "Therefore, let us fix $N=2$ and write explicitly the variational equations governing the evolution of $u$ .", "In this case the action functional over some bounded region $\Gamma \subset \mathbb {R}^2$ is $ S = \int _\Gamma L[u] \, \mathrm {d}t_1 \wedge \mathrm {d}t_2 .$ The field $u$ is a solution to the variational problem, i.e., a critical point for the action $S$ , if and only if $ \frac{\delta _{} {L}}{\delta {u}} =\sum _{\alpha ,\beta \ge 0} (-1)^{\alpha +\beta } 1^\alpha 2^\beta \!\left( \frac{\partial {L}}{\partial {u_{t_1^\alpha t_2^\beta }}} \right)=0,$ where the left hand side is called the variational derivative of $L$ .", "Equation (REF ) gives rise to a variational PDE, called the Euler-Lagrange equation.", "Note that if the Lagrangian depends on the $n$ -th order jet, i.e., on derivatives of $u$ up to order $n$ , then the Euler-Lagrange equation depends on the jet of order $2n$ .", "If a given 2-dimensional PDE can be written as in Equation (REF ) for some Lagrangian $L$ , then we say that this PDE has a variational (or Lagrangian) structure.", "Of course, the Euler-Lagrange equation (REF ) admits a straightforward generalization for the case of a field $u: \mathbb {R}^N 
\\rightarrow \\mathbb {R}$ for $N > 2$ .", "Example 1 The Korteweg-de Vries (KdV) equation $ w_2 = w_{111} + 6 w w_1, $ where $w_i$ is shorthand notation for the derivative $w_{t_i}$ , can be put into a variational form by introducing the potential $u = w_1$ .", "The corresponding equation is $ u_{12} = u_{1111} + 6 u_1 u_{11}.", "$ Its variational structure comes from the Lagrangian $L[u]= \\frac{1}{2} u_1 u_2 - u_1^3 - \\frac{1}{2} u_1 u_{111}.$ Indeed, critical points of the action (REF ) are characterized by the Euler-Lagrange equation $0 = \\frac{\\delta L}{\\delta u}&= - 1 \\frac{\\partial {L}}{\\partial {u_1}} - 1^3 \\frac{\\partial {L}}{\\partial {u_{111}}}- 2 \\frac{\\partial {L}}{\\partial {u_2}} \\\\&= \\left( - \\frac{1}{2} u_{12} + 6 u_1 u_{11} + \\frac{1}{2} u_{1111}\\right) + \\frac{1}{2} u_{1111} - \\frac{1}{2} u_{12} \\\\&=-u_{12} + u_{1111} + 6 u_1 u_{11}.$" ], [ "Variational symmetries and Noether's theorem", "Let $N=2$ .", "A vertical generalized vector field on $\\mathbb {R}^2 \\times \\mathbb {R}$ is a vector field of the form $Q \\partial _u$ , where $Q: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "It is called vertical because it does not contain any $\\partial _{t_i}$ and generalized because $Q$ depends on derivatives of $u$ , not just on $u$ itself.", "The prolongation of $Q \\partial _u$ is a vector field on $\\mathcal {J}^\\infty $ defined as $ \\operatorname{pr}(Q \\partial _u) = \\sum _{I \\in \\mathbb {N}^2} (I Q) \\frac{\\partial {}}{\\partial {u_I}}.", "$ A vector field $Q \\partial _u$ is called a variational symmetry of a Lagrangian $L:\\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ if its prolongation $\\operatorname{pr}(Q \\partial _u)$ satisfies $\\operatorname{pr}(Q \\partial _u) L = 1 F_1 + 2 F_2$ for some functions $F_1,F_2: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "The pair $(F_1,F_2)$ is called the flux of the variational symmetry.", "A conservation law for $L$ is a triple of functions $J_1,J_2,Q: 
\\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ that satisfy $1 J_1 + 2 J_2 = -Q \\frac{\\delta _{} {L}}{\\delta {u}} .$ If Equation (REF ) holds true, the pair $J = (J_1,J_2)$ is called the conserved current and $Q$ the characteristic of the conservation law.", "On solutions of the Euler-Lagrange equations (REF ) the conserved current $J$ is divergence-free, hence its name.", "The famous Noether's theorem [18] establishes a one-to-one correspondence between conservation laws and variational symmetries.", "Theorem 1 Let $Q \\partial _u$ be a variational symmetry of $L$ .", "Then $J_1[u] &= \\sum _{I \\lnot \\ni t_2} \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_1}}} \\right) + \\frac{1}{2} \\sum _I 2 \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}} \\right) - F_1[u] , \\\\J_2[u] &= \\sum _{I \\lnot \\ni t_1} \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_2}}} \\right) + \\frac{1}{2} \\sum _I 1 \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}} \\right) - F_2[u],$ define the components of the conserved current of a conservation law, where the pair of functions $(F_1,F_2)$ is the flux, as in Equation (REF ).", "Conversely, given a conserved current $(J_1,J_2)$ , Equations (REF ) and () define the flux $(F_1,F_2)$ of a variational symmetry.", "Note that Equations (REF ) and () contain variational derivatives with respect to partial derivatives of $u$ : $ \\frac{\\delta _{} {L}}{\\delta {u_{I}}} = \\sum _{\\alpha ,\\beta \\ge 0} (-1)^{\\alpha +\\beta } 1^\\alpha 2^\\beta \\frac{\\partial {L}}{\\partial {u_{I t_1^\\alpha t_2^\\beta }}}.", "$ We also observe that $J_1$ and $J_2$ can be alternatively written as $J_1[u] &= \\sum _{\\alpha \\ge 0} \\left( (1^\\alpha Q) \\frac{\\delta _{} {L}}{\\delta {u_{t_1^{\\alpha +1}}}} \\right) + \\frac{1}{2} \\sum _{\\alpha \\ge 0} \\sum _{\\beta \\ge 0} 2 \\left( (1^\\alpha 2^\\beta Q) \\frac{\\delta _{} {L}}{\\delta {u_{t_1^{\\alpha +1}t_2^{\\beta +1}}}} \\right) - F_1[u], \\\\J_2[u] &= \\sum 
_{\\beta \\ge 0} \\left( (2^\\beta Q) \\frac{\\delta _{} {L}}{\\delta {u_{t_2^{\\beta +1}}}} \\right) + \\frac{1}{2} \\sum _{\\alpha \\ge 0} \\sum _{\\beta \\ge 0} 1 \\left( (1^\\alpha 2^\\beta Q) \\frac{\\delta _{} {L}}{\\delta {u_{t_1^{\\alpha +1}t_2^{\\beta +1}}}} \\right) - F_2[u],$ [Proof of Theorem REF ] The key point of the proof consists in the integration by parts of $ \\operatorname{pr}(Q \\partial _u) L= \\sum _I (I Q) \\frac{\\partial {L}}{\\partial {u_I}}, $ i.e., to write it in the form $ \\operatorname{pr}(Q \\partial _u) L= Q \\frac{\\delta _{} {L}}{\\delta {u}} + 1(\\cdots ) + 2(\\cdots ).", "$ To perform the full calculation, observe that $ \\frac{\\partial {L}}{\\partial {u_I}} = \\frac{\\delta _{} {L}}{\\delta {u_I}} + 1 \\frac{\\delta _{} {L}}{\\delta {u_{It_1}}} + 2 \\frac{\\delta _{} {L}}{\\delta {u_{It_2}}} + 1 2 \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}}, $ hence $\\operatorname{pr}(Q \\partial _u) L&= \\sum _I (I Q) \\left( \\frac{\\delta _{} {L}}{\\delta {u_I}} + 1 \\frac{\\delta _{} {L}}{\\delta {u_{It_1}}} + 2 \\frac{\\delta _{} {L}}{\\delta {u_{It_2}}} + 1 2 \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}} \\right) \\\\&= \\sum _I \\big ( ({It_1t_2} Q) + ({It_2} Q) 1 + ({It_1} Q) 2 + (I Q) 1 2 \\big ) \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}} \\\\&\\quad + \\sum _{I \\lnot \\ni t_2} \\big ( ({It_1} Q) + (I Q) 1 \\big ) \\frac{\\delta _{} {L}}{\\delta {u_{It_1}}} \\\\&\\quad + \\sum _{I \\lnot \\ni t_1} \\big ( ({It_2} Q) + (I Q) 2 \\big ) \\frac{\\delta _{} {L}}{\\delta {u_{It_2}}}+ Q \\frac{\\delta _{} {L}}{\\delta {u}} ,$ where the last term would be a sum over all $I \\lnot \\ni t_1,t_2$ , but only the empty multi-index $I = (0,0)$ satisfies this condition.", "The above equation can be simplified as $\\operatorname{pr}(Q \\partial _u) L&= \\sum _I 1 2 \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_1t_2}}} \\right)\\\\&\\quad + \\sum _{I \\lnot \\ni t_2} 1 \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_1}}} 
\\right) + \\sum _{I \\lnot \\ni t_1} 2 \\left( (I Q) \\frac{\\delta _{} {L}}{\\delta {u_{It_2}}} \\right) + Q \\frac{\\delta _{} {L}}{\\delta {u}} \\\\&= 1 (J_1 + F_1) + 2 (J_2 + F_2) + Q \\frac{\\delta _{} {L}}{\\delta {u}} .$ It follows that Equations (REF ) and (REF ) are equivalent.", "Hence if $Q \\partial _u$ is a variational symmetry, then Equations (REF )–() define a conserved current.", "Example 2 Consider again the KdV equation $ u_{12} = u_{1111} + 6 u_1 u_{11} $ and its Lagrangian $L[u]= \\frac{1}{2} u_1 u_2 - u_1^3 - \\frac{1}{2} u_1 u_{111}.$ As before, indices denote derivatives with respect to the corresponding time variables, e.g.", "$u_{12} = u_{t_1 t_2}$ .", "We present two variational symmetries of this equation and their associated conservation laws: The generalized vector field $Q \\partial _u$ with $Q[u] = u_1$ corresponds to a translation in the $t_1$ -direction.", "Indeed, $\\operatorname{pr}(Q \\partial _u) L = u_1 \\frac{\\partial {L}}{\\partial {u}} + u_{11} \\frac{\\partial {L}}{\\partial {u_1}} + u_{111} \\frac{\\partial {L}}{\\partial {u_{11}}} + u_{1111} \\frac{\\partial {L}}{\\partial {u_{111}}} + u_{12} \\frac{\\partial {L}}{\\partial {u_2}} \\\\= 1 L,$ hence $Q \\partial _u$ is a variational symmetry with flux $(F_1[u], F_2[u]) = (L[u],0).$ Corresponding to this variational symmetry we find the conservation law $-Q[u] \\frac{\\delta _{} {L}}{\\delta {u}} = -u_1 ( -u_{12} + 6 u_1 u_{11} + u_{1111}) = 1 J_1 + 2 J_2,$ with $J_1[u] &= u_1 \\frac{\\delta _{} {L}}{\\delta {u_1}} + u_{11} \\frac{\\delta _{} {L}}{\\delta {u_{11}}} + u_{111} \\frac{\\delta _{} {L}}{\\delta {u_{111}}} - F_1[u] = -2 u_1^3 - u_1 u_{111} + \\frac{1}{2} u_{11}^2, \\\\J_2[u] &= u_1 \\frac{\\delta _{} {L}}{\\delta {u_2}} - F_2[u] = \\frac{1}{2} u_1^2.$ This in turn implies the conservation of momentum: $ 2 \\int \\frac{1}{2} u_1^2 \\,\\mathrm {d}t_1 = 0 .", "$ The generalized vector field $Q \\partial _u$ with $ Q[u] = 10 u_1^3 + 5 u_{11}^2 + 10 u_1 u_{111} + 
u_{11111}.", "$ Indeed, $\\operatorname{pr}(Q \\partial _u) L &= Q \\frac{\\partial {L}}{\\partial {u}} + (1 Q) \\frac{\\partial {L}}{\\partial {u_1}} + (1^2 Q) \\frac{\\partial {L}}{\\partial {u_{11}}} + (1^3 Q) \\frac{\\partial {L}}{\\partial {u_{111}}} + (2 Q) \\frac{\\partial {L}}{\\partial {u_2}} \\\\&= 1 F_1+ 2 F_2,$ with $F_1[u] &= -18 u_{1}^{5} - 15 u_{1}^{2} u_{11}^{2} - 45 u_{1}^{3} u_{111} + 5 u_{1}^{3} u_{2} + 4 u_{11}^{2} u_{111} - 18 u_{1} u_{111}^{2} - 4 u_{1} u_{11} u_{1111} \\\\&\\quad - 8 u_{1}^{2} u_{11111} - 10 u_{1} u_{11} u_{12} + \\frac{5}{2} u_{11}^{2} u_{2} + 5 u_{1} u_{111} u_{2} + \\frac{1}{2} u_{1111}^{2} - u_{111} u_{11111} \\\\&\\quad + \\frac{1}{2} u_{11} u_{111111} - \\frac{1}{2} u_{1} u_{1111111} + u_{111} u_{112} - u_{1111} u_{12} + \\frac{1}{2} u_{11111} u_{2},\\\\F_2[u] &= \\frac{5}{2} u_1^{4} + \\frac{15}{2} u_1 u_{11}^{2} + 5 u_1^{2} u_{111} - \\frac{1}{2} u_{111}^{2} + \\frac{1}{2} u_{1} u_{11111}.$ The corresponding conservation law is $-Q[u] \\frac{\\delta L}{\\delta u}= 1 J_1 + 2 J_2,$ with $J_1[u] &= Q \\frac{\\delta _{} {L}}{\\delta {u_1}} + (1 Q) \\frac{\\delta _{} {L}}{\\delta {u_{11}}} + ({11} Q)\\frac{\\delta _{} {L}}{\\delta {u_{111}}} - F_1 \\\\&= -12 u_{1}^{5} - 15 u_{1}^{2} u_{11}^{2} - 10 u_{1}^{3} u_{111} + u_{11}^{2} u_{111} - 2 u_{1} u_{111}^{2} - 6 u_{1} u_{11} u_{1111} \\\\&\\quad + 10 u_{1} u_{11} u_{12} - \\frac{1}{2} u_{1111}^{2} - u_{111} u_{112} + u_{1111} u_{12}$ and $J_2[u] = Q \\frac{\\delta _{} {L}}{\\delta {u_2}} - F_2[u]= \\frac{5}{2} u_{1}^{4} - 5 u_{1} u_{11}^{2} + \\frac{1}{2} u_{111}^{2} .$" ], [ "Pluri-Lagrangian field theory", "In this Section we briefly review the main concepts of pluri-Lagrangian field theory.", "For further details see [26], [27], [28]." 
], [ "Integrable hierarchies of PDEs", "One of the defining features of an integrable PDE is that it possesses an infinite number of symmetries and, correspondingly, an infinite number of conservation laws.", "These symmetries define a family of PDEs that commute with the original one.", "Let us illustrate the concept of commuting PDEs on the basis of our leading example.", "Example 3 In Example REF $(b)$ we proved that the generalized vector field $Q \\partial _u$ , with $Q[u] = 10 u_1^3 + 5 u_{11}^2 + 10 u_1 u_{111} + u_{11111},$ is a variational symmetry of the KdV equation $u_{12} = u_{1111} + 6 u_1 u_{11}.$ If we introduce a third independent variable $t_3$ , we can define the PDE $ u_3 = 10 u_1^3 + 5 u_{11}^2 + 10 u_1 u_{111} + u_{11111}, $ which commutes with the KdV equation itself.", "This means that both ways of calculating the mixed derivative $u_{123}$ agree on solutions: $\\mathrm {D}_3 u_{12}&= \\mathrm {D}_3 (u_{1111} + 6 u_1 u_{11}) \\\\&= 540 u_{1}^{2} u_{11}^{2} + 180 u_{1}^{3} u_{111} + 480 u_{11}^{2} u_{111} + 300 u_{1} u_{111}^{2} + 480 u_{1} u_{11} u_{1111} + 90 u_{1}^{2} u_{11111} \\\\&\\quad + 70 u_{1111}^{2} + 110 u_{111} u_{11111} + 56 u_{11} u_{111111} + 16 u_{1} u_{1111111} + u_{111111111} \\\\&= \\mathrm {D}_1 \\mathrm {D}_2 \\left(10 u_1^3 + 5 u_{11}^2 + 10 u_1 u_{111} + u_{11111} \\right) \\\\&= \\mathrm {D}_1 \\mathrm {D}_2 u_3.$ Since symmetries lead to commuting equations, a natural perspective on an integrable PDE is to consider it as one equation belonging to an infinite integrable hierarchy, i.e., an infinite set of integrable PDEs such that any two systems in this set are compatible.", "Such hierarchies are usually generated by recursion operators or master symmetries [17], [10], [20]."
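The commutativity claimed in Example 3 can also be checked symbolically. The following SymPy sketch (an independent re-implementation, not the authors' SageMath code; symbol names are our own) verifies a standard equivalent condition for evolutionary flows: the flows $u_{t_2} = Q_2$ and $u_{t_3} = Q_3$ commute precisely when $\operatorname{pr}(Q_2\partial_u)Q_3 = \operatorname{pr}(Q_3\partial_u)Q_2$.

```python
import sympy as sp

# Truncated jet in one space variable: u[k] stands for the k-th x-derivative.
N = 12
u = sp.symbols(f'u0:{N}')

def Dx(f):
    """Total x-derivative: sends u[k] to u[k+1]."""
    return sp.expand(sum(f.diff(u[k]) * u[k + 1] for k in range(N - 1)))

def prolong(Q, f):
    """Apply the prolonged evolutionary vector field pr(Q d/du) to f."""
    result, DkQ = sp.Integer(0), Q
    for k in range(6):          # f depends on u[0..5] at most here
        result += f.diff(u[k]) * DkQ
        DkQ = Dx(DkQ)           # DkQ = k-th total x-derivative of Q
    return sp.expand(result)

# right-hand sides of the first two potential KdV flows
Q2 = 3 * u[1]**2 + u[3]
Q3 = 10 * u[1]**3 + 5 * u[2]**2 + 10 * u[1] * u[3] + u[5]

# commuting flows: pr(Q2)Q3 must equal pr(Q3)Q2
print(sp.expand(prolong(Q2, Q3) - prolong(Q3, Q2)))  # -> 0
```

The same two-line check can be run for any further pair of flows of the hierarchy, after enlarging the truncation order `N` accordingly.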
], [ "Pluri-Lagrangian problems", "Let us focus on $(1+1)$ -dimensional PDEs.", "A finite number of equations from a hierarchy can be embedded in a higher-dimensional multi-time, where they share a common space direction, say $t_1 = x$ , but each equation has its own time coordinate, $t_2,t_3,\\ldots $ .", "Formally, we can embed the whole hierarchy into an infinite-dimensional space in the same way.", "In the classical variational description of $(1+1)$ -dimensional PDEs, we integrate a Lagrange function over (an open subset of) the 2-dimensional space-time.", "A variational structure of a hierarchy of such PDEs should include the classical variational description of each individual equation, i.e., integration over a 2-dimensional subspace.", "Therefore, it is natural for the role of a Lagrange function to be played by a differential 2-form.", "Let $\\mathcal {L}\\in \\Omega ^2(\\mathbb {R}^N)$ be a 2-form depending on the infinite jet of a smooth field $u: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ , i.e., $\\mathcal {L}[u] = \\sum _{i < j} L_{ij}[u] \\,\\mathrm {d}t_i \\wedge \\mathrm {d}t_j,$ with $L_{ij}:\\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "We say that $u$ solves the pluri-Lagrangian problem for $\\mathcal {L}$ if for any 2-dimensional submanifold $\\Gamma \\subset \\mathbb {R}^N$ and for any infinitesimal variation $v(t_1,\\ldots ,t_N) \\partial _u$ of $u$ , where $v:\\mathbb {R}^N \\rightarrow \\mathbb {R}$ and all its derivatives vanish at the boundary of $\\Gamma $ , we have $ \\frac{\\mathrm {d}}{\\mathrm {d}\\varepsilon } \\bigg |_{\\varepsilon = 0} \\int _\\Gamma \\mathcal {L}[u + \\varepsilon v]= 0 .", "$ This can also be written as $ \\int _\\Gamma \\operatorname{pr}(v \\partial _u) \\mathcal {L}[u] = 0 ,$ where the vertical vector field $\\operatorname{pr}(v \\partial _u) = \\sum _I v_I \\frac{\\partial {}}{\\partial {u_I}}$ acts on the coefficients of $\\mathcal {L}[u]$ , i.e.", "$ \\operatorname{pr}(v \\partial _u) \\mathcal {L}[u] 
= \\sum _{i < j} \\sum _I v_I \\frac{\\partial {L_{ij}[u]}}{\\partial {u_I}} \\,\\mathrm {d}t_i \\wedge \\mathrm {d}t_j .", "$ The equations that characterize solutions to the pluri-Lagrangian problem are called multi-time Euler-Lagrange equations.", "They were derived in [27] and state that, for all $i,j,k \\in \\lbrace 1,\\ldots ,N\\rbrace $ , there holds: $\\forall I \\lnot \\ni t_i,t_j : &\\quad \\frac{\\delta _{ij} {L_{ij}}}{\\delta {u_I}} = 0, \\\\\\forall I \\lnot \\ni t_i: &\\quad \\frac{\\delta _{ij} {L_{ij}}}{\\delta {u_{It_j}}} = \\frac{\\delta _{ik} {L_{ik}}}{\\delta {u_{It_k}}} , \\\\\\forall I: &\\quad \\frac{\\delta _{ij} {L_{ij}}}{\\delta {u_{It_it_j}}} + \\frac{\\delta _{jk} {L_{jk}}}{\\delta {u_{It_jt_k}}} + \\frac{\\delta _{ki} {L_{ki}}}{\\delta {u_{It_kt_i}}} = 0 , $ where $\\frac{\\delta _{ij} {L_{ij}}}{\\delta {u_{I}}} = \\sum _{\\alpha ,\\beta \\ge 0} (-1)^{\\alpha +\\beta } i^\\alpha j^\\beta \\!\\left( \\frac{\\partial {L_{ij}}}{\\partial {u_{I i^\\alpha j^\\beta }}} \\right)$ is the variational derivative in the $(t_i,t_j)$ -plane.", "Note that the multi-time Euler-Lagrange equations contain the classical Euler-Lagrange equations in each $(t_i,t_j)$ -plane (REF ), where derivatives with respect to other times are considered as additional components of the field, plus additional equations ()–() coming from choices of $\\Gamma $ that are not coordinate planes.", "In the present work, we will use a different property to recognize solutions to the pluri-Lagrangian problem.", "There is a remarkable relation between the pluri-Lagrangian problem and the property that the 2-form $\\mathcal {L}$ is closed on solutions $u$ to the hierarchy.", "In fact, this closedness property is often considered to be the fundamental property of the Lagrangian theory of integrable hierarchies [13], [14], [15], [16], [33], [2], [32].", "When this point of view is taken, the term “Lagrangian multiform” is more commonly used than “pluri-Lagrangian”.", "Here, we show 
that a slightly weaker property of the 2-form is a sufficient condition for a solution to the pluri-Lagrangian problem.", "Theorem 2 Consider a 2-form $\\mathcal {L}$ and a hierarchy of commuting PDEs $u_i = Q_i[u] \\qquad i = 2,\\ldots , N,$ with $Q_i:\\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "If the exterior derivative of $\\mathcal {L}$ is constant up to a term that attains a double zero on solutions of (REF ), i.e., if $ \\mathrm {d}\\mathcal {L}= \\gamma + \\sum _{I,J} \\sum _{i,j} \\omega _{i,j}^{I,J} \\, \\mathrm {D}_I (u_i - Q_i) \\, \\mathrm {D}_J (u_j - Q_j) $ for some $\\mathcal {J}^\\infty $ -dependent 3-forms $\\omega _{i,j}^{I,J}$ and a 3-form $\\gamma $ that does not depend on $u$ or its derivatives, then all solutions $u: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ to the hierarchy $(\\ref {hierarchy})$ also solve the pluri-Lagrangian problem for $\\mathcal {L}$ .", "Strictly speaking, the assumption that the PDEs (REF ) commute can be dropped from this theorem.", "If they do not commute, however, there will usually be no non-trivial solutions $u: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ to all PDEs simultaneously, so in this case the theorem would be of very limited relevance.", "[Proof of Theorem REF ] Let $u$ be a solution to the hierarchy and $\\Gamma = \\partial B$ a surface defined as the boundary of a 3-manifold $B$ .", "It is sufficient to show that the pluri-Lagrangian property holds on such surfaces.", "Indeed, without loss of generality we can require variations to be supported on small open subsets, and for any sufficiently small open subset $\\Gamma ^{\\prime }$ of a given surface, one can find a 3-manifold such that $\\Gamma ^{\\prime }$ is contained in its boundary.", "As a consequence of the assumption on $\\mathcal {L}$ , for any variation $v: \\mathbb {R}^N \\rightarrow \\mathbb {R}$ we have $ \\operatorname{pr}(v \\partial _u) \\mathrm {d}\\mathcal {L}[u] = \\frac{\\mathrm {d}}{\\mathrm {d}\\varepsilon }\\bigg |_{\\varepsilon =0} \\mathrm {d}\\mathcal 
{L}[u+\\varepsilon v] = 0.", "$ Therefore $ \\frac{\\mathrm {d}}{\\mathrm {d}\\varepsilon }\\bigg |_{\\varepsilon =0} \\int _\\Gamma \\mathcal {L}[u+\\varepsilon v]= \\frac{\\mathrm {d}}{\\mathrm {d}\\varepsilon }\\bigg |_{\\varepsilon =0} \\int _B d \\mathcal {L}[u+\\varepsilon v]= 0 ,$ hence the action integral over any surface $\\Gamma $ is critical with respect to variations of $u$ .", "There are strong indications that the existence of a pluri-Lagrangian structure is deeply connected to integrability.", "One such indication comes from within the theory: the multi-time Euler-Lagrange equations are highly overdetermined.", "Hence if nontrivial solutions exist, then we are dealing with a system with remarkable properties.", "Other indications are connections to different notions of integrability, including Hamiltonian formulations [26], [28] and Lax pairs [24], even though these connections have not yet been studied in full detail.", "Despite some recent discoveries, relatively few examples of pluri-Lagrangian hierarchies of PDEs are known.", "To our knowledge, the list is limited to the potential KdV equation [27] and several related hierarchies obtained as continuum limits from lattice equations [29], [30], as well as (a matrix-valued generalization of) the AKNS system [24].", "The goal of this paper is to establish a construction of a pluri-Lagrangian 2-form for a given hierarchy of $(1+1)$ -dimensional PDEs, assuming we know classical Lagrange functions for the individual equations.", "Furthermore, we will assume that the vector field associated to each of the PDEs is a variational symmetry for the Lagrangians of the rest of the hierarchy.", "This assumption can be thought of as the Lagrangian analogue to commuting Hamiltonian flows." 
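The central assumption just made, that each flow is a variational symmetry of the other Lagrangians with an explicit flux, can itself be tested symbolically. The sketch below (an independent SymPy check, not the SageMath code of [31]; jet truncation and symbol names are our own) verifies $\operatorname{pr}(Q\,\partial_u) L = \mathrm{D}_1 F_1 + \mathrm{D}_2 F_2$ for the KdV data of Example 2(b), with the flux $(F_1, F_2)$ transcribed from that example.

```python
import sympy as sp

# Truncated two-time jet: w[i][j] = derivative of u, i times by t1, j times by t2.
I, J = 10, 2
w = [[sp.Symbol(f'u_{i}_{j}') for j in range(J)] for i in range(I)]

def D1(f):
    """Total derivative with respect to t1."""
    return sp.expand(sum(f.diff(w[i][j]) * w[i + 1][j]
                         for i in range(I - 1) for j in range(J)))

def D2(f):
    """Total derivative with respect to t2."""
    return sp.expand(sum(f.diff(w[i][j]) * w[i][j + 1]
                         for i in range(I) for j in range(J - 1)))

u2, u12, u112 = w[0][1], w[1][1], w[2][1]
u1, u11, u111, u1111, u11111, u111111, u1111111 = (w[k][0] for k in range(1, 8))

L12 = sp.Rational(1, 2) * u1 * u2 - u1**3 - sp.Rational(1, 2) * u1 * u111
Q3 = 10 * u1**3 + 5 * u11**2 + 10 * u1 * u111 + u11111

# pr(Q3 d/du) applied to L12; L12 depends on u1, u2 and u111 only.
prL = (D1(Q3) * L12.diff(u1) + D1(D1(D1(Q3))) * L12.diff(u111)
       + D2(Q3) * L12.diff(u2))

# flux (F1, F2) of Example 2(b), transcribed from the text
F1 = (-18*u1**5 - 15*u1**2*u11**2 - 45*u1**3*u111 + 5*u1**3*u2
      + 4*u11**2*u111 - 18*u1*u111**2 - 4*u1*u11*u1111 - 8*u1**2*u11111
      - 10*u1*u11*u12 + sp.Rational(5, 2)*u11**2*u2 + 5*u1*u111*u2
      + sp.Rational(1, 2)*u1111**2 - u111*u11111
      + sp.Rational(1, 2)*u11*u111111 - sp.Rational(1, 2)*u1*u1111111
      + u111*u112 - u1111*u12 + sp.Rational(1, 2)*u11111*u2)
F2 = (sp.Rational(5, 2)*u1**4 + sp.Rational(15, 2)*u1*u11**2
      + 5*u1**2*u111 - sp.Rational(1, 2)*u111**2
      + sp.Rational(1, 2)*u1*u11111)

# variational symmetry with flux: pr(Q3)L12 == D1(F1) + D2(F2)
print(sp.expand(prL - D1(F1) - D2(F2)))  # -> 0
```

Because the condition is an algebraic identity on the jet space, no equations of motion are substituted anywhere in this computation.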
], [ "From variational symmetries to a pluri-Lagrangian 2-form", "We will take $t_1 = x$ to be the space coordinate.", "Then we can take the coefficients $L_{1j}$ of the pluri-Lagrangian 2-form (REF ) to be classical Lagrangians for the individual equations of the hierarchy.", "However, the coefficients $L_{ij}$ with $i,j>1$ do not have an interpretation in a classical variational principle.", "It is not obvious under which conditions suitable $L_{ij}$ exist, such that the given hierarchy solves the pluri-Lagrangian problem for the 2-form.", "Below we will give an answer to this question for a large class of Lagrangians.", "For a hierarchy of evolutionary equations, $u_i = Q_i(u_1,u_{11},\\ldots ) \\qquad i = 2,\\ldots , N,$ it is a reasonable assumption that the corresponding Lagrangians do not contain second or higher derivatives with respect to the time variable.", "Similarly, we will assume that the Lagrangian does not contain products of time-derivatives.", "Suppose we have a family of Lagrangians $L_{1i}$ for $i = 2, \\ldots , N$ satisfying these assumptions: $ L_{1i}[u] = p(u,u_1,u_{11},\\ldots ) u_i - h_i(u,u_1,u_{11},\\ldots ).$ Here $p$ and $h$ are two arbitrary functions of their arguments.", "In particular the term $p(u,u_1,u_{11},\\ldots ) u_i$ plays the role of a kinetic energy.", "Note that we are not including mixed derivatives, $u_{1i}, u_{11i},\\ldots $ .", "This does not restrict generality, because if a Lagrangian depends linearly on such derivatives, then we can integrate by parts to get an equivalent Lagrangian of the form (REF ).", "Furthermore, note that the factor $p(u,u_1,u_{11},\\ldots )$ in the kinetic term of $L_{1i}[u]$ is the same for all $i$ .", "This is a direct consequence of the multi-time Euler-Lagrange equations of type ().", "The Euler-Lagrange equations (REF ) of the Lagrangians (REF ) will not be evolutionary.", "Instead we assume that the Euler-Lagrange equations are differential consequences of the hierarchy (REF ), i.e., 
equations of the form $ \\mathcal {E}_p( u_i - Q_i(u_1,u_{11},\\ldots ) ) = 0 ,$ where $\\mathcal {E}_p$ is some differential operator, depending on the kinetic term of the Lagrangians.", "In the case of the KdV hierarchy we have $\\mathcal {E}_p = \\mathrm {D}_1$ , see Example REF .", "Assume that the prolonged vector fields $\\mathfrak {D}_i = \\operatorname{pr}(Q_i \\partial _u)$ , corresponding to the equations of the hierarchy, commute pairwise and are variational symmetries of the $L_{1j}$ : $\\mathfrak {D}_i L_{1j} = \\mathrm {D}_1 A_{ij} + \\mathrm {D}_j B_{ij}$ for some functions $A_{ij}, B_{ij}: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "If we consider only those terms that contain a $t_j$ -derivative, what remains of Equation (REF ) is of the form $ \\mathfrak {D}_i(p u_j) = \\mathrm {D}_1 \\bar{A}_{ij}(u,u_1,u_j,\\ldots ) + \\mathrm {D}_j B_{ij}(u,u_1,u_{11},\\ldots ) $ for some function $\\bar{A}_{ij}: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ .", "This is an algebraic identity (as opposed to an equality on solutions), hence we can replace $t_j$ -derivatives by new dependent variables, e.g.", "$u_j$ by a field denoted by $u_t$ .", "We find $ \\mathfrak {D}_i (p u_t) = \\mathrm {D}_1 \\bar{A}_{ij}(u,u_1,u_t,\\ldots ) + \\mathrm {D}_t B_{ij}(u,u_1,u_{11},\\ldots ).", "$ Since the left hand side of this equation is independent of $j$ , we can choose $\\bar{A}_{ij}$ and $B_{ij}$ independent of $j$ as well.", "In particular, we can write $B_{ij} = B_i$ and get $\\mathfrak {D}_i L_{1j} = \\mathrm {D}_1 A_{ij} + \\mathrm {D}_j B_{i}.$ Note that $A_{ij}, B_i: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ are only defined up to a constant, hence we can choose them to be zero on the zero field: $A_{ij}[0] = B_i[0] = 0$ .", "Lemma 3 For Lagrangians of the form (REF ) with commuting variational symmetries (REF ), there exist functions $F_{ij}: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ , $[u] \\mapsto F_{ij}(u,u_1,u_{11},\\ldots )$ , which do not depend on any time-derivatives, such that $\\mathrm {D}_1 F_{ij} = \\mathrm {D}_i L_{1j} - \\mathrm {D}_j L_{1i}$ on solutions of the hierarchy (REF 
).", "Since the variational symmetries $\\mathfrak {D}_i = \\operatorname{pr}(Q_i \\partial _u)$ commute, we have for any $k \\ne i,j$ $0 &= [\\mathfrak {D}_{i}, \\mathfrak {D}_{j}] L_{1k} \\\\&= 1 \\left( \\mathfrak {D}_{i} A_{jk} - \\mathfrak {D}_{j} A_{ik} \\right) + k \\left( \\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i \\right) .$ Now let $u$ be an arbitrary compactly supported smooth field.", "Then $0 &= \\int _{-\\infty }^\\infty 1 \\left( \\mathfrak {D}_{i} A_{jk} - \\mathfrak {D}_{j} A_{ik} \\right) + k \\left( \\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i \\right) \\mathrm {d}t_1 \\\\&= \\int _{-\\infty }^\\infty k \\left( \\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i \\right) \\mathrm {d}t_1 \\\\&= k \\int _{-\\infty }^\\infty \\left( \\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i \\right) \\mathrm {d}t_1 .$ Since $u$ and in particular its $t_k$ -derivatives are arbitrary, it follows that $\\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i$ is a null Lagrangian.", "This implies (see e.g.", "[20]) that there exists a function $G_{ij}:\\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ such that $ \\mathfrak {D}_{i} B_j - \\mathfrak {D}_{j} B_i = 1(G_{ij}).", "$ Hence with $F_{ij} = G_{ij} + A_{ij} - A_{ji}$ we find that, on solutions of the hierarchy (REF ), $\\mathfrak {D}_{i} L_{1j} - \\mathfrak {D}_{j} L_{1i}&= 1 A_{ij} + j B_i - 1 A_{ji} - i B_j \\\\&= 1 A_{ij} + \\mathfrak {D}_{j} B_i - 1 A_{ji} - \\mathfrak {D}_{i} B_j \\\\&= 1 F_{ij}.$ Since we are working on solutions of the equations the hierarchy, we can use those equations to eliminate time-derivatives from $F_{ij}$ , hence we can assume it depends on the jet as $F_{ij}(u,u_1,u_{11},\\ldots )$ .", "We now present our main result, which is the analogue in 2-dimensional field theory of Theorem 10 in [22].", "Theorem 4 Assume we have Lagrangians of the form (REF ) with commuting variational symmetries (REF ).", "Let $L_{ij}[u]&= \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1j} {L_{1j}}}{\\delta 
{u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_i - Q_i) - \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1i} {L_{1i}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_j - Q_j) + F_{ij}(u,u_1,u_{11},\\ldots ) ,$ where $F_{ij}: \\mathcal {J}^\\infty \\rightarrow \\mathbb {R}$ is as in Lemma REF and the operator $\\frac{\\delta _{ij} {}}{\\delta {}}$ is the variational derivative from Equation (REF ).", "Then every solution of the hierarchy (REF ) is a critical point of $ \\mathcal {L}[u] = \\sum _{i<j} L_{ij}[u] \\,\\mathrm {d}t_i \\wedge \\mathrm {d}t_j $ in the pluri-Lagrangian sense.", "We show that $\\mathcal {L}$ is almost-closed in the sense of Theorem REF .", "We start by calculating $1 L_{ij}$ .", "We have: $1 & \\left(\\sum _{\\alpha \\ge 0} \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_i - Q_i) \\right)\\\\&= \\sum _{\\alpha \\ge 0}1 \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_i - Q_i) + \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^{\\alpha +1} (u_i - Q_i) \\\\&= \\sum _{\\alpha \\ge 0} \\left( 1 \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1}}}} + \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^\\alpha }}} \\right) 1^\\alpha (u_i - Q_i) - \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u}} 1 (u_i - Q_i) \\\\&= \\sum _{\\alpha \\ge 0} \\left( \\frac{\\partial {L_{1j}}}{\\partial {u_{t_1^\\alpha }}} - j \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha } t_j}}} - 1 j \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1} t_j}}} \\right) 1^\\alpha (u_i - Q_i) - \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u}} 1 (u_i - Q_i) .$ Since $L_{1j}$ does not depend on any mixed derivatives $u_{t_1^{\\alpha +1} t_j}$ , this simplifies to $1 & \\left(\\sum _{\\alpha \\ge 0} \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_i - Q_i) \\right)\\\\&= \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1j}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_i - Q_i) - 
j \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u_{t_j}}} (u_i - Q_i) - \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u}} 1 (u_i - Q_i) \\\\&\\equiv \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1j}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_i - Q_i) - (j p) (u_i - Q_i),$ where $\\equiv $ denotes equality modulo double zeros.", "Similarly, there holds $ 1 \\left( \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1i} {L_{1i}}}{\\delta {u_{t_1^{\\alpha +1}}}} 1^\\alpha (u_j - Q_j) \\right)\\equiv \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1i}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_j - Q_j) - (i p) (u_j - Q_j) .", "$ Hence $1 L_{ij}&\\equiv \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1j}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_i - Q_i)- \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1i}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_j - Q_j) \\\\&\\quad - (j p) (u_i - Q_i) + (i p) (u_j - Q_j) + 1 F_{ij}.$ Using the assumption that the Lagrangians $L_{1i}$ and $L_{1j}$ are of the form (REF ), we can write $&i L_{1j} - \\mathfrak {D}_{i} L_{1j} = p j (u_i - Q_i) + \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1j}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_i - Q_i) \\\\&j L_{1i} - \\mathfrak {D}_{j} L_{1i} = p i (u_j - Q_j) + \\sum _{\\alpha \\ge 0} \\frac{\\partial {L_{1i}}}{\\partial {u_{t_1^\\alpha }}} 1^\\alpha (u_j - Q_j) ,$ where $\\mathfrak {D}_i = \\operatorname{pr}(Q_i \\partial _u)$ and $\\mathfrak {D}_j = \\operatorname{pr}(Q_j \\partial _u)$ .", "Hence $\\begin{split}1 L_{ij} - i L_{1j} + j L_{1i}&\\equiv - \\mathfrak {D}_{i} L_{1j} + \\mathfrak {D}_{j} L_{1i} - (j p) (u_i - Q_i) - p j (u_i - Q_i) \\\\&\\quad + (i p) (u_j - Q_j) + p i (u_j - Q_j) + 1 F_{ij} .\\end{split}$ By definition of $F_{ij}$ we have that $1 F_{ij} - \\mathfrak {D}_{i} L_{1j} + \\mathfrak {D}_{j} L_{1i} = 0$ on solutions of (REF ).", "Furthermore, the only time derivatives in this expression come from the kinetic parts $p u_i$ and $p u_j$ of the Lagrangians.", "Therefore, $1 F_{ij} &- \\mathfrak 
{D}_{i} L_{1j} + \\mathfrak {D}_{j} L_{1i} \\\\&= - \\mathfrak {D}_{i} (p u_j - p Q_j) + \\mathfrak {D}_{j} (p u_i - p Q_i) \\\\&= - p \\mathfrak {D}_{i} (u_j - Q_j) - (\\mathfrak {D}_{i} p) (u_j - Q_j) + p \\mathfrak {D}_{j} (u_i - Q_i) + (\\mathfrak {D}_{j} p) (u_i - Q_i) \\\\&\\equiv - p i (u_j - Q_j) - (i p) (u_j - Q_j) + p j (u_i - Q_i) + (j p) (u_i - Q_i) .", "$ Combining Equations (REF ) and (REF ) gives $1 L_{ij} - i L_{1j} + j L_{1i} \\equiv 0.$ Consider three copies of Equation (REF ), each with an additional differentiation: $&k ( 1 L_{ij} - i L_{1j} + j L_{1i} ) \\equiv 0, \\\\&j ( 1 L_{ik} - i L_{1k} + k L_{1i} ) \\equiv 0, \\\\&i ( 1 L_{jk} - j L_{1k} + k L_{1j} ) \\equiv 0.$ A linear combination of these three equations gives us $ 1 ( k L_{ij} - j L_{ij} + i L_{jk} ) \\equiv 0.", "$ Since all coefficients are autonomous, this implies that $k L_{ij} - j L_{ij} + i L_{jk} \\equiv \\text{const} .$ Equations (REF ) and (REF ) together imply that $\\mathcal {L}$ fulfills the conditions of Theorem REF , hence every solution of the hierarchy (REF ) is a critical point of the pluri-Lagrangian problem for $\\mathcal {L}$ .", "Theorem REF and its proof are formulated for scalar systems, but they can be extended to the case of multicomponent systems.", "If $u = (u^1,\\ldots ,u^\\ell )$ satisfies the equations $u^k_i = Q^k_i$ , we construct the Lagrangian coefficients by $L_{ij}[u]&= \\sum _{k = 1}^\\ell \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1j} {L_{1j}}}{\\delta {u^k_{t_1^{\\alpha +1}}}} 1^\\alpha \\big ( u^k_i - Q^k_i \\big ) - \\sum _{k = 1}^\\ell \\sum _{\\alpha \\ge 0} \\frac{\\delta _{1i} {L_{1i}}}{\\delta {u^k_{t_1^{\\alpha +1}}}} 1^\\alpha \\big ( u^k_j - Q^k_j \\big ) + F_{ij}(u,u_1,u_{11},\\ldots ) .$" ], [ "Examples", "In this last Section we discuss three examples.", "For the first one, the potential Korteweg-de Vries hierarchy, a pluri-Lagrangian structure is known in the literature [28].", "Our discussion illustrates that this structure can be 
obtained using Theorem REF .", "The second example is the Nonlinear Schrödinger (NLS) hierarchy.", "Its pluri-Lagrangian structure can be considered as a special case of the one for the AKNS hierarchy obtained in [24].", "The final example is the system consisting of the sine-Gordon and modified KdV equations, which indicates that the construction of Theorem REF can be adapted to non-evolutionary equations.", "The calculations in this Section were performed in the SageMath software system [23].", "The code is available at [31]." ], [ "Potential KdV hierarchy", "We start with our running example of the Korteweg-de Vries equation.", "The potential KdV hierarchy was the first complete hierarchy of PDEs for which a pluri-Lagrangian structure was found [27].", "Here we show that this structure can also be derived using Theorem REF .", "We present only a minimal example consisting of just the first two equations in the hierarchy, $u_2 &= 3 u_{1}^{2} + u_{111} , \\\\u_3 &= 10 u_{1}^{3} + 5 u_{11}^{2} + 10 u_{1} u_{111} + u_{11111}.", "$ The corresponding Lagrangians are $L_{12}[u] &= \\frac{1}{2} u_{1} u_{2} - u_{1}^{3} - \\frac{1}{2} u_{1} u_{111}, \\\\L_{13}[u] &= \\frac{1}{2} u_{1} u_{3} - \\frac{5}{2} u_{1}^{4} + 5 u_{1} u_{11}^{2} - \\frac{1}{2} u_{111}^{2},$ and have as their Euler-Lagrange equations $&\\mathrm {D}_1 ( u_2 - (3 u_{1}^{2} + u_{111})) = 0, \\\\&\\mathrm {D}_1 ( u_3 - (10 u_{1}^{3} + 5 u_{11}^{2} + 10 u_{1} u_{111} + u_{11111})) = 0.", "$ On solutions of the evolutionary equations, there holds $\\mathrm {D}_2 L_{13} - \\mathrm {D}_3 L_{12}&= -10 u_{1}^{3} u_{12} + 10 u_{1} u_{11} u_{112} + 5 u_{11}^{2} u_{12} + 3 u_{1}^{2} u_{13} - u_{111} u_{1112} + \\frac{1}{2} u_{1} u_{1113} \\\\&\\qquad + \\frac{1}{2} u_{111} u_{13} - \\frac{1}{2} u_{13} u_{2} + \\frac{1}{2} u_{12} u_{3} \\\\&= 15 u_{1}^{4} u_{11} + 135 u_{1} u_{11}^{3} + 210 u_{1}^{2} u_{11} u_{111} + 25 u_{1}^{3} u_{1111} - 18 u_{11} u_{111}^{2} \\\\&\\qquad + \\frac{15}{2} u_{11}^{2} u_{1111} + 34 u_{1} u_{111} u_{1111} + 33 u_{1} u_{11} u_{11111} + 
\\frac{13}{2} u_{1}^{2} u_{111111} \\\\&\\qquad + \\frac{1}{2} u_{1111} u_{11111} - u_{111} u_{111111} + \\frac{1}{2} u_{1} u_{11111111} .$ Integrating this gives us $F_{23}(u,u_1,u_{11},\\ldots ) &= 3 u_{1}^{5} + \\frac{135}{2} u_{1}^{2} u_{11}^{2} + 25 u_{1}^{3} u_{111} - \\frac{25}{2} u_{11}^{2} u_{111} + 7 u_{1} u_{111}^{2} + 20 u_{1} u_{11} u_{1111} \\\\&\\quad + \\frac{13}{2} u_{1}^{2} u_{11111} + \\frac{1}{2} u_{1111}^{2} - \\frac{1}{2} u_{111} u_{11111} - \\frac{1}{2} u_{11} u_{111111} + \\frac{1}{2} u_{1} u_{1111111}.$ Let $Q_2$ and $Q_3$ be the right hand sides of Equations (REF ) and () .", "Then the remaining terms in Equation (REF ) are $&\\frac{\\delta _{13} {L_{13}}}{\\delta {u_1}} ( u_2 - Q_2) = \\left( \\frac{1}{2} u_{3} - 10 u_{1}^{3} - 5 u_{11}^{2} - 10 u_{1} u_{111} - u_{11111}) \\right) ( u_2 - 3 u_{1}^{2} - u_{111}), \\\\&\\frac{\\delta _{13} {L_{13}}}{\\delta {u_{11}}} 1 ( u_2 - Q_2) = (10 u_{1} u_{11} - u_{1111}) ( u_{12} - 6 u_{1} u_{11} - u_{1111}), \\\\&\\frac{\\delta _{13} {L_{13}}}{\\delta {u_{111}}} {11} ( u_2 - Q_2) = u_{111} ( u_{112} - 6 u_{1} u_{111} - 6 u_{11}^2 - u_{11111}),$ and $&-\\frac{\\delta _{12} {L_{12}}}{\\delta {u_1}} ( u_3 - Q_3) = -\\left( \\frac{1}{2} u_{2} - 3 u_{1}^{2} - u_{111} \\right) ( u_3 - 10 u_{1}^{3} - 5 u_{11}^{2} - 10 u_{1} u_{111} - u_{11111}), \\\\&-\\frac{\\delta _{12} {L_{12}}}{\\delta {u_{11}}} 1 ( u_3 - Q_3) = -\\frac{1}{2} u_{11} ( u_{13} - 30 u_{1}^{2} u_{11} - 20u_{11} u_{111} - 10 u_{1} u_{1111} - u_{111111}), \\\\&-\\frac{\\delta _{12} {L_{12}}}{\\delta {u_{111}}} {11} ( u_3 - Q_3) \\\\&\\quad = \\frac{1}{2} u_{1} ( u_{113} - 60 u_{1} u_{11}^{2} - 30 u_{1}^{2} u_{111} - 20 u_{111}^{2} - 30 u_{11} u_{1111} - 10 u_{1} u_{11111} - u_{1111111})).$ Adding everything together, as in Equation (REF ) of Theorem REF , we find $L_{23}[u] &= 3 u_{1}^{5} - \\frac{15}{2} u_{1}^{2} u_{11}^{2} + 10 u_{1}^{3} u_{111} - 5 u_{1}^{3} u_{2} + \\frac{7}{2} u_{11}^{2} u_{111} + 3 u_{1} u_{111}^{2} \\\\&\\quad - 6 
u_{1} u_{11} u_{1111} + \\frac{3}{2} u_{1}^{2} u_{11111} + 10 u_{1} u_{11} u_{12} - \\frac{5}{2} u_{11}^{2}u_{2} - 5 u_{1} u_{111} u_{2} \\\\&\\quad + \\frac{3}{2} u_{1}^{2} u_{3} - \\frac{1}{2} u_{1111}^{2} + \\frac{1}{2} u_{111} u_{11111} - u_{111} u_{112} + \\frac{1}{2} u_{1} u_{113} \\\\&\\quad + u_{1111} u_{12} - \\frac{1}{2} u_{11} u_{13} - \\frac{1}{2} u_{11111} u_{2} + \\frac{1}{2} u_{111} u_{3} .$ Note that the classical Euler-Lagrange equations $ \\frac{\\delta _{12} {L_{12}}}{\\delta {u}} = 0 \\qquad \\text{and} \\qquad \\frac{\\delta _{13} {L_{13}}}{\\delta {u}} = 0 $ yield Equations (REF )–(), which are the $t_1$ -derivatives of the potential KdV equations (REF )–().", "However, the multi-time Euler-Lagrange equations also contain the potential KdV equations themselves: $\\frac{\\delta _{12} {L_{12}}}{\\delta {u_1}} = -\\frac{\\delta _{23} {L_{23}}}{\\delta {u_3}} \\qquad \\Rightarrow \\qquad &\\frac{1}{2} u_2 - 3 u_1^2 - u_{111} = -\\frac{3}{2} u_1^2 - \\frac{1}{2} u_{111}, \\\\\\frac{\\delta _{13} {L_{13}}}{\\delta {u_1}} = \\frac{\\delta _{23} {L_{23}}}{\\delta {u_2}} \\qquad \\Rightarrow \\qquad &\\frac{1}{2} u_3 - 10 u_1^3 - 5 u_{11}^2 - 10 u_1 u_{111} - u_{11111} \\\\&\\quad = - 5 u_1^3 - \\frac{5}{2} u_{11}^2 - 5 u_1 u_{111} - \\frac{1}{2} u_{11111}.$" ], [ "Nonlinear Schrödinger hierarchy", "The nonlinear Schrödinger equation is one of the most prominent integrable PDEs [11], [12].", "The corresponding hierarchy is discussed for example in [21], [1].", "It is a special case of the AKNS hierarchy, the pluri-Lagrangian of which is studied in [24].", "Here we construct a pluri-Lagrangian structure for the NLS hierarchy using Theorem REF .", "In this example we consider a complex field $u: \\mathbb {R}^N \\rightarrow .", "The first two equations of the hierarchy are the nonlinear Schrödinger equation itself and the complex modified KdV equation,{\\begin{@align}{1}{-1}u_{2} &= i u_{11} - 2 i |u|^2 u , \\\\u_{3} &= u_{111} - 6 |u|^2 u_{1} .", 
"\\end{@align}}Fields $ u$ that solve both these equations and their complex conjugates are critical fields for the Lagrangians (see e.g.", "\\cite {avan2016lagrangian}){\\begin{@align*}{1}{-1}L_{12}[u] &= \\frac{i}{2} \\left( u_{2} \\bar{u}_{} - u_{} \\bar{u}_{2} \\right) - |u_{1}|^2 - |u|^4 , \\\\L_{13}[u] &= \\frac{i}{2} \\left( u_{3} \\bar{u} - u_{} \\bar{u}_{3} \\right) + \\frac{i}{2} \\left( u_{11} \\bar{u}_{1} - u_{1} \\bar{u}_{11} \\right) + \\frac{3i}{2} |u|^2 \\left( u_{1} \\bar{u} - u \\bar{u}_{1} \\right) .\\end{@align*}}$ For these Lagrangians Lemma REF gives us the function $F_{23}(u,u_1,u_{11},\\ldots ) = 2 |u|^6 - \\frac{3}{2} |u|^2 \\left( u_{11} \\bar{u} - u \\bar{u}_{11} \\right) - 6 |u u_1|^2 + \\frac{1}{2} \\left( u_{111} \\bar{u}_{1} + u_{1} \\bar{u}_{111} \\right) + |u_{11}|^2$ and Theorem REF provides the coefficient $L_{23}[u] &= -4 |u|^6 - u_{1}^2 \\bar{u}_{}^2 - u_{}^2 \\bar{u}_{1}^2 + 2 |u u_{1}|^2 + 2 |u|^2 \\left( u_{11} \\bar{u} + u \\bar{u}_{11} \\right) + \\frac{3}{2} i |u|^2 \\left( u_{2} \\bar{u} - u \\bar{u}_{2} \\right) \\\\&\\quad + \\frac{i}{2} \\left( u_{12} \\bar{u}_{1} - u_{1} \\bar{u}_{12} \\right) + u_{3} \\bar{u}_{1} + u_{1} \\bar{u}_{3} - |u_{11}|^2 + i \\left( u_{11} \\bar{u}_{2} - u_{2} \\bar{u}_{11} \\right)$ of a pluri-Lagrangian 2-form $\\mathcal {L}[u] = L_{12}[u] \\,\\mathrm {d}t_1 \\wedge \\mathrm {d}t_2 + L_{13}[u] \\,\\mathrm {d}t_1 \\wedge \\mathrm {d}t_3 + L_{23}[u] \\,\\mathrm {d}t_2 \\wedge \\mathrm {d}t_3.$ Interestingly, in this example the classical Euler-Lagrange equations $ \\frac{\\delta _{12} {L_{12}}}{\\delta {u}} = 0 \\qquad \\text{and} \\qquad \\frac{\\delta _{13} {L_{13}}}{\\delta {u}} = 0 $ already yield the evolutionary form of the NLS equations ()–().", "All other multi-time Euler-Lagrange equations, in particular those of the form $\\frac{\\delta _{12} {L_{12}}}{\\delta {u_1}} = -\\frac{\\delta _{23} {L_{23}}}{\\delta {u_3}} \\qquad \\text{and} \\qquad \\frac{\\delta _{13} {L_{13}}}{\\delta 
{u_1}} = \\frac{\\delta _{23} {L_{23}}}{\\delta {u_2}} $ are trivially satisfied." ], [ "Sine-Gordon equation and modified KdV hierarchy", "Consider the sine-Gordon equation $ u_{12} = \\sin u $ and the (potential) modified KdV hierarchy $u_3 &= u_{111} + \\frac{1}{2}u_1^3 , \\\\u_4 &= \\frac{3}{8} u_1^5 + \\frac{5}{2} u_1 u_{11}^2 + \\frac{5}{2} u_1^2 u_{111} + u_{11111} , \\\\&\\;\\;\\vdots $ This hierarchy consists of symmetries of the sine-Gordon equation (see, e.g. [19] or [17]).", "The corresponding Lagrangians are $L_{12}[u] &= \\frac{1}{2} u_{1} u_{2} - \\cos u , \\\\L_{13}[u] &= \\frac{1}{2} u_{1} u_{3} - \\frac{1}{8} u_{1}^{4} + \\frac{1}{2} u_{11}^{2}, \\\\L_{14}[u] &= \\frac{1}{2} u_{1} u_{4} - \\frac{1}{16} u_{1}^6 - \\frac{5}{12} u_{1}^3 u_{111} - \\frac{1}{2} u_{111}^2 , \\\\&\\;\\;\\vdots $ Since the sine-Gordon equation is not evolutionary, Theorem REF does not apply to this hierarchy.", "Surprisingly, a naive adaptation of the construction leads to a suitable 2-form, at least for the first few equations of the hierarchy.", "We start the construction of a pluri-Lagrangian 2-form in three dimensions, considering only $t_1$ , $t_2$ and $t_3$ .", "Let $Q_3 = u_{111} + \\frac{1}{2}u_1^3$ .", "Then on solutions of the equations, there holds $\\mathrm {D}_2 L_{13} - \\mathrm {D}_3 L_{12}&= \\frac{1}{2} u_{12} u_3 - \\frac{1}{2} u_1^3 u_{12} + u_{11} u_{112} - \\frac{1}{2} u_{13} u_2 - u_3 \\sin u \\\\&= -\\frac{1}{2} u_{12} Q_3 - \\frac{1}{2} u_2 \\mathrm {D}_1 Q_3 - \\frac{1}{2} u_1^3 \\sin u + u_{11} u_1 \\cos u \\\\&= \\mathrm {D}_1 F_{23}$ for $ F_{23}[u] = -\\frac{1}{2} u_2 \\left(u_{111} + \\frac{1}{2} u_1^3 \\right) + \\frac{1}{2} u_1^2 \\cos u .", "$ Since there is no evolutionary equation for $u_2$ , we tolerate the dependence of $F_{23}$ on this derivative.", "For the same reason, the term $\\frac{\\delta _{12} {L_{12}}}{\\delta {u_{1^{\\alpha +1}}}} \\mathrm {D}_1^\\alpha (u_2 - Q_2)$ in Equation (REF ) only makes sense for $\\alpha > 0$ .", "For $\\alpha = 0$ we just remove it.", "We are left with $L_{23}[u] &= 
\\frac{\\delta _{13} {L_{13}}}{\\delta {u_{11}}} (u_{12} - \\sin u) - \\frac{\\delta _{12} {L_{12}}}{\\delta {u_{1}}} (u_{3} - u_{111} - \\frac{1}{2}u_1^3) + F_{23}[u] \\\\&= u_{11} (u_{12} - \\sin u) - \\frac{1}{2} u_2 (u_{3} - u_{111} - \\frac{1}{2}u_1^3) -\\frac{1}{2} u_2 \\left(u_{111} + \\frac{1}{2} u_1^3 \\right) + \\frac{1}{2} u_1^2 \\cos u \\\\&= u_{11} (u_{12} - \\sin u) - \\frac{1}{2} u_2 u_3 + \\frac{1}{2} u_1^2 \\cos u .$ This pluri-Lagrangian structure in $\\mathbb {R}^3$ was first found in [27], but a pluri-Lagrangian structure incorporating more equations of the hierarchy has not been given previously.", "With the method presented here, such an extension is obtained by a straightforward (but long) calculation.", "For example, we can calculate $F_{24}$ and $F_{34}$ analogously to $F_{23}$ above.", "This in turn allows us to calculate the coefficients of the Lagrangian 2-form, $L_{24}[u] &= \\frac{3}{8} u_{1}^4 \\cos u - \\frac{5}{12} u_{1}^3 u_{112} + \\frac{5}{4} u_{1}^2 u_{11} u_{12} - \\frac{3}{2} u_{1}^2 u_{11} \\sin u - \\frac{1}{2} u_{11}^2 \\cos u \\\\&\\quad + u_{1} u_{111} \\cos u - u_{111} u_{112} + u_{1111} u_{12} - \\frac{1}{2} u_{2} u_{4} - u_{1111} \\sin u$ and $L_{34}[u] &= \\frac{3}{128} u_{1}^8 - \\frac{5}{16} u_{1}^4 u_{11}^2 + \\frac{7}{16} u_{1}^5 u_{111} - \\frac{3}{16} u_{1}^5 u_{3} - \\frac{1}{8} u_{11}^4 + \\frac{7}{4} u_{1} u_{11}^2 u_{111} \\\\&\\quad + \\frac{3}{4} u_{1}^2 u_{111}^2 - \\frac{3}{2} u_{1}^2 u_{11} u_{1111} + \\frac{1}{4} u_{1}^3 u_{11111} - \\frac{5}{12} u_{1}^3 u_{113} + \\frac{5}{4} u_{1}^2 u_{11} u_{13} \\\\&\\quad - \\frac{5}{4} u_{1} u_{11}^2 u_{3} - \\frac{5}{4} u_{1}^2 u_{111} u_{3} + \\frac{1}{4} u_{1}^3 u_{4} - \\frac{1}{2} u_{1111}^2 + \\frac{1}{2} u_{111} u_{11111} \\\\&\\quad - u_{111} u_{113} + u_{1111} u_{13} - u_{11} u_{14} - \\frac{1}{2} u_{11111} u_{3} + \\frac{1}{2} u_{111} u_{4} .$ The presented hierarchy can be extended to a doubly-infinite hierarchy, where the sine-Gordon equation connects 
two copies of the modified KdV hierarchy, one as stated above and one where $t_2$ is used as space variable.", "The calculations presented here can be easily extended to cover both sides of the hierarchy.", "A pluri-Lagrangian structure of this double hierarchy was previously obtained using a carefully chosen continuum limit [29].", "In this example, a straightforward adaptation of Equation (REF ) gives us suitable coefficients $L_{ij}$ .", "However, there does not seem to be a simple generalization of the proof we gave for Theorem REF to cover this case.", "In this example we have verified by direct calculation that the multi-time Euler-Lagrange equations consist of the Sine-Gordon and modified KdV equations and differential consequences thereof.", "Showing the validity of our construction in a more general setting, ideally with a more conceptual proof, is a goal for future research." ], [ "Conclusions", "We have shown that a hierarchy of 2-dimensional variational PDEs, that are variational symmetries of each other, possesses a pluri-Lagrangian structure.", "This extends the results of [22], where a similar result was obtained for variational ODEs.", "The existence of a hierarchy of variational symmetries for a PDE is closely related to its integrability.", "Hence our result contributes significantly to the evidence that pluri-Lagrangian structures are a fundamental feature of integrability.", "Furthermore, our construction can be used to obtain new examples of pluri-Lagrangian 2-forms, as we illustrated in the context of the nonlinear Schrödinger hierarchy.", "As illustrated by the example of the Sine-Gordon and mKdV equations, our construction applies more generally than the proof we provided.", "More research is needed to determine the most general form of the ideas presented here.", "Relevant to this line of investigation is the paper [25], which deals with the same topics as the present work (and appeared on the arXiv one day after it)." 
], [ "Acknowledgments", "The authors are grateful to Yuri Suris for helpful discussions and feedback on a draft of this manuscript.", "The authors are partly supported by DFG (Deutsche Forschungsgemeinschaft) in the frame of SFB/TRR 109 “Discretization in Geometry and Dynamics”." ] ]
1906.04535
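The multi-time Euler–Lagrange equations above recover the potential KdV equation $u_2 = 3u_1^2 + u_{111}$. As a quick numerical sanity check of that equation, the sketch below verifies that a kink-type ansatz solves it exactly; the ansatz $u = 2k\tanh(kx + 4k^3 t_2)$ and its velocity $4k^3$ are assumptions of this sketch (a standard one-soliton form for potential KdV), not taken from the text above.

```python
import math

def sech(x: float) -> float:
    return 1.0 / math.cosh(x)

# Kink ansatz for the potential KdV equation u_{t2} = 3 u_x^2 + u_{xxx}:
#   u(x, t2) = 2k tanh(k x + 4 k^3 t2)   (assumed ansatz; velocity 4 k^3)
k = 0.7

def pkdv_derivatives(x: float, t2: float):
    """Analytic u_{t2}, u_x, u_{xxx} of the kink ansatz at (x, t2)."""
    s = sech(k * x + 4 * k**3 * t2) ** 2
    u_t2 = 8 * k**4 * s                    # 2k * (4 k^3) * sech^2
    u_x = 2 * k**2 * s
    u_xxx = 2 * k**4 * (4 * s - 6 * s**2)  # third x-derivative of 2k tanh
    return u_t2, u_x, u_xxx

# The residual u_{t2} - 3 u_x^2 - u_{xxx} vanishes identically:
# 8 k^4 s - 12 k^4 s^2 - (8 k^4 s - 12 k^4 s^2) = 0 for any k.
for x in (-1.0, 0.0, 0.5, 2.0):
    u_t2, u_x, u_xxx = pkdv_derivatives(x, 0.3)
    assert abs(u_t2 - (3 * u_x**2 + u_xxx)) < 1e-12
```

Since the cancellation is algebraic, the residual sits at machine precision for every sample point, independent of the chosen $k$.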
[ [ "The possibility of direct observation of the Bloch-Siegert shift in\n coherent dynamics of multiphoton Raman transitions" ], [ "Abstract We study Rabi oscillations of the second-order Raman transition realized on dressed states of a qubit excited by an amplitude-modulated microwave field.", "The co-rotating component of the ultrastrong low-frequency modulation field excites virtual multiple photon processes between the dressed states and forms the Rabi frequency in the so-called rotating wave approximation (RWA).", "The counter-rotating modulation component also gives a significant contribution to the Rabi frequency owing to the Bloch--Siegert effect.", "It is shown that for properly chosen parameters of the modulation field and qubit, the Rabi oscillations in the RWA vanish due to destructive interference of multiple photon processes.", "In this case the Rabi oscillation results exclusively from the Bloch--Siegert effect and is directly observed in the time-resolved coherent dynamics as the Bloch--Siegert oscillation.", "Correspondingly, in Fourier spectra of the coherent response, triplets are transformed into doublets with the splitting between the lines equal to twice the Bloch--Siegert shift.", "We demonstrate these features by calculations of the qubit's evolution under the conditions of experiments with an NV center in diamond, where Raman transitions were observed.", "The direct observation of the Bloch--Siegert oscillation offers new possibilities for studying driven quantum systems in the ultrastrong regime."
], [ "INTRODUCTION", "In two-level quantum systems driven by a strong oscillating electromagnetic field, the Bloch–Siegert effect [1] is the phenomenon in which the observed resonance frequency is shifted by the presence of a counter-rotating (off-resonant) component of the driving field.", "This shift of the resonance frequency is usually negligible for optical transitions [2], but becomes significant for precision nuclear magnetic resonance (NMR) experiments [3], preventing accurate observation of the resonance frequencies.", "In the dispersive regime this shift is sometimes referred to as the dynamical Stark shift [4].", "The appearance of the Bloch–Siegert shift means that the commonly used rotating wave approximation (RWA) is broken and the contribution of the counter-rotating (non-RWA) terms to the coupling Hamiltonian must be taken into account.", "In the past decade, studies of the resonant matter-light interaction have evolved toward the ultrastrong ($0.1<g/\\varepsilon \\lesssim 1$ ) and deep strong ($g>\\varepsilon $ ) coupling regime, where the coupling strength $g$ is comparable to, or exceeds, the transition frequency $\\varepsilon $ between two energy levels of the quantum system.", "As a result, the contribution of the non-RWA terms results in complex dynamics of the field-matter interaction (see, e.g.", "[5], [6], [7], [8]) and complicates its analytical description.", "The coherent dynamics of resonant interaction between coherent radiation and two-level systems (qubits) can be described in terms of Rabi oscillations between the two energy eigenstates.", "This dynamics is extremely important for quantum information processing [9], protection against decoherence [10] and is widely studied for various quantum objects such as nuclear and electronic spins [3], [9], natural atoms [2], artificial atoms such as quantum dots [11] and superconducting qubits [12].", "The rate of coherent manipulations of a qubit's states is characterized by 
the Rabi frequency and depends on the strength of the driving field.", "An increase in the manipulation rate results in faster state operations but can also lead to the strong driving regime, where, due to the breakdown of the RWA, complex qubit dynamics occurs.", "The steady-state response of quantum systems, mainly superconducting qubits, under strong continuous-wave driving has been studied [12], [13].", "The strong driving of qubits has been investigated in Landau-Zener-Stückelberg interferometry on quantum dots [14], [15] and in hybrid quantum systems (a superconducting flux qubit – a single nitrogen-vacancy (NV) center [16]).", "Recently, in the ultrastrong regime ($g>\\varepsilon $ ), the time-resolved Rabi oscillations of artificial atoms including superconducting flux [17], [18], [19] and charge [20] qubits as well as a single NV center in diamond [21], [22], dressed states of NV centers [23], nuclear spins [24] and a single NV center under mechanical driving [25] have been studied.", "The presence of the various frequency components in the Rabi oscillations and the Bessel-function dependence of the quasienergy difference on the driving strength have been observed [18], [19], [20].", "In particular, the ultrastrong regime has been realized in the solid state system for the quantum Rabi model [26], [27], [28], [29], where the light field is quantized, and the semiclassical Rabi model [19], [21], [22], where light is treated as an external classical control field.", "The deep strong regime of the quantum Rabi model has been considered in [30].", "Note that extremely strong driving (for the semiclassical Rabi model) induces transitions between the qubit's levels not only at resonant or near-resonant excitation, but also when the frequency of the driving field is far from resonance and significantly exceeds the qubit transition frequency [31], [32].", "In terms of the dressed-state formalism [33], the dressing of a qubit by the resonant 
electromagnetic field gives rise to new energy levels of the coupled field-qubit system and the Rabi frequency characterizes the splitting of each bare level.", "A second applied field with frequency close to the Rabi frequency effectively excites transitions between the dressed states.", "This phenomenon, called the Rabi resonance, has been observed for spin ensembles [34], [35] and a single spin [36] in electron paramagnetic resonance, NMR [37], [38], [39], [40] as well as for atoms in the optical range [41], [42], [43].", "Additional resonances occur at the subharmonics of the Rabi frequency [39], [40], [41], [44].", "The additional Rabi oscillations between the dressed states at the frequency determined by the strength of the second driving field have been observed under the Rabi resonance condition [34], [35], [36], [38], [40], [42], [43].", "Since the Rabi frequency in the second driving field can easily be made even larger than the Rabi frequency in the first driving field, the RWA is often broken in the description of the coherent dynamics of the dressed-state transitions, and under such strong driving the Bloch–Siegert effect becomes significant [35], [36], [45].", "In the time-resolved observations, the Rabi resonance was realized when the first driving field was in resonance with the two-level system.", "In this case the bichromatic control of Rabi oscillations between doubly dressed states and prolongation of their coherence can find applications in quantum information processing [46], [47], [48].", "Moreover, in single-spin magnetometry, Rabi oscillations between doubly dressed states open the possibility for the direct and sensitive detection of weak radio-frequency magnetic fields [49], [50].", "Recently, rich Floquet dynamics has been demonstrated [51].", "So-called Floquet Raman transitions have been observed in the solid-state spin system of an NV center in diamond driven by a microwave field with its low-frequency amplitude modulation 
[51].", "The microwave frequency was detuned from the resonant frequency of the two-level system and Raman transitions between dressed spin states were excited by the low-frequency field when multiphoton Rabi resonances (also termed Floquet resonances [52]) were realized.", "The observed dynamics offers new capabilities to achieve effective Floquet coherent control of a quantum system with potential applications in quantum technologies or as a quantum simulator for the physics of periodically driven systems [51].", "Closed-form expressions for the Rabi frequencies of Raman transitions have been obtained beyond the RWA for the low-frequency driving component [54].", "It was shown that the ultrastrong regime ($g/\\varepsilon \\approx 0.2$ ) for the semiclassical Rabi model is reached in the experiment [51], resulting in a significant Bloch-Siegert shift.", "However, achieving even stronger light-matter coupling and observing so far unexplored behaviors of Rabi oscillations for Raman transitions remains an interesting possibility.", "In particular, in the experiment [51], as well as in other experiments where the counter-rotating terms have an important impact, the Bloch-Siegert effect is not observed separately from other oscillating processes in the qubit's dynamics.", "To our knowledge, there are no proposals in the literature for isolating the Bloch-Siegert shift, and its direct observation remains a challenge.", "In the present paper, we focus on the ultrastrong regime of the coherent dynamics of the second-order Raman transition excited by the amplitude-modulated microwave field in the two-level system and propose a method for the direct observation of the Bloch-Siegert oscillation.", "By using the semi-classical Rabi model and in the framework of the nonsecular perturbation theory based on the Bogoliubov averaging method, we obtain a closed-form expression for the Rabi frequency of this transition beyond the RWA.", "It is demonstrated that at the ultrastrong 
modulation field, due to destructive interference of multiple photon processes, the Rabi oscillations disappear for a certain ratio of the amplitude to the frequency of the modulation field.", "Under this condition, only the oscillation resulting from the Bloch–Siegert effect is observed.", "Note that usually the Bloch–Siegert effect is not observed directly, but is revealed as the shift of the resonant frequency [3] or the complex (multifrequency) character of observed Rabi oscillations [24].", "The analytical description of coherent dynamics of the second-order Raman transition is presented in Sec.", "II.", "Time and spectral manifestations of the Bloch–Siegert effect are demonstrated in Sec.", "III using calculations at parameters of the driving field which can be realized in experimental studies of an NV center in diamond [51].", "The effects of the phase of the low-frequency modulation field are also considered." ], [ "COHERENT DYNAMICS OF THE SECOND-ORDER RAMAN TRANSITION", "Raman transitions between Floquet dressed states of an initially two-level spin system are excited by a microwave field $V(t) = \\Delta _{x} \\cos (\\omega _{d} t)+2A\\cos (\\omega _{d} t)\\sin (\\omega t+\\psi )$ , where $\\cos (\\omega _{d} t)$ describes the high-frequency component of the field, $\\sin (\\omega t+\\psi )$ represents the low-frequency component with the initial phase $\\psi $ , and the amplitudes of these components $ \\Delta _{x}$ , $A\\ll \\omega _{d}$ [51].", "The Hamiltonian of the two-level system at such driving can be written as $H_{lab} = \\frac{\\Delta E}{2} \\sigma ^{z} +\\Delta _{x} \\cos (\\omega _{d} t)\\sigma ^{x} +2A\\cos (\\omega _{d} t)\\sin (\\omega t+\\psi )\\sigma ^{x}$ , where $\\Delta E$ is the transition energy between the ground and excited levels; $\\sigma ^{z}$ and $\\sigma ^{x}$ are Pauli operators.", "In the frame rotating with the driving field frequency $\\omega _d$ and in the RWA for this field (since $\\omega , \\Delta _{x}, A \\ll 
\\omega _{d}$ ), the Hamiltonian is $ H=\\frac{\\Delta _{z} }{2} \\sigma ^{z} +\\frac{\\Delta _{x} }{2} \\sigma ^{x} +A\\sin (\\omega t+\\psi )\\sigma ^{x},$ where $\\Delta _{z} =\\Delta E-\\omega _{d} $ .", "The dynamics of the system is described by the Liouville equation for the density matrix $\\rho $ : $i\\partial \\rho /\\partial t=H\\rho $ (in the following we take $\\hbar =1$ ).", "Rotating the frame around the y axis by an angle $\\theta $ ($\\rho \\rightarrow \\rho _{1} =U_{1}^{+} \\rho U_{1} $ , $U_{1} =e^{-i\\theta \\sigma ^{y} /2} $ , and $\\sigma ^{y} =(\\sigma ^{+} -\\sigma ^{-} )/i$ ), we can write the same equation with the Hamiltonian $H_{1} =U_{1}^{+} HU_{1} =\\frac{\\omega _{0} }{2} \\sigma ^{z} +A\\cos \\theta \\sin (\\omega t+\\psi )\\sigma ^{x} +A\\sin \\theta \\sin (\\omega t+\\psi )\\sigma ^{z} $ , where $\\omega _{0} =\\sqrt{\\Delta _{z}^{2} +\\Delta _{x}^{2} } $ , $\\sin \\theta =\\Delta _{x} /\\omega _{0} $ , $\\cos \\theta =\\Delta _{z} /\\omega _{0} $ .", "After the second canonical transformation $\\rho _{1} \\rightarrow \\rho _{2} =U_{2}^{+} \\rho _{1} U_{2} $ with $U_{2} =\\exp \\left\\lbrace -i\\left[\\omega _{0} t-\\frac{2A\\sin \\theta }{\\omega } \\cos (\\omega t+\\psi )\\right]\\frac{\\sigma ^{z} }{2} \\right\\rbrace $ , we obtain the Liouville equation for $\\rho _{2} $ with the Hamiltonian $H_{2} =U_{2}^{+} H_{1} U_{2} -iU_{2}^{+} \\frac{\\partial U_{2} }{\\partial t} =$ $=\\frac{A}{2i} \\cos \\theta \\left[\\sigma ^{+} \\sum _{n=-\\infty }^{\\infty }J_{n} (a)e^{-in\\pi /2}\\right.\\times $ $ \\times \\Biggl .", "\\left(e^{i(n+1)\\omega t} e^{i(n+1)\\psi } -e^{i(n-1)\\omega t} e^{i(n-1)\\psi } \\right)e^{i\\omega _{0} t} +\\mathrm {H.c.} 
\\Biggr ],$ where $J_{n} (a)$ is the Bessel function of the first kind and $a=2A\\sin \\theta /\\omega $ .", "Figure: (a) The dependence of the RWA Rabi frequency $\\Omega _{2}$ , the non-RWA Rabi frequencies $\\Omega _{2}^{*}$ and $\\tilde{\\Omega }_{2}^{*}$ , and the Bloch–Siegert shift $\\omega _{2}^{BS}$ on the normalized amplitude of the low-frequency driving field at $\\omega /2\\pi $ = 5.22 MHz, $\\Delta _{x}/2\\pi $ = 10 MHz, and $\\Delta _{z}/2\\pi $ = 3 MHz.", "(b) The frequencies presented in (a) near $A^{*}/\\omega $ , shown in more detail.", "This plot is useful for obtaining the values of these frequencies for the $\\Delta A/\\omega $ used in the following figures.", "Figure: The state population of the spin level $|0\\rangle $ for the second-order Raman transitions as a function of the evolution time at $\\omega /2\\pi $ = 5.22 MHz, $\\Delta _{x}/2\\pi $ = 10 MHz, and $\\Delta _{z}/2\\pi $ = 3 MHz.", "The strength of the low-frequency driving field is $A=A^{*}+\\Delta A$ , where $A^{*}/\\omega $ = 2.68 and $\\Delta A/\\omega $ = 0.25, 0.1, 0, –0.1, –0.25.", "The coherent oscillations in the qubit's evolution are presented for the phase of the driving field $\\psi $ = 0 (left panel) and $90^{0}$ (right panel).", "The red line shows the Bloch–Siegert oscillation.", "Figure: Fourier spectra as a function of $\\Delta A/\\omega =(A-A^{*})/\\omega $ for the phase of the low-frequency driving field $\\psi $ = 0 (left panel) and $90^{0}$ (right panel).", "Red, blue and green lines show cuts at $\\Delta A/\\omega $ equal to 0.25, 0 and –0.25, respectively.", "The other parameters are the same as in Fig.", "2.", "Figure: Fourier spectra as a function of the phase $\\psi $ of the low-frequency driving field at $\\Delta A/\\omega $ equal to 0 (a) and –0.25 (b).", "The other parameters are the same as in Fig.", "3.", "We consider only the second-order Raman transition when the resonance condition $\\omega _{0}/2 =\\omega $ is fulfilled.", "This
transition is well observed and has the largest Rabi frequency among other multiphoton Raman transitions [51].", "The Hamiltonian $H_{2}$ contains an infinite sum of oscillating harmonics with frequencies which are integer multiples of the frequency $\\omega $ .", "There are no oscillations for $n=-1$ and $n=-3$ .", "Therefore, the terms of the sum with these $n$ give the largest contribution and correspond to the RWA.", "At the strong coupling condition $0.1<A/\\omega <1$ the other oscillating terms are significant.", "Their contribution can be taken into account using the Bogoliubov averaging method [53] for constructing a time-independent effective Hamiltonian in the framework of the non-secular perturbation theory.", "The averaging procedure up to the second order in $A\\cos \\theta /\\omega $ (see [53], [54]) gives the following effective Hamiltonian: $H_{2} \\rightarrow H_{eff} =H_{2}^{(1)} +H_{2}^{(2)} $ , where $H_{2}^{(1)} =<H_{2} (t)>,$ $ H_{2}^{(2)} =\\frac{i}{2} <[\\int _{}^{t}d\\tau (H_{2} (\\tau )-<H_{2} (\\tau )>),H_{2} (t) ]>.$ Here the symbol $\\langle \\ldots \\rangle $ denotes time averaging over rapid oscillations of the type $\\exp (\\pm im\\omega t)$ given by $\\langle O(t)\\rangle =\\frac{\\omega }{2\\pi } \\int _{0}^{2\\pi /\\omega }O(t)\\,dt $ .", "The upper limit $t$ of the indefinite integral indicates the variable on which the result of the integration depends, and square brackets denote the commutation operation.", "As a result, the effective Hamiltonian can be written as $H_{eff} = \\frac{\\omega _{2}^{BS} }{2} \\sigma ^{z} +\\frac{\\Omega _{2} }{2} (\\sigma ^{+} e^{-i2\\psi } +\\mathrm {H.c.}),$ where $ \\nonumber \\Omega _{2} = 4\\frac{J_{2} (a)}{a} A\\cos \\theta , \\\\\\nonumber \\omega _{2}^{BS} = \\frac{A^{2} \\cos ^{2} \\theta }{2\\omega }\\times \\\\\\times \\left\\lbrace \\sum _{n\\ne -3}\\frac{J_{n}^{2} +J_{n} J_{n+2} }{n+3} +\\sum _{n\\ne 
-1}\\frac{J_{n}^{2} +J_{n} J_{n-2} }{n+1}\\right\\rbrace .$ Here $\\Omega _{2}$ is the Rabi frequency of the second-order Raman transition in the RWA and $\\omega _{2}^{BS}$ is the Bloch–Siegert frequency shift for the second-order transition caused by the non-resonant rapidly oscillating non-RWA terms.", "The Bessel function $J_{2} (a)$ in Eq.", "(4) for $\\Omega _{2}$ appears due to virtual multiphoton transitions, in which the number of absorbed (emitted) photons exceeds by 2 the number of emitted (absorbed) photons.", "In the equation for $\\omega _{2}^{BS}$ we omit the argument $a$ of the Bessel functions.", "In the following, we will consider the experimental situation with an NV center when the microwave field excites transitions between the spin sublevels ${\\left| 0 \\right\\rangle }$ and ${\\left| -1 \\right\\rangle }$ of this center, while the level ${\\left| +1 \\right\\rangle }$ is far detuned [51].", "We assume that the spin system is initially in the ground state ${\\left| 0 \\right\\rangle }$ .", "Then, for the second-order Raman transition the probability to find the system at some moment again in the ground state $P_{{\\left| 0 \\right\\rangle } }^{(2)} (t)$ is: $ P_{{\\left| 0 \\right\\rangle } }^{(2)} (t) = \\frac{1}{2} (1+\\cos ^{2} \\theta -2c_{1})+\\\\+e\\cos \\left[2\\omega t-a\\cos (\\omega t+\\psi )-\\phi _{e} \\right]+\\\\+c\\cos (\\Omega _{2}^{*} t-\\phi _{c})+\\\\+b\\cos \\Omega _{2}^{*} t\\cos \\left[2\\omega t-a\\cos (\\omega t+\\psi )-\\phi _{b} \\right]+\\\\+d\\sin \\Omega _{2}^{*} t\\cos \\left[2\\omega t-a\\cos (\\omega t+\\psi )-\\phi _{d} \\right],$ where the effective Rabi frequency $\\Omega _{2}^{*} = \\sqrt{\\Omega _{2}^{2} +(\\omega _{2}^{BS})^{2} }$ is introduced taking into account the Bloch–Siegert shift and the following notations are used: $c = (c_{1}^{2} +c_{2}^{2})^{1/2}$ , $e = (e_{0}^{2} +e_{\\pi /2}^{2})^{1/2}$ , $b = (b_{0}^{2} +b_{\\pi /2}^{2})^{1/2}$ , $d = (d_{0}^{2} +d_{\\pi /2}^{2})^{1/2}$ ; $\\cos \\phi 
_{c} = c_{1}/c$ , $\\cos \\phi _{e} = e_{0}/e$ , $\\cos \\phi _{b} = b_{0}/b$ , $\\cos \\phi _{d} = d_{0}/d$ ; $\\nonumber c_{1} = \\frac{\\omega _{2}^{BS} \\Omega _{2} }{4\\Omega _{2}^{*2} } \\sin 2\\theta \\cos \\left(2\\psi -a\\cos \\psi \\right)+\\frac{1}{2} \\left(\\frac{\\Omega _{2} }{\\Omega _{2}^{*} } \\right)^{2} \\cos ^{2} \\theta , \\\\\\nonumber c_{2} = \\frac{\\Omega _{2} }{4\\Omega _{2}^{*} } \\sin 2\\theta \\sin (2\\psi -a\\cos \\psi ),$ $\\nonumber e_{0} = \\frac{1}{4} \\left[\\frac{\\Omega _{2}^{2} }{\\Omega _{2}^{*2} } \\sin ^{2} \\theta \\cos (4\\psi -a\\cos \\psi )\\right.+\\\\+\\left(\\frac{\\Omega {}_{2} }{\\Omega _{2}^{*} } \\right)^{2} \\sin ^{2} \\theta \\cos (a\\cos \\psi )-\\\\-\\left.\\frac{\\Omega _{2} \\omega _{2}^{BS} }{\\Omega _{2}^{*2} } \\sin 2\\theta \\cos 2\\psi \\right],$ $\\nonumber b_{0} = -e_{0} +\\frac{1}{2} \\sin ^{2} \\theta \\cos (a\\cos \\psi ), \\\\\\nonumber d_{0} = -\\frac{\\Omega _{2} }{4\\Omega _{2}^{*} } \\sin 2\\theta \\sin 2\\psi -\\frac{\\omega _{2}^{BS} }{2\\Omega _{2}^{*} } \\sin ^{2} \\theta \\sin (a\\cos \\psi ).$ We use labels 0 and $\\pi /2$ for $e$ , $b$ and $d$ .", "The expressions for $e_{\\pi /2}, b_{\\pi /2}, d_{\\pi /2}$ are obtained from $e_{0}, b_{0}, d_{0}$ by the addition of the phase $\\pi /2$ in the arguments of sine and cosine functions containing $\\psi $ , for example, $ \\cos (4\\psi -a\\cos \\psi )\\rightarrow \\cos (4\\psi -a\\cos \\psi +\\pi /2)$ , $ \\cos 2\\psi \\rightarrow \\cos (2\\psi +\\pi /2)$ , and so on." 
], [ "TIME AND SPECTRAL MANIFESTATIONS OF THE BLOCH–SIEGERT EFFECT", "To demonstrate the possibility of direct observation of the Bloch–Siegert effect, we calculate from Eq.", "(REF ) time and spectral features of the second-order Raman transition at parameters of the driving field which can be realized in experimental studies similar to those in the experiment [51].", "Figure 1 shows the dependences of the RWA Rabi frequency $\\Omega _{2}$ , the non-RWA Rabi frequency $\\Omega _{2}^{*}$ and the Bloch–Siegert-like shift $\\omega _{2}^{BS}$ on the normalized driving strength $a = 2A\\sin \\theta /\\omega $ .", "The dashed line presents the non-RWA Rabi frequency $\\tilde{\\Omega }_{2}^{*}$ [54] taking into account the contribution of the third order in the parameter $A\\cos \\theta /\\omega $ ; this contribution is very small.", "When approaching a value of $A^{*} /\\omega $ , where $A^{*}/\\omega = a^{*} /(2\\sin \\theta )$ and $a^{*}$ is the first root of the equation $J_{2} (a) = 0$ in the expression (4) for $\\Omega _{2}$ , the RWA Rabi frequency $\\Omega _{2}$ tends to zero due to destructive interference of multiple photon processes, and the non-RWA Rabi frequency $\\Omega _{2}^{*}$ reaches its minimum: $\\Omega _{2}^{*} = \\sqrt{\\Omega _{2}^{2} +(\\omega _{2}^{BS})^{2} } = \\omega _{2}^{BS}$ .", "This first minimum is at the first zero crossing of the Bessel function $J_{2} (a)$ .", "The first zero is at $A/\\omega = 2.68$ , the second zero is at $A/\\omega = 4.39$ , etc.", "The non-RWA Rabi frequency reaches its minima at all values of $A/\\omega $ corresponding to the zero crossings of the Bessel function.", "The first term of Eq.", "(REF ) for $P_{{\\left| 0 \\right\\rangle } }^{(2)} (t)$ is time-independent.", "This term as well as the values of the parameters $c, e, b$ and $d$ are functions of the physical parameters of the quantum system and the driving high- and low-frequency fields.", "Using the expansion in a series of Bessel functions, one
can see that the second term of Eq.", "(REF ) describes rapid oscillations at the “carrier” frequencies $n\\omega $ , where $n$ is an integer.", "The third term presents slow oscillations at the non-RWA Rabi frequency $\\Omega _{2}^{*}$ .", "The fourth and fifth terms describe oscillations at the “carrier” frequencies $n\\omega $ with amplitudes modulated by the Rabi oscillations at the frequency $\\Omega _{2}^{*}$ .", "Fig.", "2 illustrates the evolution of $P_{{\\left| 0 \\right\\rangle } }^{(2)} (t)$ for two values of the phase $\\psi $ of the modulation field near and at the first minimum of the non-RWA Rabi frequency $\\Omega _{2}^{*}$ .", "The phase influences the oscillations of the ground state population of the driven qubit and the strongest differences are observed between the oscillations at $\\psi = 0$ and $\\psi = 90^{0}$ .", "At the value of $A^{*} /\\omega $ the RWA Rabi frequency $\\Omega _{2} = 0$ , the non-RWA Rabi frequency $\\Omega _{2}^{*} = \\omega _{2}^{BS}$ , and $P_{{\\left| 0 \\right\\rangle } }^{(2)} (t)$ can be written as $ P_{{\\left| 0 \\right\\rangle } }^{(2)} (t;\\Omega _{2} = 0) = \\frac{1}{2} (1+\\cos ^{2} \\theta )+\\\\\\nonumber +\\frac{1}{2} \\sin ^{2} \\theta \\cos \\left[\\omega _{2}^{BS} t+2\\omega t-a^{*} \\cos (\\omega t+\\psi )+a^{*} \\cos \\psi \\right].$ We see in Fig.", "2 that at $\\Delta A/\\omega $ = 0 the amplitude modulation vanishes and oscillations with a constant amplitude and a periodically changing frequency occur.", "Thus, the disappearance of the amplitude modulation in the evolution of the ground state population is evidence that the RWA Rabi frequency vanishes and the non-RWA Rabi frequency becomes equal to the Bloch–Siegert shift.", "In this case, the Bloch–Siegert oscillation is observed as the frequency modulation of the coherent response and is presented in Fig.", "2 by the red line.", "The parameters used correspond to the ultrastrong regime with the coupling constant defined by $g/\\varepsilon 
=Acos\\theta /\\omega \\approx 0.8$ .", "The disappearance of the RWA Rabi frequency represents a kind of electromagnetically induced transparency.", "The two-level system may become transparent under its bichromatic driving when conditions of the RWA are fulfilled for the high- and low-frequency field [55], [56].", "This effect is based on the destructive interference of multiple photon processes excited by the bichromatic driving; as a result, for properly chosen experimental parameters of the low-frequency field, amplitude of overall multiple photon transitions between the dressed states becomes zero.", "In contrast, at the strong driving when the non-RWA must be used for the low-frequency field, the full electromagnetically induced transparency cannot be realized, because the Bloch–Siegert oscillation remains if even the RWA Rabi frequency vanishes.", "The Fourier spectra of $P_{{\\left| 0 \\right\\rangle } }^{(2)} (t)$ , $F(\\omega _{*}) = \\int _{0}^{\\infty }e^{-i\\omega _{*} t} e^{-\\gamma t} P_{{\\left| 0 \\right\\rangle } }^{(2)} (t) dt$ , are shown in Fig.", "3 for two values of the phase of the modulation field.", "The decay rate $\\gamma $ was introduced phenomenologically using its value corresponding to a coherence time of 4 $\\mu $ s [51].", "The spectra consist of Lorentzian lines at zero frequency, at the non-RWA Rabi frequency $\\Omega _{2}^{*}$ and series of triplets are observed at frequencies $n\\omega $ and $n\\omega \\pm \\Omega _{2}^{*}$ corresponding to the amplitude-modulated oscillations.", "When $a\\rightarrow a^{*}$ (or $A\\rightarrow A^{*}$ ), the RWA Rabi frequency $\\Omega _{2} \\rightarrow 0$ as well as the coefficients $A_{1}, A_{2}, e_{0}, e_{\\pi /2} \\rightarrow 0$ and only the coefficients $d_{0}, d_{\\pi /2}, b_{0}, b_{\\pi /2}$ have non-zero values.", "In this case the line, corresponding to the oscillations at the RWA Rabi frequency, vanishes.", "At $A = A^{*}$ , the non-RWA Rabi frequency $\\Omega _{2}^{*} = \\omega 
_{2}^{BS}$ , only the two side lines at the frequencies $n\omega \pm \omega _{2}^{BS}$ remain in the triplets, and each triplet is transformed into a doublet.", "The splitting between the doublet lines becomes exactly equal to $2\omega _{2}^{BS}$ .", "Note that the fourth triplet near $\omega /2\pi $ = 20 MHz degenerates into a singlet.", "Indeed, using the expansion of Eq.", "(REF ) in a series of Bessel functions, the time-dependent part of this equation can be written as $\frac{1}{2} \sum _{n = -\infty }^{\infty }J_{n} (a^{*})\exp \left\lbrace i\left[\left((n-2)\omega -\omega _{2}^{BS} \right)t-a^{*} \cos \psi +n(\psi +\pi /2) \right]\right\rbrace +{\rm c.c.}$ It follows directly from this expression that the line with the higher frequency in the doublet ($4\omega -\omega _{2}^{BS}, 4\omega +\omega _{2}^{BS}$ ) at $n = -2$ and $n = 6$ vanishes, because its amplitude $J_{-2} (a^{*}) = 0$ .", "It is an additional indication that the RWA Rabi frequency $\Omega _{2}$ becomes zero and that the conditions are realized under which the double Bloch–Siegert shift $2\omega _{2}^{BS}$ can be directly determined from the splitting between the doublet lines in the spectrum $F(\omega _{*})$ .", "Features of the coherent oscillations in the qubit's evolution depend on the phase $\psi $ of the modulation field, which results in phase dependences of the spectral line amplitudes.", "Fig.", "4 illustrates such dependences for $\Delta A/\omega $ equal to 0 and –0.25.", "At $\Delta A/\omega = 0$ , doublets occur at the frequencies $n\omega \pm \omega _{2}^{BS}$ for all values of the phase of the modulation field; the spectra are practically phase-independent (Fig.", "4(a)).", "On the other hand, at $\Delta A/\omega \ne 0$ the spectra depend on the phase of the low-frequency field and the strongest differences in the intensity of the triplets are at $\psi = 0$ and $\psi = 90^{\circ }$ (Fig.", "4(b))." 
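The destructive-interference zeros and the constant-amplitude, frequency-modulated response described above can be checked numerically. The sketch below (assuming numpy and scipy are available; the value $\sin \theta \approx 0.958$ is an assumption inferred here from the quoted zeros $A/\omega = 2.68$ and $4.39$, not stated in the text) computes the zeros of $J_{2}(a)$ at which $\Omega _{2}$ vanishes, and evaluates $P_{\left| 0 \right\rangle }^{(2)}(t;\Omega _{2}=0)$ to confirm that it sweeps the full band between $\cos ^{2}\theta $ and 1, i.e., that the amplitude modulation is absent:

```python
import numpy as np
from scipy.special import jn_zeros

# First zeros of J_2(a): the RWA Rabi frequency Omega_2 vanishes there.
a_zeros = jn_zeros(2, 2)

# Convert a = 2*A*sin(theta)/omega to A/omega.  sin(theta) ~ 0.958 is an
# assumption, inferred from the quoted values A/omega = 2.68 and 4.39.
sin_theta = 0.958
A_over_omega = a_zeros / (2 * sin_theta)

# Ground-state population at Omega_2 = 0 (equation above), in units omega = 1,
# with an illustrative Bloch-Siegert shift w_bs and phase psi = 0 (assumed).
cos2 = 1.0 - sin_theta**2
a_star, w, w_bs, psi = a_zeros[0], 1.0, 0.05, 0.0
t = np.linspace(0.0, 1200.0, 200_000)
phase = w_bs * t + 2 * w * t - a_star * np.cos(w * t + psi) + a_star * np.cos(psi)
P = 0.5 * (1 + cos2) + 0.5 * sin_theta**2 * np.cos(phase)
# P oscillates between cos^2(theta) and 1 with constant amplitude: the
# modulation is purely in frequency, as stated in the text.
```

Here the first two zeros of $J_{2}$ reproduce the quoted positions $A/\omega \approx 2.68$ and $4.39$ under the assumed $\sin \theta $.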
], [ "CONCLUSION", "We have studied the second-order Raman transition excited by the amplitude-modulated microwave field in the two-level system.", "It was shown that at the ultrastrong driving by the low-frequency modulation field the excited effective Rabi frequency is significantly modified by the Bloch–Siegert shift $\\omega _{2}^{BS}$ due to multiphoton antiresonant interactions between the quantum system and the modulation field.", "In this regime, when the coupling strength $A$ exceeds the modulation frequency $\\omega $ , the RWA Rabi frequency $\\Omega _{2}$ can become zero and the non-RWA Rabi frequency $\\Omega _{2}^{*}$ is equal to the Bloch–Siegert shift $\\omega _{2}^{BS}$ .", "That results in the qualitative changes in the coherent dynamics of the system, i.e.", "the amplitude modulation of the oscillations in the evolution of the ground state population transforms into the oscillations of the constant amplitude and periodically changing frequency.", "In the Fourier spectra of the coherent response triplets are transformed into doublets with the splitting between the doublet lines equal to $2\\omega _{2}^{BS}$ .", "So, it is a unique possibility for the direct measuring the Bloch–Siegert shift in experiments on observation of Raman transitions in driven quantum systems.", "We demonstrate this possibility using calculations of the time and spectral behavior of the coherent oscillations in the qubit's evolution in the conditions of experiments with the NV center in diamond, where Raman transitions were observed [51].", "Such possibility can also be realized for the high-order Raman transitions.", "Unlike the second-order transition, for the third- and fourth-order transitions the Bloch–Siegert shift can give the dominant contribution to the non-RWA Rabi frequencies [49].", "However, the non-RWA Rabi frequency strongly decreases with increasing the transition order making difficulties for experimental studies of the high-order Raman transitions.", "The 
proposed direct observation of the Bloch–Siegert oscillations reveals additional features of coherent Raman dynamics and may be used as a new technique for studying quantum systems under bichromatic and multichromatic driving in the ultrastrong regime." ], [ "Acknowledgements", "The work was supported by the State Program of Scientific Investigations “Physical material science, new materials and technologies”, 2016-2020, and by Project 01-3-1137-2019/2023, JINR, Dubna." ] ]
1906.04528
[ [ "Forecasted constraints on modified gravity from Sunyaev Zel'dovich\n tomography" ], [ "Abstract Observational cosmology has become an important laboratory for testing General Relativity, with searches for modified gravity forming a significant portion of the science case for existing and future surveys.", "In this paper, we illustrate how future measurements of the Cosmic Microwave Background (CMB) temperature and polarization anisotropies can be combined with large galaxy surveys to improve constraints on modified gravity using the technique of Sunyaev Zel'dovich (SZ) tomography.", "SZ tomography uses the correlations between the kinetic/polarized SZ contributions to the small-angular scale CMB and the distribution of structure measured in a galaxy redshift survey to reconstruct the remote dipole and quadrupole fields, e.g.", "the CMB dipole and quadrupole observed throughout the Universe.", "We compute the effect of a class of modifications of gravity on the remote dipole and quadrupole fields, illustrating that these observables combine a number of the desirable features of existing probes.", "We then perform a fisher forecast of constraints on a two-parameter class of modifications of gravity for next-generation CMB experiments and galaxy surveys.", "By incorporating information from the reconstructed remote dipole and quadrupole fields, we find that it is possible to improve the constraints on this model by a factor of $\\sim 2$ beyond what is possible with a galaxy survey alone.", "We conclude that SZ tomography is a promising method for testing gravity with future cosmological datasets." 
], [ "Introduction", "Our observable Universe presents us with at least two distinct ways in which we can observationally constrain General Relativity (GR) in the strong gravity regime, where the full non-linear nature of Einstein's equations are manifest.", "The first is near the horizons of black holes, which have been probed by LIGO through the observation of gravitational radiation from binary black hole mergers [1] and the Event Horizon Telescope, which has imaged the vicinity of the supermassive black hole at the centre of M87 [2].", "These observations have placed important constraints on potential deviations from GR [3], [4], [5], [6], and promise to provide even more stringent tests in the future (e.g.", "[7]).", "The second regime is on cosmological distance scales, of order the size of the observable Universe.", "The observed accelerated expansion of the Universe, which in the standard cosmological model $\\Lambda $ CDM is due to a cosmological constant, requires explanation in the strong gravity regime At an even more basic level, the full non-linear machinery of GR is necessary to understand how the averaged inhomogeneous distribution of matter on small scales induces expansion of the Universe on very large scales (see e.g. 
[8]).", "In addition, the evolution (and observation) of inhomogeneities on ultra-large scales requires a fully relativistic treatment, and is sensitive to deviations from GR; see Refs.", "[9], [10] for recent reviews.", "Cosmic microwave background (CMB) experiments such as the Planck satellite [11], [12] and galaxy redshift surveys [13], [14], [15], [16], [17] (through various combinations of clustering, redshift space distortions, and lensing) have put meaningful constraints on deviations from GR, and further tests are a primary science driver of future surveys such as Euclid [18] and LSST [19].", "In this paper, we explore the potential of kinetic Sunyaev Zel'dovich (kSZ) and polarized Sunyaev Zel'dovich (pSZ) tomography to test GR on cosmological scales.", "SZ tomography is used to denote the combination of kSZ and pSZ tomography.", "The kSZ effect [20], temperature anisotropies induced by the scattering of CMB photons from free electrons in bulk motion after reionization, is the dominant blackbody component of the CMB on small angular scales (corresponding to multipoles $\ell \gtrsim 4000$ ).", "The observed kSZ temperature anisotropies from a given location in the observable Universe are determined by the product of the optical depth and the remote dipole field (e.g.", "the CMB dipole as observed from different points in spacetime) projected along the line of sight.", "The remote dipole field can be reconstructed from the correlations between a tracer of structure and the small-angular scale CMB using the technique of kSZ tomography [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36].", "The pSZ effect is the observed polarized component of CMB photons scattered after reionization, determined by the optical depth and the remote quadrupole field (the CMB quadrupole observed at different locations).", "The remote quadrupole field can be reconstructed analogously to the remote dipole field using pSZ tomography [37], [38], 
[39], [40], [41], [42], [33], [43], [44], [45].", "Because kSZ/pSZ tomography reconstructs the remote dipole/quadrupole fields along our past light cone, these techniques can be used to probe the growth of structure over cosmic timescales, and therefore can in principle serve as a powerful probe of modifications to GR.", "A major limitation on using kSZ/pSZ tomography for measurement of growth is our inability to use a tracer of LSS to perfectly infer the distribution of electrons, a problem known as the optical depth degeneracy [46], [41], [34].", "This can in principle be mitigated by measurements of other tracers of the electron distribution [47], and we discuss the impact of the optical depth degeneracy on our constraints below.", "There are a number of theoretical approaches to modifying GR in the cosmological setting [48].", "One can take a top-down approach, constructing theoretically viable models at the level of the Lagrangian.", "Examples include $f(R)$ models [49], braneworld scenarios [50], and Horndeski theories [51].", "An effective field theory (EFT) approach to cosmological perturbation theory [52], [53] can incorporate modifications to GR systematically [54], [55], [56], [57], encoding the modifications as free coefficients/functions in the EFT.", "Parametric expansions about GR solutions, such as the Parameterized Post-Friedmann (PPF) framework [58], [59], have also been employed.", "Both the EFT and PPF framework encode the effects of any top-down model consistent with various assumptions about symmetries and extra degrees of freedom.", "In this paper, we employ a phenomenological approach that directly modifies the linearized Einstein equations [60], [61], [62] by adding two free functions of time and scale, assuming that the background evolution remains as in $\\Lambda $ CDM.", "Our primary motivation for using this approach is that it can encode all modifications of GR in the scalar sector [63], [64], and it is employed in a variety of 
existing cosmological analyses (e.g.", "[65], [66], [67], [12], [11], [16]) and Boltzmann codes [68], [60].", "As our purpose is to illustrate the general utility of kSZ/pSZ tomography for constraining modifications of GR, we focus on the simplest possible parameterization in this class, where any scale dependence of modified growth is neglected and where the time-dependence is proportional to the fractional energy density in dark energy, assuming it is a cosmological constant.", "We do not undertake a comprehensive forecast of the constraints on more general models of modified gravity using the top-down, EFT, or PPF approaches.", "Within this framework, the remote dipole and quadrupole fields are sensitive to modifications of GR in a number of ways.", "The largest contribution to the remote dipole field is from the local peculiar velocity field, which is sensitive to changes in the linear growth function.", "Both the remote dipole and quadrupole fields receive contributions from the integrated Sachs-Wolfe effect, which is sensitive to both the linear growth function and gravitational slip (e.g.", "the non-equality of the two Bardeen potentials).", "The remote dipole and quadrupole fields combine a number of desirable properties of other datasets used to constrain GR.", "The primary CMB is sensitive to modifications of GR primarily through the ISW effect, which is limited to a line of sight integral from our position to the surface of last scattering.", "However, the ISW contribution to the remote dipole/quadrupole fields can be measured at a number of redshifts, yielding more information about the evolution of growth and gravitational slip.", "The local Doppler contribution to the remote dipole field provides an excellent tomographic measurement of the line-of-sight peculiar velocity field, especially on the largest scales [33], [34], yielding a sensitive probe of growth.", "Just as the popular $E_G$ statistic [63] combines measurements of velocities and lensing 
to constrain both growth and gravitational slip while removing the degeneracy with galaxy bias, the full correlation structure of the remote dipole/quadrupole fields with a galaxy survey can be used to constrain modified GR while mitigating the effects of galaxy bias and the optical depth degeneracy.", "Several previous works have investigated the utility of measurements of the kSZ effect for constraining proposed modifications of GR.", "Several papers have also explored constraints on dark energy in the context of GR using the kSZ effect, e.g.", "[69], [70], [71].", "Refs.", "[72], [73] forecasted constraints on a parameterized modification of the growth function from future CMB experiments using the pairwise velocity statistic.", "Formally, the information content of the pairwise velocity statistic should be equivalent to remote dipole reconstruction, as shown in [34].", "A direct comparison with our work is not possible, since we choose to constrain a different class of models; however, the degree of improvement offered by including kSZ measurements is roughly comparable with what is found here.", "Refs.", "[74] and [75] calculated the global kSZ power spectrum for a class of modifications of GR.", "This analysis did not take into account the information in cross-correlations with LSS measurements, and we therefore expect that the present analysis would provide more stringent constraints on such models.", "There are several advantages to the strategy employed here, as compared to previous work.", "First, remote dipole/quadrupole reconstruction isolates the relevant cosmological information content of the kSZ/pSZ effect: it is a powerful probe of large-scale inhomogeneities through cosmic time, and therefore the growth function in the linear regime.", "In addition, it is straightforward to calculate the covariances of the remote dipole/quadrupole fields with a variety of other cosmological probes such as the primary CMB temperature and polarization, galaxy number 
density, the lensing potential, etc.", "This provides a convenient framework in which to do joint forecasts, as we do here.", "Finally, the optical depth degeneracy can be incorporated simply as a redshift-dependent bias on the amplitude of the reconstructed fields which can be marginalized over, as shown in [34].", "The focus of the present work is to outline the impact of modifications of GR on the remote dipole and quadrupole fields, as well as to provide a forecast highlighting the improvement in constraints possible from kSZ/pSZ tomography using next-generation CMB experiments and LSS surveys.", "We find that significant improvement is possible, especially for gravitational slip, even in the scenario where galaxy bias and the optical depth degeneracy are fully marginalized over.", "The plan of the paper is as follows.", "In Section , we introduce the parameterization of modified gravity (MG) we use, detail the effect of MG on the angular power spectra of galaxy number counts and remote dipole and quadrupole fields, and discuss the imprints of a preferred reference frame associated with MG on the remote dipole field.", "In Section , we perform a Fisher forecast to explore how well the two free MG parameters we consider can be constrained using different datasets.", "We summarize our results in Section .", "In the appendices, we also provide some analytic understanding of the effect of MG on growth in the large-scale and small-scale limits." 
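The Fisher forecast referred to above follows the standard Gaussian power-spectrum recipe, $F_{ij} = \sum _\ell \frac{2\ell +1}{2} f_{\rm sky}\, \partial _i C_\ell \, \partial _j C_\ell / (C_\ell + N_\ell )^2$ for a single observed spectrum. The sketch below illustrates the machinery only: the toy spectrum, noise level, $f_{\rm sky}$, and parameter derivatives are illustrative stand-ins, not the paper's actual $C_\ell ^{\rm gg}$, $C_\ell ^{\rm vv}$, or $C_\ell ^{\rm qq}$ model:

```python
import numpy as np

def fisher_matrix(cl_model, theta0, ells, noise, f_sky=0.4, eps=1e-4):
    """Gaussian Fisher matrix for a single angular power spectrum.

    cl_model(theta) -> C_ell array; theta0 plays the role of the fiducial
    MG parameter vector (delta_mu, delta_gamma) = (0, 0).
    """
    cl0 = cl_model(theta0) + noise
    derivs = []
    for i in range(len(theta0)):
        dtheta = np.zeros_like(theta0)
        dtheta[i] = eps
        # Central finite difference for dC_ell / dtheta_i.
        derivs.append((cl_model(theta0 + dtheta) - cl_model(theta0 - dtheta)) / (2 * eps))
    F = np.zeros((len(theta0), len(theta0)))
    for i in range(len(theta0)):
        for j in range(len(theta0)):
            F[i, j] = np.sum((2 * ells + 1) / 2 * f_sky * derivs[i] * derivs[j] / cl0**2)
    return F

# Toy spectrum whose amplitude and tilt stand in for the two MG parameters.
ells = np.arange(2, 2000)
toy = lambda th: (1 + th[0] + th[1] * np.log(ells / 100.0)) * 1e-10 * (ells / 100.0) ** -2.0
F = fisher_matrix(toy, np.array([0.0, 0.0]), ells, noise=1e-12)
sigma_marg = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma errors
sigma_cond = 1 / np.sqrt(np.diag(F))             # unmarginalized errors
```

As expected, marginalizing over the second parameter degrades the constraint on the first whenever the two are correlated, which is the degeneracy structure the combined galaxy plus remote dipole/quadrupole data help to break.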
], [ "Sunyaev Zel'dovich tomography in modified gravity", "In this section, we outline the phenomenological parameterization of MG that we consider and discuss the effect of such modifications on the observed galaxy number counts and the remote dipole/quadrupole fields.", "We consider scalar perturbations only, and assume that the expansion history is as in $\\Lambda $ CDM, using the best-fit cosmological parameters from the Planck temperature and lensing data [76].", "The perturbed FRW metric in Newtonian gauge is written as $ds^2 = -(1+2\\Psi (t, \\vec{x})) dt^2 + a^2(t)(1-2\\Phi (t,\\vec{x})) d\\vec{x}^2\\ ,$ where $\\Psi $ and $\\Phi $ are the Newtonian pentential and curvature perturbation, respectively.", "Within GR, in the absence of anisotropic stress, the linearized Einstein equations reduce to the Poisson equation and the condition $\\Psi =\\Phi $ .", "Here, we consider parameterized modifications of GR characterized by two free functions of time and scale, $\\mu $ and $\\gamma $ , defined by: $-k^2 \\Psi &= 4\\pi G a^2 (\\rho _{\\rm m} + \\rho _{\\rm r})\\mu \\Delta \\ , \\\\\\Phi /\\Psi &= \\gamma \\ .", "$ Here $\\Delta $ is the total energy density perturbation defined by $(\\rho _{\\rm m} + \\rho _{\\rm r})\\Delta = \\rho _{\\rm m} \\Delta _{\\rm m} + \\rho _{\\rm r} \\Delta _{\\rm r}$ , with $\\Delta _{\\rm m}$ and $\\Delta _{\\rm r}$ being the matter and radiation components, respectively.", "We note that this definition of the total energy density perturbation assumes that any extra degrees of freedom associated with this class of modifications of GR do not cluster.", "The parameter $\\gamma $ is often referred to as gravitational “slip\".", "This particular class of parameterized modifications of GR was introduced in Ref.", "[60], has been incorporated into CAMB [68], [60], and constrained using data from e.g.", "CFHTLenS [66], Planck [12], [11], and the Dark Energy Survey (DES) [16].", "In principle, both $\\mu $ and $\\gamma $ are functions of 
time and scale $k$ .", "However, in this paper we focus on a simple model where $\\begin{aligned}\\mu &= 1 + \\delta \\mu \\times \\Omega _{\\rm DE}(a) \\ , \\\\\\gamma &= 1 + \\delta \\gamma \\times \\Omega _{\\rm DE}(a)\\ ,\\end{aligned}$ where $\\Omega _{\\rm DE}(a)$ is the dark energy fraction of the total energy density assuming $\\Lambda $ CDM and $ \\delta \\mu $ , $\\delta \\gamma $ are constants.", "This choice of time dependence is motivated by the potential relation between modifications of GR and the observed accelerated expansion of the Universe.", "This model corresponds to the “DE Related\" parameterization from Planck [12], [11]; mapping between variables we have $\\delta \\mu = E_{11}$ and $\\delta \\gamma = E_{22}$ .", "This is by no means a comprehensive analysis, but is meant to give a flavor of what kSZ and pSZ tomography could add to the study of modified gravity on cosmological scales.", "In the following subsections, we outline the effect of this parameterized modification of GR on the observables relevant for kSZ/pSZ tomography, including the observed galaxy number counts and the remote dipole and quadrupole fields." 
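The two-parameter model above is simple to evaluate directly. The sketch below encodes it in plain Python; $\Omega _{\rm m} = 0.31$ is an assumed Planck-like value, and radiation is neglected in $\Omega _{\rm DE}(a)$ (both are assumptions of this sketch, not values quoted in the text):

```python
# Flat LambdaCDM background; Omega_m = 0.31 is an assumed Planck-like value.
OMEGA_M = 0.31
OMEGA_DE = 1.0 - OMEGA_M

def omega_de(a):
    """Dark-energy fraction Omega_DE(a) for a cosmological constant
    (radiation neglected)."""
    return OMEGA_DE / (OMEGA_DE + OMEGA_M * a ** -3)

def mu(a, delta_mu):
    """mu(a) = 1 + delta_mu * Omega_DE(a): modifies the Poisson equation."""
    return 1.0 + delta_mu * omega_de(a)

def gamma(a, delta_gamma):
    """gamma(a) = 1 + delta_gamma * Omega_DE(a): gravitational slip Phi/Psi."""
    return 1.0 + delta_gamma * omega_de(a)

# GR is recovered at early times (Omega_DE -> 0) and when both deltas vanish.
```

By construction the modification switches on only as dark energy comes to dominate, which is the motivation stated above for this choice of time dependence.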
], [ "Galaxy number counts in modified gravity", "The overdensity of galaxy number counts is written as [77], [78] $\\Delta _{\\rm g}(\\hat{\\mathbf {n}}, z) = b_{\\rm g}(z) \\Delta _{\\rm m}(\\hat{\\mathbf {n}}, z) + \\frac{1}{\\mathcal {H}(z)} \\partial _r(\\mathbf {V}(\\hat{\\mathbf {n}}, z) \\cdot \\hat{\\mathbf {n}} ) + \\frac{2-5s(z)}{2\\chi (z)}\\int _0^{\\chi (z)} d\\chi \\left(2- \\frac{\\chi (z)-\\chi }{\\chi }\\Delta _\\Omega \\right) (\\Psi +\\Phi )\\ ,$ where the second and third terms are the contributions from redshift space distortion and from gravitational lensing, respectively.", "Here $b_{\\rm g}$ is the galaxy bias, $\\Delta _{\\rm m}$ is the matter overdensity, $ is the matter velocity field, and $ H(z) = H(z) a$ is the comoving Hubble parameter.", "In the lensing term, $ (z) = 0z dz/H(z)$ is comoving radial coordinate, $$ is the angular Laplacian, and $ s$ is the so-called magnification bias characterizing the galaxy number densitydependence on the luminosity and we calculate its fiducial values following the treatment of Ref.~\\cite {Contreras:2019bxy}.", "In the following, we assume a fiducial galaxy bias model given by \\cite {0912.0201}\\begin{equation}b_{\\rm g}(z) = 0.95/D(z)\\ ,\\end{equation}where $ D(z)$ is the growth factor normalised as $ D(z=0)=1$.We neglect the relativistic corrections to the observed galaxy number counts~\\cite {Yoo:2008tj,2009PhRvD..80h3514Y,Yoo:2010ni,1105.5280,1105.5292,1107.5427}.", "While kSZ tomography can be used to measure relativistic corrections~\\cite {Contreras:2019bxy}, the effects from the parameterized modifications of GR considered here are expected to be unobservably small~\\cite {Baker:2015bva}.", "We have confirmed that including relativistic corrections into the forecast below does not significantly affect our results.$ In Fig.", "REF , we show the dependence of the galaxy angular power spectrum $C_\\ell ^{\\rm gg}$ on the two MG parameters, where evolution of matter perturbation $\\Delta 
_{\\rm m}(a,k)$ , velocity $V(a,k)$ and potential $(\\Psi +\\Phi )(a,k)$ are calculated using MGCAMB [60], [68].", "From the plot, we can see $\\delta \\gamma $ induces a nearly scale-independent fractional change to $C_\\ell ^{\\rm gg}$ , which makes its effect on $C_\\ell ^{\\rm gg}$ highly degenerate with galaxy bias $b_g(z)$ ; $\\delta \\mu $ induces a larger, scale-dependent fractional change.", "Especially, we find the lensing term play an import role in breaking the degeneracy via its sensitivity on the potential change in MG (see Appendix ).", "Figure: The dependence of the galaxy number counts power spectrum C ℓ gg C_\\ell ^{\\rm gg} on the two MG parameters.", "The error bars shown arecalculated assuming the shot noise of the LSST gold sample (in the left panel, the error bars are too small tostick out), and the inset plots are the corresponding fractional change dC ℓ gg dθ/C ℓ gg \\frac{dC_\\ell ^{\\rm gg}}{d\\theta }/C_\\ell ^{\\rm gg},with θ\\theta being δμ\\delta \\mu or δγ\\delta \\gamma ." 
], [ "The remote dipole and quadrupole fields in modified gravity", "The kSZ effect is the result of Compton scattering of CMB photons by free electrons moving with respect to the CMB rest frame.", "This produces CMB temperature anisotropies given by $\\left.", "\\frac{ \\Delta T }{ T } \\right| _ { \\mathrm { kSZ } } \\left( \\hat{ \\mathbf { n } }_ { e } \\right) = \\int d \\chi _ { e } \\ \\dot{\\tau } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) v _ { \\mathrm { eff } } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right)$ where $\\chi _e = \\int _0^{z_e} dz/H(z)$ is comoving radial coordinate to the electron along the past light cone and $ \\hat{\\mathbf {n}}_e$ is the angular direction on the sky to the electron.", "The differential optical depth is defined as $\\dot{\\tau } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) \\equiv - \\sigma _T a(\\chi _e) \\bar{n}_e (\\chi _e) \\left[ 1 + \\delta _e \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) \\right]$ where $\\sigma _T$ is the Thomson scattering cross section, $\\bar{n}_e (\\chi _e)$ is the mean electron number density, $\\delta _e \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right)$ is the electron overdensity field.", "The remote dipole field $v _ { \\mathrm { eff } } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right)$ observed by each electron projected along the line of sight is $v _ { \\mathrm { eff } } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) \\equiv \\sum _{m=-1}^{1} \\Theta _{1}^m \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) Y_{1m} \\left(\\hat{ \\mathbf { n } } _ { e } \\right), \\ \\ \\ \\ \\Theta _{1}^m \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) \\equiv \\int \\ d^2 \\hat{\\mathbf {n}}\\Theta ( \\hat{\\mathbf {n}}_e, \\chi _e, \\hat{\\mathbf {n}}) Y_{1m}( \\hat{\\mathbf {n}})$ where $\\Theta ( \\hat{\\mathbf {n}}_e, \\chi _e, 
\\hat{\\mathbf {n}})$ is the CMB temperature the electron sees along direction $ \\hat{\\mathbf {n}}$ , $\\Theta ( \\hat{\\mathbf {n}}_e, \\chi _e, \\hat{\\mathbf {n}}) = \\Theta _{\\rm SW} + \\Theta _{\\rm ISW} + \\Theta _{\\rm Dop} $ .", "The SW contribution is given by $\\begin{aligned}\\Theta _{\\rm SW}( \\hat{\\mathbf {n}}_e, \\chi _e, \\hat{\\mathbf {n}})= \\frac{1}{3}\\Psi (\\chi _{\\rm dec}, \\mathbf {r}_{\\rm dec})= \\frac{1}{3} D_\\Psi (\\chi _{\\rm dec}, \\mathbf {r}_{\\rm dec}) \\Psi _i(\\mathbf {r}_{\\rm dec})\\ ,\\end{aligned}$ where $\\mathbf {r}_{\\rm dec} = \\chi _e \\hat{\\mathbf {n}}_e+ \\chi _{\\rm dec}^e \\hat{\\mathbf {n}}$ with $\\chi _{\\rm dec}^e = \\chi _{\\rm dec}-\\chi _e$ , and $D_\\Psi $ is the potential growth function defined by $\\Psi (a, {\\mathbf {r}}) = D_\\Psi (a, {\\mathbf {r}})\\Psi _i({\\mathbf {r}})$ .", "We have used the fact that the MG effect is negligible at early times, i.e., $\\gamma _i = \\gamma _{\\rm dec} = 1$ .", "The ISW contribution is given by $\\begin{aligned}\\Theta _{\\rm ISW}( \\hat{\\mathbf {n}}_e, \\chi _e, \\hat{\\mathbf {n}})= \\int _{a_{\\rm dec}}^a \\frac{d(\\Psi +\\Phi )}{da} da= \\int _{a_{\\rm dec}}^a \\frac{d(1+\\gamma )D_\\Psi }{da} \\Psi _i da .\\end{aligned}$ The Doppler contribution is given by the relative velocity of the emitter and the scatter, $\\Theta _{\\rm Dop}( \\hat{\\mathbf {n}}_e, \\chi _e, \\hat{\\mathbf {n}}) = \\hat{\\mathbf {n}}\\cdot [ {\\mathbf {r}}_e, \\chi _e)- {\\mathbf {r}}_{\\rm dec}, \\chi _{\\rm dec})] \\ .$ Similar to the potential growth function $D_\\Psi $ , we can define a velocity growth function $D_v(a,k)$ connecting the velocity to the primordial potential perturbation by $\\mathbf {V} = - D_v(a, \\mathbf {r}) \\nabla \\Psi _i(\\mathbf {r})$ .", "We can define a kernel in Fourier space relating the primordial potential $\\Psi _i(k)$ to the remote dipole field [31], $v _ { \\mathrm { eff } } \\left( \\hat{ \\mathbf { n } } _ { e } , \\chi _ { e } \\right) = i\\int 
\frac{d^3k}{(2\pi )^2} \Psi _i(\mathbf {k})\mathcal {K}(k,\chi _e)\mathcal {P}_1(\hat{\mathbf {k}}\cdot \hat{\mathbf {n}}_e) e^{i\chi _e {\mathbf {k}}\cdot \hat{\mathbf {n}}_e}$ , where $\mathcal {P}_n$ is the Legendre polynomial of degree $n$ .", "The Fourier kernel is $\mathcal {K}(k, \chi _e) = \mathcal {K}_{\rm SW}(k, \chi _e)+ \mathcal {K}_{\rm ISW}(k, \chi _e)+ \mathcal {K}_{\rm Dop}(k, \chi _e)$ with each component given by $\begin{aligned}\mathcal {K}_{\rm SW}(k, \chi _e)&= 3 \left(\frac{1}{3}D_\Psi (k, \chi _{\rm dec}) \right) j_1(k\chi _{\rm dec}^e)\ , \\\mathcal {K}_{\rm ISW}(k, \chi _e)&= 3 \int _{a_{\rm dec}}^{a_e} \frac{d(1+\gamma )D_\Psi (k, \chi _a)}{da} j_1(k\chi _a^e) da \ , \\\mathcal {K}_{\rm Dop}(k,\chi _e)& = kD_v(k, \chi _{\rm dec})\left[j_0(k \chi _{\rm dec}^e) -2 j_2(k \chi _{\rm dec}^e)\right] -k D_v(k, \chi _e) \ ,\end{aligned}$ where the growth functions $D_\Psi (a,k)$ and $D_v(a,k)$ are computed using MGCAMB [60], [68].", "From Eq.", "(REF ), we conclude: $\mathcal {K}_{\rm SW}$ is determined by physics at the time of recombination, and is therefore insensitive to the modifications of gravity we consider here; MG affects $\mathcal {K}_{\rm ISW}$ and $\mathcal {K}_{\rm Dop}$ ; because the Doppler term is the dominant contribution to the remote dipole field, most of the constraining power on MG comes from this term.", "In the top two panels of Fig.", "REF , we show the dependence of the angular power spectrum of the remote dipole field $C_\ell ^{\rm vv}$ on the two MG parameters.", "We see that $C_\ell ^{\rm vv}$ is only sensitive to $\delta \mu $ , not $\delta \gamma $ .", "The reason is that $C_\ell ^{\rm vv}$ is mainly sourced by $\mathcal {K}_{\rm Dop}$ on sub-horizon scales, and $D_v(a,k)$ only depends on $\delta \mu $ , not $\delta \gamma $ (see Appendix ).", "Figure: The effect of MG on the remote dipole and quadrupole signals (C_\ell ^{\rm 
vv} and C_\ell ^{\rm qq}), where the inset plots are the corresponding fractional change \frac{dC_\ell ^{\rm gg}}{d\theta }/C_\ell ^{\rm gg}, with \theta being \delta \mu or \delta \gamma .", "The error bars are the corresponding kSZ and pSZ reconstruction noise expected from cross-correlating an LSS survey with a CMB experiment, as detailed in Sec.", ".", "The pSZ effect is the polarized component of scattered photons after reionization, and the perturbations to the Stokes parameters are given by: $( Q \pm i U ) ^ { \mathrm { pSZ } } \left( \hat{ \mathbf { n } } _ { e } \right) = \frac{ \sqrt{ 6 } }{ 10 } \int d \chi _ { e } \ \dot{\tau } \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right) q _ { \mathrm { eff } } ^ { \pm } \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right),$ where $q _ { \mathrm { eff } } ^ { \pm } \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right)$ is the remote quadrupole field, which receives contributions from both scalar and tensor modes; we consider only scalar contributions here; for a detailed discussion of the tensor contribution see Refs.", "[40], [33], [45].", "For scalar modes, $q _ { \mathrm { eff } } ^ {\pm } \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right) = q _ { \mathrm { eff } } ^ {E} \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right) \equiv \sum _{m=-2}^2 \Theta _{2}^m \left( \hat{ \mathbf { n } } _ { e } , \chi _ { e } \right) _{\pm 2} Y_{2m} \left(\hat{ \mathbf { n } } _ { e } \right)$ where $q _ { \mathrm { eff } } ^ {E}$ is an E-mode type remote quadrupole [40], [33], [45].", "In analogy to the remote dipole, we can relate the remote quadrupole to the primordial potential $\Psi _i(\bf k)$ by [33] $\Theta _{2}^m ( \hat{\mathbf {n}}_e, \chi _e) = \int \ d^2 \hat{\mathbf {n}}\ \Theta ( \hat{\mathbf {n}}_e, \chi _e, \hat{\mathbf {n}}) Y_{2m}( \hat{\mathbf {n}}) = \int 
\\frac{d^3k}{(2\\pi )^2} \\Psi _i(\\mathbf {k})\\mathcal {G}(k,\\chi _e)Y^*_{2m}(\\hat{\\mathbf {k}}\\cdot \\hat{\\mathbf {n}}_e) e^{i\\chi _e {\\mathbf {k}}\\cdot \\hat{\\mathbf {n}}_e}\\ ,$ where $\\mathcal {G}(k,\\chi _e) = \\mathcal {G}_{\\rm SW}(k,\\chi _e) + \\mathcal {G}_{\\rm ISW}(k,\\chi _e) + \\mathcal {G}_{\\rm Dop}(k,\\chi _e)$ with each component given by $\\begin{aligned}\\mathcal {G}_{\\rm SW}(k, \\chi _e)&= -4\\pi \\left(\\frac{1}{3}D_\\Psi (k, \\chi _{\\rm dec})\\right) j_2(k\\chi _{\\rm dec}^e)\\ , \\\\\\mathcal {G}_{\\rm ISW}(k, \\chi _e) &= -4\\pi \\int _{a_{\\rm dec}}^a \\frac{d(D_\\Psi (1+\\gamma ))}{da} j_2(k\\chi _a^e) da\\ , \\\\\\mathcal {G}_{\\rm Dop}(k, \\chi _e) &= \\frac{4\\pi }{5}kD_v(k,\\chi _{\\rm dec}) \\left[3j_3(k\\chi _{\\rm dec}^e)-2j_1(k\\chi _{\\rm dec}^e) \\right]\\ .\\end{aligned}$ In bottom two panels of Fig.", "REF , we show the dependence of the E-mode remote quadrupole $C_\\ell ^{\\rm qq}$ on the two MG parameters.", "Here we see that $C_\\ell ^{\\rm qq}$ is sensitive to both MG parameters $\\delta \\mu $ and $\\delta \\gamma $ since the ISW contribution is more important than for the remote dipole field.", "The remote dipole and quadrupole fields defined above can be reconstructed from maps of the CMB temperature/polarization anisotropies and a three-dimensional probe of structure such as a galaxy redshift survey using the technique of SZ tomography.", "The galaxy survey is used to trace the differential optical depth at different locations, assuming a model for the correlation between electron and galaxy overdensity.", "Binning the galaxies in redshift, an unbiased quadratic estimator can be defined to give a three-dimensional reconstruction of the remote dipole and quadrupole fields.", "These are the primary observables which we employ below to obtain constraints on MG.", "The reconstruction noises are given in Ref.", "[42], to which we refer the reader for more details.", "Generally speaking, the reconstruction 
noise decreases with decreasing shot noise in the galaxy and with increasing sensitivity and resolution of the CMB experiment." ], [ "The tilted Universe in modified gravity", "For adiabatic perturbations in GR, a pure gradient curvature perturbation can be removed by a coordinate transformation (see e.g.", "[87], [88]).", "This implies that, in the absence of a preferred reference frame, gradient modes can have no observable consequences.", "This so-called “tilted Universe\" [89] was analyzed in detail by [90] (see also Ref.", "[91]), who showed that in Newtonian gauge there is a precise cancellation in the CMB temperature anisotropies between the SW, ISW, and Doppler contributions which ensure that a pure gradient mode is unobservable.", "The kSZ signal was shown to vanish in a tilted Universe in Refs.", "[30], [31], which relies on a more stringent cancellation which must occur everywhere in the post-recombination Universe.", "In particular, the following relation between growth functions must hold in Newtonian gauge: $F(a)\\equiv \\left( \\frac{1}{3} D_{\\Psi } (a_{\\rm dec})\\right) (\\chi (a_{\\rm dec}) - \\chi (a)) - D_v(a) + D_v (a_{\\rm dec}) + \\int _{a_{\\rm dec}}^a da^{\\prime } \\frac{d (1+\\gamma )D_{\\Psi }}{da^{\\prime }} (\\chi (a_{\\rm dec}) - \\chi (a^{\\prime })) = 0,$ i.e., $\\lim _{k\\rightarrow 0} \\partial \\mathcal {K}(k,a)/\\partial k = 0$ , which was shown to hold within $\\Lambda $ CDM in Ref. 
[31].", "Modifications of GR that alter growth will generically violate Eq.", "(REF ), thus making a gradient mode observable via the primary CMB and the kSZ effect.", "We plot Eq.", "(REF ) in the left panel of Fig.", "REF for non-zero $\delta \mu $ and $\delta \gamma $ to illustrate this.", "Because we assume that modified growth does not occur until the onset of dark energy domination, we can see that Eq.", "(REF ) is violated only at late times.", "The contribution of a gradient mode to the remote dipole field is determined by the first derivative of the Fourier kernel, Eq.", "(REF ), in the limit where $k \rightarrow 0$ .", "In GR, this is zero, implying that the leading-order behavior is $\mathcal {K}(k)|_{\rm GR}\propto k^3$ .", "In Fig.", "REF , we show the remote dipole Fourier kernel for GR as well as for non-zero $\delta \mu $ and $\delta \gamma $ .", "Here, we see that for generic non-zero $\delta \mu $ and $\delta \gamma $ , the leading-order behavior is $\mathcal {K}(k)|_{\rm MG}\propto k$ .", "One consequence of this is that the remote dipole field within MG can be relatively more sensitive to super-horizon perturbations than in GR.", "However, the kSZ signal is mainly sourced by the sub-horizon Doppler contribution, and therefore the extra sensitivity to super-horizon perturbations in MG is hard to observe.", "The in-principle observability of superhorizon gradient modes implies the existence of a preferred reference frame.", "In the parameterized modification of GR that we consider in this paper, the introduction of the time-dependent functions $\mu $ and $\gamma $ explicitly breaks diffeomorphism invariance and fixes the preferred frame.", "In a top-down approach, this could correspond to an explicit or implicit breaking of diffeomorphism invariance due to the existence of additional degrees of freedom in the gravitational sector.", "Because the preferred frame is manifest only at late times, in the absence of more theoretical 
input, there is no reason that the frame setting the initial conditions for perturbations in the early Universe should be equivalent to the preferred frame fixed at late times.", "One could incorporate this uncertainty by introducing three extra parameters, corresponding to the components of a pure-gradient mode in $\Psi $ .", "Here, we simply note that growth can be a probe of diffeomorphism invariance via Eq.", "(REF ), and that the remote dipole field can therefore be a unique probe of diffeomorphism invariance in theories that modify GR.", "Figure: In the left panel, we show the function $F(a) = \lim _{k\rightarrow 0} \partial \mathcal {K}(a,k)/\partial k$ (see Eq. )", "in GR and MG as a function of the scale factor $a$ .", "In the right panel, we show the kernel $\mathcal {K}(k)$ at $a=0.5$ for superhorizon modes, where $\mathcal {K}(k)|_{\rm GR}\propto k^3$ while $\mathcal {K}(k)|_{\rm MG}\propto k$ in the small-$k$ limit.", "This demonstrates that there are observable imprints of a preferred frame on the remote dipole field in MG.", "With some intuition about the effect of MG on galaxy number counts and the remote dipole/quadrupole field, we now perform a Fisher forecast of the constraints on $\delta \mu $ and $\delta \gamma $ from forthcoming CMB experiments and galaxy surveys using SZ tomography."
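The cancellation condition $F(a)=0$ above lends itself to a direct numerical check. Below is a minimal Python sketch that evaluates $F(a)$ on a tabulated grid; in a real calculation the growth functions $D_\Psi $, $D_v$ and the slip $\gamma $ would be tabulated from a Boltzmann code such as MGCAMB, whereas the function name and input arrays here are illustrative placeholders.

```python
import numpy as np

def gradient_mode_residual(a, D_psi, D_v, gamma, chi):
    """Tabulated-grid evaluation of the gradient-mode cancellation function
    F(a) = (1/3) D_psi(a_dec) [chi(a_dec) - chi(a)] - D_v(a) + D_v(a_dec)
           + int_{a_dec}^{a} d[(1+gamma) D_psi]/da' [chi(a_dec) - chi(a')] da'.
    The first grid point plays the role of a_dec; F(a) = 0 for all a signals
    that a pure gradient mode is unobservable (as holds in GR + LCDM)."""
    chi_dec = chi[0]
    integrand = np.gradient((1.0 + gamma) * D_psi, a) * (chi_dec - chi)
    # cumulative trapezoid integral from a_dec up to each grid point
    da = np.diff(a)
    cumint = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * da)))
    return (D_psi[0] / 3.0) * (chi_dec - chi) - D_v + D_v[0] + cumint
```

A non-zero residual at late times, as in the left panel of the figure, is exactly the signature of modified growth making the gradient mode observable.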
], [ "Experiments", "The remote dipole and quadrupole fields can be reconstructed by cross correlating large-scale structure (LSS) with small-scale CMB anisotropies using the quadratic estimators defined in [42].", "The reconstruction noise on the remote dipole and quadrupole fields depends on the volume and shot noise of the galaxy survey and the sensitivity of the CMB experiment; we refer the reader to Ref.", "[42] for further details.", "We assume data on the full sky, and neglect foregrounds and systematics in both the CMB experiment and galaxy survey.", "These are clearly idealistic assumptions, but should give a flavor of what is in-principle obtainable with future experiments.", "In this paper, we use the LSST gold sample as our fiducial LSS experiment.", "For this dataset, the galaxy number density $n(z)$ per square arcmin is expected to be [19] $n(z) = n_{\rm g} \frac{1}{2z_0} \left( \frac{z}{z_0}\right)^2 \exp \left(-\frac{z}{z_0} \right)\ ,$ with $z_0 = 0.3$ and $n_{\rm g}=40/{\rm arcmin}^2$ .", "The predicted photo-z error is $\sigma _z = 0.03(1+z)$ , which determines the minimum width of our redshift bins.", "We consider hypothetical CMB experiments with a beam full-width-half-maximum (FWHM) $\theta _{\rm FWHM}$ of 1.5 arcmin and effective detector noise level $\Delta _{\rm T} = \lbrace 5.0, 1.0, 0.1\rbrace $ $\mu $ K-arcmin, i.e., $N_\ell ^{\rm TT} = \Delta _{\rm T}^2 \exp \left(\ell (\ell +1)\theta _{\rm FWHM}^2/8\ln 2 \right)$ and $N_\ell ^{\rm EE} = 2N_\ell ^{\rm TT}$ .", "The expected noise $N_\ell ^{ \lbrace \rm vq\rbrace \lbrace \rm vq\rbrace } $ of reconstructed kSZ/pSZ signals is calculated following Ref.", "[34] and we show the noise level expected from the LSST gold sample and the CMB experiment with optimal sensitivity ($\Delta _{\rm T}=0.1$ $\mu $ K-arcmin) in Fig.", "REF ."
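The survey specifications above are simple closed-form expressions that can be evaluated directly. The following Python sketch implements the detector noise spectrum and the LSST gold-sample number density as quoted; the function names and unit conventions (arcmin converted to radians for $N_\ell $) are our own choices, and $n(z)$ is written with the conventional decaying exponential $\exp (-z/z_0)$.

```python
import math

def cmb_noise_TT(ell, delta_T_uK_arcmin=1.0, theta_fwhm_arcmin=1.5):
    """Detector noise N_ell^TT = Delta_T^2 exp(l(l+1) theta_FWHM^2 / (8 ln 2)).

    Delta_T and theta_FWHM are converted from arcmin to radians, so the
    returned noise carries units of (uK * rad)^2, as is conventional."""
    arcmin = math.pi / (180.0 * 60.0)  # arcmin -> radians
    delta_T = delta_T_uK_arcmin * arcmin
    theta = theta_fwhm_arcmin * arcmin
    return delta_T**2 * math.exp(ell * (ell + 1) * theta**2 / (8.0 * math.log(2.0)))

def cmb_noise_EE(ell, **kw):
    # Polarization noise is twice the temperature noise: N_ell^EE = 2 N_ell^TT.
    return 2.0 * cmb_noise_TT(ell, **kw)

def lsst_dndz(z, z0=0.3, n_g=40.0):
    """LSST gold-sample density n(z) = n_g/(2 z0) (z/z0)^2 exp(-z/z0),
    in galaxies per arcmin^2 per unit redshift."""
    return n_g / (2.0 * z0) * (z / z0) ** 2 * math.exp(-z / z0)
```

With these conventions $n(z)$ peaks at $z = 2z_0 = 0.6$, and the beam factor makes the noise blow up exponentially at multipoles beyond the beam scale.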
], [ "Forecasts", "The Fisher matrix for model parameters $\\theta $ constrained by angular power spectra $C_\\ell $ is written as [92] $F_{\\alpha \\beta } = \\sum _\\ell ^{\\ell _{\\rm max}} \\frac{2\\ell +1}{2} f_{\\rm sky}{\\rm Tr}\\left(C_\\ell ^{-1} \\frac{\\partial C_\\ell }{\\partial \\theta _\\alpha } C_\\ell ^{-1}\\frac{\\partial C_\\ell }{\\partial \\theta _\\beta } \\right)\\ ,$ and it is related to the expected uncertainty of a model parameter $\\theta _\\alpha $ by $\\sigma (\\theta _\\alpha ) = \\sqrt{(F^{-1})_{\\alpha \\alpha }}\\ ,$ where $f_{\\rm sky}$ is the (mutual) sky fraction covered by the surveys, $C_\\ell = C_\\ell ^{\\rm XY} + N_\\ell ^{\\rm XY}$ with $C_\\ell ^{\\rm XY}$ and $N_\\ell ^{\\rm XY}$ being the cross spectra of signals and noises, respectively; ${\\rm X}$ and ${\\rm Y}$ are the corresponding observables.", "To model the angular power spectra of galaxy number counts $C_\\ell ^{{\\rm g}_i {\\rm g}_j}$ across all redshift bins $[z_i, z_{i+1}]\\times [z_j, z_{j+1}]$ ($i,j=1, 2, ..., N_{\\rm bins}$ ), we need $ 2N_{\\rm bins}+2$ parameters $\\lbrace b_{\\rm g}^i, s^i, \\delta \\mu , \\delta \\gamma \\rbrace $ (all $\\Lambda $ CDM parameters are assumed fixed).", "Due to the optical depth degeneracy [46], [41], [34], the remote dipole and quadrupole fields reconstructed from kSZ/pSZ tomography are uncertain up to an optical depth bias $b_v^i$ in each redshift bin.", "To model the angular power spectra of the dipole/quadrupole fields $C_\\ell ^{\\lbrace {\\rm v}_i{\\rm q}_m\\rbrace \\lbrace {\\rm v}_j{\\rm q}_n\\rbrace }$ across all redshift bins ($i,j, m,n=1, 2, ..., N_{\\rm bins}$ ), we need $ N_{\\rm bins}+2$ parameters $\\lbrace b_v^i, \\delta \\mu , \\delta \\gamma \\rbrace $ .", "In our forecast, we take a conservative cut of $\\ell _{\\rm max} = 50$ for the angular power spectra of galaxy number counts and moments of the remote dipole and quadrupole fields.", "We consider $N_{\\rm bins}$ redshift bins with equal width in 
comoving distance, covering the range $0<z<3$ .", "In general, the larger $N_{\rm bins}$ , the thinner each redshift bin and the larger the number of modes available for use.", "We use $N_{\rm bins} = 40$ , ensuring all redshift bins are wider than the expected redshift error $\sigma _z$ of the LSST gold sample.", "In addition, all our forecast results are based on $f_{\rm sky} = 1$ , and therefore all the uncertainties obtained should be multiplied by a factor $\sqrt{f_{\rm sky}^{-1}}$ for partial sky coverage.", "The results of our forecast are shown in Table REF and Fig.", "REF .", "Table: Forecasted constraints $\sqrt{f_{\rm sky}^{-1}}\sigma (\delta \mu , \delta \gamma )$ from different datasets, where the LSST gold sample is used for galaxy number counts and the reconstruction noise on the remote dipole/quadrupole fields is based on the LSST gold sample and a CMB experiment with three representative sensitivities ($\theta _{\rm FWHM}=1.5$ arcmin, $\Delta _{\rm T} = 0.1/ 1.0/ 5.0 \ \mu $ K-arcmin).", "Here we have used the notation ** for large uncertainties $(>10, >10)$ . For comparison, the constraints from the primary CMB alone expected from a Planck-like experiment and from an ideal cosmic-variance-limited (CVL) experiment are $\sigma (\delta \mu ,\delta \gamma )|_{\rm Planck} = (0.66,1.51)$ and $\sigma (\delta \mu ,\delta \gamma )|_{\rm CVL} = (0.27,0.59)$ . As shown in Fig.", "REF , the MG parameter $\delta \gamma $ changes the galaxy angular power spectrum $C_\ell ^{\rm gg, i}$ by a nearly scale-independent factor, which is strongly degenerate with the galaxy bias $b_{\rm g}^i$ .", "Fortunately, the degeneracy is broken by the power spectra across different redshift bins.", "Therefore the constraint on $(\delta \mu , \delta \gamma )$ only marginally improves by adding bias priors $\mathcal {P}(b_{\rm g}^i) = 0.1 b_{\rm g}^i$ and 
$\mathcal {P}(s^i) = 0.1 s^i$ (see Table REF and Fig.", "REF ).", "As shown in Fig.", "REF , the angular power spectrum of the remote dipole field $C_\ell ^{\rm vv,i}$ is sensitive to $\delta \mu $ , but not $\delta \gamma $ .", "The parameter $\delta \mu $ changes $C_\ell ^{\rm vv}$ by a nearly scale-independent factor, which is strongly degenerate with the optical depth bias $b_v$ .", "Therefore both $\delta \mu $ and $\delta \gamma $ are unconstrained by kSZ tomography without a prior on the optical depth bias $b_v$ .", "Imposing optical depth bias priors $\mathcal {P}(b_v)$ (we take $\sigma (b_v^i)=0.1$ [47]), we find $\delta \mu $ is constrained by kSZ tomography with uncertainty $\sigma (\delta \mu )\approx 0.13$ , which is almost independent of the CMB experiment sensitivities, i.e., the constraint on $\delta \mu $ is largely limited by its degeneracy with the optical depth bias.", "The angular power spectrum of the remote quadrupole field $C_\ell ^{\rm qq}$ depends on both parameters $\delta \mu $ and $\delta \gamma $ in a scale-dependent way.", "The uncertainties of $\delta \mu $ and $\delta \gamma $ constrained from pSZ tomography are largely limited by the reconstruction noise, and imposing optical depth bias $b_v$ priors only slightly reduces the uncertainties.", "This is evident from the large improvement in the constraints using the remote quadrupole field for CMB experiments with higher sensitivity.", "Using both the remote dipole and quadrupole fields $C_\ell ^{ \lbrace \rm vq\rbrace \lbrace \rm vq\rbrace }$ , the $\delta \mu -b_v$ degeneracy is further broken, and the uncertainties of $(\delta \mu ,\delta \gamma )$ improve by a factor of $\gtrsim 2$ compared with the constraint from the remote quadrupole field only.", "Imposing a $10\%$ prior on the optical depth bias, we again find $\sigma (\delta \mu )\approx 0.13$ , independent of the sensitivity of the CMB experiments we considered.",
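The Fisher-matrix sum quoted earlier maps directly onto a few lines of numerical code. The sketch below is a generic implementation of $F_{\alpha \beta } = \sum _\ell \frac{2\ell +1}{2} f_{\rm sky}{\rm Tr}(C_\ell ^{-1} \partial _\alpha C_\ell \, C_\ell ^{-1}\partial _\beta C_\ell )$ and $\sigma (\theta _\alpha ) = \sqrt{(F^{-1})_{\alpha \alpha }}$; the input arrays are hypothetical, and a real forecast would assemble $C_\ell $ from the binned gg/vv/qq spectra and their derivatives with respect to $\lbrace b_{\rm g}^i, s^i, b_v^i, \delta \mu , \delta \gamma \rbrace $.

```python
import numpy as np

def fisher_matrix(cov_ells, dcov_ells, f_sky=1.0, ell_min=2):
    """Gaussian Fisher matrix for angular power spectra.

    cov_ells : list of (Nobs, Nobs) total covariances C_ell = S_ell + N_ell,
               one per multipole starting at ell_min.
    dcov_ells: list (over parameters) of lists of (Nobs, Nobs) derivatives
               dC_ell/dtheta, indexed the same way as cov_ells."""
    n_par = len(dcov_ells)
    F = np.zeros((n_par, n_par))
    for i, C in enumerate(cov_ells):
        ell = ell_min + i
        Cinv = np.linalg.inv(C)
        for a in range(n_par):
            Ma = Cinv @ dcov_ells[a][i]
            for b in range(a, n_par):
                Mb = Cinv @ dcov_ells[b][i]
                F[a, b] += 0.5 * (2 * ell + 1) * f_sky * np.trace(Ma @ Mb)
                F[b, a] = F[a, b]
    return F

def marginalized_sigma(F):
    """Marginalized uncertainties sigma(theta_a) = sqrt((F^-1)_aa)."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

Note that $F \propto f_{\rm sky}$, which is exactly why the quoted uncertainties scale as $\sqrt{f_{\rm sky}^{-1}}$ for partial sky coverage.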
"Using the number counts and remote fields $C_\ell ^{ \lbrace \rm gvq\rbrace \lbrace \rm gvq\rbrace }$ without any priors, we obtain a better constraint than that from $C_\ell ^{ \lbrace \rm vq\rbrace \lbrace \rm vq\rbrace }+ \mathcal {P}(b_v)$ , especially for the low-sensitivity CMB experiment we considered.", "Imposing both galaxy bias priors $\mathcal {P}(b_g)$ and optical depth bias priors $\mathcal {P}(b_v)$ further reduces the uncertainty on both MG parameters to the $\mathcal {O}(0.1)$ level.", "Figure: 1-$\sigma $ contours of forecasted uncertainties of $\delta \mu $ and $\delta \gamma $ using different datasets (LSST and a CMB experiment with $\Delta _{\rm T} = 0.1\ \mu $ K-arcmin), where in the right panel we show the forecasts with external prior information on the galaxy bias $\sigma (b_{\rm g}^i)=0.1 b_{\rm g}^i$ and the optical depth bias $\sigma (b_v^i)=0.1$ .", "In summary: although the remote dipole field is sensitive to $\delta \mu $ , this parameter is not well constrained by kSZ tomography alone due to a strong degeneracy between $\delta \mu $ and the optical depth bias $b_v$ .", "To better constrain $\delta \mu $ , we can add prior information $\mathcal {P}(b_v)$ from other tracers of the electron distribution or use the larger dataset $C_\ell ^{ \lbrace \rm gvq\rbrace \lbrace \rm gvq\rbrace }$ in which the $\delta \mu -b_v$ degeneracy is broken.", "In a similar way, $\delta \gamma $ is not well constrained by pSZ tomography $C_\ell ^{\rm qq}$ alone due to a degeneracy with $\delta \mu $ , which can be broken using a larger dataset $C_\ell ^{ \lbrace \rm vq\rbrace \lbrace \rm vq\rbrace }$ or $C_\ell ^{ \lbrace \rm gvq\rbrace \lbrace \rm gvq\rbrace }$ (see Fig.", "REF ).", "The constraint from the full dataset $C_\ell ^{ \lbrace \rm gvq\rbrace \lbrace \rm gvq\rbrace }$ without any priors is better than that from either $C_\ell ^{\rm gg} + \mathcal 
{P}(b_{\\rm g}, s)$ or $C_\\ell ^{ \\lbrace \\rm vq\\rbrace \\lbrace \\rm vq\\rbrace } + \\mathcal {P}(b_v)$ ." ], [ "Conclusions", "In this paper, we explored the potential contribution of SZ tomography from future cosmological datasets to tests of GR on cosmological scales.", "The remote dipole and quadrupole fields reconstructed using SZ tomography are sensitive to modifications of gravity in a number of complimentary ways.", "We have chosen as our example of modified gravity a two-parameter $(\\delta \\mu , \\delta \\gamma )$ modification of the linearized Einstein equations, where the growth of structure is impacted in proportion to the relative importance of dark energy in the energy budget.", "In this parameterization, $\\delta \\mu $ affects the strength of gravitational clustering while $\\delta \\gamma $ encodes the gravitational slip (e.g.", "the non-equality of the Bardeen potentials).", "The remote dipole field is sensitive to $\\delta \\mu $ through the enhancement/weakening of the peculiar velocity field in deeper/shallower potential wells.", "Because the ISW contribution to the remote dipole field is far smaller than the Doppler contribution from peculiar velocities, this observable has limited sensitivity to $\\delta \\gamma $ .", "The remote quadrupole field is sensitive to both $\\delta \\mu $ and $\\delta \\gamma $ , primarily due to the significant contribution of the ISW effect to this observable.", "Unlike the primary CMB, where the late-time ISW effect makes a significant contribution to a rather limited number of modes [12], [11], the ISW effect makes an important contribution to the remote quadrupole field everywhere.", "The power of this observable is therefore limited by the fidelity of the reconstruction, and not cosmic variance.", "We have forecasted the possible constraints on $(\\delta \\mu , \\delta \\gamma )$ using a next-generation galaxy survey such as LSST and a high-resolution, low-noise CMB experiment such as CMB-S4.", "A 
major limitation on using kSZ/pSZ tomography for testing gravity is the optical depth degeneracy (our inability to use a tracer of LSS to perfectly infer the distribution of electrons), which we model as a redshift-dependent multiplicative bias $b_v$ on the remote dipole and quadrupole fields.", "In the absence of a prior on $b_v$ , the remote dipole field cannot be used to constrain modified gravity due to a large degeneracy between $b_v$ and $\delta \mu $ .", "This degeneracy is not as problematic for the remote quadrupole field.", "However, due to the large reconstruction noise, the constraints on modified gravity from the remote quadrupole field are not competitive.", "Constraints from the galaxy number counts themselves also suffer from a degeneracy between the galaxy bias and the MG parameters.", "A major result of this paper is that these degeneracies can be largely mitigated by using correlations between the galaxy number counts and the remote dipole/quadrupole fields.", "Compared with galaxy number counts alone, the uncertainties of $\delta \mu $ and $\delta \gamma $ decrease by $\sim 40\%$ when including the remote dipole/quadrupole fields, yielding limits $\sigma (\delta \mu , \delta \gamma )$ of $(0.12,0.15)$ for the median CMB noise considered (assuming data on the full sky).", "If $10\%$ priors on the galaxy bias and optical depth bias are included, then these constraints can be further improved by $\sim 30 \%$ .", "Further improvement is available as the CMB noise is lowered, which makes more information from the remote dipole/quadrupole fields accessible.", "This can be compared with the cosmic-variance limited constraint from the primary CMB temperature and polarization of $(0.27,0.59)$ .", "Although we have made a number of idealistic assumptions, such as full-sky data and the absence of foregrounds and systematics, among others, our result is intended to determine if SZ tomography could in principle be an important tool for testing gravity with 
cosmology.", "SZ tomography will be feasible with future cosmological datasets, providing additional information on modifications of gravity for `free'.", "In this respect, we view our results as encouraging, motivating more detailed analyses with existing and future cosmological datasets." ], [ "Acknowledgments", "We would like to thank Niayesh Afshordi, Juan Cayuso, Adrienne Erickcek, James Mertens, and Moritz Munchmeyer for helpful discussions.", "This research was supported in part by Perimeter Institute for Theoretical Physics.", "Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.", "MCJ is supported by the Natural Sciences and Engineering Research Council through a Discovery grant." ], [ "Analytic understanding of perturbations in modified gravity", "To obtain some intuition about the effect of MG on various observables, we now analyze the evolution of structure in the large-scale and small-scale limits [86], [93], [58].", "Without loss of generality, we focus on late-time evolution, when the radiation energy density is far smaller than the matter energy density, and the energy-momentum conservation equations of matter are written as $\begin{aligned}\Delta _{\rm m}^{\prime } + k_H V & = 3\zeta ^{\prime } \ \Leftrightarrow Y^{\prime }+ k_H V = 0, \\V^{\prime } + V & = k_H\Psi \ ,\end{aligned}$ where $^{\prime } = d/d\ln a$ , $k_H = k/aH$ , $\zeta = \Phi + V/k_H$ is the gauge-invariant curvature perturbation and $Y = \Delta _{\rm m}-3 \zeta $ .", "For later use, we define $H_{\rm m}^2 = 8\pi G\rho _{\rm m}a^3/3$ .", "Assuming the background evolution in MG is the same as that in GR + $\Lambda $ CDM, it is easy to see $2HH^{\prime } + 3H_{\rm m}^2/a^3 = 0\ .$ Substituting Eq.", "(,REF ) into Eq.", "(REF ), we obtain $-\Psi = \frac{3}{2}\frac{H_{\rm m}^2}{ak^2}\mu \frac{ Y -3\frac{Y^{\prime }}{k_H^2} 
}{1+\\frac{9}{2}\\frac{H_{\\rm m}^2}{ak^2}\\mu \\gamma }\\ ,$ Combining the above equation and Eq.", "(REF ), we find $\\Delta _{\\rm m}= \\frac{ Y -3\\frac{Y^{\\prime }}{k_H^2} }{1+\\frac{9}{2}\\frac{H_{\\rm m}^2}{ak^2}\\mu \\gamma }\\ .$ Differentiating Eq.", "(REF ) and using Eq.", "(REF ,), we obtain the evolution equation for the overdensity $Y$ $Y^{\\prime \\prime } + Y^{\\prime }\\left(2+\\frac{H^{\\prime }}{H}+ \\frac{\\frac{9}{2}\\frac{H_{\\rm m}^2}{ak^2}\\mu }{1+\\frac{9}{2}\\frac{H_{\\rm m}^2}{ak^2}\\mu \\gamma } \\right)-Y\\frac{\\frac{3}{2}\\frac{H_{\\rm m}^2}{a^3H^2}\\mu }{1+\\frac{9}{2}\\frac{H_{\\rm m}^2}{ak^2}\\mu \\gamma } = 0\\ .$ In the large scale limit $k\\rightarrow 0$ , Eq.", "(REF ) is simplified to $Y^{\\prime \\prime } + Y^{\\prime }\\left(2 + \\frac{H^{\\prime }}{H}+ \\frac{1}{\\gamma }\\right) -Y\\frac{k_H^2}{3}\\frac{1}{\\gamma } = 0\\ ,$ which (along with Eqs.", "[REF ,REF ]) shows that $Y$ , therefore $V$ and $\\Psi +\\Phi $ only depends on $\\gamma $ , while $\\Delta _{\\rm m}$ depends on both $\\mu $ and $\\gamma $ .", "In the opposite limit $k\\rightarrow \\infty $ , Eq.", "(REF ) is simplified as $Y^{\\prime \\prime } + Y^{\\prime }\\left(2+\\frac{H^{\\prime }}{H} \\right)-Y \\left(\\frac{3}{2}\\frac{H_{\\rm m}^2}{a^3H^2}\\mu \\right)= 0\\ ,$ which shows that $Y$ , $V$ and $\\Delta _{\\rm m}$ only depend on $\\mu $ , while $\\Psi +\\Phi $ depends on both $\\mu $ and $\\gamma $ ." 
], [ "Consistency requirement on super-horizon modes", "Assuming that MG is a metric theory in a statistically homogeneous and isotropic cosmology where energy-momentum is conserved, Bertschinger [93] showed that the curvature perturbation $\\zeta $ remains a constant to leading order, $\\zeta ^{\\prime } = \\mathcal {O}(k_H^2 \\zeta )\\ ,$ where $\\zeta ^{\\prime } = \\Phi ^{\\prime } + \\Psi + \\frac{V}{k_H}\\frac{H^{\\prime }}{H} \\ .$ The consistency between the super-horizon constraint (REF ) and the MG parameterization (REF ,) along with the energy-momentum conservation equations (REF ,) has been explicitly examined in previous works [60], [86].", "Here we perform an order of magnitude estimate pointing out a subtle difference in super-horizon evolution in GR versus MG.", "Since $\\mathcal {O}(V/k_H)=\\mathcal {O}(\\zeta )$ , we can parameterize Eq.", "(REF ) as [58] $\\lim _{k_H\\rightarrow 0} \\zeta ^{\\prime } = \\frac{1}{3}f_\\zeta k_H V\\,$ where $f_\\zeta $ is of $\\mathcal {O}(1)$ and needs to be determined by the consistency requirement.", "From Eq.", "(REF ), we obtain $-(\\Psi /\\mu ) - (\\Psi /\\mu )^{\\prime } = \\frac{3}{2}\\frac{H_{\\rm m}^2}{a} \\Delta _{\\rm m}^{\\prime }\\ .$ Combining Eq.", "(REF ) with Eq.", "(REF ), we have $\\lim _{k_H\\rightarrow 0} (\\Psi /\\mu ) + (\\Psi /\\mu )^{\\prime } =\\frac{3}{2}\\frac{H_{\\rm m}^2}{a^3H^2}(1-f_\\zeta )\\frac{V}{k_H}\\ ,$ Combining Eq.", "(REF ,REF , ) with the above equation, we obtain the governing equation of $f_\\zeta $ $\\lim _{k_H\\rightarrow 0} \\Psi \\left(1-\\frac{1}{\\gamma }-\\frac{\\gamma ^{\\prime }}{\\gamma }-\\frac{\\mu ^{\\prime }}{\\mu } \\right)-\\frac{V}{\\gamma k_H}\\frac{H^{\\prime }}{H}= - (1-f_\\zeta )\\frac{V}{k_H}\\frac{H^{\\prime }}{H}\\ .$ where we have used Eq.", "(REF ).", "It is easy to see $f_\\zeta |_{\\rm GR}= 0$ , while $f_\\zeta |_{\\rm MG}\\ne 0$ , i.e.", ", $\\lim _{k_H\\rightarrow 0}\\zeta ^{\\prime }|_{\\rm GR} = \\mathcal {O}(k_H^3 \\zeta )\\ , \\quad 
\\lim _{k_H\\rightarrow 0}\\zeta ^{\\prime }|_{\\rm MG} = \\mathcal {O}(k_H^2 \\zeta )\\ .$ This explains why it is safe to set $\\zeta ^{\\prime } =0$ in Eq.", "(REF ) in analyzing super-horizon evolution in GR, while the same approximation leads to a wrong solution in MG." ] ]
1906.04208
[ [ "Provably Robust Deep Learning via Adversarially Trained Smoothed\n Classifiers" ], [ "Abstract Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to $\\ell_2$-norm adversarial perturbations.", "In this paper, we employ adversarial training to improve the performance of randomized smoothing.", "We design an adapted attack for smoothed classifiers, and we show how this attack can be used in an adversarial training setting to boost the provable robustness of smoothed classifiers.", "We demonstrate through extensive experimentation that our method consistently outperforms all existing provably $\\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state-of-the-art for provable $\\ell_2$-defenses.", "Moreover, we find that pre-training and semi-supervised learning boost adversarially trained smoothed classifiers even further.", "Our code and trained models are available at http://github.com/Hadisalman/smoothing-adversarial ." 
], [ "Introduction", "Neural networks have been very successful in tasks such as image classification and speech recognition, but have been shown to be extremely brittle to small, adversarially-chosen perturbations of their inputs [33], [14].", "A classifier (e.g., a neural network), which correctly classifies an image $x$ , can be fooled by an adversary into misclassifying $x+\delta $ , where $\delta $ is an adversarial perturbation so small that $x$ and $x + \delta $ are indistinguishable to the human eye.", "Recently, many works have proposed heuristic defenses intended to train models robust to such adversarial perturbations.", "However, most of these defenses were broken using more powerful adversaries [4], [2], [35].", "This encouraged researchers to develop defenses that lead to certifiably robust classifiers, i.e., whose predictions for most of the test examples $x$ can be verified to be constant within a neighborhood of $x$ [39], [27].", "Unfortunately, these techniques do not immediately scale to large neural networks that are used in practice.", "To mitigate this limitation of prior certifiable defenses, a number of papers [21], [22], [6] consider the randomized smoothing approach, which transforms any classifier $f$ (e.g., a neural network) into a new smoothed classifier $g$ that has certifiable $\ell _2$ -norm robustness guarantees.", "This transformation works as follows.", "Let $f$ be an arbitrary base classifier which maps inputs in $\mathbb {R}^d$ to classes in $\mathcal {Y}$ .", "Given an input $x$ , the smoothed classifier $g(x)$ labels $x$ as having the class $c$ that is most likely to be returned by the base classifier $f$ when fed a noisy corruption $x + \delta $ , where $\delta \sim \mathcal {N}(0, \sigma ^2 I)$ is a vector sampled according to an isotropic Gaussian distribution.", "As shown in [6], one can derive certifiable robustness for such smoothed classifiers via the Neyman-Pearson lemma.", "They demonstrate that for $\ell _2$ 
perturbations, randomized smoothing outperforms other certifiably robust classifiers that have been previously proposed.", "It is scalable to networks with any architecture and size, which makes it suitable for building robust real-world neural networks." ], [ "Our contributions", "In this paper, we employ adversarial training to substantially improve on the previous certified robustness results of randomized smoothing [21], [22], [6].", "(Note that we do not provide a new certification method incorporating adversarial training; the improvements that we get are due to the higher quality of our base classifiers as a result of adversarial training.)", "We present, for the first time, a direct attack for smoothed classifiers.", "We then demonstrate how to use this attack to adversarially train smoothed models with not only boosted empirical robustness but also substantially improved certifiable robustness using the certification method of [6].", "We demonstrate that our method outperforms all existing provably $\ell _2$ -robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state-of-the-art for provable $\ell _2$ -defenses.", "For instance, our Resnet-50 ImageNet classifier achieves $56\%$ provable top-1 accuracy (compared to the best previous provable accuracy of $49\%$ ) under adversarial perturbations with $\ell _2$ norm less than $127/255$ .", "Similarly, our Resnet-110 CIFAR-10 smoothed classifier achieves up to $16\%$ improvement over the previous state-of-the-art, and by combining our technique with pre-training [17] and semi-supervised learning [5], we boost our results to up to $22\%$ improvement over the previous state-of-the-art.", "Our main results are reported in Tables REF and REF for ImageNet and CIFAR-10.", "See Tables REF and REF in Appendix  for the standard accuracies corresponding to these results.", "Finally, we provide an alternative, but more concise, proof of the tight robustness guarantee of [6] by casting it as a 
nonlinear Lipschitz property of the smoothed classifier.", "See appendix  for the complete proof.", "Table: Certified top-1 accuracy of our best ImageNet classifiers at various $\ell _2$ radii.", "Table: Certified top-1 accuracy of our best CIFAR-10 classifiers at various $\ell _2$ radii." ], [ "Our techniques", "Here we describe our techniques for adversarial attacks and training on smoothed classifiers.", "We first require some background on randomized smoothing classifiers.", "For a more detailed description of randomized smoothing, see [6]." ], [ "Background on randomized smoothing", "Consider a classifier $f$ from $\mathbb {R}^d$ to classes $\mathcal {Y}$ .", "Randomized smoothing is a method that constructs a new, smoothed classifier $g$ from the base classifier $f$ .", "The smoothed classifier $g$ assigns to a query point $x$ the class which is most likely to be returned by the base classifier $f$ under isotropic Gaussian noise perturbation of $x$ , i.e., $g(x) = \operatornamewithlimits{arg\,max}_{c \in \mathcal {Y}} \; \mathbb {P}(f(x+\delta ) = c) \quad \text{where} \; \delta \sim \mathcal {N}(0, \sigma ^2 I) \; .$ The noise level $\sigma ^2$ is a hyperparameter of the smoothed classifier $g$ which controls a robustness/accuracy tradeoff.", "Equivalently, this means that $g(x)$ returns the class $c$ whose decision region $\lbrace x^{\prime } \in \mathbb {R}^d: f(x^{\prime }) = c\rbrace $ has the largest measure under the distribution $\mathcal {N}(x, \sigma ^2 I)$ .", "[6] recently presented a tight robustness guarantee for the smoothed classifier $g$ and gave Monte Carlo algorithms for certifying the robustness of $g$ around $x$ or predicting the class of $x$ using $g$ , which succeed with high probability."
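The Monte Carlo prediction procedure referenced above can be sketched in a few lines. Below is a toy NumPy version (the paper's actual implementation uses deep networks in PyTorch; the linear toy classifier and function names here are ours) that estimates $g(x)$ by sampling Gaussian noise and taking a majority vote over the base classifier's labels.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Monte Carlo estimate of g(x) = argmax_c P(f(x + delta) = c)
    with delta ~ N(0, sigma^2 I).  `base_classifier` maps a batch of
    inputs of shape (n_samples, d) to integer labels of shape (n_samples,)."""
    gen = np.random.default_rng(rng)
    noise = gen.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = base_classifier(x[None, :] + noise)
    counts = np.bincount(labels)
    return int(np.argmax(counts)), counts

# toy base classifier f: label 1 iff the first coordinate is positive
def toy_classifier(batch):
    return (batch[:, 0] > 0).astype(int)
```

For $x = (1, 0)$ and $\sigma = 0.25$, the base classifier votes for class 1 under almost every noise draw, so the majority vote returns $g(x) = 1$ with an empirical $p_A$ very close to one.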
], [ "Robustness guarantee for smoothed classifiers", "The robustness guarantee presented by [6] uses the Neyman-Pearson lemma, and is as follows: suppose that when the base classifier $f$ classifies $\\mathcal {N}(x, \\sigma ^2 I)$ , the class $c_A$ is returned with probability $p_A = \\mathbb {P}(f(x+\\delta ) = c_A)$ , and the “runner-up” class $c_B$ is returned with probability $p_B = \\max _{c \\ne c_A} \\mathbb {P}(f(x+\\delta ) = c)$ .", "The smoothed classifier $g$ is robust around $x$ within the radius $R = \\frac{\\sigma }{2} \\left(\\Phi ^{-1}(p_A) - \\Phi ^{-1}(p_B)\\right),$ where $\\Phi ^{-1}$ is the inverse of the standard Gaussian CDF.", "It is not clear how to compute $p_A$ and $p_B$ exactly (if $f$ is given by a deep neural network for example).", "Monte Carlo sampling is used to estimate some $\\underline{p_A}$ and $\\overline{p_B}$ for which $\\underline{p_A} \\le p_A$ and $\\overline{p_B} \\ge p_B$ with arbitrarily high probability over the samples.", "The result of (REF ) still holds if we replace $p_A$ with $\\underline{p_A}$ and $p_B$ with $\\overline{p_B}$ .", "This guarantee can in fact be obtained alternatively by explicitly computing the Lipschitz constant of the smoothed classifier, as we do in Appendix ." 
], [ "We now describe our attack against smoothed classifiers.", "To do so, it will first be useful to describe smoothed classifiers in a more general setting.", "Specifically, we consider a generalization of (REF ) to soft classifiers, namely, functions $F: \\mathbb {R}^d \\rightarrow P(\\mathcal {Y})$ , where $P(\\mathcal {Y})$ is the set of probability distributions over $\\mathcal {Y}$ .", "Neural networks typically learn such soft classifiers, then use the argmax of the soft classifier as the final hard classifier.", "Given a soft classifier $F$ , its associated smoothed soft classifier $G: \\mathbb {R}^n \\rightarrow P(\\mathcal {Y})$ is defined as $G (x) = \\left( F * \\mathcal {N}(0, \\sigma ^2 I) \\right) (x) = \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} [F(x + \\delta )] \\; .$ Let $f(x)$ and $F (x)$ denote the hard and soft classifiers learned by the neural network, respectively, and let $g$ and $G$ denote the associated smoothed hard and smoothed soft classifiers.", "Directly finding adversarial examples for the smoothed hard classifier $g$ is a somewhat ill-behaved problem because of the argmax, so we instead propose to find adversarial examples for the smoothed soft classifier $G$ .", "Empirically we found that doing so will also find good adversarial examples for the smoothed hard classifier.", "More concretely, given a labeled data point $(x, y)$ , we wish to find a point $\\hat{x}$ which maximizes the loss of $G$ in an $\\ell _2$ ball around $x$ for some choice of loss function.", "As is canonical in the literature, we focus on the cross entropy loss $\\ell _{\\mathrm {CE}}$ .", "Thus, given a labeled data point $(x, y)$ our (ideal) adversarial perturbation is given by the formula: $\\hat{x} &= \\operatornamewithlimits{arg\\,max}_{\\Vert x^{\\prime } - x\\Vert _2 \\le \\epsilon } \\ell _\\mathrm {CE}(G (x^{\\prime }), y) \\nonumber \\\\&= \\operatornamewithlimits{arg\\,max}_{\\Vert x^{\\prime } - x\\Vert _2 
\\le \\epsilon } \\left( - \\log \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\left( F (x^{\\prime } + \\delta ) \\right)_y \\right] \\right) \\; .", "$ We will refer to (REF ) as the SmoothAdv objective.", "The SmoothAdv objective is highly non-convex, so as is common in the literature, we will optimize it via projected gradient descent (PGD), and variants thereof.", "It is hard to find exact gradients for (REF ), so in practice we must use some estimator based on random Gaussian samples.", "There are a number of different natural estimators for the derivative of the objective function in (REF ), and the choice of estimator can dramatically change the performance of the attack.", "For more details, see Section .", "We note that (REF ) should not be confused with the similar-looking objective $\\hat{x}_{\\mathrm {wrong}}&= \\operatornamewithlimits{arg\\,max}_{\\Vert x^{\\prime } - x\\Vert _2 \\le \\epsilon } \\left( \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\ell _\\mathrm {CE}(F (x^{\\prime } + \\delta ), y) \\right] \\right) \\nonumber \\\\&= \\operatornamewithlimits{arg\\,max}_{\\Vert x^{\\prime } - x\\Vert _2 \\le \\epsilon } \\left( \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[-\\log \\left( F(x^{\\prime } + \\delta ) \\right)_y\\right] \\right) \\; , $ as suggested in section G.3 of [6].", "There is a subtle, but very important, distinction between (REF ) and (REF ).", "Conceptually, solving (REF ) corresponds to finding an adversarial example of $F$ that is robust to Gaussian noise.", "In contrast, (REF ) is directly attacking the smoothed model i.e.", "trying to find adversarial examples that decrease the probability of correct classification of the smoothed soft classifier $G$ .", "From this point of view, (REF ) is the right optimization problem that should be used to find adversarial examples of $G$ .", 
"This distinction turns out to be crucial in practice: empirically, [6] found attacks based on (REF ) not to be effective.", "Interestingly, for a large class of classifiers, including neural networks, one can alternatively derive the objective (REF ) from an optimization perspective, by attempting to directly find adversarial examples to the smoothed hard classifier that the neural network provides.", "While they ultimately yield the same objective, this perspective may also be enlightening, and so we include it in Appendix ." ], [ "Adversarial training using ", "We now wish to use our new attack to boost the adversarial robustness of smoothed classifiers.", "We do so using the well-studied adversarial training framework [20], [25].", "In adversarial training, given a current set of model weights $w_t$ and a labeled data point $(x_t, y_t)$ , one finds an adversarial perturbation $\\hat{x}_t$ of $x_t$ for the current model $w_t$ , and then takes a gradient step for the model parameters, evaluated at the point $(\\hat{x}_t, y_t)$ .", "Intuitively, this encourages the network to learn to minimize the worst-case loss over a neighborhood around the input.", "At a high level, we propose to instead do adversarial training using an adversarial example for the smoothed classifier.", "We combine this with the approach suggested in [6], and train at Gaussian perturbations of this adversarial example.", "That is, given current set of weights $w_t$ and a labeled data point $(x_t, y_t)$ , we find $\\hat{x}_t$ as a solution to (REF ), and then take a gradient step for $w_t$ based at gaussian perturbations of $\\hat{x}_t$ .", "In contrast to standard adversarial training, we are training the base classifier so that its associated smoothed classifier minimizes worst-case loss in a neighborhood around the current point.", "For more details of our implementation, see Section REF .", "We emphasize that although we are training using adversarial examples for the smoothed soft 
classifier, in the end we certify the robustness of the smoothed hard classifier we obtain after training.", "We make two important observations about our method.", "First, adversarial training is an empirical defense, and typically offers no provable guarantees.", "However, we demonstrate that by combining our formulation of adversarial training with randomized smoothing, we are able to substantially boost the certifiable robust accuracy of our smoothed classifiers.", "Thus, while adversarial training using SmoothAdv is still ultimately a heuristic, and offers no provable robustness by itself, the smoothed classifier that we obtain using this heuristic has strong certifiable guarantees.", "Second, we found empirically that to obtain strong certifiable numbers using randomized smoothing, it is insufficient to use standard adversarial training on the base classifier.", "While such adversarial training does indeed offer good empirical robust accuracy, the resulting classifier is not optimized for randomized smoothing.", "In contrast, our method specifically finds base classifiers whose smoothed counterparts are robust.", "As a result, the certifiable numbers for standard adversarial training are noticeably worse than those obtained using our method.", "See Appendix REF for an in-depth comparison." 
], [ "Implementing ", "As mentioned above, it is difficult to optimize the SmoothAdv objective, so we will approximate it via first order methods.", "We focus on two such methods: the well-studied projected gradient descent (PGD) method [20], [25], and the recently proposed decoupled direction and norm (DDN) method [29] which achieves $\\ell _2$ robust accuracy competitive with PGD on CIFAR-10.", "The main task when implementing these methods is to, given a data point $(x, y)$ , compute the gradient of the objective function in (REF ) with respect to $x^{\\prime }$ .", "If we let $J(x^{\\prime }) = \\ell _{CE} (G (x^{\\prime }), y)$ denote the objective function in (REF ), we have $\\nabla _{x^{\\prime }} J(x^{\\prime }) = \\nabla _{x^{\\prime }} \\left( - \\log \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} [F (x^{\\prime } + \\delta )_y] \\right) \\; .$ However, it is not clear how to evaluate (REF ) exactly, as it takes the form of a complicated high dimensional integral.", "Therefore, we will use Monte Carlo approximations.", "We sample i.i.d.", "Gaussians $\\delta _1, \\ldots , \\delta _m \\sim \\mathcal {N}(0, \\sigma ^2 I)$ , and use the plug-in estimator for the expectation: $\\nabla _{x^{\\prime }} J(x^{\\prime }) \\approx \\nabla _{x^{\\prime }} \\left( - \\log \\left( \\frac{1}{m} \\sum _{i = 1}^m F (x^{\\prime } + \\delta _i)_y \\right) \\right) \\;.$ It is not hard to see that if $F$ is smooth, this estimator will converge to (REF ) as we take more samples.", "In practice, if we take $m$ samples, then to evaluate (REF ) on all $m$ samples requires evaluating the network $m$ times.", "This becomes expensive for large $m$ , especially if we want to plug this into the adversarial training framework, which is already slow.", "Thus, when we use this for adversarial training, we use $m_{\\mathrm {train}} \\in \\lbrace 1, 2, 4, 8\\rbrace $ .", "When we run this attack to evaluate the empirical adversarial accuracy of our 
models, we use substantially larger choices of $m$ , specifically, $m_{\\mathrm {test}} \\in \\lbrace 1, 4, 8, 16, 64, 128\\rbrace $ .", "Empirically we found that increasing $m$ beyond 128 did not substantially improve performance.", "While this estimator does converge to the true gradient given enough samples, note that it is not an unbiased estimator for the gradient.", "Despite this, we found that using (REF ) performs very well in practice.", "Indeed, using (REF ) yields our strongest empirical attacks, as well as our strongest certifiable defenses when we use this attack in adversarial training.", "In the remainder of the paper, we let $\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ denote the PGD attack with gradient steps given by (REF ), and similarly we let $\\textsc {SmoothAdv}_{\\mathrm {DDN}}$ denote the DDN attack with gradient steps given by (REF )." ], [ "An unbiased, gradient free method", "We note that there is an alternative way to optimize (REF ) using first order methods.", "Notice that the logarithm in (REF ) does not change the argmax, and so it suffices to find a minimizer of $G(x^{\\prime })_y$ subject to the $\\ell _2$ constraint.", "We then observe that $\\nabla _{x^{\\prime }} (G(x^{\\prime })_y) = \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\nabla _{x^{\\prime }} F(x^{\\prime } + \\delta )_y \\right] \\stackrel{(a)}{=} \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\frac{\\delta }{\\sigma ^2} \\cdot F(x^{\\prime } + \\delta )_y \\right] \\; .$ The equality (a) is known as Stein's lemma [32], although we note that something similar can be derived for more general distributions.", "There is a natural unbiased estimator for (REF ): sample i.i.d.", "gaussians $\\delta _1, \\ldots , \\delta _m \\sim \\mathcal {N}(0, \\sigma ^2 I)$ , and form the estimator $ \\nabla _{x^{\\prime }} (G(x^{\\prime })_y) \\approx \\frac{1}{m} \\sum _{i = 1}^m 
\\frac{\\delta _i}{\\sigma ^2} \\cdot F (x^{\\prime } + \\delta _i)_y \\; .$ This estimator has a number of nice properties.", "As mentioned previously, it is an unbiased estimator for (REF ), in contrast to (REF ).", "It also requires no computations of the gradient of $F$ ; if $F$ is a neural network, this saves both time and memory by not storing preactivations during the forward pass.", "Finally, it is very general: the derivation of (REF ) actually holds even if $F$ is a hard classifier (or more precisely, the one-hot embedding of a hard classifier).", "In particular, this implies that this technique can even be used to directly find adversarial examples of the smoothed hard classifier.", "Despite these appealing features, in practice we find that this attack is quite weak.", "We speculate that this is because the variance of the gradient estimator is too high.", "For this reason, in the empirical evaluation we focus on attacks using (REF ), but we believe that investigating this attack in practice is an interesting direction for future work.", "See Appendix REF for more details.", "[t] 1: SmoothAdv-ersarial Training function TrainMiniBatch($(x^{(1)}, y^{(1)})$ , $(x^{(2)}, y^{(2)})$ , ..., $(x^{(B)}, y^{(B)})$ )    Attacker $\\leftarrow $ ($\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ or $\\textsc {SmoothAdv}_{\\mathrm {DDN}}$ )    Generate noise samples $\\delta _i^{(j)} \\sim \\mathcal {N}(0, \\sigma ^2 I)$ for $1 \\le i \\le m$ , $1 \\le j \\le B$    $L \\leftarrow []$    # List of adversarial examples for training    for $1 \\le j \\le B$ do       $\\hat{x}^{(j)} \\leftarrow x^{(j)}$    # Adversarial example       for $1 \\le k \\le T$ do          Update $\\hat{x}^{(j)}$ according to the $k$ -th step of Attacker, where we use          the noise samples $\\delta _1^{(j)}$ , $\\delta _2^{(j)}$ , ..., $\\delta _m^{(j)}$ to estimate a gradient of the loss of the smoothed          model according to (REF )          # We are reusing the same noise samples between 
different steps of the attack       end       Append $((\\hat{x}^{(j)} + \\delta _1^{(j)}, y^{(j)}), (\\hat{x}^{(j)} + \\delta _2^{(j)}, y^{(j)}), \\ldots , (\\hat{x}^{(j)} + \\delta _m^{(j)}, y^{(j)}))$ to $L$       # Again, we are reusing the same noise samples for the augmentation    end    Run backpropagation on $L$ with an appropriate learning rate" ], [ "Implementing adversarial training for smoothed classifiers", "We incorporate adversarial training into the approach of [6] changing as few moving parts as possible in order to enable a direct comparison.", "In particular, we use the same network architectures, batch size, and learning rate schedule.", "For CIFAR-10, we change the number of epochs, but for ImageNet, we leave it the same.", "We discuss more of these specifics in Appendix , and here we describe how to perform adversarial training on a single mini-batch.", "The algorithm is shown in Pseudocode REF , with the following parameters: $B$ is the mini-batch size, $m$ is the number of noise samples used for gradient estimation in (REF ) as well as for Gaussian noise data augmentation, and $T$ is the number of steps of an attackNote that we are reusing the same noise samples during every step of our attack as well as during augmentation.", "Intuitively, this helps to stabilize the attack process.." 
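The attack step used inside the training loop above can be sketched end-to-end on a toy one-dimensional soft classifier using the plug-in gradient estimator. This is an illustrative simplification under our own assumptions (a logistic base classifier with an analytic gradient, an interval in place of the $\ell _2$ ball); the paper's implementation operates on image classifiers via backpropagation:

```python
import math
import random

def soft_classifier(x, w=3.0):
    """Toy soft binary classifier on R: returns [P(class 0), P(class 1)]."""
    p1 = 1.0 / (1.0 + math.exp(-w * x))
    return [1.0 - p1, p1]

def grad_class_prob(x, y, w=3.0):
    """Derivative d F(x)_y / dx of the toy classifier."""
    p1 = 1.0 / (1.0 + math.exp(-w * x))
    g = w * p1 * (1.0 - p1)
    return g if y == 1 else -g

def smoothadv_pgd(x, y, sigma, eps, steps=20, lr=0.25, m=32, seed=0):
    """Maximize -log(mean_i F(x' + delta_i)_y) over |x' - x| <= eps.

    The same noise samples are reused across steps, as in the pseudocode above."""
    rng = random.Random(seed)
    deltas = [rng.gauss(0.0, sigma) for _ in range(m)]
    x_adv = x
    for _ in range(steps):
        mean_p = sum(soft_classifier(x_adv + d)[y] for d in deltas) / m
        mean_g = sum(grad_class_prob(x_adv + d, y) for d in deltas) / m
        x_adv += lr * (-mean_g / max(mean_p, 1e-12))  # ascend the plug-in loss
        x_adv = x + max(-eps, min(eps, x_adv - x))    # project onto the eps-ball
    return x_adv

# Starting from x = 0.5 (true class 1), the attack pushes x toward the side
# of the eps-ball where the smoothed probability of class 1 is smallest.
```

For adversarial training, each resulting `x_adv` would then be augmented with the same noise samples and used for the gradient step on the model weights.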
], [ "Experiments", "We primarily compare with [6] as it was shown to outperform all other scalable provable $\\ell _2$ defenses by a wide margin.", "As our experiments will demonstrate, our method consistently and significantly outperforms [6] even further, establishing the state-of-the-art for provable $\\ell _2$ -defenses.", "We run experiments on ImageNet [8] and CIFAR-10 [19].", "We use the same base classifiers $f$ as [6]: a ResNet-50 [16] on ImageNet, and ResNet-110 on CIFAR-10.", "Other than the choice of attack ($\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ or $\\textsc {SmoothAdv}_{\\mathrm {DDN}}$ ) for adversarial training, our experiments are distinguished based on five main hyperparameters: $\\small \\begin{aligned}\\epsilon &= \\text{maximum allowed $\\ell _2$ perturbation of the input}\\\\[-4pt]T &= \\text{number of steps of the attack}\\\\[-4pt]\\sigma &= \\text{std.", "of Gaussian noise data augmentation during training and certification}\\\\[-4pt]m_{\\mathrm {train}} &= \\text{number of noise samples used to estimate (\\ref {eq:plug-in}) during training}\\\\[-4pt]m_{\\mathrm {test}} &= \\text{number of noise samples used to estimate (\\ref {eq:plug-in}) during evaluation}\\end{aligned}\\qquad \\mathrm {(\\Diamond )}$ Given a smoothed classifier $g$ , we use the same prediction and certification algorithms, Predict and Certify, as [6].", "Both algorithms sample base classifier predictions under Gaussian noise.", "Predict outputs the majority vote if the vote count passes a binomial hypothesis test, and abstains otherwise.", "Certify certifies the majority vote is robust if the fraction of such votes is higher by a calculated margin than the fraction of the next most popular votes, and abstains otherwise.", "For details of these algorithms, we refer the reader to [6].", "The certified accuracy at radius $r$ is defined as the fraction of the test set which $g$ classifies correctly (without abstaining) and certifies robust at an $\\ell _2$ radius $r$ .", 
"Unless otherwise specified, we use the same $\\sigma $ for certification as the one used for training the base classifier $f$ .", "Note that $g$ is a randomized smoothing classifier, so this reported accuracy is approximate, but can get arbitrarily close to the true certified accuracy as the number of samples of $g$ increases (see [6] for more details).", "Similarly, the empirical accuracy is defined as the fraction of the $\\ell _2$ SmoothAdv-ersarially attacked test set which $g$ classifies correctly (without abstaining).", "Both Predict and Certify have a parameter $\\alpha $ defining the failure rate of these algorithms.", "Throughout the paper, we set $\\alpha =0.001$ (similar to [6]), which means there is at most a 0.1% chance that Predict does not return the most probable class under the smoothed classifier $g$ , or that Certify falsely certifies a non-robust input.", "Figure: Comparing our SmoothAdv-ersarially trained CIFAR-10 classifiers vs .", "(Left) Upper envelopes of certified accuracies over all experiments.", "(Middle) Upper envelopes of certified accuracies per σ\\sigma .", "(Right) Certified accuracies of one representative model per σ\\sigma .Details of each model used to generate these plots and their certified accuracies are in Tables - in Appendix .Figure: Comparing our SmoothAdv-ersarially trained ImageNet classifiers vs .Subfigure captions are same as Fig.", ".Details of each model used to generate these plots and their certified accuracies are in Table  in Appendix ." 
], [ "To assess the effectiveness of our method, we learn a smoothed classifier $g$ that is adversarial trained using (REF ).", "Then we compute the certified accuraciesSimilar to [6], we certified the full CIFAR-10 test set and a subsampled ImageNet test set of 500 samples.", "over a range of $\\ell _2$ radii $r$ .", "Tables REF and REF report the certified accuracies using our method compared to [6].", "For all radii, we outperform the certified accuracies of [6] by a significant margin on both ImageNet and CIFAR-10.", "These results are elaborated below." ], [ "For CIFAR-10", "Fig.", "REF (left) plots the upper envelope of the certified accuracies that we get by choosing the best model for each radius over a grid of hyperparameters.", "This grid consists of $m_{train} \\in \\lbrace 1, 2, 4, 8\\rbrace $ , $\\sigma \\in \\lbrace 0.12, 0.25, 0.5,1.0\\rbrace $ , $\\epsilon \\in \\lbrace 0.25, 0.5, 1.0, 2.0\\rbrace $ (see REF for explanation), and one of the following attacks {$\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ , $\\textsc {SmoothAdv}_{\\mathrm {DDN}}$ } with $T \\in \\lbrace 2, 4 , 6, 8, 10\\rbrace $ steps.", "The certified accuracies of each model can be found in Tables REF -REF in Appendix .", "These results are compared to those of [6] by plotting their reported certified accuracies.", "Fig.", "REF (left) also plots the corresponding empirical accuracies using $\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ with $m_{test} = 128$ .", "Note that our certified accuracies are higher than the empirical accuracies of [6].", "Fig.", "REF (middle) plots our vs [6]'s best models for varying noise level $\\sigma $ .", "Fig.", "REF (right) plots a representative model for each $\\sigma $ from our adversarially trained models.", "Observe that we outperform [6] in all three plots." 
], [ "For ImageNet", "The results are summarized in Fig.", "REF , which is similar to Fig.", "REF for CIFAR-10, with the difference being the set of smoothed models we certify.", "This set includes smoothed models trained using $m_{\\mathrm {train}}=1$ , $\\sigma \\in \\lbrace 0.25, 0.5,1.0\\rbrace $ , $\\epsilon \\in \\lbrace 0.5, 1.0, 2.0, 4.0\\rbrace $ , and one of the following attacks {1-step $\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ , 2-step $\\textsc {SmoothAdv}_{\\mathrm {DDN}}$ }.", "Again, our models outperform those of [6] overall and per $\\sigma $ as well.", "The certified accuracies of each model can be found in Table REF in Appendix .", "We point out, as mentioned by [6], that $\\sigma $ controls a robustness/accuracy trade-off.", "When $\\sigma $ is low, small radii can be certified with high accuracy, but large radii cannot be certified at all.", "When $\\sigma $ is high, larger radii can be certified, but smaller radii are certified at a lower accuracy.", "This can be observed in the middle and the right plots of Fig.", "REF and REF ." ], [ "Effect on clean accuracy", "Training smoothed classifers using SmoothAdv as shown improves upon the certified accuracy of [6] for each $\\sigma $ , although this comes with the well-known effect of adversarial training in decreasing the standard accuracy, so we sometimes see small drops in the accuracy at $r=0$ , as observed in Fig.", "REF (right) and REF (right).", "Table: Certified ℓ ∞ \\ell _{\\infty } robustness at a radius of 2 255\\frac{2}{255} on CIFAR-10.", "Note that our models and 's give accuracies with high probability (w.h.p)." 
], [ "$\\ell _{2}$ to {{formula:dd1c5310-b736-4364-b423-af6eb18b8b4a}} certified defense", "Since the $\\ell _2$ ball of radius $\\sqrt{d}$ contains the $\\ell _\\infty $ unit ball in $\\mathbb {R}^d$ , a model robust against $\\ell _2$ perturbation of radius $r$ is also robust against $\\ell _\\infty $ perturbation of norm $r/\\sqrt{d}$ .", "Via this naive conversion, we find our $\\ell _2$ -robust models enjoy non-trivial $\\ell _{\\infty }$ certified robustness.", "In Table REF , we report the bestWe report the model with the highest certified $\\ell _2$ accuracy on CIFAR-10 at a radius of 0.435, amongst all our models trained in this paper.", "$\\ell _{\\infty }$ certified accuracy that we get on CIFAR-10 at a radius of 2/255 (implied by the $\\ell _{2}$ certified accuracy at a radius of $0.435 \\approx 2\\sqrt{3\\times 32^2} / 255$ ).", "We exceed previous state-of-the-art in certified $\\ell _{\\infty }$ defenses by at least $3.9\\%$ .", "We obtain similar results for ImageNet certified $\\ell _{\\infty }$ defenses at a radius of $1/255$ where we exceed the previous state-of-the-art by $8.2\\%$ ; details are in appendix ." ], [ "Additional experiments and observations", "We compare the effectiveness of smoothed classifiers when they are trained SmoothAdv-versarially vs. 
when their base classifier is trained via standard adversarial training (we will refer to the latter as vanilla adversarial training).", "As expected, because the training objective of SmoothAdv models aligns with the actual certification objective, those models achieve noticeably more certified robustness over all radii compared to smoothed classifiers resulting from vanilla adversarial training.", "We defer the results and details to Appendix REF .", "Furthermore, SmoothAdv requires the evaluation of (REF ) as discussed in Section .", "We analyze in Appendix REF how the number of Gaussian noise samples $m_{\mathrm {train}}$ , used in (REF ) to find adversarial examples, affects the robustness of the resulting smoothed models.", "As expected, we observe that models trained with higher $m_{\mathrm {train}}$ tend to have higher certified accuracies.", "Finally, we analyze the effect of the maximum allowed $\ell _2$ perturbation $\epsilon $ used in SmoothAdv on the robustness of smoothed models in Appendix REF .", "We observe that as $\epsilon $ increases, the certified accuracies for small $\ell _2$ radii decrease, but those for large $\ell _2$ radii increase, which is expected." ], [ "More Data for Better Provable Robustness", "We explore using more data to improve the robustness of smoothed classifiers.", "Specifically, we pursue two ideas: 1) pre-training similar to [17], and 2) semi-supervised learning as in [5]." ], [ "Pre-training", "[17] recently showed that using pre-training can improve the adversarial robustness of classifiers, and achieved state-of-the-art results for empirical $\ell _\infty $ defenses on CIFAR-10 and CIFAR-100.", "We employ this within our framework; we pretrain smoothed classifiers on ImageNet, then fine-tune them on CIFAR-10.", "Details can be found in Appendix REF ."
], [ "Semi-supervised learning", "[5] recently showed that using unlabelled data can improve the adversarial robustness as well.", "They employ a simple, yet effective, semi-supervised learning technique called self-training to improve the robustness of CIFAR-10 classifiers.", "We employ this idea in our framework and we train our CIFAR-10 smoothed classifiers via self-training using the unlabelled dataset used in [5].", "Details can be found in Appendix REF .", "We further experiment with combining semi-supervised learning and pre-training, and the details are in Appendix REF .", "We observe consistent improvement in the certified robustness of our smoothed models when we employ pre-training or semi-supervision.", "The results are summarized in Table REF ." ], [ "Attacking trained models with ", "In this section, we assess the performance of our attack, particularly $\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ , for finding adversarial examples for the CIFAR-10 randomized smoothing models of [6].", "$\\textsc {SmoothAdv}_{\\mathrm {PGD}}$ requires the evaluation of (REF ) as discussed in Section .", "Here, we analyze how sensitive our attack is to the number of samples $m_{\\mathrm {test}}$ used in (REF ) for estimating the gradient of the adversarial objective.", "Fig.", "REF shows the empirical accuracies for various values of $m_{\\mathrm {test}}$ .", "Lower accuracies corresponds to stronger attack.", "SmoothAdv with $m_{\\mathrm {test}}=1$ sample performs worse than the vanilla PGD attack on the base classifier, but as $m_{\\mathrm {test}}$ increases, our attack becomes stronger, decreasing the gap between certified and empirical accuracies.", "We did not observe any noticeable improvement beyond $m_{\\mathrm {test}}=128$ .", "More details are in Appendix REF .", "While as discussed here, the success rate of the attack is affected by the number of Gaussian noise samples $m_{\\mathrm {test}}$ used by the attacker, it is also affected by the number of Gaussian noise 
samples $n$ in Predict used by the classifier.", "Indeed, as $n$ increases, abstention due to low confidence becomes more rare, increasing the prediction quality of the smoothed classifier.", "See a detailed analysis in Appendix REF ." ], [ "Related Work", " Recently, many approaches (defenses) have been proposed to build adversarially robust classifiers, and these approaches can be broadly divided into empirical defenses and certified defenses.", "Empirical defenses are empirically robust to existing adversarial attacks, and the best empirical defense so far is adversarial training [20], [25].", "In this kind of defense, a neural network is trained to minimize the worst-case loss over a neighborhood around the input.", "Although such defenses seem powerful, nothing guarantees that a more powerful, not yet known, attack would not break them; the most that can be said is that known attacks are unable to find adversarial examples around the data points.", "In fact, most empirical defenses proposed in the literature were later “broken” by stronger adversaries [4], [2], [35], [1].", "To stop this arms race between defenders and attackers, a number of works have focused on building certified defenses, which enjoy formal robustness guarantees.", "Certified defenses are provably robust to a specific class of adversarial perturbations, and can guarantee that for any input $x$ , the classifier's prediction is constant within a neighborhood of $x$ .", "These are typically based on certification methods which are either exact (a.k.a. “complete”) or conservative (a.k.a. “sound but incomplete”).", "Exact methods, usually based on Satisfiability Modulo Theories solvers [18], [11] or mixed integer linear programming [34], [24], [12], are guaranteed to find an adversarial example around a datapoint if it exists.", "Unfortunately, they are computationally inefficient and difficult to scale up to large neural networks.", "Conservative methods are also guaranteed to detect an
adversarial example if one exists, but they might mistakenly flag a safe data point as vulnerable to adversarial examples.", "On the bright side, these methods are more scalable and efficient, which makes some of them useful for building certified defenses [39], [36], [37], [27], [28], [40], [10], [9], [7], [30], [13], [26], [31], [15], [38], [41].", "However, none of them have yet been shown to scale to practical networks that are large and expressive enough to perform well on ImageNet, for example.", "To scale up to practical networks, randomized smoothing has been proposed as a probabilistically certified defense." ], [ "Randomized smoothing", "A randomized smoothing classifier is not itself a neural network, but uses a neural network as its base for classification.", "Randomized smoothing was proposed by several works [23], [3] as a heuristic defense without proving any guarantees.", "[21] first proved robustness guarantees for randomized smoothing classifiers, utilizing inequalities from the differential privacy literature.", "Subsequently, [22] gave a stronger robustness guarantee using tools from information theory.", "Recently, [6] provided a tight robustness guarantee for randomized smoothing and consequently achieved the state of the art in $\ell _2$ -norm certified defense." ], [ "Conclusions", " In this paper, we designed an adapted attack for smoothed classifiers, and we showed how this attack can be used in an adversarial training setting to substantially improve the provable robustness of smoothed classifiers.", "We demonstrated through extensive experimentation that our adversarially trained smoothed classifiers consistently outperform all existing provably $\ell _2$ -robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state of the art for provable $\ell _2$ -defenses."
], [ "Acknowledgements", "We would like to thank Zico Kolter, Jeremy Cohen, Elan Rosenfeld, Aleksander Madry, Andrew Ilyas, Dimitris Tsipras, Shibani Santurkar, and Jacob Steinhardt for comments and discussions." ], [ "Alternative proof of the robustness guarantee of {{cite:ebab0f367122baa16b5d32a126da6a460890180a}} via explicit Lipschitz constants of smoothed classifier", "In this appendix, we present an alternate derivation of (REF ).", "Fix $f : \\mathbb {R}^n \\rightarrow [0,1]$ and define $\\hat{f}$ by: $\\hat{f}(x) = \\left( f * \\mathcal {N}(0, I) \\right) (x) = \\frac{1}{(2\\pi )^{n/2}} \\int _{\\mathbb {R}^n} f(t) \\exp \\left(- \\frac{1}{2} \\Vert x-t\\Vert ^2 \\right) dt \\,.$ The smoothed function $\\hat{f}$ is known as the Weierstrass transform of $f$ , and a classical property of the Weierstrass transform is its induced smoothness, as demonstrated by the following.", "Lemma 1 The function $\\hat{f}$ is $\\sqrt{\\frac{2}{\\pi }}$ -Lipschitz.", "It suffices to prove that for any unit direction $u$ one has $u \\cdot \\nabla \\hat{f}(x) \\le \\sqrt{\\frac{2}{\\pi }}$ .", "Note that: $ \\nabla \\hat{f}(x) = \\frac{1}{(2\\pi )^{n/2}} \\int _{\\mathbb {R}^n} f(t) (x-t) \\exp \\left(- \\frac{1}{2} \\Vert x-t\\Vert ^2 \\right) dt \\,,$ and thus (using $|f(t)| \\le 1$ , and classical integration of the Gaussian density) $u \\cdot \\nabla \\hat{f}(x) & \\le & \\frac{1}{(2\\pi )^{n/2}} \\int _{\\mathbb {R}^n} |u \\cdot (x-t)| \\exp \\left(- \\frac{1}{2} \\Vert x-t\\Vert ^2 \\right) dt \\\\& = & \\frac{1}{\\sqrt{2 \\pi }} \\int _{-\\infty }^{+ \\infty } |s| \\exp \\left(-\\frac{1}{2} s^2 \\right) ds = \\sqrt{\\frac{2}{\\pi }} \\,.$ However, $\\hat{f}$ in fact satisfies an even stronger nonlinear smoothness property as shown in the following lemma.", "Lemma 2 Let $\\Phi (a) = \\frac{1}{\\sqrt{2 \\pi }} \\int _{-\\infty }^a \\exp \\left( - \\frac{1}{2} s^2 \\right) ds$ .", "For any function $f : \\mathbb {R}^n \\rightarrow [0,1]$ , the map $x \\mapsto \\Phi 
^{-1}(\hat{f}(x))$ is 1-Lipschitz.", "Note that: $\nabla \Phi ^{-1}(\hat{f}(x)) = \frac{\nabla \hat{f}(x)}{\Phi ^{\prime }(\Phi ^{-1}(\hat{f}(x)))} \,,$ and thus we need to prove that for any unit direction $u$ , denoting $p = \hat{f}(x)$ , $u \cdot \nabla \hat{f}(x) \le \frac{1}{\sqrt{2\pi }} \exp \left( -\frac{1}{2} (\Phi ^{-1}(p))^2 \right) \,.$ Note that the left-hand side can be written as follows (recall (REF )) $\operatornamewithlimits{\mathbb {E}}_{X \sim \mathcal {N}(0,I_n)} [ f(x+X) X \cdot u ] \,.$ We now claim that the supremum of the above quantity over all functions $f : \mathbb {R}^n \rightarrow [0,1]$ , subject to the constraint that $\mathbb {E}[ f(x+X) ] = p$ , is equal to: $ \mathbb {E} [ (X \cdot u) 1\lbrace X \cdot u \ge - \Phi ^{-1}(p) \rbrace ] = \frac{1}{\sqrt{2\pi }} \exp \left( -\frac{1}{2} (\Phi ^{-1}(p))^2 \right) \,,$ which would conclude the proof.", "To see why the latter claim is true, first notice that $h : x \mapsto 1\lbrace x \cdot u \ge - \Phi ^{-1}(p)\rbrace $ achieves equality.", "Let us assume by contradiction that the maximizer is obtained at some function $f: \mathbb {R}^n \rightarrow [0, 1]$ different from $h$ .", "Consider the set $\Omega ^+$ where $h(x) > f(x)$ and $\Omega ^-$ the set where $h(x) < f(x)$ , and note that since both functions integrate to $p$ , it must be that $\int _{\Omega ^+} (h-f) d\mu = \int _{\Omega ^-} (f-h) d \mu $ (where $\mu $ is the Gaussian measure).", "Now simply consider the new function $\tilde{f} = f + (h-f) 1\lbrace \Omega ^+\rbrace - (f-h) 1\lbrace \Omega ^-\rbrace $ .", "Note that $\tilde{f}$ takes value in $[0,1]$ and integrates to $p$ .", "Moreover, denoting $g(x) = x \cdot u$ , one has $\int f g d\mu < \int \tilde{f} g d\mu $ .", "Indeed, by definition of $h$ , one has for any $x \in \Omega ^+$ and $y \in \Omega ^-$ that $g(x) > g(y)$ .", "This concludes the proof.", "It turns out that the smoothness 
property of Lemma 2 naturally leads to the robustness guarantee (REF ) of [6].", "To see why, let $\hat{f}_i: \mathbb {R}^n \rightarrow [0, 1]$ be the output of the smoothed classifier mapping a point $x\in \mathbb {R}^n$ to the probability of it belonging to class $c_i$ .", "Assume that the smoothed classifier assigns to $x$ the class $c_A$ with probability $p_A= \hat{f}_A(x)$ .", "Denote by $c_B$ any other class such that $c_B \ne c_A$ and $p_B = \hat{f}_B(x) \le p_A$ .", "By Lemma 2, we know that under any perturbation $\delta \in \mathbb {R}^n$ of $x$ , $\Phi ^{-1}\left(\hat{f}_A(x)\right) - \Phi ^{-1}\left(\hat{f}_A(x + \delta )\right) \le \Vert \delta \Vert _2.$ For an adversarial $\delta $ , $\hat{f}_A(x + \delta ) \le \hat{f}_B(x + \delta )$ for some class $c_B$ , leading to $\Phi ^{-1}\left(\hat{f}_A(x)\right) - \Phi ^{-1}\left(\hat{f}_B(x + \delta )\right) \le \Vert \delta \Vert _2 .$ By Lemma 2 applied to $\hat{f}_B$ , and noting that $\hat{f}_B(x+\delta ) \ge \hat{f}_B(x)$ , we know that $\Phi ^{-1}\left(\hat{f}_B(x + \delta )\right) - \Phi ^{-1}\left(\hat{f}_B(x)\right) \le \Vert \delta \Vert _2.$ Combining (REF ) and (REF ), it is straightforward to see that $\Vert \delta \Vert _2 \ge \frac{1}{2}\left( \Phi ^{-1}\left(p_A\right) - \Phi ^{-1}\left(p_B\right)\right) .$ The above inequality gives a lower bound on the minimum $\ell _2$ adversarial perturbation required to flip the classification from $c_A$ to $c_B$ .", "This lower bound is minimized when $p_B$ is maximized over the set of classes $C\setminus \lbrace c_A\rbrace $ .", "Therefore, $c_B$ is the runner-up class returned by the smoothed classifier at $x$ .", "Finally, the factor $\sigma $ that appears in (REF ) can be obtained by re-deriving the above with $\hat{f}(x) = \left( f * \mathcal {N}(0,\sigma ^2 I) \right) (x)$ and $\Phi (a)=\frac{1}{\sigma \sqrt{2 \pi }} \int _{-\infty }^a \exp \left( - \frac{1}{2} 
(\\frac{s}{\\sigma })^2 \\right) ds$ .", "Note that both lemmas presented in this appendix give the same robustness guarantee for small gaps ($p_A - p_B$ ), but the second lemma is much better for large gaps (in fact, in the limit of a gap going to 1, the second lemma gives an infinite radius while the first lemma only gives a radius of $\\frac{1}{2} \\sqrt{\\frac{\\pi }{2}}$ )." ], [ "Another perspective for deriving ", "In this section we provide an alternative motivation for the SmoothAdv objective presented in Section REF .", "We assume that we have a hard classifier $f: \\mathbb {R}^d \\rightarrow \\mathcal {Y}$ which takes the form $f(x) = \\operatornamewithlimits{arg\\,max}_{y \\in \\mathcal {Y}} L(x)_y$ , for some function $L: \\mathbb {R}^d \\rightarrow \\mathbb {R}^{\\mathcal {Y}}$ .", "If $f$ is a neural network classifier, this $L$ can be taken for instance to be the map from the input to the logit layer immediately preceding the softmax.", "If $f$ is of this form, then the smoothed soft classifier $g$ with parameter $\\sigma ^2$ associated to (the one-hot encoding of) $f$ can be written has $g(x)_{y} &= \\Pr _{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\operatornamewithlimits{arg\\,max}_{y^{\\prime } \\in \\mathcal {Y}} L(x + \\delta )_{y^{\\prime }} = y\\right] \\nonumber \\\\&= \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\nu (L(x + \\delta ))_{y} \\right] \\; , $ for all $y \\in \\mathcal {Y}$ , where $\\nu : \\mathbb {R}^d \\rightarrow \\mathbb {R}^{\\mathcal {Y}}$ is the function, which at input $z$ , has $y$ -th coordinate equal to 1 if and only if $y = \\operatornamewithlimits{arg\\,max}_{y^{\\prime } \\in \\mathcal {Y}} z_{y^{\\prime }}$ , and zero otherwise.", "The function $\\nu $ is somewhat hard to work with, therefore we will approximate it with a smooth function, namely, the softmax function.", "Recall that the softmax function with inverse temperature parameter $\\beta $ is 
the function $\\zeta _\\beta : \\mathbb {R}^{\\mathcal {Y}} \\rightarrow P(\\mathcal {Y})$ given by $\\zeta _\\beta (z)_y = e^{\\beta z_y} / \\sum _{y^{\\prime } \\in \\mathcal {Y}} e^{\\beta z_{y^{\\prime }}}$ .", "Observe that for any $z \\in \\mathbb {R}^{\\mathcal {Y}}$ , we have that $\\zeta _\\beta (z) \\rightarrow \\nu (z)$ as $\\beta \\rightarrow \\infty $ .", "Thus we can approximate (REF ) with $g(x)_{y} \\approx \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\zeta _\\beta (L(x + \\delta ))_{y} \\right] \\; .$ To find an adversarial perturbation of $g$ at data point $(x, y)$ , it is sufficient to find a perturbation $\\hat{x}$ so that $g(x)_{y}$ is minimized.", "Combining this with the approximation (REF ), we find that a heuristic to find an adversarial example for the smoothed classifier at $(x, y)$ is to solve the following optimization problem: $\\hat{x} = \\operatornamewithlimits{arg\\,min}_{\\Vert x^{\\prime } - x\\Vert _2 \\le \\epsilon } \\operatornamewithlimits{\\mathbb {E}}_{\\delta \\sim \\mathcal {N}(0, \\sigma ^2 I)} \\left[ \\zeta _\\beta (L(x^{\\prime } + \\delta ))_y \\right] \\; ,$ and as we let $\\beta \\rightarrow \\infty $ , this converges to finding an adversarial example for the true smoothed classifier.", "To conclude, we simply observe that for neural networks, $ \\zeta _\\beta (L(x + \\delta ))_y$ is exactly the soft classifier that is thresholded to form the hard classifier, if $\\beta $ is taken to be 1.", "Therefore the solution to (REF ) and (REF ) with $\\beta = 1$ are the same, since $\\log $ is a monotonic function.", "An interesting direction is to investigate whether varying $\\beta $ in (REF ) allows us to improve our adversarial attacks, and if they do, whether this gives us stronger adversarial training as well.", "Intuitively, as we take $\\beta \\rightarrow \\infty $ , the quality of the optimal solution should increase, but the optimization problem becomes increasingly 
ill-behaved, and so it is not clear if the actual solution we obtain to this problem via first-order methods becomes better or not." ], [ "Adversarially attacking the base model instead of the smoothed model", "We compare SmoothAdv-ersarial training (training the smoothed classifier $g$ ) to: (1) using vanilla adversarial training (PGD) to find adversarial examples of the base classifier $f$ and train on them.", "We refer to this as Vanilla PGD training.", "(2) using vanilla adversarial training (PGD) to find adversarial examples of the base classifier $f$ , add Gaussian noise to them, then train on the resulting inputs.", "We refer to this as Vanilla PGD+noise training.", "For our method and the above two methods, we use $T = 2$ steps of attack, $m_{train} = 1$ , and we train for $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0\rbrace $ , and for $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ .", "Fig.", "REF plots the best certified accuracies over all $\epsilon $ and $\sigma $ values, for each $\ell _2$ radius $r$ using our $\textsc {SmoothAdv}_{\mathrm {PGD}}$ trained classifiers vs. smoothed models trained via Vanilla PGD or Vanilla PGD+noise.", "Fig.", "REF also plots [6]'s results as a baseline.", "Observe that SmoothAdv-ersarially trained models are more robust overall.", "Figure: Certified defenses: ours vs. vanilla PGD vs. vanilla PGD + noise."
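The SmoothAdv-style projected-gradient step discussed above can be sketched on a toy differentiable base classifier. Everything below is an illustrative stand-in (a 2-D logistic base model instead of a ResNet, analytic gradients instead of backprop): the attack ascends $-\log $ of the Monte Carlo estimate of the smoothed class probability, using the constant step size $\gamma = 2\epsilon /T$ and an $\ell _2$ projection.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def smoothed_p(x, w, sigma, noise):
    """Plug-in Monte Carlo estimate of g(x)_y = E_delta[sigmoid(w.(x+delta))]."""
    return sum(
        sigmoid(sum(wi * (xi + di) for wi, xi, di in zip(w, x, d)))
        for d in noise
    ) / len(noise)

def smoothadv_pgd(x, w, sigma, eps, steps, m, seed=0):
    rng = random.Random(seed)
    # Reuse the same noise draws in every attack step (as the training loop does).
    noise = [[rng.gauss(0.0, sigma) for _ in x] for _ in range(m)]
    gamma = 2.0 * eps / steps  # constant step size gamma = 2*eps/T
    x0, xa = list(x), list(x)
    for _ in range(steps):
        # Gradient of -log((1/m) sum_j sigmoid(w.(x+delta_j))) for this
        # logistic toy model: -(sum_j p_j(1-p_j) / sum_j p_j) * w.
        ps = [
            sigmoid(sum(wi * (xi + di) for wi, xi, di in zip(w, xa, d)))
            for d in noise
        ]
        coef = -sum(p * (1.0 - p) for p in ps) / sum(ps)
        g = [coef * wi for wi in w]
        gn = math.sqrt(sum(gi * gi for gi in g))
        if gn == 0.0:
            break
        xa = [xi + gamma * gi / gn for xi, gi in zip(xa, g)]  # ascend the loss
        # Project back onto the l2 ball of radius eps around the clean input.
        diff = [a - b for a, b in zip(xa, x0)]
        dn = math.sqrt(sum(d * d for d in diff))
        if dn > eps:
            xa = [b + eps * d / dn for b, d in zip(x0, diff)]
    return xa
```

Running the attack on a point confidently classified as the positive class drives down the smoothed probability of that class while keeping the perturbation inside the $\epsilon $ -ball.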
], [ "Effect of the number of noise samples $m_{train}$ in (", "As presented in Section REF , more noise samples $\delta _i$ lead to a stronger SmoothAdv-ersarial attack.", "Here, we demonstrate that if we train with such improved attacks, we get higher certified accuracies of the smoothed classifier.", "Fig.", "REF plots the best certified accuracies over models trained using $\textsc {SmoothAdv}_{\mathrm {PGD}}$ or $\textsc {SmoothAdv}_{\mathrm {DDN}}$ with $T \in \lbrace 2,4,6,8,10\rbrace $ , $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ , $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0\rbrace $ , and across various numbers of noise samples $m_{train}$ for the attack.", "Observe that models trained with higher $m_{train}$ tend to have higher certified accuracies.", "Figure: Varying the number of noise samples $m_{train}$ ." ], [ "Effect of $\epsilon $ during training on the certified accuracy of smoothed classifiers", "Here, we analyze the effect of the maximum allowed $\ell _2$ perturbation of SmoothAdv during adversarial training on the robustness of the obtained smoothed classifier.", "Fig.", "REF plots the best certified accuracies for $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0\rbrace $ over models trained using $\textsc {SmoothAdv}_{\mathrm {PGD}}$ with $T \in \lbrace 2, 4, 6, 8, 10\rbrace $ , $m_{train} \in \lbrace 1,2,4,8\rbrace $ , and $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ .", "Observe that as $\epsilon $ increases, the certified accuracies for small $\ell _2$ radii decrease, but those for large $\ell _2$ radii increase, which is expected.", "Figure: Varying $\epsilon $ .", "Observe that as $\epsilon $ increases, the certified accuracies for small $\ell _2$ radii decrease, but those for large $\ell _2$ radii increase, which is expected."
], [ "Effect of the number of samples $m_{test}$ in (", "$\textsc {SmoothAdv}_{\mathrm {PGD}}$ requires the evaluation of (REF ) as discussed in Section .", "Here, we analyze how sensitive our attack is to the number of samples $m_{test}$ used in (REF ).", "Fig.", "REF shows the empirical accuracies for various values of $m_{test}$ .", "Lower accuracies correspond to stronger attacks.", "For $m_{test}=1$ , the vanilla PGD attack (attacking the base classifier instead of the smoothed classifier) performs better than SmoothAdv, but as $m_{test}$ increases, our attack becomes stronger, decreasing the gap between certified and empirical accuracies.", "We did not observe any noticeable improvement beyond $m_{test}=128$ .", "Figure: (A larger version of Fig. )", "Certified and empirical robust accuracy of 's models on CIFAR-10.", "For each $\ell _2$ radius $r$ , the certified/empirical accuracy is the maximum over randomized smoothing models trained using $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ .", "The empirical accuracies are found using 20 steps of $\textsc {SmoothAdv}_{\mathrm {PGD}}$ .", "The closer an empirical curve is to the certified curve, the stronger the corresponding attack is (the lower the better)." ], [ "Effect of the number of Monte Carlo samples $n$ in ", "Fig.", "REF plots the empirical accuracies of $g$ using a $\textsc {SmoothAdv}_{\mathrm {PGD}}$ attack (with $m_{test}=128$ ) across different numbers of Monte Carlo samples $n$ that are used by Predict.", "Observe that the empirical accuracies increase as $n$ increases, since the prediction quality of the smoothed classifier improves, i.e., fewer predictions are abstained.", "Figure: Empirical accuracies.", "Varying the number of samples $n$ .", "The higher the better."
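Predict's abstention rule can be sketched with stdlib tools. The reference implementation of [6] uses a two-sided binomial test from SciPy; the helper below is an illustrative equivalent that decides from per-class vote counts collected over the $n$ Monte Carlo draws:

```python
import math

def predict(counts, alpha=0.001):
    """Sketch of Predict: given per-class vote counts from n noisy draws of
    the base classifier, return the top class if a two-sided binomial test
    rejects the hypothesis that the top two classes are equally likely;
    otherwise abstain (return None)."""
    ranked = sorted(range(len(counts)), key=lambda c: counts[c], reverse=True)
    n_a, n_b = counts[ranked[0]], counts[ranked[1]]
    n = n_a + n_b
    # Two-sided binomial test of n_a successes out of n with p = 1/2.
    tail = sum(math.comb(n, k) for k in range(n_a, n + 1)) * 0.5 ** n
    p_value = min(1.0, 2.0 * tail)
    return ranked[0] if p_value <= alpha else None
```

A lopsided count such as 990 vs. 10 yields a prediction, while a near-tie such as 510 vs. 490 abstains; larger $n$ makes near-ties rarer, which matches the behavior reported above.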
], [ "Performance of the gradient-free estimator (", "Despite the appealing features of the gradient-free estimator (REF ) presented in Section REF as an alternative to (REF ), in practice we find that this attack is quite weak.", "This is shown in Fig.", "REF for various values of $m_{test}$ .", "We speculate that this is because the variance of the gradient estimator is too high.", "We believe that investigating this attack in practice is an interesting direction for future work.", "Figure: The empirical accuracies found by the attack () using the plug-in estimator () vs. the gradient-free estimator ().", "The closer an empirical curve is to the certified curve, the stronger the attack." ], [ "Certification Abstention Rate", "In this section, we compare the certification abstention rates of our smoothed models against those of [6]'s models.", "Table REF reports the abstention rates for the best models at various $\ell _2$ -radii.", "These are the models corresponding to Table REF .", "Our models have a substantially lower abstention rate across all $\ell _2$ radii.", "Note that [6] reported the abstention rates for prediction (but not certification), which tend to be lower than the certification abstention rates.", "Table: The certification abstention rate of our best CIFAR-10 classifiers at various $\ell _2$ radii." ], [ "Experimental Details", "Here we include details of all the experiments conducted in this paper."
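A gradient-free estimator of this kind can be built from the Gaussian-smoothing identity $\nabla _x \operatornamewithlimits{\mathbb {E}}_{\delta }[h(x+\delta )] = \frac{1}{\sigma ^2}\operatornamewithlimits{\mathbb {E}}_{\delta }[\delta \, h(x+\delta )]$, which needs only function evaluations. Whether this is exactly the form of (REF ) is our assumption, but the sketch below illustrates why its variance can be high: each sample contributes a heavy-tailed product $\delta \, h(x+\delta )$.

```python
import random

def zeroth_order_grad(h, x, sigma, m, seed=0):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed function
    E_delta[h(x + delta)], delta ~ N(0, sigma^2 I), using only evaluations
    of h:  grad ~ (1/(sigma^2 m)) * sum_j delta_j * h(x + delta_j)."""
    rng = random.Random(seed)
    grad = [0.0] * len(x)
    for _ in range(m):
        d = [rng.gauss(0.0, sigma) for _ in x]
        hx = h([xi + di for xi, di in zip(x, d)])
        for i, di in enumerate(d):
            grad[i] += di * hx / (sigma * sigma * m)
    return grad
```

For a linear test function the smoothed gradient is known exactly, so the estimator's error (and how slowly it shrinks with $m$) is easy to observe.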
], [ "Attacks used in the paper", "We use two of the strongest attacks in the literature, the projected gradient descent (PGD) [25] and decoupled direction and norm (DDN) [29] attacks.", "We adapt these attacks such that their gradient steps are given by (REF ), and we call the resulting attacks $\textsc {SmoothAdv}_{\mathrm {PGD}}$ and $\textsc {SmoothAdv}_{\mathrm {DDN}}$ , respectively.", "For PGD ($\textsc {SmoothAdv}_{\mathrm {PGD}}$ ), we use a constant step size $\gamma = 2\frac{\epsilon }{T}$ where $T$ is the number of attack steps, and $\epsilon $ is the maximum allowed $\ell _2$ perturbation of the input.", "For DDN ($\textsc {SmoothAdv}_{\mathrm {DDN}}$ ), the attack objective is in fact different from that of PGD (i.e., different from (REF )).", "DDN tries to find the “closest” adversarial example to the input instead of finding the “best” adversarial example (in terms of maximizing the loss in a given neighborhood of the input).", "We stick to the hyperparameters used in the original paper [29].", "We use $\epsilon _0 = 1$ , $\gamma = 0.05$ , and an initial step size $\alpha = 1$ that is reduced with cosine annealing to 0.01 in the last iteration (see [29] for the definition of these parameters).", "We experimented with very few iterations ($\lbrace 2, 4, 6, 8, 10\rbrace $ ) as compared to the original paper, but we still got good results.", "We emphasize that we are not using PGD and DDN to attack the base classifier $f$ of a smoothed model; instead, we are using them to adversarially train smoothed classifiers (see Pseudocode REF )."
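The cosine-annealed DDN step size just described can be sketched as follows; the precise schedule is defined in [29], so this parameterization (from $\alpha _0 = 1$ at the first iteration down to $0.01$ at the last) is our reading of it:

```python
import math

def ddn_step_size(t, T, alpha0=1.0, alpha_min=0.01):
    """One common cosine-annealing form: step size alpha starts at alpha0
    (t = 0) and is reduced to alpha_min at the last iteration (t = T-1)."""
    return alpha_min + 0.5 * (alpha0 - alpha_min) * (1.0 + math.cos(math.pi * t / (T - 1)))
```

With $T=10$ this decays monotonically from 1.0 to 0.01, which is how the few-iteration DDN runs above still make aggressive early progress.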
], [ "Training details", "In order to report certified radii in the original coordinates, we first add Gaussian noise and/or run adversarial attacks, and then standardize the data (in contrast to importing a standardized dataset).", "Specifically, in our PyTorch implementation, the first layer of the base classifier is a normalization layer that performs a channel-wise standardization of its input.", "For both ImageNet and CIFAR-10, we trained the base classifier with random horizontal flips and random crops (in addition to the Gaussian data augmentation discussed in Section REF ).", "The main training algorithm is shown in Pseudocode REF .", "It has the following parameters: $B$ is the mini-batch size, $m$ is the number of noise samples used for gradient estimation in (REF ) as well as for Gaussian noise data augmentation, and $T$ is the number of steps of an attack.", "We point out a few remarks.", "First, an important parameter is the radius of the attack $\epsilon $ .", "During the first epoch, it is set to zero; then we linearly increase it over the first ten epochs, after which it stays constant.", "Second, we are reusing the same noise samples during every step of our attack as well as for augmentation.", "Intuitively, this helps to stabilize the attack process.", "Finally, the way training is described in Pseudocode REF is not efficient; it needs to be appropriately batched so that we compute adversarial examples for every input in a batch at the same time."
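The $\epsilon $ warm-up remark above admits a one-function sketch. The exact ramp is not spelled out beyond "zero in the first epoch, then linear over the first ten epochs", so the endpoints chosen here are one reasonable reading, not the paper's exact schedule:

```python
def attack_eps(epoch, eps_max, warmup_epochs=10):
    """Attack radius schedule: zero during the first epoch, then a linear
    ramp reaching eps_max at epoch `warmup_epochs`, constant afterwards."""
    if epoch <= 1:
        return 0.0
    return eps_max * min((epoch - 1) / (warmup_epochs - 1), 1.0)
```

The schedule lets early training see clean (well, merely noisy) examples before the full-strength attack is applied.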
], [ "Compute details and training time", "On CIFAR-10, we trained using SGD on one NVIDIA P100 GPU.", "We train for 150 epochs.", "We use a batch size of 256, and an initial learning rate of 0.1, which drops by a factor of 10 every 50 epochs.", "Training time varies from a few hours to a few days, depending on how many attack steps $T$ and noise samples $m$ are used in Pseudocode REF .", "On ImageNet, we trained with synchronous SGD on four NVIDIA V100 GPUs.", "We train for 90 epochs.", "We use a batch size of 400, and an initial learning rate of 0.1, which drops by a factor of 10 every 30 epochs.", "Training time varies from 2 to 6 days, depending on whether we are doing SmoothAdv-ersarial training or just Gaussian noise training (similar to [6])." ], [ "Models used", "The models used in this paper are similar to those used in [6]: a ResNet-50 [16] on ImageNet, and a ResNet-110 on CIFAR-10.", "These models can be found on the GitHub repo accompanying [6] https://github.com/locuslab/smoothing/blob/master/code/architectures.py." ], [ "Parameters of ", "For details of these algorithms, please see the Pseudocode in [6].", "For Certify, unless otherwise specified, we use $n=100,000$ , $n_0=100$ , $\alpha = 0.001$ .", "For Predict, unless otherwise specified, we use $n=100,000$ and $\alpha =0.001$ ." ], [ "Source code", "Our code and trained models are publicly available at http://github.com/Hadisalman/smoothing-adversarial.", "The repository also includes all our training/certification logs, which enables the replication of all the results of this paper by running a single piece of code.", "Check the repository for more details."
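The radius computation inside Certify, with the parameters $n$ and $\alpha $ quoted above, can be sketched in a few lines. Note one substitution: the reference implementation lower-bounds $p_A$ with a one-sided Clopper-Pearson interval, while this illustrative version uses a Hoeffding bound (slightly conservative but stdlib-only):

```python
import math
from statistics import NormalDist

def certify_radius(n_top, n, sigma, alpha=0.001):
    """Sketch of the radius step in Certify: lower-bound p_A from n_top
    top-class votes out of n noisy samples, then return
    R = sigma * Phi^{-1}(p_A_lower), abstaining (None) if p_A_lower <= 1/2.
    A Hoeffding confidence bound stands in for Clopper-Pearson here."""
    p_lower = n_top / n - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return None  # abstain
    return sigma * NormalDist().inv_cdf(p_lower)
```

With $n = 100{,}000$ and $\alpha = 0.001$, a 99% top-class vote under $\sigma = 0.5$ certifies a radius slightly above 1.0, while a 50.1% vote abstains.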
], [ "Pre-training", "In this appendix, we describe the details of how we employ pre-training within our framework to boost the certified robustness of our models.", "We pre-train smoothed classifiers on a 32x32 down-sampled version of ImageNet (ImageNet32), as done by [17].", "Then we fine-tune all the weights of these models on CIFAR-10 (with the 1000-dimensional logit layer of each model replaced by a randomly initialized 10-dimensional logit layer suitable for CIFAR-10)." ], [ "ImageNet32 training", "We train ResNet-110 architectures on ImageNet32 using SGD on one NVIDIA P100 GPU.", "We train for 150 epochs.", "We use a batch size of 256, and an initial learning rate of 0.1, which drops by a factor of 10 every 50 epochs.", "We use $\textsc {SmoothAdv}_{\mathrm {PGD}}$ with $T = 2$ steps and $m_{train}=1$ noise samples.", "We train a total of 16 models, each corresponding to a choice of $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ and $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0 \rbrace $ ." ], [ "Fine-tuning on CIFAR-10", "For each choice of $\sigma $ and $\epsilon $ , we fine-tune the corresponding ImageNet32 model on CIFAR-10; we replace the 1000-dimensional logit layer of each model with a randomly initialized 10-dimensional logit layer suitable for CIFAR-10, then we train for 30 epochs with a constant learning rate of 0.001 and a batch size of 256.", "We use $\textsc {SmoothAdv}_{\mathrm {PGD}}$ with $T \in \lbrace 2,4,6,8,10\rbrace $ and $m_{train} \in \lbrace 1,2,4,8\rbrace $ ."
], [ "Semi-supervised Learning", "In this appendix, we detail how we employ semi-supervised learning [5] within our framework to boost the certified robustness of our models.", "We train our CIFAR-10 smoothed classifiers via the self-training technique of [5] using their 500K unlabelled dataset.", "We equip this dataset with pseudo-labels generated by a standard neural network trained on CIFAR-10, as in [5]; see [5] for more details.", "(The 500K unlabelled dataset was not public at the time this paper was written.", "We obtained it, along with the pseudo-labels, from the authors of [5].", "We refer the reader to the authors of [5] to obtain this dataset if interested in replicating our self-training results.)", "Self-training a smoothed classifier works as follows: at every step we randomly sample either a labelled minibatch from CIFAR-10 or a pseudo-labelled minibatch from the 500K dataset.", "For a labelled minibatch, we follow Pseudocode REF as is.", "For a pseudo-labelled minibatch, we scale the CE loss by a factor of $\eta \in \lbrace 0.1, 0.5, 1.0\rbrace $ and we follow the rest of Pseudocode REF .", "We use $\textsc {SmoothAdv}_{\mathrm {PGD}}$ with $T \in \lbrace 2,4,6,8,10\rbrace $ , $m_{train} = 1 $ , $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ , and $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0 \rbrace $ ." ], [ "Semi-supervised Learning with Pre-training", "We also experiment with combining semi-supervised learning with pre-training in the hopes of obtaining further improvements.", "We start from the same ResNet-110 models pre-trained on ImageNet32 as in Appendix REF .", "Then we fine-tune these models using semi-supervision, as in Appendix REF , for 30 epochs with a learning rate of 0.001.", "We use $\textsc {SmoothAdv}_{\mathrm {PGD}}$ with $T \in \lbrace 2,4,6,8,10\rbrace $ , $m_{train} = 1 $ , $\sigma \in \lbrace 0.12, 0.25, 0.5, 1.0\rbrace $ , and $\epsilon \in \lbrace 0.25, 0.5, 1.0, 2.0 \rbrace $ ."
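The minibatch loss weighting used in the self-training loop above amounts to a one-line rule (the helper name is ours, for illustration only):

```python
def self_training_loss(ce_loss, pseudo_labelled, eta=0.5):
    """Self-training loss weighting: cross-entropy of pseudo-labelled
    minibatches is scaled by eta in {0.1, 0.5, 1.0}; labelled CIFAR-10
    minibatches are left unweighted."""
    return eta * ce_loss if pseudo_labelled else ce_loss
```

Down-weighting pseudo-labelled batches limits the influence of incorrect pseudo-labels on training.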
], [ "$\ell _{2}$ to $\ell _{\infty }$ Certified Defense on ImageNet", "We find our $\ell _2$ -robust ImageNet models enjoy non-trivial $\ell _{\infty }$ certified robustness.", "In Table REF , we report the best $\ell _{\infty }$ certified accuracy that we get at a radius of 1/255 (implied by the $\ell _{2}$ certified accuracy at a radius of $1.5 \approx \sqrt{3\times 224^2} / 255$ ).", "We exceed the previous state of the art in certified $\ell _{\infty }$ defenses by around $8.2\%$ .", "Table: Certified $\ell _{\infty }$ robustness at a radius of $\frac{1}{255}$ on ImageNet." ], [ "ImageNet and CIFAR-10 Detailed Results", "In this appendix, we include the certified accuracies of each model that we use in the paper.", "For each $\ell _2$ radius, we highlight the best accuracy across all models.", "Note that we outperform the models of [6] (first three rows of each table) over all $\ell _2$ radii by wide margins.", "Table: Approximate certified test accuracy on ImageNet.", "Each row is a setting of the hyperparameters $\sigma $ and $\epsilon $ , each column is an $\ell _2$ radius.", "The entry of the best $\sigma $ for each radius is bolded.", "For comparison, random guessing would attain 0.001 accuracy.Table: SmoothAdv-ersarial training, $T=2$ steps, $m_{train} = 1$ sample.Table: SmoothAdv-ersarial training, $T=4$ steps, $m_{train} = 1$ sample.Table: SmoothAdv-ersarial training, $T=6$ steps, $m_{train} = 1$ sample.Table: SmoothAdv-ersarial training, $T=8$ steps, $m_{train} = 1$ sample.Table: SmoothAdv-ersarial training, $T=10$ steps, $m_{train} = 1$ sample.Table: $\textsc {SmoothAdv}_{\mathrm {DDN}}$ training, $T=4$ steps, $m_{train} \in \lbrace 1,2,4,8\rbrace $ samples.Table: $\textsc {SmoothAdv}_{\mathrm {DDN}}$ training, $T=10$ steps, $m_{train} \in \lbrace 1,2,4,8\rbrace $ samples.Table: 
$\textsc {SmoothAdv}_{\mathrm {PGD}}$ training, $T=2$ steps, $m_{train} \in \lbrace 1,2,4,8\rbrace $ samples.Table: $\textsc {SmoothAdv}_{\mathrm {PGD}}$ training, $T=10$ steps, $m_{train} \in \lbrace 1,2,4,8\rbrace $ samples." ] ]
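The norm conversion behind the $\ell _2$-to-$\ell _\infty $ table is a one-line containment argument: an $\ell _\infty $ perturbation of radius $\epsilon $ in $d$ dimensions has $\ell _2$ norm at most $\epsilon \sqrt{d}$, so $\ell _2$ certified accuracy at radius $\sqrt{3\times 224^2}/255 \approx 1.5$ lower-bounds $\ell _\infty $ certified accuracy at radius $1/255$. A quick check of the arithmetic:

```python
import math

def linf_to_l2(eps_inf, shape=(3, 224, 224)):
    """Smallest l2 ball containing the l_inf ball of radius eps_inf over an
    input of the given shape: radius eps_inf * sqrt(d), d = prod(shape)."""
    return eps_inf * math.sqrt(math.prod(shape))
```

Evaluating `linf_to_l2(1/255)` for a 3x224x224 ImageNet input gives roughly 1.52, matching the 1.5 quoted in the text.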
1906.04584
[ [ "Assessing the effects of exposure to sulfuric acid aerosol on\n respiratory function in adults" ], [ "Abstract Sulfuric acid aerosol is suspected to be a major contributor to mortality and morbidity associated with air pollution.", "The objective of the study is to determine if exposure of human participants to anticipated levels of sulfuric acid aerosol ($\sim 100\mu g/m^3 $) in the near future would have an adverse effect on respiratory function.", "We used data from 28 adults exposed to sulfuric acid for 4 hours in a controlled exposure chamber over a 3-day period, with repeated measures of pulmonary function (FEV1) recorded at 2-hour intervals.", "Measurements were also recorded at 2 and 24 hours post exposure.", "We formulated a linear mixed-effects model for FEV1 with fixed effects (day of treatment, hour, day-hour interaction, and smoking status), a random intercept and an AR1 covariance structure to estimate the effect of aerosol exposure on FEV1.", "We further assessed whether smoking status modified the exposure effects and compared the analysis to the method used by Kerr et al., 1981.", "The findings of the study show that day 3 exposure is negatively associated with lung function (coefficient ($\beta$), -0.08; 95% CI, -0.16 to -0.01).", "A weak negative association is also observed with increasing hours of exposure ($\beta$, -0.01; 95% CI, -0.03 to 0.00).", "Among the smokers, we found a significant negative association with hours of exposure ($\beta$, -0.02; 95% CI, -0.03 to -0.00), day 3 exposure ($\beta$, -0.11; 95% CI, -0.14 to -0.02) and a borderline adverse effect for day 2 treatment ($\beta$, -0.06; 95% CI, -0.14 to 0.03), whilst no significant association was observed for nonsmokers.", "In conclusion, anticipated deposits of sulfuric acid aerosol in the near future would adversely affect respiratory function.", "The effect observed in smokers is significantly more adverse than in nonsmokers."
], [ "Introduction", "Sulfuric acid ($\texttt {H}_2\texttt {SO}_4$ ) is considered a possible cause of the increased mortality and morbidity resulting from various episodes of air pollution in the past.", "[1][2] The recent increases in $\texttt {H}_2\texttt {SO}_4$ emissions from automobile catalytic converters present enormous environmental challenges, with peak estimates projected to reach as high as $80\mu g/m^3$ in some industrialized countries.", "[3] The effect of sulfuric acid aerosol on pulmonary function in humans has been demonstrated in various studies in the past.", "[4] In controlled studies [5][6] where participants were exposed to high levels of $\texttt {H}_2\texttt {SO}_4$ ($\ge 200$ micrograms per cubic meter, $\mu g/m^3$ ), adverse effects on pulmonary function were associated with $\texttt {H}_2\texttt {SO}_4$ .", "However, studies of the association between lung function and low levels of $\texttt {H}_2\texttt {SO}_4$ have generally reported inconsistent findings.", "Kerr et al.", "(1981), in their analyses, reported no significant difference in pulmonary function in a randomized study where participants were exposed to 100 $\mu g/m^3$ of $\texttt {H}_2\texttt {SO}_4$ in an environmentally controlled chamber.", "The objectives of the current study include: (1) to determine if exposure of participants to anticipated levels of $\texttt {H}_2\texttt {SO}_4$ aerosol in the near future, over a realistic time frame, would have an adverse effect upon respiratory function; (2) to assess if the estimated effects of $\texttt {H}_2\texttt {SO}_4$ exposure are modified by smoking status; and (3) to compare the results obtained to the analyses conducted in Kerr et al., 1981."
], [ "Exposure and Outcome definition", "The study design, experimental set-up and methods used are described in Kerr et al. (1981).", "Additionally, information on other pulmonary function measurements not included in the current report is provided in [4].", "Subject selection The study was based on 28 healthy adults (aged 18-45 years) with no previous history of chronic respiratory or cardiovascular diseases.", "The participants comprised 19 males and 9 females, 14 smokers and 14 non-smokers, of mean age 24 and mean height 175 cm.", "The participants were required to refrain from smoking the morning prior to the study.", "Air Pollution Exposure The study was conducted in an environmentally controlled exposure chamber.", "Over a 3-day period and at the same time each day, the participants were exposed as follows: (1) on treatment day 1, the subjects breathed in only filtered clean air for 6 hours; (2) on treatment day 2, they breathed in $100 \mu g/m^3 $ of sulfuric acid for 4 hours; and (3) on treatment day 3, the subjects breathed in only filtered clean air for 6 hours.", "At the time of study, the participants were blinded to the type of exposure administered on any day.", "Outcome Measures Measurement of pulmonary function (FEV1) by spirometry was performed prior to exposure, at 2 hours during exposure, immediately following exposure (approximately 4 hours from the start), and 2 hours post exposure.", "The same measurements were repeated for each participant for three consecutive days.", "To simulate an environment similar to living in an urban setting, at 1 hour and 3 hours during exposure on each day, the participants were required to complete a bicycle ergometer exercise using a load of 100 W at 60 rpm for 15 minutes.", "Additional covariates Information was also collected on other relevant covariates that were taken into account during study design; they include age, height, sex, day of treatment, time of measurements and smoking status.", "All the variables included in the 
study had no missing measurements." ], [ "Statistical Analysis ", "Statistical analyses were performed using R Version 3.4.2 (R Foundation for Statistical Computing, Vienna, Austria).", "In this analysis, we formulated a linear mixed effect (LME) model to fit the repeated FEV1 measures.", "To properly specify the LME model, we used the four-stage approach recommended by Diggle and Verbeke (2002) and others [7][8].", "Figure: FEV1 repeated measures for (n=28) study participants for treatment days 1, 2 and 3.", "Time point 0 represents the measurement at baseline (prior to exposure), 1-4 denote FEV1 levels for hours 1, 2, 3 and 4, respectively, 5 represents the FEV1 level at 2 hours after treatment, and time point 6 is 24 hours after treatment.", "Figure: Mean FEV1 measures for treatment days 1, 2 and 3.", "In the exploratory analysis, we plot repeated FEV1 measures of participants in each of the three treatment groups.", "In Figure REF , we present individual FEV1 measurements for treatment day 1 (left), day 2 (middle) and day 3 (right).", "The day curves for measures of FEV1 appear to be linear with time, suggesting that time can be included in the model as a linear covariate.", "In Figure REF , we present an exploration of the mean FEV1 curves for each treatment day over the time course of treatment.", "The mean curves suggest linearity in mean FEV1 over the treatment duration.", "In the LME model, we proposed a mean model component involving linear fixed effects of the treatment groups (day), time (hour), treatment-time interaction (day-hour interaction), and smoking status.", "Table: Covariance and empirical correlation estimates for FEV1 repeated measures data.", "Covariance above the diagonal, variance on the diagonal and correlation below the diagonal.", "The second stage involves specifying a covariance structure to properly capture variation between individual participants and the covariance between FEV1 measurements at different times on the same participant.", "Residuals obtained from a linear 
fit of the mean component specified in the first stage are used to construct an empirical correlation matrix of the 6-hour time period.", "The estimates of the variance between participants in the same treatment group (day) are printed on the diagonal of the covariance matrix in Table REF .", "Figure: Scatter plots of FEV1 repeated measures at measurement time points 0 hours (baseline), 2 hours, 4 hours and 6 hours (2 hours post exposure).", "The patterns observed from the exploratory plots and examination of the correlation structure seem to suggest a model with random intercepts, with measurements taken closer together in time being more correlated.", "Additionally, a scatter plot of FEV1 measures at different times in Figure REF suggests homogeneous variability between individuals.", "Consequently, we propose an autoregressive order 1 (AR1) covariance structure and a random intercept term for the participants.", "We combine the mean model component and variance structure proposed in stages I-II to formulate a linear mixed model for FEV1 of the form $\\texttt {FEV1}_{ijk} = \\beta _0 + \\beta _1\\texttt {Smoker} + \\alpha _i\\texttt {Day} + \\gamma _k\\texttt {Hour} + \\mu _{ik}\\texttt {Day}\\ast \\texttt {Hour} + b_{ij} + \\epsilon _{ijk}$ where $\\beta _0$ represents a common constant for all measurements, $\\beta _1$ is the coefficient of smoking status, $\\alpha _i$ is the parameter for treatment day $i$ , $\\gamma _k$ is the parameter corresponding to hour $k$ and $\\mu _{ik}$ is the coefficient of the interaction between day $i$ and hour $k$ .", "We assume the random intercept $b_{ij}$ is normally distributed with mean zero and constant variance $\\sigma _b^2$ , and the measurement error $\\epsilon _{ijk}$ is normally distributed with mean zero and variance $\\sigma _\\epsilon ^2$ and independent of $b_{ij}$ .", "In the third and fourth stages, we fit the mean model and explore the possibility of incorporating polynomial day curves over treatment hours.", "To assess the 
presence of potential effect modification resulting from the smoking status of the participant, we separately analyze FEV1 measurements for smokers and nonsmokers, and assess the difference in exposure effect between smokers and nonsmokers.", "Finally, we examine model diagnostic plots to assess the LME model assumptions.", "The normality assumptions of the random intercept and measurement error are assessed from Q-Q plots of the random intercept in Figure REF .", "The plots of standardized residuals are also presented in Figure 6 to assess the assumption of linearity.", "Further, the appropriateness of the specified autocorrelation is also assessed via examination of ACF plots of the normalized residuals of the fitted model.", "In order to ensure that the specified correlation structure is indeed ideal, the fitted model is compared with a model incorporating a compound symmetric covariance structure (a potential candidate for the covariance structure) and the model with the best fit is selected based on the AIC criterion." 
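The covariance-structure choice described above can be illustrated with a small sketch (the helper names `ar1_corr`, `cs_corr` and `aic` are hypothetical and not part of the original analysis, which was carried out in R): it builds the AR1 and compound symmetric correlation matrices for the six repeated FEV1 time points and shows the AIC rule used to choose between the two fitted models.

```python
def ar1_corr(n_times, rho):
    # AR(1) correlation: corr(y_s, y_t) = rho**|s - t|, decaying with lag
    return [[rho ** abs(s - t) for t in range(n_times)] for s in range(n_times)]

def cs_corr(n_times, rho):
    # Compound symmetry: one common correlation rho for every pair of times
    return [[1.0 if s == t else rho for t in range(n_times)] for s in range(n_times)]

def aic(log_lik, n_params):
    # Akaike information criterion, AIC = 2k - 2*ln(L); the lower value wins
    return 2 * n_params - 2 * log_lik

R_ar1 = ar1_corr(6, 0.98)  # six repeated FEV1 measurements per day
R_cs = cs_corr(6, 0.98)

# Under AR(1) the correlation decays with lag, matching the empirical
# pattern (near 0.99 at short lags, 0.98 at lag 6); under CS it is flat.
assert R_ar1[0][5] < R_ar1[0][1]
assert R_cs[0][5] == R_cs[0][1]
```

The value 0.98 is taken from the empirical correlation range reported in Table REF for illustration only; the log-likelihoods entering `aic` would come from the two fitted LME models.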
], [ "Results", "Participant characteristics in the three treatment days are described in the methods section.", "Out of the 28 participants, 14 smokers and 14 nonsmokers, the mean FEV1 measurement on treatment day 1 varied from a maximum of 4.23 prior to filtered air exposure to 4.15 after 4 hours of exposure.", "A similarly varying mean FEV1 measure is observed on day 3, under the same filtered air treatment, with a maximum mean measure of 4.23 observed immediately post exposure and the lowest measure just before exposure.", "On day 2, when the participants were exposed to $100 \\mu g/m^3$ of sulfuric acid, we observe a low mean FEV1 measure of 4.14 after two and four hours of exposure; the highest mean FEV1 measure on exposure day 2 was 4.22.", "More specifically, we observe a decreasing trend in pulmonary lung function in the day 2 mean plot in Figure 2; the trend begins to reverse, with improving FEV1 measures post exposure.", "In Table 1, we present the covariance and correlation of mean FEV1 measures between the different measurement times.", "The between-patient variances within each day group at each measurement time are seen on the diagonal of the covariance matrix, ranging from 0.79-0.83, and the correlations below the diagonal range from 0.98-1.", "In general, we observe a decreasing empirical correlation estimate, from 0.99 between FEV1 at Hour 0 and Hour 2 to 0.98 between FEV1 at Hour 0 and Hour 6.", "Scatter plots of FEV1 repeated measures at each 2-hour time point versus FEV1 at all other measurement times are presented in Figure 3.", "Based on results obtained from the exploratory data analysis, we proposed the linear mixed model for FEV1 in equation (1).", "The model included smoking status, day of treatment, hour (time), day-hour interaction, and a random intercept with an AR1 covariance structure.", "Results obtained from a fit of the linear mixed model proposed in equation (1) are presented in Table REF .", "Table: Linear mixed effect model for FEV1", "Exposure to sulfuric 
acid in treatment day 2 is seemingly weakly associated with a decrease in FEV1 measures with an effect estimate (95% CI) of -0.03(-0.12, 0.04).", "Compared to the baseline day 1 exposure, we see a significant adverse association between exposure to filtered air on day 3 and pulmonary function measures with an effect estimate (95% CI) of -0.08(-0.16,-0.01).", "Further, with increasing time, we see a seemingly negative association between exposure to sulfuric acid and pulmonary function over the course of the study with a somewhat weaker significant effect estimate -0.01(-0.03,0.00).", "We see similar weak, but positive, associations between FEV1 and the effect of the interaction between sulfuric acid exposure on day 2 and hour, 0.01(-0.01,0.03), and the interaction between day 3 and hour, estimate 0.02(0.00,0.04).", "An assessment of potential effect modification by smoking status indicates significant evidence of modification of the association between FEV1 measures and treatment day 3, and the association of FEV1 with time (hour).", "Results obtained from separately analyzing data for smokers and nonsmokers are presented in Table REF .", "As expected, in the group of participants who smoke, we observe a significant adverse association between FEV1 measures and time of exposure with an estimated effect (95% CI) of -0.02(-0.03,-0.00).", "A similar negative association between FEV1 measures and exposure to filtered clean air on day 3 is observed (coefficient -0.11, 95% CI -0.14 to -0.02).", "Table: Linear mixed effect model estimates for FEV1 independently assessed for smokers and nonsmokers", "In Figure REF , we show a diagnostic plot of the assessment of the AR1 correlation structure.", "The plots of the normalized residuals indicate that our model has dealt with temporal autocorrelation at lags 1 and 2.", "However, we observe some unexpected values for larger lags, probably resulting from other anomalies in the data.", "The correlation pattern was further assessed by examining a variogram plot of 
the residuals.", "Additionally, residual plots of the model and random intercept, in Figure REF (right), do not give any indication that the normality assumptions are violated.", "Moreover, a plot of the observed and fitted values in Figure REF indicates a suitable fit of the model to the data.", "Figure: Autocorrelation plot of the LME model.", "Normalized residuals (right) and non-normalized (left)", "Figure: Normal Q-Q plots of random intercept (left); Q-Q plots of treatment days 1-3 (middle); Residual plot for treatment days (right)" ], [ "Discussion", "Results obtained in this analysis indicate that short-term exposure to low levels of sulfuric acid aerosol has an adverse effect on the measure of pulmonary lung function (FEV1).", "More specifically, we see a significant negative association between FEV1 and treatment with filtered clean air on the last day of the trial.", "A similar seemingly negative, but weak, association is observed with time (hour) of exposure to sulfuric acid.", "A stronger adverse effect is evident in participants who smoke compared to nonsmokers.", "Previous studies [9][10] have suggested that sulfuric acid constitutes the main component of air pollution responsible for increased mortality and morbidity in various events of major air pollution over the recent past.", "Spektor et al., 1989 [11] showed that cumulative exposure to inhaled sulfuric acid is adversely associated with tracheobronchial particle clearance in healthy humans.", "Similar studies [6][5][9] over the years support the general consensus that exposure to high levels of sulfuric acid could lead to complications with the pulmonary system.", "However, results from [4] examining exposure to low doses of sulfuric acid have been inconsistent.", "The analysis in this report explored the possibility of effect modification by the smoking status of the participant.", "As expected, participants with a previous smoking history appear more adversely affected by exposure to sulfuric acid compared 
to nonsmokers.", "This is indicative of a major flaw in the design of the study, which inadvertently affected the analysis in Kerr et al.", "(1981).", "In our analysis, results obtained from fitting a model for smokers among the participants were significantly different from the results obtained for nonsmokers.", "In general, we observe a weak adverse effect of exposure to sulfuric acid on pulmonary lung function.", "These findings are in line with previous work using higher concentrations of the exposure, primarily in animals.", "In the analysis of the dataset presented by Kerr et al., 1981, they reported no significant difference in pulmonary function during exposure, immediately after, or 2 and 24 hours post exposure.", "In their analysis of the data, they used a paired Student's t-test to compare the exposed participants with their controls.", "They assumed that the design was completely randomized and balanced.", "Indeed, the analysis would have been suitable if the measurements were not repeated and adjustments were only needed for within-subject variations.", "Consequently, a limitation of the method they used is that it does not account for repeated measurements within a subject and potential between-subject variations.", "They also used ANOVA for factorial designs to assess the varying effect of pulmonary lung function with day of exposure, hour and day-hour interaction; however, the method is often criticized as biased in general and loses information by not effectively incorporating intermediate measurements.", "Moreover, it could lead to inaccurate conclusions in unbalanced designs.", "[12] As a consequence of the patterns of correlations observed in the exploratory analysis, a standard analysis of variance as prescribed in Milliken and Johnson[3] is likely not appropriate for this dataset.", "Thus, a linear mixed effect analysis was implemented, which does not in general assume a completely balanced setting.", "A major advantage of using a linear mixed model 
for this design is that the manner in which the subjects are assigned to the treatment days in itself typically induces a covariance structure.", "Moreover, the design induces a covariance due to the contributions of the random effects.", "All 28 participants were included in the analysis, and any potential confounding resulting from smoking was controlled for by including their smoking status in the model.", "In order to accurately capture the true effect of exposure to sulfuric acid on pulmonary lung function, information on other potentially relevant variables would need to be properly included in the mean model.", "Incorporating fundamental covariates such as age, sex and BMI could substantially improve the model fit.", "The accuracy of the present analysis, in part, assumes that during the design stage as reported in Kerr et al.", "(1981), the effect of the variables age, height and sex were controlled.", "Another possible limitation of the model is in the specification of the covariance structure.", "Although the model with the better covariance structure was selected (AR1 vs. compound symmetric), the number of repeated measures was too small to truly capture and account for the covariance structure of the data generating mechanism.", "[13] We conclude that short-term exposure to sulfuric acid is negatively associated with the measure of pulmonary lung function FEV1; this is particularly evident among smokers.", "Since the study was conducted over a short period of time using a low dosage of sulfuric acid, we observe only a weak adverse effect in nonsmokers.", "This seems to be in agreement with studies [11][14] that assessed the association between FEV1 and exposure to various sulfuric constituents of air pollutants.", "A further investigation incorporating other potentially useful risk factors or longer exposure times is warranted." ] ]
1906.04296
[ [ "Optimal In-field Routing for Full and Partial Field Coverage with\n Arbitrary Non-Convex Fields and Multiple Obstacle Areas" ], [ "Abstract Within the context of optimising the logistics in agriculture this paper relates to optimal in-field routing for full and partial field coverage with arbitrary non-convex fields and multiple obstacle areas.", "A distinction is made between nine different in-field routing tasks: two for full-field coverage, six for partial-field coverage and one for shortest path planning between any two vertices of the transition graph.", "It differentiates between equal or different start and end vertices for a task, coverage of only a subset of vertices, coverage of a subset of edges, and combinations thereof.", "The proposed methods are developed primarily for applying sprays and fertilisers with larger operating widths and for fields where there is a unique headland path.", "Partial field coverage where, e.g., only a specific subset of edges has to be covered is relevant for precision agriculture and also for optimised logistical operation of smaller-sized machinery with limited loading capacities.", "The result of this research is the proposition of two compatible algorithms for optimal full and partial field coverage path planning, respectively.", "These are evaluated on three real-world fields to demonstrate their characteristics and computational efficiency." ], [ "Motivation", "Within the context of optimising logistics in agriculture this paper addresses the research question: how to optimally solve multiple in-field routing tasks including both full and partial field coverage, while simultaneously accounting for non-convex fields, multiple obstacle areas, partitioned subfields, and compacted area minimisation?", "It is thus aimed at optimising in-field coverage path planning, while generalising this to different tasks and different field characteristics." 
], [ "Problem Formulation and Contribution", "The problem addressed is to solve the following nine in-field routing tasks: (i) full field coverage with equal start and end vertex, (ii) full field coverage with different start and end vertex, (iii) coverage of a subset of vertices with equal start and end vertex, (iv) coverage of a subset of vertices with different start and end vertex, (v) coverage of a subset of edges with equal start and end vertex, (vi) coverage of a subset of edges with different start and end vertex, (vii) a combination of (iii) and (v), (viii) a combination of (iv) and (vi), and (ix) shortest path planning between any two vertices.", "As a starting point, the following assumptions are made: availability of a transition graph with edges connecting vertices, a corresponding cost array with edge-costs equal to the path length of edges, a designated start and end vertex, implementation of the task by one vehicle (instead of multiple in-field operating vehicles), and any of the above in-field routing tasks (i)-(ix).", "For this setup, the contribution of this paper is to address the research question from section REF .", "Figure: Illustration of different types of field shapes and the notion of tractor tracks defining a transition graph $\\mathcal {G}$ .", "Headland and interior edges are denoted by solid and dashed lines, respectively.", "Vertices are indicated by dots and are, in general, labelled for identification such that the transition between any two vertices is unique.", "(Left) Uninterrupted edges when aligned in a rotated coordinate frame.", "(Right) Interrupted edges due to field indents and obstacle areas that are prohibited from trespassing by any vehicle operating in the field." 
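As a minimal sketch of the assumed inputs (the toy coordinates, labels and helper `edge_cost` are invented here for illustration and not taken from the paper), a transition graph can be held as a set of labelled undirected edges with edge-costs equal to path lengths:

```python
import math

# Hypothetical toy field: four headland corner vertices (1-4) and two
# vertices (5, 6) where one interior tractor track meets the headland path.
coords = {1: (0.0, 0.0), 2: (4.0, 0.0), 3: (4.0, 3.0),
          4: (0.0, 3.0), 5: (2.0, 0.0), 6: (2.0, 3.0)}

# Undirected edges labelled as headland ('hdl') or interior ('int')
edges = {(1, 5): 'hdl', (5, 2): 'hdl', (2, 3): 'hdl',
         (3, 6): 'hdl', (6, 4): 'hdl', (4, 1): 'hdl',
         (5, 6): 'int'}

def edge_cost(i, j):
    # Edge cost equals the path length of the edge (here: Euclidean)
    (xi, yi), (xj, yj) = coords[i], coords[j]
    return math.hypot(xi - xj, yi - yj)

# As in the transition graphs of Fig. REF, every vertex is incident to
# either two edges (plain headland vertex) or three (track junction).
degree = {v: 0 for v in coords}
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
assert all(d in (2, 3) for d in degree.values())
```

A cost array over `edges` built from `edge_cost` then provides the input assumed by the routing algorithms.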
], [ "Background and Further Motivation", "According to [1] there are four main functional sectors for the agri-food supply chain: production, harvesting, storage and distribution.", "Optimising logistics and route planning plays an important role in all four functional areas for improved supply chain efficiency.", "Furthermore, according to [24] a distinction can be made between in-field, inter-field, inter-sector and inter-regional logistics.", "This paper relates to the first functional area of the agri-food supply chain, i.e., production, and further to in-field optimisation of logistics.", "The classic vehicle routing problem (VRP) seeks total cost-minimising routes for multiple identical vehicles that all start and end at a single depot and are subject to load constraints, and where multiple vertices (customers) subject to various demands must be serviced exactly once by exactly one vehicle.", "There are many variations, see [26].", "The focus is on vertex-coverage.", "By contrast, for agricultural in-field routing, edge-coverage is typically of primary importance.", "Often a single large machine is operating in-field, e.g., during a spraying application.", "Therefore, instead of VRPs, arc routing problems (ARPs) are of more interest here, see [7], [8].", "For shortest path planning between two vertices (routing task (ix) according to sect.", "REF ) greedy algorithms such as the algorithm by [6] and the A$^*$ -algorithm by [13] are well known.", "However, these are specialised algorithms that do not solve field coverage path planning problems where a set of edges needs to be covered.", "For ARPs according to [7], [8], problems are divided into the classes of the Chinese postman problem (CPP) and the rural postman problem (RPP), where all and only a subset of all arcs of the graph need to be traversed, respectively.", "For the CPP, a further distinction is made between undirected, directed, windy, mixed and hierarchical variants.", "Similarly, this occurs 
for RPPs.", "To further illustrate the complexity of solution algorithms and their typical hierarchical structure, a directed RPP can be solved by first constructing a shortest spanning arborescence, then deriving an Eulerian graph on top, and ultimately determining an Eulerian tour on the augmented graph.", "In general, Eulerian tours on an Eulerian graph are not unique and can differ substantially.", "Therefore, specific heuristics can be derived to shape path planning behaviour.", "Given an agricultural working area, the first step is to fit a transition graph with edges and vertices as their connection points.", "This step requires one to (i) decide on field-interior edge shapes, straight or curved, and (ii) account for field contours and possibly also for 3D topography, see [10].", "According to sect.", "REF , this paper assumes a transition graph as a given starting point, whereby obstacle areas prohibited from trespassing are also accounted for according to Fig.", "REF .", "[9] and [23] examined the role of field shapes on the efficiency of path planning.", "[29] discussed a hierarchical and heuristic algorithm for in-field routing with obstacle areas.", "It was hierarchical since it decomposed the problem into three sequential stages.", "It was heuristic since it decomposed the field area firstly into cells, then determined a sequence for coverage of the cells, and only then considered path plans and their linking between the different cells.", "A similar method was described in [25].", "Importantly, because of these hierarchical heuristics and for transition graphs applied to arbitrary field shapes and arbitrarily located multiple obstacles, these methods cannot in general guarantee finding a minimum cost tour covering all edges at least once.", "This is mentioned to explicitly stress that, by contrast, the algorithm proposed in this paper is guaranteed to find the minimum cost tour.", "This is achieved by working directly on the full Eulerian graph 
augmentation.", "Heuristic rules are derived on top to guide the planning of an Eulerian tour with favourable properties for additional partial field coverage and practical implementation.", "Field decomposition into subfields using trapezoids as presented in [16] is a popular method to deal with irregularly shaped fields ([15] and [21]).", "[3], [4] and [28] discussed different fieldwork patterns and headland turning methods subject to vehicle kinematic constraints.", "They were typically motivated by the desire to minimise accumulated non-working path lengths at headlands.", "Note that these methods do not naturally account for in-field obstacles.", "Instead, as mentioned in the introduction of [4], “B-patterns do not generate any subfield areas division but, by contrast, the generation of the subfields (when it is needed due to e.g.", "physical obstacles or complex field shapes) is a prerequisite for applying B-patterns methodology”.", "As a consequence, when accounting for obstacles (such as tree islands), B-patterns are in general no longer optimal since the same limitations of the aforementioned hierarchical algorithms apply.", "For a discussion of field experiments over three years for different headland turning methods see [17].", "In [18] two path planning patterns for partial field coverage were compared for convex field shapes.", "One of them was identified as particularly favourable when aiming for minimal compacted area from tractor tracks while accounting for limited turning radii of agricultural machinery.", "However, patterns are in general never optimal for arbitrary non-convex field shapes, particularly when also considering multiple obstacle areas.", "Thus, the present paper is more general and relevant not only for full but also for partial field coverage.", "Furthermore, as will be shown, the algorithms are designed purposely such that the preferred pattern from [18] is automatically recovered for convex field shapes.", "The remaining 
paper is organised as follows: algorithms, real-world examples, benefits and limitations, and the conclusion are described in sections -." ], [ "High-level strategy to address nine different routing tasks", "The nine classes of different in-field routing tasks addressed in this paper are summarised in Table REF .", "In practice, this number is required for generality.", "For perspective, spraying applications occur multiple times throughout any crop year.", "Depending on available machinery, weather and varying available time-windows, different routing tasks may apply, also including partial field coverage per field run.", "In view of precision agriculture, algorithmic solutions are therefore needed to address all tasks, T1 to T9.", "In this paper, two main levels are distinguished.", "At the highest level, there are the full field coverage tasks T1 and T2, whereby T2 can be considered as an extension of T1.", "The algorithm proposed for this level is discussed in sect.", "REF .", "On the second level, there are the partial field coverage tasks T3 to T8.", "Here, tasks T3, T5 and T7, and equivalently tasks T4, T6 and T8, exploit as their starting points the full field coverage solutions for T1 and T2, respectively.", "For the second level, the proposed algorithm is discussed in sect.", "REF .", "Finally, T9 is a special case that is discussed in sect.", "REF .", "Table: In this paper, 9 classes of in-field routing tasks are considered.", "Figure: Illustration of curved interior edges aligned to part of the field contour." 
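The two-level dispatch described above can be sketched as follows (the task descriptors and the helper `solver_level` are hypothetical names, mirroring the classification in Table REF):

```python
# Hypothetical descriptors: each task is classified by what must be
# covered and by whether start and end vertex coincide.
TASKS = {
    'T1': ('full', True),           'T2': ('full', False),
    'T3': ('vertices', True),       'T4': ('vertices', False),
    'T5': ('edges', True),          'T6': ('edges', False),
    'T7': ('vertices+edges', True), 'T8': ('vertices+edges', False),
    'T9': ('shortest_path', False),
}

def solver_level(task):
    # Level 1: full field coverage (T1, T2); level 2: partial coverage
    # tasks that build on the level-1 solutions; T9 is the special case
    # of plain shortest path planning, handled separately.
    coverage, _equal_start_end = TASKS[task]
    if coverage == 'full':
        return 1
    if coverage == 'shortest_path':
        return 3
    return 2

assert [solver_level(t) for t in ('T1', 'T2', 'T9')] == [1, 1, 3]
assert all(solver_level(t) == 2 for t in ('T3', 'T4', 'T5', 'T6', 'T7', 'T8'))
```

The dispatch reflects that the partial-coverage tasks reuse, rather than recompute, the full-coverage solutions.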
], [ "Preliminaries", "Topographical characteristics relevant for in-field routing, data variables, and solution variables are discussed.", "For the former, these are (i) arbitrarily shaped fields (convex or non-convex), (ii) multiple obstacle areas within the field (the term “obstacle area” comprises all obstacles prohibited from trespassing by in-field operating vehicles, including tree islands, ponds and so forth), (iii) either straight or curved interior edges aligned to part of the field contour, and (iv) partitioned subfields with interior edges orientated differently from the area-wise largest main part of the field.", "These are shown in Figs.", "REF and REF .", "Two more comments should be made.", "Firstly, curved interior edges still permit a transition graph representation that is analogous to Fig.", "REF .", "Secondly, in this paper separate transition graphs are defined for all partitioned subfields and the main field.", "The synchronised handling of all of these is detailed in sect.", "REF .", "Figure: Illustration of a subfield connected to the main field.", "Characteristics are (i) the sharing of a path, here from vertex 17 to 24, that coincides for both subfield and main field, and (ii) different interior edge orientations.", "The partition into main field and subfields occurs in particular in the case of strongly non-convex fields, where it is worthwhile to differentiate interior edge orientations to minimise compacted area.", "The annotation is added for two reasons: (i) to illustrate vertex labelling on a satellite image, (ii) to emphasise that connected subfields and the main field are treated separately according to sect.", ".", "Data variables used for the problem description are summarised in the main nomenclature table.", "See also Fig.", "REF for visualisation.", "Some additional comments are made.", "First, for every undirected transition graph, $\\mathcal {G}=(\\mathcal {V},\\mathcal {E})$ , the set of vertices and edges can be partitioned into 
subsets as follows: $\\mathcal {V} &= \\mathcal {V}^\\text{hdl} \\cup \\mathcal {V}^\\text{isl},\\\\\\mathcal {E} &= \\mathcal {E}_{\\text{hdl},\\text{hdl}}^\\text{hdl} \\cup \\mathcal {E}_{\\text{isl},\\text{isl}}^\\text{hdl} \\cup \\mathcal {E}_{\\text{hdl},\\text{isl}}^\\text{int} \\cup \\mathcal {E}_{\\text{isl},\\text{isl}}^\\text{int}.$ Second, start and end vertex, $s_\\text{start}$ and $s_\\text{end}$ , typically denote the field entry and exit vertex.", "Third, a list of elements is denoted by $\\lbrace \\cdot \\rbrace $ , the number of elements in a list by $|\\cdot |$ , and an edge between vertices $i$ and $j$ by $(i,j)$ .", "The “$+$ ”-operator is overloaded to indicate concatenation of lists as $\\lbrace \\cdot \\rbrace +\\lbrace \\cdot \\rbrace $ .", "Fourth, throughout this paper, it is assumed that only forward motion of any in-field operating machinery is permitted.", "Thus, any sequence of vertices such as a-b-a is prohibited.", "For shortest path planning it would imply the necessity of reverse driving, which is impractical, in particular for operations with trailers and for large edge lengths.", "Fifth, as Fig.", "REF illustrates, the number of edges incident to every vertex is either two or three.", "However, because of the previous assumption about forward motion only, at every vertex there are always only either one or two transition decisions available during routing.", "Finally, there always exists a path-length-minimising globally optimal solution to all routing tasks T1 to T9.", "This immediately follows from the nonnegativity property of edge-costs, here defined as path lengths.", "Solution variables most relevant in the presented algorithms are $s_t$ , $\\lbrace s_t\\rbrace _{0}^{T}$ , $C$ , and variations with different superscripts.", "These indicate a vertex at index $t$ , a sequence of vertices, and the accumulated path length, respectively." 
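The forward-motion-only assumption (no a-b-a sequences) translates into a simple feasibility rule when enumerating transition decisions; the sketch below (hypothetical helper and toy adjacency list, not from the paper) illustrates why every vertex offers only one or two decisions:

```python
def feasible_moves(prev, current, adjacency):
    # Admissible next vertices under forward motion only: all neighbours
    # of `current` except the vertex just arrived from, so immediate
    # reversals of the form a-b-a are excluded.
    return [v for v in adjacency[current] if v != prev]

# Toy adjacency list: vertex 2 has three incident edges (the maximum
# occurring in the transition graphs), vertices 1 and 3 have two each.
adjacency = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}

# Arriving at the degree-three vertex 2 from vertex 1 leaves two
# decisions; arriving at the degree-two vertex 3 from 2 leaves one.
assert feasible_moves(1, 2, adjacency) == [3, 4]
assert feasible_moves(2, 3, adjacency) == [1]
```

With vertex degrees of two or three, excluding the predecessor always leaves one or two candidates, matching the statement above.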
], [ "Full Field Coverage: T1 and T2", "Algorithm REF is proposed for full field coverage path planning according to routing tasks T1 and T2.", "Full Field Coverage for T1 and T2", "Several explanatory comments are made.", "First, $\\mathcal {G}^{\\prime }$ denotes the Eulerian graph augmentation of the undirected graph $\\mathcal {G}$ .", "Thus, $\\mathcal {G}$ is augmented in a total minimum cost manner such that afterwards every vertex has an even degree, i.e., an even number of incident edges; however, this is subject to the constraint that interior edges shall not be eligible as augmentation candidates.", "The reason for this is to enforce path planning with forward motion only for any in-field operating vehicle.", "Consequently, only headland edges and island headland edges are eligible.", "The edges replicated from $\\mathcal {G}$ for this augmentation are denoted by $\\mathcal {E}^{\\prime }$ .", "Due to the characteristic connectivity of $\\mathcal {G}$ , with each vertex being connected to at most 3 vertices, and the aforementioned constraint, an Eulerian graph augmentation (see [5]) can always be constructed by pairing neighbouring vertices in a cost-minimising manner.", "As a consequence, an overall path length minimising field coverage route for T1 with equal start and end vertex is always constructed by traversing every edge at most twice.", "By contrast, for T2 with $s_\\text{start}\\ne s_\\text{end}$ , some edges have to be traversed three times due to the final transition to $s_\\text{end}$ after completing the coverage of all edges of $\\mathcal {G}$ .", "Second, function $\\mathcal {F}^\\text{hdl}(\\cdot )$ returns $\\lbrace s_t\\rbrace _{0}^{T^\\text{hdl}}$ , which is the concatenation of the sequence of vertices tracing the headland path in counter-clockwise (CCW) direction from $s_\\text{start}$ to $s_\\text{start}$ .", "The CCW direction is a choice, motivated by the desire to ultimately obtain consistent circular pattern-like path 
planning to be detailed below.", "Importantly, this choice does not compromise optimality since tracing complies with $\\mathcal {G}^{\\prime }$ and $\\mathcal {G}^{\\prime }$ is not affected thereby.", "Step 3 of Algorithm REF , i.e., a shortest path computation on top of Step 2, only applies for T2 when $s_\\text{start}$ differs from $s_\\text{end}$ .", "Third, in Step 4 all edges that were covered as part of Step 2 are removed from $\\mathcal {G}^{\\prime }$ in both directions, whereby they are removed only once.", "Thus, edges stemming from the Eulerian graph augmentation as well as all interior edges and island headland edges are unaffected and remain with $\\mathcal {G}^{\\prime }$ .", "Furthermore, edges that stem from the shortest path contribution due to T2 and Step 3 are not removed.", "The general consequence of Step 4 is a reduction of edge candidates from $\\mathcal {G}^{\\prime }$ that are available for future traversal in Steps 5-11.", "Fourth, $\\lbrace s_t\\rbrace _0^T$ is traced throughout Steps 5-11.", "As soon as an edge from the Eulerian graph augmentation is traversed in Step 6, a subtour is computed in Step 7 starting from vertex $s_{\\tau +1}$ and ending at $s_{\\tau }$ .", "The method for subtour computation is equivalent to a shortest path computation, here, however, subject to two additional constraints plus one exploration heuristic.", "The two constraints are: (i) transitions along the headland path are feasible only in the CCW direction, and (ii) only forward motion and thus no a-b-a sequence of vertices is permitted.", "The exploration heuristic is crucial.", "It enforces exploration of any edge as soon as that edge is an element of $\\mathcal {E}^{\\prime }$ and determined to be a feasible next transition.", "This exploration step is necessary to avoid making part of $\\mathcal {G}^{\\prime }$ disconnected at Step 10 of Algorithm REF , in which case full field coverage would become impossible afterwards.", "Without the exploration heuristic 
and computation of just the shortest path between $s_{\tau +1}$ and $s_{\tau }$ , the possibility of disconnection arises in particular when there are multiple obstacle areas.", "Fifth, Steps 8-10 of Algorithm REF are responsible for the removal of covered edges from $\mathcal {G}^{\prime }$ and the insertion of the subtour determined in Step 7 into the main sequence of vertices, $\lbrace s_t\rbrace _0^T$ , in Step 9.", "Note that the length of $\lbrace s_t\rbrace _0^T$ thus changes dynamically during runtime.", "Consequently, the number of iterations according to Step 5 of Algorithm REF also changes during runtime.", "Sixth, in particular for convex field shapes and in the absence of obstacle areas, the combination of enforcing (i) traversal along all headland edges only in CCW direction (note that headland edges and island headland edges are distinguishable according to sect.", "REF ; the CCW-direction constraint only holds for headland edges, but not for island headland edges), (ii) traversal of all interior edges only once, and (iii) the corresponding Eulerian graph augmentation only along headland and island headland edges causes by design a certain circular path planning pattern visualised in Fig.", "REF .", "On the one hand, this pattern is favourable for partial field coverage as discussed in [18], but it is also optimal when it results from the application of Algorithm REF .", "This follows directly from the fact that Algorithm REF works on the full Eulerian graph $\mathcal {G}^{\prime }$ , which ensures a minimum cost tour.", "The pattern is not optimal in all scenarios, particularly those with multiple obstacle areas.", "However, in such cases it naturally also does not occur in the solution of Step 13.", "Examples that illustrate this further are discussed in sect.", ".", "Seventh, any subtour, $\lbrace s_k^\text{sub}\rbrace _{\tau +1}^{T^\text{sub}}$ , is inserted into the main sequence, $\lbrace s_t\rbrace _{0}^{T}$ , at 
Step 9 of Algorithm REF as soon as the subtour becomes available for traversal.", "As a consequence, interior edges are always covered first before any continuation along the headland path, until the next coverage of interior edges occurs according to the next inserted subtour.", "Furthermore, in case patterns according to Fig.", "REF are determined from Step 7 for subtours (e.g., in the absence of obstacle areas and for convex fields), interior edges are sequentially covered in pairs to form a concatenation of multiple of these patterns.", "This is favourable for the derivation of Algorithm REF for partial field coverage.", "Ultimately, the output of Algorithm REF in Step 13 summarises the total accumulated cost (path length), $C$ , computed in Step 12, and the corresponding sequence of vertices, $\lbrace s_t\rbrace _0^T$ , of the path for full field coverage according to T1 and T2.", "Figure: (Left) Sketches of the field coverage pattern naturally resulting from the application of Algorithm for convex field shapes and in the absence of obstacle areas.", "The sequence of vertices, $\lbrace a,b,c,d,e\rbrace $ , exemplifies the path planning for coverage of two straight edges $(a,d)$ and $(b,c)$ .", "(Right) Concatenation of two patterns and emphasis of two illustrative path transitions." 
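The Eulerian graph augmentation discussed above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not the paper's implementation: odd-degree vertices of a small weighted graph are paired at minimum total shortest-path cost (by brute force, which is feasible since only few vertices have odd degree), and the paper's restriction that only headland and island headland edges are eligible is omitted; all function names are hypothetical.

```python
# Sketch: Eulerian augmentation by minimum-cost pairing of odd-degree vertices.
# Assumptions: small graph, all edges eligible for duplication (the paper
# additionally restricts candidates to headland and island headland edges).
import heapq

def dijkstra(adj, src):
    # standard Dijkstra on an adjacency map {v: [(neighbour, weight), ...]}
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def min_cost_pairing(odd, dist):
    # brute-force minimum-cost perfect matching (fine for few odd vertices)
    if not odd:
        return 0.0, []
    first, rest = odd[0], odd[1:]
    best = (float("inf"), [])
    for i, mate in enumerate(rest):
        cost, pairs = min_cost_pairing(rest[:i] + rest[i + 1:], dist)
        cost += dist[first][mate]
        if cost < best[0]:
            best = (cost, [(first, mate)] + pairs)
    return best

def eulerian_augmentation(vertices, edges):
    # edges: list of (u, v, weight); returns the extra cost and the pairs of
    # odd-degree vertices whose connecting shortest paths would be duplicated
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    odd = [v for v in vertices if len(adj[v]) % 2 == 1]
    dist = {v: dijkstra(adj, v) for v in odd}
    return min_cost_pairing(odd, dist)
```

For a square with one diagonal, e.g. edges (0,1), (1,2), (2,3), (3,0) of weight 1 and (0,2) of weight 1.5, vertices 0 and 2 have odd degree and are paired via the diagonal; duplicating the edges along the returned pairs' shortest paths makes every degree even, after which an Euler tour traverses every edge exactly once.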
], [ "Partial Field Coverage: T3 to T8", "Algorithm REF is proposed for partial field coverage path planning according to routing tasks T3 to T8.", "Partial Field Coverage for T3 to T8 Table: NO_CAPTION", "Several explanatory comments are made.", "First, $\mathcal {L}_\mathcal {V}$ and $\mathcal {L}_\mathcal {E}$ denote the lists of vertices and edges to be covered according to any respective task from T3 to T8.", "The coverage of specific vertices may be relevant, for example, for refilling of spraying tanks at a mobile depot waiting at a specified vertex at the field headland, or for picking up or dropping off, e.g., fertilising material.", "It is also included for generality.", "Second, Step 2 of Algorithm REF is discussed.", "Function $\mathcal {F}^\text{seq}(\cdot )$ traces the output of T1 for T3, T5 and T7 or the output of T2 for T4, T6 and T8, before returning the list of edges ordered according to this tracing.", "Thus, this list can be written as $\mathcal {P}= \lbrace \mathcal {P}_0, \mathcal {P}_1, \dots , \mathcal {P}_{|\mathcal {P}|-1} \rbrace $ , whereby each element $\mathcal {P}_i = \left( \lbrace v_i^\text{in}\rbrace ,~ v_i^\text{out} \right),\quad \forall i=0,\dots ,|\mathcal {P}|-1,$ defines a directed edge with unique vertex $v_i^\text{out}$ for both $\mathcal {L}_\mathcal {V}$ and $\mathcal {L}_\mathcal {E}$ , but unique $v_i^\text{in}$ only for $\mathcal {L}_\mathcal {E}$ .", "Thus, $(v_i^\text{in},v_i^\text{out})\in \mathcal {L}_\mathcal {E}$ and $v_i^\text{out}\in \mathcal {L}_\mathcal {V}$ .", "The notation $\lbrace v_i^\text{in}\rbrace $ in (REF ) is used since, when tracing the tour $\lbrace s_t\rbrace _{0}^T$ in Step 2, there may be multiple vertices immediately preceding any $v_i^\text{out}\in \mathcal {L}_\mathcal {V}$ throughout that tour.", "All of these vertices are stored in lists denoted by $\lbrace v_i^\text{in}\rbrace ,~\forall i=0,\dots ,|\mathcal {P}|-1$ .", "Thus, 
each list $\\lbrace v_i^\\text{in}\\rbrace $ has length 1 for all $v_i^\\text{out}\\in \\mathcal {L}_\\mathcal {E}$ , but may have multiple entries for $v_i^\\text{out}\\in \\mathcal {L}_\\mathcal {V}$ depending on the tour $\\lbrace s_t\\rbrace _{0}^T$ .", "Third, in Step 3 the initial ordering of $\\mathcal {P}_i$ -elements is summarised by index-list $\\mathcal {I}$ .", "For reordered instances of index-list $\\mathcal {I}$ throughout Steps 6-15, the corresponding reordering of $\\mathcal {P}_i$ -elements shall be denoted by $\\mathcal {P}(\\mathcal {I})$ .", "Fourth, function $\\mathcal {F}^\\text{csp}(\\cdot )$ returns a sequence of multiple concatenated shortest paths, $\\lbrace s_t^{\\text{pfc},\\star }\\rbrace _{0}^{T^{\\text{pfc},\\star }}$ , and the corresponding accumulated cost, $C^{\\text{pfc},\\star }$ .", "Adding vertices, $s_\\text{start}$ and $s_\\text{end}$ , the following list can first be written, $\\left\\lbrace s_\\text{start}, ~\\lbrace v_{\\mathcal {I}_0}^\\text{in}\\rbrace ,~ v_{\\mathcal {I}_0}^\\text{out},~\\lbrace v_{\\mathcal {I}_1}^\\text{in}\\rbrace ,~ v_{\\mathcal {I}_1}^\\text{out},\\dots ,~\\lbrace v_{\\mathcal {I}_{|\\mathcal {P}|-1}}^\\text{in}\\rbrace ,~ v_{\\mathcal {I}_{|\\mathcal {P}|-1}}^\\text{out}, s_\\text{end} \\right\\rbrace ,$ before pairwise shortest paths are computed, i.e., between pairs $s_\\text{start}$ and $\\lbrace v_{\\mathcal {I}_0}^\\text{in}\\rbrace $ , between $v_{\\mathcal {I}_0}^\\text{out}$ and $\\lbrace v_{\\mathcal {I}_1}^\\text{in}\\rbrace $ , and so forth, until between $v_{\\mathcal {I}_{|\\mathcal {P}|-1}}^\\text{out}$ and $s_\\text{end}$ .", "For the case that any $\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace $ comprises more than one vertex, the shortest path between $v_{\\mathcal {I}_{j-1}}^\\text{out}$ and all of their vertices is computed.", "The best vertex from $\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace $ is then selected such that the path length from $v_{\\mathcal 
{I}_{j-1}}^\\text{out}$ to it plus the edge path length from it to $v_{\\mathcal {I}_{j}}^\\text{out}$ is shortest.", "The return values, $\\lbrace s_t^{\\text{pfc},\\star }\\rbrace _{0}^{T^{\\text{pfc},\\star }}$ and $C^{\\text{pfc},\\star }$ , represent the vertex sequence resulting from the concatenation of all pairwise shortest paths and the corresponding accumulated path length, whereby any edges, $(\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace ,~v_{\\mathcal {I}_{j}}^\\text{out})$ , that link the different pairwise shortest paths are also included.", "The shortest path computation on $\\mathcal {G}$ is based on [6], however, here accounting for two additional constraints: (i) transitions along the headland path are feasible only in CCW-direction in accordance with the method from sect.", "REF for full field coverage, (ii) directed edges $(v_{\\mathcal {I}_{j}}^\\text{out},~\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace ),~\\forall j=0,\\dots ,|\\mathcal {P}|-1$ are prohibited from being on any corresponding shortest path between $v_{\\mathcal {I}_{j-1}}^\\text{out}$ and $\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace $ .", "The latter is done to enforce paths with forward motion only.", "Because of the special modeling technique with $(v_{\\mathcal {I}_{j}}^\\text{out},~\\lbrace v_{\\mathcal {I}_j}^\\text{in}\\rbrace ),~\\forall j=0,\\dots ,|\\mathcal {P}|-1$ , plus the two vertices $s_\\text{start}$ and $s_\\text{end}$ , the length of (REF ) will always be even such that pairwise shortest path computations are always exactly possible.", "Fifth, in Step 5 of Algorithm REF a tabu list, $\\mathcal {T}^\\text{abu}$ , is initialised with indexing list $\\mathcal {I}$ , which indicates the sequence of edges and nodes to be covered according to Step 2 and 3.", "The currently best indexing list, $\\mathcal {I}^\\star $ , is also initialised, before improvement iterations start from Step 6 on.", "Sixth, the fundamental idea of Steps 6-15 is to iterate over 
indexing list $\\mathcal {I}$ with the purpose of improving cost $C^{\\text{pfc},\\star }$ and the corresponding sequence of vertices, $\\lbrace s_t^{\\text{pfc},\\star }\\rbrace _{0}^{T^{\\text{pfc},\\star }}$ .", "Function $\\mathcal {F}^\\text{r2n}(\\cdot )$ in Steps 7 and 9 randomly exchanges 2 neighbouring indices in $\\mathcal {I}^\\star $ and $\\mathcal {I}$ , respectively, to produce a new candidate list $\\mathcal {I}$ .", "It was found that in Step 7 attempting to exchange $\\mathcal {I}^\\star $ , instead of the last $\\mathcal {I}$ , improved performance.", "Similarly, it was found that incrementally exchanging only two neighbouring indices yielded faster solve times in contrast to randomly reshuffling the entire $\\mathcal {I}$ -list at every iteration.", "Most importantly, it was found that the employment of a tabu list, $\\mathcal {T}^\\text{abu}$ , significantly helped increasing the likelihood and speed of finding the global optimal $C^{\\text{pfc},\\star }$ .", "This is since the effect of $\\mathcal {T}^\\text{abu}$ and Steps 8-10 is that exploration of different $\\mathcal {I}$ -candidates is enforced.", "According to Step 11, an $\\mathcal {I}$ not yet in $\\mathcal {T}^\\text{abu}$ is added at index 0 and all remaining elements of $\\mathcal {T}^\\text{abu}$ are shifted by 1 index, thereby deleting its previous last element if necessary to ensure a finite maximum length of the tabu list.", "More aspects of the size of $\\mathcal {T}^\\text{abu}$ are discussed at the end of this section and in sect.", ".", "Seventh, the edges to be covered as a part of a routing task may be defined as undirected edges in $\\mathcal {L}_\\mathcal {E}$ when input to Algorithm REF as part of Step 1.", "However, after the tracing in Step 2, all edges of $\\mathcal {L}_\\mathcal {E}$ are automatically directed according to the transitions from $\\lbrace s_t\\rbrace _{0}^T$ .", "Since $\\mathcal {P}$ is not changed beyond Step 2 in Algorithm REF , also the direction 
of edge traversals is not changed further.", "This is done on purpose.", "In combination with the method of concatenating shortest paths subject to the 2 constraints outlined above, it is thereby ensured that all transitions between any headland edge and any interior edge are unique.", "This is favourable with respect to minimisation of the soil compacted area.", "If unique transitions were not enforced, then depending on a routing mission, an unconstrained shortest path computation may generate a new transition between a headland and interior edge, which in practice due to limited turning radii of in-field operating tractors would cause newly compacted area for this transition.", "Consequently, the harvestable area would be destroyed and the crop lost.", "Furthermore, in case there are many different partial field coverage missions, the consequence may even be that at every transition from headland to interior and vice versa there are tractor tracks in every direction, which would be the worst-case scenario with respect to minimisation of the soil compacted area.", "Eighth, the motivation for the general methodology in Algorithm REF is underlined.", "In general, the partial field coverage problems can be considered as travelling salesman problems (TSPs) subject to additional constraints.", "For general TSPs, the complexity increases extremely quickly with problem size.", "For $n$ general entities to be visited, the number of different orders in which these can be visited is $n!$ .", "For $n=5$ this is $n!=120$ , however, for $n=10$ it is already $n!=3628800$ .", "The key notion and main argument for the design of Algorithm REF is that full field coverage can be considered as a special case of partial field coverage.", "For that particular case, the optimal solution is immediately recovered from Steps 2-4 since, for all partial field coverage solutions, Algorithm REF always starts from the full field coverage solution according to Algorithm REF .", 
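The tabu-guided reordering of Steps 6-15 can be sketched as follows. This is a simplified sketch, not the paper's code: a toy Euclidean tour cost stands in for the concatenated-shortest-path cost $\mathcal {F}^\text{csp}(\cdot )$ , while the random 2-neighbour exchange and the bounded, insert-at-front tabu list mirror $\mathcal {F}^\text{r2n}(\cdot )$ and Step 11; all names are illustrative.

```python
# Sketch of tabu-guided reordering (assumptions: Euclidean stand-in cost,
# random 2-neighbour swaps, bounded tabu list with insert-at-front semantics).
import random

def tour_cost(order, points, start=(0.0, 0.0)):
    # accumulated Euclidean length: start -> points in the given order -> start
    seq = [start] + [points[i] for i in order] + [start]
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(seq, seq[1:]))

def swap_neighbours(order, rng):
    # analogue of F^r2n: randomly exchange two neighbouring indices
    j = rng.randrange(len(order) - 1)
    cand = list(order)
    cand[j], cand[j + 1] = cand[j + 1], cand[j]
    return cand

def tabu_reorder(points, n_iter, n_tabu, seed=0):
    rng = random.Random(seed)
    order = list(range(len(points)))
    best, best_cost = order, tour_cost(order, points)
    tabu = [list(order)]
    for _ in range(n_iter):
        cand = swap_neighbours(best, rng)      # perturb the incumbent
        for _ in range(20):                    # enforce exploration: avoid
            if cand not in tabu:               # recently visited orderings
                break
            cand = swap_neighbours(cand, rng)
        tabu.insert(0, cand)                   # add at index 0 ...
        tabu = tabu[:n_tabu]                   # ... and bound the list length
        cost = tour_cost(cand, points)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

For three collinear points at x = 1, 2, 3 the optimal round trip from the origin has length 6, which the sketch reaches quickly because any neighbour swap of the initial ordering already yields an improving candidate.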
"By contrast, for alternative TSP-solution methods that would start from a more general setting, no guarantee can be provided about the retrieval of the desired original optimal full field coverage plan.", "Similarly, Algorithm REF is well suited for partial field coverage applications where groups of neighbouring interior edges (pairs), or specific regions of the field, are meant to be covered.", "This again follows from the initial tracing of the solution for full field coverage according to Step 2.", "Nevertheless, iteration Steps 6-15, and in particular also the usage of the tabu list for exploration, are still required for generality of partial field coverage missions, for example, when multiple vertices and edges that are far apart need to be covered.", "Ultimately, the 2 hyperparameters in Algorithm REF , $N_\mathcal {I}$ and $N_{\mathcal {T}^\text{abu}}$ , are discussed.", "An upper bound is $N_{\mathcal {T}^\text{abu}}\le \left(|\mathcal {L}_\mathcal {V}| + |\mathcal {L}_\mathcal {E}|\right)!$ .", "There is no gain from a larger tabu list since in that case $\mathcal {T}^\text{abu}$ can already accommodate all possible sequences of vertices and edges of $\mathcal {L}_\mathcal {V}$ and $\mathcal {L}_\mathcal {E}$ .", "Furthermore, it is sensible to bound $N_\mathcal {I}\ge N_{\mathcal {T}^\text{abu}}$ since the tabu list is otherwise never filled completely.", "For $N_\mathcal {I}> N_{\mathcal {T}^\text{abu}}$ , there is a likelihood that throughout iteration Steps 6-15 the same $\mathcal {I}$ is added multiple times to the tabu list, which does not yield exploration progress.", "Therefore, it is proposed that $N_\mathcal {I} = N_{\mathcal {T}^\text{abu}}.$ Then, there is only 1 hyperparameter in Algorithm REF , which is also easy to tune: the larger the better for exploration and finding of the optimal solution.", "In practice, $N_{\mathcal {T}^\text{abu}}$ may be limited by a constrained or desired maximum solve time 
or by the above factorial bound for small problems.", "Examples of this are discussed in sect.", "REF ." ], [ "Special case: T9", "The last in-field routing task, shortest path planning between any 2 vertices of $\mathcal {G}$ , may become relevant, for example, once a spraying tank is empty and the vehicle must return efficiently to a mobile depot waiting along the field boundary.", "The method for shortest path computation applied is identical to the one described in sect.", "REF , where multiple shortest paths are concatenated.", "By non-negativity of edge weights and connectivity of $\mathcal {G}$ there always exists a shortest path between any 2 vertices.", "Figure: Field 1.", "A real-world field (54$^\circ $ 10'52.44\"N, 10$^\circ $ 19'57.48\"E) of size 13.5ha.", "The operating width is 36m.", "The vertices of $\mathcal {G}$ are labelled.", "The field entry and exit is vertex 0." ], [ "Special case: Handling of connected subfields", "As mentioned in sect.", "REF , separate transition graphs are defined for all partitioned subfields and the main field.", "Consequently, the algorithms for T1 and T2 from sect.", "REF for full field coverage, for T3 to T8 for partial field coverage, and T9 as a special case are all also applied separately to each of the subfields and the main field.", "However, for synchronisation one modification is implemented.", "While the headland traversal direction of the main field is defined as CCW, it is now defined as CW for all subfields.", "This is the only difference and it permits insertion of subfield solutions into the main field solution sequence of vertices, while ensuring a consistent travel direction along the headland path segments coinciding for both the main and all connected subfields.", "For visualisation, see Fig.", "REF .", "According to the above rules, the travel sequence of vertices along the coinciding headland paths is $\lbrace 17,18,\dots ,24\rbrace $ .", "An effect of this method for handling connected subfields is that part of 
the path coinciding for both the main field and any subfield is covered four times.", "Figure: Field 2.", "A real-world field (54$^\circ $ 13'9.65\"N, 10$^\circ $ 21'8.08\"E) of size 74.3ha.", "The operating width is 36m.", "There are 4 obstacle areas within the field contours." ], [ "Illustrative Examples", "All methods were implemented in Python running on an Intel i7-7700K CPU @ 4.20GHz $\times $ 8 processor with 15.6 GiB memory." ], [ "Full Field Coverage Examples 1 to 3", "Algorithm REF was evaluated on three real-world examples.", "It should be recalled that results are guaranteed to be path length optimal since the algorithm works directly on the full Eulerian graph augmentation as discussed in sect.", "REF .", "Thus, the results cannot be further improved quantitatively.", "Nevertheless, path length savings, $\Delta $ AB, with respect to the in practice widespread (but suboptimal) AB-pattern are stated in Table REF to provide a comparison.", "For a detailed discussion of the disadvantages of the AB-pattern, see [18].", "It is also emphasised that in addition to optimality, Algorithm REF according to sect.", "REF is designed purposely (i) for compatibility with partial field coverage, and (ii) to recover the path planning pattern from Fig.", "REF whenever possible.", "The benefits of this design are discussed further below.", "Figure: Field 3.", "A real-world field (53$^\circ $ 46'26.34\"N, 11$^\circ $ 11'45.43\"E) of size 62.9ha.", "The operating width is 36m.", "There are 6 obstacle areas within the field contours.", "Example 1 is based on the field in Fig.", "REF .", "There are no obstacle areas present.", "The optimal sequence of vertices, $\lbrace s_t\rbrace _0^T$ , for full field coverage according to T1 is: $&\lbrace 0,1,22,0,1,\textbf {2},\textbf {3},\textbf {20},\textbf {21},\textbf {2},\textbf {3},\textbf {4},5,18,19,4,5,6,7,\\&16,17,6,7,8,9,14,15,8,9,10,11,12,13,10,11,\\&23,12,13,14,15,16,17,18,19,20,21,22,0 \rbrace ,$ whereby the first occurrence of 
the pattern illustrated in Fig.", "REF , and which is naturally evolving from the application of Algorithm REF , is emphasised in bold.", "Field size, number of vertices, path length and in particular computation times for Algorithm REF are summarised in Table REF .", "When tracing (REF ), there are five of the aforementioned patterns concatenated for field coverage.", "Example 2 is based on the field in Fig.", "REF .", "The optimal sequence of vertices, $\\lbrace s_t\\rbrace _{0}^{T}$ , for full field coverage according to T2 with $s_\\text{start}=0$ and $s_\\text{end}=14$ is: $\\lbrace &0, 1, 2, 63, 64, 65, 66, 1, 2, 3, 4, 61, 62, 3, 4, 5, 6, 59, \\\\& 60, 5, 6, 7, 8, 57, 58, 7, 8, 9,10, 79, 98, \\textbf {97}, \\textbf {96}, \\textbf {55}, \\\\& \\textbf {56}, \\textbf {97}, \\textbf {96}, \\textbf {95}, 94, 53, 54, 95, 94, 93, 92, 51, 52, 93,\\\\& 92, 91, 90, 49, 50, 91, 90, 89,88, 47, 48, 89, 88, 87,\\\\& 86, 17, 18, 87, 86, 85, 84, 15,16, 85, 84, 83, 82, 13,\\\\& 14, 83, 82, 81, 80, 11, 12, 81, 80, 79, 98, 9, 10, 11,\\\\& 12, 13, 14, 15, 16, 17, 18, 19, \\textbf {20}, \\textbf {75}, \\textbf {78}, \\textbf {77}, \\textbf {76}, \\textbf {45}, \\\\& \\textbf {46}, \\textbf {77}, \\textbf {76}, \\textbf {75}, \\textbf {78}, \\textbf {19}, \\textbf {20}, \\textbf {21}, 22, 71, 74, 73, 72, 43, \\\\& 44, 73, 72, 71, 74, 21, 22, 23, 24, 41, 42, 23, 24, 25,\\\\& 26, 39, 40, 25, 26, 27, 28, 67, 70, 69, 68, 37, 38, 69, \\\\& 68, 67, 70, 27, 28, 29, 30, 35, 36, 29, 30, \\textbf {31}, \\textbf {32}, \\textbf {33}, \\\\& \\textbf {34}, \\textbf {31}, \\textbf {32}, \\textbf {99}, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, \\\\& 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56,\\\\& 57, 58, 59, 60, 61, 62, 63, 64, 100, 65, 66, 0,1,2,3,\\\\&4,5,6,7,8,9,10,11,12,13,14\\rbrace ,$ Several comments are made.", "First, in (REF ) and () multiple patterns can again be observed.", "Second, between () and () the coverage of the first and largest obstacle area with additional 
exploring of patterns outgoing from the island can be observed.", "The first pattern thereof is emphasised in bold in () and ().", "The effect of exploring interior edges connected to the island headland edges is an immediate result of the exploration heuristic as discussed in sect.", "REF .", "Third, in ()-() the sequence of vertices for the covering of the second island is shown in bold for emphasis.", "Notably, the same method to cover the third and fourth island can be observed in ()-(), and also in ()-(), respectively.", "However, it cannot be deduced from this that islands with four interior edges incident are always handled optimally as indicated.", "Here, this is merely a coincidence due to the Eulerian graph augmentation for the particular field in Fig.", "REF .", "Nevertheless, from these cases a deterministic consistency of the output from Algorithm REF can be observed, which is desirable.", "Fourth, in ()-() the last pattern and coverage of the last two remaining interior edges is emphasised in bold.", "Afterwards, the sequence of vertices ()-() proceeds along the headland for final coverage of not-yet covered headland edges.", "Because of the characteristic Eulerian graph augmentation and the previous pattern-like coverage of interior edges, every second headland edge from vertex 34 up until 0 has not been covered to this point.", "Fifth and ultimately, () results from the fact that $s_\text{start}$ is different from $s_\text{end}$ for this example.", "Example 3 is based on the field in Fig.", "REF .", "The optimal sequence of vertices, $\lbrace s_t\rbrace _{0}^{T}$ , for field coverage according to T1 is: $&\lbrace 0, 1, 2, 59, 60, 1, 2, 3, 4, 57, 58, 3, 4, 5,6, 55, 56, 5,\\& 6, 7, 8, 53, 54, 7, 8, \textbf {9}, \textbf {10}, \textbf {63}, \textbf {64}, \textbf {11}, \textbf {12}, \textbf {49}, \textbf {50}, \textbf {75},\\& \textbf {76}, \textbf {77}, \textbf {78}, \textbf {61}, \textbf {62}, \textbf {63}, \textbf {64}, \textbf 
{61}, \\textbf {62}, \\textbf {77}, \\textbf {78}, \\textbf {75}, \\textbf {76}, \\textbf {51},\\\\& \\textbf {52}, \\textbf {9}, \\textbf {10}, \\textbf {11}, 12, 13, 14, 80,81, 67, 66, 65, 70, 17,\\\\& 18, 43, 44, 65, 70, 69, 68, 15, 16, 69, 68, 67, 66,45, \\\\& 46, 82, 79, 80, 81, 82, 79, 47, 48, 13, 14, 15, 16, 17,\\\\& 18, \\textbf {19}, \\textbf {20}, \\textbf {89}, \\textbf {88}, \\textbf {87}, \\textbf {86}, \\textbf {41}, \\textbf {42}, \\textbf {87}, \\textbf {86}, \\textbf {85}, \\textbf {84}, \\textbf {39},\\\\& \\textbf {40}, \\textbf {85}, \\textbf {84}, \\textbf {83}, \\textbf {100}, \\textbf {37}, \\textbf {38}, \\textbf {83}, \\textbf {100}, \\textbf {99}, \\textbf {98}, \\textbf {35}, \\textbf {36}, \\\\& \\textbf {99}, \\textbf {98}, \\textbf {97}, \\textbf {96}, \\textbf {27}, \\textbf {28}, \\textbf {33}, \\textbf {34}, \\textbf {97}, \\textbf {96}, \\textbf {95}, \\textbf {94}, \\textbf {25}, \\textbf {26},\\\\& \\textbf {95}, \\textbf {94}, \\textbf {93}, \\textbf {92}, \\textbf {23}, \\textbf {24}, \\textbf {93}, \\textbf {92}, \\textbf {91}, \\textbf {90}, \\textbf {71}, \\textbf {74}, \\textbf {73}, \\textbf {72},\\\\& \\textbf {21}, \\textbf {22}, \\textbf {73}, \\textbf {72}, \\textbf {71}, \\textbf {74}, \\textbf {91}, \\textbf {90}, \\textbf {89}, \\textbf {88}, \\textbf {19}, \\textbf {20}, \\textbf {21}, 22,\\\\& 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 29, 30, 101, 31, \\\\&32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, \\\\& 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,\\\\& 60, 0\\rbrace ,$ Several comments are made.", "First, in (REF ) multiple of aforementioned patterns can be observed.", "Second, the sequence of vertices covering both the first and second obstacle area is in bold for emphasis throughout ()-().", "This sequence is not intuitive a priori.", "Edges are covered twice according to the Eulerian graph augmentation.", "Third, the coverage of the third and fourth island is described in () -().", 
"Fourth, the sequence of vertices covering the fifth and sixth island is in bold for emphasis in ()-().", "Similar to Example 2, the exploration of interior edges connected to the island headland edges can be observed, which again is an immediate result of the exploration heuristic in Step 7 of Algorithm REF .", "Table: Full field coverage examples.", "Summary of results.", "Examples 1-3 correspond to Fields 1-3 in Fig.", "-.", "Computation runtimes for Algorithm are in bold for emphasis." ], [ "Partial Field Coverage Examples 4 to 6", "Three partial field coverage examples are discussed.", "The first is based on Fig.", "REF , while the latter two are based on Fig.", "REF .", "It is stressed that Algorithm REF is closely linked to and explicitly builds on the solution from Algorithm REF for full field coverage as discussed in sect.", "REF .", "This is enforced in order to make full and partial field coverage compatible for minimisation of the soil compacted area from tractor tracks when accounting for the limited turning radii of agricultural machinery.", "In the absence of this compatibility, depending on the routing mission, unconstrained shortest path computation may generate new transitions between headland and interior edges, which in practice due to limited turning radii of in-field operating tractors would cause newly compacted areas for these transitions.", "Consequently, the harvestable area would be destroyed and the crop lost.", "This lack of compatibility, and thus the danger of crop loss, always occurs when partial field coverage routes are planned without accounting explicitly for full field coverage.", "As an aside, it should be noted that the proposed method of Algorithm REF for partial field coverage can be built on any provided solution for full field coverage.", "For example, instead of providing the path length optimal solution for full field coverage according to Algorithm REF as data input, alternatively, the path length suboptimal AB-pattern solution could be 
provided as input, $\\lbrace s_t\\rbrace _0^T$ , in Step 1 of Algorithm REF .", "As a result, partial field coverage plans compatible with the full field coverage plan according to the AB-pattern could be generated.", "In the interest of space, this section focuses on the main practical aspects and hyperparameter choices for Algorithm REF for the path length optimal case.", "For Example 4, the artificial problem setup comprises $\\mathcal {L}_\\mathcal {E}=\\lbrace (6,17),~(9,14),~(20,21)\\rbrace $ .", "This in-field routing task classifies as T5, arguably the most relevant class for partial field coverage.", "For the results in Table REF , hyperparameters were set as $N_\\mathcal {I}=6$ and $N_{\\mathcal {T}^\\text{abu}}=6$ since here $|\\mathcal {L}_\\mathcal {E}|!=6$ and according to (REF ).", "The optimal sequence of vertices, $\\lbrace s_t^{\\text{pfc},\\star }\\rbrace _{0}^{T^{\\text{pfc},\\star }}$ , is: $& \\lbrace 0, 1, 2, 3, 4, 5, 6, 7, 8, \\textbf {9}, \\textbf {14}, 15, 16, \\textbf {17}, \\textbf {6}, 7, 16, \\\\&17, 18, 19, \\textbf {20}, \\textbf {21}, 22, 0\\rbrace ,$ where all edges involving $\\mathcal {L}_\\mathcal {E}$ are in bold for emphasis.", "Two comments are made.", "First, note that edges from $\\mathcal {L}_\\mathcal {E}$ are initially undirected.", "However, as a consequence of Step 2 in Algorithm REF , they become directed in the list of $\\mathcal {P}$ .", "These directions are then maintained throughout as emphasised in (REF ).", "Second, note how interior edge $(7,16)\\notin \\mathcal {L}_\\mathcal {E}$ is traversed as part of the shortest path towards $(20,21)$ after coverage of the 2 edges $(9,14)$ and $(17,6)$ that are element of $\\mathcal {L}_\\mathcal {E}$ .", "For Example 5, the artificial problem setup comprises 3 randomly and far apart selected vertices and edges, respectively.", "These are $\\mathcal {L}_\\mathcal {V}=\\lbrace 28,91,79\\rbrace $ and $\\mathcal {L}_\\mathcal {E}=\\lbrace (63,64),~(54,55),~(101,31)\\rbrace $ 
.", "This in-field routing task thus classifies as T7.", "For the results in Table REF , hyperparameters were set as $N_\mathcal {I}=50$ and $N_{\mathcal {T}^\text{abu}}=50$ .", "The optimal sequence of vertices, $\lbrace s_t^{\text{pfc},\star }\rbrace _{0}^{T^{\text{pfc},\star }}$ , is: $& \lbrace 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, \textbf {63}, \textbf {64}, 11, 12, 13, 14, \\&15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, \textbf {27},\textbf {28},\\& 29, 30, \textbf {101}, \textbf {31}, 32, 33, 34, 97, 96, 95, 94, 93, \textbf {92}, \textbf {91},\\& 90, 89, 88, 87, 42, 43, 44, 45, 46, \textbf {82}, \textbf {79}, 47, 48, 49, \\&50, 51, 52, 53, \textbf {54}, \textbf {55}, 56, 57, 58, 59, 60, 0\rbrace $ where all edges involving $\mathcal {L}_\mathcal {V}$ and $\mathcal {L}_\mathcal {E}$ are in bold for emphasis.", "An additional comment is made.", "When tracing the output of T1, $\lbrace s_t\rbrace _{0}^{T}$ , from (REF ) as part of Step 2 in Algorithm REF , one derives $\mathcal {P}_3 = \left( \lbrace 92,74\rbrace ,~ 91 \right).$ ", "This implies (i) that vertex 91, which is an element of $\mathcal {L}_\mathcal {V}$ , is encountered at index $i=3$ , and further (ii) that this vertex has 2 candidate vertices, 92 and 74, immediately preceding vertex 91 in $\lbrace s_t\rbrace _0^T$ of T1.", "This possibility was discussed in detail in sect.", "REF .", "As indicated in (REF ), the final optimal transition is $(92,91)$ .", "For Example 6, the problem setup comprises 8 edges to imitate a precision agriculture application where only specific edges, spread over the entire field, must be covered.", "These are $\mathcal {L}_\mathcal {E}=\lbrace (1,60),~(2,59),~(19,88),~(20,89)$ , $(27,96),~(97,34),~(28,33),~(29,32)\rbrace $ .", "For the results in Table REF , hyperparameters were set as $N_\mathcal {I}=350$ and $N_{\mathcal {T}^\text{abu}}=350$ .", "The optimal accumulated path length minimising sequence of vertices, 
$\\lbrace s_t^{\\text{pfc},\\star }\\rbrace _{0}^{T^{\\text{pfc},\\star }}$ , is: $&\\lbrace 0, 1, \\textbf {2}, \\textbf {59}, \\textbf {60}, \\textbf {1}, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, \\\\&14, 15, 16, 17, 18, 19, \\textbf {20}, \\textbf {89}, \\textbf {88}, \\textbf {19}, 20, 21, 22, 23, \\\\&24, 25, 26, 27, \\textbf {28}, \\textbf {33}, \\textbf {34}, \\textbf {97}, \\textbf {96}, \\textbf {27}, 28, 29, 30,31, \\\\&\\textbf {32}, \\textbf {29}, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, \\\\&42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, \\\\&56, 57, 58, 59, 60, 0\\rbrace ,$ where all edges involving $\\mathcal {L}_\\mathcal {E}$ are in bold for emphasis.", "Two comments are made.", "First, in this example the second constraint discussed in sect.", "REF for the specific shortest path computation to enforce forward motion only becomes active.", "The result is the sequence of vertices in (REF ), $\\lbrace 28,29,30,31\\rbrace $ , preceding the directed edge traveral $(32,29)$ in ().", "Second, notice how interior edge $(30,31)$ is covered twice throughout (REF ).", "The second traversal in () is part of the shortest path back to $s_\\text{end}=0$ , after coverage of the last remaining edge from $\\mathcal {L}_\\mathcal {E}$ .", "For full field coverage, all interior edges were constrained to be covered only once as part of the Eulerian graph augmentation in order to ensure forward motion and to encourage circular pattern-like optimal path planning whenever applicable.", "However, for partial field coverage, this constraint is dropped to minimise path length and soil strain due to tractor tracks.", "Importantly, all transitions between interior edges and headlands are still fully compatible with the results for full field coverage, such that no new compacted areas due to limited turning radii of in-field operating vehicles are created.", "Finally, hyperparameter choices and the role of the tabu list are discussed.", "It was observed that the 
inclusion of a tabu list, $\\mathcal {T}^\\text{abu}$ , in Algorithm REF significantly helped to retrieve the globally optimal solution for partial field coverage tasks.", "Furthermore, increasing the size of the tabu list enforces more exploration and significantly accelerates finding the optimum.", "For example, when the maximum tabu list size was reduced to $N_{\\mathcal {T}^\\text{abu}}=25$ for Examples 5 and 6, $N_\\mathcal {I}$ had to be increased to 100 and 650, respectively, to still retrieve the optimal solution.", "The corresponding solve times for these scenarios were $0.3306$ s and $2.7173$ s, roughly twice as large as the results in Table REF .", "To stress this further, when $N_{\\mathcal {T}^\\text{abu}}$ was reduced to 10 in Example 6, $N_\\mathcal {I}=3300$ was required before the optimal solution was recovered, which resulted in $T_\\text{solve}=14$ s. To summarise, the inclusion of the tabu list in Algorithm REF is a simple yet effective method to enforce exploration and speed up solve times.", "The larger the tabu list, the better the exploration throughout Algorithm REF .", "Table: Partial field coverage examples.", "Summary of results.", "Example 4 refers to Field 1, while Examples 5 and 6 correspond to Field 3.", "Computation runtimes for Algorithm  are in bold for emphasis.", "Hyperparameter choices used for the three examples are $(N_\\mathcal {I},N_{\\mathcal {T}^\\text{abu}})=(6,6)$ , $(50,50)$ and $(350,350)$ , respectively."
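The exploration effect of the bounded tabu list can be sketched generically (a hedged illustration only, not the paper's Algorithm: the candidate set, the toy cost function and the function names are invented for the example). A FIFO list of maximum size $N_{\mathcal {T}^\text{abu}}$ temporarily excludes recently evaluated candidates, so a larger list pushes the search towards unexplored candidates sooner.

```python
import random
from collections import deque

def tabu_search(candidates, cost, n_iter, tabu_size, seed=0):
    """Generic bounded-FIFO tabu list: recently evaluated candidates are
    excluded from re-selection, which enforces exploration."""
    rng = random.Random(seed)
    tabu = deque(maxlen=tabu_size)  # oldest entries drop out automatically
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        # exclude tabu candidates; fall back to all if everything is tabu
        pool = [c for c in candidates if c not in tabu] or list(candidates)
        choice = rng.choice(pool)
        tabu.append(choice)
        c = cost(choice)
        if c < best_cost:
            best, best_cost = choice, c
    return best, best_cost
```

With `tabu_size = len(candidates) - 1`, every candidate is guaranteed to be evaluated within the first `len(candidates)` iterations, mirroring the observation above that a larger tabu list recovers the optimum with a smaller iteration budget $N_\mathcal {I}$.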
], [ "Benefits and Limitations", "One benefit of the proposed methods is the very small $T_\\text{solve}$ , which demonstrates their computational efficiency.", "As Table REF shows, this particularly holds for full field coverage tasks; it is a consequence of starting with an Eulerian graph $\\mathcal {G}^{\\prime }$ and subsequently removing covered edges, so that the set of feasible edge transitions shrinks with the iterations, which further accelerates runtimes.", "By contrast, as Table REF shows for partial field coverage applications, $T_\\text{solve}$ is typically higher.", "As in travelling salesman problems, here with additional constraints enforcing forward motion and traversal along the headland only in the CCW direction, the sequence for tracing a subset of vertices or edges is not straightforward to compute, particularly if multiple obstacle areas are present.", "Another benefit of the proposed methods is the complete absence of hyperparameters in Algorithm REF and the presence of only two hyperparameters in Algorithm REF ; preferably just one, according to (REF ).", "As pointed out towards the end of sect. REF , the larger the maximum tabu list size, $N_{\\mathcal {T}^\\text{abu}}$ , the better for enforcing exploration.", "For a very small number of vertices and edges to cover, one can select $N_{\\mathcal {T}^\\text{abu}}=(|\\mathcal {L}_\\mathcal {E}| + |\\mathcal {L}_\\mathcal {V}|)!$ , which guarantees that all possible sequences of vertices and edges to be covered will be tested.", "The main practical limitation of the proposed methods is nonintuitive path planning, in particular in the presence of multiple obstacle areas.", "One may argue that even if the returned sequence of vertices is path-length optimal, it may still not be implementable.", "As long as tractors do not follow path plans fully automatically, for example by a method similar to that of [19], fully optimised path plans may not be practical.", "This is because the vehicle driver
has to be “glued” to a navigation screen and audio commands to follow a nonintuitive path plan while simultaneously concentrating on staying on track, which may be stressful for the driver.", "However, it is emphasised that this aspect is explicitly addressed and mitigated by the design of Algorithms REF and REF through the enforcement of pattern-like field coverage whenever applicable, which (i) maintains optimality, since it is based on $\\mathcal {G}^{\\prime }$ , and (ii) can favourably be translated into consistent rule-based driving instructions, at least for all convex fields, such that (iii) nonintuitive paths remain only for field portions with multiple obstacle areas." ], [ "Conclusions", "This paper discussed optimal in-field routing for full and partial field coverage of arbitrary non-convex fields with multiple obstacle areas.", "Nine different in-field routing tasks were distinguished: two for full field coverage, seven for partial field coverage and one for shortest path planning between any two vertices of the transition graph.", "A distinction was made between equal and different start and end vertices for a task, and between coverage of all edges (full field coverage) and coverage of only a subset of vertices, a subset of edges, or combinations thereof (partial field coverage).", "A key question is how to efficiently combine the coverage of headland and island headland edges with the coverage of all interior edges.", "Starting from an Eulerian graph augmentation, the proposed algorithms encourage a particular circular, pattern-like coverage whenever applicable without compromising optimality, so that field coverage is consistent for convexly shaped fields.", "For arbitrary non-convex fields with multiple obstacle areas, the resulting path guidance is no longer intuitive; the path length, however, is optimal.", "The proposed methods are primarily developed for spraying and fertilising applications with larger working widths for in-field
operating vehicles.", "The handling of subfields connected to the main field was discussed.", "The proposed solution for partial field coverage starts from the solution for full field coverage in order to comply consistently with the transitions between interior and headland edges, accounting for the limited turning radii of agricultural vehicles and thereby ensuring that no new tractor tracks are generated, in view of compacted-area minimisation.", "For partial field coverage, the benefit of employing a tabu list in the solution algorithm for improved exploration was highlighted.", "The proposed methods were illustrated by means of six experiments on three real-world fields, with a focus on demonstrating low computation runtimes; sequences of vertices were stated explicitly to emphasise aspects of the presented path planning methods." ] ]
1906.04264
[ [ "Proposition d'une nouvelle approche d'extraction des motifs ferm\\'es\n fr\\'equents" ], [ "Abstract This work was carried out as part of a master's thesis project.", "The increase in the volume of data has given rise to various issues related to the collection, storage, analysis and exploitation of these data in order to create added value.", "In this thesis, we are interested in mining frequent closed patterns from transaction databases.", "One way to process such data is to partition the search space into subcontexts, and then explore the subcontexts simultaneously.", "In this setting, we propose a new approach for extracting frequent closed itemsets.", "The main idea is to update the frequent closed patterns together with their minimal generators by applying a partitioning strategy to the initial extraction context.", "Our new approach, called UFCIGs-DAC, was designed, implemented and evaluated on benchmark datasets.", "The main originality of this approach is the simultaneous exploration of the search space through the update of the frequent closed patterns and the minimal generators.", "Moreover, our approach can be adapted to any algorithm that extracts frequent closed patterns together with their minimal generators." ], [ "Acknowledgements", " It is with great pleasure that I reserve this page to express all my gratitude to those who helped me carry out this work, which allowed me to grow professionally.", "I first wish to express my most sincere thanks to my thesis supervisor, Mr. Sadok BEN YAHYA, Professor at the Faculté des Sciences de Tunis, Université de Tunis El Manar, who was always at my side to guide me.", "He allowed me to deepen my work as much as possible, so that I can be proud today of what has been achieved.", "This work would not have seen the light of day without the help and constant encouragement of Mrs. Souad BOUASKER.", "Working with her, I benefited greatly from her sharpness of analysis, both in the development of scientific ideas and in the management of the practical aspects inherent to any research activity, as well as from her joint supervision and her availability.", "I also especially thank all the staff and teachers of the Faculté des Sciences de Tunis, and in particular the Computer Science department, for their involvement throughout my years of training.", "It is thanks to them that I acquired valuable knowledge during my time at the FST.", "My sincere thanks also go to the members of the jury for the honour they do me in agreeing to judge my work.", "To my parents: no tribute could match the love with which they never cease to shower me.", "May God grant them good health and a long life.", "To my very dear sister Oumaima, in memory of a childhood whose best and most pleasant moments we shared.", "To my dear little brother Taher, for the warm and spontaneous atmosphere with which you surrounded me, I dedicate this work to you.", "I wish you every success and happiness.", "To everyone who supported me during this project, closely or from afar.", "And to all those whose names I have not mentioned but who are no less dear to me.", "General introduction: the notion of \"Knowledge Discovery from Databases (KDD)\", in French \"Extraction de Connaissances à partir de Données (ECD)\", was initially introduced in the early 1990s [49], [21].", "Knowledge Discovery from Databases is a discipline that brings together the fields of databases, statistics, artificial intelligence and human-computer interaction [20].", "The main objective of this discipline is to discover knowledge that is new, relevant and hidden by the sheer volume of data.", "Achieving this objective requires the design and development of methods for extracting the essential, hidden information, which experts then interpret in order to turn it into knowledge useful for decision support.", "Fayyad [35] describes the process of knowledge discovery from databases as an iterative process composed of several steps.", "This process attracts strong industrial interest, notably for its very broad field of application, its relatively low implementation cost, and above all the help it can bring to decision making.", "The KDD process can be broken down into five main steps, as illustrated in Figure REF .", "The process starts by selecting the subset of the data that may actually be of interest.", "Then comes the preprocessing step, which aims to correct missing or erroneous data.", "Next, the data must be transformed so that they can be used by the chosen algorithm.", "The algorithm generates a number of patterns that must be interpreted in order to finally obtain new knowledge.", "Figure: Steps of the KDD process. The main step in the KDD process is the data mining step.", "It is the most complex part from an algorithmic point of view.", "Many methods exist, combining statistics, mathematics and computer science, from linear regression to frequent pattern mining.", "One of the objectives frequently pursued in data mining is ease of interpretation of the extracted knowledge.", "Our master's thesis falls within the scope of data processing.", "In this work, we are interested in mining frequent closed patterns from databases.", "Indeed, the increase in the volume of data has given rise to various problems related to the collection, storage, analysis and exploitation of these data in order to create added value.", "One way to process such data is to partition the search space into subcontexts and then explore these subcontexts simultaneously.", "Within our research, we propose an approach for extracting frequent closed patterns together with their associated minimal generators from transaction databases.", "The main idea of our approach is to update the frequent closed patterns and their minimal generators by applying a partitioning strategy to the initial extraction context.", "The results of our research are summarised in this thesis, which consists of four chapters.", "The first chapter introduces the basic notions that will be used throughout this work.", "These include the preliminary notions related to frequent pattern mining and to Formal Concept Analysis (FCA).", "The second chapter describes the state of the art of sequential approaches for mining frequent patterns and frequent closed patterns.", "There, we also study and analyse parallel mining approaches.", "In the third chapter, we introduce our new approach UFCIGs-DAC for mining frequent closed patterns together with their minimal generators.", "We present the methodology and the workflow of our approach.", "The fourth chapter presents the experimental studies carried out on benchmark test datasets.", "This study covers two main axes.", "The first axis concerns the comparison of the running time of our algorithm.", "The second axis concerns the number of frequent closed patterns extracted by our approach UFCIGs-DAC.", "Finally, we close this thesis with a general conclusion summarising the results of our work, together with a number of perspectives for future research." ], [ "Introduction", "The objective of this chapter is to define the basic notions that constitute the starting point for the presentation of our approach.", "From this point of view, the first section is devoted to the concepts needed for pattern mining.", "In the second section, we continue with the presentation of the Formal Concept Analysis ($\\mathit {ACF}$ ) method." ], [ "Pattern mining", "We begin by defining the set of basic notions related to pattern mining that will be used throughout this work.", "Let us first define the basic setting of pattern mining, which operates on a transaction database."
], [ "Transaction database", "A transaction database (also called an extraction context) is defined as a triple $\\emph {D} = (\\mathit {T , I, R})$ where:", "$T$ is a finite set of transactions (or objects).", "$\\mathit {I}$ is a finite set of items (or attributes).", "$\\mathit {R}$ is a binary relation $\\mathit {R} \\subseteq \\mathit {T} \\times \\mathit {I}$ between the transactions and the items.", "A pair (t, i) $ \\in $ $\\mathit {R}$ denotes the fact that transaction $t \\in \\mathit {T}$ contains item $i \\in \\mathit {I} $ .", "Example: an example of a transaction database $\\emph {D} = (\\mathit {T , I, R})$ (resp. extraction context $\\mathit {K} = (\\mathit {O , I, R})$ ) is given in Table REF .", "In this database (resp. context), the set of transactions is $\\mathit {T=\\lbrace 1, 2, 3, 4, 5, 6\\rbrace }$ (resp. the set of objects $ O = \\lbrace 1, 2, 3, 4, 5, 6\\rbrace $ ) and the set of items is $ I = \\lbrace A, B, C, D, E\\rbrace $ .", "The pair $(2, B) \\in R$ holds because transaction $2 \\in T $ contains item $ B \\in I $ .", "Table: NO_CAPTION table (Transaction database) $\\mathit {D} , T = \\lbrace 1, 2, 3, 4, 5, 6\\rbrace \\ and \\ I = \\lbrace A, B, C, D, E\\rbrace $ The table below also represents the transaction database $\\mathit {D}$ in binary form.", "Table: NO_CAPTION table Binary representation of the transaction database $\\mathit {D}$ For the sake of precision, we note that the notations for transaction databases and extraction contexts are used interchangeably in the following.", "Both will be denoted $\\emph {D} = (\\mathit {T , I, R})$ .", "A pattern, also called an itemset, is a non-empty subset of $\\mathit {I}$ , where $\\mathit {I}$ denotes the set of items.", "A transaction t $\\in \\mathit {T}$ , with an identifier commonly denoted TID (Tuple IDentifier), contains a non-empty set of items of $\\mathit {I}$ .", "A subset I of $ \\mathit {I}$ with k = $\\left| I \\right| $ is called a k-pattern, or simply a pattern, where k is the cardinality of I.", "The number of transactions t of a database $\\mathit {D}$ containing a pattern $\\mathit {I}$ , $\\left| \\left\\lbrace t\\in D\\mid I\\subseteq t \\right\\rbrace \\right|$ , is called the absolute support of I and denoted Supp ($\\wedge I$ ) in the following.", "A pattern lattice [10] is a conceptual, hierarchical grouping of patterns.", "It is also called the set-inclusion lattice.", "In the pattern lattice, the powerset of $ \\mathit {I}$ is ordered by set inclusion.", "The pattern lattice associated with the context given in Table REF is shown in Figure REF .", "Several measures are used to assess the interest of a pattern; the best known are presented in Definition 4.", "In the transaction database shown in Table REF , $ t_1$ contains the patterns $\\phi $ , a, c, d, ac, ad, cd and acd.", "Among the global set of $2^{\\mid \\mathit {I}\\mid }$ patterns, we look for those that appear frequently.", "To this end, we introduce the notions of Galois connection and of the support of a pattern.", "The support of a pattern $\\mathit {I}$ is the ratio of the cardinality of the set of transactions containing all the items of $\\emph {I}$ to the cardinality of the set of all transactions.", "It captures the reach of the pattern by measuring its frequency of occurrence: support$(\\textit {I})=\\frac{{}\\mid \\psi (\\textit {I})\\mid }{\\mid (\\textsl {O})\\mid }$ .", "We distinguish three types of support for $\\mathit {I} $ :", "Conjunctive support: $Supp(\\wedge \\textit {I} )=\\mid \\lbrace t\\in T\\mid \\forall i \\in \\textit {I} :(t,i) \\in \\mathfrak {R}\\rbrace \\mid $ ", "Disjunctive support: $Supp(\\vee \\textit {I} )=\\mid \\lbrace t\\in T\\mid \\exists i \\in \\textit {I} :(t,i) \\in \\mathfrak {R}\\rbrace \\mid $ ", "Negative support: $Supp(\\lnot \\textit {I} )=\\mid \\lbrace t\\in T\\mid \\forall i \\in \\textit {I} :(t,i) \\notin \\mathfrak {R}\\rbrace \\mid .$ ", "Example: in the transaction database $D$ , we have Supp(A) = $ \\frac{\\left| \\lbrace 1,3,5\\rbrace \\right|}{6}=\\frac{1}{2} $ and Supp(CE) = $\\frac{\\left| \\lbrace 2,3,5,6\\rbrace \\right|}{6}=\\frac{2}{3}$ .", "The support is anti-monotone with respect to inclusion: if $\\mathit {I_1}$ is a sub-pattern of $\\mathit {I} $ $( \\mathit {I_1} \\subseteq \\mathit {I}) $ , then Support $(\\mathit {I_1})$ $ \\ge $ Support$(\\mathit {I})$ .", "The support measures the frequency of a pattern: the higher it is, the more frequent the pattern.", "Frequent patterns are distinguished from infrequent ones by means of a minimal conjunctive support threshold, Minsupp.", "In the following, when there is no risk of confusion, the conjunctive support is simply called the support.", "Let $\\emph {D} = (\\mathit {T , I, R}) $ be an extraction context.", "Let $\\phi $ be the mapping from the powerset of $\\emph {O}$ (i.e., the set of all subsets of $\\emph {O}$ ), denoted $P(\\emph {O})$ , to the powerset of $\\emph {I}$ , denoted $P(\\emph {I})$ .", "The mapping $\\phi $ associates with a set of objects O $ \\subseteq $ $\\emph {O}$ the set of items i $\\in \\emph {I}$ common to all objects o $\\in \\emph {O}$ : $ \\phi :\\emph {P}(\\emph {O})\\rightarrow \\emph {P}(\\emph {I})$ $ \\phi (O) =\\lbrace i \\in \\emph {I}\\mid \\forall o\\in O,(o,i)\\in \\emph {R}\\rbrace $ ", "Let $\\Psi $ be the mapping from the powerset of I to the powerset of $\\emph {O}$ .", "It associates with a set of items $\\mathit {I}$ $\\subseteq $ $\\emph {I}$ the set of objects o $\\in $ $ \\emph {O}$ common to all items i $\\in $ $\\mathit {I}$ : $ \\Psi :\\emph {P}(\\emph {I})\\rightarrow \\emph {P}(\\emph {O})$ $ \\Psi (I) =\\lbrace o \\in \\emph {O}\\mid \\forall i\\in \\mathit {I},(o,i)\\in \\emph {R}\\rbrace $ ", "$\\phi $ (T) denotes the set of all items common to a group of transactions T (the intension), and $ \\psi $ (I) the set of all transactions sharing the same items of I (the extension).", "The pair $(\\psi , \\phi ) $ defines a Galois connection between I and T.", "For example, in the database of Table 2.2, we have $\\phi $ $(\\lbrace 4, 6\\rbrace ) = \\lbrace B, E\\rbrace $ and $\\psi $$(\\lbrace A, C\\rbrace ) = \\lbrace 1, 3, 5\\rbrace $ .", "This means that the set of transactions {4, 6} shares the set of attributes $ \\lbrace B, E\\rbrace $ .", "In the same way, the set of attributes $\\lbrace A, C\\rbrace $ is shared by the set of transactions $\\lbrace 1, 3, 5\\rbrace $ .", "The following definition presents the frequency status of a pattern, frequent or infrequent, given a minimal support threshold.", "Given a transaction database $\\mathit {D} = (\\mathit {T , I, R})$ and a minimal conjunctive support threshold minsupp, a pattern I $\\subseteq $ $ \\mathit {I}$ is said to be frequent if Supp($\\wedge $$\\mathfrak {I} $ ) $ \\ge $ minsupp.", "Otherwise, I is said to be infrequent or rare.", "Example 3: in Table REF , setting minsupp = $ \\frac{3}{6}$ , we obtain that $ \\lbrace A\\rbrace $ and $\\lbrace BC\\rbrace $ are frequent, whereas the pattern $\\lbrace ABC\\rbrace $ is infrequent (rare)."
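The Galois operators and the three support variants above can be sketched in a few lines (a hedged illustration: the six-transaction database is reconstructed so as to be consistent with the supports quoted in the text, e.g. $\psi(\{A\}) = \{1,3,5\}$ and $\psi(\{C,E\}) = \{2,3,5,6\}$; all names are illustrative).

```python
# Example database, reconstructed to match the supports quoted in the text
D = {1: set("ACD"), 2: set("BCE"), 3: set("ABCE"),
     4: set("BE"), 5: set("ABCE"), 6: set("BCE")}

def psi(itemset):
    """Extension: transactions containing every item of `itemset`."""
    return {t for t, items in D.items() if set(itemset) <= items}

def phi(tids):
    """Intension: items common to every transaction in `tids`."""
    return set.intersection(*(D[t] for t in tids))

def supp_conj(itemset):   # fraction of transactions containing all the items
    return len(psi(itemset)) / len(D)

def supp_disj(itemset):   # fraction containing at least one of the items
    return len({t for t, i in D.items() if set(itemset) & i}) / len(D)

def supp_neg(itemset):    # fraction containing none of the items
    return len({t for t, i in D.items() if not (set(itemset) & i)}) / len(D)
```

For instance, `supp_conj({"A"})` yields 1/2 and `phi({4, 6})` yields {"B", "E"}, matching the worked examples in the text.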
], [ "Formal Concept Analysis", "Formal Concept Analysis (FCA), a mathematical classification method, was introduced in [7]; this approach was popularised by Wille, who used the Galois lattice [55] as the basis of FCA [57].", "This method aims to discover and hierarchically organise the possible groupings of elements that share common characteristics.", "The notion of Galois lattice is used as the basis of FCA.", "Each element of the lattice is considered a formal concept, and the graph (Hasse diagram) a generalisation-specialisation relation between concepts." ], [ "Formal concept", "Let $ (\\mathit {T , I, R})$ be a formal context.", "A formal concept is a pair (A, B) such that $ A \\subseteq T$ , $B \\subseteq I$ , $A^{^{\\prime }} = B \\ and \\ B^{^{\\prime }} = A.$ ", "A and B are respectively called the extent and the intent of the formal concept (A, B).", "A closed pattern is the intent of a formal concept, and its support is the cardinality of the extent of that concept.", "The powerset of I is partitioned into disjoint subsets, also called equivalence classes; the elements of each class share the same closure.", "Let $I \\subseteq \\mathit {I}$ ; the equivalence class of $I$ , denoted $\\left[ I \\right]$ , is $ \\left[I \\right] = \\lbrace I_1 \\subseteq \\mathit {I}\\mid \\lambda (I) = \\lambda (I_1)\\rbrace $ .", "The elements of the equivalence class $\\left[ I \\right]$ therefore have the same support value.", "Figure: The associated Iceberg lattice for minsupp = 2.", "Example: given the extraction context of Table REF , Figure REF shows the associated Iceberg lattice for minsupp = 2.", "Each node in this lattice represents an equivalence class.", "It contains a frequent closed pattern together with its support, and is labelled with the associated minimal generators.", "Two equivalence classes are said to be comparable if their associated closed patterns can be ordered by set inclusion; otherwise they are said to be incomparable.", "The definition of an equivalence class leads us to that of a minimal generator.", "An itemset g $ \\subseteq \\mathfrak {I} $ is a minimal generator [10] of a closed itemset $\\textit {I}$ if and only if $\\lambda (g) = \\textit {I}$ and $ \\nexists $ g' $ \\subset $ g such that $\\lambda (g^{\\prime }) = \\textit {I}$ .", "Example: in the database $D$ of Table REF , the itemset $\\lbrace AB\\rbrace $ is a minimal generator of $\\lbrace ABCE\\rbrace $ , since $\\lambda $$(\\lbrace AB\\rbrace )$ = $\\lbrace ABCE\\rbrace $ and none of its proper subsets has the itemset $\\lbrace ABCE\\rbrace $ as its closure.", "Thus, a frequent closed pattern appears in the same set of objects, and therefore has the same support, as its generators.", "It represents a maximal set of shared items, whereas its minimal generators are the smallest elements describing the set of objects.", "Hence, every pattern necessarily lies between a minimal generator and a closed pattern.", "We now focus on important structural properties of the set of closed patterns and of the set of minimal generators.", "Given an extraction context K, the set of formal concepts $ C_K$ extracted from K is a complete lattice $\\mathcal {L}_{C_K} = (C_K, \\le )$ , called the concept lattice (or Galois lattice), when the set $C_K$ is equipped with the set-inclusion relation between patterns: for two formal concepts $ c_1 = (O_1, I_1) \\ and \\ c_2 = (O_2, I_2) $ , $c_1 \\le c_2$ if $ I_1 \\sqsubseteq I_2 $ .", "Besides the minimal frequency constraint expressed by the threshold minsupp, other constraints can be integrated into the pattern mining process.", "These constraints come in different types, the two main ones being defined in what follows [31].", "In the pattern mining process, two main constraints are defined in [47] as follows:", "A constraint Q is anti-monotone if $\\forall \\mathit {I} \\subseteq \\emph {I},\\forall \\mathit {I_1} \\subseteq \\mathit {I}: \\mathit {I} \\ satisfies \\ \\emph {Q} \\Rightarrow \\mathit {I_1} \\ satisfies \\ \\emph {Q} $ ", "A constraint Q is monotone if $\\forall \\mathit {I} \\subseteq \\emph {I},\\forall \\mathit {I_1} \\supseteq \\mathit {I}: \\mathit {I} \\ satisfies \\ \\emph {Q} \\Rightarrow \\mathit {I_1} \\ satisfies \\ \\emph {Q} $ ", "Example: let P ($\\mathit {I}$ ) be the set of all subsets of $\\mathit {I}$ .", "In what follows, we introduce the dual notions of order ideal and order filter [50], defined on P ($\\mathit {I}$ ).", "Let P(I) be the set of all subsets S of I; the notion of order ideal is introduced in [39].", "S $\\in $ P(I) is an order ideal if it satisfies the following properties:", "If $\\mathit {I} \\in \\emph {S} , then \\ \\forall \\mathit {I_1} \\subseteq \\mathit {I} :\\mathit {I_1} \\in \\emph {S}$ ", "If $\\mathit {I} \\notin \\emph {S} , then \\ \\forall \\mathit {I_1} \\supseteq \\mathit {I} :\\mathit {I_1} \\notin \\emph {S}$ ", "Dually, the notion of order filter is introduced in [39].", "S $\\in $ P(I) is an order filter if it satisfies the following properties:", "If $\\mathit {I} \\in \\emph {S} , then \\ \\forall \\mathit {I_1} \\supseteq \\mathit {I} :\\mathit {I_1} \\in \\emph {S}$ ", "If $\\mathit {I} \\notin \\emph {S} , then \\ \\forall \\mathit {I_1} \\subseteq \\mathit {I} :\\mathit {I_1} \\notin \\emph {S}$ ", "An anti-monotone constraint, such as the frequency constraint, induces an order ideal.", "Dually, a monotone constraint, such as the rarity constraint, forms an order filter.", "The set of patterns satisfying a given constraint is called a theory in [38].", "This theory is delimited by two borders, a positive border and a negative border, which are defined as follows.", "Given a minimum support threshold minsupp, the positive border [38] $ Bd^{+}$ consists of the maximal frequent patterns.", "$ Bd^{+}$ is the set of the largest frequent itemsets (with respect to inclusion), all of whose strict supersets are infrequent, and is defined as follows: $ Bd^{+}$ = $\\lbrace \\mathit {I} \\subseteq \\emph {I}\\mid supp(\\mathit {I}) \\geqslant minsupp, \\forall \\mathit {I_1} \\supset \\mathit {I}, supp(\\mathit {I_1}) < minsupp \\rbrace $ ", "Given a minimum support threshold minsupp, the negative border [38] $ Bd^{-}$ consists of the minimal infrequent patterns.", "$ Bd^{-}$ is the set of the smallest itemsets that are not frequent, all of whose strict subsets are frequent, and is defined as follows: $ Bd^{-}$ = $\\lbrace \\mathit {I} \\subseteq \\emph {I}\\mid supp(\\mathit {I}) < minsupp, \\forall \\mathit {I_1} \\subset \\mathit {I}, supp(\\mathit {I_1}) \\geqslant minsupp \\rbrace $ ", "The mappings $\\lambda $ = $\\Phi \\circ \\Psi $ and $\\sigma = \\Psi \\circ \\Phi $ are called the closure operators [50] of the Galois connection [22].", "For example, in the context $\\mathfrak {D}$ of Table 2.2, if $T = \\lbrace 3, 5\\rbrace $ , then $ \\phi (T) $ = $\\lbrace ABCE\\rbrace $ and thus $\\sigma (T)$ = $ \\lbrace 3, 5\\rbrace $ .", "And if $T = \\lbrace 1, 2, 3\\rbrace $ , then $\\phi (T) $ = {C} and thus $\\sigma (T)$ $= \\lbrace 1, 2, 3, 5, 6\\rbrace $ .", "If $O = \\lbrace AC\\rbrace $ , then $\\psi (O )$ $= \\lbrace 1, 3, 5\\rbrace $ and thus $\\lambda (O)$ $= \\lbrace AC\\rbrace $ .", "In these examples, the sets $ \\lbrace 3, 5\\rbrace $ and $\\lbrace AC\\rbrace $ are closed.", "The closure operator $ \\lambda $ , just like $\\sigma $ , is characterised by being:", "Isotone.", "Extensive.", "Idempotent.", "We now introduce the notions of frequent pattern and of closed pattern.", "Depending on their nature, frequent patterns come in two types:", "An itemset I $\\sqsubseteq $ I is closed if and only if $\\textit {I}= \\lambda (\\textit {I}) \\cite {pasquier1998pruning}$ .", "$\\textit {I} $ is a maximal set of items common to a set of objects [9].", "A closed itemset I is frequent if and only if its support, denoted support(I) $=\\frac{{}\\mid \\psi (\\textit {I})\\mid }{\\mid (\\textsl {O})\\mid }$ , is $\\ge $ minsup (i.e., the minimal support threshold).", "Example: in the database of Table 2.2, the patterns $\\lbrace AB\\rbrace $ , $\\lbrace ABC\\rbrace $ and $\\lbrace ABCE\\rbrace $ belong to the same equivalence class.", "Hence, $\\lbrace ABCE\\rbrace $ is the closed itemset.", "A frequent pattern is said to be maximal if none of its immediate supersets is frequent.", "Example: the following figure illustrates the relation between frequent, frequent closed and maximal frequent patterns:", "Figure: Relation between frequent, frequent closed and maximal frequent patterns.", "The patterns framed with thin lines are not frequent; the others are.", "The patterns framed with thicker lines are closed.", "The patterns framed with thicker, coloured lines are maximal.", "$ \\ Maximal \\ patterns \\subset \\ Closed \\ patterns \\subset \\ Frequent \\ patterns$ " ], [ "Conclusion ", "In this chapter, we presented the set of basic notions related to pattern mining that we will use in the following chapters.", "In addition, we focused on the structural properties that are important for Formal Concept Analysis.", "The next chapter is devoted to the presentation of the state of the art of approaches for mining frequent patterns and frequent closed patterns."
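The closure operator $\lambda = \phi \circ \psi$ and the minimal-generator test can be sketched directly on the running example (a hedged illustration: the database is reconstructed to be consistent with the worked examples, where $\lambda(\{A,B\}) = \{A,B,C,E\}$; all names are illustrative).

```python
from itertools import combinations

# Example database, reconstructed from the worked examples in the text
D = {1: set("ACD"), 2: set("BCE"), 3: set("ABCE"),
     4: set("BE"), 5: set("ABCE"), 6: set("BCE")}

def closure(itemset):
    """lambda = phi o psi: items common to all transactions containing `itemset`."""
    tids = [t for t, items in D.items() if set(itemset) <= items]
    return set.intersection(*(D[t] for t in tids)) if tids else set()

def is_minimal_generator(g):
    """g is a minimal generator of its closure iff no proper subset
    of g has the same closure."""
    c = closure(g)
    return all(closure(sub) != c
               for k in range(len(g))
               for sub in combinations(sorted(g), k))
```

Here `closure({"A", "B"})` returns {"A", "B", "C", "E"} and `is_minimal_generator({"A", "B"})` is True, while {"A", "B", "C"} is not minimal, since its subset {"A", "B"} already has the same closure.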
], [ "Introduction", "Within the scope of this thesis, we devote this chapter to presenting and studying the state-of-the-art approaches related to our problem.", "The first section of this chapter presents the sequential approaches for mining frequent patterns and frequent closed patterns.", "The second section presents the parallel approaches for mining frequent patterns and frequent closed patterns.", "The third section gives a synthesis that classifies the various parallel mining approaches for frequent patterns and frequent closed patterns according to their strategies." ], [ "Sequential algorithms for mining frequent patterns", "A naive approach to finding all the frequent patterns in a database $\\emph {D}$ is simply to compute the support of every combination of items in $\\emph {D}$ , and then keep only the items/patterns that satisfy a minimum support threshold (MinSup).", "Frequent-pattern mining algorithms fall into two search strategies: breadth-first [2] and depth-first [32].", "Breadth-first strategy [2]: all candidate k-itemsets are generated by a self-join of the set of frequent $(k-1)$ -itemsets; the lattice of patterns is traversed breadth-first, level by level.", "The first level, $ L_1$ , is initialized with the set of frequent items [51].", "Each level $L_k$ is built by combining patterns from the previous level, $L_{k-1}$ .", "Depth-first strategy [32]: this method enumerates the frequent patterns in a predefined order (for instance, lexicographic order).", "Hybrid strategy [25]: the itemset lattice is explored depth-first, but only one itemset is generated at a time." ], [ "The Apriori algorithm", "The Apriori algorithm was proposed in [36].", "It focuses on reducing disk I/O while mining frequent patterns.", "To do so, Apriori exploits the anti-monotonicity property: if an item/itemset is not frequent, then none of its supersets can be frequent.", "Apriori extracts frequent patterns in two steps: an item-combination step and a pruning step.", "To extract the frequent patterns, Apriori scans the database $D$ and builds a candidate list $C_1$ of items of size 1; the algorithm then filters $C_1$ , keeping only the items that satisfy the minimum support $MinSup$ , and stores them in a frequent list $F_1$ .", "From $F_1$ , the algorithm generates the size-2 candidate patterns in a list $C_2$ , by combining all pairs of frequent size-1 items of $F_1$ .", "Apriori then scans $D$ and stores in a list $F_2$ all the patterns of $C_2$ that satisfy the support $MinSup$ .", "This mining process is repeated until no candidate pattern remains to be checked in $D$ ."
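The level-wise scheme just described can be condensed into a short Python sketch (illustrative toy code, not the implementation of [36]; supports are recomputed by rescanning the transactions for clarity):

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise mining: scan, keep frequent candidates, self-join, prune."""
    transactions = [frozenset(t) for t in transactions]
    support = lambda c: sum(1 for t in transactions if c <= t)
    # C1: the distinct 1-itemsets of the database
    level = list({frozenset([i]) for t in transactions for i in t})
    frequent = {}
    while level:
        # scan step: keep only candidates meeting the minimum support (F_k)
        fk = {c: s for c in level if (s := support(c)) >= minsup}
        frequent.update(fk)
        # self-join F_k with itself, then anti-monotone pruning:
        # every k-subset of a (k+1)-candidate must itself be frequent
        level = list({a | b for a, b in combinations(fk, 2)
                      if len(a | b) == len(a) + 1
                      and all(frozenset(s) in fk
                              for s in combinations(a | b, len(a)))})
    return frequent
```

On the classic five-transaction example used later in this chapter, `apriori` returns every frequent itemset with its support, e.g. {B, E} with support 4.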
], [ "The Eclat algorithm", "The Eclat algorithm, introduced by Zaki in [61], carries out the frequent-pattern mining process entirely in memory, without disk access.", "The algorithm stores in memory a transaction-identifier list (TID-list) for each item of the database.", "To compute the support of a pattern $I$ , Eclat intersects the TID-lists of all the items of $I$ .", "Eclat searches for frequent itemsets depth-first and relies on the notion of equivalence classes.", "Two k-itemsets belong to the same equivalence class if they share a common prefix of size (k-1); for example, ABC and ABD belong to the same equivalence class." ], [ "The FP-Growth algorithm", "FP-Growth (Frequent-Pattern Growth) [32] is considered one of the best-performing algorithms for mining frequent itemsets.", "The algorithm first compresses the database into a compact structure called an FP-tree (Frequent Pattern tree), which addresses the problem of mining frequent patterns in a large transactional database.", "Unlike the techniques mentioned above, FP-Growth does not rely on any candidate-generation scheme.", "FP-Growth performs two passes (scans) over the transaction database: Pass 1: the first pass over the database $\\emph {D}$ computes the support of each item in $\\emph {D}$ .", "The algorithm keeps only the frequent items, i.e., those whose support reaches the prescribed threshold (MinSup), in a list called F-List, which is then sorted by decreasing support.", "Pass 2: an FP-Tree is built by creating an empty root and performing a second scan of the database, in which each transaction is rewritten following the item order given by F-List.",
"Each node of the FP-Tree represents an item of L, and each node carries a counter (a support count, initialized to 1).", "If a transaction shares a common prefix with another transaction, the support count of each visited node is incremented by 1.", "To ease the traversal of the FP-Tree, a header table is built so that each item points to its occurrences in the tree through a chain of node-links.", "Finally, the FP-Tree is mined by building conditional (sub-)pattern bases: for each length-1 fragment (suffix pattern), the set of prefixes appearing on its paths in the FP-Tree is extracted (the conditional pattern base).", "Each frequent itemset is obtained by concatenating the suffix with the frequent fragments extracted from the conditional FP-Trees (see Table REF).", "Table: NO_CAPTION table Example transaction database $\\emph {D}$ Table: NO_CAPTION table Items with their supports Figure: Construction of the FP-Tree Table: NO_CAPTION table Mining the FP-Tree" ], [ "The SON algorithm", "The $\\emph {SON}$ algorithm was introduced in [52].", "This algorithm mines frequent itemsets.", "The extraction principle of $\\emph {SON}$ rests on the fact that the set of all globally frequent itemsets (i.e., all the itemsets frequent in $\\emph {D}$ ) is included in the union of the sets of all locally frequent itemsets.", "To compute the set of frequent itemsets, $\\emph {SON}$ proceeds in two mining phases, as follows: Phase 1: split the input database $\\emph {D}$ into n data partitions, $\\emph {D}$$=\\lbrace P_1$ ,$P_2$ ,....,$P_n\\rbrace $ , in such a way that each $P_i$ of $\\emph {D}$ fits in the available memory.", "Then mine each data partition $ P_i $ in memory with respect to a local minimum support $\\textsl {LMinSup}$ (computed from the number of transactions in $ P_i $ and the given global minimum support $\\emph {GMinSup}$ ) using a specific $\\emph {FIM}$ algorithm (for instance, Apriori or one of its improvements).", "The first phase of $\\emph {SON}$ is thus devoted to producing a list $LF_I$ of locally frequent itemsets.", "Phase 2: this phase filters the locally frequent itemsets of $LF_I$ against the global minimum support $\\emph {GMinSup}$ .", "This step validates the global frequency of the locally frequent itemsets.", "$\\emph {SON}$ scans the entire database $\\emph {D}$ , checks the frequency of each locally frequent itemset of $LF_I$ , and then returns a list of globally frequent itemsets ($GF_I$ ), which is a subset of $LF_I$ , i.e., $ GF_I \\subseteq LF_I $ ."
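The two-phase scheme can be simulated sequentially in a few lines of Python (illustrative only; `mine_partition` is a deliberately naive stand-in for whatever FIM algorithm is run on each partition):

```python
from itertools import combinations

def mine_partition(partition, local_minsup):
    """Exhaustively mine one in-memory partition (toy stand-in for Apriori)."""
    found = set()
    for t in partition:
        for k in range(1, len(t) + 1):
            for cand in map(frozenset, combinations(sorted(t), k)):
                if sum(1 for u in partition if cand <= u) >= local_minsup:
                    found.add(cand)
    return found

def son(transactions, n_parts, global_minsup):
    size = max(1, len(transactions) // n_parts)
    parts = [transactions[i:i + size] for i in range(0, len(transactions), size)]
    # Phase 1: locally frequent itemsets (local threshold scaled per partition)
    candidates = set()
    for p in parts:
        local = max(1, round(global_minsup * len(p) / len(transactions)))
        candidates |= mine_partition(p, local)
    # Phase 2: one full scan validates global frequency
    return {c for c in candidates
            if sum(1 for t in transactions if c <= t) >= global_minsup}
```

Every globally frequent itemset is locally frequent in at least one partition, so Phase 2 only ever has to filter candidates, never to discover new ones.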
], [ "Discussion ", "In this subsection we discuss the sequential algorithms for mining frequent patterns.", "Table REF summarizes the characteristics of the studied approaches.", "The comparison covers the following axes: Exploration strategy: the strategy used to explore the frequent patterns [6].", "Characteristics: the salient features of the approach at hand.", "Comparing the algorithms described in this subsection, we can make the following remarks.", "Apriori: despite the anti-monotonicity property, Apriori's performance is proportional to the number of candidate itemsets that must be checked against the transaction database.", "AprioriTID and Eclat [2] improve on it by integrating the transaction identifiers (TIDs).", "FP-Growth: the major advantage of this algorithm is that it scans the transaction database only twice.", "Moreover, it can be considered a complete algorithm, since the FP-tree holds all the information about the frequent items, sorted by decreasing frequency; nevertheless, despite its compact structure, there is no guarantee that the whole FP-tree fits in main memory when the transaction database is very large.", "SON: since it performs only two database scans, SON shows better performance than Apriori.", "However, the main limitation of this algorithm lies in its first mining phase: if a data partition contains a large number of locally frequent patterns, the performance of the second phase suffers as well.", "table Comparative table of the frequent-pattern mining algorithms To address the problems faced by frequent-itemset mining algorithms, a new approach based on mining frequent closed itemsets appeared [11].", "This approach relies on the closure of the Galois connection [43]: it prunes the lattice of closed itemsets using the closure operators of the Galois connection.", "Several algorithms have been proposed in the literature to discover the frequent closed itemsets." ], [ "Sequential algorithms for mining frequent closed patterns", "Many algorithms have been proposed in the literature for the frequent-pattern mining problem, which was first introduced in [2].", "In this subsection we review the main sequential algorithms for mining frequent closed patterns.", "The strategies adopted to explore the search space fall into two families: the \"generate-and-test\" strategy and the \"divide-and-conquer\" strategy.", "The generate-and-test strategy [27]: algorithms adopting this strategy traverse the search space level by level.", "At each level k, a set of size-k candidates is generated.", "This candidate set is generally pruned by combining a statistical metric (the support) with heuristics essentially based on the structural properties of closed itemsets and/or minimal generators [12].", "The divide-and-conquer strategy [15]: algorithms adopting this strategy split the extraction context into sub-contexts and apply the closed-itemset discovery process recursively on these sub-contexts.", "This discovery process relies on pruning the context, essentially through a statistical metric and the heuristics introduced in [12]."
], [ "The Close algorithm", "The Close algorithm was first proposed in [9].", "It is an iterative algorithm that extracts the frequent closed itemsets by traversing, level by level, the set of generators of the frequent closed itemsets.", "During each iteration k, the algorithm considers a set $FFC_k$ of candidate k-generators.", "Each element of this set holds three fields: the candidate k-generator; the closure of the k-generator, which is a candidate closed itemset; and the support of the k-generator.", "At the end of iteration k, the algorithm stores a set $ FF_k $ containing the frequent k-generators, their closures (which are frequent closed itemsets) and their supports.", "Each iteration is thus composed of two steps.", "Pruning step: a function GEN-CLOSURE is applied to each generator of $FFC_k$ , computing its support and its closure.", "Construction step: after pruning the infrequent generators, a function GEN-GENERATOR uses the set of frequent closed itemsets $FF_k$ and computes the set $ FFC_{k+1}$ containing all the (k+1)-itemsets, which will be used in the next iteration.", "At this point, the set $ FFC_{k+1}$ is pruned as follows: every $ c\\in FFC_{k+1}$ that is included in the closure of one of its subsets, i.e., of the elements of $ FF_k$ whose join produced c, is removed from $FFC_{k+1}$ .", "The algorithm stops when there are no more generators to process.", "Example: Figure REF shows the execution of Close on the extraction context D with a minimum support threshold of $\\frac{2}{6}$ .", "Figure: Context D Figure: Mining the frequent closed itemsets with Close The set $FFC_1$ is initialized with the list of 1-itemsets of context D.", "The procedure Gen-Closure computes the closures of the 1-generators, which are the potential frequent closed itemsets, and their supports in $FFC_1$ .", "The candidates of $FFC_1$ that are frequent are inserted into the set $FF_1$ .", "The first phase of the procedure Gen-Generator applied to the set $FF_1$ generates six new candidate 2-generators in $FFC_2$ : {AB}, {AC}, {AE}, {BC}, {BE} and {CE}.", "The 2-generators {AC} and {BE} are removed from $FFC_2 $ by the third phase of Gen-Generator, because {AC} $\\subseteq \\gamma $ ({A}) and {BE} $\\subseteq \\gamma $ ({B}).", "The procedure Gen-Closure then computes the closures and supports of the 2-generators remaining in $FFC_2 $ ; the sets $FF_2 $ and $FFC_2 $ are identical because all the closed itemsets of $FFC_2$ are frequent.", "Applying Gen-Generator to the set $FF_2 $ generates the 3-generator {ABE}, which is removed because the 2-generator {BE} does not belong to $FF_2$ , and the algorithm stops."
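A minimal sketch of one Close iteration, on an assumed five-transaction toy context (the thesis' context D has six objects, but the pruning behaviour is the same): each generator's closure is the intersection of the transactions containing it (Gen-Closure), and a candidate is discarded when it is contained in the closure of one of its subsets (the pruning step of Gen-Generator).

```python
from itertools import combinations

TX = [frozenset(t) for t in ({"A", "C", "D"}, {"B", "C", "E"},
                             {"A", "B", "C", "E"}, {"B", "E"},
                             {"A", "B", "C", "E"})]

def gen_closure(gen):
    """Closure of a generator = intersection of the transactions covering it."""
    covering = [t for t in TX if gen <= t]
    return frozenset.intersection(*covering) if covering else frozenset()

# F_1: frequent 1-generators mapped to their closures (minsup = 2 here)
items = sorted({i for t in TX for i in t})
f1 = {frozenset([i]): gen_closure(frozenset([i]))
      for i in items
      if sum(1 for t in TX if i in t) >= 2}

# FFC_2: 2-generator candidates, pruned when included in a subset's closure
c2 = [a | b for a, b in combinations(list(f1), 2)
      if not any((a | b) <= f1[s] for s in (a, b))]
```

On this context the pruning discards exactly {AC} (included in the closure {A, C} of {A}) and {BE} (included in the closure {B, E} of {B}), mirroring the worked example above.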
], [ "The A-Close algorithm", "Among the first algorithms extracting frequent closed itemsets is A-Close [45].", "Compared with Close, after building a set of candidate k-generators from the minimal (k-1)-generators retained at iteration (k-1), A-Close removes from this set every candidate g whose support equals the support of one of its subsets of size (k-1).", "A-Close considers a set of candidate generators of a given size and computes their supports and closures with one scan of the context per iteration.", "The (frequent) closures of the frequent generators are the frequent closed itemsets extracted during the iteration.", "The candidate generators are built by combining the frequent generators extracted during the previous iteration.", "A-Close thus proceeds in two successive steps: it first determines all the frequent minimal generators, i.e., the smallest incomparable elements of the equivalence classes induced by the closure operator $\\gamma $ ; then, for each equivalence class, it determines the maximal element at the top of the hierarchy, i.e., the frequent closed itemset."
], [ "The LCM algorithm", "LCM (Linear time Closed itemset Miner) was proposed in [56].", "This algorithm is dedicated to mining closed itemsets.", "LCM stands out among backtracking algorithms in that it enumerates the set of frequent closed itemsets in linear time through a depth-first traversal, without exploring unnecessary frequent patterns, as illustrated in the example of Figure REF.", "To do so, it builds a traversal tree containing exactly the frequent closed patterns.", "Two techniques are used to speed up the updates of the pattern occurrences.", "Occurrence deliver: this technique simultaneously computes the occurrence sets of all the successors of the current pattern during one, and only one, scan of the current occurrence set.", "Diffsets: this technique was introduced in [60] to reduce the memory used by intermediate computations.", "LCM relies on an optimized traversal of the search space that exploits the notion of \"core prefix\", together with an order on the items (alphabetical order, for example).", "Intuitively, the core prefix of a closed itemset $I$ serves as an extension \"core\" to generate another closed itemset $\\mathit {I^{^{\\prime }}} $ : the core prefix of an itemset $I$ is the smallest prefix (with respect to the item order) that appears in every transaction in which $I$ appears.", "Example Figure: Example run of the LCM algorithm discovering the frequent itemsets" ], [ "The CLOSET algorithm", "The CLOSET algorithm was proposed in [48].", "This algorithm uses an advanced trie-based data structure, the FP-Tree [32].", "The particularity of this structure is that several transactions share a same path of length n in the FP-Tree when they have their first n items in common.", "CLOSET mines the frequent closed itemsets in two successive steps [48].", "Construction of the FP-Tree (as illustrated in the example of Figure REF): the items of each transaction are ordered by decreasing support after pruning the infrequent items.", "The FP-Tree is then built as follows: first, the root node is created and labelled \"root\".", "For each transaction of the context, the items are processed and a branch is created as needed.", "Each node of the FP-Tree structure carries a counter that keeps track of the number of transactions sharing that node.", "When a transaction has a prefix in common with a branch of the FP-Tree, the counter of every node of this prefix is incremented and a sub-branch is created with the remaining items of the transaction.", "Exploration of the FP-Tree: instead of a breadth-first exploration of the candidate closed itemsets, CLOSET partitions the search space and then explores it depth-first.", "It starts from the frequent 1-itemsets, sorted by increasing support, and examines only their conditional sub-contexts (or conditional FP-Trees).", "A conditional sub-context contains only the items that co-occur with the 1-itemset at hand.", "The associated conditional FP-Tree is built and the process continues recursively.", "Figure: Extraction context with its associated FP-Tree" ], [ "The TITANIC algorithm", "The TITANIC algorithm was proposed by Stumme et al. [10] to determine the frequent closed itemsets.", "The goal of this algorithm is to minimize the cost of closure computations, which it achieves through a counting-inference mechanism.", "Relying on the generate-and-test strategy, the same strategy as A-Close, TITANIC explores the search space level by level, starting from the empty set towards the patterns of size 1, then those of size 2, and so on.", "Moreover, this algorithm prunes with the statistical measure minsupp.", "TITANIC avoids the costly scan performed by A-CLOSE to verify its last pruning strategy: for each candidate g of size k, TITANIC stores, in a dedicated variable, its estimated support, i.e., the minimum of the supports of its subsets of size (k - 1); this estimate must differ from the real support of g, otherwise g is not minimal.", "This rests on the following lemma: Lemma 1: let X, Y $\\subseteq $ I.", "If X $\\subseteq $ Y and $ Supp(X) = Supp(Y )$ , then $\\Lambda $ (X) = $\\Lambda $ (Y )." ], [ "The Prince algorithm", "Prince was proposed in [29]; its main objective is to overcome the principal shortcomings of the algorithms dedicated to mining frequent closed patterns, namely the cost of closure computations and the fact that the partial-order relation is not built [30].", "Prince operates in three successive steps: determination of the minimal generators, construction of the minimal-generator lattice, and extraction of the generic rule bases.", "In the first step, Prince determines all the frequent minimal generators [27] together with the infrequent negative border.", "To do so, Prince traverses the search space level by level (and therefore by increasing size of the candidate minimal generators), discarding along the way every candidate g that cannot be a minimal generator."
], [ "The ZART algorithm", "ZART, a multifunctional itemset-mining algorithm, was proposed in [54].", "ZART offers a number of additional features and performs the following, usually independent, tasks.", "Counting inference: this part of ZART is based on Pascal, which uses the properties of counting inference.", "From a certain level on, all the generators have been found, so all the remaining frequent itemsets and their supports can be derived without any further pass over the transaction database.", "Identifying the frequent closed itemsets: this phase identifies the frequent closed itemsets among the frequent itemsets.", "By definition, a frequent pattern (itemset) is closed if it has no super-pattern with the same support.", "Associating the generators with their closures: when a frequent closed itemset is found, all its frequent subsets are already known.", "This means that its generators have already been computed; they only need to be identified."
], [ "Discussion", "Table REF summarizes the characteristics of the studied sequential algorithms for mining frequent closed patterns.", "The comparison covers the following axes: Exploration strategy: the strategy used to explore the patterns generated by the algorithm [58].", "Extracted patterns: the patterns produced as output by the algorithm.", "Characteristics: the salient features of the approach at hand.", "Comparing the algorithms described in this subsection, we can make the following remarks.", "CLOSE, A-CLOSE and TITANIC have the drawback of computing the same closure several times when it admits several minimal generators.", "The pruning strategies adopted by TITANIC improve on those of A-CLOSE: by using the estimated support of a candidate, TITANIC avoids the cost of the scans performed by A-CLOSE to compare the support of a candidate minimal generator of size k with the supports of its subsets of size (k-1).", "CLOSET avoids duplicated closure computations, while using the same pruning strategies.", "LCM: LCM differs from the other backtracking algorithms in the way it checks that an itemset is closed and in the way it extends a frequent closed itemset to generate a new frequent closed itemset [34].", "Prince: the main originality of PRINCE lies in its minimal-generator lattice structure, which maintains the partial order between the frequent closed patterns and their associated generators [29].", "ZART: a multifunctional itemset-mining algorithm; the idea introduced in ZART can be generalized, and can thus be applied to any itemset-mining algorithm.", "Table: NO_CAPTION table Comparative table of the sequential algorithms for mining frequent closed patterns" ], [ "Parallel itemset mining", "Despite the efficiency of many sequential algorithms, their performance degrades as the size of the data grows.", "To maintain performance, developing parallel and distributed algorithms [62] appears as a solution that can help speed up processing and reduce the memory footprint." ], [ "Parallel mining of frequent patterns", "In this subsection we review the main algorithms for mining frequent patterns in parallel." ], [ "The Parallel Apriori algorithm", "The Parallel Apriori algorithm is based on the Apriori algorithm [36].", "In a large, distributed environment, Parallel Apriori, the parallel version of Apriori, outperforms its sequential counterpart.", "Yet even with parallelism and a large number of available resources, Apriori retains the problems and limitations noted for its sequential implementation.", "In a massively distributed environment such as MapReduce [16], the number of jobs Apriori requires to extract the frequent itemsets is proportional to the size of the longest itemset.", "Consequently, with a very low minimum support and a large amount of data, the performance of Parallel Apriori is very poor.", "This is because the inner working of Apriori is based on a candidate generate-and-test approach that results in heavy disk I/O.", "Moreover, in a massively distributed environment, Apriori induces heavy data communication between the mappers and the reducers, especially when the minimum support tends to be very low." ], [ "The Parallel SON algorithm", "The SON algorithm is flexible and well suited to parallelization in a massively distributed environment.", "A parallel version of the SON algorithm was proposed in [36].", "The goal of Parallel SON is to mine the frequent itemsets under the MapReduce paradigm, using two jobs.", "First job: the database is split into sub-databases, which are mined in parallel by the mappers with a frequent-pattern mining algorithm and a local minsupp value; the mappers then send their results (the patterns frequent in their partitions) to the reducers.", "The reducers join the results, sum the values of each key (the patterns, following the SON algorithm), and write the results to the Hadoop distributed file system $\\emph {HDFS}$ .", "Second job: a classification step separates the patterns that are globally frequent from those that are only locally frequent." ], [ "The Parallel Eclat algorithm", "The Parallel Eclat algorithm was introduced in [61].", "This parallel version carries the same problems and limitations as its sequential implementation.", "In particular, a high number of frequent items leads to a large increase in the number of transaction identifiers (TIDs) to store.", "This drawback is severe in terms of memory capacity: the transaction-identifier lists may not fit in the available memory."
], [ "The PFP-Growth algorithm", "PFP-Growth, the parallel version of FP-Growth, was proposed in [37].", "PFP-Growth has been successfully applied to efficiently mine the frequent itemsets in large databases.", "The mining process of PFP-Growth runs in memory along the following lines.", "In its first MapReduce job, PFP-Growth performs a simple counting step to determine a list of frequent items.", "The second MapReduce job is dedicated to building an FP-tree, which is then mined during the \"Reduce\" phase.", "Example: consider a transaction database of five transactions over lowercase letters, with minsupp = 3.", "The first step of FP-Growth sorts the items within the transactions and removes the infrequent items; after this step, for instance, $\\mathit {T1}$ (the first transaction) is pruned from $\\lbrace f, a, c, d, g, i, m, p\\rbrace $ to $\\lbrace f, c, a, m, p\\rbrace $ .", "FP-Growth then compresses these \"pruned\" transactions into a prefix tree whose root is the most frequent item f.", "Each path of the tree represents a set of transactions that share the same prefix, and each node corresponds to an item.", "Each level of the tree corresponds to an item, and an item list is created to link all the transactions containing that item.", "Once the tree has been built, the subsequent pattern mining can be carried out.", "Figure: A simple example of distributed FP-Growth" ], [ "The PLCM algorithm", "The PLCM algorithm, the parallel version of LCM, was proposed in [42].", "The goal of PLCM is to mine the frequent closed patterns in parallel.", "To do so, PLCM adopts a model in which the threads communicate through a shared-memory space called the Tuple Space, to which they can add or from which they can remove tuples.", "The tuple space stores the tuples in N \"work banks\", where N is the number of threads in use.", "Assigning a work bank to each thread limits contention during calls to the put and get primitives.", "Each thread adds and consumes tuples in its own bank.", "When a thread's bank is empty, the tuple space hands it tuples from another bank.", "This is a form of work stealing, managed directly by the tuple space and transparent to the algorithm."
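The bank-per-thread tuple space with stealing can be mimicked with a lock and per-thread deques (a deliberately simplified sketch: real PLCM tuples carry LCM search-state, whereas here they are dummy integers):

```python
import threading
from collections import deque

N = 4  # number of threads = number of work banks
banks = [deque(range(i * 10, i * 10 + 10)) for i in range(N)]
lock = threading.Lock()
done = []  # processed tuples (list.append is atomic under the GIL)

def get(tid):
    """Take a tuple from the caller's own bank, stealing from another if empty."""
    with lock:
        if banks[tid]:
            return banks[tid].popleft()
        for b in banks:  # work stealing, transparent to the worker
            if b:
                return b.popleft()
    return None  # every bank is empty: no work left anywhere

def worker(tid):
    while (item := get(tid)) is not None:
        done.append(item)  # stand-in for processing one tuple

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A thread only touches foreign banks once its own is exhausted, which is exactly the contention-limiting property claimed for the PLCM tuple space.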
], [ "The DARCI algorithm", "The DARCI algorithm (\"Distributed Association Rule mining utilizing Closed Itemsets\") was proposed in [3].", "It relies on the CLOSET algorithm to extract the locally frequent closed itemsets.", "To find the globally closed itemsets, DARCI involves an exchange of the local supports of the locally frequent itemsets between the partitions.", "The algorithm operates in two phases.", "Phase 1: broadcast of the frequent closed itemsets.", "Phase 2: DARCI uses a technique called \"best scenario pruning\" to send the local supports to the other partitions: if an itemset (pattern) I is locally frequent in a partition $P_i$, it is broadcast as part of the locally frequent itemsets; but if I is not frequent in $P_i$, $P_i$ must decide whether or not to broadcast the local support of I.", "$P_i$ should not broadcast the support of I if I cannot be globally frequent even in the best possible scenario.", "$P_i$ broadcasts the support of I if $Sup_{best}(I) \\ge \\sigma $, where $\\sigma $ is the global minimum support and $Sup_{best}(I)$ is the best possible scenario for the global support of I:", "$ Sup(I) \\le Sup_{best}(I), \\ \\text{where} \\ Sup_{best}(I)=\\sum _{i:\\, I\\ \\mathrm {freq}} Supp^{i}(I) + \\sum _{i:\\, I\\ \\mathrm {nonfreq}} (minsupp \\cdot \\left| P^{i} \\right|-1)$" ], [ "Discussion", "Table REF presents a comparison between the different algorithms introduced above.", "The comparison is made along the following axes: base algorithm, type of extracted patterns, and parallel version. Table: NO_CAPTIONtableComparative table of the pattern-extraction algorithms" ], [ "Classification of the distributed algorithms for extracting frequent itemsets and frequent closed itemsets", "A first examination of these algorithms allows them to be classified according to the partitioning strategy, namely data partitioning and search-space partitioning.", "In this section we present the characteristics that highlight the major differences between the parallel and distributed algorithms reviewed above.", "Partitioning strategy: two partitioning strategies have been put forward, data partitioning and search-space partitioning.", "Exploration technique: there are two techniques for exploring the search space, namely \"generate and test\" and \"divide and conquer\".", "Base algorithm: the sequential algorithms used to extract the locally frequent itemsets and the locally frequent closed itemsets.", "Type of extracted patterns: the patterns produced as output. Table: NO_CAPTIONtable Classification of the parallel algorithms for extracting frequent closed itemsets" ], [ "Conclusion", "    In this chapter we discussed the problems related to the extraction of frequent itemsets and frequent closed itemsets.", "The study of the approaches proposed in the literature shows that the advantages and limitations of these pattern-extraction processes are tied in particular to the repeated scans of the database and to memory capacity.", "Typically, these limitations become a major challenge when the data volume is huge and the minimum support is very small, or when the patterns to be discovered are large.", "In this respect, we propose a new approach for extracting frequent closed itemsets together with their associated minimal generators."
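The best-scenario pruning rule used by DARCI, surveyed earlier in this chapter, can be sketched as follows. This is a minimal illustration, not DARCI itself: the partition sizes, the `local_supports` dictionary and the function names are assumptions of this sketch.

```python
import math

def sup_best(local_supports, partition_sizes, minsupp):
    """Best-case estimate Sup_best(I) of the global support of an itemset I.

    `local_supports` maps partition index -> absolute local support and
    contains only the partitions where I is locally frequent.  For every
    other partition, the best possible hidden support is
    ceil(minsupp * |P_i|) - 1, i.e. just below the local threshold.
    """
    best = 0
    for i, size in enumerate(partition_sizes):
        if i in local_supports:
            best += local_supports[i]              # known local support
        else:
            best += math.ceil(minsupp * size) - 1  # best non-frequent case
    return best

def should_broadcast(local_supports, partition_sizes, minsupp):
    """A partition broadcasts the support of I only if Sup_best(I) >= sigma,
    where sigma is the global absolute minimum support."""
    sigma = minsupp * sum(partition_sizes)
    return sup_best(local_supports, partition_sizes, minsupp) >= sigma
```

For two partitions of 100 transactions each with minsupp = 0.5, an itemset locally frequent only in $P_1$ with support 60 gives $Sup_{best} = 60 + 49 = 109 \ge 100$, so its support is broadcast; with local support 50 the best case is 99 and the support is pruned.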
], [ "Introduction", "To overcome the shortcomings of sequential algorithms, searching for frequent closed itemsets simultaneously by partitioning the search space appears to be an attractive solution.", "In this chapter, the first section introduces the principle of the proposed approach.", "The second section presents the overall architecture of our approach.", "The third section is devoted to the detailed design of our approach.", "Finally, the fourth section presents an illustrative example of our approach." ], [ "Principle of the approach", "In this section we introduce the \"divide and conquer\" paradigm, the principle on which our approach is based." ], [ "Presentation of the \"divide and conquer\" paradigm", "In computer science, divide and conquer is an algorithmic technique consisting of: Divide: split an initial problem into sub-problems.", "Conquer: solve the sub-problems.", "Combine: compute a solution to the initial problem from the solutions of the sub-problems.", "Divide and conquer is a method that sometimes yields efficient solutions to algorithmic problems.", "The idea is to split the initial problem, of size n, into several sub-problems of significantly smaller size, and then to recombine the partial solutions.", "Informally, a problem of size n is solved from the solutions of independent instances of the same problem of smaller size.", "Many problems can be solved in this way.", "In this respect, we divide the transaction database $\\emph {D}$ and then apply to each partition (sub-database or sub-context) of the transaction database an algorithm that extracts the frequent closed itemsets together with their minimal generators, producing for each partition a file containing the frequent closed itemsets with their minimal generators.", "Then the two algorithms we propose, UFCIGs and UFCIGs-pruning, are applied to combine the files pairwise.", "Indeed, these algorithms update the frequent closed itemsets together with their minimal generators." ], [ "Overall design of the UFCIGs-DAC approach", "Algorithms of this type are composed of two algorithms: the first splits the problem into sub-problems, while the second merges the partial results into a global result.", "The architecture of our approach therefore comprises two main phases.", "Phase 1: partitioning of the database and extraction of the frequent closed itemsets with their minimal generators.", "Phase 2: merging of the partial results, i.e. updating the frequent closed itemsets and the minimal generators. The diagram of this architecture is as follows: Figure: Overall design of the UFCIGs-DAC approach" ], [ "Detailed design of the approach", "In this section we detail the two phases outlined in the previous section." ], [ "Phase 1", "This phase prepares the inputs of the UFCIGs-DAC algorithm of the second phase.", "First, the transaction database $\\emph {D}$ is partitioned into $\\left| P \\right|= n $ transaction partitions (for example n=4): $\\emph {D} =\\lbrace P_1,P_2,P_3,P_4\\rbrace $ .", "Then, the process that extracts the frequent closed itemsets with their minimal generators is executed on each partition.", "Finally, the files $ F_1 ,F_2,F_3 \\ and \\ F_4 $ are respectively the extraction results of the partitions $P_1,P_2,P_3 \\ and\\ P_4$ ."
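Phase 1 above can be sketched in a few lines. The thesis uses a dedicated extractor (e.g. ZART) on each partition; the brute-force `closed_frequent_itemsets` miner below is only a stand-in for illustration, and the contiguous-split `partition` function is an assumption (any partitioning into n sub-contexts would do).

```python
from itertools import combinations

def partition(db, n):
    """Split the transaction database (a list of item sets) into n
    contiguous partitions P_1, ..., P_n."""
    k = (len(db) + n - 1) // n
    return [db[i * k:(i + 1) * k] for i in range(n)]

def closed_frequent_itemsets(db, minsupp):
    """Naive local extraction: return {closed itemset: absolute support}.
    (A stand-in for ZART; exponential, for tiny examples only.)"""
    items = sorted({i for t in db for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in db if set(cand) <= t)
            if s / len(db) >= minsupp:
                support[frozenset(cand)] = s
    # closed = no proper superset with the same support
    return {x: s for x, s in support.items()
            if not any(x < y and s == support[y] for y in support)}
```

Running the miner on each partition returned by `partition` yields the per-partition files $F_1, \dots, F_n$ of Phase 1.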
], [ "Phase 2", "After retrieving the files $ F_1, F_2, F_3 \\ and \\ F_4 $ , the UFCIGs algorithm is applied to update the frequent closed itemsets with their minimal generators.", "This process is carried out between each pair of files so as to output a single global result file." ], [ "Presentation of the UFCIGs-DAC algorithm", "The idea is to reduce the search space (the number of transactions) by partitioning the full transaction database into sub-databases.", "On each sub-database we apply an algorithm that extracts the frequent closed itemsets and the minimal generators; however, simply gathering the results into a single file without taking the closure and minimal-generator properties into account would certainly yield erroneous results.", "UFCIGs-DAC (\"Update of Frequent Closed Itemsets and their Generators\") is an algorithm that updates the frequent closed itemsets and the minimal generators following the \"Divide And Conquer\" strategy.", "The main objective of UFCIGs-DAC is to recombine each pair of partial solutions while respecting the notions of closure and minimal generator, together with their support values, already presented in the first chapter (\"basic notions\").", "We recall the following properties.", "Property 1: a pattern $I \\subseteq \\emph {I}$ is said to be frequent if its relative support, Supp($\\textit {I}$ )=$\\frac{\\left| \\psi (I) \\right|}{\\left| \\mathrm {O} \\right|}$ , exceeds a minimum threshold set by the user, denoted minsupp. Note that $\\left| \\psi (I) \\right|$ is called the absolute support of ($I$ ) [28].", "Property 2: an itemset (pattern) $I \\subseteq I$ is said to be closed if $ I =\\lambda (I)$ .", "The itemset $\\mathit {I}$ is a maximal set of items common to a set of objects [28].", "Property 3: an itemset g $ \\subseteq $$\\mathfrak {I}$ is a minimal generator of a closed itemset $\\textit {I}$ if and only if $\\lambda (g) = \\textit {I}$ and $ \\nexists $ g' $ \\subset $ g such that $ \\lambda (g^{\\prime }) = \\textit {I}$ [28]." ], [ "Description of the main algorithm of the UFCIGs-DAC approach", "The notations used are summarised in Table REF .", "Table: NO_CAPTIONtableNotations adopted in the UFCIGs-DAC algorithm.", "The main algorithm of our approach is presented in Algorithm REF and Figure REF .", "They describe the overall flow of our approach, i.e. the calls between its different algorithms.", "Figure: The main algorithm of the UFCIGs-DAC approach[H] Begin Partitionnement($Contexte\\_init$ ) ; BitSet-Rpr($Contexte\\_init$ ) ; IFF and GM extraction($P_1, P_2$ ) ; UFCIGs($F_1, F_2$ ) ; $UFCIGs\\_pruning$(Fres) ; End.", "Main UFCIGs-DAC" ], [ "Detailed description of the different algorithms of the UFCIGs-DAC approach", "In this subsection we describe, in detail and in order, the calls between the different components (\"algorithms\") of our approach.", "Partitioning algorithm: the Partitionnement algorithm partitions the initial context $Contexte\\_init$ and takes as input: $Context\\_init$ : the file of the initial context, i.e. the full transaction database.", "It produces the partitions: $P_1$ : the first partition (sub-context).", "$P_2$ : the second partition (sub-context).", "BitSet-Rpr algorithm: it builds on the $AprioriTID\\_BitSet$ algorithm [2], which computes the frequent itemsets in a transaction database.", "This algorithm uses bitsets as internal structures to represent the TIDs of the transactions.", "Indeed, the advantage of $AprioriTID\\_BitSet$ lies in the use of bitsets to represent the sets of transaction identifiers in a memory-efficient way, allowing the intersection of two sets of transaction identifiers (TIDs) to be computed efficiently with bit sets.", "We propose a BitSet-Rpr algorithm based on $AprioriTID\\_BitSet$; however, we focus on the representation of the item TIDs as bitsets.", "$BitSet-Rpr$ takes as input: $Context\\_init$ : the initial context, i.e. the full transaction database.", "It returns the file: F_Bitset: the file containing all the items of the initial context together with their TIDs.", "This makes it possible to compute the supports Supp-BitSet-Rpr and Supp-Abs in Algorithm 2.", "IFF and GM extraction algorithm: after partitioning the initial context, we apply a specific extraction algorithm (for example ZART [54]) to each partition $P_1$ and $P_2$ , obtaining the files $F_1$ and $F_2$ .", "Each of these files contains the frequent closed itemsets (IFFs, from the French \"itemsets fermés fréquents\") together with their associated minimal generators (GMs).", "UFCIGs algorithm: once the extraction of the partitions is finished, the UFCIGs algorithm starts updating the set of frequent closed itemsets and minimal generators.", "The UFCIGs algorithm, whose pseudo-code is given in Algorithms 2 and 3, takes as input: $CL_1$ : a list of objects of type ClosedSuppGen, i.e. objects holding the attributes Closed, Support and Generator, for file $F_1$ .", "$CL_2$ : a list of objects of type ClosedSuppGen holding the attributes Closed, Support and Generator, for file $F_2$ .", "UFCIGs traverses the two lists $CL_1$ and $CL_2$ as follows: if an IFF (closed itemset $c_1$ ) of the closed-itemset list Listclosed of $CL_1$ appears in the closed-itemset list Listclosed or in the minimal-generator list Listgm of $CL_2$ , then UFCIGs stores $c_1$ in the list LFres with a support value equal to the sum of the two supports.", "If an IFF (closed itemset $c_1$ ) of the Listclosed of $CL_1$ appears neither in the Listclosed nor in the Listgm of $CL_2$ , then UFCIGs computes its support ($Supp\\_BitSet\\_Rpr (c_1)$ ) with bitsets, i.e. by intersecting the TIDs represented in the file $F\\_BitSet$ .", "After computing the support $Supp\\_BitSet\\_Rpr(c_1)$ with bitsets, UFCIGs checks whether $Supp\\_BitSet\\_Rpr$ is greater than or equal to the absolute support of the initial context.", "This is explained by the fact that an itemset that is locally frequent and closed in a single partition may no longer be so globally.", "If $Supp\\_BitSet\\_Rpr(c_1) \\ge Supp\\_Abs$ , UFCIGs stores $c_1$ in the list LFres with the support value computed with bitsets; otherwise it is not recorded in LFres.", "The same process is applied to the IFFs of $CL_2$ .", "After visiting all the IFFs of the Listclosed lists of $CL_1$ and $CL_2$ , UFCIGs starts processing the GMs ($gm_1$ ) of the minimal-generator list Listgm of $CL_1$ :", "If a $gm_1$ of $CL_1$ appears neither in the Listclosed of $CL_1$ nor in that of $CL_2$ , i.e. it has not been processed yet: if $gm_1$ of the Listgm of $CL_1$ appears in the Listgm of $CL_2$ , $gm_1$ is stored in the list LFres as a frequent closed itemset with a support value equal to the sum of the two supports.", "Otherwise, UFCIGs computes its support with bitsets; if $Supp\\_BitSet\\_Rpr(gm_1) \\ge Supp\\_Abs$ , $gm_1$ is stored in LFres as a frequent closed itemset with the support value computed with bitsets, otherwise it is not recorded in LFres.", "The same process is applied to the GMs of $CL_2$ .", "The UFCIGs algorithm outputs: LFres: the list of itemsets that are globally frequent and closed.", "UFCIGs-pruning algorithm: after retrieving the file Fres from the UFCIGs algorithm, a pruning step must be applied to enforce the closure property, and then to generate and assign the minimal generators to the corresponding frequent closed itemsets.", "This is done by the UFCIGs-pruning algorithm, whose pseudo-code is given in Algorithm 4 and which takes as input: LFres: the list Fres of the UFCIGs algorithm containing the set of frequent closed itemsets after their update.", "It outputs: LFres: the final list containing the set of frequent closed itemsets together with their minimal generators after their update.", "The algorithm proceeds as follows.", "Test the closure property: if an $ IFF \\subset IFF^{\\prime } $ and $ supp(IFF)= supp(IFF^{\\prime })$ , then IFF is no longer considered a frequent closed pattern according to Property 2, so it is removed from the list LFres.", "Assign the GMs to the IFFs remaining in LFres through the $\\emph {Find generators}$ procedure.", "This procedure associates items or itemsets with the appropriate IFFs as GMs if they satisfy Property 3, as follows: an item(set) I is a GM of an IFF if $I \\subset IFF$ , $ supp(I)= supp(IFF)$ and I is the smallest such itemset (the one made of the fewest items).", "Otherwise, the frequent closed itemset coincides with its minimal generator, i.e. $IFF=GM$ .", "The algorithm stops when there are no more IFFs to visit."
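The UFCIGs merge and the UFCIGs-pruning step described above can be condensed into the following sketch. It is an illustration, not the thesis pseudo-code: the generator lists of each file are folded into the same dictionary as the closed itemsets (an assumption), and `tids` stands for the F_BitSet file.

```python
def bitset_support(itemset, tids):
    """Global absolute support via intersection of the per-item TID
    bitsets (the role of the F_BitSet file built by BitSet-Rpr)."""
    bits = None
    for item in itemset:
        b = tids.get(item, 0)
        bits = b if bits is None else bits & b
    return bin(bits).count("1") if bits is not None else 0

def ufcigs_merge(part1, part2, tids, supp_abs):
    """Merge two partial results {itemset: local support} as UFCIGs does."""
    merged = {}
    for a, b in ((part1, part2), (part2, part1)):
        for x, s in a.items():
            if x in merged:
                continue
            if x in b:                 # present on both sides: sum supports
                merged[x] = s + b[x]
            else:                      # recount globally with bitsets
                g = bitset_support(x, tids)
                if g >= supp_abs:      # keep only globally frequent itemsets
                    merged[x] = g
    return merged

def ufcigs_pruning(merged):
    """Drop non-closed itemsets (those with a proper superset of equal
    support), then attach the smallest equal-support subset as generator."""
    closed = {x: s for x, s in merged.items()
              if not any(x < y and s == merged[y] for y in merged)}
    result = {}
    for x, s in closed.items():
        cands = [g for g in merged if g < x and merged[g] == s]
        result[x] = (s, min(cands, key=len) if cands else x)  # IFF = GM case
    return result
```

On a toy input where `frozenset({1})` appears in both partitions and `frozenset({1, 2})` in only one, the merge sums the first support and recounts the second with bitsets, exactly as in the two cases of the description above.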
], [ "Illustrative example", "In this section we present an illustrative example describing the different phases of our approach.", "Consider the transaction database D shown in Figure REF , with a relative support value of minsupp = 0.6 as the frequency measure.", "Figure: Transaction database D. First, we partition the transaction database $\\emph {D}$ , for example into two partitions $P_1$ and $P_2$ ; then we apply to each partition a specific algorithm extracting the IFFs with their associated GMs, obtaining the files $F_1$ and $F_2$ .", "The process is depicted in Figure REF below.", "Figure: Illustrative example of the overall design of the UFCIGs-DAC approach. Figure REF shows the two partitions $P_1$ and $P_2$ .", "Each partition contains six transactions.", "We set minsupp = 0.6.", "At this stage we apply an algorithm (for example ZART [54]) to extract the IFFs, the GMs and the support values from the partitions $P_1$ and $P_2$ .", "Figure: Partitioning of the transaction database D. Figure: The two partitions $P_1$ and $P_2$. The results of the first phase are shown in Figure REF .", "The file $F_1$ is the result for partition $P_1$ , containing the set of IFFs with their support values and their GMs, while the file $F_2$ is the result for partition $P_2$ , containing the set of IFFs with their support values and their GMs.", "Figure: The files $F_1$ and $F_2$. Table REF is the result of the BitSet_Rpr algorithm: the bitset representation of all the items of the transaction database D.
The idea of this algorithm is to determine the \"TIDs\" of each item.", "In other words, BitSet-Rpr determines the transactions in which an item i appears.", "For example, item \"1\" appears in transactions $\\lbrace 1,2,3,4,5,6,7,8,9,10,11,12 \\rbrace $ and item \"16\" appears in transactions $\\lbrace 9,10\\rbrace $ .", "Table: NO_CAPTIONtableFile $F\\_BitSet$. The UFCIGs algorithm takes as input the files $F_1 ,F_2$ and $F\\_BitSet$ .", "UFCIGs starts traversing the IFFs of $F_1$ and $F_2$ (cf. Figure REF ).", "\"1 3 5 7 9 13 15 17\" belongs to the list of IFFs of $F_2$ , so it is inserted into the file Fres with a support value equal to the sum of the two supports, i.e. 6+4 = 10.", "Itemsets (IFFs or GMs) of this type are coloured in red.", "\"1 3 5 7 9 11 13 15 17\" belongs neither to the list of IFFs nor to the list of GMs of $F_2$ .", "In this case its support is computed with bitsets.", "$ Supp\\_BitSet\\_Rpr(1 \\ 3 \\ 5 \\ 7 \\ 9 \\ 11 \\ 13 \\ 15 \\ 17)=8 \\ge Supp\\_Abs =8$ .", "So \"1 3 5 7 9 11 13 15 17\" is inserted into Fres with a support value of 8.", "\"1 3 5 7 9 13 15 17 19\" belongs neither to the list of IFFs nor to the list of GMs of $F_2$ .", "In this case its support is computed with bitsets.", "$Supp\\_BitSet\\_Rpr(1 \\ 3 \\ 5 \\ 7 \\ 9 \\ 13 \\ 15 \\ 17 \\ 19)=6 <Supp\\_Abs$ .", "So \"1 3 5 7 9 13 15 17 19\" is not inserted into Fres.", "The same holds for \"1 3 5 7 9 11 13 15 17 19\".", "\"1 3 5 7 9 13 17\" belongs neither to the list of IFFs nor to the list of GMs of $F_1$ .", "In this case its support is computed with bitsets.", "$ Supp\\_BitSet\\_Rpr(1 \\ 3 \\ 5 \\ 7 \\ 9 \\ 13 \\ 17)=12 \\ge Supp\\_Abs =8$ .", "So \"1 3 5 7 9 13 17\" is inserted into Fres with a support value of 12.", "The same holds for the IFFs \"1 3 5 7 9 11 13 17\" and \"1 3 5 7 9 13 17 19\".", "Figure: Traversal of the IFFs of the files $F_1$ and $F_2$. Figure: Result file Fres. Once all the IFFs of $F_1$ and $F_2$ have been processed, we move on to the GMs of $F_1$ and $F_2$ .", "Figure: Traversal of the GMs of the files $F_1$ and $F_2$. EMPTYSET belongs to the list of GMs of $F_2$ , so it is inserted into the file Fres with a support value equal to the sum of the two supports, i.e. 6+6 = 12.", "11 belongs to the list of GMs of $F_2$ , so it is inserted into the file Fres with a support value equal to the sum of the two supports, i.e. 5+4 = 9.", "19 belongs to the list of GMs of $F_2$ , so it is inserted into the file Fres with a support value equal to the sum of the two supports, i.e. 5+4 = 9.", "11 19 belongs neither to the list of IFFs nor to the list of GMs of $F_2$ .", "In this case its support is computed with bitsets.", "$Supp\\_BitSet\\_Rpr(11 \\ 19)=6 <Supp\\_Abs$ .", "So \"11 19\" is not inserted into Fres.", "\"15\" belongs neither to the list of IFFs nor to the list of GMs of $F_1$ .", "In this case its support is computed with bitsets.", "$Supp\\_BitSet\\_Rpr(15)=10 \\ge Supp\\_Abs$ .", "So \"15\" is inserted into Fres.", "Figure: File Fres of the UFCIGs algorithm. After processing all the IFFs and GMs of the files $F_1$ and $F_2$ , we apply the UFCIGs-pruning algorithm.", "Indeed, to generate and assign the GMs to the appropriate IFFs of the file Fres, the following processing is carried out (a frequent pattern (itemset) is closed if it has no proper superset with the same support).", "$\\textsl {EMPTYSET} \\subset \\textsl {\"1 3 5 7 9 13 17\"}$ , and Supp(EMPTYSET) = Supp(1 3 5 7 9 13 17) = 12.", "So \"EMPTYSET\" is no longer a frequent closed itemset and is pruned.", "$\\textsl {\"11\"} \\subset \\textsl {\"1 3 5 7 9 11 13 15 17\"}$ , and Supp(11) = Supp(1 3 5 7 9 11 13 15 17) = 9.", "So \"11\" is no longer a frequent closed itemset and is pruned.", "$\\textsl {\"19\"} \\subset \\textsl {\" 1 3 5 7 9 13 17 19\"}$ , and Supp(19) = Supp(1 3 5 7 9 13 17 19) = 9.", "So \"19\" is no longer a frequent closed itemset and is pruned.", "$ \\textsl {\"15\"} \\subset \\textsl {1 3 5 7 9 13 15 17}$ , and Supp(15) = Supp(1 3 5 7 9 13 15 17) = 10.", "So \"15\" is no longer a frequent closed itemset and is pruned.", "Figure: Pruning step. To assign the GMs to the IFFs remaining in Fres: IFF \"1 3 5 7 9 13 17\" has GM = \"EMPTYSET\": Supp(1 3 5 7 9 13 17) = Supp(EMPTYSET) = 12, and \"EMPTYSET\" is the smallest.", "IFF \"1 3 5 7 9 11 13 15 17\" has GM = \"11\": Supp(1 3 5 7 9 11 13 15 17) = Supp(11) = 9, and \"11\" is the smallest.", "IFF \"1 3 5 7 9 13 15 17\" has GM = \"15\": Supp(1 3 5 7 9 13 15 17) = Supp(15) = 10, and \"15\" is the smallest.", "IFF \"1 3 5 7 9 11 13 17\" has GM = \"11 15\": Supp(1 3 5 7 9 11 13 17) = Supp(11 15) = 8, and \"11 15\" is the smallest.", "IFF \"1 3 5 7 9 13 17 19\" has GM = \"19\": Supp(1 3 5 7 9 13 17 19) = Supp(19) = 9, and \"19\" is the smallest.", "Figure: Result file Fres of Algorithm 4. Figure REF shows the final result file Fres of our UFCIGs-pruning algorithm after the pruning phase and the assignment of the GMs to the corresponding IFFs.", "Figure REF shows the result of a sequential extraction of the IFFs with their GMs over the whole database, i.e. without partitioning the initial context.", "Comparing this file with our result file Fres, we observe that we obtain the same IFFs, the same associated GMs and the same support values.", "Figure: Sequential extraction of the IFFs and GMs." ], [ "Conclusion", "In this chapter we proposed a new approach for extracting frequent closed itemsets, together with their associated minimal generators, from a transaction database, updating them according to the \"divide and conquer\" strategy.", "In the next chapter we test the performance of our UFCIGs-DAC algorithm on benchmark datasets." ], [ "Introduction", "In the previous chapter we introduced a new algorithm, called UFCIGs-DAC, dedicated to the extraction of frequent closed itemsets with their associated minimal generators.", "Indeed, our algorithm operates in three successive steps: partition the initial transaction database.", "Apply to each partition an algorithm extracting the frequent closed itemsets with their minimal generators (for example ZART [54]).", "Update the frequent closed itemsets and their minimal generators.", "In this chapter we discuss the results of the experiments carried out with our approach on several test datasets in order to evaluate its performance.", "First, we present the evaluation environment of our approach.", "Second, we present the \"benchmark\" datasets.", "Then, we compare the performance of our UFCIGs-DAC algorithm with the sequential ZART algorithm."
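The second step above, mining the partitions simultaneously, can be sketched with a thread pool. This is an assumption-laden illustration: `mine_partition` is a trivial single-item counter standing in for the real local extractor (ZART in the thesis), and the pool-based scheduling is one possible way to run the partitions concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def mine_partition(part, minsupp):
    """Stand-in for the local extractor: counts single items only,
    keeping those whose relative support reaches minsupp."""
    counts = {}
    for t in part:
        for i in t:
            counts[i] = counts.get(i, 0) + 1
    return {frozenset({i}): c for i, c in counts.items()
            if c / len(part) >= minsupp}

def mine_simultaneously(partitions, minsupp):
    """Mine all partitions concurrently, one worker per partition."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        futures = [pool.submit(mine_partition, p, minsupp) for p in partitions]
        return [f.result() for f in futures]
```

The per-partition results returned here play the role of the files $F_1$ and $F_2$ that the update step then merges.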
], [ "Experimental environment", "In this section we first present the experimental environment used to evaluate and test our approach for updating frequent closed itemsets and their minimal generators." ], [ "Hardware and software environment", "All experiments were carried out on a PC with an Intel Core i3 processor clocked at 2.10 GHz and 4 GB of memory, running Windows 7.", "In order to conduct a comparative study with the frequent-closed-itemset extraction approaches of Chapter 2, we implemented our UFCIGs-DAC algorithm in Java." ], [ "Test datasets", "In this section we present the results of the experimental study carried out on the \"benchmark\" datasets MUSHROOMS, CHESS, Retail and Foodmart.", "Typically, real transaction databases are very dense and produce a large number of fairly long frequent itemsets, even for high support values.", "Synthetic transaction databases imitate the transactions of a retail environment.", "Usually, synthetic transaction databases are sparser than real ones.", "Table REF lists the characteristics of the different datasets used for our tests.", "Table: NO_CAPTION table Characteristics of the transactional test datasets" ], [ "Experimental results", "We present the results obtained from the various experiments carried out to assess the performance of UFCIGs-DAC.", "First, we partitioned the test contexts into two sub-contexts (partitions $P_1$ , $P_2)$ , whose characteristics are summarised in Table REF .", "Then we mined the partitions $P_1$ and $P_2$ simultaneously using threads.", "Finally, we updated the set of frequent closed itemsets and their minimal generators extracted by ZART.", "Table: NO_CAPTION table Characteristics of the partitions. In what follows, we evaluate the performance of our UFCIGs-DAC algorithm against the ZART algorithm along two distinct axes: the execution time of UFCIGs-DAC versus ZART on dense and sparse datasets.", "The number of frequent closed itemsets extracted by UFCIGs-DAC compared with ZART on dense and sparse datasets." ], [ "Experiments on dense contexts", "The execution times of the UFCIGs-DAC algorithm compared with the sequential ZART algorithm on the dense contexts are shown in Figures REF and REF .", "MUSHROOMS: on this dataset, ZART does better than UFCIGs-DAC for very small minimum support thresholds; from minsupp = $70\\%$ the performance of UFCIGs-DAC degrades considerably, since it performs intersections over a large number of long objects.", "Indeed, the frequent closed itemsets are long and numerous, because the partitions $P_1$ and $P_2$ share many items in common; so, before moving to the update phase, UFCIGs-DAC must wait a long time for ZART to finish extracting the frequent closed itemsets with their minimal generators locally.", "CHESS: on this dataset, although the database was partitioned, UFCIGs-DAC achieves much lower execution times on partition $P_2$ than on partition $P_1$ .", "Indeed, the performance of UFCIGs-DAC depends strongly on the mining performed by the ZART algorithm within the partitions.", "This is due to the fact that partition $P_1$ contains many similar transactions, so the number of locally frequent patterns in this partition is high.", "In this case, the execution time of ZART on $P_1$ is high, which impacts the overall performance of UFCIGs-DAC.", "Figure: Performance of UFCIGs-DAC versus ZART on the MUSHROOMS dataset. Figure: Performance of UFCIGs-DAC versus ZART on the CHESS dataset." ], [ "Experiments on sparse contexts", "The execution times of the UFCIGs-DAC algorithm compared with the sequential ZART algorithm on the sparse contexts are shown in Figures REF and REF .", "Retail: on this dataset, UFCIGs-DAC performs better than ZART for minsupp values of $50 \\%, 30\\%, 20\\%$ and $10\\%$ , while the opposite holds for minsupp values of $2\\%$ and $1\\%$ .", "This can be explained by the fact that UFCIGs-DAC is penalised by the cost of computing the Bitset-Rpr supports: for minsupp values of $2\\%$ and $1\\%$ , the frequent closed itemsets found in only one of the partitions outnumber those found in both partitions at once.", "Foodmart: on the Foodmart context, UFCIGs-DAC performs far better than ZART for minsupp values smaller than or equal to $0.4 \\%$ .", "This performance can be explained by the relatively small average size of the frequent closed itemsets over which the intersections are performed.", "Moreover, most of the frequent closed itemsets extracted from $P_1$ are equal to those extracted from $P_2$ .", "Figure: Performance of UFCIGs-DAC versus ZART on the Retail dataset. Figure: Performance of UFCIGs-DAC versus ZART on the Foodmart dataset." ], [ "Number of extracted frequent closed itemsets", "A statistical study determines the hit rate of the UFCIGs-DAC algorithm as a percentage (%): we report the number of IFFs returned by UFCIGs-DAC relative to the total (the number of IFFs returned by the ZART algorithm)."
], [ "Experiments on dense contexts", "The number of frequent closed itemsets extracted by the UFCIGs-DAC algorithm compared with the ZART algorithm on the dense contexts is shown in Figure REF and Table REF .", "MUSHROOMS: on this dataset, UFCIGs-DAC managed to extract all the frequent closed itemsets extracted by ZART for minsupp values of $80\\%$ and $70\\% $ .", "For the other minsupp values, UFCIGs-DAC produced practically the same results, which we consider satisfactory.", "CHESS: on this dataset, compared with the sequential ZART algorithm, UFCIGs-DAC did not extract all the frequent closed itemsets, whatever the minsupp value, but the set of extracted frequent closed itemsets shows an occurrence frequency we consider satisfactory.", "Thus, UFCIGs-DAC suffers from a loss of information for certain minimum support values with respect to the ZART algorithm.", "This is explained by the decomposition of the contexts, which may cause some frequent closed itemsets to disappear.", "Figure: Number of IFFs extracted by UFCIGs-DAC relative to ZART. Figure: Number of IFFs extracted by UFCIGs-DAC relative to ZART."
], [ "Experiments on sparse contexts", "The numbers of frequent closed itemsets extracted by the UFCIGs-DAC algorithm and by the ZART algorithm on these contexts are reported in Figure REF and Table REF .", "Retail: on the Retail context, UFCIGs-DAC benefits from the partitioning of the context and succeeded in extracting practically the same results as ZART.", "Indeed, Retail is considered a sparse dataset, i.e., it produces frequent closed itemsets of rather small size compared to dense datasets.", "Foodmart: on this dataset, compared to ZART, our algorithm extracted all the frequent closed itemsets regardless of the minsupp value.", "Figure: Number of FCIs extracted by UFCIGs-DAC compared to ZART.Figure: Number of FCIs extracted by UFCIGs-DAC compared to ZART." ], [ "Interpretation of the results", "On sparse contexts, the performance of UFCIGs-DAC proves to be markedly better than that of Zart.", "Indeed, the frequent closed itemsets extracted from sparse contexts are also the minimal generators, whereas dense contexts contain many long itemsets even at high support values.", "In that case, UFCIGs-DAC has to wait a long time for the underlying extraction algorithm (ZART) to finish extracting the itemsets locally.", "This is why Zart achieves better execution times than UFCIGs-DAC on dense datasets.", "Moreover, UFCIGs-DAC suffers from a loss of information at certain minimum support values compared to the Zart algorithm.", "This is explained by the decomposition of the contexts, which can cause some frequent closed itemsets to disappear.", "In contrast, on sparse datasets, the execution time of the UFCIGs-DAC algorithm starts to stand out.", "The performance achieved can be explained by the relatively small average size of the frequent closed itemsets on which the intersections are computed.", "Thus, our algorithm succeeded in extracting practically the same results as ZART." ], [ "Conclusion", "In this chapter, we conducted an experimental study of the UFCIGs-DAC algorithm on commonly used benchmark datasets.", "We showed experimentally that we can reduce the extraction time of frequent closed itemsets and their associated minimal generators." ], [ "General conclusion", "In this thesis, we focused on mining frequent closed itemsets in transaction databases.", "To this end, we proposed a new approach for extracting frequent closed itemsets.", "We began by presenting the preliminary notions related to frequent itemsets and frequent closed itemsets.", "We also described the notions offered by the framework of Formal Concept Analysis (FCA).", "Then, in the second chapter, we studied the different approaches in the literature dealing with sequential extraction of frequent itemsets and frequent closed itemsets, as well as parallel mining approaches.", "We also conducted a critical study of the main extraction algorithms for massive databases, based on a divide-and-conquer partitioning strategy of the data context.", "Our new approach, called UFCIGs-DAC, was designed and implemented to perform mining on the test datasets.", "The main originality of our approach is the simultaneous exploration of the search space while updating the frequent closed itemsets and the minimal generators.", "Moreover, our approach could be adapted to any algorithm that extracts frequent closed itemsets together with their minimal generators.", "With the aim of improving the usability and flexibility of our approach, here are some perspectives we consider promising: applying a non-random partitioning strategy to the context.", "In other words, taking into account a fair balance of the number of items across the different partitions as well as their distribution.", "For example, when a partition contains many similar transactions (sharing numerous items in common), the number of locally frequent itemsets in that partition is high, and so, consequently, is the number of frequent closed itemsets.", "Adapting our approach to \"Big Data\" mining in distributed environments (Hadoop [33], Spark [59]).", "Moving on to the association rule extraction step [1], [14].", "An association rule has the form $X \rightarrow Y$ , where X and Y are disjoint itemsets ($X \cap Y = \emptyset $ ), called respectively the premise and the conclusion of the rule.", "This rule reads as \"if X, then Y\".", "Note that X is the frequent closed itemset and Y is the minimal generator associated with the closed itemset X [24], [13]." ] ]
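As an illustration of the association-rule extraction step mentioned in the perspectives, here is a minimal hypothetical sketch: one standard construction of exact (confidence-1) rules takes a minimal generator as premise and the remainder of its closed itemset as conclusion. The function name and the input format below are illustrative, not the actual UFCIGs-DAC output format.

```python
# Hypothetical sketch: deriving exact association rules (confidence 1) from
# frequent closed itemsets and their minimal generators. For each closed
# itemset f and each of its minimal generators g, emit the rule g -> f \ g;
# premise and conclusion are disjoint by construction (X ∩ Y = ∅).

def exact_rules(closed_to_generators):
    """closed_to_generators maps a closed itemset (frozenset) to the list
    of its minimal generators (frozensets)."""
    rules = []
    for closed, generators in closed_to_generators.items():
        for g in generators:
            conclusion = closed - g
            if conclusion:            # skip trivial rules with empty conclusion
                rules.append((g, conclusion))
    return rules

# Toy catalog: the closed itemset {a, b, c} has minimal generators {a} and {b}.
catalog = {frozenset("abc"): [frozenset("a"), frozenset("b")]}
for premise, conclusion in exact_rules(catalog):
    print(sorted(premise), "->", sorted(conclusion))
```

Such rules always hold with confidence 1, since a generator and its closure occur in exactly the same transactions.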
1906.04586
[ [ "Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning" ], [ "Abstract Multi-simulator training has contributed to the recent success of Deep Reinforcement Learning by stabilizing learning and allowing for higher training throughputs.", "We propose Gossip-based Actor-Learner Architectures (GALA) where several actor-learners (such as A2C agents) are organized in a peer-to-peer communication topology, and exchange information through asynchronous gossip in order to take advantage of a large number of distributed simulators.", "We prove that GALA agents remain within an epsilon-ball of one-another during training when using loosely coupled asynchronous communication.", "By reducing the amount of synchronization between agents, GALA is more computationally efficient and scalable compared to A2C, its fully-synchronous counterpart.", "GALA also outperforms A2C, being more robust and sample efficient.", "We show that we can run several loosely coupled GALA agents in parallel on a single GPU and achieve significantly higher hardware utilization and frame-rates than vanilla A2C at comparable power draws." 
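The asynchronous gossip mechanism referenced in the abstract solves the distributed averaging problem; a minimal numerical sketch follows, assuming (purely for illustration) a static, symmetric, doubly-stochastic ring mixing matrix, whereas the paper itself only requires row-stochastic matrices.

```python
import numpy as np

# Sketch: gossip iterations X <- P X on a ring of n agents converge to the
# uniform average when P is symmetric and doubly stochastic. The ring and
# the 1/3 weights are illustrative assumptions, not the paper's topology.
n, d = 8, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                  # each row: one agent's parameters
target = X.mean(axis=0)                      # the consensus average

# Symmetric ring mixing matrix: each agent averages with both neighbours.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = P[i, (i - 1) % n] = P[i, (i + 1) % n] = 1.0 / 3.0

for _ in range(200):                         # repeated gossip rounds
    X = P @ X

print(np.max(np.abs(X - target)))            # every row approaches the average
```

The convergence rate is governed by the second-largest eigenvalue magnitude of P, which is the spectral quantity the paper's epsilon-ball bound depends on.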
], [ "Introduction", "Deep Reinforcement Learning (Deep RL) agents have reached superhuman performance in a few domains [26], [27], [18], [34], but this is typically at significant computational expense [32].", "To both reduce running time and stabilize training, current approaches rely on distributed computation wherein data is sampled from many parallel simulators distributed over parallel devices [8], [19].", "Despite the growing ubiquity of multi-simulator training, scaling Deep RL algorithms to a large number of simulators remains a challenging task.", "On-policy approaches train a policy by using samples generated from that same policy, in which case data sampling (acting) is entangled with the training procedure (learning).", "To perform distributed training, these approaches usually introduce multiple learners with a shared policy, and multiple actors (each with its own simulator) associated to each learner.", "The shared policy can either be updated in a synchronous fashion (e.g., learners synchronize gradients before each optimization step [28]), or in an asynchronous fashion [19].", "Both approaches have drawbacks: synchronous approaches suffer from straggler effects (bottlenecked by the slowest individual simulator), and therefore may not exhibit strong scaling efficiency; asynchronous methods are robust to stragglers, but prone to gradient staleness, and may become unstable with a large number of actors [6].", "Alternatively, off-policy approaches typically train a policy by sampling from a replay buffer of past transitions [18].", "Training off-policy allows for disentangling data-generation from learning, which can greatly increase computational efficiency when training with many parallel actors [8], [12], [13], [10].", "Generally, off-policy updates need to be handled with care as the sampled transitions may not conform to the current policy and consequently result in unstable training [9].", "We propose Gossip-based Actor-Learner Architectures 
(gala), which aim to retain the robustness of synchronous on-policy approaches, while improving both their computational efficiency and scalability.", "gala leverages multiple agents, where each agent is composed of one learner and possibly multiple actors/simulators.", "Unlike classical on-policy approaches, gala does not require that each agent share the same policy, but rather it inherently enforces (through gossip) that each agent's policy remain $\\epsilon $ -close to all others throughout training.", "Relaxing this constraint allows us to reduce the synchronization needed between learners, thereby improving the algorithm's computational efficiency.", "Instead of computing an exact average between all the learners after a local optimization step, gossip-based approaches compute an approximate average using loosely coupled and possibly asynchronous communication (see [21] and references therein).", "While this approximation implicitly injects some noise in the aggregate parameters, we prove that this is in fact a principled approach as the learners' policies stay within an $\\epsilon $ -ball of one-another (even with non-linear function approximation), the size of which is directly proportional to the spectral-radius of the agent communication topology and their learning rates.", "As a practical algorithm, we propose gala-a2c, an algorithm that combines gossip with a2c agents.", "We compare our approach on six Atari games [17] following [28] with vanilla a2c, a3c and the impala off-policy method [7], [19], [8].", "Our main empirical findings are: Following the theory, gala-a2c is empirically stable.", "Moreover, we observe that gala can be more stable than a2c when using a large number of simulators, suggesting that the noise introduced by gossiping can have a beneficial effect.", "gala-a2c has similar sample efficiency to a2c and greatly improves its computational efficiency and scalability.", "gala-a2c achieves significantly higher hardware utilization and 
frame-rates than vanilla a2c at comparable power draws, when using a GPU.", "gala-a2c is competitive in terms of performance relative to a3c and impala.", "Perhaps most remarkably, our empirical findings for gala-a2c are obtained by simply using the default hyper-parameters from a2c.", "Our implementation of gala-a2c is publicly available at https://github.com/facebookresearch/gala." ], [ "Reinforcement Learning.", "We consider the standard Reinforcement Learning setting [29], where the agent’s objective is to maximize the expected value from each state $V(s) = \mathbb {E} \left[ \sum _{i=0}^{\infty } \gamma ^i r_{t+i} | s_t=s \right]$ , where $\gamma $ is the discount factor, which controls the bias towards nearby rewards.", "To maximize this quantity, the agent chooses at each discrete time step $t$ an action $a_t$ in the current state $s_t$ based on its policy $\pi (a_t|s_t)$ and transitions to the next state $s_{t+1}$ receiving reward $r_t$ based on the environment dynamics.", "Temporal difference (TD) learning [31] aims at learning an approximation of the expected return parameterized by $\theta $ , i.e., the value function $V(s;\theta )$ , by iteratively updating its parameters via gradient descent: $\nabla _{\theta } \left(G^N_t- V(s_t;\theta ) \right) ^2$ where $G^N_t=\sum _{i=0}^{N-1} \gamma ^i r_{t+i} + \gamma ^N V(s_{t+N};\theta _t)$ is the $N$ -step return.", "Actor-critic methods [30], [19] simultaneously learn both a parameterized policy $\pi (a_t|s_t; \omega )$ with parameters $\omega $ and a critic $V(s_t;\theta )$ .", "They do so by training a value function via the TD error defined in (REF ) and then proceed to optimize the policy using the policy gradient (PG) with the value function as a baseline: $\nabla _\omega \left(- \log \pi (a_t|s_t;\omega ) A_t\right),$ where $A_t = G^N_t - V(s_t; \theta _t)$ is the advantage function, which represents the relative value the current action has over the average.", "In order to both 
speed up training time and decorrelate observations, [19] collect samples and perform updates with several asynchronous actor-learners.", "Specifically, each worker $i\\in \\lbrace 1,2, .., W\\rbrace $ , where $W$ is the number of parallel workers, collects samples according to its current version of the policy weights $\\omega _i$ , and computes updates via the standard actor-critic gradient defined in (REF ), with an additional entropy penalty term that prevents premature convergence to deterministic policies: $\\nabla _{\\omega _i} \\left(- \\log \\pi (a_t|s_t;\\omega _i) A_t - \\eta \\sum _a \\pi (a|s_t; \\omega _i) \\log \\pi (a|s_t; \\omega _i)\\right).$ The workers then perform Hogwild!", "[24] style updates (asynchronous writes) to a shared set of master weights before synchronizing their weights with the master's.", "More recently, [7] removed the asynchrony from a3c, referred to as a2c, by instead synchronously collecting transitions in parallel environments $i\\in \\lbrace 1,2, .., W\\rbrace $ and then performing a large batched update: $\\nabla _\\omega \\left[\\frac{1}{W}\\sum _{i=1}^W \\left(- \\log \\pi (a^i_t|s^i_t;\\omega ) A^i_t - \\eta \\sum _a \\pi (a|s^i_t; \\omega ) \\log \\pi (a|s^i_t; \\omega ) \\right) \\right].$" ], [ "Gossip algorithms.", "Gossip algorithms are used to solve the distributed averaging problem.", "Suppose there are $n$ agents connected in a peer-to-peer graph topology, each with parameter vector $x^{(0)}_i \\in \\mathbb {R}^d$ .", "Let $\\mathbf {X}^{(0)} \\in \\mathbb {R}^{n \\times d}$ denote the row-wise concatenation of these vectors.", "The objective is to iteratively compute the average vector $\\frac{1}{n} \\sum _{i=1}^n \\mathbf {x}_i^{(0)}$ across all agents.", "Typical gossip iterations have the form $\\mathbf {X}^{(k+1)} = \\mathbf {P}^{(k)} \\mathbf {X}^{(k)}$ , where $\\mathbf {P}^{(k)} \\in \\mathbb {R}^{n \\times n}$ is referred to as the mixing matrix and defines the communication topology.", "This 
corresponds to the update $\mathbf {x}_i^{(k+1)} = \sum _{j=1}^n p_{i,j}^{(k)} \mathbf {x}_j^{(k)}$ for an agent $v_i$ .", "At an iteration $k$ , an agent $v_i$ only needs to receive messages from other agents $v_j$ for which $p_{i,j}^{(k)} \ne 0$ , so sparser matrices $\mathbf {P}^{(k)}$ correspond to less communication and less synchronization between agents.", "The mixing matrices $\mathbf {P}^{(k)}$ are designed to be row stochastic (each entry is greater than or equal to zero, and each row sums to 1) so that $\lim _{K \rightarrow \infty } \prod _{k=0}^{K} \mathbf {P}^{(k)} = \mathbf {1}\mathbf {\pi }^\top $ , where $\mathbf {\pi }$ is the ergodic limit of the Markov chain defined by $\mathbf {P}^{(k)}$ and $\mathbf {1}$ is a vector with all entries equal to 1 [25] (assuming that information from every agent eventually reaches all other agents).", "Consequently, the gossip iterations converge to a limit $\mathbf {X}^{(\infty )} = \mathbf {1}(\mathbf {\pi } ^\top \mathbf {X}^{(0)})$ ; meaning the value at an agent $i$ converges to $\mathbf {x}_i^{(\infty )} = \sum _{j=1}^n \pi _j \mathbf {x}_j^{(0)}$ .", "In particular, if the matrices $\mathbf {P}^{(k)}$ are symmetric and doubly-stochastic (each row and each column must sum to 1), we obtain an algorithm such that $\pi _j = 1/n$ for all $j$ , and therefore $\mathbf {x}_i^{(\infty )} = 1/n \sum _{j=1}^n \mathbf {x}_j^{(0)}$ converges to the average of the agents' initial vectors.", "For the particular case of gala, we only require the matrices $\mathbf {P}^{(k)}$ to be row stochastic in order to show the $\epsilon $ -ball guarantees." ], [ "Gossip-based Actor-Learner Architectures", "[t] Gossip-based Actor-Learner Architectures for agent $v_i$ using a2c [1] Initialize trainable policy and critic parameters $x_i = (\omega _i, \theta _i)$ .", "$t$ = 0, 1, 2, ... 
Take $N$ actions $\lbrace a_t \rbrace $ according to $\pi _{\omega _i}$ and store transitions $\lbrace (s_t, a_t, r_t, s_{t+1})\rbrace $ Compute returns $G_t^N=\sum _{i=0}^{N-1} \gamma ^i r_{t+i} + \gamma ^N V(s_{t+N};\theta _i)$ and advantages $A_t= G^N_t - V(s_t; \theta _i)$ Perform a2c optimization step on $x_i$ using TD in (REF ) and batched policy-gradient in (REF ) Broadcast (non-blocking) new parameters $x_i$ to all out-peers in $\mathcal {N}^{\text{out}}_i$ If the receive buffer contains a message $m_j$ from each in-peer $v_j$ in $\mathcal {N}^{\text{in}}_i$ : $x_i \leftarrow \frac{1}{1 + |\mathcal {N}^{\text{in}}_i|}( x_i + \sum _{j} m_j)$ [1] Average parameters with messages $^{1}$ We set the non-zero mixing weights for agent $v_i$ to $p_{i,j} = \frac{1}{1 + |\mathcal {N}^{\text{in}}_i|}$ .", "We consider the distributed RL setting where $n$ agents (each composed of a single learner and several actors) collaborate to maximize the expected return $V(s)$ .", "Each agent $v_i$ has a parameterized policy network $\pi (a_t|s_t; \omega _i)$ and value function $V(s_t;\theta _i)$ .", "Let $x_i = (\omega _i, \theta _i)$ denote agent $v_i$ 's complete set of trainable parameters.", "We consider the specific case where each $v_i$ corresponds to a single a2c agent, and the agents are configured in a directed and peer-to-peer communication topology defined by the mixing matrix $\mathbf {P} \in \mathbb {R}^{n \times n}$ .", "In order to maximize the expected reward, each gala-a2c agent alternates between one local policy-gradient and TD update, and one iteration of asynchronous gossip with its peers.", "Pseudocode is provided in Algorithm , where $\mathcal {N}^{\text{in}}_i = \lbrace v_j \mid p_{i,j} > 0 \rbrace $ denotes the set of agents that send messages to agent $v_i$ (in-peers), and $\mathcal {N}^{\text{out}}_i = \lbrace v_j \mid p_{j,i} > 0 \rbrace $ the set of agents that $v_i$ sends messages to (out-peers).", "During the gossip 
phase, agents broadcast their parameters to their out-peers, asynchronously (i.e., don't wait for messages to reach their destination), and update their own parameters via a convex combination of all received messages.", "Agents broadcast new messages when old transmissions are completed and aggregate all received messages once they have received a message from each in-peer.", "Note that the gala agents use non-blocking communication, and therefore operate asynchronously.", "Local iteration counters may be out-of-sync, and physical message delays may result in agents incorporating outdated messages from their peers.", "One can algorithmically enforce an upper bound on the message staleness by having the agent block and wait for communication to complete if more than $\\tau \\ge 0$ local iterations have passed since the agent last received a message from its in-peers." ], [ "Theoretical $\\epsilon $ -ball guarantees:", "Next we provide the $\\epsilon $ -ball theoretical guarantees for the asynchronous $\\textsc {gala} $ agents, proofs of which can be found in Appendix .", "Let $k \\in \\mathbb {N}$ denote the global iteration counter, which increments whenever any agent (or subset of agents) completes an iteration of the loop defined in Algorithm .", "We define $x_i^{(k)} \\in \\mathbb {R}^d$ as the value of agent $v_i$ 's trainable parameters at iteration $k$ , and $\\mathbf {X}^{(k)} \\in \\mathbb {R}^{n \\times d}$ as the row-concatenation of these parameters.", "For our theoretical guarantees we let the communication topologies be directed and time-varying graphs, and we do not make any assumptions about the base gala learners.", "In particular, let the mapping $\\mathcal {T}_i : x_i^{(k)} \\in \\mathbb {R}^d \\mapsto x_i^{(k)} - \\alpha g_i^{(k)} \\in \\mathbb {R}^d$ characterize agent $v_i$ 's local training dynamics (i.e., agent $v_i$ optimizes its parameters by computing $x_i^{(k)} \\leftarrow \\mathcal {T}_i(x_i^{(k)})$ ), where $\\alpha > 0$ is a reference 
learning rate, and $g_i^{(k)} \in \mathbb {R}^d$ can be any update vector.", "Lastly, let $\mathbf {G}^{(k)} \in \mathbb {R}^{n \times d}$ denote the row-concatenation of these update vectors.", "Proposition 1 For all $k \ge 0$ , it holds that $\left\Vert \mathbf {X}^{(k + 1)} - \overline{\mathbf {X}}^{(k+1)} \right\Vert \le \alpha \sum ^k_{s=0} \beta ^{k + 1 - s} \left\Vert \mathbf {G}^{(s)} \right\Vert ,$ where $\overline{\mathbf {X}}^{(k+1)} = \frac{1_{n} 1_{n}^T}{n} \mathbf {X}^{(k + 1)}$ denotes the average of the learners' parameters at iteration $k + 1$ , and $\beta \in [0, 1]$ is related to the joint spectral radius of the graph sequence defining the communication topology at each iteration.", "Proposition REF shows that the distance of a learner's parameters from consensus is bounded at each iteration.", "However, without additional assumptions on the communication topology, the constant $\beta $ may equal 1, and the bound in Proposition REF can be trivial.", "In the following proposition, we make sufficient assumptions with respect to the graph sequence that ensure $\beta < 1$ .", "Proposition 2 Suppose there exists a finite integer $B \ge 0$ such that the (potentially time-varying) graph sequence is $B$ -strongly connected, and suppose that the upper bound $\tau $ on the message delays in Algorithm  is finite.", "If learners run Algorithm  from iteration 0 to $k + 1$ , where $k \ge \tau + B$ , then it holds that $\left\Vert \mathbf {X}^{(k + 1)} - \overline{\mathbf {X}}^{(k+1)} \right\Vert \le \frac{\alpha \tilde{\beta } L}{1 - \beta },$ where $\beta < 1$ is related to the joint spectral radius of the graph sequence, $\alpha $ is the reference learning rate, $\tilde{\beta } = \beta ^{- \frac{\tau + B}{\tau + B + 1}}$ , and $L = \sup _{s=1,2,\ldots } \left\Vert \mathbf {G}^{(s)} \right\Vert $ denotes an upper bound on the magnitude of the local optimization updates during training.", "Proposition 
REF states that the agents' parameters are guaranteed to reside within an $\\epsilon $ -ball of their average at all iterations $k \\ge \\tau + B$ .", "The size of this ball is proportional to the reference learning-rate, the spectral radius of the graph topology, and the upper bound on the magnitude of the local gradient updates.", "One may also be able to control the constant $L$ in practice since Deep RL agents are typically trained with some form of gradient clipping." ], [ "Related work", "Several recent works have approached scaling up RL by using parallel environments.", "[19] used parallel asynchronous agents to perform Hogwild!", "[24] style updates to a shared set of parameters.", "[7] proposed a2c, which maintains the parallel data collection, but performs updates synchronously, and found this to be more stable empirically.", "While A3C was originally designed as a purely CPU-based method, [4] proposed GA3C, a GPU implementation of the algorithm.", "[28] also scaled up various RL algorithms by using significantly larger batch sizes and distributing computation onto several GPUs.", "Differently from those works, we propose the use of Gossip Algorithms to aggregate information between different agents and thus simulators.", "[20], [12], [8], [13], [10] use parallel environments as well, but disentangle the data collection (actors) from the network updates (learners).", "This provides several computational benefits, including better hardware utilization and reduced straggler effects.", "By disentangling acting from learning these algorithms must use off-policy methods to handle learning from data that is not directly generated from the current policy (e.g., slightly older policies).", "Gossip-based approaches have been extensively studied in the control-systems literature as a way to aggregate information for distributed optimization algorithms  [21].", "In particular, recent works have proposed to combine gossip algorithms with stochastic gradient descent 
in order to train Deep Neural Networks [16], [15], [3], but unlike our work, focus only on the supervised classification paradigm." ], [ "Experiments", "We evaluate gala for training Deep RL agents on Atari-2600 games [17].", "We focus on the same six games studied in [28].", "Unless otherwise stated, all learning curves show averages over 10 random seeds with $95\%$ confidence intervals shaded in.", "We follow the reproducibility checklist [23]; see Appendix  for details.", "We compare a2c  [7], a3c  [19], impala  [8], and gala-a2c.", "All methods are implemented in PyTorch [22].", "While a3c was originally proposed with CPU-based agents with 1-simulator per agent, [28] propose a large-batch variant in which each agent manages 16-simulators and performs batched inference on a GPU.", "We found this large-batch variant to be more stable and computationally efficient (cf. Appendix REF ).", "We use the [28] variant of a3c to provide a more competitive baseline.", "We parallelize a2c training via the canonical approach outlined in [28], whereby individual a2c agents (running on potentially different devices) all average their gradients together before each update using the AllReduce primitive.", "This is mathematically equivalent to a single a2c agent with multiple simulators (e.g., $n$ agents, with $b$ simulators each, are equivalent to a single agent with $nb$ simulators).", "For a2c and a3c we use the hyper-parameters suggested in [28].", "For impala we use the hyper-parameters suggested in [8].", "For gala-a2c we use the same hyper-parameters as the original (non-gossip-based) method.", "All gala agents are configured in a directed ring graph.", "All implementation details are described in Appendix .", "For the impala baseline, we use a prerelease of TorchBeast [14] available at https://github.com/facebookresearch/torchbeast." 
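The per-agent training loop shared by all gala agents in these experiments (Algorithm 1) alternates one local optimization step with one gossip averaging step. Below is a simplified, synchronous sketch on a directed ring; the toy quadratic loss and the lock-step message exchange are illustrative stand-ins, since the actual agents use non-blocking asynchronous communication and a full a2c update.

```python
import numpy as np

# Simplified synchronous sketch of the gala update on a directed ring:
# each agent v_i has a single in-peer v_{i-1}, so the mixing weight is
# 1 / (1 + |N_in|) = 1/2. The local a2c step is replaced by a gradient
# step on a toy quadratic loss f(x) = ||x||^2 (an illustrative stand-in).
n, d, alpha = 4, 3, 0.1
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))                  # row i holds agent v_i's parameters

for step in range(100):
    X = X - alpha * (2.0 * X)                # local step: gradient of ||x||^2, row-wise
    messages = np.roll(X, 1, axis=0)         # v_i receives from its in-peer v_{i-1}
    X = 0.5 * (X + messages)                 # x_i <- (x_i + sum_j m_j) / (1 + |N_in|)

spread = np.linalg.norm(X - X.mean(axis=0))  # distance from consensus (epsilon-ball)
print(spread)
```

The final `spread` stays tiny: local steps pull each agent toward its optimum while the mixing step keeps all agents within an epsilon-ball of their average, mirroring the guarantee of Proposition 2.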
], [ "Convergence and stability:", "We begin by empirically studying the convergence and stability properties of a2c and gala-a2c.", "Figure REF depicts the percentage of successful runs (out of 10 trials) of standard policy-gradient a2c when we sweep the number of simulators across six different games.", "We define a run as successful if it achieves better than $50\%$ of nominal 16-simulator a2c scores.", "When using a2c, we observe an identical trend across all games in which the number of successful runs decreases significantly as we increase the number of simulators.", "Note that the a2c batch size is proportional to the number of simulators, and when increasing the number of simulators we adjust the learning rate following the recommendation in [28].", "Figure: (a) gala increases or maintains the percentage of convergent runs relative to a2c.", "(b)-(c) gala maintains the best performance of a2c while being more robust.", "(d)-(e) gala achieves competitive scores in each game and in the shortest amount of time.", "(f)-(g) gala achieves competitive game scores while being energy efficient.", "Figure REF also depicts the percentage of successful runs when a2c agents communicate their parameters using gossip algorithms (gala-a2c).", "In every simulator sweep across the six games (600 runs), the gossip-based architecture increases or maintains the percentage of successful runs relative to vanilla a2c, when using identical hyper-parameters.", "We hypothesize that exercising slightly different policies at each learner using gossip algorithms can provide enough decorrelation in gradients to improve learning stability.", "We revisit this point later on (cf. Figure REF ).", "We note that [28] find that stepping through a random number of uniform random actions at the start of training can partially mitigate this stability issue.", "We did not use this random start action mitigation in the reported experiments.", "While Figure REF shows that gala can be used to 
stabilize multi-simulator a2c and increase the number of successful runs, it does not directly say anything about the final performance of the learned models.", "Figures REF and REF show the rewards plotted against the number of environment steps when training with 64 simulators.", "Using gossip-based architectures stabilizes and maintains the peak performance and sample efficiency of a2c across all six games (Figure REF ), and also increases the number of convergent runs (Figure REF ).", "Figures REF and REF compare the wall-clock time convergence of gala-a2c to vanilla a2c.", "Not only is gala-a2c more stable than a2c, but it also runs at a higher frame-rate by mitigating straggler effects.", "In particular, since gala-a2c learners do not need to synchronize their gradients, each learner is free to run at its own rate without being hampered by variance in peer stepping times." ], [ "Comparison with distributed Deep RL approaches:", "Figure REF also compares gala-a2c to state-of-the-art methods like impala and a3c.", "We report results for both the TorchBeast implementation of impala, and from Table $C.1$ of [8].", "In each game, the gala-a2c learners exhibited good sample efficiency and computational efficiency, and achieved highly competitive final game scores.", "Table: Across all training seeds we select the best final policy produced by each method at the end of training and evaluate it over 10 evaluation episodes (up to 30 no-ops at the start of the episode).", "Evaluation actions are generated from $\operatornamewithlimits{arg\,max}_a \pi (a|s)$ .", "The table depicts the mean and standard error across these 10 evaluation episodes.", "Next we evaluate the final policies produced by each method at the end of training.", "After training across 10 different seeds, we are left with 10 distinct policies per method.", "We select the best final policy and evaluate it over 10 evaluation episodes, with actions generated from $\operatornamewithlimits{arg\,max}_a \pi 
(a|s)$ .", "In almost every single game, the gala-a2c learners achieved the highest evaluation scores of any method.", "Notably, the gala-a2c learners that were trained for 25M steps achieved (and in most cases surpassed) the scores for impala learners trained for 50M steps [8]." ], [ "Effects of gossip:", "To better understand the stabilizing effects of gala, we evaluate the diversity in learner policies during training.", "Figure REF shows the distance of the agents' parameters from consensus throughout training.", "The theoretical upper bound in Proposition REF is also explicitly calculated and plotted in Figure REF .", "As expected, the learner policies remain within an $\epsilon $ -ball of one-another in weight-space, and the size of this ball is remarkably well predicted by Proposition REF .", "Figure: (a) The radius of the $\epsilon $ -ball within which the agents' parameters reside during training.", "The theoretical upper bound in Proposition  is explicitly calculated and compared to the true empirical quantity.", "The bound in Proposition  is remarkably tight.", "(b) Average correlation between agents' gradients during training (darker colors depict low correlation and lighter colors depict higher correlations).", "Neighbours in the gala-a2c topology are annotated with the label “peer.” The gala-a2c heatmap is generally much darker than the a2c heatmap, indicating that gala-a2c agents produce more diverse gradients with significantly less correlation.", "Figure: Comparing gala-a2c hardware utilization to that of a2c when using one NVIDIA V100 GPU and 48 Intel CPUs.", "(a) Samples of instantaneous GPU utilization and power draw plotted against each other.", "Bubble sizes indicate frame-rates obtained by the corresponding algorithms; larger bubbles depict higher frame-rates.", "gala-a2c achieves higher hardware utilization than a2c at comparable power draws.", "This translates to much higher frame-rates and increased energy efficiency.", "(b) Hardware utilization/energy 
efficiency vs. number of simulators.", "gala-a2c benefits from increased parallelism and achieves a 10-fold improvement in GPU utilization over a2c.", "Next, we measure the diversity in the agents' gradients.", "We hypothesize that the $\epsilon $ -diversity in the policies predicted by Proposition REF , and empirically observed in Figure REF , may lead to less correlation in the agents' gradients.", "The categorical heatmap in Figure REF shows the pair-wise cosine-similarity between agents' gradients throughout training, computed after every 500 local environment steps, and averaged over the first 10M training steps.", "Dark colors depict low correlations and light colors depict high correlations.", "We observe that gala-a2c agents exhibited less gradient correlation than a2c agents.", "Interestingly, we also observe that gala-a2c agents' gradients are more correlated with those of peers that they explicitly communicate with (graph neighbours), and less correlated with those of agents that they do not explicitly communicate with." 
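The two diagnostics used in this section (the distance of the parameters from consensus, and the pairwise cosine similarity between agents' gradients) reduce to a few array operations. A sketch on random stand-in matrices follows; real runs would use the agents' flattened network weights and gradients.

```python
import numpy as np

# Stand-in data: rows of X are agents' flattened parameters, rows of G
# are the corresponding gradients (random placeholders for illustration).
rng = np.random.default_rng(2)
n, d = 4, 6
X = rng.normal(size=(n, d))
G = rng.normal(size=(n, d))

# Distance from consensus ||X - X_bar||, the epsilon-ball radius that is
# compared against the bound of Proposition 2.
consensus_dist = np.linalg.norm(X - X.mean(axis=0, keepdims=True))

# Pairwise cosine similarity between agents' gradients (the heatmap):
# normalize each row, then take all inner products at once.
Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
cosine = Gn @ Gn.T

print(consensus_dist)
print(np.round(cosine, 2))
```

The resulting `cosine` matrix is symmetric with unit diagonal; darker (lower) off-diagonal entries correspond to more decorrelated gradients.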
], [ "Computational performance:", "Figure REF showcases the hardware utilization and energy efficiency of gala-a2c compared to a2c as we increase the number of simulators.", "Specifically, Figure REF shows that gala-a2c achieves significantly higher hardware utilization than vanilla a2c at comparable power draws.", "This translates to much higher frame-rates and increased energy efficiency.", "Figure REF shows that gala-a2c is also better able to leverage increased parallelism and achieves a 10-fold improvement in GPU utilization over vanilla a2c.", "Once again, the improved hardware utilization and frame-rates translate to increased energy efficiency.", "In particular, gala-a2c steps through roughly 20 thousand more frames per Kilojoule than vanilla a2c.", "Figures REF and REF compare game scores as a function of energy utilization in Kilojoules.", "gala-a2c is distinctly more energy efficient than the other methods, achieving higher game scores with less energy utilization." ], [ "Conclusion", "We propose Gossip-based Actor-Learner Architectures (gala) for accelerating Deep Reinforcement Learning by leveraging parallel actor-learners that exchange information through asynchronous gossip.", "We prove that the gala agents' policies are guaranteed to remain within an $\epsilon $ -ball during training, and verify this empirically as well.", "We evaluated our approach on six Atari games, and found that gala-a2c improves the computational efficiency of a2c, while also providing extra stability and robustness by decorrelating gradients.", "gala-a2c also achieves significantly higher hardware utilization than vanilla a2c at comparable power draws, and is competitive with state-of-the-art methods like a3c and impala." ], [ "Acknowledgments", "We would like to thank the authors of TorchBeast for providing their PyTorch implementation of impala." 
], [ "Reproducibility Checklist", "We follow the reproducibility checklist [23]: For all algorithms presented, check if you include: A clear description of the algorithm.", "See Algorithm  An analysis of the complexity (time, space, sample size) of the algorithm.", "See Figures REF and  REF for an analysis of sample efficiency, wall-clock time, and energy efficiency.", "A link to a downloadable source code, including all dependencies.", "See the attached zip file.", "For any theoretical claim, check if you include: A statement of the result.", "See Propositions REF and REF in the main text.", "A clear explanation of any assumptions.", "See Appendix for full details.", "A complete proof of the claim.", "See Appendix for full details.", "For all figures and tables that present empirical results, check if you include: A complete description of the data collection process, including sample size.", "We used the Arcade Learning Environment [17], specifically we used the gym package - see: github.com/openai/gym A link to downloadable version of the dataset or simulation environment.", "See: github.com/openai/gym An explanation of how samples were allocated for training / validation / testing.", "We did not use explicit training/validation/testing splits, but ran each algorithm with 10 different random seeds.", "An explanation of any data that were excluded.", "We only used 6 Atari games due to time constraints - the same 6 games that were used in [28].", "The range of hyper-parameters considered, method to select the best hyper-parameter configuration, and specification of all hyper-parameters used to generate results.", "We used standard hyper-parameters from [7], [28], [8].", "The exact number of evaluation runs.", "We used 10 seeds for the Atari experiments.", "A description of how experiments were run.", "See Appendix  for full details.", "A clear definition of the specific measure or statistics used to report results.", "$95\\%$ confidence intervals are used in
all plots / tables unless otherwise stated.", "Clearly defined error bars.", "$95\\%$ confidence intervals are used in all plots / tables unless otherwise stated.", "A description of results with central tendency (e.g.", "mean) and variation (e.g.", "stddev).", "$95\\%$ confidence intervals are used in all plots / tables unless otherwise stated.", "A description of the computing infrastructure used.", "See Appendix  for full details." ], [ "Setting and Notation", "Before presenting the theoretical guarantees, we define some notation.", "Suppose we have $n$ learners (e.g., actor-critic agents) configured in a peer-to-peer communication topology represented by a directed and potentially time-varying graph (the non-zero entries in the mixing matrix $\\mathbf {P}^{(k)}$ define the communication topology at each iteration $k$ ).", "Learners constitute vertices in the graph, denoted by $v_i$ for all $i \\in [n]$ , and edges constitute directed communication links.", "Let $\\mathcal {N}^{\\text{out}}_i$ denote agent $v_i$ 's out-peers, the set of agents that $v_i$ can send messages to, and let $\\mathcal {N}^{\\text{in}}_i$ denote agent $v_i$ 's in-peers, the set of agents that can send messages to $v_i$ .", "If the graph is time-varying, these sets are annotated with time indices.", "Let $x_i \\in \\mathbb {R}^{d}$ denote the agent $v_i$ 's complete set of trainable parameters, and let the training function $\\mathcal {T}_i : \\mathbb {R}^d \\mapsto \\mathbb {R}^d$ define agent $v_i$ 's training dynamics (i.e., agent $v_i$ optimizes its parameters by iteratively computing $x_i \\leftarrow \\mathcal {T}_i(x_i)$ ).", "For each agent $v_i$ we define send- and receive-buffers, $\\mathcal {B}_i$ and $\\mathcal {R}_i$ respectively, which are used by the underlying communication system (standard in the gossip literature [33]).", "When an agent wishes to broadcast a message to its out-peers, it simply copies the message into its broadcast buffer.", "Similarly, when agent 
receives a message, it is automatically copied into the receive buffer.", "For convenience, we assume that each learner $v_i$ can hold at most one message from in-peer $v_j$ in its receive buffer, $\\mathcal {R}_i$ at any time $k$ ; i.e., a newly received message from agent $v_j$ overwrites the older one in the receive buffer.", "Let $k \\in \\mathbb {N}$ denote the global iteration counter.", "That is, $k$ increments whenever any agent (or subset of agents) completes one loop in Algorithm .", "Consequently, at each global iteration $k$ , there is a set of agents $\\mathcal {I}$ that are activated, and within this set there is a (possibly non-empty) subset of agents $\\mathcal {C} \\subseteq \\mathcal {I}$ that gossip in the same iteration.", "If a message from agent $v_j$ is received by agent $v_i$ at time $k$ , let $\\tau _{j,i}^{(k)}$ denote the time at which this message was sent.", "Let $\\tau \\ge \\tau _{j,i}^{(k)}$ for all $i,j \\in [n]$ and $k > 0$ denote an upper bound on the message delays.", "For analysis purposes, messages are sent with an effective delay such that they arrive right when the agent is ready to process the messages.", "That is, a message that is sent by agent $v_j$ at iteration $k^\\prime $ and processed by agent $v_i$ at iteration $k$ , where $k \\ge k^\\prime $ , is treated as having experienced a delay $\\tau _{j,i}^{(k)} = k - k^\\prime $ , even if the message actually arrives before iteration $k$ and waits in the receive-buffer.", "Let $\\alpha g_i^{(k)} \\triangleq \\mathcal {T}_i(x_i^{(k)}) - x_i^{(k)}$ denote agent $v_i$ 's local computation update at iteration $k$ after scaling by some reference learning rate $\\alpha > 0$ , and define $g_i^{(k)} \\triangleq 0$ if agent $v_i$ is not active at iteration $k$ .", "Algorithm  can thus be written as follows.", "If agent $v_i$ does not gossip at iteration $k$ , then its update is $x_i^{(k+1)} = x_i^{(k)} + \\alpha g_i^{(k)}.$ If agent $v_i$ does gossip at iteration $k$ , then its update is $x_i^{(k + 
1)} = \\frac{1}{1 + |\\mathcal {N}^{\\text{in}}_i|} \\left(x_i^{(k)} + \\sum _{j \\in \\mathcal {N}^{\\text{in}}_i} x_j^{\\tau _{j,i}^{(k)}} + \\alpha g_i^{(k)} \\right),$ where $x_j^{\\tau _{j,i}^{(k)}}$ is the parameter value of the agent $v_j$ , at the time when the message was sent, i.e., $\\tau _{j,i}^{(k)}$ .", "We can analyze Algorithm  in matrix form by stacking all $n$ agents' parameters, $x_i^{(k)} \\in \\mathbb {R}^d$ , into a matrix $\\mathbf {X}^{(k)}$ , and equivalently stacking all of the update vectors, $g_i^{(k)} \\in \\mathbb {R}^d$ , into a matrix $\\mathbf {G}^{(k)}$ .", "In order to represent the state of messages that are in transit (sent but not yet received), for analysis purposes, we augment the graph topology with virtual nodes using a standard graph augmentation [11] (we add $\\tau $ virtual nodes for each non-virtual agent, where each virtual node stores a learner's parameters at a specific point within the last $\\tau $ iterations).", "Let $\\tilde{n} \\triangleq n(\\tau + 1)$ denote the cardinality of the augmented graph's vertex set.", "Equation (REF ) can be re-written as $\\mathbf {X}^{(k + 1)} = \\mathbf {\\tilde{P}}^{(k)}\\left( \\mathbf {X}^{(k)} + \\alpha \\mathbf {G}^{(k)} \\right),$ where $\\mathbf {X}^{(k)}, \\mathbf {G}^{(k)} \\in \\mathbb {R}^{\\tilde{n}\\times d}$ , and the mixing matrix $\\mathbf {\\tilde{P}}^{(k)} \\in \\mathbb {R}^{\\tilde{n}\\times \\tilde{n}}$ corresponding to the augmented graph is row-stochastic for all iterations $k$ , i.e., all entries are non-negative, and all rows sum to 1.", "Mapping (REF ) to (REF ) may not be obvious, but is quite standard in the recent literature.", "We refer the interested reader to [2], [11].", "[Proof of Proposition REF ] The proof is very similar to the proofs in [3] and [2], and makes use of the graph augmentations in [11], the lower dimensional stochastic matrix dynamics in [5], and the ergodic matrix results in [35].", "Since the matrices $\\mathbf {\\tilde{P}}^{(k)}$ are 
row-stochastic, their largest singular value is 1, which corresponds to singular vectors in $\\text{span}\\left\\lbrace 1_{\\tilde{n}} \\right\\rbrace $ .", "Let the matrix $\\mathbf {Q} \\in \\mathbb {R}^{(\\tilde{n}- 1) \\times \\tilde{n}}$ define an orthogonal projection onto the space orthogonal to $\\text{span}\\left\\lbrace 1_{\\tilde{n}} \\right\\rbrace $ .", "Associated to each matrix $\\mathbf {\\tilde{P}}^{(k)} \\in \\mathbb {R}^{\\tilde{n}\\times \\tilde{n}}$ there is a unique matrix $\\mathbf {\\tilde{P}}^{\\prime (k)} \\in \\mathbb {R}^{(\\tilde{n}- 1) \\times (\\tilde{n}- 1)}$ such that $\\mathbf {Q} \\mathbf {\\tilde{P}}^{(k)} = \\mathbf {\\tilde{P}}^{\\prime (k)} \\mathbf {Q}$ .", "Let $\\mathbf {\\tilde{P}}^{\\prime }$ denote the collection of matrices $\\mathbf {\\tilde{P}}^{\\prime (k)}$ for all $k$ .", "The spectrum of the matrices $\\mathbf {\\tilde{P}}^{\\prime (k)}$ is the spectrum of $\\mathbf {\\tilde{P}}^{(k)}$ after removing one multiplicity of the singular value 1.", "From (REF ), we have $\\begin{split}\\mathbf {Q} \\mathbf {X}^{(k + 1)} =& \\mathbf {Q} \\mathbf {\\tilde{P}}^{(k)} \\cdots \\mathbf {\\tilde{P}}^{(1)} \\mathbf {\\tilde{P}}^{(0)} \\mathbf {X}^{(0)} + \\alpha \\sum ^{k}_{s=0} \\mathbf {Q} \\mathbf {\\tilde{P}}^{(k)} \\cdots \\mathbf {\\tilde{P}}^{(s + 1)} \\mathbf {\\tilde{P}}^{(s)} \\mathbf {G}^{(s)} \\\\=& \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (1)} \\mathbf {\\tilde{P}}^{\\prime (0)} \\mathbf {Q} \\mathbf {X}^{(0)} + \\alpha \\sum ^{k}_{s=0} \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (s + 1)} \\mathbf {\\tilde{P}}^{\\prime (s)} \\mathbf {Q} \\mathbf {G}^{(s)}.\\end{split}$ Note that $\\mathbf {Q} (\\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)}) = 0$ and $(\\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)})^T 1_{\\tilde{n}} = 0$ .", "Thus $\\left\\Vert \\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)} \\right\\Vert =& 
\\left\\Vert \\mathbf {Q}(\\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)}) \\right\\Vert \\\\\\le & \\left\\Vert \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (1)} \\mathbf {\\tilde{P}}^{\\prime (0)} \\mathbf {Q} \\mathbf {X}^{(0)} \\right\\Vert + \\alpha \\sum ^{k}_{s=0} \\left\\Vert \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (s + 1)} \\mathbf {\\tilde{P}}^{\\prime (s)} \\mathbf {Q} \\mathbf {G}^{(s)} \\right\\Vert ,$ where we have implicitly also made use of (REF ).", "Defining $\\beta \\triangleq \\sup _{s=0,1,\\ldots , k} \\sigma _2( \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (s + 1)} \\mathbf {\\tilde{P}}^{\\prime (s)})$ , it follows that $\\left\\Vert \\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)} \\right\\Vert \\le & \\beta ^{k + 1} \\left\\Vert \\mathbf {Q} \\mathbf {X}^{(0)} \\right\\Vert + \\alpha \\sum ^{k}_{s=0} \\beta ^{k + 1 - s} \\left\\Vert \\mathbf {Q} \\mathbf {G}^{(s)} \\right\\Vert .$ Assuming all learners are initialized with the same parameters, the first exponentially decaying term on the right hand side of (REF ) vanishes and we have $\\left\\Vert \\mathbf {X}^{(k + 1)} - \\overline{\\mathbf {X}}^{(k+1)} \\right\\Vert \\le \\alpha \\sum ^{k}_{s=0} \\beta ^{k + 1 - s} \\left\\Vert \\mathbf {G}^{(s)} \\right\\Vert .$ [Proof of Proposition REF ] The proof extends readily from Proposition REF .", "Given the assumptions on the graph sequence, the product of the matrices $\\mathbf {\\tilde{P}}^{(k)} \\cdots \\mathbf {\\tilde{P}}^{(s + 1)} \\mathbf {\\tilde{P}}^{(s)}$ is ergodic for any $k - s \\ge \\tau + B$ (cf. [1]).", "Letting $\\beta \\triangleq \\sup _{s=0,1,\\ldots , k} \\sigma _2( \\mathbf {\\tilde{P}}^{\\prime (k)} \\cdots \\mathbf {\\tilde{P}}^{\\prime (s)})$ , it follows from [35] and [5] that $\\beta < 1$ ."
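The two per-agent update rules analyzed above (a plain local step when not gossiping, and a weighted average with in-peer parameters when gossiping) can be sketched numerically; the function names and toy values are ours:

```python
import numpy as np

def local_update(x, g, alpha):
    # Non-gossip iteration: x^{(k+1)} = x^{(k)} + alpha * g^{(k)}.
    return x + alpha * g

def gossip_update(x, received, g, alpha):
    """Gossip iteration: average own parameters with the (possibly stale)
    parameters received from in-peers, plus the scaled local update:
    x^{(k+1)} = (x + sum_j x_j + alpha * g) / (1 + |N_i^in|)."""
    return (x + sum(received) + alpha * g) / (1.0 + len(received))

x = np.array([0.0, 0.0])
x = local_update(x, g=np.array([1.0, 1.0]), alpha=0.5)
x = gossip_update(x, received=[np.array([1.5, 1.5])], g=np.zeros(2), alpha=0.5)
```

With one in-peer, the mixing weight is 1/2, matching the 1-peer ring configuration used in the experiments.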
], [ "While a3c was originally proposed with CPU-based agents with 1-simulator per agent, [28] propose a variant in which each agent manages 16-simulators and performs batched inference on a GPU.", "Figure REF compares 64-simulator learning curves using a3c as originally proposed in [19] to the large-batch variant in [28].", "The large-batch variant appears to be more robust and computationally efficient; therefore, we use this GPU-based version of a3c in our main experiments to provide a more competitive baseline." ], [ "Experimental Setup", "All experiments use the network suggested by [7].", "Specifically, the network contains 3 convolutional layers and one hidden layer, followed by a linear output layer for the policy/linear output layer for the critic.", "The hyper-parameters for a2c, a3c and gala-a2c are summarized in Table REF .", "impala hyperparameters are the same as reported in [8] (cf.", "table $G.1$ in their appendix).", "Table: Hyperparameters for a2c, gala-a2c, and a3c", "In all gala experiments we used 16 environments per learner, e.g., in the 64-simulator experiments in Section REF we use 4 learners.", "gala agents communicate using a 1-peer ring network.", "Figure REF shows an example of such a ring network.", "The non-zero weights $p_{i,j}$ of the mixing matrix $\\mathbf {P}$ corresponding to the 1-peer ring are set to $\\frac{1}{1 + |\\mathcal {N}^{\\text{in}}_i|}$ , which is equal to $1/2$ as $|\\mathcal {N}^{\\text{in}}_i| = 1$ for all $i$ in the 1-peer ring graph.", "Figure: Example of an n-agents/1-peer ring communication topology used in our experiments" ] ]
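A minimal sketch of the 1-peer ring mixing matrix described above, with non-zero weights $1/(1 + |\\mathcal {N}^{\\text{in}}_i|) = 1/2$ ; the helper name and in-peer convention (agent $i$ listens to agent $i-1$) are ours:

```python
import numpy as np

def one_peer_ring(n):
    """Row-stochastic mixing matrix P for a directed 1-peer ring:
    each agent i mixes its own parameters with those of one in-peer
    (here agent (i - 1) % n), so each non-zero weight is 1/2."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = 0.5
    return P

P = one_peer_ring(4)
```

Each row sums to 1, which is the row-stochasticity property the analysis in the appendix relies on.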
1906.04585
[ [ "FAMED-Net: A Fast and Accurate Multi-scale End-to-end Dehazing Network" ], [ "Abstract Single image dehazing is a critical image pre-processing step for subsequent high-level computer vision tasks.", "However, it remains challenging due to its ill-posed nature.", "Existing dehazing models tend to suffer from model overcomplexity and computational inefficiency or have limited representation capacity.", "To tackle these challenges, here we propose a fast and accurate multi-scale end-to-end dehazing network called FAMED-Net, which comprises encoders at three scales and a fusion module to efficiently and directly learn the haze-free image.", "Each encoder consists of cascaded and densely connected point-wise convolutional layers and pooling layers.", "Since no larger convolutional kernels are used and features are reused layer-by-layer, FAMED-Net is lightweight and computationally efficient.", "Thorough empirical studies on public synthetic datasets (including RESIDE) and real-world hazy images demonstrate the superiority of FAMED-Net over other representative state-of-the-art models with respect to model complexity, computational efficiency, restoration accuracy, and cross-set generalization.", "The code will be made publicly available." 
], [ "Introduction", "Images captured in hazy conditions often suffer from absorption and scattering effects caused by floating atmospheric particles such as dust, mist, and fumes, which can result in low contrast, blurry, and noisy images.", "This degraded image quality potentially challenges many subsequent high-level computer vision tasks, $e.g.$ , object detection [1], [2], [3] and segmentation [4], [5], [6].", "Therefore, removing haze and improving image quality benefits these applications, making image dehazing a subject of intense research and practical focus.", "To be specific, image haze removal or dehazing refers to a technique that restores a haze-free image from a single or several observed hazy images.", "Many dehazing approaches have been proposed, which can be categorized into those that: 1) use auxiliary information such as scene depth [7] and polarization [8]; 2) use a sequence of captured images [9]; 3) use a single hazy image [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], as the model input when dehazing.", "Of these, single image dehazing without the need for additional information is of most practical benefit.", "However, as a typical ill-posed problem, single image dehazing remains challenging and requires refinement.", "The presence of haze leads to the combination of an attenuation term corresponding to the absorbing effect and a scattering term corresponding to the scattering effect that occur during imaging.", "Both terms are related to an intermediate variable, that is, transmission, which depends on scene depth.", "One feasible haze removal solution is to estimate the transmission and then recover the clear image by reversing the attenuation and scattering.", "Many single image dehazing methods have been proposed [13], [26], [14], [15], [16], [17], [18], [21], which use either hand-crafted features ($e.g.$ , different image priors) or learning-based features to estimate the haze 
transmission.", "For example, He et al.", "[14] proposed a simple and effective dark channel prior for single image dehazing, which assumes that the minimum of all the spectral channels in clear images (the “dark channel”) is close to zero.", "The method effectively estimates the haze transmission.", "However, the dark channel prior may not work for some particular scenes such as for white objects, which are similar to atmospheric light, because it underestimates the transmission and leads to over-dehazed artifacts.", "Zhu et al.", "[16] proposed a color attenuation prior that assumes a positive correlation between the scene depth and the haze concentration, which is represented by subtracting saturation from scene brightness.", "Then, the scene depth and haze transmission are easily estimated by a regressed linear model based on the above prior.", "Recently, Berman et al.", "[17] proposed a non-local prior based on the assumption that colors in a clear image can be approximated by some distinct colors clustering tightly in RGB space.", "Being affected by haze, each cluster becomes a line in RGB space (haze-line) due to the varying transmission coefficients of the clustered pixels.", "Consequently, the transmission and clear image are estimated according to these haze lines.", "Though prior-based methods are usually simple and effective for many scenes, they share the common limitation of describing specific statistics, which may not work for some images.", "Learning-based methods adopt a data-driven approach to learn a linear/non-linear mapping between features and transmission and so overcome the limitations of specific priors.", "For example, Tang et al.", "[15] proposed learning a regression model based on random forests from haze-relevant features including the dark channel, local max contrast, hue disparity, and local max saturation.", "They trained the model using a synthetic dataset and tested it on both synthetic and real-world hazy images, which 
then became common practice in subsequent learning-based methods.", "The learning-based idea for dehazing has subsequently been extended in three ways: 1) more powerful learning models; 2) more effective synthetic methods and larger datasets; 3) end-to-end modeling/training.", "Deep neural networks have now been successfully applied to many computer vision tasks including object recognition, detection, and semantic segmentation.", "By leveraging their powerful representation capacity and end-to-end learning, many deep convolutional neural network (CNN)-based approaches were proposed for image dehazing [18], [19], [20], [21], [22], [23], [24].", "For example, Cai et al.", "[18] proposed an end-to-end trainable deep CNN model called DehazeNet to directly learn the transmission from hazy images, which is superior to contemporary prior-based methods and random forest models [15].", "Ren et al.", "[19] proposed a multi-scale CNN (MSCNN) to learn the transmission map in a fully convolutional manner and explore a multi-scale architecture for coarse-to-fine regression.", "Despite the effectiveness of CNN-based approaches, a separate step is still needed to estimate the atmospheric light.", "Recently, Zhang et al.", "[23] proposed an end-to-end densely connected pyramid dehazing network (DCPDN) to jointly learn the transmission map, atmospheric light, and dehazing.", "They adopted an encoder-decoder architecture with a multi-level pyramid pooling module to learn multi-scale features.", "They also utilized an adversarial loss based on a generative adversarial network [27] to supervise the dehazing network.", "Rather than estimating the intermediate transmission, Li et al .", "[20] proposed an end-to-end CNN model called the all-in-one dehazing network (AOD-Net) to learn the clear image from a hazy one.", "They integrated the transmission and atmospheric light into a single variable by reformulating the hazy imaging model.", "Ren et al.", "[22] proposed a gated fusion network 
(GFN) by adopting an encoder-decoder architecture, while Li et al.", "[24] also designed an encoder-decoder architecture but based on a conditional generative adversarial network (cGAN) to learn the dehazed image end-to-end.", "Though cGAN and DCPDN have achieved good dehazing results, they contain dozens of convolutional layers and are about 200 MB in size, making them awkward and unlikely to be applicable in the resource-constrained context of a computer vision system.", "In this paper, we aim to develop a fast and accurate deep CNN model for single image dehazing.", "We use a fully convolutional and end-to-end training/testing approach to efficiently process hazy images of arbitrary size.", "To this end, we propose a fast and accurate multi-scale dehazing network called FAMED-Net, which comprises encoders at three scales and a fusion module to directly learn the haze-free image.", "Each encoder consists of cascaded point-wise convolutional layers and pooling layers via a densely connected mechanism.", "Since no larger convolutional kernels are used and features are reused layer-by-layer, FAMED-Net is lightweight and computationally efficient.", "Thorough empirical studies on public synthetic datasets and real-world hazy images demonstrate the superiority of FAMED-Net over representative state-of-the-art models with respect to model complexity, computational efficiency, restoration accuracy, and cross-set generalization.", "The code will be made publicly available at https://github.com/chaimi2013/FAMED-Net.", "The main contributions of this paper can be summarized as follows: $\\bullet $ We devise a novel multi-scale end-to-end dehazing network called FAMED-Net, which implicitly learns efficient statistical image priors for fast and accurate haze removal from a single image.", "$\\bullet $ FAMED-Net leverages fully point-wise convolutions as the basic unit to construct the encoder-decoder architecture, which has a small model size and is computationally 
efficient.", "$\\bullet $ FAMED-Net outperforms state-of-the-art models on both synthetic benchmarks and real-world hazy images.", "Images captured in hazy conditions can be mathematically formulated as [28], [29], [14]: ${I^\\lambda }\\left( x \\right) = {J^\\lambda }\\left( x \\right)t\\left( x \\right) + {A^\\lambda }\\left( {1 - t\\left( x \\right)} \\right),$ where $I$ is the observed hazy image, $J$ is the scene radiance, $A$ is the atmospheric light assumed to be a global constant, $t$ is the haze transmission, $x$ denotes pixel location, and $\\lambda $ denotes the spectral channel, $i.e.$ , $\\lambda \\in \\left\\lbrace {r,g,b} \\right\\rbrace $ .", "The first term, called the attenuation term, represents the haze absorbing effect on scene radiance, while the second term, called the scattering term, represents the haze scattering effect on ambient light.", "$t$ describes the fraction of scene radiance reaching the camera sensor, hence the name “transmission”, which depends on scene depth.", "Under the homogeneous haze assumption, the transmission can be expressed as: $t\\left( x \\right) = {e^{ - \\beta d\\left( x \\right)}},$ where $\\beta $ denotes the medium attenuation coefficient and $d$ is the scene depth.", "Recently, Li et al.", "[20] reformulated the imaging model in Eq.", "(REF ) by integrating the transmission and atmospheric light into a single variable $K$ : ${K^\\lambda }\\left( x \\right) \\triangleq \\frac{{\\frac{1}{{t\\left( x \\right)}}\\left( {{I^\\lambda }\\left( x \\right) - {A^\\lambda }} \\right) + \\left( {{A^\\lambda } - 1} \\right)}}{{{I^\\lambda }\\left( x \\right) - 1}},$ ${J^\\lambda }\\left( x \\right) = {K^\\lambda }\\left( x \\right){I^\\lambda }\\left( x \\right) - {K^\\lambda }\\left( x \\right) + 1.$ They designed an end-to-end network (AOD-Net) which learns a direct mapping from a raw hazy image to scene radiance."
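The reformulation above can be checked numerically: synthesizing a hazy pixel with the atmospheric scattering model, computing $K$, and applying $J = KI - K + 1$ recovers the scene radiance exactly. The helper names and toy values are ours:

```python
import numpy as np

def synthesize_hazy(J, t, A):
    # Atmospheric scattering model: I = J * t + A * (1 - t).
    return J * t + A * (1.0 - t)

def K_variable(I, t, A):
    # AOD-Net's integrated variable K (well-defined wherever I != 1).
    return ((I - A) / t + (A - 1.0)) / (I - 1.0)

def recover_radiance(I, K):
    # Dehazing once K is known (or learned): J = K * I - K + 1.
    return K * I - K + 1.0

J = np.array([0.2, 0.5, 0.8])   # toy clear pixel (r, g, b)
t, A = 0.6, 0.9                 # transmission, atmospheric light
I = synthesize_hazy(J, t, A)
J_hat = recover_radiance(I, K_variable(I, t, A))
```

This round trip is exact because $(I - A)/t = J - A$, so $K(I - 1) + 1 = J$; in practice AOD-Net and FAMED-Net estimate $K$ with a network rather than from known $t$ and $A$.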
], [ "Prior-based and Learning-based Image Dehazing Methods", "As can be seen from the atmospheric scattering model in Eq.", "(REF ), given an observed hazy image $I$ , recovering the scene radiance is ill-posed.", "Different image priors have been proposed to constrain the haze-free image and make the estimate tractable, including the dark channel prior [14], color attenuation prior [16], and non-local prior [17], etc.", "As defined in [14], each pixel value of the dark channel refers to the minimum pixel value on each patch centered at every pixel position.", "Figure REF shows an example of the dark channels on both clear and hazy images.", "As can be seen, the dark channel of a clear image is almost dark everywhere except for the bright sky region, while the dark channel of a hazy image reveals the haze veil due to the haze scattering effect (corresponding to the second term in Eq.", "(REF )).", "Based on the dark channel prior, the transmission can be efficiently estimated from the dark channel map.", "It is noteworthy that the pixel value of the dark channel reveals the haze density (which is related to scene depth) even though it is calculated locally in a sliding window manner (see the regions and corresponding values indicated by the red boxes).", "It can be explained as follows: 1) the haze effects of both attenuation and scattering, which are directly related to scene depth, can be described as a pixel-to-pixel ($i.e.$ , locally) mapping from clear pixel to hazy pixel by the atmospheric scattering model.", "2) the dark channel prior reveals the intrinsic local statistical property of clear images.", "Similar to [14], our approach also solves the dehazing problem in a local manner which implicitly learns a statistical image prior, as will be demonstrated in Section REF .", "Figure: An illustration of dark channels calculated on both clear (the top row) and hazy images (the bottom row).", "To overcome the limitations of prior-based methods, many deep CNN-based 
data-driven dehazing models have been proposed since Cai et al.", "[18] proposed DehazeNet, including MSCNN [19], AOD-Net [20], FPCNet [21], DCPDN [23], GFN [22], cGAN [24] and proximal DehazeNet [30].", "These can be categorized into those that: 1) estimate $t$ using CNN [18], [19], [21], [23], [30]; 2) directly learn the scene radiance end-to-end [20], [23], [22], [24], [30].", "Our proposed method falls into the latter category and is partly inspired by AOD-Net [20] and FPCNet [21].", "In contrast to AOD-Net, we propose a fully point-wise CNN to regress $K$ and produce a stronger representation capacity.", "In contrast to FPCNet, we propose: 1) an end-to-end model to regress the scene radiance directly; 2) a multi-scale architecture to handle the scale variance, which achieves much better results than FPCNet while maintaining low model complexity and high computational efficiency; and 3) a new training/testing strategy that negates the need for a pre-processing shuffling step.", "Compared to MSCNN, in which coarse-scale predictions are used as part of the input for the finer scale, the proposed method adopts a Gaussian pyramid architecture and follows a late fusion strategy.", "It produces better dehazing results than MSCNN and runs faster.", "Compared to the recently proposed DCPDN and cGAN, our model is much more compact, $i.e.$ , less than 90 KB vs. about 200 MB, while having a high restoration accuracy and computational efficiency.", "Figure: Schematic of the end-to-end fully point-wise CNN for single image dehazing, i.e., FAMED-Net-SS.", "K-encoder comprises cascaded point-wise convolutional layers and pooling layers via a densely connected mechanism for learning $K$ in Eq.", "()."
], [ "Multi-scale pyramid architecture", "The pyramid structure is a basic idea used for both multi-resolution image representation and multi-scale feature representation in the computer vision area, for example, Gaussian pyramid, Laplacian pyramid, wavelet [31], and SIFT [32].", "Leveraging this classical idea, a CNN produces a feature pyramid through stacked convolutional layers and spatial pooling layers.", "Recently, different multi-scale image or feature pyramid architectures have been devised for both low- and high-level computer vision applications, including deep Laplacian pyramid networks for image super-resolution [33], DeepExposure using Laplacian pyramid decomposition [34], deep generative image models [35], Laplacian pyramid reconstructive adversarial network [36], Deeplab using an image pyramid for semantic segmentation [37], and feature pyramid networks for object detection [38].", "Our approach also adopts the Gaussian/Laplacian pyramid architectures for multi-scale fusion (see Figure REF and Figure REF ).", "In contrast to the above methods, the proposed FAMED-Net is specifically devised for single image dehazing.", "Moreover, it leverages fully point-wise convolutions instead of convolutions with large kernels for constructing a lightweight and computationally efficient network."
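As a rough illustration of the Gaussian-pyramid input used by the multi-scale architecture, a 2x downsampling pyramid can be built as below. A box filter stands in for the usual Gaussian blur before subsampling; this simplification, and the function name, are ours rather than the paper's exact preprocessing:

```python
import numpy as np

def gaussian_pyramid(img, levels=3):
    """Multi-resolution pyramid via repeated 2x2 average-pool downsampling.

    img: 2-D numpy array (a single-channel image). Each successive level
    halves the spatial resolution, giving the coarse-to-fine inputs fed
    to the encoders at different scales.
    """
    pyr = [img]
    for _ in range(levels - 1):
        x = pyr[-1]
        # Crop to even dimensions, then average each 2x2 block.
        h, w = x.shape[0] - x.shape[0] % 2, x.shape[1] - x.shape[1] % 2
        x = x[:h, :w]
        x = 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])
        pyr.append(x)
    return pyr

pyr = gaussian_pyramid(np.ones((8, 8)), levels=3)
```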
], [ "Deep Supervision", "Adding auxiliary supervision on intermediate layers within a deep neural network, also known as deep supervision, was originally proposed by Xie and Tu in the seminal works [39], [40].", "This technique facilitates multi-scale and multi-level feature learning by allowing error information backpropagation from multiple paths and alleviating the problem of vanishing gradients in deep neural networks.", "Deep supervision has been widely adopted in subsequent work in different areas such as Deeplab for semantic segmentation [37], MSCNN for image dehazing [19], LapSRN for image super-resolution [33], etc.", "We also add supervision on the dehazed image at each scale by leveraging the deep supervision idea." ], [ "A Probabilistic View to Solving the Ill-posed Dehazing Problem", "Eq.", "(REF ) and Eq.", "(REF ) can be re-written as: $\\left( {{I^\\lambda }\\left( x \\right) - {A^\\lambda }} \\right) = \\left( {{J^\\lambda }\\left( x \\right) - {A^\\lambda }} \\right)t\\left( x \\right),$ $\\left( {{I^\\lambda }\\left( x \\right) - 1} \\right) = \\left( {{J^\\lambda }\\left( x \\right) - 1} \\right)\\frac{1}{{{K^\\lambda }\\left( x \\right)}}.$ Applying a logarithmic operation to both sides of the above equation produces the following general form: $y = x + z,$ where $y$ is the observed degraded image, $x$ is the ground truth haze-free image, and $z$ is the intermediate variable related to the degrading process.", "$x$ and $z$ can be estimated using maximum a posteriori estimation (MAP), $i.e.$ , $\\nonumber \\left( {{x^*},{z^*}} \\right) &= \\mathop {\\arg \\max }\\limits _{\\left( {x,z} \\right)} p\\left( {x,z\\left| y \\right.} \\right) \\\\ \\nonumber &= \\mathop {\\arg \\max }\\limits _{\\left( {x,z} \\right)} \\frac{{p\\left( {y\\left| {x,z} \\right.} \\right)p\\left( {x,z} \\right)}}{{\\int \\limits _X {\\int \\limits _Z {p\\left( {y\\left| {x,z} \\right.} \\right)p\\left( {x,z} \\right)dxdz} } }} \\\\&= \\mathop {\\arg \\max 
}\\limits _{\\left( {x,z} \\right)} p\\left( {y\\left| {x,z} \\right.} \\right)p\\left( {z\\left| x \\right.} \\right)p\\left( x \\right).$ $p\\left( {y\\left| {x,z} \\right.} \\right)$ is the data likelihood, which corresponds to the data fidelity term measuring the reconstruction error.", "When using the L2 loss to supervise network training, it indeed assumes a normal distribution about the reconstruction error (see Section REF and the yellow circle in Figure REF ).", "The L1 loss can also be used to enforce a sparse constraint.", "$p\\left( {z\\left| x \\right.} \\right)$ is the conditional distribution of $z$ conditioned on the clear haze-free image.", "For example, DCP [14] assumes that $p\\left( {DarkChannel\\left| x \\right.} \\right)$ ($i.e.$ , $p\\left( {1 - t\\left| x \\right.} \\right)$ ) concentrates on zeros.", "As with DehazeNet [18] and AOD-Net [20], the networks can implicitly learn $p\\left( {t\\left| x \\right.} \\right)$ and $p\\left( {K\\left| x \\right.} \\right)$ , as we will show in Section REF .", "$p\\left( x \\right)$ is the prior distribution of $x$ , usually assumed to be long-tailed due to the spatial continuity in natural images (locally smooth regions and sparse abrupt edges) [41].", "Markov Random Fields or simple filters like the guided filter are used to model the spatial continuity [42].", "Based on the above analysis, the key is to construct a model that can effectively learn statistical regularities.", "As shown in [21], statistical regularities in natural images can be efficiently learned by point-wise convolutions, which are compact and resist over-fitting.", "Partly inspired by [21], we devise a novel end-to-end fully point-wise CNN for single image dehazing.", "Figure: Schematic of the multi-scale FAMED-Net architecture.", "(a) The Gaussian pyramid architecture with a late fusion module, i.e., FAMED-Net-GP.", "(b) The Laplacian pyramid architecture with a late fusion module, i.e., FAMED-Net-LP."
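The log-domain additive form $y = x + z$ used in the MAP analysis can be verified numerically for the attenuation equation: taking logs of $(I - A) = (J - A)t$ gives an additive degradation model. The values below are illustrative only, chosen so that $J > A$ keeps the logarithms real:

```python
import numpy as np

# (I - A) = (J - A) * t. Taking logs of both sides yields the
# additive degradation model y = x + z analyzed via MAP above.
J, A, t = 0.5, 0.1, 0.6
I = J * t + A * (1.0 - t)   # hazy observation per the scattering model
y = np.log(I - A)           # observed (degraded) quantity
x = np.log(J - A)           # clear-image quantity
z = np.log(t)               # degradation term
```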
], [ "The Single-scale FAMED-Net: FAMED-Net-SS", "As shown in Figure REF , the network is designed to learn the reformulated variable $K$ in Eq. (REF ) and recover the scene radiance according to Eq. (REF ) (see [20]).", "There are five point-wise convolutional layers, in which the first four form the K-encoder and the last forms the decoder.", "Features corresponding to different receptive fields are reused via dense connections (see black arcs and cubes in Figure REF ).", "Mathematically, this can be formulated as: ${f^{l + 1}} = {\\varphi ^{l + 1}}\\left( {concat\\left( {{f^k}\\left| {k \\in {\\Lambda ^{l + 1}}} \\right.} \\right)} \\right),l \\in \\left[ {0,4} \\right],$ where ${{f^k}}$ represents the learned features from the $k^{th}$ block.", "We denote the input as the 0th block, the hazy image of size $H \\times W \\times 3$ as ${{f^0}}$ , and the decoded features in the 5th block as $K$ , $i.e.$ , $K \\triangleq {f^5}$ .", "${{\\Lambda ^{l + 1}}}$ denotes the index set, which indexes the feature maps used by the $(l+1)^{th}$ block via dense connections ($concat$ ), $i.e.$ , ${\\Lambda ^1} = \\lbrace 0\\rbrace $ , ${\\Lambda ^2} = \\lbrace 1\\rbrace $ , ${\\Lambda ^3} = \\lbrace 1,2\\rbrace $ , ${\\Lambda ^4} = \\lbrace 2,3\\rbrace $ , ${\\Lambda ^5} = \\lbrace 1,2,3,4\\rbrace $ in the proposed network.", "${\\varphi ^{l + 1}}$ denotes the mapping function in the $(l+1)^{th}$ block, learned by a combination of a convolutional layer, a batch normalization layer, a ReLU layer and a pooling layer.", "We leverage pooling layers of different kernel sizes ($r^l \\times r^l$ ) after each convolutional layer to aggregate multi-level statistics (features) within the receptive fields, $i.e.$ , $r^l = 2l - 1,l \\in \\left[ {1,4} \\right]$ .", "It is noteworthy that by using a combination of point-wise convolutional layers and an $r^l \\times r^l$ pooling layer, the output node has a receptive field of $r^l \\times r^l$ , which is equivalent to the one 
using an $r^l \\times r^l$ convolutional layer alone.", "In this way, we retain the representation capacity of the neural network for statistical modeling while using fewer parameters, leading to a more compact architecture.", "Further, no pooling or batch normalization layers are used in the final (5th) block.", "Since pooling with a $1\\times 1$ kernel is trivial, it is omitted.", "Strides in both the convolutional and pooling layers are set to 1 to retain the feature map size.", "The output feature channels in the K-encoder are kept at 32, $i.e.$ , ${f^l} \\in {R^{H \\times W \\times 32}},l \\in \\left[ {1,4} \\right]$ (see blue cubes in Figure REF ).", "Then, the decoded $K$ map is used to recover the scene radiance according to Eq. (REF ) (see the yellow circle in Figure REF ).", "This structure is denoted FAMED-Net-SS, where “SS” stands for single scale.", "We use the L2 loss to supervise the network during training: ${w^*} = \\mathop {\\arg \\min }\\limits _w {\\left\\Vert {J - J\\left( {I;w } \\right)} \\right\\Vert ^2} + \\lambda {\\left\\Vert w \\right\\Vert ^2},$ where ${J\\left( {I;w } \\right)}$ is the estimated scene radiance, $w$ represents the learnable parameters of the network, and $\\lambda $ is the weight decay factor in the regularization term."
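To make the point-wise design concrete, the following NumPy sketch (our illustration, not the authors' code) shows that a $1\times 1$ convolution is a shared per-pixel channel mixing, and that following it with a size-preserving $r \times r$, stride-1 pooling layer yields an $r \times r$ receptive field:

```python
import numpy as np

def pointwise_conv(x, w):
    # 1x1 convolution: a shared linear map over channels applied at every pixel.
    # x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)
    return np.einsum("hwc,co->hwo", x, w)

def avg_pool_same(x, r):
    # r x r average pooling, stride 1, edge padding: the feature map size is retained.
    h, w, _ = x.shape
    p = r // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)), mode="edge")
    return np.stack([[xp[i:i + r, j:j + r].mean(axis=(0, 1))
                      for j in range(w)] for i in range(h)])

rng = np.random.default_rng(0)
x = rng.random((8, 8, 3))    # a tiny stand-in for a hazy image
w1 = rng.random((3, 32))     # first K-encoder block: 3 -> 32 channels
f1 = avg_pool_same(pointwise_conv(x, w1), r=3)  # receptive field is now 3x3
print(f1.shape)  # (8, 8, 32)
```

The parameter cost of this pair is $C_{in}C_{out}$ weights versus $r^2 C_{in}C_{out}$ for a plain $r \times r$ convolution, which is why the network stays compact.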
], [ "The Multi-scale Variants of FAMED-Net: FAMED-Net-GP and FAMED-Net-LP", "Objects at distinct distances are of different sizes in the captured images, leading to variably sized homogeneous regions in the transmission map or $K$ map.", "To handle the multi-scale characteristics, we extend the proposed network to multi-scale by adopting a Gaussian pyramid architecture as shown in Figure REF (a).", "We down-sample the input hazy image to two additional scales, $i.e.$ , 1/2 scale and 1/4 scale, respectively.", "Then, we construct a K-encoder for each scale without sharing weights.", "Further, the estimated $K$ maps from the coarse scales are interpolated to the original scale and concatenated as: ${K_{concat}} \\triangleq \\left[ {{K_1};{K_2}{ \\uparrow _{ \\times 2}};{K_3}{ \\uparrow _{ \\times 4}}} \\right],$ where ${K_s}{ \\uparrow _{ \\times m}},s \\in \\left[ {2,3} \\right],m = 2\\left( {s - 1} \\right)$ denote the interpolated $K$ maps.", "Bilinear interpolation is used for both down-sampling and up-sampling.", "Then, we introduce a fusion module to fuse the multi-scale estimates into a more reliable one, which is again implemented by a $1\\times 1$ convolutional layer and a ReLU layer ${\\varphi ^6}$ as: ${K_{fusion}} = {\\varphi ^6}\\left( {{K_{concat}}} \\right).$ Finally, $K_{fusion}$ is used to recover the scene radiance according to Eq. (REF ).", "This structure is denoted FAMED-Net-GP, where “GP” stands for Gaussian pyramid.", "The L2 loss is used to supervise the network: $\\begin{array}{c}{w^*} = \\mathop {\\arg \\min }\\limits _w \\sum \\limits _{s = 1,2,3} {{\\alpha _s}{{\\left\\Vert {{J_s} - {J_s}\\left( {I;w} \\right)} \\right\\Vert }^2}} + \\\\\\qquad \\qquad {\\alpha _{fusion}}{\\left\\Vert {{J_1} - {J_{fusion}}\\left( {I;w} \\right)} \\right\\Vert ^2} + \\lambda {\\left\\Vert w \\right\\Vert ^2} \\\\\\end{array},$ where $J_s$ and ${{J_s}\\left( {I;w} \\right)}$ represent the ground truth and the estimated scene radiance at each scale, and 
${{J_{fusion}}\\left( {I;w} \\right)}$ represents the estimated scene radiance from the fusion module.", "$\\alpha _s$ and $\\alpha _{fusion}$ are loss weights, which are all set to 1.", "In addition to the Gaussian pyramid architecture, we also adopt a Laplacian pyramid architecture for comparison.", "As shown in Figure REF (b), the estimated $K$ map at the coarse scale is interpolated and added to the K-encoder output at the finer scale.", "Mathematically, it can be formulated as: ${K_s} = {K_{s + 1}}{ \\uparrow _{ \\times 2}} + \\Delta {K_s},s \\in \\left[ {1,2} \\right].$ This enforces the K-encoder at the finer $s^{th}$ scale to learn a residual $\\Delta {K_s}$ .", "The other parts are kept the same as in the Gaussian pyramid architecture.", "This structure is denoted FAMED-Net-LP, where “LP” stands for Laplacian pyramid.", "It is noteworthy that the receptive field of FAMED-Net-SS is $13\\times 13$ , which is similar to the local window size in prior-based dehazing methods, $e.g.$ , $15\\times 15$ in DCP [26] and MRP [43].", "As for FAMED-Net-GP and FAMED-Net-LP, their receptive fields become larger, $i.e.$ , $52\\times 52$ , which enables the network to learn more effective statistical regularities."
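A schematic NumPy sketch of the Gaussian-pyramid fusion described above (our illustration; the actual network uses bilinear resampling, learned per-scale K-encoders, and learned $1\times 1$ fusion weights):

```python
import numpy as np

def down2(x):
    # 2x down-sampling via 2x2 average pooling (the paper uses bilinear interpolation)
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x, m):
    # nearest-neighbour up-sampling by factor m (the paper uses bilinear interpolation)
    return np.repeat(np.repeat(x, m, axis=0), m, axis=1)

def k_encoder(x):
    # stand-in for a per-scale K-encoder producing a 3-channel K map
    return 0.5 * x + 0.1

x1 = np.random.default_rng(0).random((16, 16, 3))   # full-scale hazy image
x2, x3 = down2(x1), down2(down2(x1))                # 1/2 and 1/4 scales
k_concat = np.concatenate(
    [k_encoder(x1), up(k_encoder(x2), 2), up(k_encoder(x3), 4)], axis=2)
w_fuse = np.full((9, 3), 1.0 / 9)                   # the 1x1 fusion conv (uniform weights here)
k_fusion = np.maximum(np.einsum("hwc,co->hwo", k_concat, w_fuse), 0.0)  # + ReLU
print(k_concat.shape, k_fusion.shape)  # (16, 16, 9) (16, 16, 3)
```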
], [ "Model Complexity Analysis", "[1]Evaluated with FLOPs, $i.e.$ , the number of floating-point multiplication-adds.", "The details of FAMED-Net are shown in Table REF .", "It can be seen that FAMED-Net is very lightweight and compact thanks to the fully point-wise convolutions.", "For example, FAMED-Net-SS only contains 5,987 learnable parameters and has 9.39x10$^7$ FLOPs.", "The number of learnable parameters increases threefold in FAMED-Net-GP, while the FLOPs only increase by about 30%.", "FAMED-Net can process hazy images of arbitrary size due to its fully convolutional structure, with the computational cost increasing linearly with the image size.", "To reduce the required FLOPs for large images, we propose a fixed-size testing strategy.", "First, we resize the hazy image so that its longest side is 360 pixels and input it into the network.", "Then, we resize the estimated $K$ map from the fusion module back to the original size using bilinear interpolation.", "Further, we use the fast-guided filter [44] to refine the interpolated $K$ map.", "The fast-guided filter is $d^2$ -times faster than the original $O(N)$ -guided filter [42], with almost no visible degradation, where $d$ is the down-sampling ratio (refer to [44] for details).", "Finally, the scene radiance is recovered according to Eq. (REF ).", "In this way, we can process hazy images of arbitrary size at an almost fixed computational cost.", "We present our comparisons with state-of-the-art models in Table REF including parameters, model size, and runtime.", "These comparisons clearly show that FAMED-Net is lightweight and computationally efficient.", "More details can be found in Section REF .", "Table: Comparison of FAMED-Net and state-of-the-art models with respect to parameters, model size, and runtime.", "[2]The number was calculated on 512x512 images since DCPDN required a fixed-size input.", "To evaluate the performance of FAMED-Net, we compared it with state-of-the-art image prior-based methods 
including DCP [26], FVR [45], BCCR [46], GRM [47], CAP [16], and NLD [17] and deep CNN-based methods including DehazeNet [18], MSCNN [19], AOD-Net [20], [2], FPCNet [21], GFN [22], and DCPDN [23].", "We adopted the recently proposed RESIDE [3] as the benchmark dataset due to its large scale and diverse data sources and image contents.", "RESIDE contains 110,500 synthetic hazy indoor images (ITS) and 313,950 synthetic hazy outdoor images (OTS) in the training set.", "We reported the PSNR and SSIM for each method on the SOTS test set, which includes both indoor and outdoor scenes (500 of each).", "We also compared the subjective visual effects on real-world hazy images used in the literature.", "Ablation studies were conducted on TestSet-S containing 400 hazy indoor/outdoor images, a dataset initially used in a challenge [48].", "Figure: Statistics of depth levels of image patches ($128\\times 128$ ) in the RESIDE training set.", "FAMED-Net was trained for a total of 400,000 iterations on the combination of ITS and OTS in RESIDE.", "128x128 patches randomly cropped from training images were used for training.", "Figure REF shows the corresponding statistics of depth levels within the training patches.", "We quantized depth maps into 10 uniform levels according to the maximum and minimum depth values.", "Then, we counted the number of unique depth levels within each patch and calculated the histogram and its corresponding cumulative distribution as shown in Figure REF .", "As can be seen, almost 65% of patches cover at least 3 depth levels and more than 40% of patches cover at least 4 depth levels.", "It is noteworthy that since the sizes of training images from different scenes are around $550\\times 400$ , each $128\\times 128$ patch could cover diverse scene structures, as evidenced by the statistics.", "Consequently, there are different levels of haze in each patch, $i.e.$ , light and dense haze.", "This facilitates FAMED-Net, with its $52\\times 52$ receptive field, to learn 
effective feature representations while avoiding overfitting to plain structures.", "Hyper-parameters were tuned on the validation set.", "The batch size was set to 48.", "The initial learning rate was set to 0.00001, and was decreased by a factor of 10 after 200,000 and 320,000 iterations.", "The momentum and weight decay were set to 0.9 and 0.0001, respectively.", "Average pooling was used unless otherwise specified.", "During testing, the kernel radius of the fast-guided filter was set to 48.", "The regularization parameter epsilon was set to 0.0001.", "The down-sampling factor was set to 4.", "FAMED-Net was implemented in Caffe [49] and run on a workstation with a 3.5 GHz CPU, 32G RAM, and Nvidia Titan XP GPUs." ], [ "Ablations on the Basic Architecture", "First, we conducted ablations on the components of the basic FAMED-Net architecture.", "We sampled a total of 40,000 images from ITS and OTS evenly to form a training set for ablations.", "Moreover, the models were trained for a total of 100,000 iterations.", "The learning rate was decreased by a factor of 10 after 50,000 and 80,000 iterations.", "All other parameters were as described above.", "The results on TestSet-S are listed in Table REF .", "The dehazing results of FAMED-Net-FD4 with batch normalization were much better than those of FAMED-Net-NoBN.", "FAMED-Net-FD4 was also found to converge faster than FAMED-Net-NoBN.", "We also show the impact of the number of convolutional feature channels on the dehazing results.", "With more channels, the model tended to have a stronger representational capacity and achieved higher PSNR and SSIM scores.", "For example, FAMED-Net-S achieved a gain of 0.3 dB and 0.024 SSIM score over FAMED-Net-FD4 and a gain of 1.5 dB and 0.06 SSIM score over FAMED-Net-NoBN.", "With respect to the multi-scale architecture, adding one down-scale branch improved the PSNR score by 0.2 dB while the SSIM score decreased only marginally.", "With all three scales, FAMED-Net-GP was the best architecture.", "Finally, we 
increased the feature channels in FAMED-Net-GP, but this only marginally improved the PSNR score and decreased the SSIM score.", "As a trade-off between accuracy and complexity, we chose FAMED-Net-GP as the representative architecture." ], [ "Ablations on Training Data Volume and Training Iterations", "We next investigated the impact of training data volume and training iterations.", "Specifically, we trained FAMED-Net-GP with 400,000 iterations and all the images in ITS and OTS, $i.e.$ , a total of 424,450 images.", "The results are listed in Table REF .", "It can be seen that with sufficient training, FAMED-Net-GP improved.", "Moreover, the PSNR and SSIM significantly improved when FAMED-Net-GP was trained with all the images, producing a gain of 2.14 dB and 0.0425 SSIM score.", "Therefore, more training data benefits the deep neural network by exploiting its powerful representation capacity." ], [ "Additional 3x3 Convolutions for Learning Structural Features", "Due to the fully point-wise convolutional structure, FAMED-Net-GP has a limited ability to learn structural features.", "To see whether additional structural features benefit dehazing, we inserted additional 3x3 convolutional layers at the beginning of each scale in FAMED-Net-GP (denoted FAMED-Net-GP-3x3).", "We tested different feature channel configurations including 4 and 8.", "The results are shown in the first three rows in Table REF .", "Compared with FAMED-Net-GP (see the first and last rows in Table REF ), FAMED-Net-GP-3x3 performed better with the same training settings.", "With more 3x3 convolutional channels, FAMED-Net-GP-3x3 trained with all training images was the best architecture, $i.e.$ , 25.94 dB and 0.9180 SSIM score.", "Compared with its counterpart without 3x3 convolutional layers, gains of 0.26 dB and 0.01 SSIM score were achieved.", "However, this came at the cost of an additional 6.69% in parameters ($i.e.$ , 1152) and 6.66% in FLOPs ($i.e.$ , 8.26x10$^6$ )." 
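As a sanity check on the complexity figures, the 5,987-parameter count reported for FAMED-Net-SS can be reproduced from the layer configuration in the architecture description (our bookkeeping, assuming each $1\times 1$ convolution carries a bias and each batch-normalization layer two learnable parameters per channel):

```python
def conv1x1_params(c_in, c_out):
    # weight matrix (c_in * c_out) plus one bias per output channel
    return c_in * c_out + c_out

def bn_params(c):
    # learnable scale and shift per channel
    return 2 * c

channels = {0: 3, 1: 32, 2: 32, 3: 32, 4: 32, 5: 3}
index_sets = {1: [0], 2: [1], 3: [1, 2], 4: [2, 3], 5: [1, 2, 3, 4]}

total = 0
for block, idx in index_sets.items():
    c_in = sum(channels[k] for k in idx)  # dense concatenation of earlier blocks
    total += conv1x1_params(c_in, channels[block])
    if block <= 4:                        # no batch normalization in the decoder block
        total += bn_params(channels[block])

print(total)  # 5987
```

The same accounting gives 3 x 5,987 + 30 = 17,991 parameters for FAMED-Net-GP (three unshared encoders plus the 9-to-3 channel fusion conv), matching the total reported later for the multi-scale model.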
], [ "Laplacian Pyramid Architectures", "In Section REF , we also presented a Laplacian pyramid architecture FAMED-Net-LP (see Figure REF (b)).", "Compared with the Gaussian pyramid architecture FAMED-Net-GP (see the last row in Table REF ), FAMED-Net-LP achieved a marginally lower PSNR and a marginally higher SSIM.", "Generally, its performance was comparable to FAMED-Net-GP.", "Since there was no evident benefit to using residual learning, FAMED-Net-GP was used as our default multi-scale architecture in the following experiments." ], [ "The Effectiveness of Max Pooling", "For dehazing, effective local features are usually extracted from extreme pixel values including the dark channel (the minimum value of all the channels within a local patch) [14], local max contrast and saturation [15], and the learned features using the maxout operation in DehazeNet [18].", "Inspired by these studies, we hypothesized that max pooling may be more effective for aggregating local statistics and learning effective features for dehazing.", "To verify this hypothesis, we changed the average pooling operations in all the pooling layers to max pooling.", "This structure is denoted FAMED-Net-GP-MaxP and it was trained using the same settings as FAMED-Net-GP.", "The results are shown in the last row in Table REF .", "Compared with its counterpart using average pooling (last row in Table REF ), FAMED-Net-GP-MaxP achieved a significant gain of 0.83 dB and 0.0091 SSIM score.", "It also outperformed FAMED-Net-GP-3x3 by 0.57 dB and achieved almost the same SSIM score.", "Therefore, we chose FAMED-Net-GP-MaxP as the representative model of the proposed architectures due to its light weight (a total of 17,991 parameters) and computational efficiency (1.24x10$^8$ FLOPs).", "For simplicity, it is denoted FAMED-Net in the following sections.", "To evaluate the performance of FAMED-Net, we compared it with several state-of-the-art methods including DCP [26], FVR [45], BCCR [46], GRM [47], CAP 
[16], NLD [17], DehazeNet [18], MSCNN [19], AOD-Net [20], [2], FPCNet [21], GFN [22] and DCPDN [23]" ], [ "Results on RESIDE SOTS", "The PSNR and SSIM scores of the different methods are listed in Table REF .", "Several observations can be made.", "1) CNN-based methods [18], [21], [20], [2], [22] generally outperformed the image prior-based methods [26], [45], [46], [47], [16], [17].", "By learning features in a data-driven manner, CNN-based dehazing models had stronger representational capacities than image prior-based models, which are usually limited to specific scenarios.", "2) CNN architecture matters.", "For example, FPCNet achieved a significant gain over its counterpart DehazeNet by using a lightweight, fully point-wise convolutional architecture.", "It achieved the second-best SSIM score and even outperformed some complicated networks like AOD-Net, GFN, and DCPDN.", "Further, by integrating the imaging model into the network architecture, the end-to-end AOD-Net recovered the target haze-free image with higher accuracy than the non-end-to-end methods [18], [19].", "3) FAMED-Net was the best performing method.", "Moreover, it significantly improved the PSNR and SSIM scores.", "For example, FAMED-Net surpassed the second-best methods by a large margin of 3.6 dB and 0.05 SSIM score.", "Table: Results of FAMED-Net and state-of-the-art methods on RESIDE SOTS.", "Scores in the brackets correspond to the indoor and outdoor subsets, respectively.", "AOD-Net with an asterisk refers to the fine-tuned model with multi-scale SSIM and L2 loss in .", "The best and second-best scores are highlighted in red and blue, respectively.", "After carefully dissecting the proposed architecture of FAMED-Net and comparing it with state-of-the-art architectures, we can make the following conclusions.", "First, point-wise convolution plays a key role in constructing a compact and lightweight dehazing network.", "Cascaded point-wise convolutional layers are very effective for tackling the 
ill-posed dehazing problem by aggregating local statistic-based features layer by layer.", "Second, modeling the dehazing task in an end-to-end manner is beneficial.", "Third, a carefully designed multi-scale architecture can handle scale variance in complex scenes while only minimally increasing the computational cost.", "Finally, re-using features via dense connections like [20], [23], [50] leads to a better and more compact model.", "Figure: Subjective comparisons between FAMED-Net and the three most related state-of-the-art methods including MSCNN , AOD-Net , FPCNet on synthetic hazy images from RESIDE test set.", "Best viewed in color.", "Figure: Subjective comparisons between FAMED-Net and state-of-the-art methods including DCP , DehazeNet , MSCNN , AOD-Net , FPCNet , GFN and DCPDN on real-world hazy images.", "Best viewed in color." ], [ "Subjective Evaluation", "Subjective comparisons on synthetic hazy images are presented in Figure REF .", "Dehazed results of MSCNN [19] on indoor images have residual haze indicated by the red boxes.", "Besides, MSCNN tended to produce over-saturated results with color distortions as indicated by the red arrows.", "Similar phenomena can also be found in the results of AOD-Net [20].", "Although FPCNet [21] achieved better results, there is still some residual haze and color distortion.", "Moreover, MSCNN and FPCNet produced noisy results due to the incorrectly estimated transmission in regions enclosed by the blue boxes.", "The proposed FAMED-Net successfully restores clear images with higher color fidelity and less residual haze/noise.", "This demonstrates the fitting ability of FAMED-Net learned from synthetic training images.", "Next, we present the results on real-world hazy images in Figure REF to compare different methods' generalization ability.", "Close-up views in the red rectangles are also presented.", "It can be seen that DCP, MSCNN, and AOD-Net tended to produce over-saturated results, especially in sky regions.",
"MSCNN also exhibits color artifacts, making the dehazed results unrealistic (see the first two images).", "AOD-Net's dehazed images appear dimmer than the others.", "DehazeNet achieved better results, but still produced some color artifacts (see the middle part of the first image and the bluish artifact in the second image).", "FPCNet outperformed DehazeNet but retained some haze.", "Using some enhanced results as input and a fusion strategy, GFN generated visually better results.", "However, color distortions in the middle part of the first image and the over-saturated second image are visually unpleasant.", "DCPDN produced better and brighter dehazing results.", "However, some details are missing due to the over-exposure-like artifacts.", "Generally, FAMED-Net produced better or at least comparable results to state-of-the-art methods, $i.e.$ , clear details with fewer color artifacts and high-fidelity sky regions.", "We also compared image enhancement for anti-halation using different methods in the last row.", "FAMED-Net also produced visually pleasing results.", "More results can be found in the supplement." ], [ "Cross-set Generalization", "We also compared the cross-set generalization between FAMED-Net and two recently proposed methods, GFN and DCPDN.", "We used RESIDE SOTS and TestA in [23] as two test sets.", "We used the pre-trained models of all three methods and did not fine-tune them.", "The results are listed in Table REF .", "It can be seen that FAMED-Net shows better generalization than GFN and DCPDN, which we ascribe to the use of the large-scale training set and the effectiveness of the proposed architecture.", "Table: Comparison of cross-set generalization.", "Figure: Dehazed results of DCP , AOD-Net , FPCNet and FAMED-Net on haze-free images." 
], [ "Analysis on the Learned Latent Statistical Regularities", "Image prior-based methods including DCP [26], CAP [16] and NLD [17] assume prior statistics on haze-free images, which are used to enforce statistical regularities when recovering the target dehazed results [41].", "The learning-based methods also learn latent statistical regularities [18], [20], [21].", "For example, DehazeNet and FPCNet, which regress the transmission, should produce a transmission map of all 1s for a haze-free image.", "In other words, they should learn dark channel-like statistical priors, $i.e.$ , $1 - t \\approx 0$ .", "As for AOD-Net and FAMED-Net, they regress a latent variable $K$ implicitly.", "For a haze-free image, the atmospheric light is usually assumed to be white, $i.e.$ , $[1,1,1]$ .", "Therefore, the corresponding $K$ can be deduced as $K = \\frac{1}{t}$ from Eq. (REF ).", "It too should be a map of all 1s, $i.e.$ , $1 - \\frac{1}{{\\widehat{K}}} \\approx 0$ , where ${\\widehat{K}}$ is the mean across three channels.", "To compare the learned statistical regularities of different methods, we collected 100 haze-free images (two examples are shown in the first column of Figure REF ).", "These images were resized such that the long side was 480 pixels and the short side ranged from 100 to 480 pixels.", "Then, we calculated the dark channel, $t$ , and $K$ within each local patch of size $7\\times 7$ .", "Next, we split the range of pixel values into 20 uniform bin centers and counted the corresponding number of pixels belonging to each bin on all images.", "Finally, we plotted the histograms of the dark channel, $1 - t$ , and $1 - \\frac{1}{{\\widehat{K}}}$ for DCP, FPCNet, AOD-Net, and FAMED-Net in Figure REF .", "FAMED-Net learned a much more effective statistical regularity than DCP, FPCNet, and AOD-Net.", "Besides, the statistics of AOD-Net are far from zero.", "In other words, the trained network implicitly assumes that there is haze that needs to be removed in 
haze-free images.", "Therefore, this leads to over-dehazed artifacts, as seen in the third column.", "This is consistent with the visual results in Figure REF .", "Figure: The learned latent statistical regularities of AOD-Net , FPCNet and FAMED-Net on haze-free images." ], [ "Runtime Analysis", "Following [3], we further compared the runtime of different methods on the indoor images ($620\\times 460$ ) in RESIDE SOTS.", "The results are listed in Table REF in Section REF .", "Results of the classical methods above the line and cGAN are from [3], [24].", "Others are reported using our workstation and the code released by the authors.", "We report the runtime of network forward computation and the whole algorithm including fast-guided filter refinement for FPCNet and FAMED-Net, as shown in separate rows in Table REF .", "The numbers before/after the slash denote the runtime in CPU/GPU mode, $i.e.$ , C/G.", "FAMED-Net runs very fast and reaches 85 fps and 35 fps without/with fast-guided filter refinement.", "In addition, we also list the number of parameters and model size of each CNN model.", "Compared with the recently proposed GFN, cGAN, and DCPDN, FAMED-Net is much more compact and lightweight.", "Figure: Visualization of the estimated transmission maps by FAMED-Net.", "Warm color (red) represents high transmission, $i.e.$ , regions near the camera with small depth."
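The dark-channel statistic used in the latent-regularity analysis above is straightforward to reproduce; the sketch below (our code, not the authors') computes the DCP-style dark channel over $7\times 7$ patches, the quantity expected to concentrate near zero on texture-rich haze-free images:

```python
import numpy as np

def dark_channel(img, patch=7):
    # minimum over the RGB channels and a patch x patch neighbourhood (edge-padded)
    h, w, _ = img.shape
    m = img.min(axis=2)
    p = patch // 2
    mp = np.pad(m, p, mode="edge")
    return np.array([[mp[i:i + patch, j:j + patch].min()
                      for j in range(w)] for i in range(h)])

# On a texture-rich random image, each per-patch minimum over 3 x 7 x 7 values
# is close to zero, mirroring the dark channel prior.
img = np.random.default_rng(0).random((32, 32, 3))
dc = dark_channel(img)
print(dc.shape, dc.mean() < 0.1)
```

Analogously, for models that regress $K$, one checks how closely $1 - 1/\widehat{K}$ concentrates at zero on haze-free inputs.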
], [ "Limitations and Discussions", "As stated in Section REF and demonstrated in Section REF , the proposed FAMED-Net implicitly learns a local statistical regularity for dehazing, like many prior-based and learning-based methods [26], [16], [18], [19], [21], [20].", "Though FAMED-Net outperforms these methods by leveraging a more efficient architecture, it still has some limitations.", "Some examples of transmission maps estimated by FAMED-Net are shown in the bottom row in Figure REF .", "As indicated by the blue polygons, the transmission in the sky regions is incorrect, leading to under-dehazed artifacts as shown in Figure REF .", "This may be addressed by incorporating high-level semantics into the dehazing network.", "However, this raises the “chicken and egg” dilemma between the low-level enhancement and high-level understanding of degraded images.", "We believe it could be solved by jointly modeling the two correlated problems in a unified framework, which we leave as future work.", "Besides, as evidenced by the low-light enhancement experiments in the supplement and color constancy results in [21], point-wise convolutions could be used for statistical modeling of illumination, color cast, etc.", "Referring to the haze imaging model in [43], we will also exploit FAMED-Net's potential for haze removal in the presence of non-uniform atmospheric light, $e.g.$ , artificial ambient light in nighttime haze environments.", "Extending FAMED-Net to heterogeneous haze removal by investigating region-based techniques, $e.g.$ , haze-density-aware segmentation, is also promising."
], [ "Conclusions", "In this paper, we introduce a novel fast and accurate multi-scale end-to-end dehazing network called FAMED-Net to tackle the challenging single image dehazing problem.", "FAMED-Net comprises three encoders at different scales and a fusion module, which is able to efficiently learn the haze-free image directly.", "Each encoder consists of cascaded point-wise convolutional layers and pooling layers via a densely connected mechanism.", "By leveraging a fully point-wise structure, FAMED-Net is lightweight and computationally efficient.", "Extensive experiments on public benchmark datasets and real-world hazy images demonstrate the superiority of FAMED-Net over other top-performing models: it is a fast, lightweight, and accurate deep architecture for single image dehazing." ], [ "Modification of FAMED-Net for Illumination Balancing", "Since the scene radiance is usually not as bright as the atmospheric light, the recovered haze-free image looks dim [26], especially in dense-haze and shaded regions.", "It is therefore desirable to balance the illumination, both to produce visually pleasing results and to facilitate subsequent high-level tasks.", "Consider the following imaging model used in the Retinex literature [51], [52], [53]: ${S^\\lambda } = {R^\\lambda } \\circ L,\\lambda \\in \\left\\lbrace {r,g,b} \\right\\rbrace ,$ where $S$ represents the observed image, the reflectance $R$ represents the intrinsic property of the captured objects, the illumination $L$ represents the varying lightness across objects, and $\\circ $ denotes element-wise multiplication.", "Given an observed $S$ , estimating $R$ and $L$ is ill-posed.", "Various smoothness constraints have been proposed to make it tractable [54], [52], [53].", "Instead of estimating the reflectance, which typically looks unrealistic, we follow [54] in retaining some amount of illumination so that the result enjoys both the desired brightness and a natural appearance.", "To this end, we propose an illumination balancing network 
(IBNet) to estimate a balanced illumination map from an input image.", "Then we replace the original unevenly distributed illumination (approximated by the illumination channel in HSV color space) with the estimate.", "Specifically, we construct the IBNet from FAMED-Net with minor modifications: 1) changing the 3-channel $K$ in FAMED-Net to the one-channel illumination map; 2) omitting the recovery module depicted by the yellow circle.", "We used the L2 loss to supervise the estimated illumination map.", "To prepare the training/test datasets, we applied a fitted non-linear mapping to the illumination channel of each clear image in the RESIDE dataset (see Figure REF (a)) and used it to replace the original one to form the illumination-unbalanced image (see Figure REF (b)).", "The non-linear mapping was generated specifically for each image by fitting a cubic curve through randomly selected control points in the bottom-right half-plane, as shown in Figure REF (c), and four other fixed control points, i.e., (0,0), (0.1,0.1), (0.9,0.9) and (1,1).", "Table: PSNR and SSIM scores of IBNet for illumination balancing on RESIDE TestSet-S generated according to Section ."
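A hedged sketch of this data-generation step (the exact sampling scheme for the random control points is our assumption; only the four fixed points are specified above):

```python
import numpy as np

rng = np.random.default_rng(0)
xr = rng.uniform(0.2, 0.8)      # one random control point in the
yr = rng.uniform(0.0, xr)       # bottom-right half-plane, i.e. y < x
xs = np.array([0.0, 0.1, xr, 0.9, 1.0])
ys = np.array([0.0, 0.1, yr, 0.9, 1.0])
coef = np.polyfit(xs, ys, 3)    # least-squares cubic fit through the control points

def warp_illumination(v):
    # apply the fitted non-linear mapping to an HSV V channel in [0, 1]
    return np.clip(np.polyval(coef, v), 0.0, 1.0)

v = np.linspace(0.0, 1.0, 11)
dimmed = warp_illumination(v)   # darkened, unevenly distributed illumination
print(dimmed.shape)  # (11,)
```

Because the random point lies below the diagonal, the fitted curve mostly maps illumination values downward, producing the dimmed training inputs that IBNet learns to invert.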
], [ "Experimental Results", "We evaluated the proposed IBNet for illumination balancing on RESIDE TestSet-S generated according to Section REF .", "The results are listed in Table REF .", "As can be seen, IBNet, an incarnation of FAMED-Net, achieved good restoration accuracy by enhancing the unevenly distributed illumination.", "Some subjective visual inspection examples are shown in Figure REF .", "As can be seen, the enhancement results of IBNet on the dehazed images are more visually pleasing, e.g., the illumination has been balanced and details are revealed.", "However, the results also exhibit a small amount of color distortion, constrained by the unrealistic synthetic mappings.", "In future work, we will collect a real-world low-light dataset for training a better model.", "More subjective comparisons of FAMED-Net and several state-of-the-art methods including DCP [26], DehazeNet [18], MSCNN [19], AOD-Net [20], FPCNet [21], GFN [22] and DCPDN [23] on real-world hazy images are shown in Figure REF and Figure REF .", "As can be seen, FAMED-Net produced better or at least comparable results to state-of-the-art methods with clear details, fewer color artifacts, and high fidelity in sky regions.", "More subjective comparisons of FAMED-Net and DCP [26], AOD-Net [20] and FPCNet [21] on haze-free images are shown in Figure REF .", "These results demonstrate that FAMED-Net learned a much more effective statistical regularity than DCP, FPCNet and AOD-Net.", "Please refer to Section V-C-4 in the paper for more details.", "Figure: Subjective comparisons between FAMED-Net and state-of-the-art methods on real-world hazy images.", "Best viewed in color.", "(a) Hazy images.", "(b) DCP .", "(c) DehazeNet .", "(d) MSCNN .", "(e) AOD-Net .", "(f) FPCNet .", "(g) GFN .", "(h) DCPDN .", "(i) FAMED-Net.", "Figure: Subjective comparisons between FAMED-Net and state-of-the-art methods on real-world hazy images.", "Best viewed in color.", "(a) Hazy images.", "(b) DCP .", "(c) DehazeNet .", "(d) 
MSCNN .", "(e) AOD-Net .", "(f) FPCNet .", "(g) GFN .", "(h) DCPDN .", "(i) FAMED-Net.Figure: Dehazed results of DCP , AOD-Net , FPCNet and FAMED-Net on haze-free images." ] ]
1906.04334
[ [ "Automated Curriculum Learning for Turn-level Spoken Language\n Understanding with Weak Supervision" ], [ "Abstract We propose a learning approach for turn-level spoken language understanding, which allows a user to speak one or more utterances compositionally in a turn to complete a task (e.g., voice ordering).", "A typical pipelined approach for these understanding tasks requires non-trivial annotation effort for developing its multiple components.", "Also, the pipeline is difficult to port to a new domain or scale up.", "To address these problems, we propose an end-to-end statistical model with weak supervision.", "We employ randomized beam search with memory augmentation (RBSMA) to solve complicated problems for which long promising trajectories are usually difficult to explore.", "Furthermore, considering the diversity of problem complexity, we explore automated curriculum learning (CL) for weak supervision to accelerate exploration and learning.", "We evaluate the proposed approach on real-world user logs of a commercial voice ordering system.", "Results demonstrate that when trained on a small number of end-to-end annotated sessions collected with low cost, our model performs comparably to the deployed pipelined system, reducing the development labor by over an order of magnitude.", "The RBSMA algorithm improves the test set accuracy by 7.8% relative compared to the standard beam search.", "Automated CL leads to better generalization and further improves the test set accuracy by 5% relative." 
], [ "Introduction", "Spoken language understanding (SLU) is a core component of voice interaction applications.", "Traditionally, SLU is performed on sentences generated by voice activity detection on user queries.", "In this work, we focus on turn-level spoken language understanding, that is, to understand the ultimate intent when a user speaks one or more utterances compositionally in one turn.", "Figure REF illustrates a voice ordering example for turn-level SLU.", "The user speaks 4 utterances in a sequence to the agent as “I want two cups of americanos and one cup of latte with vanilla all big cup americanos less sugar\".", "The agent interprets the utterances as two order creation actions and two order modification actions, and finally executes the actions and generates the order automatically as “two big cups of americanos with less sugar and one big cup of latte with vanilla\".", "Turn-level SLU consists of multiple sub-tasks.", "Firstly, spoken language contains disfluencies (e.g.", "repetitions or repairs) and no explicit structures (e.g.", "sentence boundaries).", "Hence, disfluency removal and sentence segmentation are necessary for the downstream language understanding component.", "Secondly, coreference resolution, intent segmentation and classification, and slot extraction are needed for inferring the ultimate intent of multiple utterances.", "As shown in Figure 1, the first mention of “americanos\" refers to the product “americanos\" in an order creation action.", "The second mention of “americanos\", through coreference resolution, is decided as referring to the previously mentioned “americanos\", and triggers an order modification action.", "Figure: Overview of our turn-level SLU setup.", "Given a turn of utterances xx, we first extract the tag sequence kk, then parse kk to a program zz that after execution results in the denotation yy.A traditional pipelined approach solves the aforementioned sub-tasks through a sequence of components.", 
"However, development of these components usually requires non-trivial annotation effort for supervised training and it is not easy to port the pipeline to new domains or scale it up.", "To address these problems, we propose an end-to-end statistical model with weak supervision.", "Weakly supervised learning has been extensively studied in the fields of semantic parsing and program synthesis [2], [6], [7], [8], [10], [13], [12], [3], [9], where indirect supervisions (e.g.", "question-denotation pairs) are adopted.", "Direct supervisions (e.g., question-program pairs) require annotating programs, which is known to be expensive and difficult to scale up.", "Compared to direct supervisions, training data for indirect supervisions are easy to collect with low cost.", "The end-to-end learning approach with weak supervision can scale up easily compared to supervised learning approaches.", "It is challenging to develop a semantic parser for turn-level SLU based on question-denotation pairs.", "Firstly, there is a large search space for program exploration.", "As shown in Figure REF , a user speaks multiple utterances in a session, where an utterance can have different intents (e.g.", "creation or modification) and slots.", "Incorrect interpretations of utterances, and incorrect programs $z$ , can accidentally produce the correct target denotations.", "These incorrect programs are denoted spurious programs.", "Sophisticated algorithms are required for solving complicated problems with long promising trajectories and guarding against spurious programs.", "Secondly, the complexity of the problems varies widely.", "Some turns only have one utterance (easy task), while others may have many more utterances (hard task).", "Policy gradient estimates for long trajectories tend to have more variance with weak supervision [9].", "The hard task needs more diverse and massive amounts of training data than the easy task.", "As a result, uniformly sampling data 
for exploration and training suffers from sample inefficiency and over-fitting problems.", "In this work, we propose randomized beam search with memory augmentation (RBSMA) for improving exploration of long and promising programs for complicated problems.", "Randomized beam search can improve exploration efficiency [6].", "With the enhancement of memory, RBSMA can learn from failed trials and guide the exploration towards unexplored promising directions.", "With the cache of highest reward programs per turn, RBSMA can re-sample highest reward programs despite the adopted randomized exploration strategy.", "Curriculum learning (CL) can deal with the diversity of sample complexity [8], where the learner focuses on easy ones at first, then gradually puts more weights on more difficult ones.", "Despite great empirical results, most CL methods are based on hand-crafted curricula.", "In this way, an expert defines the level of complexity of a sample and designs curricula, which makes the approach difficult to scale up for complicated problems.", "In this work, we extend automated CL approaches for supervised learning  [4] to weakly supervised learning.", "We use self reward gain as a signal of reward, which measures the learning progress when the learner is fed with a batch of question-denotation pairs sampled from some tasks.", "Then, the reward is used in a nonstationary multi-armed bandit setting, which then determines a stochastic syllabus and provides training data to the learner in a principled order.", "We evaluate our proposed model for turn-level SLU using real-world user logs of a commercial voice ordering system.", "Experimental results demonstrate that when trained on a small number of end-to-end annotated sessions collected with low cost, the proposed model performs comparably to the deployed pipelined system, saving the development labor over an order of magnitude.", "In particular, RBSMA improves the test set accuracy by 7.8% relative compared to the 
standard beam search.", "Automated CL leads to better generalization and further improves the test set accuracy by 5.0% relative, reaching an overall 10.4% relative gain over standard beam search without CL.", "It should be noted that the proposed model is not limited to the voice ordering applications.", "It can be readily applied in other voice interaction systems for turn-level SLU (e.g., constrain search space by understanding user's multiple queries compositionally), semantic parsing and program synthesis, among others.", "The technical contributions in this work are as follows: We develop an end-to-end statistical model with weak supervision (denotations) for turn-level SLU and find that it can perform well, easily scale up and port to new domains.", "We propose randomized beam search with memory augmentation (RBSMA).", "We show that RBSMA can explore long promising trajectories for complicated problems more efficiently than the standard beam search.", "We develop an automated curriculum learning approach for weakly supervised learning to address the diversity of problem complexities.", "We observe that automated CL can lead to faster training and better generalization." ], [ "Related Work", "Recently there has been a lot of progress in learning neural semantic parsers with weak supervision  [2], [6], [7], [8], [10], [13], [12], [3], [9].", "Systematic search was explored to improve exploration of reinforcement learning (RL) and stability of weak supervision [6], [8].", "Memory Augmented Policy Optimization (MAPO) [9] was proposed using a memory buffer of promising trajectories to reduce the variance of policy optimization.", "In this work, we extend these ideas by proposing randomized beam search with memory augmentation to improve the exploration efficiency.", "CL can deal with the diversity of sample complexity [8].", "In this work, we develop an effective approach extending automated CL for supervised learning [4] to CL for weakly supervised learning." 
], [ "Two General Tasks", "Inspired by [3], turn-level SLU is roughly divided into a lexical task (i.e., mapping words and phrases to tags that are parts of a program) and a structural task (i.e., combining tags into a program).", "We collect a typed dictionary of terms and their aliases, and conduct the lexical task by word matching.", "Figure REF shows the matched tag sequence as “Number:Two Product:Americano Number:One Product:Latte Flavor:Vanilla Product:All Size:Big Product:Americano Comment:Less-Sugar\".", "The tagging process filters out noise and task-irrelevant words in the automatic speech recognition (ASR) output, easing the downstream structural task.", "The structural task aims at generating the target action sequence (program) based on the tag sequence.", "Each action in the sequence is a token of the program.", "The action sequence is then executed to generate the final denotation.", "The target action sequence for the above tag sequence is “(create Americano Two) (create Latte One Vanilla) (modify All Big) (modify Americano Less-Sugar)\".", "In our turn-level SLU setup, the structural task is quite challenging due to multiple types of tag manipulations: (1) Tag Deletion: Repetitive tags created due to disfluency should be removed, for example, “americano americano big cup\" should be transformed to “(create Americano Big)\".", "(2) Tag Segmentation: Intent segmentation will group a set of tags and generate a corresponding action.", "For example, “two big americano cold one latte\" should be segmented into two actions, “(create Americano Two Big Cold) (create Latte One)\" (we use a segmentation heuristic based on the Number tag, hence “cold\" is grouped into the first action). 
(3) Tag Copy and Assignment: For nested structures, tags on the root node should be copied and assigned to leaf nodes.", "For example, “two hot lattes one big cup one small cup\" should be transformed to “(create Latte One Big Hot) (create Latte One Small Hot)\" .", "(4) Tag Global Assignment: Some tags should be assigned to the node with a long distance for the modification purpose, that is, co-reference resolution is implicitly modeled.", "For example, “one americano two lattes americano big cup\" should be interpreted as “(create Americano One) (create Latte Two) (modify Americano Big)\"." ], [ "Problem Statement", "Given a training set of $N$ examples $\\lbrace (x_i, k_i, y_i)\\rbrace _{i=1}^{N}$ , where $x_i$ is a sequence of utterances within a turn, $k_i$ is the tag sequence of $x_i$ that is produced by the lexical task, $y_i$ is a set of objects that the agent should generate according to $x_i$ .", "Our goal is to learn a semantic parser that maps a turn of utterances $x$ to a program $z$ , such that when $z$ is executed by the agent, it yields the correct denotation $y$ ." ], [ "Program", "In our turn-level SLU setup based on voice ordering, the objects in the denotation $y$ have internal structures.", "For example, an ordered object refers to one product, which contains several properties such as product name and number of cups (Section REF ).", "Based on studying real-world user logs of a commercial voice ordering system, we use two kinds of functions for the tokens in a program: (1) Create Function: (create $p_1 \\cdots p_m$ ) (2) Modify Function: (modify $p_1 \\cdots p_m$ ).", "Here $p_1$ is the key property (e.g.", "product name) and is mandatory, while the other properties are optional and will be set to default values when missing (e.g.", "$p_2$ refers to number of cup, its default value is one), $m$ denotes the number of properties, and $p_1 \\cdots p_m$ are properties and also parameters for a function." 
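As a concrete illustration of this program formalism, here is a minimal sketch of an executor that turns create/modify action sequences into a denotation. The slot-typing helper, the property names, and the default of one cup are illustrative assumptions rather than the paper's implementation.

```python
def _slot(tag):
    # Hypothetical tag-type lookup standing in for the typed dictionary.
    types = {"Two": "number", "One": "number", "Big": "size",
             "Vanilla": "flavor", "Less-Sugar": "comment"}
    return types.get(tag, "other")

def execute(program):
    """Execute a sequence of (create ...) / (modify ...) actions."""
    items = []
    for func, *props in program:
        if func == "create":
            # First property is the mandatory key property (product name);
            # missing properties fall back to defaults (e.g. one cup).
            item = {"product": props[0], "number": "One"}
            for p in props[1:]:
                item[_slot(p)] = p
            items.append(item)
        elif func == "modify":
            # Modify all items, or the items whose key property matches.
            target = props[0]
            for item in items:
                if target == "All" or item["product"] == target:
                    for p in props[1:]:
                        item[_slot(p)] = p
    return items

# The running example from the text.
program = [("create", "Americano", "Two"),
           ("create", "Latte", "One", "Vanilla"),
           ("modify", "All", "Big"),
           ("modify", "Americano", "Less-Sugar")]
print(execute(program))
```

On the running example this yields two big cups of americano with less sugar and one big cup of latte with vanilla, matching the order described in the introduction.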
], [ "Model Description", "We decompose the program generation task into two subtasks.", "One is function type (function name) generation, depending on the context; the other is selection of the set of tags produced by the lexical task as parameters for each function type.", "We utilize a seq2seq model based on the semantic parser in [6], [3] and extend it with pointer-generator [11].", "In this way, the decoder vocabulary size is significantly reduced.", "The probability of a program is the product of the probabilities of its tokens given the history: $ p_\\theta (z|x) \\approx p_\\theta (z|k) = \\prod _{t} p_\\theta (z_t|k, z_{1:t-1}) $ .", "We approximate the conditional probability of a program $z$ given the input turn $x$ by the tag sequence $k$ given $x$ .", "$p_\\theta (z_t|k, z_{1:t-1})$ is computed as $p_{gen,t}p_{vocab\\_func}(z_t|k, z_{1:t-1})+ (1-p_{gen,t})\\sum _{i:z_t=k_i}{\\alpha _{t,i}}$ , where $p_{gen,t}$ is probability of the function name generation subtask (rather than selection) for timestep $t$ , $p_{vocab\\_func}(z_t|k, z_{1:t-1})$ is probability of generating function name $z_t$ , $\\alpha _{t,i}$ is the attention weight." 
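The generation/copy mixture in the equation above can be illustrated with a toy numerical sketch (the probabilities and attention weights are toy values, not the trained model's):

```python
# p(z_t) = p_gen * p_vocab_func(z_t) + (1 - p_gen) * sum_{i: k_i = z_t} alpha_i
# With probability p_gen the decoder generates a function name from a small
# vocabulary; otherwise it copies a tag from the input sequence k via the
# attention weights alpha (pointer-generator style).
def token_prob(token, k, alpha, p_gen, p_vocab_func):
    copy_prob = sum(a for tag, a in zip(k, alpha) if tag == token)
    gen_prob = p_vocab_func.get(token, 0.0)
    return p_gen * gen_prob + (1.0 - p_gen) * copy_prob

k = ["Number:Two", "Product:Americano", "Size:Big"]
alpha = [0.2, 0.7, 0.1]                      # attention over input tags (sums to 1)
p_vocab_func = {"create": 0.8, "modify": 0.2}  # tiny function-name vocabulary
p_gen = 0.3

print(round(token_prob("create", k, alpha, p_gen, p_vocab_func), 3))            # generation path
print(round(token_prob("Product:Americano", k, alpha, p_gen, p_vocab_func), 3))  # copy path
```

Restricting the generated vocabulary to function names and copying all parameters from `k` is what keeps the decoder vocabulary small.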
], [ "Learning", "We now describe our exploration-based learning algorithm with weak supervision.", "To use weak supervision, we treat the program $z$ as a latent variable that is approximately marginalized.", "To describe the learning objective, we define $R(z, y)=\\mathbb {1}(y^{^{\\prime }}=y)-||y^{^{\\prime }}-y||$ , where $y^{^{\\prime }}$ is the execution result of $z$ , the first part $\\mathbb {1}(y^{^{\\prime }}=y)\\in \\lbrace 0,1\\rbrace $ is a binary signal indicating whether the task has been completed by producing the target denotation $y$ , the second part $-||y^{^{\\prime }}-y||$ computes the edit distance between the execution result from $z$ and the target denotation, providing a meaningful signal for uncompleted task situations.", "The objective is to maximize the following function: $ \\begin{split}\\sum _{z\\in Z}p_\\theta (z|x)R(z, y) \\approx \\sum _{z\\in B}p_\\theta (z|x)R(z, y)\\end{split}$ where $Z$ is the program space, and $B \\subset Z$ are the programs found by beam search." 
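A minimal sketch of the reward $R(z, y)$ defined above, with the denotation distance implemented as a count of differing property values between aligned items (the precise edit distance used in the paper is not specified here, so this distance is an assumption):

```python
# R(z, y) = 1(y' == y) - ||y' - y||: a binary completion signal plus a
# negated distance between the executed result y_prime and the target y.
def reward(y_prime, y):
    completed = 1.0 if y_prime == y else 0.0
    dist = 0.0
    for a, b in zip(y_prime, y):
        keys = set(a) | set(b)
        dist += sum(1 for key in keys if a.get(key) != b.get(key))
    dist += sum(len(item) for item in y_prime[len(y):])  # extra items
    dist += sum(len(item) for item in y[len(y_prime):])  # missing items
    return completed - dist

target = [{"product": "Americano", "number": "Two", "size": "Big"}]
exact  = [{"product": "Americano", "number": "Two", "size": "Big"}]
close  = [{"product": "Americano", "number": "One", "size": "Big"}]

print(reward(exact, target))  # 1.0: task completed
print(reward(close, target))  # -1.0: one property wrong
```

The distance term gives a graded signal for almost-correct programs, which is what makes the reward informative even when the task is not completed.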
], [ "RBSMA : Randomized Beam Search with Memory Augmentation", "Beam search is a powerful approach for facilitating systematic search through the large space of programs for training with weak supervision.", "Typically, at each decoding step, we maintain a beam $B$ of program prefixes of length $n$ , then expand the program prefixes fully to program pool $P$ of length $n+1$ and keep the top $|B|$ program prefixes with the highest model probabilities out of $P$ .", "We explore randomized beam search [6], which combines the standard beam search with the randomized off-policy exploration of RL.", "Extensive studies in RL show that noise injection in the action space (i.e., when decoding program tokens) can significantly improve the exploration efficiency.", "For weak supervision, randomized beam search can increase the chance of finding correct programs.", "Instead of keeping the top $|B|$ scored program prefixes at each decoding step, we either uniformly sample a program prefix out of $P$ with probability $\epsilon $ or pick the highest scoring program prefix in $P$ with probability $1-\epsilon $ ($\epsilon $ -greedy method).", "However, for turn-level SLU, the program space is very large.", "Although this randomized strategy can aid exploration, it could repeatedly sample the same incorrect programs over time, since most programs in the program space are incorrect.", "Hence exploration is still guided by the current model policy, and long tail promising trajectories are difficult to explore.", "For example, most turns only contain creation actions; therefore, the model will assign high probabilities to creation actions.", "Thus, it is difficult to explore target programs that are composed of both creation and modification actions.", "To address these problems, we extend randomized beam search with memory augmentation, i.e., RBSMA.", "We maintain a set of fully explored program prefixes $C^{e}$ for turn $x$ .", "We first filter out fully explored program 
prefixes in $P$ that exist in $C^{e}$ , then select the $|B|$ programs in the remaining $P$ using the $\epsilon $ -greedy method.", "In a sense, $C^{e}$ enables us to learn from failed trials, considering that most prefixes in $C^{e}$ refer to incorrect programs.", "Hence this approach helps guide the exploration towards unexplored promising directions.", "However, even if we sample the correct program once, we can hardly re-sample it due to the adopted randomized exploration strategy, which makes the learning process difficult to converge.", "Therefore, we maintain a cache of highest reward programs $C^{p}$ explored so far for each turn $x$ .", "After a procedure of beam search, we augment the beam search result $S$ with programs in $C^{p}$ and update the highest reward programs in $C^{p}$ with $S$ .", "The pseudo code for RBSMA is shown in Algorithm REF .", "RBSMA takes as input: turn $x$ , fully explored program prefixes $C^{e}$ , cache of highest reward programs $C^{p}$ , number of decoding steps $T$ , program pool $P$ , and beam $B$ of size $|B|$ , where $z_{1:t}$ denotes a program of length $t$ ; it outputs the beam search result $S$ .", "First, $B_1 \leftarrow $ compute beam of programs of length 1.", "Then, for $t$ = 2...$T$ : empty $P$ ; for each $s$ =$z_{1:t-1}$ in $B_{t-1}$ , decode $s$ and compute cont($s$ )=$\lbrace z_{1:t}|z_{1:t-1}, z_{1:t}\notin C^{e}\rbrace $ , the output from one decoding step; if cont($s$ ) is empty, $s$ is a fully explored program prefix, so insert $s$ in $C^{e}$ ; add cont($s$ ) to $P$ ; finally, $B_t \leftarrow $ $|B|$ programs from $P$ with $\epsilon $ -greedy.", "After decoding, set $S=B_T \cup C^{p}$ , $C^{p}$ .update($S$ ), and return $S$ .", "Recently MAPO [9] was proposed, using a memory buffer of promising trajectories to reduce the variance of policy optimization for program synthesis.", "There are two major differences between our proposed RBSMA and MAPO.", "Firstly, RBSMA is based on beam search and MAPO employs Monte Carlo (MC) style sampling.", "The MC style sampling methods tend to revisit the programs with the highest probability under the model policy; 
whereas, after sampling the highest probability program in a peaky distribution under the model policy, beam search can still use its remaining beam capacity to explore at least $|B|-1$ other programs.", "Secondly, RBSMA utilizes a randomized exploration strategy, which has been shown to improve the efficiency of exploration and is critical for solving the complicated problems in turn-level SLU." ], [ "Automated Curriculum Learning", "Considering the diversity of problem complexity, and to facilitate faster and better learning, we explore automated CL that organizes data into a curriculum and presents it in a principled order to the learning algorithm.", "We organize the training set $\lbrace (x_i, k_i, y_i)\rbrace _{i=1}^{N}$ into $M$ tasks $\lbrace D_i\rbrace _{i=1}^{M}$ .", "An ensemble of all the tasks $\lbrace D_i\rbrace _{i=1}^{M}$ is a curriculum.", "A sample $b$ is a batch of data $\lbrace (x_i, k_i, y_i)\rbrace _{i=1}^{|b|}$ drawn randomly from one of the tasks.", "Inspired by [4], we view a curriculum containing $M$ tasks as an $M$ -armed bandit and design a syllabus as an adaptive policy which seeks to maximize payoffs from this bandit and continuously adapts to optimize the learning progress.", "An agent selects a sequence of arms (tasks) $D_1,..,D_T$ over $T$ rounds.", "After each round, the selected task produces a payoff (real-valued reward) $r_t$ and the payoffs for the other tasks are not observed.", "The bandit is non-stationary because the parameters related to $ p_\theta (z|x)$ update during training.", "Therefore, the payoff for each arm (task) can change between successive choices.", "Following [4], we use the adversarial bandit algorithm Exp3.S [1], [4], as shown below: $ \begin{split}\pi _t(i) &= (1-\epsilon )\frac{e^{w_{t,i}}}{\sum _{j=1}^M e^{w_{t,j}}}+\frac{\epsilon }{M} \\w_{t,i} &= log[(1-t^{-1})exp\lbrace w_{t-1,i}+\eta \tilde{r}_{t-1,i}\rbrace \\& +\frac{t^{-1}}{M-1}\sum _{j\ne i}exp\lbrace w_{t-1,j}+\eta 
\tilde{r}_{t-1,j}\rbrace ] \\\tilde{r}_{s,i} &= \frac{r_s\mathbb {1}(a_s=i)}{\pi _s(i)}\end{split}$ where $\pi _t$ is the policy defined by a set of weights $w_{t,i}$ , $\epsilon $ refers to the extent of noise injection, $r_s$ is the observed reward at round $s$ , $a_s$ is the arm selected at round $s$ from $\pi _t$ based on estimated bandit probability distributions of success, $\tilde{r}_{s,i}$ is the re-scaled reward for arm $i$ , and $\eta $ is the learning step.", "Different from the loss-driven progress signals explored in [4] for supervised learning, for weak supervision, we consider self reward gain (SRG) as the learning progress signal, by comparing the predictions made by the model before and after training on some sample $b$ .", "We denote the model parameters before and after training on $b$ by $\theta $ and $\theta ^{^{\prime }}$ , respectively.", "To avoid bias, we sample another $b^{^{\prime }}$ from the same task as $b$ .", "$SRG=\hat{R}(b^{^{\prime }}, \theta ^{^{\prime }})-\hat{R}(b^{^{\prime }}, \theta )$ where program $z_\theta $ is predicted by model $\theta $ on $x$ from $b^{^{\prime }}$ ; $\hat{R}(b^{^{\prime }}, \theta )$ equals $R(z_\theta ,y)$ that was defined earlier in Section ; $\hat{R}(b^{^{\prime }}, \theta ^{^{\prime }})$ is computed similarly using model $\theta ^{^{\prime }}$ .", "Finally, we re-scale $SRG$ to the interval of $[-1,1]$ by min-max normalization for better convergence and assign the rescaled SRG to payoff $r_t$ .", "The pseudo code for Automated CL with SRG is shown in Algorithm REF .", "Automated Curriculum Learning with SRG: initialize $w_{1,i}=0$ ; then, for $t$ = 1...$T$ : compute $\pi _t(k) = (1-\epsilon )\frac{e^{w_{t,k}}}{\sum _{j=1}^M e^{w_{t,j}}}+\frac{\epsilon }{M}$ ; draw task index $k$ from $\pi _t$ ; draw training sample $b$ from $D_k$ ; train network $p_{\theta }$ on $b$ , resulting in $p_{\theta ^{^{\prime }}}$ ; draw another sample $b^{^{\prime }}$ from $D_k$ ; compute $SRG=\hat{R}(b^{^{\prime }}, \theta ^{^{\prime }})-\hat{R}(b^{^{\prime }}, \theta )$ ; map $SRG$ to $r_t \in [-1,1]$ ; update $w$ with reward $r_t$ using Exp3.S." ], [ "Dataset", "The example application is voice ordering for coffee.", "One item (object) in a coffee order contains seven properties, summarized in Table REF .", "For evaluation, we only need to collect question-denotation pairs, that is, a session (turn) of user utterances and its final order.", "We find that the final order is easy to annotate and the weak supervision data can be collected with low cost.", "Table: Properties of an ordered item.", "We create two datasets, namely, recorded100 and log1144.", "recorded100 is composed of 100 sessions recorded from customers making orders by talking to a human clerk in a coffee shop, with the orders generated manually by the clerk.", "log1144 is composed of 1144 sessions extracted from real-world user logs of a commercial voice ordering system in a coffee shop, where the orders are labelled manually.", "After manually transcribing user utterances based on ASR output, recorded100 is used as the training set and log1144 as the test set.", "The data statistics are summarized in Table REF .", "One major goal of the proposed model is to use a small amount of weak supervision data collected with low cost to train a high-performing turn-level SLU system.", "Hence we intentionally train on a small amount of training data (recorded100) and test on a much larger test set to evaluate generalization of the proposed model.", "Table: Statistics of the training set recorded100 and test set log1144.", "In both datasets, we observe users order up to three different items in a session.", "Columns r1, r2, and r3 in Table REF show the percentage of sessions ordering one, two, or three items, respectively (an example of ordering three items (r3) in the test set is “one middle cup of mocha and one big cup of latte with vanilla two cups of regular lattes all take away\").", "Sessions with one ordered item 
(easy task) are far more frequent than sessions with three ordered items (the most complex problems in this setup).", "Note that one item has seven different properties, and both creation and modification actions might be included in the session.", "Hence, the program space for r3 is extremely large.", "We evaluate our proposed model on the training and test sets.", "The evaluation metric is accuracy, i.e., the percentage of cases in which the execution result $y^{^{\prime }}$ of the generated program $z$ equals the target result $y$ ." ], [ "The Pipelined Baseline", "We take the deployed pipelined system as the baseline.", "In this system, first, we transform the utterances in a turn into a sequence of tags based on the lexical task (Section REF ).", "Second, we remove contiguous repeated tags for disfluency removal.", "Third, inspired by the shift-reduce parser, we maintain a stack of tags and a set of ordered items.", "Initially, we empty the stack and the set.", "Then, we look ahead at each unscanned tag and make decisions based on hand-crafted rules.", "Some decisions shift the current tag to the stack, while others reduce the current stack.", "Then we decide whether the reduce action is a creation action or a modification action, also based on hand-crafted rules.", "The baseline approach scales up poorly (e.g., for more combinations of order items) and is difficult to port to new domains." 
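The shift-reduce procedure described above can be sketched as follows. The specific rules shown (a Product tag triggers a reduce of the pending tags into a new item; trailing tags modify the last item) are illustrative assumptions; the deployed system's hand-crafted rules are more elaborate.

```python
# A minimal sketch of a rule-based shift-reduce baseline over (type, value)
# tags. Rules are illustrative, not the deployed system's.
def pipeline_parse(tags):
    stack, items = [], []
    for tag_type, value in tags:
        if tag_type == "Product":
            # Reduce: the pending tags plus this product form one item.
            item = {"product": value}
            for t, v in stack:
                item[t.lower()] = v
            items.append(item)
            stack = []
        else:
            # Shift: keep the tag pending until a Product tag arrives.
            stack.append((tag_type, value))
    # Remaining tags are treated as modifications of the last item.
    for t, v in stack:
        if items:
            items[-1][t.lower()] = v
    return items

tags = [("Number", "Two"), ("Product", "Americano"),
        ("Number", "One"), ("Product", "Latte"), ("Size", "Big")]
print(pipeline_parse(tags))
```

Even this toy version shows why the approach is brittle: every new composition pattern (e.g. the long-distance modifications in Section "Two General Tasks") needs another hand-written rule.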
], [ "Details of Model and Training", "The seq2seq model for program generation consists of a BiLSTM encoder and a feedforward decoder with dimensions of hidden states as 30 and 50, respectively.", "The decoder takes input encoder hidden states as well as embeddings of the last 5 decoded tokens and bag-of-words vector of all the decoded tokens.", "The decoding beam size is 40.", "Token embedding dimension is 12.", "Encoder input tag embeddings are initialized as follows.", "Given $n$ types of tags in total, the tag embedding dimension is $1+n*2$ .", "The first dimension is the index of this tag in the set of all tags; the $(2i-1)^{th} (i \\in [1,n])$ dimension has value 1 or 0, indicating whether the tag is in type $i$ or not; the $2i^{th}$ dimension is the index of the tag in type $i$ .", "Tag embeddings are then optimized end-to-end.", "We define $||y^{^{\\prime }}-y||$ in $R(z, y)$ as the total number of different properties between items in $y^{^{\\prime }}$ and in $y$ .", "To train the parameters $p_\\theta (z|x)$ , similar to [5], we re-scale $R(z, y)$ based on its ranking for better convergence and optimization towards better programs.", "We set the reward of programs with top-ranked $R(z, y)$ as 1.0, and set the reward for the rest as 0.", "Note that the original $R(z, y)$ is discrete, thus there are multiple top-ranked programs with re-scaled reward 1.0.", "We also employ code assistance to help prune the search space by checking syntax of partially generated programs, following previous weak supervision work [8].", "To encourage exploration with $\\epsilon $ -greedy, we set $\\epsilon =0.5$ in RBSMA.", "For automated CL, we define tasks based on the difficulty of the denotation $y$ .", "A straightforward measure of difficulty is the number of ordered items in $y$ .", "Here we assign the training data that includes just one ordered item in the denotation to task $D_1$ (easy task), and data with multiple ordered items to task $D_2$ (hard task).", "The 
parameters for the Exp3.S algorithm (Section REF ) are $\eta =0.1$ , $\epsilon =0.05$ .", "We uniformly sample from all the tasks for the first 80 steps for warm up.", "Adam is used for optimization, with learning rate 0.001, and mini-batch size of 8 (hyperparameters are optimized based on the training set accuracy)." ], [ "Results", "We compare the proposed model (denoted WeakSup) to the deployed pipelined system (baseline).", "For ablation analysis, we evaluate the following variants of our approach.", "WeakSup_SBS_Uni is a variant of WeakSup based on the standard beam search, i.e., removing the memory in RBSMA and setting $\epsilon =0$ , and uniformly sampling from both easy and hard tasks, i.e., no CL.", "WeakSup_SBS is a variant of WeakSup based on the standard beam search and with CL.", "WeakSup_Uni is a variant of WeakSup with RBSMA but no CL.", "Table: Accuracy on the training set and test set.", "As shown in Table REF , WeakSup performs slightly better than the deployed Pipelined system on the test set, while trained only on a small number of end-to-end annotated sessions collected with low cost.", "We observe that 92.3% of the errors made by WeakSup_SBS on the training set belong to the hard task (i.e., ordering more than one item) since they are more difficult for exploration.", "WeakSup achieves 100% accuracy on the training dataset, demonstrating that long promising programs for complicated problems can be explored by RBSMA.", "Pipelined achieves 95% accuracy on the training set, indicating that it is difficult to cover all the hard problems through hand-crafted rules.", "Without CL, RBSMA improves the test set accuracy by 5.1% relative (from 78.1% to 82.1%) over the standard beam search; with CL, 7.8% relative (from 80.0% to 86.2%).", "CL improves generalization on the test data.", "With RBSMA, CL achieves 5% relative gain (comparing WeakSup with WeakSup_Uni), while maintaining similar training set accuracy.", "The combination of RBSMA and CL improves the test set 
accuracy by 10.4% relative (78.1% to 86.2%).", "The total training time for the proposed model WeakSup is 1.5 hours on one Tesla M40 GPU.", "Summing up data preparation and computation time, the proposed model saves development effort by over an order of magnitude compared to the deployed pipelined system." ], [ "Analysis of Exploration Strategy", "We study exploration strategies comparing the standard beam search vs. RBSMA, and automated CL (cl) vs. uniform sampling (uniform).", "We evaluate beam_search_uniform, beam_search_cl, rbsma_uniform, and rbsma_cl.", "We measure the exploration progress by the training set accuracy for a given epoch, shown in Figure REF .", "Figure: Analysis of exploration strategy.", "We have two observations.", "First, exploration with RBSMA progresses slowly at the very beginning compared to beam search (probably due to the randomized exploration strategy), but catches up and solves all the training set problems in the end.", "In contrast, exploration with the standard beam search stops progressing after epoch 52.", "Second, automated CL progresses faster than uniform sampling most of the time.", "Particularly after epoch 35, rbsma_cl significantly improves over rbsma_uniform, probably due to the adaptive policy (sampling more from the hard task)." 
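The adaptive policy analysed here is driven by the Exp3.S update given in the Automated Curriculum Learning section. A minimal sketch with two arms follows; the deterministic alternating pulls and the toy rewards are illustrative, standing in for the SRG-driven rewards of the real learner.

```python
import math

def exp3s_policy(w, eps):
    # pi_t(i) = (1 - eps) * softmax(w)_i + eps / M
    M = len(w)
    z = sum(math.exp(wi) for wi in w)
    return [(1 - eps) * math.exp(wi) / z + eps / M for wi in w]

def exp3s_update(w, pi, arm, r, t, eta):
    # Importance-weighted reward: non-zero only for the pulled arm.
    M = len(w)
    r_tilde = [r / pi[i] if i == arm else 0.0 for i in range(M)]
    new_w = []
    for i in range(M):
        stay = (1 - 1.0 / t) * math.exp(w[i] + eta * r_tilde[i])
        mix = (1.0 / t) / (M - 1) * sum(
            math.exp(w[j] + eta * r_tilde[j]) for j in range(M) if j != i)
        new_w.append(math.log(stay + mix))
    return new_w

w, eta, eps = [0.0, 0.0], 0.1, 0.05
for t in range(1, 51):
    pi = exp3s_policy(w, eps)
    arm = t % 2                   # alternate pulls for a deterministic demo
    r = 0.8 if arm == 1 else 0.1  # pretend the "hard" arm currently pays off more
    w = exp3s_update(w, pi, arm, r, t, eta)
print([round(p, 3) for p in exp3s_policy(w, eps)])
```

After a few rounds the policy concentrates on the arm with the larger payoff while the `eps / M` floor keeps sampling the other one, mirroring the "focus on hard task but still sample easy task" behaviour reported above.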
], [ "Analysis of Adaptive Policy of Automated Curriculum Learning", "The efficacy of the adaptive policy of our proposed automated CL algorithm is illustrated in Figure REF .", "Pi(easy) denotes the probability of sampling the easy task under the policy, while pi(hard) denotes the probability of sampling the hard task.", "Acc(easy) denotes the accuracy of the easy task in the training set, and acc(hard) the accuracy of the hard task.", "Figure REF reveals a consistent strategy: first focusing on the easy task, then alternating between the easy and hard tasks, and finally focusing more on the hard task while still sampling from the easy task.", "This automatically learned complex strategy is challenging to achieve even with carefully hand-crafted curricula, due to the challenge of defining acceptable performance on tasks.", "Also, our proposed approach continuously samples from the easy task when learning hard ones, effectively addressing the forgetting problem, as acc(easy) in Figure REF does not degrade when acc(hard) improves.", "In contrast, effectively crafting these mixing strategies by hand is challenging [14].", "Figure: Average policy and accuracy over epochs." ], [ "Conclusion", "We present an end-to-end statistical model with weak supervision for turn-level SLU.", "We propose two techniques for better exploration and generalization: (1) RBSMA for complicated problems with long programs, and (2) automated CL for weakly supervised learning to deal with the diversity of problem complexity.", "Experimental results on real-world user logs show that our model performs comparably to the deployed pipelined system, greatly reducing development labor.", "Both RBSMA and automated CL significantly improve exploration efficiency and generalization." ] ]
1906.04291
[ [ "Extensions to the halo occupation distribution model for more accurate\n clustering predictions" ], [ "Abstract We test different implementations of the halo occupation distribution (HOD) model to reconstruct the spatial distribution of galaxies as predicted by a publicly available semi-analytical model (SAM).", "We compare the measured two-point correlation functions of the HOD mock catalogues and the SAM samples to quantify the fidelity of the reconstruction.", "We use fixed number density galaxy samples selected according to stellar mass or star formation rate (SFR).", "We develop three different schemes of increasing complexity to populate haloes with galaxies, considering the scatter of the satellite HOD as an additional parameter in the modelling.", "We first modify the SAM output, removing assembly bias and using a standard Navarro-Frenk-White density profile for the satellite galaxies as the target to reproduce with our HOD mocks.", "We find that all models give similar reproductions of the two-halo contribution to the clustering signal, but there are differences in the one-halo term.", "In particular, the HOD mock reproductions work equally well using either the HOD of centrals and satellites separately or using a model that also accounts for whether or not the haloes contain a central galaxy.", "We find that the HOD scatter does not have an important impact on the clustering predictions for stellar mass selected samples.", "For SFR selections, we obtain the most accurate results assuming a negative binomial distribution for the number of satellites in a halo.", "The scatter in the satellite HOD is a key consideration for HOD mock catalogues that mimic emission line galaxy (ELG) or SFR selected samples in future galaxy surveys."
], [ "Introduction", "In the current cosmological paradigm, the Universe is composed of a filamentary network of structures shaped by gravity.", "In this framework, dark matter haloes correspond to overdense regions that evolve by gravitational instability due to mergers and interactions with other haloes.", "Galaxy formation occurs inside haloes where baryons collapse in the gravitational potentials and the condensation of cold gas allows the formation of stars and the evolution of galaxies .", "A detailed description of the halo-galaxy connection enables using the galaxies to constrain the cosmological model.", "The evolution of dark matter haloes can be followed, to high accuracy, using N-body simulations which use a set of cosmological parameters as inputs.", "In contrast, the evolution of galaxies in haloes involves many physical processes that are still poorly understood.", "The fate of baryons within dark matter haloes has been modelled using different approaches, such as hydrodynamical simulations that provide an insight into the formation and evolution of galaxies , [42].", "However, these models are computationally expensive and cannot be run over the large volumes needed for cosmological studies.", "Alternatively, the effect of baryons can be probed in such large volumes using semi-analytical models (SAMs) of galaxy formation.", "These start from haloes extracted from a large volume dark matter only simulation and use simplified physical models of the processes that shape the evolution of baryons [12], [3], [5], [45].", "Hence, SAMs make predictions for the abundance and clustering of galaxies that can be compared and tested with large surveys.", "Another way to describe the galaxy population is with the halo occupation distribution (HOD) framework [6], [39], [43], .", "This is an empirical approach that provides a relation between the mass of haloes and the number of galaxies hosted by them.", "This is expressed as the probability distribution 
$P(N|M_h)$ that a halo of virial mass $M_h$ hosts $N$ galaxies which satisfy some selection criteria.", "This approach provides insight into the halo-galaxy connection and can be used to study galaxy clustering [7], , [13], , .", "Furthermore, the HOD parameters can be tuned in detail because they only aim to reproduce a limited set of observables such as the galaxy number density and clustering.", "Thus, HOD modelling is one of the most efficient ways to populate very large volumes or to produce the many realizations required for, e.g., estimating covariance matrices using mock galaxy catalogues [37], [35].", "These mock catalogues can then be used to test and develop new algorithms that will be used for the next generation of surveys.", "The study of star forming emission line galaxies (ELGs) has gained interest over the last decade as they will be targeted by surveys such as Euclid and the Dark Energy Spectroscopic Instrument (DESI) [34], [18].", "The luminosity of an emission line depends on a number of factors, including the star formation rate (SFR), gas metallicity and the conditions in the HII regions [38].", "Even though ELG samples are related to star formation, they are not the same as SFR-selected samples.", "Still, a similar HOD approach can be used to study both galaxy populations [24], [11], [10].", "In particular, the shape of the HOD in SFR selected samples is more complex than that of the more widely studied stellar mass selected samples [14], [25].", "For example, the occupation function of central galaxies in ELG samples does not follow the canonical step-like form.", "Accurate modelling of the HOD will provide the more realistic mock catalogues needed for the analysis of future observational samples.", "Here, we use the HOD formalism to test three different ways to populate dark matter haloes with galaxies.", "The prescriptions of these models aim to replicate as accurately as possible the target galaxy populations of a SAM sample.",
"The comparison between the galaxy population in the mock catalogues and SAM samples is done via the analysis of their two-point correlation function (2PCF), which is related to the power spectra of density fluctuations and is sensitive to cosmology [22].", "We also include the scatter of the HOD of satellites in our modelling, and quantify the impact of using this additional parameter on the clustering.", "The outline of this paper is as follows.", "The definition of galaxy samples used and the basic properties of the N-body simulation and the SAM are given in Section .", "The correlation functions and the HODs of the samples are presented in Section .", "In Section  we introduce the HOD models used to build the mock catalogues and the recipes employed to perform this procedure.", "The main results and analysis are discussed in Section  while in Section  we present our conclusions.", "Appendix  shows the predicted occupation functions for a particular HOD model." ], [ "Simulation data", "In this section we give a brief overview of the galaxy formation model used (§ 2.1) and the N-body simulation in which it is implemented (§ 2.2)." 
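The scatter of the satellite HOD, included above as an additional parameter in the modelling, can be sampled in practice as follows. This is a sketch assuming the mean-plus-$\beta$ parametrisation used later in the paper ($\sigma_{\rm NB} = (1+\beta)\sigma$, with the negative binomial $p$ and $r$ set by the first two moments) and NumPy's negative binomial generator.

```python
import numpy as np

def sample_nsat(mean_nsat, beta, rng, size=1):
    """Draw satellite counts with scatter (1 + beta) times the Poisson value.

    beta = 0 recovers a Poisson draw; beta > 0 uses a negative binomial
    with the same mean and an inflated variance (a sketch of the
    parametrisation described in the text; mean_nsat must be positive).
    """
    if beta == 0:
        return rng.poisson(mean_nsat, size=size)
    var = ((1.0 + beta) * np.sqrt(mean_nsat)) ** 2  # sigma_NB^2
    p = mean_nsat / var                    # success probability
    r = mean_nsat**2 / (var - mean_nsat)   # "number of failures" parameter
    return rng.negative_binomial(r, p, size=size)

rng = np.random.default_rng(42)
draws = sample_nsat(mean_nsat=5.0, beta=0.05, rng=rng, size=100_000)
# The sample mean stays near 5 while the variance exceeds the Poisson value.
```

By construction the draw has mean $\langle N\rangle$ and variance $(1+\beta)^2 \langle N\rangle$, so a single scalar $\beta$ controls the departure from Poisson scatter.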
], [ "Galaxy formation model", "A galaxy formation model needs to take into account a variety of physical processes such as radiative cooling of gas; AGN, supernovae and photoionisation feedback; chemical evolution; star formation; disc instabilities; collapse and merging of dark matter haloes; and galaxy mergers.", "These affect the fate of baryons in haloes which lead to the formation and evolution of galaxies.", "Several physical processes such as star formation and gas cooling are not fully understood due to their complexity.", "As a consequence, a set of free parameters is used in the equations that model these processes.", "These free parameters are tuned in order to reproduce observations such as the luminosity functions, colours and the distribution of morphological types.", "In this context, different SAMs usually have their own implementations to model these physical processes, predicting different galaxy populations.", "Here we use the outputs at $z=0$ from the SAM of [27] (hereafter G13) which is a version of the L-GALAXIES code from the Munich group [21], [16], [20], [26], [29].", "The outputs are publicly available from the Millennium Archive (http://gavo.mpa-garching.mpg.de/Millennium/).", "The samples used here are defined according to three different number densities, ranking the galaxies in the SAM by their stellar mass or SFR in decreasing order.", "These samples are useful for comparison with observational catalogues with similar space densities.", "Table REF shows the three number densities and the cuts in stellar mass and SFR used in each case."
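The fixed number density selection described above (keep the $n\,V$ top-ranked galaxies by stellar mass or SFR) can be sketched as follows; the toy stellar-mass array is hypothetical, only the selection logic follows the text.

```python
import numpy as np

def select_fixed_density(values, number_density, box_size):
    """Boolean mask keeping the top-ranked galaxies at a fixed number density.

    Ranks galaxies by `values` (e.g. stellar mass or SFR) in decreasing
    order and keeps the n_target = number_density * box_size**3 highest,
    mimicking fixed number density samples.
    """
    n_target = int(round(number_density * box_size**3))
    order = np.argsort(values)[::-1]          # indices in decreasing order
    mask = np.zeros(len(values), dtype=bool)
    mask[order[:n_target]] = True
    return mask

# Hypothetical example: a 10^-2.5 h^3 Mpc^-3 sample in a 500 h^-1 Mpc box.
rng = np.random.default_rng(1)
log_mstar = rng.normal(10.0, 0.5, size=1_000_000)  # toy stellar masses (dex)
mask = select_fixed_density(log_mstar, 10**-2.5, 500.0)
cut = log_mstar[mask].min()  # the implied stellar-mass cut for this density
```

The implied cut (the smallest value among selected galaxies) is the analogue of the stellar mass and SFR thresholds listed in Table 1.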
], [ "The Millennium simulation", "The distribution of dark matter haloes used in this work is drawn from the Millennium-WMAP7 simulation [27], which is identical to the Millennium Simulation , but with updated cosmological parameters that match the results from the WMAP7 observations.", "This version assumes a flat $\\Lambda $ CDM universe with $\\Omega _m = 0.27$ , $\\Omega _{\\Lambda } = 0.73$ , $h = H_0/(\\rm 100\\ km\\ s^{-1}\\ Mpc^{-1}) = 0.704$ and $\\sigma _{8} = 0.81$ .", "The simulation was carried out in a box of side $500\\ h^{-1} \\rm Mpc$ , following $2160^3$ particles of mass $9.31\\times 10^{8}h^{-1}\\rm M_{\\odot }$ .", "The run produced 61 simulation snapshots from $z=50$ down to $z=0$ .", "G13 use a friends-of-friends (FOF) group-finding algorithm to identify dark matter haloes in each snapshot [19] and then run SUBFIND to identify the subhaloes .", "Halo merger trees are constructed for each output and track the evolution of haloes through cosmic time.", "These trees are the starting points for the SAM.", "Table: The first column shows the abundance of galaxies in the three density samples used here.", "The second and third columns show the cuts applied to G13 galaxies in stellar mass and star formation rate, respectively, to achieve these abundances." ], [ "Characterization of the SAM galaxy samples", "This section introduces the statistics used to characterize the distribution of galaxies, starting with the measurement of the correlation function (§ 3.1), the form of the HOD predicted by the SAM (§ 3.2) and the scatter in the HOD (§ 3.3)."
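The first of these statistics, the two-point correlation function, can be illustrated with a brute-force toy estimator before turning to the measurements. This sketch uses the natural estimator $\xi = DD/RR - 1$ in a periodic box; the paper's actual measurements use the optimised Corrfunc code, and the function and array names here are illustrative.

```python
import numpy as np

def xi_natural(pos, box, bins, n_rand=None, seed=0):
    """Toy natural estimator xi(r) = DD/RR - 1 in a periodic box.

    Brute-force O(N^2) pair counting for small catalogues only;
    `pos` is an (N, 3) array of comoving positions.
    """
    rng = np.random.default_rng(seed)
    n_rand = n_rand or len(pos)

    def pair_hist(p):
        d = p[:, None, :] - p[None, :, :]
        d -= box * np.round(d / box)          # minimum-image periodic wrapping
        r = np.sqrt((d**2).sum(axis=-1))
        iu = np.triu_indices(len(p), k=1)     # each unique pair counted once
        return np.histogram(r[iu], bins=bins)[0]

    dd = pair_hist(pos)
    rr = pair_hist(rng.uniform(0, box, size=(n_rand, 3)))
    # Normalise the counts by the number of pairs in each catalogue.
    n_d, n_r = len(pos), n_rand
    dd_norm = dd / (n_d * (n_d - 1) / 2)
    rr_norm = rr / (n_r * (n_r - 1) / 2)
    return np.where(rr_norm > 0, dd_norm / rr_norm - 1.0, 0.0)

bins = np.linspace(5.0, 25.0, 5)
pts = np.random.default_rng(2).uniform(0, 100.0, size=(400, 3))
xi = xi_natural(pts, 100.0, bins)  # close to zero for an unclustered sample
```

For an unclustered (Poisson) point set the estimator fluctuates around zero; a clustered galaxy sample gives $\xi > 0$, with the one-halo term boosting the small-scale bins.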
], [ "Clustering measurement: two-point galaxy correlation function", "The spatial two-point correlation function, $\\xi (r)$ , measures the excess probability of finding a pair of galaxies at a given separation with respect to a random distribution.", "We compute the 2PCF of the galaxy samples with the Corrfunc code [44].", "Fig.", "REF shows the 2PCF of the stellar mass (top) and SFR (bottom) selected samples for the different space densities.", "For the former, the amplitude of the clustering increases with decreasing number density, as we consider more massive galaxies.", "The impact of including these massive galaxies is stronger at small scales and weaker at large scales.", "In contrast, for the SFR selected galaxies the amplitude of the 2PCF for the different samples remains largely unchanged, except at small scales where satellite-satellite pairs make an important contribution to the clustering amplitude.", "For both selections, the satellite fraction increases with increasing number density.", "In the 2PCF, we can distinguish between the contribution from galaxy pairs in the same halo and from different haloes.", "The former are the main contributors to the amplitude of the 2PCF on small scales, namely the one-halo term which dominates up to $\\sim 1\\ h^{-1} {\\rm Mpc}$ , while galaxy pairs between different haloes contribute mostly to the two-halo term which determines the clustering on large scales.", "In this regime the total number of galaxies in the halo, regardless of whether they are satellites or the central, drives the amplitude of the clustering, acting as a weighting for the bias of each halo in computing an overall “effective” bias for the sample (see e.g.", "[4]).", "The one-halo term is sensitive not only to the number of satellites, but also depends on their spatial distribution.", "Figure: Two-point correlation functions (ξ(r)\\xi (r)) of the different galaxy samples from G13 defined in Table .", "Stellar mass (top) and SFR
selected samples (bottom).", "Colors indicate each sample as labeled in the bottom panel.", "The fraction of satellites in each sample is shown in both panels, with the color indicating the sample number density." ], [ "The halo occupation function predicted by the SAMs", "The galaxy populations in the SAMs depend on the choices adopted for the modelling of the baryonic processes.", "Hence, depending on the SAM employed, different galaxy catalogues with different luminosity functions, stellar mass functions or correlation functions can be obtained for the same dark matter simulation.", "For example, [14] studied the clustering predicted by different SAMs and found some differences, particularly in galaxy samples selected by SFR and cold gas mass.", "Moreover, they show that the shapes of the HODs are model-dependent, which reflects the differences in the implementation of physical processes in each SAM.", "For example, the specific modelling of dynamical friction affects the satellite population in SAMs.", "Here we are not interested in the detailed shape of the HOD predicted by a particular SAM, but in how best to use the occupation functions to populate dark matter haloes with galaxies to produce a similar spatial distribution to that resulting from a SAM.", "The HOD is usually broken down into the contribution from central and satellite galaxies.", "Fig.", "REF shows these two components for stellar mass and SFR selected samples with the same number density for the G13 SAM.", "Here, each HOD is computed in bins of width 0.08 dex in the logarithm of the halo mass, where the position of each $\\langle N \\rangle $ value is plotted at the median value within each bin.", "The striking difference in the shape of the HOD of centrals between the two selections is due to the different galaxies that are included.", "Massive centrals tend to be red galaxies hosted by massive dark haloes.", "Such centrals are included in stellar mass selected samples but not
when selecting by SFR.", "The galaxies in the SFR samples correspond mainly to blue star-forming galaxies, excluding luminous red galaxies with high stellar mass but low SFR.", "It is noteworthy that the fraction of haloes that contain a central passing the SFR selection never reaches unity for the sample plotted in Fig.", "REF .", "These features of the HOD of SFR galaxies have been noted in SAMs before [14], [15], [25] and inferred for blue galaxies in the SDSS .", "This shows that a significant number of haloes in the SFR selected samples do not host a central as their SFR is below the threshold.", "The same situation is found in the other number density samples.", "Note that, in observational samples, the ranking of galaxies in order of their emission line luminosity may not correspond to the ranking in SFR due to dust attenuation, which means that the highest SFR galaxies may not necessarily have the brightest emission lines.", "We estimate the uncertainties of the HOD values using jackknife resampling [37], dividing the simulation volume into 10 slices.", "We use the position of the centre of the potential of haloes to classify the galaxies within each halo.", "The resulting errors are shown as the shaded regions in Fig.", "REF , and they are negligible for all halo masses except at the high mass end and for the HOD of centrals selected by SFR.", "Because of the simple relation between halo mass and occupation number, the HOD represents a useful approach for the construction of mock galaxy catalogues.", "Here we have described the first moment of the HOD, the main ingredient in recipes for building mocks.", "Nevertheless, it is important to also consider the second moment, i.e., the dispersion in the HOD of satellites.", "Figure: The HOD predicted by G13 for stellar mass (top) and SFR selected samples (bottom), for a number density of 10 -2.5 h 3 Mpc -3 10^{-2.5} h^3 \\rm Mpc^{-3}.", "Black lines show the HOD for the full sample and red and blue indicate the HOD for
central and satellite galaxies, respectively.", "The red and blue shaded regions represent jackknife errors calculated using 10 subsamples.", "The horizontal black dotted line shows an average occupation value of unity." ], [ "The predicted dispersion in the halo occupation number", "When the simplest HOD approach is used to build mock catalogues, the mean of the distribution is the main parameter.", "Central galaxies are assumed to follow a nearest-integer distribution where the mean $\\rm \\langle N_{cen} \\rangle $ is between zero and one.", "For satellites, a Poisson distribution with mean $\\rm \\langle N_{sat} \\rangle $ is the most widely assumed distribution [33], .", "In G13, satellites are classified as type-1 if they are hosted by a resolved subhalo, and type-2 or orphans if the subhalo has been destroyed by tidal effects and is no longer identified.", "[9] found that the number of low mass subhaloes in main haloes in the Millennium-II Simulation [8] is well described by a negative binomial distribution, which corresponds to a super-Poissonian statistic as its scatter is larger than that of a Poisson distribution.", "This suggests that the type-1 satellite population can also be described by this distribution.", "Based on the outputs from the SAM presented in [30], and using the Bolshoi [32] and MultiDark [41] simulations, [31] showed that ignoring this non-Poissonity in the HOD of subhaloes results in systematic errors in the predicted clustering of galaxies.", "Here we extend the application of the negative binomial distribution by checking whether the HOD of G13 satellites, that is, including both type-1 and type-2, is well described by this statistic.", "We expect that the HOD scatter is model-dependent because of the different treatments of dynamical friction.", "Moreover, as some galaxy properties, such as SFR and stellar mass, have a model-dependent scatter, it is reasonable to assume the same for HODs.", "For example, [28] showed that different galaxy formation
models do not have the same dispersion in the stellar mass-halo mass relation.", "Therefore our results are specific to the G13 model.", "It is likely that a different SAM would require an adjustment to the value of $\\beta $ to describe the scatter of the satellite HOD.", "Nevertheless we expect our general results to hold for any SAM, and that the satellite distribution displays more scatter than Poisson.", "The Poisson and negative binomial distributions differ in their shapes so it is useful to parametrise the departure from the Poisson scatter.", "We use the parameter $\\beta $ (defined below) to denote this departure.", "For a Poisson distribution the variance is given by the mean value of the random variable, namely $\\rm \\langle N_{sat}\\rangle $ , with the standard deviation given by $\\sigma = \\sqrt{\\rm \\langle N_{sat}\\rangle }$ .", "The negative binomial distribution has the same mean as the Poisson distribution, but a larger scatter which can be expressed as $\\sigma _{\\rm NB} = \\sigma + \\beta \\sigma ,$ where $0<\\beta <1$ .", "Thus, $\\beta $ indicates the fractional increase in the standard deviation with respect to the Poisson value $\\sigma $ .", "Under this definition, when $\\beta =0$ the distribution is Poissonian and if $\\beta =1$ the standard deviation is twice that of a Poisson distribution.", "The probability function of the negative binomial distribution is given by $P(N\\left| r,p \\right.)", "= \\frac{\\Gamma (N+r)}{\\Gamma (r)\\Gamma (N+1)}p^r (1-p)^N.$ Here $\\Gamma (x)$ is the gamma function, which satisfies $\\Gamma (x) = (x-1)!$ for integer $x$ .", "The parameters $r$ and $p$ are determined by the first moment $\\left< N\\right>$ and the second moment $\\sigma ^2$ of the distribution, $ p = \\frac{\\left< N\\right>}{\\sigma _{\\rm NB}^2}, \\hspace{5.69054pt} r = \\frac{\\left< N\\right>^2}{\\sigma _{\\rm NB}^2 - \\left< N\\right>}.$ Thus, we can control the width of the negative binomial distribution through the parameter $\\beta $ and compute the value of $\\sigma
_{\\rm NB}^2$ ." ], [ "Generating HOD mock catalogues", "We now describe the procedure followed to build HOD mock galaxy catalogues using the HODs of the SAM samples.", "Section REF presents the three methods we use to populate haloes with galaxies.", "In Section REF we specify the treatment of the scatter in the HOD of satellites.", "Section REF explains how we impose a standard Navarro-Frenk-White (NFW) density profile for satellites.", "Section REF presents the impact of assembly bias in the SAM samples and explains why it must be removed from the SAM in order to compare with the HOD mock catalogues.", "Finally, in Section 4.5 we discuss the treatment of the radial distribution of satellite galaxies within haloes." ], [ "The HOD models used to build mocks", "We test three different HOD schemes of increasing complexity.", "This helps us to understand the level of complexity needed to obtain accurate clustering predictions.", "Each model uses occupation functions obtained from linear interpolations of the HOD values in each bin, rather than fitting a parametric form to the SAM HOD.", "The distribution of galaxies can be nearest-integer (centrals only) and Poisson or negative binomial (satellites).", "The 1-HOD model builds mock catalogues using the HOD of all galaxies from the SAM sample (black solid lines in Fig.", "REF ) including both centrals and satellites.", "The model assumes either a Poisson or negative binomial distribution for the occupation number.", "We adopt a Monte Carlo approach to obtain the final number of galaxies.", "This approach does not distinguish between centrals and satellites.", "If the model predicts that $\\rm N \\ge 1$ we assume that this halo hosts a central and $\\rm N_{sat}=N-1$ .", "Because of this, the number of centrals and satellites in the 1-HOD mock catalogues can be notably different from those in the SAM samples, where there are haloes with satellites but no central.", "Moreover, the HODs of these two separate components in
the mock catalogues are completely different from the HODs of the SAM samples (see Appendix ).", "However, the total number of galaxies in these mock catalogues is essentially the same as in the SAM samples." ], [ "2-HOD", "The 2-HOD model uses the HOD of centrals and satellites separately, i.e., the red and blue solid lines in Fig.", "REF , respectively.", "Thus, a particular distribution can be assumed for each component and the modelling is done independently for each one.", "For centrals, we use the nearest-integer distribution and for satellites the Poisson or negative binomial distribution.", "This scheme predicts practically the same number of centrals and satellites as the SAM samples.", "Note that in a non-negligible number of realizations it is possible to get haloes without a central.", "This is more likely for haloes with masses for which $\\rm \\left\\langle N_{cen}\\right\\rangle < 1$ , which is more frequently the case in SFR-selected samples." ], [ "4-HOD", "This model contains more information about the galaxy population of the SAM sample than the 2-HOD model.", "The 4-HOD requires us to store the number of haloes that host a central ($\\rm N_{cen}$ ) and the number of haloes that do not host a central ($\\rm N_{nocen}$ ) as a function of halo mass.", "Under this definition the total number of haloes in the volume is the sum of both quantities.", "Furthermore, the 4-HOD also needs knowledge of the number of satellites in haloes with a central ($\\rm N_{sat\\_cen}$ ) and without a central ($\\rm N_{sat\\_nocen}$ ).", "Thus, the total number of satellites is the sum of these two quantities.", "With these definitions, we build new HODs for satellites that take into account the population of centrals in the SAM samples.", "The SAM samples contain haloes with satellites but no centrals.", "This is more common in SFR selected samples.", "Indeed the HOD of centrals in these samples indicates that a large number of haloes do not host a central (see Fig
REF ), and the 4-HOD takes this feature into account.", "We then define the satellite occupation functions conditioned on whether or not haloes host a central.", "With the four quantities explained above, we can define the conditional HODs, $\\rm \\langle N_{\\rm sat\\_cen}(M_h) \\rangle &= \\rm \\frac{N_{sat\\_cen}}{N_{cen}}(M_h) \\\\\\rm \\langle N_{\\rm sat\\_nocen}(M_h) \\rangle &= \\rm \\frac{N_{sat\\_nocen}}{N_{nocen}}(M_h) $ Fig.", "REF shows the conditional HODs where the main differences are observed at low halo masses.", "Even though the ratio between these two HODs is close to unity, it is the galaxies hosted by these haloes ($\\approx 10^{12} h^{-1} {\\rm M_{\\odot }}$ ) that dominate the amplitude of clustering.", "The conditional HODs are well fitted by a negative binomial distribution, including the HODs of the other number density samples.", "The 4-HOD method uses a Monte Carlo approach to decide if a halo hosts a central galaxy.", "Depending on this outcome, one of the two conditional HODs is then chosen to obtain the number of satellites.", "Figure: Conditional HODs from the 4-HOD method for an SFR selected sample with number density 10 -2.5 h 3 Mpc -3 {\\rm 10^{-2.5}} h^3 \\,{\\rm Mpc^{-3}}.", "Top: Average number of satellites in haloes with a central (cyan) and without a central (magenta).", "Bottom: Ratio of the two HODs shown in the upper panel.", "Shaded regions represent jackknife errors calculated using 10 subsamples." ], [ "Treatment of scatter in the HOD of satellites", "A Poisson distribution is fully described by its first moment.", "In the case of satellites this is $\\rm \\langle N_{sat} \\rangle $ .", "If the distribution of the number of satellites follows instead a negative binomial distribution, an additional parameter $\\beta $ is needed which specifies the increase in the scatter with respect to a Poisson distribution (see Eq. 
1).", "We fix the $\\beta $ value so that we reproduce as closely as possible the scatter of the HOD of satellites in a given SAM sample.", "Fig.", "REF shows the scatter of the HOD of satellites in the SAM and in 2-HOD mock catalogues for two illustrative $\\beta $ values in an SFR selected sample.", "This shows that a small but non-zero $\\beta $ is required to reproduce the HOD scatter of the SAM sample.", "The same is found for the other number density samples, and for the conditional HODs.", "We do not perform this analysis for the 1-HOD model as satellites are not treated independently in this case.", "It is not possible to replicate the HOD scatter in the SAM more closely as this would require $\\beta $ to be a function of halo mass.", "Instead, we assume a constant scatter for the HOD of satellites by using the same $\\beta $ for all halo mass bins.", "The accuracy of the $\\beta $ values used is judged by checking the quality of the resulting mocks via comparison of their 2PCFs with the clustering of the shuffled-NFW samples (see § 4.5 below for the definition of this catalogue).", "We show in Section  that when the scatter of the SAM and HOD mocks is matched up to $M_h \\lesssim 10^{13.5} M_{\\odot }h^{-1}$ ($\\beta =0.05$ for SFR-selected samples), we obtain the most accurate clustering predictions.", "In contrast, using larger values for $\\beta $ worsens the predictions (as does using $\\beta =0$ , which corresponds to Poisson scatter).", "Figure: The HOD of satellites in a SAM sample (dashed blue) and a 2-HOD mock catalogue (solid red) contrasting two values of the parameter β\\beta that controls the scatter (see Eq.", "1): β=0.05\\beta =0.05 (top) and β=0.18\\beta =0.18 (bottom).", "The shaded regions show the HOD scatter and the red dotted lines correspond to the scatter in the HOD mocks.", "The subpanels show the ratios between the HOD scatter of the mocks and the SAM sample.", "Note that it is not possible to visually distinguish a Poisson scatter from the
β\\beta scaled versions plotted in the main panels, but this choice would lead to a larger ratio of variances than the range plotted in the lower subpanels.", "The satellite HOD is well described by the negative binomial distribution for a wide range of halo masses.", "Fig.", "REF shows the satellite PDF in a particular mass bin for a stellar mass and an SFR selected sample.", "We show negative binomial distributions defined by $\\beta =0.08$ and $\\beta =0.05$ .", "In order to compute the satellite distributions, we split satellites according to whether or not their haloes host a central galaxy, which is relevant for the 4-HOD model.", "The satellite distribution matches the negative binomial when most of the haloes in the bin are included.", "A similar close match is found when comparing with Poisson distributions ($\\beta =0$ ).", "Note that in the SFR selection case most of the haloes do not host a central galaxy, as the HOD of centrals in that bin suggests.", "The opposite behaviour is observed when selecting by stellar mass.", "Figure: Probability distributions of satellites (cyan histograms) that are hosted by haloes with masses within the blue shaded (vertical) mass range of the HODs shown in the insets.", "The galaxies are selected by stellar mass (top) and SFR (bottom), with a number density of 10 -2.5 h 3 Mpc -3 {\\rm 10^{-2.5}} h^3 {\\rm Mpc^{-3}}.", "Left: Haloes without centrals (i.e., their central did not pass the selection cut).", "Right: Same as the left panel but considering haloes with centrals.", "Note the high probability of finding haloes that do not host centrals in the SFR sample, as expected from the low value of 〈N cen 〉\\langle \\rm N_{cen} \\rangle in the mass range analyzed, as shown in the inset.", "The distributions are well described by negative binomial distributions (magenta) in the cases where most of the haloes in the bin are included (i.e., the top right and bottom left panels).", "The negative binomial distributions shown here are obtained
using a scatter that is 5%5\\% larger than that of a Poisson distribution in the SFR selected sample, and 8%8\\% larger in the stellar mass selection case." ], [ "The radial distribution of satellite galaxies in halos", "The number of satellites in the mock catalogues is obtained from the adopted HOD model (see Section REF ).", "Their positions in haloes are set according to the standard NFW density profile [36], which requires two parameters, the concentration and the scale radius.", "The former depends on halo mass and the latter is a function of the virial radius.", "For simplicity, we assume that all haloes in the simulation volume have the same concentration parameter $c=13.98$ , which corresponds to the concentration of a halo at redshift $\\rm z=0$ with mass $M_h = 10^{12.5}{\\rm M_{\\odot }}h^{-1}$ .", "We do not use a more realistic model for the concentration as we are interested in comparing the HOD models rather than obtaining a realistic distribution of satellites.", "We impose that the maximum distance from a satellite to the halo centre is two virial radii, where the virial radius depends on the halo mass.", "This defines the NFW mass profile used to obtain the satellite distances via a Monte Carlo approach.", "We modify the SAM output to impose a similar satellite distribution as described below (§ 4.5)."
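The Monte Carlo placement of satellites on the NFW mass profile described above can be sketched by inverse-CDF sampling of the enclosed-mass function $\mu(x) = \ln(1+x) - x/(1+x)$ with $x = r/r_s$. The concentration $c=13.98$ and the $2\,r_{\rm vir}$ cut follow the text; the bisection inversion and the example $r_{\rm vir}$ value are illustrative choices.

```python
import numpy as np

def mu(x):
    """NFW enclosed-mass shape: M(<r) is proportional to mu(r / r_s)."""
    return np.log(1.0 + x) - x / (1.0 + x)

def sample_nfw_radii(n, r_vir, c=13.98, r_max_factor=2.0, seed=0):
    """Draw satellite radii from an NFW mass profile by inverse-CDF sampling.

    A sketch of the Monte Carlo placement in the text: fixed concentration
    c = 13.98 and satellites allowed out to r_max_factor virial radii.
    mu(x) is inverted numerically by vectorised bisection.
    """
    rng = np.random.default_rng(seed)
    x_max = r_max_factor * c                  # r_max in units of r_s = r_vir / c
    u = rng.uniform(0.0, 1.0, size=n) * mu(x_max)
    lo = np.zeros(n)
    hi = np.full(n, x_max)
    for _ in range(60):                       # bisection to machine precision
        mid = 0.5 * (lo + hi)
        too_small = mu(mid) < u
        lo = np.where(too_small, mid, lo)
        hi = np.where(too_small, hi, mid)
    return 0.5 * (lo + hi) * (r_vir / c)      # back to physical units

radii = sample_nfw_radii(10_000, r_vir=0.3)   # e.g. r_vir in h^-1 Mpc
```

Multiplying each radius by an isotropically drawn unit vector then gives a satellite position relative to the halo centre.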
], [ "Removing assembly bias from the SAM output", "In order to determine the best methodology to produce HOD mock catalogues, we aim to compare them with the clustering of the original SAM samples via their 2PCF.", "Before making this comparison it is necessary to remove assembly bias from the SAM samples.", "The clustering of dark matter haloes depends on additional properties besides mass.", "For example, [23] showed that the clustering of low mass haloes depends on their formation redshift and other works have found dependencies on concentration and subhalo occupation number among other secondary properties.", "This additional contribution to the clustering is commonly known as assembly bias and potentially changes the galaxy clustering amplitude on large scales.", "The standard HOD approach considers only halo mass as the variable regulating the galaxy population.", "SAMs include assembly bias because they follow the evolution of baryons in halo merger histories that are shaped by the large-scale environment in the N-body simulation.", "Namely, SAMs include a dependence on secondary halo properties as these affect the halo merger history and the evolution of galaxies that live within them.", "Thus, in order to compare the clustering between SAM samples and HOD mocks which use only halo mass as input, it is necessary to remove the assembly bias signal in the former samples.", "Assembly bias can be eliminated from SAM samples through the shuffling technique introduced by [17].", "This consists of randomly exchanging the galaxy populations between haloes of the same mass, thus removing any connection to the assembly history of the haloes.", "This procedure does not change the distances from satellites to their central galaxy in each halo.", "In clustering terms, the one-halo term of this “shuffled” catalogue is the same as the original SAM sample but its two-halo term is different because assembly bias is not present in the shuffled sample.", "If the SAM samples 
did not have assembly bias, we would measure the same 2PCFs for their shuffled samples as measured for the original output.", "Fig.", "REF shows the correlation functions of a SAM sample and its shuffled version, for both the stellar mass and the SFR selected samples.", "The assembly bias signature, shown in the middle panels, is evident in the clustering differences between these two catalogues at large separations.", "We also show the 2PCF of a modified shuffled sample that will be introduced below.", "The assembly bias signatures remain unchanged for the other samples, but they are noisier for the lowest number density samples as they contain fewer galaxies.", "Figure: Top: Correlation functions of SAM samples (dotted black in the main panel, solid black in the subpanel), and their modifications: the shuffled (dashed magenta) and the shuffled-NFW samples (dashed cyan).", "The galaxy samples are selected by stellar mass (left) and SFR (right), with a number density of ${\\rm 10^{-2.5}}\\,h^3{\\rm Mpc^{-3}}$.", "Middle: Ratios between the 2PCF of the SAM and shuffled samples.", "Differences at large scales are signatures of assembly bias.", "Bottom: Ratios of the 2PCF with respect to the 2PCF from the shuffled-NFW samples.", "The differences in the one-halo term below ${\\rm 1\\ Mpc/}h$ indicate the departure of the satellite profiles from an NFW profile. It can be seen that assembly bias increases the clustering for stellar mass selected samples, as was shown by .", "SFR selected samples, on the other hand, show a decreased clustering amplitude.", "For the intermediate galaxy density sample, the assembly bias enhances the two-halo term by $\\sim 12\\%$ for the stellar mass selected sample and suppresses the amplitude in the SFR selection case by $\\sim 4\\%$ .", "The enhancement of the clustering amplitude for the other stellar mass selections remains similar.", "For the SFR selections, we observe that the suppression of clustering becomes weaker for higher 
density samples.", "Indeed, assembly bias can enhance the amplitude if the density of the sample is very high, as shown in [15]." ], [ "The shuffled-NFW target catalogue: changing the satellite distribution in the SAM", "The radial profile of satellites in the G13 SAM deviates from the standard NFW profile of dark matter within halos because the SAM associates galaxies with subhalos (or a proxy, such as the most bound particle, in the case of subhalos which are no longer resolved).", "The radial profile of subhalos is different from that of the dark matter (see e.g.", "[1]).", "The choice of which subhalos (and former subhalos) are associated with galaxies is driven by the galaxy formation model, which determines the luminosity of any galaxy associated with a subhalo and whether or not it has merged due to dynamical friction (only type 2 satellites, those that no longer have a resolved subhalo associated with them, are considered as candidates for galaxy mergers).", "The final step before testing the accuracy of the HOD models is to modify the shuffled SAM catalogue to force the satellites in each halo to follow an NFW profile.", "We call the result the shuffled-NFW catalogue.", "Because satellite galaxies in the SAMs and shuffled samples do not follow an NFW profile, the one-halo term of their 2PCFs is different from the one-halo term of the shuffled-NFW sample, as shown in the bottom panel of Fig.", "REF .", "The shuffled-NFW catalogue does not contain assembly bias, and the satellites follow the same NFW profile as adopted in the HOD mocks.", "We note that the shuffled-NFW is not intended to be the “best” prediction of galaxy clustering but rather is the target sample for the reconstructions using the HOD mocks, with a controlled one-halo clustering pattern to facilitate testing."
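The shuffling used above to remove assembly bias — exchanging galaxy populations among haloes of the same mass while keeping satellite offsets relative to the halo centre — can be sketched as follows. This is a schematic illustration with hypothetical names, assuming haloes are grouped into narrow logarithmic mass bins as a practical stand-in for "same mass".

```python
import numpy as np

def shuffle_populations(halo_mass, populations, n_bins=50, rng=None):
    """[17]-style shuffling: within each narrow mass bin, reassign each halo's
    galaxy population to a randomly chosen halo of that bin.  `populations[i]`
    is whatever describes the galaxies of halo i (e.g. satellite offsets from
    the halo centre); it is moved wholesale, so the one-halo term is preserved
    while the link to assembly history (two-halo term) is erased."""
    rng = np.random.default_rng() if rng is None else rng
    halo_mass = np.asarray(halo_mass, dtype=float)
    logm = np.log10(halo_mass)
    edges = np.linspace(logm.min(), logm.max() + 1e-6, n_bins)
    bins = np.digitize(logm, edges)
    order = np.arange(len(halo_mass))
    shuffled = order.copy()
    for b in np.unique(bins):
        idx = order[bins == b]
        shuffled[bins == b] = rng.permutation(idx)  # permute within the bin
    # Halo i now hosts the population originally attached to halo shuffled[i].
    return [populations[j] for j in shuffled]
```

Because the permutation acts only within mass bins, any halo-mass-only statistic (such as the HOD itself) is unchanged by construction.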
], [ "Satellite radial distributions and clustering of HOD mocks", "In Fig.", "REF we show the satellite profiles in the SAM samples, for stellar mass and SFR selected samples separated into the contributions from type-1 satellites and from orphan galaxies.", "To examine the departure from NFW, we produce a SAM-NFW catalogue where satellites in the SAM are forced to follow the same NFW profile as used in the HOD mocks.", "For this catalogue, we update the satellite positions in the SAM samples according to the same NFW density profile used to produce the HOD mocks.", "Note that this SAM-NFW is different from the shuffled and shuffled-NFW catalogues mentioned above.", "It can be seen that the NFW profile is different from the profile of type-1 satellites and orphans, particularly for the SFR selected sample.", "The profiles in the 2-HOD and 4-HOD mock catalogues are also shown, and they match the NFW profile, as expected from the construction of the HOD mock.", "Both models also reproduce the NFW profile in the other number density samples.", "The masses of host haloes of 1-HOD satellites do not correspond to the masses in the original SFR selected samples (see Fig.", "REF ).", "Thus, the virial radii of these haloes define NFW density profiles that are different from the profiles in the other models.", "This has an impact on the positions of satellites, generating the striking difference from the NFW profile in Fig.", "REF for the SFR selection.", "The same occurs for the stellar mass selections but the effect is less extreme than in the SFR case.", "Figure: Profile of satellites hosted by subhaloes (dashed magenta) and orphans (dashed cyan) in a stellar mass (top) and SFR selected sample (bottom).", "The lines show the SAM with an NFW imposed for all satellites (solid black), and HOD mocks built by the 1-HOD (solid red), 2-HOD (solid blue) and 4-HOD models (solid green) where the HODs of satellites are described by the Poisson distribution. The HOD models predict different galaxy populations for 
the G13 SAM samples.", "Table REF shows the satellite fraction of the SAM samples and HOD mocks built by the three different models, assuming a Poisson distribution for the HOD of satellites.", "Note that the 2-HOD and 4-HOD predict almost the same satellite fraction as in the SAM samples because of the separate HOD modelling of centrals and satellites.", "Table: Satellite fractions of the galaxy samples used.", "The first column indicates their number densities.", "Columns 2, 3, 4 and 5 show the satellite fractions in the SAM samples and in the HOD mocks built using the 1-HOD, 2-HOD and 4-HOD models, respectively. We check the accuracy of each HOD model by comparing the HOD mocks with the shuffled-NFW sample via their 2PCFs.", "Fig.", "REF shows the clustering of the shuffled-NFW and the HOD mocks built using the HOD models described in Sec.", "REF .", "These particular models assume a Poisson distribution for the HOD of satellites.", "It can be seen that the three schemes produce accurate clustering predictions on large scales.", "On small scales, the 2-HOD and 4-HOD models produce similarly accurate results while the 1-HOD shows striking differences.", "These deviations come from the overprediction of the number of satellites in the stellar mass selected samples.", "For the SFR selection cases, the difference is due to the notably different occupation function of centrals and satellites in the 1-HOD mock (see Appendix ).", "The inaccuracy of the 1-HOD modelling is also present for the other number density samples, whereas the 2-HOD and 4-HOD models produce similar quality results to the one shown here.", "As the 2-HOD and 4-HOD models are clearly the best, we drop the 1-HOD model henceforth.", "Figure: Clustering of HOD mock catalogues of stellar mass (top) and SFR selected samples (bottom) for a number density of ${\\rm n=10^{-2.5}}\\,h^3\\,{\\rm Mpc^{-3}}$.", "The mocks are built using the 1-HOD (red), 2-HOD (blue) and 4-HOD (green) models, assuming a 
Poisson distribution for the HOD of satellites.", "The clustering of the shuffled catalogue with an NFW profile is shown as the dotted line.", "Subpanels show the ratios of the 2PCF of the mocks with respect to the 2PCF of the shuffled-NFW catalogue." ], [ "Impact of the assumed HOD scatter", "To study the impact of the scatter of the HOD on the clustering, we consider different dispersions for the negative binomial distribution in the construction of HOD mocks.", "Fig.", "REF shows the 2PCF of HOD mocks using different $\\beta $ values.", "For the stellar mass selected samples, the scatter of the HOD does not have a significant impact.", "For the SFR selection, the amplitude of the clustering on small scales is very sensitive to the scatter in the number of satellites.", "We find that increasing $\\beta $ changes the amplitude of the one-halo term.", "When we split the contribution to the clustering from low and high mass haloes, we observe that the scatter mainly impacts the one-halo term of low mass haloes.", "This feature is reproduced in HOD mocks built using both the 2-HOD and 4-HOD methods, indicating that it is particular to the SFR selected samples.", "Figure: 2PCF of HOD mocks built using the 2-HOD (right) and 4-HOD (left) models, for stellar mass (top) and SFR selected samples (bottom).", "The HOD models are used to build mock catalogues assuming a Poisson distribution (solid red) and negative binomial distributions of $\\beta =0.05$ (solid blue) and $\\beta =0.18$ (solid green) for the HOD of satellites.", "The clustering of the shuffled-NFW samples (dotted black) is shown in each panel.", "The ratios between the clustering of HOD mocks and shuffled-NFW are shown in the subpanels. The most accurate clustering reconstructions for the G13 samples are obtained when we use the 2-HOD or 4-HOD to build mock catalogues assuming a negative binomial distribution for the HOD of satellites.", "Note that clustering 
predictions from both models do not show significant differences.", "Fig.", "REF shows the particular results from the 4-HOD modelling for all the number density samples.", "For the G13 SFR selected samples, the 4-HOD (and the 2-HOD) modelling produces the best results when $\\beta =0.05$ , which corresponds to a distribution slightly wider than a Poisson distribution.", "For the case of stellar mass selected samples, the best reproduction is obtained with $\\beta =0.08$ .", "Using instead the Poisson distribution (i.e. $\\beta =0$ ) produces worse results for both selections, particularly in the one-halo regime.", "For SFR selections, when using $\\beta =0.05$ and $\\beta =0$ , the departures from the shuffled-NFW catalogues are below $\\sim 8\\%$ and $\\sim 15\\%$ , respectively.", "It can be seen that the dispersion of the 2PCFs becomes important in the lowest number density sample.", "However, the assumption of the negative binomial distribution still produces better results, especially in the transition from the one- to the two-halo term.", "For the stellar mass selection cases, the impact on clustering when using different $\\beta $ values is much less significant.", "Indeed, the weak relation between clustering and HOD scatter, shown in Fig.", "REF , suggests that it is not necessary to include additional scatter in the construction of HOD mock catalogues for stellar mass selections.", "To compare with the Poisson distribution, we also show the clustering prediction for the stellar mass samples using $\\beta =0.08$ , for which we obtain the best result.", "Figure: Ratios between the 2PCFs of mock catalogues, constructed with the 4-HOD method, and the shuffled-NFW catalogue.", "The HOD of satellites in the HOD mocks follows either the negative binomial (green) or Poisson distributions (cyan), with the color indicating the value of $\\beta $ .", "We show results for stellar mass (top) and SFR selected samples (bottom).", "Number densities increase from left to the 
right as labelled at the top of each column.", "The shaded regions represent jackknife errors calculated using 10 subsamples. Satellites in the G13 SAM are well described by a non-Poisson distribution.", "This is consistent with the HOD of subhaloes found in [9].", "Recipes that build mock catalogues of SFR selected samples using the HOD approach must include an analysis of the scatter of the HOD of satellites, as it impacts the clustering.", "This analysis will provide the best $\\beta $ to construct a HOD mock of a particular sample.", "For stellar mass samples, the HOD scatter has a weak impact on clustering, so the same analysis is not necessary in the context of HOD mock catalogues."
models.", "The 1-HOD uses the HOD of all galaxies making no distinction between centrals and satellites while the 2-HOD uses the HOD of these two components separately.", "The 4-HOD stores additional information about whether or not haloes host a central, and it constructs conditional HODs for satellites taking this information into account.", "Because SAMs include assembly bias by construction, and in their simplest form HOD mocks do not, we remove the assembly bias from the G13 SAM samples by shuffling the galaxy populations among haloes of the same mass, creating the shuffled catalogue.", "This allows us to make a direct comparison between the clustering of our mocks and the SAMs from which we extract the HOD measurements.", "For example, we find that, for the intermediate galaxy density sample in the G13 SAM, the assembly bias affects the 2-halo term of the 2PCF of stellar mass selected galaxies, increasing the amplitude by $\\sim 12\\%$ .", "For the SFR selected galaxies, in contrast, the assembly bias suppresses the clustering by $\\sim 4\\%$ .", "We also impose the standard NFW profile for satellites in the shuffled catalogue, as is done for satellites in the HOD mock catalogues.", "We can then check the accuracy of the HOD models through a comparison between the 2PCFs of the HOD mocks and the shuffled-NFW catalogues.", "The 2-HOD and 4-HOD produce the best mock catalogues as their 2PCFs are in close agreement with the clustering of the shuffled-NFW sample.", "We obtain the best results using a negative binomial distribution for the (conditional) HOD (see Eq.", "1); in previous works this was commonly considered to be a Poisson distribution.", "This is consistent with the subhalo HOD found in [9] using the Millennium-II simulation.", "Furthermore, we found that the assumption of this non-Poissonian HOD changes the galaxy clustering.", "Previously, [31] found a similar result using subhaloes from the Bolshoi and MultiDark N-body simulations, and the SAM 
presented in [30].", "The scatter of the HOD of satellites in G13 is reproduced by a negative binomial distribution up to halo masses of ${\\rm M_h \\lesssim 10^{13.5}M_{\\odot }}h^{-1}$ .", "The galaxies in this halo mass range dominate the amplitude of the 2PCF.", "We quantify the departure from the Poisson distribution with the parameter $\\beta $ (see Eq.", "REF ).", "We obtain the best clustering predictions for SFR selected samples using $\\beta =0.05$ and $\\beta = 0.08$ for stellar mass selected samples.", "These correspond to negative binomial distributions slightly wider than Poisson.", "Because of the specific modelling of different SAMs, we expect that the best $\\beta $ values for each sample are model-dependent.", "For stellar mass selected samples, we find that the HOD scatter has a weak impact on clustering, making this additional parameter unnecessary in the context of mock catalogues.", "The analysis of the HOD of satellites is important because the width of the distribution (determined by the $\\beta $ parameter) has a large impact on the one-halo term of the 2PCF of mock catalogues that emulate SFR-selected and ELG samples.", "If we consider the Poisson distribution for the HOD of satellites ($\\beta =0$ ), the 2PCF of the mock catalogues is underestimated with respect to the clustering of the shuffled-NFW.", "In contrast, using the negative binomial distribution increases the amplitude of clustering in the one-halo regime.", "If we assume a value of $\\beta $ larger than the one present in the distribution of the number of satellites, the clustering on small scales is overestimated.", "We highlight the importance of performing a careful analysis of the satellite HOD if the HOD framework is used to produce mock catalogues, for ELGs or star forming galaxies, following a particular model or observation."
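The role of $\beta$ can be illustrated numerically. The sketch below assumes, as the figure captions above suggest, that $\beta$ is the fractional excess of the occupation scatter over Poisson (e.g. $\beta=0.05$ means a standard deviation $5\%$ larger than $\sqrt{\langle N\rangle}$); the function name is hypothetical and the exact parametrization of Eq. REF may differ.

```python
import numpy as np

def sample_satellite_counts(mean_n, beta, size, rng=None):
    """Draw satellite occupation numbers with mean `mean_n` and standard
    deviation (1 + beta) * sqrt(mean_n), i.e. a scatter `beta` (fractionally)
    wider than Poisson.  beta = 0 reduces to a Poisson draw; beta > 0 uses a
    negative binomial with matching mean and variance (assumes beta >= 0)."""
    rng = np.random.default_rng() if rng is None else rng
    if beta == 0:
        return rng.poisson(mean_n, size)
    var = (1.0 + beta) ** 2 * mean_n      # target variance, always > mean
    # Negative binomial moments: mean = n(1-p)/p, var = n(1-p)/p**2
    p = mean_n / var                       # = 1 / (1 + beta)**2 < 1
    n = mean_n * p / (1.0 - p)
    return rng.negative_binomial(n, p, size)
```

Matching the first two moments fixes the negative binomial uniquely, so a single $\beta$ per sample is enough to specify the satellite HOD scatter used when building a mock.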
], [ "Acknowledgements", "This work was made possible by the efforts of Gerard Lemson and colleagues at the German Astronomical Virtual Observatory in setting up the Millennium Simulation database in Garching.", "EJ, SC, NP & IZ acknowledge the hospitality of the ICC at Durham University.", "EJ acknowledges support from “Centro de Astronomía y Tecnologías Afines” BASAL 170002.", "NP acknowledges support from Fondecyt Regular 1191813.", "IZ acknowledges support by NSF grant AST-1612085.", "This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 734374.", "The calculations for this paper were performed on the Geryon computer at the Center for Astro-Engineering UC, part of the BASAL PFB-06, which received additional funding from QUIMAL 130008 and Fondequip AIC-57 for upgrades." ] ]
1906.04298
[ [ "An Introduction to Combinatorics of Determinacy" ], [ "Abstract This article is an introduction to combinatorics under the axiom of determinacy with a focus on partition properties and infinity Borel codes." ], [ "Introduction", "This article is an introduction to combinatorics under the axiom of determinacy.", "The main topics are partition properties and $\\infty $ -Borel codes.", "To illustrate the important ideas, the article will focus on the simplest settings.", "This will mean that one will work with simple pointclasses such as ${\\mathbf {\\Sigma }_1^1}$ , small cardinals such as $\\omega _1$ or $\\omega _2$ , or very natural models of determinacy such as $L(\\mathbb {R})$ .", "Despite the simplicity, there are still many interesting combinatorial questions in these settings.", "One purpose of this article is to serve as a reference for some of the notation and background that will be helpful for some results of Jackson, Trang, and the author on combinatorics around $\\omega _1$ , $\\omega _2$ , and $\\mathbb {R}$ under $\\mathsf {AD}$ and $\\mathsf {AD}^+$ .", "(Sections 2 to 6 were written for [4] and [7].", "Sections 7 and 8 will be helpful for reading [3] and [5].)", "The article should be accessible with very basic knowledge of descriptive set theory and general set theory.", "Some familiarity with determinacy and games, the pointclass ${\\mathbf {\\Sigma }_1^1}$ , and the bounding principle for ${\\mathbf {\\Sigma }_1^1}$ should suffice for most of the article.", "Knowledge of club sets, ultrafilters, measures, and basic constructibility will also be helpful.", "There will be occasional mentions of the theory of prewellorderings and scales such as the Moschovakis coding lemma or the Kunen-Martin theorem.", "In a few places, one will refer to the result of Kechris that $\\mathsf {AD}$ implies $L(\\mathbb {R}) \\models \\mathsf {DC}_\\mathbb {R}$ .", "The last two sections will require more familiarity with topics in general set theory such as 
$\\mathrm {HOD}$ and iterated forcing.", "The article has essentially two parts.", "The first part deals with various partition properties which are very powerful tools of $\\mathsf {AD}$ .", "The second part introduces the $\\infty $ -Borel code and the Vopénka forcing which are useful $\\mathsf {AD}^+$ tools.", "Except possibly for a few minor observations, no results in this article are due to the author.", "Section 2 introduces the partition property in its various forms.", "It also defines the associated measures induced by the partition properties.", "Section 3 develops the notation and theory around good codings of functions.", "This section proves Martin's criterion for establishing partition properties.", "Section 4 gives some examples of good codings of functions on $\\omega _1$ .", "This is used to give a simple proof of the weak partition property on $\\omega _1$ and two proofs of the strong partition property on $\\omega _1$ .", "One will give Martin's original proof of the strong partition property using sharps.", "Then one gives a proof due to Kechris of the strong partition property on $\\omega _1$ which uses category arguments and the simple generic coding function for $\\omega _1$ .", "Section 5 develops the theory around the Kunen function, which is named after the eponymous Kunen tree.", "Here it will be shown that every function from $\\omega _1$ to $\\omega _1$ has a Kunen function by using the original Kunen tree.", "The Kunen functions are especially useful for choosing representatives of certain ultrapowers.", "This will be used to establish the identity of $\\omega _2$ as the ultrapower ${}^{\\omega _1}\\omega _1 \\slash \\mu $ , where $\\mu $ is the club ultrafilter on $\\omega _1$ .", "Then one will prove a useful sliding lemma and use this to establish the weak partition property on $\\omega _2$ .", "Section 6 is dedicated to proving a result of Martin and Paris which shows that $\\omega _2$ is not a strong partition cardinal.", "However, 
this section presents a proof of Jackson which produces an explicit partition without a homogeneous subset.", "Section 7 proves some results of Woodin about the structure of $L(\\mathbb {R})$ and the nature of $\\infty $ -Borel codes in $L(\\mathbb {R})$ .", "In particular, it will be shown that in $L(\\mathbb {R})$ , there is an “ultimate” $\\infty $ -Borel code that can be used to generate all other $\\infty $ -Borel codes for sets of reals.", "This is quite useful for cardinality results in $L(\\mathbb {R})$ .", "Section 8 will present some further descriptive set theoretic applications due to Woodin of the Vopénka forcing.", "The countable section uniformization and Woodin's perfect set dichotomy theorem will be proved here.", "The author would like to thank Stephen Jackson for the numerous conversations concerning the topics that appear here.", "The author would also like to thank Thilo Wienert for carefully reading this article and suggesting many comments." ], [ "Partition Properties", "The following is the usual partition property: Definition 2.1 Let $\\kappa $ be an ordinal, $\\lambda \\le \\kappa $ , and $\\gamma < \\kappa $ .", "Then $\\kappa \\rightarrow (\\kappa )^\\lambda _\\gamma $ indicates that for every $P : [\\kappa ]^\\lambda \\rightarrow \\gamma $ , there is some $\\beta < \\gamma $ and some $A \\subseteq \\kappa $ with $|A| = \\kappa $ so that for all $f \\in [A]^\\lambda $ , $P(f) = \\beta $ .", "The most frequent situation is when $\\gamma = 2$ .", "$\\kappa $ is a strong partition cardinal if and only if $\\kappa \\rightarrow (\\kappa )^\\kappa _2$ .", "$\\kappa $ is a weak partition cardinal if and only if $\\kappa \\rightarrow (\\kappa )^\\alpha _2$ for all $\\alpha < \\kappa $ .", "Definition 2.2 Let $\\kappa $ be an ordinal.", "Let $S : \\kappa \\rightarrow \\kappa $ .", "Let $Y^S = \\lbrace (\\alpha ,\\beta ) : \\alpha < \\kappa \\wedge \\beta < S(\\alpha )\\rbrace $ .", "Let $f : \\kappa \\rightarrow \\kappa $ .", "$f$ is said to 
have uniform cofinality $S$ if and only if there is a function $g : Y^S \\rightarrow \\kappa $ with the following properties (1) For all $\\alpha < \\kappa $ and all $\\beta < \\gamma < S(\\alpha )$ , $g(\\alpha ,\\beta ) < g(\\alpha ,\\gamma )$ (2) For all $\\alpha < \\kappa $ , $f(\\alpha ) = \\sup \\lbrace g(\\alpha ,\\beta ) : \\beta < S(\\alpha )\\rbrace $ .", "Let $\\mu $ be a measure on $\\kappa $ .", "$f : \\kappa \\rightarrow \\kappa $ is said to have uniform cofinality $S$ $\\mu $ -almost everywhere if and only if there is a $g : Y^S \\rightarrow \\kappa $ as above so that for $\\mu $ -almost all $\\alpha < \\kappa $ , $f(\\alpha ) = \\sup \\lbrace g(\\alpha ,\\beta ) : \\beta < S(\\alpha )\\rbrace $ .", "A function $f : \\kappa \\rightarrow \\kappa $ has uniform cofinality $\\omega $ if and only if $f$ has uniform cofinality $S$ where $S(\\alpha ) = \\omega $ for all $\\alpha < \\kappa $ .", "Definition 2.3 Let $f : \\kappa \\rightarrow \\kappa $ be a function.", "$f$ is discontinuous everywhere if and only if for all $\\alpha < \\kappa $ , $f(\\alpha ) > \\sup \\lbrace f(\\beta ) : \\beta < \\alpha \\rbrace $ .", "(Jackson) A function $f : \\kappa \\rightarrow \\kappa $ has the correct type if and only if $f$ has uniform cofinality $\\omega $ and is discontinuous everywhere.", "Let $[\\kappa ]^\\kappa _*$ denote the subset of $[\\kappa ]^{\\kappa }$ consisting of functions of the correct type.", "There are partition properties formulated for functions of the correct type in which the homogeneous set can be chosen to be club.", "Club homogeneous sets are conceptually useful in various constructions.", "Functions of the correct type seem to naturally appear in many proofs establishing the partition properties under $\\mathsf {AD}$ .", "It should be noted that throughout the survey an asterisk, $*$ , will be used to denote the corresponding concept that involves functions of the correct type when there is an ordinary version of this concept.", "Definition 
2.4 Let $\\kappa $ be an ordinal.", "Let $\\lambda \\le \\kappa $ and $\\gamma < \\kappa $ .", "Let $\\kappa \\rightarrow _* (\\kappa )^\\lambda _\\gamma $ assert that for all $P : [\\kappa ]^{\\lambda }_* \\rightarrow \\gamma $ , there exists a club $C \\subseteq \\kappa $ and a $\\beta < \\gamma $ so that for all $f \\in [C]^\\lambda _*$ , $P(f) = \\beta $ .", "Such a club $C$ is said to be homogeneous for $P$ taking value $\\beta $ (for functions of the correct type).", "Definition 2.5 Let $\\kappa $ be an ordinal.", "If $A \\subseteq \\kappa $ is such that $|A| = \\kappa $ , then let $\\mathsf {enum}_A : \\kappa \\rightarrow A$ denote the increasing enumeration of $A$ .", "(In context, when one writes $\\mathsf {enum}_A$ , it should be clear what $\\kappa $ is.)", "Fact 2.6 Let $\\kappa $ be an ordinal and $\\lambda \\le \\kappa $ .", "Then $\\kappa \\rightarrow _* (\\kappa )^\\lambda _2$ implies $\\kappa \\rightarrow (\\kappa )^\\lambda _2$ .", "$\\kappa \\rightarrow (\\kappa )^{\\omega \\cdot \\lambda }_2$ implies $\\kappa \\rightarrow _* (\\kappa )^\\lambda _2$ .", "Assume $\\kappa \\rightarrow _* (\\kappa )^\\lambda _2$ .", "Let $P : [\\kappa ]^\\lambda \\rightarrow 2$ .", "Then there is some club $C \\subseteq \\kappa $ so that $C$ is homogeneous for $P$ for functions of the correct type.", "Let $D = \\lbrace \\alpha < \\kappa : (\\exists \\beta \\in \\mathrm {Lim})(\\alpha = \\mathsf {enum}_C(\\beta + \\omega ))\\rbrace $ , where $\\mathrm {Lim}$ refers to the class of limit ordinals.", "Note $|D| = \\kappa $ .", "The next claim is that every $f \\in [D]^\\lambda $ is a function of the correct type.", "Let $g : \\lambda \\times \\omega \\rightarrow \\kappa $ be defined as follows: Suppose $\\gamma < \\lambda $ .", "Let $\\beta _\\gamma $ be the unique limit ordinal so that $f(\\gamma ) = \\mathsf {enum}_C(\\beta _\\gamma + \\omega )$ .", "Then for each $n \\in \\omega $ , define $g(\\gamma ,n) = \\mathsf {enum}_C(\\beta _\\gamma + n)$ .", "Then 
it is clear that $f(\\gamma ) = \\sup \\lbrace g(\\gamma , n) : n \\in \\omega \\rbrace $ .", "This shows that $f$ has uniform cofinality $\\omega $ .", "Suppose $\\gamma < \\lambda $ .", "There is a unique limit ordinal $\\beta _\\gamma $ so that $f(\\gamma ) = \\mathsf {enum}_C(\\beta _\\gamma + \\omega )$ .", "Then for all $\\epsilon < \\gamma $ , $f(\\epsilon ) \\le \\mathsf {enum}_C(\\beta _\\gamma ) < f(\\gamma )$ .", "That is, $\\sup \\lbrace f(\\epsilon ) : \\epsilon < \\gamma \\rbrace \\le \\mathsf {enum}_C(\\beta _\\gamma ) < f(\\gamma )$ .", "Thus $f$ is discontinuous everywhere.", "Thus every $f \\in [D]^{\\lambda }$ belongs to $[C]^\\lambda _*$ .", "Since $C$ is homogeneous for $P$ for functions of the correct type, $D$ is homogeneous for $P$ in the ordinary sense.", "This establishes $\\kappa \\rightarrow (\\kappa )^\\lambda _2$ .", "Now suppose $\\kappa \\rightarrow (\\kappa )^{\\omega \\cdot \\lambda }_2$ .", "Suppose $P : [\\kappa ]^\\lambda \\rightarrow 2$ .", "Let $\\mathsf {block}: [\\kappa ]^{\\omega \\cdot \\lambda } \\rightarrow [\\kappa ]^\\lambda $ be defined by $\\mathsf {block}(f)(\\gamma ) = \\sup \\lbrace f(\\omega \\cdot \\gamma + n) : n \\in \\omega \\rbrace $ .", "Define $P^{\\prime } : [\\kappa ]^{\\omega \\cdot \\lambda } \\rightarrow 2$ by $P^{\\prime }(f) = P(\\mathsf {block}(f))$ .", "Let $A \\subseteq \\kappa $ be such that $|A| = \\kappa $ and $A$ is homogeneous for $P^{\\prime }$ in the ordinary sense.", "Without loss of generality, suppose $P^{\\prime }(h) = 0$ for all $h \\in [A]^{\\omega \\cdot \\lambda }$ .", "Let $C$ be the collection of limit points of $A$ .", "That is, $C = \\lbrace \\alpha \\in \\kappa : \\alpha = \\sup (A \\cap \\alpha )\\rbrace $ .", "$C$ is a club subset of $\\kappa $ .", "Suppose $f \\in [C]^{\\lambda }_*$ .", "Since $f$ is of the correct type, let $g : \\lambda \\times \\omega \\rightarrow \\kappa $ witness that it has uniform cofinality $\\omega $ .", "Let $\\gamma < \\lambda $ .", "Since $f$ is discontinuous 
everywhere, $\\sup \\lbrace f(\\alpha ) : \\alpha < \\gamma \\rbrace < f(\\gamma )$ .", "Therefore there is some $N_\\gamma \\in \\omega $ so that for all $n \\ge N_\\gamma $ , $g(\\gamma , n) > \\sup \\lbrace f(\\alpha ) : \\alpha < \\gamma \\rbrace $ .", "Since $f(\\gamma )$ is a limit point of $A$ , define by recursion $h(\\omega \\cdot \\gamma + n)$ to be the least element of $A$ greater than $\\max \\lbrace g(\\gamma , N_\\gamma + n), h(\\omega \\cdot \\gamma + k) : k < n\\rbrace .$ Then $h \\in [A]^{\\omega \\cdot \\lambda }$ and $\\mathsf {block}(h) = f$ .", "Thus $P(f) = P^{\\prime }(h) = 0$ .", "It has been shown that for all $f \\in [C]^{\\lambda }_*$ , $P(f) = 0$ .", "$\\kappa \\rightarrow _* (\\kappa )^\\lambda _2$ has been established.", "Partition properties on a cardinal $\\kappa $ yield other very interesting properties on $\\kappa $ .", "Much of the material of the remainder of this section can be found in [20].", "Fact 2.7 $\\kappa \\rightarrow (\\kappa )^2_2$ implies that $\\kappa $ is regular.", "Suppose $\\eta < \\kappa $ and $h : \\eta \\rightarrow \\kappa $ is a cofinal function.", "Define $P : [\\kappa ]^2 \\rightarrow 2$ by $P(\\alpha ,\\beta ) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad (\\exists \\gamma < \\eta )(\\alpha \\le h(\\gamma ) \\le \\beta ) \\\\1 & \\quad \\text{otherwise}\\end{array}\\right.", "}.$ Let $A \\subseteq \\kappa $ with $|A| = \\kappa $ be homogeneous for $P$ .", "If $A$ is homogeneous for $P$ taking value 0, then one can show that $\\mathrm {ot}(\\mathrm {rang}(h)) = \\kappa $ , which is a contradiction.", "If $A$ is homogeneous for 1, then one can show that $\\mathrm {rang}(h) \\subseteq \\min A$ , which contradicts the assumption that $h$ is cofinal.", "In either case there is a contradiction, so $\\kappa $ is regular.", "Definition 2.8 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "A set $C \\subseteq \\kappa $ is $\\eta $ -closed if and only if for all $f : \\eta \\rightarrow C$ increasing, $\\sup (f) \\in C$ .", "$C \\subseteq \\kappa 
$ is an $\\eta $ -club if and only if $C$ is $\\eta $ -closed and unbounded.", "Let $W^\\kappa _\\eta $ denote the filter of sets containing an $\\eta $ -club as a subset.", "Fact 2.9 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "Let $\\delta < \\kappa $ and $\\langle C_\\alpha : \\alpha < \\delta \\rangle $ be a sequence of $\\eta $ -clubs.", "Then $\\bigcap _{\\alpha \\in \\delta } C_\\alpha $ is an $\\eta $ -club.", "Clearly $\\bigcap _{\\gamma < \\delta } C_\\gamma $ is $\\eta $ -closed.", "One needs to show that $\\bigcap _{\\gamma <\\delta } C_\\gamma $ is unbounded.", "Fix $\\epsilon < \\kappa $ .", "Ordinals $\\beta _\\alpha ^\\gamma $ for $\\alpha < \\eta $ and $\\gamma < \\delta $ are defined by recursion on the lexicographic ordering of the pairs $(\\alpha , \\gamma )$ .", "Let $\\beta _0^0$ be the least element of $C_0$ greater than $\\epsilon $ .", "Suppose $\\alpha < \\eta $ , $0 < \\gamma < \\delta $ , and $\\beta _\\alpha ^\\xi $ has been defined for all $\\xi < \\gamma $ .", "Note $\\sup \\lbrace \\beta ^\\xi _\\alpha : \\xi < \\gamma \\rbrace < \\kappa $ by the regularity of $\\kappa $ .", "Let $\\beta _\\alpha ^\\gamma $ be the least element of $C_\\gamma $ which is larger than $\\sup \\lbrace \\beta ^\\xi _\\alpha : \\xi < \\gamma \\rbrace $ .", "Suppose $0 < \\alpha < \\eta $ and $\\beta ^{\\gamma }_\\nu $ has been defined for all $\\nu < \\alpha $ and $\\gamma < \\delta $ .", "By the regularity of $\\kappa $ , $\\sup \\lbrace \\beta ^\\gamma _\\nu : \\nu < \\alpha \\wedge \\gamma < \\delta \\rbrace < \\kappa $ .", "Let $\\beta _\\alpha ^0$ be the least element of $C_0$ greater than $\\sup \\lbrace \\beta ^\\gamma _\\nu : \\nu < \\alpha \\wedge \\gamma < \\delta \\rbrace $ .", "Note that for all $\\gamma _0,\\gamma _1 \\in \\delta $ , $\\sup \\lbrace \\beta _\\alpha ^{\\gamma _0} : \\alpha < \\eta \\rbrace = \\sup \\lbrace \\beta _\\alpha ^{\\gamma _1} : \\alpha < \\eta \\rbrace $ .", "Let $\\lambda $ denote this common value.", "$\\lambda \\in C_\\gamma $ for all $\\gamma < \\delta $ since $C_\\gamma $ is an $\\eta $ -club.", "Thus $\\lambda \\in \\bigcap _{\\gamma < \\delta } C_\\gamma 
$ and $\\lambda > \\epsilon $ .", "This shows that $\\bigcap _{\\gamma < \\delta } C_\\gamma $ is unbounded.", "In $\\mathsf {ZF}$ , Fact REF does not imply that $W^\\kappa _\\eta $ is $\\kappa $ -complete.", "Suppose $\\langle A_\\alpha : \\alpha < \\delta \\rangle $ where $\\delta < \\kappa $ is a sequence in $W^\\kappa _\\eta $ .", "For each $A_\\alpha $ , there is an $\\eta $ -club $C \\subseteq A_\\alpha $ .", "To apply Fact REF , one would need to produce a sequence of $\\eta $ -clubs, $\\langle C_\\alpha : \\alpha < \\delta \\rangle $ , so that $C_\\alpha \\subseteq A_\\alpha $ for each $\\alpha < \\delta $ .", "This appears to require some choice principle.", "It will be shown next that appropriate partition properties imply that $W^\\kappa _\\eta $ is an ultrafilter, is $\\kappa $ -complete, and is normal.", "Fact 2.10 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "Let $A \\subseteq \\kappa $ be an unbounded set.", "Let $\\mathrm {Lim}^\\eta (A) = \\lbrace \\sup (f) : f \\in [A]^\\eta \\rbrace $ .", "That is, $\\mathrm {Lim}^\\eta (A)$ is the collection of all $\\alpha \\in \\kappa $ which are the supremum of an $\\eta $ -increasing sequence through $A$ .", "Then $\\mathrm {Lim}^\\eta (A)$ is an $\\eta $ -club.", "$\\mathrm {Lim}^\\eta (A)$ is clearly unbounded since $A$ is unbounded.", "Suppose $f : \\eta \\rightarrow \\mathrm {Lim}^\\eta (A)$ is an increasing function.", "For each $\\alpha < \\eta $ , let $g(\\alpha )$ be the least element $\\gamma \\in A$ so that $f(\\alpha ) < \\gamma < f(\\alpha + 1)$ , which exists since $f(\\alpha + 1) = \\sup (h)$ for some $h \\in [A]^\\eta $ .", "Thus $\\sup (f) = \\sup (g)$ and $g \\in [A]^\\eta $ .", "Thus $\\sup (f) \\in \\mathrm {Lim}^\\eta (A)$ .", "$\\mathrm {Lim}^\\eta (A)$ is an $\\eta $ -club.", "Fact 2.11 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "Suppose $\\kappa \\rightarrow (\\kappa )^\\eta _2$ .", "Then $W^\\kappa _\\eta $ is 
an ultrafilter.", "Let $A \\subseteq \\kappa $ .", "Define the partition $P : [\\kappa ]^\\eta \\rightarrow 2$ by $P(f) = 1 \\Leftrightarrow \\sup (f) \\in A$ .", "By $\\kappa \\rightarrow (\\kappa )^\\eta _2$ , let $B \\subseteq \\kappa $ with $|B| = \\kappa $ be homogeneous for this partition.", "Without loss of generality, suppose $B$ is homogeneous taking value 1.", "$\\mathrm {Lim}^\\eta (B)$ is an $\\eta $ -club by Fact REF .", "Let $\\alpha \\in \\mathrm {Lim}^\\eta (B)$ .", "Let $f \\in [B]^\\eta $ be such that $\\sup (f) = \\alpha $ .", "Then $P(f) = 1$ implies that $\\alpha = \\sup (f) \\in A$ .", "This shows that $\\mathrm {Lim}^\\eta (B) \\subseteq A$ which implies $A \\in W^\\kappa _\\eta $ .", "If $B$ was homogeneous for $P$ taking value 0, then the same argument would have shown $\\kappa \\setminus A \\in W^\\kappa _\\eta $ .", "Fact 2.12 Assume $\\kappa $ is a regular cardinal and $\\eta < \\kappa $ is a limit ordinal.", "Let $\\lambda < \\kappa $ be infinite.", "If $\\kappa \\rightarrow (\\kappa )^\\eta _\\lambda $ holds, then $W^\\kappa _\\eta $ is $\\lambda ^+$ -complete.", "Let $\\langle A_\\alpha : \\alpha < \\lambda \\rangle $ be a sequence in $W^\\kappa _\\eta $ .", "Suppose $\\bigcap _{\\alpha < \\lambda } A_\\alpha \\notin W^\\kappa _\\eta $ .", "Since $W^\\kappa _\\eta $ is an ultrafilter by Fact REF , one may assume that $\\bigcap _{\\alpha < \\lambda }A_\\alpha = \\emptyset $ by adding one further set to the sequence.", "Let $P : [\\kappa ]^\\eta \\rightarrow \\lambda $ be defined by $P(f)$ is the least $\\alpha < \\lambda $ so that $\\sup (f) \\notin A_\\alpha $ .", "By $\\kappa \\rightarrow (\\kappa )^\\eta _\\lambda $ , there is some $\\alpha ^*$ and a $B \\subseteq \\kappa $ so that $|B| = \\kappa $ and $P(f) = \\alpha ^*$ for all $f \\in [B]^\\eta $ .", "By Fact REF , $\\mathrm {Lim}^\\eta (B) \\in W^\\kappa _\\eta $ .", "However, $\\mathrm {Lim}^\\eta (B) \\cap A_{\\alpha ^*} = \\emptyset $ .", "This contradicts $A_{\\alpha ^*} 
\\in W^\\kappa _\\eta $ .", "Fact 2.13 Assume $\\kappa $ is a regular cardinal and $\\eta < \\kappa $ is a limit ordinal.", "$\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ implies $\\kappa \\rightarrow (\\kappa )^\\eta _\\lambda $ for all $\\lambda < \\kappa $ .", "Therefore, $\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ implies $W^\\kappa _\\eta $ is a $\\kappa $ -complete measure on $\\kappa $ .", "If $\\kappa $ has the weak partition property, then $W^\\kappa _\\eta $ is a $\\kappa $ -complete measure on $\\kappa $ for all limit ordinals $\\eta < \\kappa $ .", "Let $P : [\\kappa ]^\\eta \\rightarrow \\lambda $ .", "Define $Q : [\\kappa ]^{\\eta + \\eta } \\rightarrow 2$ by $Q(f_0\\hat{\\ }f_1) = 0$ if and only if $P(f_0) = P(f_1)$ .", "By $\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ , let $A \\subseteq \\kappa $ with $|A| = \\kappa $ be homogeneous for $Q$ .", "Suppose $A$ is homogeneous for $Q$ taking value 1.", "Let $Q^{\\prime } : [A]^{\\eta + \\eta } \\rightarrow 2$ be defined by $Q^{\\prime }(f_0\\hat{\\ }f_1) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad P(f_0) < P(f_1) \\\\1 & \\quad P(f_0) > P(f_1)\\end{array}\\right.", "}$ Again by $\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ , there is a $B \\subseteq A$ with $|B| = \\kappa $ which is homogeneous for $Q^{\\prime }$ .", "One can check that if $B$ is homogeneous for $Q^{\\prime }$ taking value 1, then one would have an infinite descending sequence of ordinals.", "If $B$ is homogeneous for $Q^{\\prime }$ taking value 0, then one can produce an injection of $\\kappa $ into $\\lambda $ , which is impossible since $\\lambda < \\kappa $ .", "$B$ cannot be homogeneous.", "Contradiction.", "Therefore, $A$ must have been homogeneous for $Q$ taking value 0.", "Now the claim is that $A$ is homogeneous for $P$ : Let $f_0,f_1 \\in [A]^\\eta $ .", "Find $f_2 \\in [A]^\\eta $ such that $\\min (f_2)$ is larger than both $\\sup (f_0)$ and $\\sup (f_1)$ .", "Then $f_0\\hat{\\ }f_2 \\in [A]^{\\eta + \\eta }$ and $f_1 \\hat{\\ } 
f_2 \\in [A]^{\\eta + \\eta }$ .", "Since $A$ is homogeneous for $Q$ taking value 0, $Q(f_0\\hat{\\ } f_2) = 0$ and $Q(f_1\\hat{\\ }f_2) = 0$ .", "This implies that $P(f_0) = P(f_2) = P(f_1)$ .", "Thus $A$ is homogeneous for $P$ .", "Fact 2.14 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "Suppose $\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ .", "Then $W^\\kappa _\\eta $ is a normal $\\kappa $ -complete ultrafilter on $\\kappa $ .", "If $\\kappa $ has the weak partition property, then $W^\\kappa _\\eta $ is a normal $\\kappa $ -complete measure on $\\kappa $ for each limit ordinal $\\eta < \\kappa $ .", "By Fact REF , $W^\\kappa _\\eta $ is a $\\kappa $ -complete ultrafilter.", "Let $F : \\kappa \\rightarrow \\kappa $ be a function which is regressive on a set in $W^\\kappa _\\eta $ .", "That is, $A^{\\prime } = \\lbrace \\alpha \\in \\kappa : F(\\alpha ) < \\alpha \\rbrace \\in W^\\kappa _\\eta $ .", "Let $A \\subseteq A^{\\prime }$ be an $\\eta $ -club set.", "Define $P : [\\kappa ]^\\eta \\rightarrow 2$ by $P(f) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad F(\\sup (f)) < \\min f \\\\1 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Since $\\kappa \\rightarrow (\\kappa )^{\\eta + \\eta }_2$ implies $\\kappa \\rightarrow (\\kappa )^\\eta _2$ , let $B \\subseteq A$ with $|B| = \\kappa $ be homogeneous for $P$ .", "Let $\\tilde{B} = \\lbrace \\mathsf {enum}_B(\\eta \\cdot \\alpha + \\eta ) : \\alpha < \\kappa \\rbrace $ , where $\\mathsf {enum}_B : \\kappa \\rightarrow \\kappa $ is the increasing enumeration of $B$ .", "Suppose $f \\in [\\tilde{B}]^\\eta $ .", "Since $\\tilde{B} \\subseteq B \\subseteq A$ and $A$ is $\\eta $ -closed, $\\sup (f) \\in A \\subseteq A^{\\prime }$ .", "Therefore, $F(\\sup (f)) < \\sup (f)$ .", "Let $\\gamma < \\eta $ be least so that $F(\\sup (f)) < f(\\gamma )$ .", "Since $f(\\gamma + 1) \\in \\tilde{B}$ , let $f_0 : \\gamma + 1 \\rightarrow B$ be increasing such that for all $\\nu < \\gamma + 1$ , $f(\\gamma ) < f_0(\\nu ) < f(\\gamma + 1)$ .", "Let $f_1 : 
(\\eta - (\\gamma + 1)) \\rightarrow \\tilde{B}$ be the tail of $f$ above $f(\\gamma )$ .", "Note that $f_2 = f_0\\hat{\\ }f_1$ is an $\\eta $ -sequence in $B$ with the property that $\\min (f_2) > f(\\gamma ) > F(\\sup (f))$ and $\\sup (f_2) = \\sup (f)$ .", "Thus $F(\\sup (f_2)) = F(\\sup (f)) < \\min (f_2)$ .", "Since $f_2 \\in [B]^\\eta $ , one has that $B$ must be homogeneous for $P$ taking value 0.", "Let $f \\in [B]^\\eta $ .", "Let $f^{\\prime } : \\eta \\rightarrow B$ be the increasing enumeration of $\\lbrace \\min B\\rbrace \\cup \\mathrm {rang}(f)$ .", "Thus $P(f^{\\prime }) = 0$ implies that $F(\\sup (f)) = F(\\sup (f^{\\prime })) < \\min (f^{\\prime }) = \\min (B)$ .", "It has been shown that for all $f \\in [B]^\\eta $ , $F(\\sup (f)) < \\min (B)$ .", "Thus for all $\\alpha \\in \\mathrm {Lim}^\\eta (B)$ , $F(\\alpha ) < \\min (B)$ .", "Since $\\mathrm {Lim}^\\eta (B) \\in W^\\kappa _\\eta $ and $W^\\kappa _\\eta $ is $\\kappa $ -complete, there is some $\\xi < \\min (B)$ and $C \\in W^\\kappa _\\eta $ so that $F[C] = \\lbrace \\xi \\rbrace $ .", "It has been shown that $F$ is constant $W^\\kappa _\\eta $ -almost everywhere.", "Normality has been established.", "Under suitable circumstances on $\\kappa $ , one can determine all the normal $\\kappa $ -complete measures on $\\kappa $ .", "Fact 2.15 Let $\\kappa $ be a regular cardinal and $\\eta < \\kappa $ be a limit ordinal.", "Assume $\\kappa \\rightarrow (\\kappa )^{\\eta }_2$ .", "Then $W^\\kappa _\\eta = W^\\kappa _{\\mathrm {cof}(\\eta )}$ .", "Suppose that $\\eta _0 < \\eta _1$ are two infinite regular cardinals less than $\\kappa $ .", "Then $W^\\kappa _{\\eta _0} \\ne W^\\kappa _{\\eta _1}$ .", "Suppose the collection of infinite regular cardinals below $\\kappa $ has cardinality less than $\\kappa $ .", "Let $\\mu $ be any $\\kappa $ -complete normal measure on $\\kappa $ .", "There is some infinite regular cardinal $\\eta < \\kappa $ so that $\\mu $ is equivalent to $W^\\kappa _\\eta $ .", "The partition 
property $\\kappa \\rightarrow (\\kappa )^{\\eta }_2$ implies that $W^\\kappa _{\\eta }$ and $W^\\kappa _{\\mathrm {cof}(\\eta )}$ are both ultrafilters.", "Suppose $A \\subseteq \\kappa $ is a $\\mathrm {cof}(\\eta )$ -club.", "Let $f \\in [A]^\\eta $ .", "Let $\\rho : \\mathrm {cof}(\\eta ) \\rightarrow \\eta $ be a cofinal increasing sequence.", "Note that $f \\circ \\rho \\in [A]^{\\mathrm {cof}(\\eta )}$ .", "Since $A$ is a $\\mathrm {cof}(\\eta )$ -club, $\\sup (f \\circ \\rho ) = \\sup (f) \\in A$ .", "Thus $A$ is also an $\\eta $ -club.", "This shows that $W^\\kappa _{\\mathrm {cof}(\\eta )} \\subseteq W^\\kappa _\\eta $ .", "Suppose that $\\lnot (W^\\kappa _\\eta \\subseteq W^\\kappa _{\\mathrm {cof}(\\eta )})$ .", "Let $A \\in W^\\kappa _\\eta $ be such that $A \\notin W^\\kappa _{\\mathrm {cof}(\\eta )}$ .", "Since $W^\\kappa _{\\mathrm {cof}(\\eta )}$ is an ultrafilter, $\\kappa \\setminus A \\in W^\\kappa _{\\mathrm {cof}(\\eta )}$ .", "It has already been shown that $W^\\kappa _{\\mathrm {cof}(\\eta )} \\subseteq W^\\kappa _\\eta $ .", "Therefore, $\\kappa \\setminus A \\in W^\\kappa _\\eta $ .", "However, $A,\\kappa \\setminus A \\in W^\\kappa _\\eta $ contradicts the fact that $W^\\kappa _\\eta $ is an ultrafilter.", "It has been established that $W^\\kappa _\\eta = W^\\kappa _{\\mathrm {cof}(\\eta )}$ .", "Now suppose $\\eta _0 < \\eta _1$ are two regular cardinals less than $\\kappa $ .", "Let $A_i = \\lbrace \\alpha < \\kappa : \\mathrm {cof}(\\alpha ) = \\eta _i\\rbrace $ .", "For $i \\in 2$ , $A_i$ is an $\\eta _i$ -club.", "Therefore, $A_i \\in W^\\kappa _{\\eta _i}$ .", "If $W^\\kappa _{\\eta _0} = W^\\kappa _{\\eta _1}$ , then $A_0 \\cap A_1 = \\emptyset \\in W^\\kappa _{\\eta _1}$ .", "This contradicts $W^\\kappa _{\\eta _1}$ is a filter.", "Now suppose the collection $K$ of infinite regular cardinals below $\\kappa $ has cardinality less than $\\kappa $ .", "Let $\\mu $ be a $\\kappa $ -complete normal measure on $\\kappa $ .", "Let $T$ be 
the set of limit ordinals below $\\kappa $ .", "Since $\\kappa $ and $T$ are in bijection, $\\mu $ is equivalent to a measure concentrating on $T$ .", "Therefore, assume for simplicity that $T \\in \\mu $ .", "For each $\\eta \\in K$ , let $A_\\eta = \\lbrace \\alpha \\in \\kappa : \\mathrm {cof}(\\alpha ) = \\eta \\rbrace $ .", "$T = \\bigcup _{\\eta \\in K} A_\\eta $ .", "Since $\\mu $ is $\\kappa $ -complete and $|K| < \\kappa $ , there is some $\\eta _0$ so that $A_{\\eta _0} \\in \\mu $ .", "The claim is that $\\mu = W^\\kappa _{\\eta _0}$ : Let $B \\in W^\\kappa _{\\eta _0}$ .", "Let $C \\subseteq B$ be an $\\eta _0$ -club set.", "Suppose $B \\notin \\mu $ .", "Then $\\kappa \\setminus B \\in \\mu $ .", "Let $Q = A_{\\eta _0} \\cap (\\kappa \\setminus B)$ .", "Note $Q \\in \\mu $ .", "Let $\\alpha \\in Q$ .", "Note that $\\mathrm {cof}(\\alpha ) = \\eta _0$ and $\\alpha \\notin B$ so $\\alpha \\notin C$ .", "$\\sup (C \\cap \\alpha ) < \\alpha $ , since otherwise $\\alpha $ would be the supremum of an $\\eta _0$ -increasing sequence through $C$ , and then $\\alpha \\in C$ because $\\mathrm {cof}(\\alpha ) = \\eta _0$ and $C$ is an $\\eta _0$ -club.", "Let $G : Q \\rightarrow \\kappa $ be defined by $G(\\alpha ) = \\sup (C \\cap \\alpha )$ .", "$G$ is a regressive function on $Q$ .", "Since $\\mu $ is normal, there is some $\\gamma < \\kappa $ so that $J = \\lbrace \\alpha : G(\\alpha ) = \\gamma \\rbrace \\in \\mu $ .", "For any $\\alpha \\in J$ , $C \\cap \\alpha \\subseteq \\gamma + 1$ .", "Since $J \\in \\mu $ implies that $J$ is unbounded, this implies that $C \\subseteq \\gamma + 1$ .", "This is impossible since $C$ is unbounded."
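To keep track of which partition hypotheses were used above, the following is a schematic recap of Facts 2.11 through 2.15 for a regular cardinal $\kappa$ and a limit ordinal $\eta < \kappa$; it is only a summary sketch of the conclusions already established and makes no new claim.

```latex
% Schematic recap (summary only) of Facts 2.11--2.15:
% kappa is a regular cardinal, eta < kappa is a limit ordinal.
\begin{align*}
\kappa \rightarrow (\kappa)^{\eta}_{2}
  &\implies W^{\kappa}_{\eta} \text{ is an ultrafilter and }
     W^{\kappa}_{\eta} = W^{\kappa}_{\mathrm{cof}(\eta)},\\
\kappa \rightarrow (\kappa)^{\eta+\eta}_{2}
  &\implies \kappa \rightarrow (\kappa)^{\eta}_{\lambda}
     \text{ for all } \lambda < \kappa\\
  &\implies W^{\kappa}_{\eta} \text{ is a normal }
     \kappa\text{-complete ultrafilter on } \kappa.
\end{align*}
```

In particular, if $\kappa$ has the weak partition property, the second chain applies simultaneously to every limit ordinal $\eta < \kappa$, as noted in Facts 2.13 and 2.14.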
], [ "Good Coding of Functions", "Definition 3.1 Let $\\kappa $ be a regular cardinal and $\\lambda \\le \\kappa $ be an ordinal.", "A good coding system for ${}^\\lambda \\kappa $ consists of $\\Gamma $ , $\\mathsf {decode}$ , and $\\mathsf {GC}_{\\beta ,\\gamma }$ for each $\\beta < \\lambda $ and $\\gamma < \\kappa $ with the following properties: (1) $\\Gamma $ is a pointclass closed under continuous substitution and $\\exists ^\\mathbb {R}$ .", "Let $\\check{\\Gamma }$ denote the dual pointclass.", "Let $\\Delta = \\Gamma \\cap \\check{\\Gamma }$ .", "(2) $\\mathsf {decode}: \\mathbb {R}\\rightarrow \\mathcal {P}(\\lambda \\times \\kappa )$ .", "For all $f \\in {}^\\lambda \\kappa $ , there is some $x \\in \\mathbb {R}$ so that $\\mathsf {decode}(x) = f$ .", "(3) For all $\\beta < \\lambda $ and $\\gamma < \\kappa $ , $\\mathsf {GC}_{\\beta ,\\gamma } \\subseteq \\mathbb {R}$ , $\\mathsf {GC}_{\\beta ,\\gamma } \\in \\Delta $ , and $\\mathsf {GC}_{\\beta ,\\gamma }$ is defined by $x \\in \\mathsf {GC}_{\\beta ,\\gamma }$ if and only if $\\mathsf {decode}(x)(\\beta ,\\gamma ) \\wedge (\\forall \\gamma ^{\\prime } < \\kappa )(\\mathsf {decode}(x)(\\beta ,\\gamma ^{\\prime }) \\Rightarrow \\gamma = \\gamma ^{\\prime }).$ For each $\\beta < \\lambda $ , let $\\mathsf {GC}_\\beta = \\bigcup _{\\gamma < \\kappa } \\mathsf {GC}_{\\beta ,\\gamma }$ .", "(4) (Boundedness property) If $A \\in \\exists ^\\mathbb {R}\\Delta $ and $A \\subseteq \\mathsf {GC}_\\beta $ , then there exists some $\\delta < \\kappa $ so that $A \\subseteq \\bigcup _{\\gamma < \\delta }\\mathsf {GC}_{\\beta ,\\gamma }$ .", "(5) $\\Delta $ is closed under less than $\\kappa $ length wellordered unions.", "For $x \\in \\mathbb {R}$ , let $\\mathsf {fail}(x)$ be the least $\\beta < \\lambda $ so that $x \\notin \\mathsf {GC}_\\beta $ , if it exists.", "Otherwise, let $\\mathsf {fail}(x) = \\infty $ .", "Let $\\mathsf {GC}= \\bigcap _{\\beta < \\lambda }\\mathsf {GC}_\\beta $ .", "Note that if $x 
\\in \\mathsf {GC}$ , then $\\mathsf {decode}(x)$ is the graph of a function in ${}^\\lambda \\kappa $ .", "If $x \\in \\mathsf {GC}$ , then one will use function notations such as $\\mathsf {decode}(x)(\\beta ) = \\gamma $ to indicate $(\\beta ,\\gamma ) \\in \\mathsf {decode}(x)$ .", "Assuming $\\mathsf {AD}$ , (5) follows from the other four conditions.", "This comes from a pointclass argument.", "See the end of the proof of Theorem 2.34 of [12].", "Later, one will apply this to $\\omega _1$ with the associated pointclass ${\\mathbf {\\Sigma }_1^1}$ .", "It is clear that (5) holds in this setting since ${\\mathbf {\\Delta }_1^1}$ is closed under countable unions.", "Remark 3.2 The meaning of $x \\in \\mathsf {GC}_{\\beta ,\\gamma }$ is that $x$ is good at $\\beta $ in the sense that $\\mathsf {decode}(x)$ has successfully mapped $\\beta $ to $\\gamma $ .", "One interprets $x \\in \\mathsf {GC}_{\\beta }$ to mean that $x$ is good at $\\beta $ in the sense that $\\mathsf {decode}(x)$ has successfully mapped $\\beta $ to some value.", "So a real $x$ belonging to $\\mathsf {GC}$ means that $x$ is a good code in the sense that $\\mathsf {decode}(x)$ is truly a function from $\\lambda $ to $\\kappa $ .", "Definition 3.3 Let $\\kappa $ be a regular cardinal and let $\\lambda \\le \\kappa $ be an ordinal.", "Let $(\\Gamma ,\\mathsf {decode},\\mathsf {GC}_{\\beta ,\\gamma } : \\beta < \\lambda ,\\gamma <\\kappa )$ be a coding system for ${}^\\lambda \\kappa $ .", "Let $S^1$ consist of reals $r$ coding a Lipschitz continuous function $\\Xi _r : \\mathbb {R}\\rightarrow \\mathbb {R}$ which has the following property: $(\\forall y)[\\mathsf {fail}(y) \\le \\mathsf {fail}(\\Xi _r(y)) \\wedge (\\mathsf {fail}(y) < \\infty \\Rightarrow \\mathsf {fail}(y) < \\mathsf {fail}(\\Xi _r(y)))].$ Let $S^2$ consist of Player 2 strategies $s$ so that the associated Lipschitz continuous function $\\Xi _s : \\mathbb {R}\\rightarrow \\mathbb {R}$ has the property that $(\\forall 
x)(\\mathsf {fail}(x) \\le \\mathsf {fail}(\\Xi _s(x)))$ .", "Fact 3.4 Let $\\kappa $ be a regular cardinal and $\\lambda \\le \\kappa $ be an ordinal.", "Let $(\\Gamma ,\\mathsf {decode},\\mathsf {GC}_{\\beta ,\\gamma } : \\beta < \\lambda , \\gamma < \\kappa )$ be a coding system for ${}^\\lambda \\kappa $ .", "If $r \\in S^1$ , then there is a club $C \\subseteq \\kappa $ (obtained uniformly from $r$ ) so that for all $\\delta \\in C$ , for all $\\beta < \\min \\lbrace \\lambda ,\\delta \\rbrace $ and $\\gamma < \\delta $ , $\\Xi _r\\left[\\bigcap _{\\beta ^{\\prime } < \\beta }\\bigcup _{\\gamma ^{\\prime } < \\gamma }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}\\right] \\subseteq \\bigcap _{\\beta ^{\\prime } \\le \\beta } \\bigcup _{\\gamma ^{\\prime } < \\delta } \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ If $s \\in S^2$ , then there is a club $C \\subseteq \\kappa $ so that for all $\\delta \\in C$ , for all $\\beta < \\min \\lbrace \\lambda ,\\delta \\rbrace $ , for all $\\gamma < \\delta $ , $\\Xi _s\\left[\\bigcap _{\\beta ^{\\prime } \\le \\beta }\\bigcup _{\\gamma ^{\\prime } < \\gamma }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}\\right] \\subseteq \\bigcap _{\\beta ^{\\prime } \\le \\beta }\\bigcup _{\\gamma ^{\\prime } < \\delta } \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ Suppose $r \\in S^1$ .", "For each $\\beta < \\lambda $ and $\\gamma < \\kappa $ , let $R_{\\beta ,\\gamma } = \\bigcap _{\\beta ^{\\prime }<\\beta }\\bigcup _{\\gamma ^{\\prime }<\\gamma }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ So if $x \\in R_{\\beta ,\\gamma }$ , then $\\mathsf {decode}(x)$ is good up to $\\beta $ by successfully mapping each ordinal less than $\\beta $ to some value less than $\\gamma $ .", "By property (5) of the good coding system, $\\Delta $ is closed under less than $\\kappa $ wellordered unions and also intersections.", "Property (3) states that $\\mathsf {GC}_{\\beta ^{\\prime },\\gamma 
^{\\prime }} \\in \\Delta $ for all $\\beta ^{\\prime } < \\lambda $ and $\\gamma ^{\\prime } < \\kappa $ .", "Thus $R_{\\beta ,\\gamma } \\in \\Delta $ .", "Then $\\Xi _r[R_{\\beta ,\\gamma }]$ is $\\exists ^\\mathbb {R}\\Delta $ .", "Claim 1: For all $\\beta ^{\\prime } \\le \\beta $ , $\\Xi _r[R_{\\beta ,\\gamma }] \\subseteq \\mathsf {GC}_{\\beta ^{\\prime }}$ .", "To see Claim 1: Note that $x \\in R_{\\beta ,\\gamma }$ implies that $\\mathsf {fail}(x) \\ge \\beta $ .", "Since $r \\in S^1$ , $\\mathsf {fail}(\\Xi _r(x)) > \\beta $ .", "Thus for all $\\beta ^{\\prime } \\le \\beta $ , $\\Xi _r(x) \\in \\mathsf {GC}_{\\beta ^{\\prime }}$ .", "This shows Claim 1.", "Therefore $\\Xi _r[R_{\\beta ,\\gamma }]$ is a $\\exists ^\\mathbb {R}\\Delta $ subset of $\\mathsf {GC}_{\\beta ^{\\prime }}$ for each $\\beta ^{\\prime } \\le \\beta $ .", "By the boundedness property (4) of a good coding system, there is some $\\epsilon _{\\beta ^{\\prime }} < \\kappa $ so that $\\Xi _r[R_{\\beta ,\\gamma }] \\subseteq \\bigcup _{\\gamma ^{\\prime } < \\epsilon _{\\beta ^{\\prime }}} \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}$ .", "Since $\\kappa $ is regular, let $\\Upsilon (\\beta ,\\gamma )$ be the least $\\epsilon < \\kappa $ so that $\\beta < \\epsilon $ , $\\gamma < \\epsilon $ , and for all $\\beta ^{\\prime } \\le \\beta $ , $\\Xi _r[R_{\\beta ,\\gamma }] \\subseteq \\bigcup _{\\gamma ^{\\prime } < \\epsilon }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}$ .", "Thus $\\Upsilon : \\lambda \\times \\kappa \\rightarrow \\kappa $ is a well defined function.", "Let $C = \\lbrace \\delta : (\\forall \\beta < \\delta )(\\forall \\gamma < \\delta )(\\Upsilon (\\min \\lbrace \\lambda ,\\beta \\rbrace ,\\gamma ) < \\delta )\\rbrace $ .", "Claim 2: $C$ is a club.", "To see Claim 2: Let $\\alpha < \\kappa $ .", "Let $\\alpha _0 = \\alpha $ .", "If $\\alpha _n$ has been defined, let $\\alpha _{n + 1} = \\Upsilon (\\min \\lbrace \\lambda ,\\alpha _n\\rbrace ,\\alpha 
_n)$ .", "Let $\\alpha _\\infty = \\sup \\lbrace \\alpha _n : n \\in \\omega \\rbrace $ .", "By definition of $\\Upsilon $ , $\\alpha _n < \\alpha _{n + 1}$ for all $n$ .", "Let $\\beta < \\min \\lbrace \\lambda , \\alpha _\\infty \\rbrace $ and $\\gamma < \\alpha _\\infty $ .", "Then there is some $n$ so that $\\beta < \\alpha _n$ and $\\gamma < \\alpha _n$ .", "Then $\\Upsilon (\\min \\lbrace \\lambda ,\\beta \\rbrace ,\\gamma ) \\le \\Upsilon (\\min \\lbrace \\lambda ,\\alpha _n\\rbrace ,\\alpha _n) = \\alpha _{n + 1} < \\alpha _\\infty $ .", "This shows that $\\alpha _\\infty \\in C$ .", "As $\\alpha < \\alpha _\\infty $ , $C$ is unbounded.", "It is straightforward to show that $C$ is closed.", "Claim 2 has been shown.", "Fix a $\\delta \\in C$ .", "Pick a $\\beta < \\min \\lbrace \\lambda ,\\delta \\rbrace $ and a $\\gamma < \\delta $ .", "Suppose $x \\in R_{\\beta ,\\gamma }$ .", "Let $\\beta ^{\\prime } \\le \\beta $ .", "Then $\\Xi _r(x) \\in \\bigcup _{\\gamma ^{\\prime } < \\Upsilon (\\beta ,\\gamma )} \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }} \\subseteq \\bigcup _{\\gamma ^{\\prime } < \\delta }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ Since $\\beta ^{\\prime } \\le \\beta $ was arbitrary, $\\Xi _r(x) \\in \\bigcap _{\\beta ^{\\prime }\\le \\beta } \\bigcup _{\\gamma ^{\\prime }<\\delta }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ This completes the proof of the result for $S^1$ .", "The argument for $S^2$ is similar.", "Remark 3.5 Let $r \\in S^1$ .", "Let $C$ be the club produced by Fact REF .", "This club $C$ is called the club on which the opposite player (Player 2) has taken control of the output.", "This means the following: In most applications, $r$ codes a Lipschitz function coming from a Player 1 winning strategy.", "(Hence the notation $S^1$ .)", "Let $\\delta \\in C$ .", "Fact REF states that for any code $y$ so that $\\mathsf {decode}(y)$ successfully defines a function up to some $\\beta < \\min 
\\lbrace \\lambda ,\\delta \\rbrace $ and takes value some $\\gamma $ strictly below $\\delta $ , then the Player 1 strategy coded by $r$ when played against $y$ produces some code $x$ which successfully defines a function at and below $\\beta $ and still takes value below $\\delta $ .", "This observation is used in a game to show that although Player 1 may have a winning strategy which determines that a final sequence will land in a certain payoff set, there is a club $C$ so that if the opposite player (Player 2) plays suitable codes for functions through this club, Player 2 actually determines the value of the final sequence.", "A similar statement holds if Player 2 has the winning strategy in this game.", "This is the main idea of the following game of Martin to establish the partition properties and of an earlier game of Solovay to show the club filter is an ultrafilter.", "Definition 3.6 Suppose $\\kappa $ is a regular cardinal and $\\lambda $ is such that $\\omega \\cdot \\lambda < \\kappa $ .", "Suppose $f \\in {}^{\\omega \\cdot \\lambda }\\kappa $ .", "Let $\\mathsf {block}: {}^{\\omega \\cdot \\lambda }\\kappa \\rightarrow {}^\\lambda \\kappa $ be defined by $\\mathsf {block}(f)(\\alpha ) = \\sup \\lbrace f(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace $ .", "Suppose $f,g \\in {}^{\\omega \\cdot \\lambda }\\kappa $ .", "Let $\\mathsf {joint}: {}^{\\omega \\cdot \\lambda }\\kappa \\times {}^{\\omega \\cdot \\lambda }\\kappa \\rightarrow {}^\\lambda \\kappa $ be defined by $\\mathsf {joint}(f,g)(\\alpha ) = \\sup \\lbrace f(\\omega \\cdot \\alpha + k),g(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace .$ Theorem 3.7 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Suppose $\\lambda , \\kappa $ are ordinals such that $\\omega \\cdot \\lambda \\le \\kappa $ .", "Suppose there is a good coding system $(\\Gamma ,\\mathsf {decode},\\mathsf {GC}_{\\beta ,\\gamma } : \\beta \\in \\omega \\cdot \\lambda ,\\gamma < \\kappa )$ for ${}^{\\omega \\cdot 
\\lambda }\\kappa $ .", "Then $\\kappa \\rightarrow _* (\\kappa )^\\lambda _2$ holds.", "Let $P : [\\kappa ]^\\lambda _* \\rightarrow 2$ .", "Consider the following game $G$ where Player 1 and 2 take turns playing integers: $\\begin{array}{ccccccccccc} \\mathrm {I} & x_0 & & x_1 & & x_2 & & x_3 & & \\cdots & x \\\\ \\mathrm {II} & & y_0 & & y_1 & & y_2 & & y_3 & \\cdots & y \\end{array}$", "Player 1 produces the real $x$ , and Player 2 produces the real $y$ .", "Player 1 wins the game if and only if the conjunction of the following two conditions holds: (1) $\\mathsf {fail}(x) > \\mathsf {fail}(y) \\vee \\mathsf {fail}(x) = \\mathsf {fail}(y) = \\infty $ .", "(2) $(\\mathsf {fail}(x) = \\mathsf {fail}(y) = \\infty ) \\Rightarrow P(\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y))) = 0$ .", "(Case I) Suppose Player 1 has a winning strategy $\\sigma $ in this game.", "The strategy $\\sigma $ induces a Lipschitz function $\\Xi _\\sigma $ .", "Explicitly, $\\Xi _\\sigma (y)$ is just the response of Player 1 using $\\sigma $ when Player 2 plays using $y$ .", "Note that $\\sigma \\in S^1$ .", "Let $C$ be the club from Fact REF .", "On $C$ , Player 2 takes control of the output in the sense below.", "Let $D$ be the limit points of $C$ .", "Let $f \\in [D]^{\\lambda }_*$ .", "Since $f$ is of the correct type, let $F : \\lambda \\times \\omega \\rightarrow \\kappa $ be a witness to $f$ having uniform cofinality $\\omega $ .", "For each $\\alpha < \\lambda $ , let $\\nu _\\alpha = \\sup \\lbrace f(\\xi ) : \\xi < \\alpha \\rbrace $ .", "Let $g(\\omega \\cdot \\alpha )$ be the least element of $C$ greater than $\\max \\lbrace \\nu _\\alpha ,F(\\alpha ,0)\\rbrace $ .", "Suppose $g(\\omega \\cdot \\alpha + k)$ has been defined for some $k \\in \\omega $ .", "Let $g(\\omega \\cdot \\alpha + k + 1)$ be the least element of $C$ greater than $\\max 
\\lbrace g(\\omega \\cdot \\alpha +k),F(\\alpha , k + 1)\\rbrace $ .", "Now $g \\in [C]^{\\omega \\cdot \\lambda }$ , $g$ is discontinuous everywhere, and $\\mathsf {block}(g) = f$ .", "By property (2) of the good coding system, there is some $y \\in \\mathsf {GC}$ so that $\\mathsf {decode}(y) = g$ .", "Let Player 2 play this $y$ against Player 1 using $\\sigma $ .", "Since $y \\in \\mathsf {GC}$ , $\\mathsf {fail}(y) = \\infty $ .", "Thus $\\mathsf {fail}(\\Xi _\\sigma (y)) = \\infty $ .", "For $\\beta < \\omega \\cdot \\lambda $ , let $\\epsilon _\\beta = \\sup \\lbrace g(\\alpha ) : \\alpha < \\beta \\rbrace $ .", "Since $g$ is discontinuous everywhere, $\\epsilon _\\beta < g(\\beta )$ .", "Then $\\Xi _\\sigma (y) \\in \\Xi _\\sigma \\left[\\bigcap _{\\beta ^{\\prime } < \\beta }\\bigcup _{\\gamma ^{\\prime } < \\epsilon _\\beta }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}\\right] \\subseteq \\bigcap _{\\beta ^{\\prime }\\le \\beta }\\bigcup _{\\gamma ^{\\prime } < g(\\beta )}\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}$ by the definition of $C$ .", "Hence $\\mathsf {decode}(\\Xi _\\sigma (y))(\\beta ) < g(\\beta )$ .", "Thus for all $\\alpha < \\lambda $ , $\\sup \\lbrace \\mathsf {decode}(\\Xi _\\sigma (y))(\\omega \\cdot \\alpha + k), \\mathsf {decode}(y)(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace = \\sup \\lbrace g(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace = f(\\alpha ).$ This shows that $\\mathsf {joint}(\\mathsf {decode}(\\Xi _\\sigma (y)),\\mathsf {decode}(y)) = f$ .", "Since $\\sigma $ is a Player 1 winning strategy, one has that $P(f) = 0$ .", "Since $f \\in [D]^{\\lambda }_*$ was arbitrary, $D$ is homogeneous for $P$ taking value 0.", "(Case II) Suppose Player 2 has a winning strategy $\\tau $ .", "Note that Player 2 wins if and only if the disjunction of the following holds: (1) $\\mathsf {fail}(x) < \\mathsf {fail}(y) \\vee (\\mathsf {fail}(x) = \\mathsf {fail}(y) < \\infty )$ .", "(2) $\\mathsf {fail}(x) 
= \\mathsf {fail}(y) = \\infty \\wedge P(\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y))) = 1$ .", "Therefore, $\\tau \\in S^2$ .", "Let $C$ be the club coming from Fact REF .", "$C$ is the club for which Player 1 takes control of the output.", "One may assume that $C$ consists of only limit ordinals.", "Let $D$ be the limit points of $C$ .", "Let $f \\in [D]^\\lambda _*$ .", "As before, there is a $g \\in [C]^{\\omega \\cdot \\lambda }$ such that $\\mathsf {block}(g) = f$ .", "Let $x \\in \\mathsf {GC}$ be such that $\\mathsf {decode}(x) = g$ .", "Since $\\mathsf {fail}(x) = \\infty $ , $\\mathsf {fail}(\\Xi _\\tau (x)) = \\infty $ and $\\Xi _\\tau (x) \\in \\mathsf {GC}$ as well.", "Let $\\beta < \\omega \\cdot \\lambda $ .", "Note that $g(\\beta ) + 1 < g(\\beta + 1)$ $\\Xi _\\tau (x) \\in \\Xi _\\tau \\left[\\bigcap _{\\beta ^{\\prime } \\le \\beta }\\bigcup _{\\gamma ^{\\prime } < g(\\beta ) + 1} \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}\\right] \\subseteq \\bigcap _{\\beta ^{\\prime }\\le \\beta }\\bigcup _{\\gamma ^{\\prime } < g(\\beta + 1)} \\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}$ by definition of $C$ .", "Hence $\\mathsf {decode}(\\Xi _\\tau (x))(\\beta ) < g(\\beta + 1)$ .", "Thus for all $\\alpha < \\lambda $ , $\\sup \\lbrace \\mathsf {decode}(x)(\\omega \\cdot \\alpha + k), \\mathsf {decode}(\\Xi _\\tau (x))(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace = \\sup \\lbrace g(\\omega \\cdot \\alpha + k) : k \\in \\omega \\rbrace = f(\\alpha ).$ This shows that $\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(\\Xi _\\tau (x))) = f$ .", "Since $\\tau $ is a Player 2 winning strategy, one has that $P(f) = 1$ .", "$D$ is homogeneous for $P$ taking value 1.", "The proof is complete.", "Theorem 3.8 (Almost everywhere uniformization on codes) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\kappa $ be a regular cardinal and $\\lambda < \\kappa $ .", "Let $(\\Gamma ,\\mathsf {decode},\\mathsf {GC}_{\\beta 
,\\gamma } : \\beta < \\omega \\cdot \\lambda , \\gamma < \\kappa )$ be a good coding system for ${}^{\\omega \\cdot \\lambda }\\kappa $ .", "Let $R \\subseteq [\\kappa ]^\\lambda _* \\times \\mathbb {R}$ be a relation.", "There is a club $C \\subseteq \\kappa $ and a Lipschitz continuous function $F : \\mathbb {R}\\rightarrow \\mathbb {R}$ so that for all $x \\in \\mathsf {GC}$ with $\\mathsf {decode}(x) \\in [C]^{\\omega \\cdot \\lambda }$ and $\\mathsf {block}(\\mathsf {decode}(x)) \\in [C]^\\lambda _* \\cap \\mathrm {dom}(R)$ , $R(\\mathsf {block}(\\mathsf {decode}(x)),F(x))$ .", "Define a game $G$ as follows: $\\begin{tikzpicture}\\node at (0,0) {G};\\end{tikzpicture}\\node at (1,.5) {I};\\node at (2,.5) {x_0};\\node at (4,.5) {x_1};\\node at (6,.5) {x_2};\\node at (8,.5) {x_3};\\node at (11,.5) {x};$ t (1,-.5) II; t (3,-.5) $y_0,z_0$ ; t (5,-.5) $y_1,z_1$ ; t (7,-.5) $y_2,z_2$ ; t (9,-.5) $y_3,z_3$ ; t (11,-.5) $y,z$ ; $Player 2 wins if and only if the disjunction of the following hold:$ (1) $\\mathsf {fail}(x) < \\mathsf {fail}(y) \\vee \\mathsf {fail}(x) = \\mathsf {fail}(y) < \\infty $ .", "(2) $(\\mathsf {fail}(x) = \\mathsf {fail}(y) = \\infty ) \\wedge (\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y)) \\in \\mathrm {dom}(R) \\Rightarrow R(\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y)),z))$ .", "Claim 1: Player 2 has a winning strategy.", "To prove Claim 1: Suppose not.", "By $\\mathsf {AD}$ , Player 1 has a winning strategy $\\sigma $ .", "Note that Player 1 winning condition is the conjunction of the follow: (1) $\\mathsf {fail}(y) < \\mathsf {fail}(x) \\vee \\mathsf {fail}(x) = \\mathsf {fail}(y) = \\infty $ .", "(2) $\\mathsf {fail}(x) = \\mathsf {fail}(y) = \\infty \\Rightarrow [\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y)) \\in \\mathrm {dom}(R) \\wedge \\lnot R(\\mathsf {joint}(\\mathsf {decode}(x),\\mathsf {decode}(y)),z)]$ .", "Let $\\Xi _\\sigma $ be the associated continuous function.", "Note that 
$(\\forall y)(\\forall z)[\\mathsf {fail}(y) \\le \\mathsf {fail}(\\Xi _\\sigma (y,z)) \\wedge (\\mathsf {fail}(y) < \\infty \\Rightarrow \\mathsf {fail}(y) < \\mathsf {fail}(\\Xi _\\sigma (y,z)))].$ That is, if one fixes any $z$ , the associated function $F_z(y) = \\Xi _\\sigma (y,z)$ belongs to $S^1$ .", "With a small modification to the argument of Fact REF , there is a club $C$ with the property that for all $\\delta \\in C$ , for all $z \\in \\mathbb {R}$ , for all $\\beta < \\min \\lbrace \\lambda ,\\delta \\rbrace $ , and $\\gamma < \\delta $ $F_z\\left[\\bigcap _{\\beta ^{\\prime }<\\beta }\\bigcup _{\\gamma ^{\\prime } < \\gamma }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}\\right] \\subseteq \\bigcap _{\\beta ^{\\prime }\\le \\beta }\\bigcup _{\\gamma ^{\\prime } < \\gamma }\\mathsf {GC}_{\\beta ^{\\prime },\\gamma ^{\\prime }}.$ That is, $C$ is a club for which Player 2 controls the output against $F_z$ for every $z \\in \\mathbb {R}$ .", "Let $D$ be the limit points of $C$ .", "Let $f \\in [D]^{\\lambda }_*$ .", "As in the proof of Theorem REF , let $g \\in [D]^{\\omega \\cdot \\lambda }$ be such that $\\mathsf {block}(g) = f$ .", "Let $y \\in \\mathsf {GC}$ be such that $\\mathsf {decode}(y) =g$ .", "By the same argument as in Theorem REF , one has that for all $z$ , $\\mathsf {joint}(\\mathsf {decode}(\\Xi _\\sigma (y,z)),\\mathsf {decode}(y)) = f$ .", "Since $\\sigma $ is a Player 1 winning strategy, one must have that $f \\in \\mathrm {dom}(R)$ .", "Since $f \\in \\mathrm {dom}(R)$ , there is some $z^*$ so that $R(f,z^*)$ .", "If Player 2 plays $(y,z^*)$ , then Player 1 loses using $\\sigma $ .", "This contradicts $\\sigma $ being a Player 1 winning strategy.", "Claim 1 has been shown.", "Let $\\tau $ be a Player 2 winning strategy in this game.", "Let $G(x) = \\pi _1[\\Xi _\\tau (x)]$ , where $\\pi _1$ refers to the projection onto the first coordinate.", "Then $G$ is a Lipschitz continuous function satisfying the condition from the 
definition of $S^2$ .", "By Fact REF , let $C$ be the club on which Player 2 takes control of the output.", "Again one may assume $C$ consists of only limit ordinals.", "Let $D$ be the limit points of $C$ .", "Let $f \in [D]^{\lambda }_* \cap \mathrm {dom}(R)$ .", "Let $g \in [C]^{\omega \cdot \lambda }$ be such that $\mathsf {block}(g) = f$ .", "Let $x \in \mathsf {GC}$ be such that $\mathsf {decode}(x) = g$ .", "By the argument in Theorem REF , $\mathsf {joint}(\mathsf {decode}(x),\mathsf {decode}(G(x))) = f$ .", "Since $\tau $ is a Player 2 winning strategy, one must have that $R(f,\pi _2(\Xi _\tau (x)))$ .", "Define $F(x) = \pi _2(\Xi _\tau (x))$ .", "Then $F$ is a Lipschitz function with the desired uniformization property.", "Theorem 3.9 Let $\kappa $ be a regular cardinal and $\lambda \le \kappa $ .", "Suppose $(\Gamma , \mathsf {decode}, \mathsf {GC}_{\beta ,\gamma } : \beta < \lambda , \gamma < \kappa )$ is a good coding system for ${}^\lambda \kappa $ .", "Let $M \models \mathsf {AD}$ be an inner model containing all the reals and within $M$ , $(\Gamma ,\mathsf {decode},\mathsf {GC}_{\beta ,\gamma } : \beta < \lambda ,\gamma <\kappa )$ is a good coding system.", "Then for any $\Phi : [\kappa ]^\lambda \rightarrow \kappa $ , there is a club $D$ , necessarily in $M$ by the Moschovakis coding lemma, so that $\Phi \upharpoonright [D]^\lambda _* \in M$ .", "The hypothesis implies that $\kappa < \Theta ^M$ .", "Let $\pi : \mathbb {R}\rightarrow \kappa $ be a surjection in $M$ .", "Define a relation $R \subseteq [\kappa ]^\lambda _* \times \mathbb {R}$ by $R(f,x) \Leftrightarrow \Phi (f) = \pi (x)$ .", "Let the Lipschitz function $F : \mathbb {R}\rightarrow \mathbb {R}$ and club $C \subseteq \kappa $ be the objects given by Theorem REF .", "Let $D$ be the set of limit points of $C$ .", "Let $f \in [D]^\lambda _*$ .", "Let $x \in \mathsf {GC}$ be any real such that $\mathsf {decode}(x) \in 
[C]^{\\omega \\cdot \\lambda }_*$ and $\\mathsf {block}(\\mathsf {decode}(x)) = f$ .", "Then $R(\\mathsf {block}(\\mathsf {decode}(x)), F(x))$ .", "This means that $\\Phi (f) = \\pi (F(x))$ .", "Note that for any $x$ and $x^{\\prime }$ so that $\\mathsf {decode}(x),\\mathsf {decode}(x^{\\prime }) \\in [C]^{\\omega \\cdot \\lambda }_*$ and $\\mathsf {block}(\\mathsf {decode}(x)) = \\mathsf {block}(\\mathsf {decode}(x^{\\prime }))$ , one has that $\\pi (F(x)) = \\pi (F(x^{\\prime })) = \\Phi (\\mathsf {block}(\\mathsf {decode}(x)))$ .", "By the Moschovakis coding lemma and the fact that $\\pi \\in M$ , $C$ and hence $D$ belongs to $M$ .", "So within $M$ , one can define $\\Phi \\upharpoonright [D]^{\\lambda }_*$ as follows, $\\Phi (f) = \\gamma $ if and only if there exists some $x \\in \\mathsf {GC}$ so that $\\mathsf {decode}(x) \\in [C]^{\\omega \\cdot \\lambda }_*$ , $\\mathsf {block}(\\mathsf {decode}(x)) = f$ , and $\\pi (F(x)) = \\gamma $ .", "By the above, this is well defined and works.", "One will say that a cardinal $\\kappa $ is “reasonable” if one is in the situation where there exists a good coding system: Definition 3.10 (Jackson) Let $\\kappa $ be a regular cardinal and $\\lambda \\le \\kappa $ .", "$\\kappa $ is $\\lambda $ -reasonable if and only if there is a good coding system for ${}^\\lambda \\kappa $ ." 
], [ "Reasonableness at $\omega _1$", "Definition 4.1 Fix some recursive pairing function $\pi : \omega \times \omega \rightarrow \omega $ .", "A real $x \in {{}^\omega 2}$ codes a relation $R_x$ defined as follows: $R_x(m,n) \Leftrightarrow x(\pi (m,n)) = 1$ .", "Let the domain of $x$ be $\mathrm {dom}(x) = \lbrace n : (\exists m)(R_x(m,n) \vee R_x(n,m))\rbrace $ .", "Let $\mathrm {LO}$ be the set of reals $x$ so that $R_x$ is a linear ordering on its domain.", "Let ${\mathrm {WO}}$ be the set of reals $x$ so that $R_x$ is a well ordering.", "$\mathrm {LO}$ is an arithmetic set of reals.", "$\mathrm {WO}$ is $\Pi _1^1$ .", "If $x \in {\mathrm {WO}}$ , then $\mathrm {ot}(x)$ is the order type of $x$ .", "If $\beta < \mathrm {ot}(x)$ , let $n_\beta ^x$ denote the unique element of $\mathrm {dom}(x)$ whose rank according to $R_x$ is $\beta $ .", "If $\alpha < \omega _1$ , then let ${\mathrm {WO}}_\alpha = \lbrace x \in {\mathrm {WO}}: \mathrm {ot}(x) = \alpha \rbrace $ .", "${\mathrm {WO}}_{<\alpha } = \lbrace x \in {\mathrm {WO}}: \mathrm {ot}(x) < \alpha \rbrace $ .", "${\mathrm {WO}}_{\le \alpha }$ , ${\mathrm {WO}}_{>\alpha }$ , and ${\mathrm {WO}}_{\ge \alpha }$ are defined similarly.", "${\mathrm {WO}}_\alpha $ , ${\mathrm {WO}}_{<\alpha }$ , and ${\mathrm {WO}}_{\le \alpha }$ are ${\mathbf {\Delta }_1^1}$ (in any element of ${\mathrm {WO}}_\alpha $ ).", "Definition 4.2 The ordertype function $\mathrm {ot}: {\mathrm {WO}}\rightarrow \omega _1$ is a $\Pi _1^1$ norm.", "Let $\preceq $ denote the induced prewellordering: $x \preceq y$ if and only if $\mathrm {ot}(x) \le \mathrm {ot}(y)$ .", "Being a $\Pi _1^1$ -norm implies that there is a $\Sigma _1^1$ relation $\le _{\Sigma _1^1}$ and a $\Pi _1^1$ relation $\le _{\Pi _1^1}$ so that $y \in {\mathrm {WO}}\Rightarrow (\forall x)[(x \in {\mathrm {WO}}\wedge x \preceq y) \Leftrightarrow (x \le _{\Sigma _1^1} y) 
\Leftrightarrow (x \le _{\Pi _1^1} y)].$ It is useful to have a concrete coding of a nice collection of club subsets of $\omega _1$ .", "Fact 4.3 Let $\tau $ be a Player 2 strategy with the property that for all $w \in \mathrm {WO}$ , $\tau (w) \in \mathrm {WO} \wedge \mathrm {ot}(w) < \mathrm {ot}(\tau (w))$ .", "Let $C_\tau = \lbrace \eta : (\forall w \in \mathrm {WO}_{<\eta })(\mathrm {ot}(\tau (w)) < \eta )\rbrace $ .", "$C_\tau $ is a club.", "Let $R_\alpha = \lbrace \tau (w) : w \in \mathrm {WO}_{<\alpha }\rbrace $ .", "$R_\alpha \subseteq \mathrm {WO}$ is ${\mathbf {\Sigma }_1^1}$ .", "By boundedness, $\sup \lbrace \mathrm {ot}(v) : v \in R_\alpha \rbrace < \omega _1$ .", "Let $\Phi (\alpha ) = \sup \lbrace \mathrm {ot}(\tau (w)) : w \in \mathrm {WO}_{<\alpha }\rbrace + 1$ .", "By the above, $\Phi (\alpha )$ is defined.", "It is clear that $C_\tau $ is closed.", "Let $\alpha < \omega _1$ .", "Let $\alpha _0 = \alpha $ .", "Let $\alpha _{n + 1} = \Phi (\alpha _n)$ .", "Let $\eta = \sup \lbrace \alpha _n : n \in \omega \rbrace $ .", "Let $\beta < \eta $ .", "There is some $n$ so that $\beta < \alpha _n$ .", "Let $w \in \mathrm {WO}_\beta \subseteq \mathrm {WO}_{<\alpha _n}$ .", "Thus $\mathrm {ot}(\tau (w)) < \Phi (\alpha _n) = \alpha _{n + 1} < \eta $ .", "So $\eta \in C_\tau $ .", "This shows that $C_\tau $ is unbounded.", "Definition 4.4 Let $\mathsf {clubcode}_{\omega _1}$ be the set of Player 2 strategies $\tau $ so that for all $w \in \mathrm {WO}$ , $\tau (w) \in \mathrm {WO} \wedge \mathrm {ot}(w) < \mathrm {ot}(\tau (w))$ .", "Note that $\mathsf {clubcode}_{\omega _1}$ is a $\Pi _2^1$ set of reals.", "Fact 4.5 Assume that $\tau \in \mathsf {clubcode}_{\omega _1}$ .", "The relation $S(w)$ defined by $w \in {\mathrm {WO}}\wedge \mathrm {ot}(w) \in C_\tau $ is $\mathbf {\Pi }_1^1$ .", "Assume $\mathsf {AC}_\omega ^\mathbb {R}$ .", "If $\alpha 
< \omega _1$ , then the relation $T_\alpha (w)$ defined by $w \in {\mathrm {WO}}_{<\alpha } \wedge \mathrm {ot}(w) \in C_\tau $ is ${\mathbf {\Delta }_1^1}$ .", "Note that $S(w)$ holds if and only if $w \in {\mathrm {WO}}\wedge (\forall v)(v <_{\Sigma _1^1} w \Rightarrow \tau (v) <_{\Pi _1^1} w)$ , where $<_{\Sigma _1^1}$ and $<_{\Pi _1^1}$ are $\Sigma _1^1$ and $\Pi _1^1$ relations, respectively, coming from the ordertype function, $\mathrm {ot}$ , being a $\Pi _1^1$ -norm.", "Note that $\alpha \cap C_\tau $ is a countable set.", "Using $\mathsf {AC}_\omega ^\mathbb {R}$ , let $r \in \mathbb {R}$ be such that $\lbrace \mathrm {ot}(r_n) : n \in \omega \rbrace = \alpha \cap C_\tau $ .", "(Here $r_n(m) = r(\langle n,m\rangle )$ , where $\langle \cdot , \cdot \rangle : \omega \times \omega \rightarrow \omega $ is a recursive bijective pairing function.)", "One can see that $T_\alpha $ is ${\mathbf {\Delta }_1^1}$ using as parameters $r$ and some element of ${\mathrm {WO}}_\alpha $ .", "Fact 4.6 Assume $\mathsf {ZF}+ \mathsf {AD}$ .", "Let $C \subseteq \omega _1$ be a club.", "There is a $\tau \in \mathsf {clubcode}_{\omega _1}$ so that $C_\tau \subseteq C$ .", "Consider the game $G_C$ in which Player 1 plays $w_0,w_1,w_2,\dots $ producing a real $w$ , and Player 2 plays $v_0,v_1,v_2,\dots $ producing a real $v$ .", "Player 2 wins this game if and only if $w \in {\mathrm {WO}}\Rightarrow (v \in {\mathrm {WO}}\wedge \mathrm {ot}(v) > \mathrm {ot}(w) \wedge \mathrm {ot}(v) \in C)$ .", "Claim 1: Player 2 has a winning strategy.", "Suppose $\sigma $ is a Player 1 winning strategy.", "Note that $\sigma [\mathbb {R}] \subseteq \mathrm {WO}$ and is ${\mathbf {\Sigma }_1^1}$ .", "By boundedness, $\gamma = \sup \lbrace \mathrm {ot}(w) : w \in \sigma [\mathbb {R}]\rbrace < \omega _1$ .", "Since $C$ is unbounded, let 
$\\delta \\in C$ be such that $\\delta > \\gamma $ .", "Let $v \\in \\mathrm {WO}_\\delta $ .", "If Player 2 plays $v$ against $\\sigma $ , $v \\in \\mathrm {WO}$ , $\\mathrm {ot}(v) = \\delta \\in C$ , and $\\mathrm {ot}(\\sigma (v)) \\le \\gamma < \\delta = \\mathrm {ot}(v)$ .", "Player 2 has won.", "This contradicts $\\sigma $ being a Player 1 winning strategy.", "Let $\\tau $ be a Player 2 winning strategy.", "It is clear that $\\tau \\in \\mathsf {clubcode}_{\\omega _1}$ .", "Claim 2: $C_\\tau \\subseteq C$ .", "Suppose $\\eta \\in C_\\tau $ .", "Let $\\beta < \\eta $ .", "Let $w \\in \\mathrm {WO}_\\beta $ .", "Then $\\beta < \\mathrm {ot}(\\tau (w)) < \\eta $ and $\\mathrm {ot}(\\tau (w)) \\in C$ .", "Since $\\beta < \\eta $ was arbitrary and $C$ is closed, $\\eta \\in C$ .", "Fact 4.7 Suppose $A \\subseteq \\mathsf {clubcode}_{\\omega _1}$ is ${\\mathbf {\\Sigma }_1^1}$ .", "Then uniformly from $A$ , there is a club $C$ so that for all $\\tau \\in A$ , $C \\subseteq C_\\tau $ .", "For each $\\beta < \\omega _1$ , let $R_\\beta = \\lbrace \\tau (w) : \\tau \\in A \\wedge w \\in \\mathrm {WO}_{<\\beta }\\rbrace $ .", "$R_\\beta \\subseteq \\mathrm {WO}$ and is ${\\mathbf {\\Sigma }_1^1}$ .", "By the boundedness principle, $\\sup \\lbrace \\mathrm {ot}(v) : v \\in R_\\beta \\rbrace < \\omega _1$ .", "Let $\\Phi (\\beta ) = \\sup \\lbrace \\mathrm {ot}(v) : v \\in R_\\beta \\rbrace + 1$ .", "Let $C = \\lbrace \\eta : (\\forall \\beta < \\eta )(\\Phi (\\beta ) < \\eta )\\rbrace $ .", "As before, one can check that $C$ is a club subset of $\\omega _1$ .", "Fix $\\tau \\in A$ .", "Let $\\eta \\in C$ .", "Suppose $\\beta < \\eta $ .", "Let $w \\in \\mathrm {WO}_\\beta $ .", "Then $\\beta < \\mathrm {ot}(\\tau (w)) < \\Phi (\\beta ) < \\eta $ .", "Thus $\\eta \\in C_\\tau $ .", "Fact 4.8 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\langle \\mathcal {A}_\\alpha : \\alpha < \\omega _1\\rangle $ be such that each $\\mathcal {A}_\\alpha $ is a nonempty $\\subseteq 
$ -downward closed collection of club subsets of $\omega _1$ .", "Then there is a sequence $\langle C_\alpha : \alpha < \omega _1 \rangle $ with each $C_\alpha \subseteq \omega _1$ a club subset of $\omega _1$ and $C_\alpha \in \mathcal {A}_\alpha $ .", "In particular: Let $\mu $ be the club measure on $\omega _1$ .", "Let $\langle A_\alpha : \alpha < \omega _1\rangle $ be a sequence of sets in $\mu $ , that is each $A_\alpha $ contains a club subset of $\omega _1$ .", "Then there is a sequence $\langle C_\alpha : \alpha < \omega _1\rangle $ with each $C_\alpha \subseteq \omega _1$ a club subset of $\omega _1$ so that $C_\alpha \subseteq A_\alpha $ .", "Consider the game $G$ in which Player 1 plays $w(0),w(1),w(2),\dots $ producing a real $w$ , and Player 2 plays $z(0),z(1),z(2),\dots $ producing a real $z$ , where Player 2 wins if and only if $w \in {\mathrm {WO}}\Rightarrow (z \in \mathsf {clubcode}_{\omega _1} \wedge C_z \in \mathcal {A}_{\mathrm {ot}(w)})$ .", "Claim 1: Player 2 has a winning strategy in this game.", "To prove this, suppose otherwise that Player 1 has a winning strategy $\sigma $ .", "Note that $\sigma [\mathbb {R}] \subseteq {\mathrm {WO}}$ .", "Since $\sigma [\mathbb {R}]$ is a ${\mathbf {\Sigma }_1^1}$ subset of ${\mathrm {WO}}$ , by the boundedness principle, there is a $\zeta \in \omega _1$ so that for all $w \in \sigma [\mathbb {R}]$ , $\mathrm {ot}(w) < \zeta $ .", "By $\mathsf {AC}_\omega ^\mathbb {R}$ , pick $\langle C_\alpha : \alpha < \zeta \rangle $ with the property that for all $\alpha < \zeta $ , $C_\alpha \in \mathcal {A}_\alpha $ .", "Let $C = \bigcap _{\alpha < \zeta } C_\alpha $ which is also a club.", "By Fact REF , there is some $z \in \mathsf {clubcode}_{\omega _1}$ so that $C_z \subseteq C$ .", "Note that since for each $\alpha < \zeta $ , $C_z \subseteq C_\alpha $ , 
$C_\\alpha \\in \\mathcal {A}_\\alpha $ , and $\\mathcal {A}_\\alpha $ is $\\subseteq $ -downward closed, one has that $C_z \\in \\mathcal {A}_\\alpha $ for each $\\alpha < \\zeta $ .", "Thus Player 2 wins against $\\sigma $ by playing $z$ .", "Contradiction.", "Thus let $\\tau $ be a Player 2 winning strategy in this game.", "For each $\\alpha < \\omega _1$ , let $P_\\alpha = \\tau [{\\mathrm {WO}}_\\alpha ]$ .", "Note that $P_\\alpha \\subseteq \\mathrm {clubcode}_{\\omega _1}$ with the property that for all $z \\in P_\\alpha $ , $C_z \\in \\mathcal {A}_\\alpha $ .", "Note that $P_\\alpha $ is ${\\mathbf {\\Sigma }_1^1}$ .", "By Fact REF , there is a uniform procedure to obtain from the set $P_\\alpha $ , a club $C_\\alpha $ with the property that $C_\\alpha \\subseteq C_z$ for all $z \\in P_\\alpha $ .", "In particular since $\\mathcal {A}_\\alpha $ is $\\subseteq $ -downward closed, $C_\\alpha \\in \\mathcal {A}_\\alpha $ .", "This completes the proof.", "Fact 4.9 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\lambda < \\omega _1$ .", "Then $\\omega _1$ is $\\lambda $ -reasonable.", "Note that $\\omega _1$ is regular by $\\mathsf {AC}_\\omega ^\\mathbb {R}$ .", "Let $\\Gamma = {\\mathbf {\\Sigma }_1^1}$ .", "Note that ${\\mathbf {\\Delta }_1^1}$ is closed under countable unions.", "Thus (1) and (5) hold.", "For each $x \\in \\mathbb {R}$ , let $x_n \\in \\mathbb {R}$ be defined by $x_n(k) = x(\\langle n,k\\rangle )$ , where $\\langle \\cdot ,\\cdot \\rangle : \\omega \\times \\omega \\rightarrow \\omega $ denotes a fixed recursive bijective pairing function.", "Fix some $w \\in {\\mathrm {WO}}_\\lambda $ with $\\mathrm {dom}(w) = \\omega $ .", "Define $\\mathsf {decode}(x)(\\alpha ,\\beta )$ if and only if $x_{n^w_\\alpha } \\in {\\mathrm {WO}}_\\beta $ .", "Suppose $f \\in {}^\\lambda \\omega _1$ .", "By $\\mathsf {AC}_\\omega ^\\mathbb {R}$ , for each $\\beta < \\lambda $ , let $u_\\beta \\in {\\mathrm {WO}}_{f(\\beta )}$ .", "Let $x \\in \\mathbb 
{R}$ be such that $x_{n^w_\beta } = u_\beta $ .", "Thus $\mathsf {decode}(x) = f$ .", "This shows (2).", "Let $\mathsf {GC}_{\beta ,\gamma } = \lbrace x : x_{n^w_{\beta }} \in {\mathrm {WO}}_\gamma \rbrace $ .", "Note that $\mathsf {GC}_{\beta ,\gamma }$ is ${\mathbf {\Delta }_1^1}$ .", "It satisfies the property in (3).", "Suppose $A \in {\mathbf {\Sigma }_1^1}$ and $A \subseteq \mathsf {GC}_\beta $ .", "Let $R = \lbrace x_{n^w_\beta } : x \in A\rbrace $ .", "Then $R$ is a ${\mathbf {\Sigma }_1^1}$ subset of ${\mathrm {WO}}$ .", "By the usual boundedness principle, there is some $\delta < \omega _1$ so that for all $v \in R$ , $\mathrm {ot}(v) < \delta $ .", "Then $A \subseteq \bigcup _{\gamma < \delta }\mathsf {GC}_{\beta ,\gamma }$ .", "It has been shown that $({\mathbf {\Sigma }_1^1}, \mathsf {decode}, \mathsf {GC}_{\beta ,\gamma } : \beta < \lambda , \gamma < \omega _1)$ is a good coding system for ${}^\lambda \omega _1$ .", "Corollary 4.10 (Martin) Assume $\mathsf {ZF}+ \mathsf {AD}$ .", "$\omega _1$ is a weak partition cardinal, i.e.", "for all $\lambda < \omega _1$ , $\omega _1 \rightarrow _* (\omega _1)^{\lambda }_2$ .", "Corollary 4.11 Assume $\mathsf {ZF}+ \mathsf {AD}$ .", "The club filter $\mu = W^{\omega _1}_\omega $ is the unique countably complete normal measure on $\omega _1$ .", "By Fact REF .", "Fact 4.12 Let $\alpha < \omega _1$ .", "Fix a good coding system for ${}^{\omega \cdot \alpha } \omega _1$ .", "Suppose $z \in \mathsf {clubcode}_{\omega _1}$ .", "Let $D$ be the set of limit points of $C_z$ .", "Let $f \in [D]^{\alpha }_*$ .", "The set $\lbrace x \in \mathbb {R}: \mathsf {decode}(x) \in [C_z]^{\omega \cdot \alpha } \wedge \mathsf {block}(\mathsf {decode}(x)) = f\rbrace $ is a ${\mathbf {\Delta }_1^1}$ set.", "Fix any $x^*$ so that $\mathsf {block}(\mathsf {decode}(x^*)) = f$ .", "Since $\omega _1$ is regular, $\sup (f) = \gamma < \omega _1$ 
.", "Using $x^*$ as a parameter, one can now check that the desired set is ${\mathbf {\Delta }_1^1}$ by unraveling the definition of the coding system in Theorem REF and using Fact REF .", "The original method of establishing that $\omega _1$ is $\omega _1$ -reasonable involves sharps for reals, which will be concisely reviewed below.", "See [23] for more details.", "The reader who would prefer to avoid sharps can skip ahead to Theorem REF to see the argument of Kechris that uses the Banach-Mazur game and category.", "Let ${L}= \lbrace \dot{\in }, \dot{E}\rbrace $ be a language where $\dot{\in }$ is a binary relation symbol and $\dot{E}$ is a unary relation symbol.", "For each ${L}$ -formula $\varphi (w,v_1,...,v_{k})$ , whose free variables are exactly those listed, let $h_\varphi (v_1,...,v_k)$ be a formal function which will be called the Skolem function associated to $\varphi $ .", "Define the language ${L}^I = \lbrace \dot{\in }, \dot{E}, \dot{c}_n : n \in \omega \rbrace $ where for each $n \in \omega $ , $\dot{c}_n$ is a distinct constant symbol.", "Let $\mathsf {skolem}$ denote the smallest class of functions $h(v_1,...,v_k)$ containing $h_\varphi $ for all $\varphi $ and closed under composition.", "${L}^S$ consists of $\dot{\in }$ , $\dot{E}$ , constants $c_n$ for all $n$ , and new distinct constant symbols $h(c_{i_1},...,c_{i_k})$ for all $i_1 < ... 
< i_k$ in $\\omega $ and $h \\in \\mathsf {skolem}$ One will assume one has fixed a recursive coding of formulas and terms of ${L}^S$ .", "One will identify terms or formulas with their associated integer code.", "Let $\\dot{\\prec }$ denote the ${L}$ -formula that defines the canonical $L[\\dot{E}]$ wellordering.", "Every formula $\\phi $ of ${L}^S$ can be converted into a ${L}^I$ formula $\\tilde{\\phi }$ that roughly amounts to recursively replacing each Skolem term $h_\\varphi (v_1,...,v_k)$ with its intended meaning that it should represent the $\\dot{\\prec }$ -least solution to $\\varphi $ if it exists and $\\emptyset $ otherwise.", "Definition 4.13 Let $A(T)$ assert the following: (1) $T$ is the set of integer code for a complete and consistent theory extending $\\mathsf {ZF}+ V = L[\\dot{E}]$ .", "The statement $(\\forall x)(x \\in \\dot{E} \\Rightarrow x \\in \\omega )$ belongs to $T$ .", "(2) (Indiscernibility) For each ${L}$ -formula $\\varphi (v_1,...,v_k)$ and increasing sequences of integers $(a_1,...,a_k)$ and $(b_1,...,b_k)$ , the following sentence belongs to $T$ : $``\\varphi (c_{a_1},...,c_{a_k}) \\Leftrightarrow \\varphi (c_{b_1},...,c_{b_k})\".$ (3) (Unbounded) Let $h(v_1,...,v_k) \\in \\mathsf {skolem}$ .", "The following sentence belongs to $T$ : $``h(c_1,...,c_k) \\in \\mathrm {ON} \\Rightarrow h(c_1,...,c_k) < c_{k + 1}\".$ (This is actually a ${L}^S$ -sentence $\\phi $ .", "Precisely, one means that $\\tilde{\\phi }\\in T$ .)", "(4) (Remarkability) Suppose $h(v_0,...,v_k,w_1,...,v_j) \\in \\mathsf {skolem}$ .", "Then the following sentence belongs to $T$ : $``[h(c_0,...,c_k,c_{a_1},...,c_{a_j}) \\in \\mathrm {ON} \\wedge h(c_0,...,c_k,c_{a_1},...,c_{a_j}) < c_{a_1}]$ $\\Rightarrow h(c_1,...,c_k,c_{a_1},...,c_{a_j}) = h(c_1,...,c_k,c_{b_1},...,c_{b_k})\"$ for all increasing sequences of integers $(a_1,...,a_j)$ and $(b_1,...,b_j)$ such that $k < a_1$ and $k < b_1$ .", "Note that $A(T)$ is an arithmetical statement asserting syntactical 
conditions on the ${L}^I$ -theory $T$ .", "Now assume $T$ is a set of integers so that $A(T)$ .", "Let $K$ be a linear ordering.", "For each $a \in K$ , let $c_a$ be a distinct new constant symbol.", "Let ${L}^{S,K}$ consists of $\dot{\in }$ , $\dot{E}$ , and new constant symbols $h(c_{a_1},...,c_{a_k})$ for each $h \in \mathsf {skolem}$ and increasing tuple $(a_1,...,a_k)$ in $K$ .", "An equivalence relation is defined on the new constants of ${L}^{S,K}$ by declaring $h_1(c_{a_1},...,c_{a_n}) \sim h_2(c_{a_1},...,c_{a_n})$ if and only if $\tilde{\phi }\in T$ where $\phi $ is the ${L}^S$ formula $h_1(c_1,...,c_n) = h_2(c_1,...,c_n)$ .", "The membership relation can be defined on Skolem constants by referring to $T$ in a similar manner.", "By taking quotients, one obtains a structure denoted $\Gamma (T,K)$ .", "$K$ embeds into $\Gamma (T,K)$ by the map $j(a) = [c_a]_\sim $ .", "Fact 4.14 Let $T \subseteq \omega $ be such that $A(T)$ .", "Then $\Gamma (T,\alpha )$ is wellfounded for all ordinals $\alpha $ if and only if $\Gamma (T,\alpha )$ is wellfounded for all $\alpha < \omega _1$ .", "Suppose $w \in {\mathrm {WO}}$ and $(\omega ,<_w)$ is its associated wellordering relation on $\omega $ .", "One can check that in this case one can find a structure on $\omega $ which is isomorphic to $\Gamma (T,\mathrm {ot}(w))$ .", "This structure is recursive in $T$ and $w$ and produced uniformly from $T$ and $w$ .", "In the case that $w \in {\mathrm {WO}}$ , one will let $\Gamma (T,w)$ denote this structure on $\omega $ recursive in $T$ and $w$ .", "Suppose $M$ is an $\dot{\in }$ structure on $\omega $ .", "For each $k \in \omega $ , there is a recursive function $\mathsf {ST}$ so that $\mathsf {ST}(M,k)$ is a structure on $\omega $ isomorphic to $(k,\dot{\in }^M)$ .", "Since $\Gamma (T,w)$ is considered as a structure on $\omega $ , for each $k \in \Gamma (T,w)$ such that $\Gamma (T,w) \models k$ is an ordinal, $\mathsf {ST}(\Gamma 
(T,w),k)$ gives (uniformly) a structure on $\omega $ isomorphic to the ordinal $k$ of $\Gamma (T,w)$ .", "Note that for each $\beta < \omega _1$ , the set of $(T,w)$ such that $\beta $ is an initial segment of the ordinals of $\Gamma (T,w)$ is a ${\mathbf {\Delta }_1^1}$ set (using any element of ${\mathrm {WO}}_\beta $ as a parameter).", "Consider formulas $\varphi (v,w)$ stating $v \in \mathrm {ON} \wedge \psi (v,w)$ where $\psi $ is some other ${L}$ -formula.", "After fixing an enumeration of such formulas, one has an enumeration $\langle t_n : n \in \omega \rangle $ of Skolem constants whose intention is to name ordinals.", "For each $\beta < \omega _1$ and $\gamma < \omega _1$ , the set of $(T,w,n)$ such that $\beta + \omega $ is an initial segment of the ordinals of $\Gamma (T,w)$ and $t_n^{\Gamma (T,w)}(\beta ) < \gamma $ is a ${\mathbf {\Delta }_1^1}$ set (using any elements of ${\mathrm {WO}}_\beta $ and ${\mathrm {WO}}_\gamma $ ).", "Similarly for $t_n^{\Gamma (T,w)}(\beta ) = \gamma $ .", "Let $\mathrm {B}(T)$ assert that for all $\alpha < \omega _1$ , $\Gamma (T,\alpha )$ is wellfounded.", "This is equivalent to $(\forall w)(w \in {\mathrm {WO}}\Rightarrow \Gamma (T,w) \text{ is wellfounded}).$ This is $\Pi _2^1$ .", "Definition 4.15 A set $T$ is a sharp of a real if and only if $A(T) \wedge B(T)$ .", "Thus the statement that $T$ is a sharp of a real is $\Pi _2^1$ .", "$T$ is the sharp of a real $x$ if and only if $T$ is a sharp of a real and for all $n \in \omega $ , $``n \in \dot{E}\" \in T$ if and only if $n \in x$ .", "It can be shown that if there is a $T$ such that $T$ is a sharp of $x$ , then this $T$ is unique.", "Therefore $x^\sharp $ will denote this unique $T$ for the real $x$ .", "Solovay showed that every subset of $\omega _1$ is constructible from a real under $\mathsf {AD}$ : Fact 4.16 (Solovay) Assume $\mathsf {ZF}+ \mathsf {AD}$ .", "There is a single formula $\theta (x,\alpha )$ so that for 
all $A \\subseteq \\omega _1$ , there is an $x \\in \\mathbb {R}$ so that $\\alpha \\in A \\Leftrightarrow L[x] \\models \\theta (x,\\alpha )$ .", "Let $A \\subseteq \\omega _1$ .", "Consider the game $\\begin{tikzpicture}\\node at (0,0) {S_A};\\end{tikzpicture}\\node at (1,.5) {I};\\node at (2,.5) {x(0)};\\node at (4,.5) {x(1)};\\node at (6,.5) {x(2)};\\node at (8,.5) {x(3)};\\node at (11,.5) {x};$ t (1,-.5) II; t (3,-.5) $y(1)$ ; t (5,-.5) $y(2)$ ; t (7,-.5) $y(3)$ ; t (9,-.5) $y(4)$ ; t (11,-.5) $y$ ; $$ Player 2 wins if the disjunction of the following holds (1) $x \\notin {\\mathrm {WO}}$ .", "(2) $x \\in {\\mathrm {WO}}\\Rightarrow (\\forall n)((y_n \\in {\\mathrm {WO}}) \\wedge (\\exists \\gamma \\ge \\mathrm {ot}(x))(\\lbrace \\mathrm {ot}(y_n) : n \\in \\omega \\rbrace = A \\cap \\gamma ))$ .", "(Here $y_n$ is defined by $y_n(k) = y(\\langle n,k\\rangle )$ .)", "Suppose Player 1 has a winning strategy $\\sigma $ .", "Then for all $y \\in \\mathbb {R}$ , $\\sigma (y) \\in {\\mathrm {WO}}$ .", "Then $E = \\lbrace v : (\\exists y)(v = \\sigma (y))\\rbrace $ is an ${\\mathbf {\\Sigma }_1^1}$ subset of ${\\mathrm {WO}}$ .", "Thus there is some $\\gamma < \\omega _1$ so that $\\mathrm {ot}(v) < \\gamma $ for all $v \\in E$ .", "Let $y \\in \\mathbb {R}$ be such that for all $n$ , $y_n \\in {\\mathrm {WO}}$ and $\\lbrace \\mathrm {ot}(y_n) : n \\in \\omega \\rbrace = A \\cap \\gamma $ .", "Then Player 1 using $\\sigma $ loses if Player 2 plays this $y$ .", "By $\\mathsf {AD}$ , Player 2 must have a winning strategy $\\tau $ .", "Note that $\\alpha \\in A$ if and only if $L[\\tau ]$ satisfies that $1_{\\mathrm {Coll}(\\omega ,\\alpha + 1)}$ forces that for all $w \\in {\\mathrm {WO}}_{\\alpha + 1}$ , there is some $n$ so that $\\mathrm {ot}(\\tau (w)_n) = \\alpha $ .", "The formula $\\theta $ is defined to assert this statement.", "The real associated to $A$ is of course $\\tau $ .", "Fact 4.17 Let $f : \\omega _1 \\rightarrow \\omega _1$ .", "There is some $x 
\\in \\mathbb {R}$ and an $n \\in \\omega $ so that for all $\\alpha < \\omega _1$ , $f(\\alpha ) = t_n^{\\Gamma (x^\\sharp ,\\beta )}(\\alpha )$ whenever $\\beta > \\alpha $ .", "Consider $A = \\lbrace \\langle \\alpha ,\\beta \\rangle : f(\\alpha ) = \\beta \\rbrace $ where $\\langle \\cdot , \\cdot \\rangle $ denotes some constructible pairing function.", "By Fact REF , there is some real $x$ so that $\\langle \\alpha ,\\beta \\rangle \\in A$ if and only if $L[x] \\models \\theta (x,\\langle \\alpha ,\\beta \\rangle )$ .", "Consider the formula $\\psi (\\beta ,\\alpha )$ defined to be $\\theta (\\dot{E},\\langle \\alpha ,\\beta \\rangle )$ .", "There is some $n$ so that $t_n$ is the Skolem function associated to this formula.", "Fix two ordinals $\\alpha < \\beta $ .", "Since $\\Gamma (x^\\sharp ,\\beta )$ is of the form $L_{\\beta ^{\\prime }}[x]$ for some $\\beta ^{\\prime } \\ge \\beta $ and $\\Gamma (x^\\sharp ,\\beta ) \\prec _{\\omega } L[x]$ , $t_n^{\\Gamma (x^\\sharp ,\\beta )}(\\alpha ) = f(\\alpha )$ .", "It should be noted that coding functions by simply coding their graphs as subsets of $\\omega _1$ is generally not good enough.", "For each $\\alpha $ , if one had to search for the corresponding $\\beta $ which is the value of $f(\\alpha )$ , the coding system would be too complex.", "In this case, one uses the Skolem term to output the value of $f$ when given $\\alpha $ .", "This needs to be handled in the proof of Kechris as well.", "See Remark REF .", "There it is handled by a modified version of the Solovay game used above.", "Theorem 4.18 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "$\\omega _1$ is $\\omega _1$ -reasonable.", "If $x \\in \\mathbb {R}= {{}^\\omega \\omega }$ , let $\\mathsf {cut}(x) \\in {{}^\\omega \\omega }$ be defined by $\\mathsf {cut}(x)(k) = x(k + 1)$ .", "One will now define a good coding system for ${}^{\\omega _1}\\omega _1$ .", "The pointclass of the coding system is ${\\mathbf {\\Sigma }_1^1}$ .", "For each $x 
\\in \\mathbb {R}$ , let $\\mathsf {decode}(x)(\\alpha ,\\beta )$ hold if and only if $A(\\mathsf {cut}(x)) \\wedge t_{x(0)}^{\\Gamma (\\mathsf {cut}(x),\\alpha + \\omega )}(\\alpha ) = \\beta $ .", "Define $\\mathsf {GC}_{\\beta ,\\gamma }$ by $x \\in \\mathsf {GC}_{\\beta ,\\gamma }$ if and only if $\\mathsf {decode}(x)(\\beta ,\\gamma )$ holds.", "Suppose $f : \\omega _1 \\rightarrow \\omega _1$ .", "Let $z$ be the real and $n$ be the natural number obtained by applying Fact REF to $f$ .", "Let $x \\in \\mathbb {R}$ be such that $x(0) = n$ and $\\mathsf {cut}(x) = z^\\sharp $ .", "Then $\\mathsf {decode}(x) = f$ .", "Such an $x$ will be called a sharp code for $f$ .", "For $\\beta < \\omega _1$ and $\\gamma < \\omega _1$ , note that $\\mathsf {GC}_{\\beta ,\\gamma }$ consists of those $x \\in \\mathbb {R}$ such that the following holds (i) $A(\\mathsf {cut}(x))$ (ii) $\\beta $ is in the wellfounded part of $\\Gamma (\\mathsf {cut}(x),\\beta + \\omega )$ (iii) $t_{x(0)}^{\\Gamma (\\mathsf {cut}(x),\\beta + \\omega )}(\\beta ) = \\gamma $ .", "By the discussion earlier, $\\mathsf {GC}_{\\beta ,\\gamma }$ is ${\\mathbf {\\Delta }_1^1}$ .", "Note that since $t_{x(0)}$ is a Skolem function, $t_{x(0)}^{\\Gamma (\\mathsf {cut}(x),\\beta + \\omega )}$ is a function on its domain.", "Now suppose $A \\subseteq \\mathsf {GC}_{\\beta }$ is ${\\mathbf {\\Sigma }_1^1}$ .", "Let $E = \\lbrace v \\in \\mathbb {R}: (\\exists x)(x \\in A \\wedge v = \\mathsf {ST}(\\Gamma (\\mathsf {cut}(x),\\beta + \\omega ),t_{x(0)}^{\\Gamma (\\mathsf {cut}(x),\\beta + \\omega )}))\\rbrace $ .", "Then $E$ is a ${\\mathbf {\\Sigma }_1^1}$ subset of ${\\mathrm {WO}}$ .", "By the boundedness lemma, there is some $\\delta < \\omega _1$ so that for all $v \\in E$ , $\\mathrm {ot}(v) < \\delta $ .", "Thus $A \\subseteq \\bigcup _{\\gamma < \\delta } \\mathsf {GC}_{\\beta ,\\gamma }$ .", "It has been shown that $({\\mathbf {\\Sigma }_1^1},\\mathsf {decode},\\mathsf {GC}_{\\beta ,\\gamma } : \\beta 
< \\omega _1,\\gamma < \\omega _1)$ is a good coding system for ${}^{\\omega _1}\\omega _1$ .", "Next, one will give the argument of Kechris that $\\omega _1$ is a strong partition cardinal.", "This proof uses category and the Kechris-Woodin generic coding idea.", "However at $\\omega _1$ , the generic coding function is essentially trivial and exists without even $\\mathsf {AD}$ .", "For many other purposes, the generic coding function is very useful.", "Note that in Martin's proof, the indiscernibility models are used to make complexity computations.", "In the argument of Kechris, category quantifiers will be used to ensure sets have the correct complexity.", "First some results on the Banach-Mazur games.", "See [16] Section 21.C and 21.D for the proofs of these results: Fact 4.19 (Banach-Mazur Game) Assume $\\mathsf {ZF}$ .", "Let $A \\subseteq {{}^\\omega \\omega }$ .", "Define $G^*(A)$ by $\\begin{array}{cccccccc}\\mathrm {I} & s_0 & & s_2 & & s_4 & & \\cdots \\\\ \\mathrm {II} & & s_1 & & s_3 & & s_5 & \\cdots \\end{array}$ Player 1 and Player 2 alternately play nonempty strings in ${}^{<\\omega }\\omega $ .", "Let $x = s_0{}^\\frown s_1{}^\\frown s_2{}^\\frown \\cdots $ be the concatenation of the moves.", "Player 2 wins $G^*_A$ if and only if $x \\in A$ .", "(1) $A$ is comeager if and only if Player 2 has a winning strategy in $G^*_A$ .", "(2) There is some $s \\in {{}^{<\\omega }\\omega }$ so that ${{}^\\omega \\omega }\\setminus A$ is comeager in $N_s$ if and only if Player 1 has a winning strategy in $G_A^*$ .", "Fact 4.20 (Unfolded Banach-Mazur Game) Assume $\\mathsf {ZF}$ .", "Let $A \\subseteq {{}^\\omega \\omega }$ and $B \\subseteq {{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be such that for all $x \\in A$ , there exists a $y \\in {{}^\\omega \\omega }$ so that $B(x,y)$ .", "Let $G^*_{A,B}$ be the following game $\\begin{array}{cccccccc}\\mathrm {I} & s_0 & & s_2 & & s_4 & & \\cdots \\\\ \\mathrm {II} & & s_1,y(0) & & s_3,y(1) & & s_5,y(2) & \\cdots \\end{array}$ Player 1 plays nonempty strings in ${}^{<\\omega }\\omega $ .", "Player 2 plays nonempty strings in ${}^{<\\omega }\\omega $ and, with each move, an element of $\\omega $ .", "Let $x = s_0{}^\\frown s_1{}^\\frown s_2{}^\\frown \\cdots $ be the concatenation of the strings played by both players.", "Let $y$ be the real produced by the extra elements of $\\omega $ played by Player 2.", "Player 2 wins $G^*_{A,B}$ if and only if $x \\in A$ and $(x,y) \\in B$ .", "If Player 2 wins $G_{A,B}^*$ , then Player 2 wins $G^*_A$ .", "If Player 1 wins $G_{A,B}^*$ , then Player 1 wins $G^*_A$ .", "Corollary 4.21 Assume $\\mathsf {ZF}$ .", "Let $A \\subseteq {{}^\\omega \\omega }$ be a ${\\mathbf {\\Sigma }_1^1}$ set.", "Let $B \\subseteq {{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be a $\\mathbf {\\Pi }_1^0$ set so that $A = \\pi _1[B]$ .", "$A$ is comeager if and only if Player 2 has a winning strategy in $G^*_{A,B}$ .", "Note that if $A = \\pi _1[B]$ , then the winning condition for Player 2 in $G^*_{A,B}$ is equivalent to merely $(x,y) \\in B$ .", "Thus in this case $G^*_{A,B}$ is a closed game.", "Thus the determinacy of such games holds under $\\mathsf {ZF}$ .", "Now the result follows from Facts REF and REF .", "If $\\alpha < \\omega _1$ , one can define a topology on ${}^\\omega \\alpha $ by declaring the basic open sets to be sets of the form $N_s^\\alpha = \\lbrace f \\in {}^\\omega \\alpha : f \\supseteq s\\rbrace $ , where $s \\in {}^{<\\omega }\\alpha $ .", "Using this topology, one can define the notions of comeagerness and meagerness for ${}^\\omega \\alpha $ .", "As $\\alpha $ is countable, ${}^\\omega \\alpha $ with this topology is homeomorphic to ${{}^\\omega \\omega }$ .", "Observe that the set $\\mathrm {surj}_\\alpha = \\lbrace f \\in {}^\\omega \\alpha : f \\text{ is a surjection}\\rbrace $ is a comeager subset of ${}^\\omega \\alpha $ .", "If $A \\subseteq \\mathbb {R}\\times {}^\\omega \\alpha $ , then one writes $(\\forall ^*_\\alpha f)A(x,f)$ as an 
abbreviation for the statement that $A(x,f)$ holds for comeagerly many $f$ in ${}^\\omega \\alpha $ .", "Corollary 4.22 Assume $\\mathsf {ZF}$ .", "Let $A \\subseteq {{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be a ${\\mathbf {\\Sigma }_1^1}$ set.", "Then $A_0(x) \\Leftrightarrow (\\forall _\\omega ^* y)A(x,y)$ is ${\\mathbf {\\Sigma }_1^1}$ and $A_1(x) \\Leftrightarrow (\\forall ^*_\\omega y)\\lnot A(x,y)$ is ${\\mathbf {\\Pi }_1^1}$ .", "Let $B \\subseteq {{}^\\omega \\omega }\\times {{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be such that $A = \\lbrace (x,y) : (\\exists z)((x,y,z) \\in B)\\rbrace $ where $B \\in \\mathbf {\\Pi }_1^0$ .", "For each $x \\in {{}^\\omega \\omega }$ , let $B_x = \\lbrace (y,z) : (x,y,z) \\in B\\rbrace $ and $A_x = \\lbrace y : (x,y) \\in A\\rbrace $ .", "Note that $B_x \\in \\mathbf {\\Pi }_1^0$ and $\\pi _1[B_x] = A_x$ .", "If $\\sigma $ is a Player 1 strategy and $\\tau $ is a Player 2 strategy for a game of the form $G^*_{C,D}$ for some appropriate $C$ and $D$ , then let $x_{\\sigma * \\tau }$ be the real produced by the concatenation of the finite strings played by each player.", "Let $y_{\\sigma * \\tau }$ be the auxiliary sequence produced by the moves of $\\tau $ .", "Note that $A_0(x)$ if and only if Player 2 has a winning strategy in $G^*_{A_x,B_x}$ if and only if $(\\exists \\tau )(\\forall \\sigma )((x_{\\sigma * \\tau }, y_{\\sigma *\\tau }) \\in B_x)$ by Fact REF .", "Since $B_x \\in \\mathbf {\\Pi }_1^0$ and $\\mathbf {\\Pi }_1^0$ is closed under $\\forall ^\\mathbb {R}$ , the latter expression is ${\\mathbf {\\Sigma }_1^1}$ .", "For $s \\in {{}^{<\\omega }\\omega }$ , let $\\Phi _s : N_s \\rightarrow {{}^\\omega \\omega }$ be the canonical homeomorphism between $N_s$ and ${{}^\\omega \\omega }$ .", "Let $A^s \\subseteq {{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be defined by $(x,y) \\in A^s \\Leftrightarrow (x,\\Phi _s^{-1}(y)) \\in A$ .", "Let $B^s \\subseteq {{}^\\omega \\omega }\\times 
{{}^\\omega \\omega }\\times {{}^\\omega \\omega }$ be defined by $(x,y,z) \\in B^s$ if and only if $(x,\\Phi _s^{-1}(y),z) \\in B$ .", "For the second statement, note that $\\lnot A_1(x)$ if and only if $\\lnot ((\\forall ^*_\\omega y)\\lnot A(x,y))$ .", "Since ${\\mathbf {\\Sigma }_1^1}$ sets have the Baire property, the latter holds if and only if there exists an $s \\in {{}^{<\\omega }\\omega }$ so that $A_x$ is comeager in $N_s$ .", "Now there exists an $s \\in {{}^{<\\omega }\\omega }$ so that $A_x$ is comeager in $N_s$ if and only if there exists an $s \\in {{}^{<\\omega }\\omega }$ so that $\\Phi _s[A_x \\cap N_s]$ is comeager if and only if there exists an $s \\in {{}^{<\\omega }\\omega }$ so that Player 2 has a winning strategy in $G^*_{(A^s)_x,(B^s)_x}$ .", "As before, the last expression can be checked to be ${\\mathbf {\\Sigma }_1^1}$ .", "Hence $A_1$ is ${\\mathbf {\\Pi }_1^1}$ .", "The following is the (essentially trivial) version of the Kechris-Woodin generic coding function for $\\omega _1$ .", "(See [18].)", "Fact 4.23 There is a continuous function $G : {}^{\\omega }\\omega _1 \\rightarrow {\\mathrm {WO}}$ so that for all $\\ell \\in {}^\\omega \\omega _1$ such that $\\ell (0) = \\lbrace \\ell (n + 1) : n \\in \\omega \\rbrace $ , $G(\\ell ) \\in {\\mathrm {WO}}$ and $\\mathrm {ot}(G(\\ell )) = \\ell (0)$ .", "For each $\\ell $ as above, let $A_\\ell = \\lbrace n \\in \\omega \\setminus \\lbrace 0\\rbrace : (\\forall m)(\\ell (n) = \\ell (m) \\Rightarrow n \\le m)\\rbrace $ .", "Let $G(\\ell )$ be the wellordering with domain $A_\\ell $ so that $m <_{G(\\ell )} n$ if and only if $\\ell (m) < \\ell (n)$ .", "$G(\\ell ) \\in {\\mathrm {WO}}$ and $\\mathrm {ot}(G(\\ell )) = \\ell (0)$ .", "This function is continuous in the sense that for any $n$ , $G(\\ell )\\upharpoonright n$ is determined by $\\ell \\upharpoonright m$ for some $m$ .", "Fact 4.24 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $f : \\omega _1 \\rightarrow \\omega _1$ .", "There is a Player 2 strategy $\\tau $ so that for all 
$v \\in {\\mathrm {WO}}$ , $\\lbrace (\\alpha ,\\beta ) : (\\exists n)((\\tau (v)_n)_0 \\in {\\mathrm {WO}}_\\alpha \\wedge (\\tau (v)_n)_1 \\in {\\mathrm {WO}}_\\beta )\\rbrace = f \\upharpoonright \\gamma $ for some $\\gamma > \\mathrm {ot}(v)$ .", "Here $\\tau (v)$ is $(v * \\tau )_\\mathrm {odd}$ , i.e.", "the real produced by Player 2 when played against Player 1 playing the bits of $v$ .", "Also $f \\upharpoonright \\gamma = \\lbrace (\\alpha ,\\beta ) \\in f : \\alpha < \\gamma \\rbrace $ .", "Recall that if $x \\in \\mathbb {R}$ , $x_n$ is defined by $x_n(k) = x(\\langle n,k\\rangle )$ .", "This is the $n^\\text{th}$ section of $x$ .", "This result states that when given $v \\in {\\mathrm {WO}}$ , the integer sections of $\\tau (v)$ code $f$ up to some ordinal $\\gamma $ greater than $\\mathrm {ot}(v)$ .", "Consider the game $S_f$ defined by $\\begin{array}{ccccccccccc}\\mathrm {I} & v(0) & & v(1) & & v(2) & & v(3) & & \\cdots & v \\\\ \\mathrm {II} & & r(0) & & r(1) & & r(2) & & r(3) & \\cdots & r\\end{array}$ Player 2 wins if and only if $v \\in {\\mathrm {WO}}$ implies that $\\lbrace (\\alpha ,\\beta ) : (\\exists n)((r_n)_0 \\in {\\mathrm {WO}}_\\alpha \\wedge (r_n)_1 \\in {\\mathrm {WO}}_\\beta )\\rbrace = f \\upharpoonright \\gamma $ for some $\\gamma > \\mathrm {ot}(v)$ .", "By essentially the same boundedness argument as in Fact REF , Player 2 must have a winning strategy in this game.", "Remark 4.25 In the above statement, it is very important that some section of $\\tau (v)$ contains a code for the value $f(\\alpha )$ .", "To search for $f(\\alpha )$ , which could be quite large in comparison to $\\alpha $ , would push the complexity of $\\mathsf {GC}_{\\alpha ,\\beta }$ beyond ${\\mathbf {\\Delta }_1^1}$ .", "Instead, a particular winning strategy $\\tau $ will take a code $v$ for $\\alpha $ and output $\\tau (v)$ which magically contains codes for $f(\\alpha )$ among its integer sections, $\\lbrace (\\tau (v)_n)_1 : n \\in \\omega \\rbrace $ .", "Theorem 4.26 
(Kechris' proof) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "$\\omega _1$ is $\\omega _1$ -reasonable.", "One will define a good coding system for ${}^{\\omega _1}\\omega _1$ witnessing the $\\omega _1$ -reasonableness of $\\omega _1$ .", "The associated pointclass is ${\\mathbf {\\Sigma }_1^1}$ .", "Let $\\mathsf {add}: \\omega _1 \\times {}^\\omega \\omega _1 \\rightarrow {}^\\omega \\omega _1$ be defined by $\\mathsf {add}(\\alpha ,g)(n) = {\\left\\lbrace \\begin{array}{ll}\\alpha & \\quad n = 0 \\\\g(n - 1) & \\quad n > 0\\end{array}\\right.", "}$ $\\mathsf {add}(\\alpha ,g)$ simply inserts $\\alpha $ at the beginning of $g$ .", "In some fixed recursive manner, every real $x$ codes a Player 2 strategy $\\tau _x$ .", "Now fix $\\alpha ,\\beta \\in \\omega _1$ .", "Define the formula $\\varphi ^\\alpha _{\\beta }(x,z)$ to state that $(\\exists n)([(\\tau _x(z)_n)_0 \\in {\\mathrm {WO}}_\\alpha \\wedge (\\tau _x(z)_n)_1 \\in {\\mathrm {WO}}_\\beta ] \\wedge (\\forall m < n)((\\tau _x(z)_m)_0 \\notin {\\mathrm {WO}}_\\alpha )).$ This formula defines a ${\\mathbf {\\Delta }_1^1}$ relation using elements of ${\\mathrm {WO}}_\\alpha $ and ${\\mathrm {WO}}_\\beta $ as parameters.", "Let $\\phi _\\beta ^\\alpha (x)$ be the statement $(\\forall ^*_\\alpha g)\\varphi ^\\alpha _\\beta (x,G(\\mathsf {add}(\\alpha ,g)))$ , where $G$ is the generic coding function of Fact REF .", "Note that since $\\mathrm {surj}_\\alpha $ is comeager in ${}^\\omega \\alpha $ , $(\\forall ^*_\\alpha g)(G(\\mathsf {add}(\\alpha ,g)) \\in {\\mathrm {WO}}_\\alpha )$ .", "Now fix a bijection $B : \\omega \\rightarrow \\alpha $ .", "Let $T : {{}^\\omega \\omega }\\rightarrow {}^\\omega \\alpha $ be defined by $T(r) = B\\circ r$ .", "Define $\\tilde{G} : {{}^\\omega \\omega }\\rightarrow {\\mathrm {WO}}$ by $\\tilde{G}(r) = G(\\mathsf {add}(\\alpha ,T(r)))$ .", "Now $\\tilde{G}$ is a continuous function with the property that for comeagerly many $r \\in {{}^\\omega \\omega }$ (in the usual sense of 
comeager), $\\tilde{G}(r) \\in {\\mathrm {WO}}_\\alpha $ .", "$\\phi ^\\alpha _\\beta (x)$ is equivalent to $(\\forall ^*_\\omega r)\\varphi ^\\alpha _\\beta (x,\\tilde{G}(r))$ .", "Since $\\varphi ^\\alpha _\\beta $ defines a ${\\mathbf {\\Delta }_1^1}$ relation (in parameters from ${\\mathrm {WO}}_\\alpha $ and ${\\mathrm {WO}}_\\beta $ ), the collection of $(x,r)$ satisfying $\\varphi ^\\alpha _\\beta (x,\\tilde{G}(r))$ is ${\\mathbf {\\Delta }_1^1}$ using parameters from ${\\mathrm {WO}}_\\alpha $ , ${\\mathrm {WO}}_\\beta $ , and a parameter coding the continuous function $\\tilde{G}$ .", "Now the set defined by $\\phi ^\\alpha _\\beta (x)$ is ${\\mathbf {\\Delta }_1^1}$ by Corollary REF .", "(Note this is not done uniformly in the $\\alpha $ and $\\beta $ .)", "If $x \\in \\mathbb {R}$ , define $\\mathsf {decode}(x)(\\alpha ,\\beta )$ to hold if and only if $\\phi _\\beta ^\\alpha (x)$ .", "Define $\\mathsf {GC}_{\\alpha ,\\beta } = \\lbrace x \\in \\mathbb {R}: \\phi _\\beta ^\\alpha (x)\\rbrace $ .", "$\\mathsf {GC}_{\\alpha ,\\beta }$ is ${\\mathbf {\\Delta }_1^1}$ .", "For any $f : \\omega _1 \\rightarrow \\omega _1$ , Fact REF states that there is some $x$ so that $\\mathsf {decode}(x) = f$ .", "Fix $\\alpha < \\omega _1$ .", "For each $x \\in {{}^\\omega \\omega }$ , let $\\psi _0(x,z)$ assert that $(\\exists n)((\\tau _x(z)_n)_0 \\in {\\mathrm {WO}}_\\alpha )$ .", "$\\psi _0(x,z)$ is ${\\mathbf {\\Delta }_1^1}$ using any code of $\\alpha $ as a parameter.", "Let $\\psi _1(x,y,z)$ be the conjunction of the following statements (1) $\\psi _0(x,z) \\wedge \\psi _0(y,z)$ .", "(2) There exists $w_0,w_1 \\in \\mathbb {R}$ , $n_0,n_1 \\in \\omega $ so that (2a) $(\\tau _x(z)_{n_0})_0 \\in {\\mathrm {WO}}_\\alpha $ , $(\\tau _x(z)_{n_0})_1 = w_0$ , and for all $m < n_0$ , $(\\tau _x(z)_m)_0 \\notin {\\mathrm {WO}}_\\alpha $ .", "(2b) $(\\tau _y(z)_{n_1})_0 \\in {\\mathrm {WO}}_\\alpha $ , $(\\tau _y(z)_{n_1})_1 = w_1$ , and for all $m < n_1$ , $(\\tau _y(z)_m)_0 
\\notin {\\mathrm {WO}}_\\alpha $ .", "(2c) $w_0 <_{\\Sigma _1^1} w_1$ , where $<_{\\Sigma _1^1}$ is the $\\Sigma _1^1$ relation witnessing that the ordertype function is a $\\Pi _1^1$ -norm.", "Observe that $\\psi _1(x,y,z)$ is ${\\mathbf {\\Sigma }_1^1}$ .", "Let $A \\subseteq \\mathsf {GC}_\\alpha $ be ${\\mathbf {\\Sigma }_1^1}$ .", "Define a relation $\\prec $ on ${{}^\\omega \\omega }$ by $x \\prec y$ if and only if $x \\in A \\wedge y \\in A \\wedge (\\forall ^*_\\alpha g)\\psi _1(x,y,G(\\mathsf {add}(\\alpha ,g))).$ By Corollary REF , $\\prec $ is a ${\\mathbf {\\Sigma }_1^1}$ relation on ${{}^\\omega \\omega }$ .", "By checking the various definitions, one can see that for all $x,y \\in A$ , $\\mathsf {decode}(x)(\\alpha ) < \\mathsf {decode}(y)(\\alpha )$ if and only if $x \\prec y$ .", "Since $\\prec $ is a ${\\mathbf {\\Sigma }_1^1}$ relation, it is an $\\omega $ -Suslin relation.", "By the Kunen-Martin theorem ([17] Section 7), the rank of $\\prec $ must be less than $\\omega _1$ .", "Since $\\omega _1$ is regular, there is some $\\gamma < \\omega _1$ so that $A \\subseteq \\bigcup _{\\beta < \\gamma } \\mathsf {GC}_{\\alpha ,\\beta }$ .", "(Some presentations of the Kunen-Martin theorem use $\\mathsf {DC}_\\mathbb {R}$ , but with some care, one can remove the use of $\\mathsf {DC}_\\mathbb {R}$ .", "Another approach in the case of $\\omega $ -Suslin relations is to observe that if $K$ denotes a tree witnessing that a relation is $\\omega $ -Suslin, one can prove the Kunen-Martin theorem in $L[K] \\models \\mathsf {AC}$ , argue in $L[K]$ that the Kunen-Martin tree is wellfounded there and hence in the real world, and then show that this tree still works in the real world $V$ .", "Alternatively, if one assumes Kechris's result that $L(\\mathbb {R}) \\models \\mathsf {AD}$ implies $L(\\mathbb {R}) \\models \\mathsf {DC}_\\mathbb {R}$ , then one can absorb this problem into $L(\\mathbb {R})$ and apply the Kunen-Martin theorem in $L(\\mathbb {R})$ .)", "It has been 
shown that $({\\mathbf {\\Sigma }_1^1}, \\mathsf {decode},\\mathsf {GC}_{\\beta ,\\gamma } : \\beta < \\omega _1, \\gamma < \\omega _1)$ is a good coding system for ${}^{\\omega _1}\\omega _1$ .", "Corollary 4.27 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "$\\omega _1 \\rightarrow _* (\\omega _1)^{\\omega _1}_2$ .", "Definition 4.28 Let $\\lambda \\le \\omega _1$ .", "Let $\\mu _{\\omega _1}^\\lambda $ consist of those subsets $X$ of $[\\omega _1]^{\\lambda }$ which contain a set of the form $[C]^{\\lambda }_*$ where $C$ is a club subset of $\\omega _1$ .", "Each $\\mu _{\\omega _1}^\\lambda $ is a countably complete ultrafilter.", "Corollary 4.29 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\lambda \\le \\omega _1$ .", "$\\prod _{[\\omega _1]^{\\lambda }}\\omega _1 \\slash \\mu _{\\omega _1}^\\lambda = (\\prod _{[\\omega _1]^{\\lambda }}\\omega _1 \\slash \\mu _{\\omega _1}^\\lambda )^{L(\\mathbb {R})}$ .", "Since $L(\\mathbb {R}) \\models \\mathsf {DC}$ by a result of Kechris [15], $\\prod _{[\\omega _1]^{\\lambda }}\\omega _1 \\slash \\mu _{\\omega _1}^\\lambda $ is an ordinal.", "Note that the good coding system for ${}^\\lambda \\omega _1$ constructed above belongs to $L(\\mathbb {R})$ .", "Now apply Theorem REF .", "Remark 4.30 This section presented two proofs of the strong partition property for $\\omega _1$ .", "The original proof of Martin uses indiscernibility.", "Many early results were proved using this indiscernibility idea.", "For example, Kunen showed in $\\mathsf {AD}+ \\mathsf {DC}$ that $\\delta _3^1$ is a weak partition cardinal and $\\delta _3^1 = \\aleph _{\\omega + 1}$ .", "The second proof of Kechris is perhaps the simplest proof using classical descriptive set theoretic ideas.", "The ideas behind the Kechris-Woodin generic coding function are very useful in various different settings.", "However, no proof in the flavor of these two arguments is known to establish the strong partition property at $\\delta _3^1$ .", "There is another 
proof of the strong partition property for $\\omega _1$ due to Jackson.", "It uses ideas such as the Kunen tree and an analysis, due to Kunen, of all the measures on $\\omega _1$ .", "An exposition can be found in [10], [12], and [11].", "More generally, Jackson developed the theory of descriptions.", "This theory produces the only known proof that $\\delta _{3}^1$ and in fact all $\\delta _{2n + 1}^1$ are strong partition cardinals under $\\mathsf {AD}+ \\mathsf {DC}$ .", "Jackson has also computed the identities of these cardinals.", "For example, $\\delta _5^1 = \\aleph _{{\\omega ^{\\omega ^\\omega }} + 1}$ and in fact there is a general formula for computing $\\delta _{2n + 1}^1$ .", "See [11] for more information.", "It should be noted that Jackson's proof that $\\omega _1$ is a strong partition cardinal, Kunen's result that $\\delta _3^1$ is a weak partition cardinal, and all of Jackson's results mentioned above using description theory are proved using $\\mathsf {DC}$ .", "The original Martin proof and the Kechris proof of the strong partition property of $\\omega _1$ are within $\\mathsf {ZF}+ \\mathsf {AD}$ ." 
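The combinatorial heart of the generic coding function of Fact 4.23 is entirely finitary, and its finite analogue can be computed directly. The following Python sketch is illustrative only: a Python set stands in for the ordinal $\ell(0)$, natural numbers stand in for the countable ordinals $\ell(n+1)$, and the length of the output stands in for the order type of $G(\ell)$; the names are this sketch's own, not from the text.

```python
def generic_code(ell):
    """Finite analogue of the generic coding function G of Fact 4.23.

    ell[0] plays the role of the ordinal ell(0) and is represented as a
    Python set equal to {ell[n] : n >= 1}.  The returned list enumerates
    A_ell (the indices of first occurrences of each value) in the order
    <_{G(ell)}, where m <_{G(ell)} n iff ell[m] < ell[n].
    """
    assert ell[0] == set(ell[1:])
    # A_ell: indices n >= 1 at which the value ell[n] appears for the first time
    A = [n for n in range(1, len(ell))
         if all(n <= m for m in range(1, len(ell)) if ell[m] == ell[n])]
    # list A according to the wellordering m < n iff ell[m] < ell[n]
    return sorted(A, key=lambda n: ell[n])

# values 1, 0, 1, 2, 0 after the "ordinal" {0, 1, 2}:
# first occurrences at indices 1, 2, 4; ordered by value this is 2, 1, 4
print(generic_code([{0, 1, 2}, 1, 0, 1, 2, 0]))  # [2, 1, 4]
```

The length of the result (here 3) equals the number of distinct values, mirroring $\mathrm{ot}(G(\ell)) = \ell(0)$; the continuity of $G$ corresponds to the fact that each entry of the output is determined by a finite prefix of `ell`.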
], [ "Kunen Functions and Partition Properties at $\\omega _2$", "Fact 5.1 Let $\\mu $ be a normal measure on a cardinal $\\kappa $ .", "Let $A \\in \\mu $ and let $\\mathsf {enum}_A : \\kappa \\rightarrow A$ be the increasing enumeration of $A$ .", "Then $\\lbrace \\alpha \\in A : \\mathsf {enum}_A(\\alpha ) = \\alpha \\rbrace \\in \\mu $ .", "If $f : \\kappa \\rightarrow \\kappa $ and $X \\subseteq f[\\kappa ]$ are such that $\\lbrace \\alpha < \\kappa : f(\\alpha ) \\in X\\rbrace \\in \\mu $ , then $\\mathsf {enum}_X =_\\mu f$ .", "Suppose not.", "Then $B = \\lbrace \\alpha \\in A : \\alpha < \\mathsf {enum}_A(\\alpha )\\rbrace \\in \\mu $ .", "Since $\\mathsf {enum}_A$ is an order-preserving function, $B = \\lbrace \\alpha \\in A : \\mathsf {enum}_A^{-1}(\\alpha ) < \\alpha \\rbrace $ .", "By normality, there is some $C \\subseteq B$ and $\\gamma < \\kappa $ so that for all $\\alpha \\in C$ , $\\mathsf {enum}_A^{-1}(\\alpha ) = \\gamma $ .", "This contradicts the injectivity of $\\mathsf {enum}_A$ .", "For the second statement: Let $A = \\lbrace \\alpha : f(\\alpha ) \\in X\\rbrace $ .", "Note that $f \\circ \\mathsf {enum}_A = \\mathsf {enum}_X$ .", "Let $B \\subseteq A$ with $B \\in \\mu $ be such that $\\mathsf {enum}_A(\\alpha ) = \\alpha $ for all $\\alpha \\in B$ .", "Then for all $\\alpha \\in B$ , $\\mathsf {enum}_X(\\alpha ) = f(\\mathsf {enum}_A(\\alpha )) = f(\\alpha )$ .", "Hence $\\mathsf {enum}_X =_\\mu f$ .", "The next result states that an ultrapower of a strong partition cardinal by a measure is a cardinal as long as that ultrapower is wellfounded.", "Fact 5.2 (Martin) Let $\\kappa $ be a strong partition cardinal and $\\mu $ be a measure on $\\kappa $ .", "If ${}^\\kappa \\kappa \\slash \\mu $ is wellfounded, then ${}^\\kappa \\kappa \\slash \\mu $ is a cardinal.", "Assuming $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {DC}_\\mathbb {R}$ , if $\\kappa < \\Theta $ is a strong partition cardinal and $\\mu $ is a measure on $\\kappa $ , then 
${}^\\kappa \\kappa \\slash \\mu $ is a cardinal.", "Suppose there is an $h \\in {}^\\kappa \\kappa $ and an injection $\\Phi : {}^\\kappa \\kappa \\slash \\mu \\rightarrow \\prod _{\\kappa }h \\slash \\mu $ , where $\\prod _{\\kappa }h \\slash \\mu $ is the collection of $[f]_\\mu $ such that $f <_\\mu h$ .", "Note that $\\prod _{\\kappa } h \\slash \\mu $ is the initial segment of ${}^\\kappa \\kappa \\slash \\mu $ determined by $[h]_\\mu $ .", "Let $\\mathcal {T} = (\\kappa \\times 2, \\sqsubset )$ denote the lexicographic ordering on $\\kappa \\times 2$ .", "If $F : \\kappa \\times 2 \\rightarrow \\kappa $ is a $\\mathcal {T}$ -increasing function, then let $f_0, f_1: \\kappa \\rightarrow \\kappa $ be defined by $f_0(\\alpha ) = F(\\alpha ,0)$ and $f_1(\\alpha ) = F(\\alpha ,1)$ .", "Note that $f_0$ and $f_1$ are increasing functions.", "Define $P : [\\kappa ]^\\mathcal {T} \\rightarrow 2$ by $P(F) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad \\Phi ([f_0]_\\mu ) < \\Phi ([f_1]_\\mu ) \\\\1 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Since $\\mathrm {ot}(\\mathcal {T}) = \\kappa $ , $\\kappa \\rightarrow (\\kappa )^\\kappa _2$ implies that there is an $E \\subseteq \\kappa $ with $|E| = \\kappa $ which is homogeneous for this partition.", "Suppose $E$ is homogeneous for $P$ taking value 1.", "For each $n \\in \\omega $ , define $g_n : \\kappa \\rightarrow \\kappa $ by $g_n(\\alpha ) = \\mathsf {enum}_E(\\omega \\cdot \\alpha + n)$ .", "Note that if $m < n$ , then $g_m <_\\mu g_n$ .", "For any $m<n$ , define $G^{m,n} : \\mathcal {T} \\rightarrow \\kappa $ by $G^{m,n}(\\alpha ,0) = g_m(\\alpha )$ and $G^{m,n}(\\alpha ,1) = g_n(\\alpha )$ .", "Note that $G^{m,n}$ is $\\mathcal {T}$ -increasing and $G^{m,n} \\in [\\kappa ]^{\\mathcal {T}}$ .", "Since $P(G^{m,n}) = 1$ , one has that $\\Phi ([g_n]_\\mu ) = \\Phi ([G^{m,n}_1]_{\\mu }) \\le \\Phi ([G^{m,n}_0]_\\mu ) = \\Phi ([g_m]_\\mu )$ .", "Since $\\Phi $ is an injection, one must actually have $\\Phi 
([g_n]_\\mu ) < \\Phi ([g_m]_\\mu )$ .", "Thus $\\langle \\Phi ([g_k]_\\mu ) : k \\in \\omega \\rangle $ is an infinite decreasing sequence in ${}^\\kappa \\kappa \\slash \\mu $ .", "This contradicts the assumption that ${}^\\kappa \\kappa \\slash \\mu $ is wellfounded.", "Thus $E$ must be homogeneous for $P$ taking value 0.", "Let $\\ell \\in {}^\\kappa \\kappa $ be such that $[\\ell ]_\\mu > [h]_\\mu $ and $\\ell (\\alpha ) > 0$ for all $\\alpha \\in \\kappa $ .", "Let $U = \\lbrace (\\alpha ,\\beta ) \\in \\kappa \\times \\kappa : \\beta < \\ell (\\alpha )\\rbrace $ .", "Let $\\mathcal {U}$ denote $(U,\\prec )$ where $\\prec $ denotes the lexicographic ordering of $U$ .", "Again, $\\mathrm {ot}(\\mathcal {U}) = \\kappa $ .", "Let $F = \\lbrace \\mathsf {enum}_E(\\omega \\cdot \\alpha ) : \\alpha < \\kappa \\rbrace $ .", "Note that $|F| = \\kappa $ .", "Pick some $K : \\mathcal {U} \\rightarrow F$ which is order-preserving.", "For any $f <_\\mu \\ell $ , define $k_f : \\kappa \\rightarrow \\kappa $ by $k_f(\\alpha ) = {\\left\\lbrace \\begin{array}{ll}K(\\alpha ,f(\\alpha )) & \\quad f(\\alpha ) < \\ell (\\alpha ) \\\\K(\\alpha ,0) & \\quad \\text{otherwise}\\end{array}\\right.", "}.$ Note that if $f <_\\mu g <_\\mu \\ell $ , then $k_f <_\\mu k_g$ and that for all $f <_\\mu \\ell $ , $k_f \\in [F]^\\kappa $ .", "Define $\\Psi : \\prod _{\\kappa }\\ell \\slash \\mu \\rightarrow \\prod _{\\kappa } h \\slash \\mu $ as follows: Let $A < [\\ell ]_\\mu $ .", "Pick $f \\in A$ , i.e.", "a representative of $A$ .", "Let $\\Psi (A) = \\Phi ([k_f]_{\\mu })$ .", "One can check that $\\Psi $ is well defined.", "The next claim is that $\\Psi $ is order preserving: Suppose $A < B < [\\ell ]_\\mu $ .", "Let $f \\in A$ and $g \\in B$ be representatives for $A$ and $B$ , respectively.", "Since $k_f <_\\mu k_g$ , the set $C = \\lbrace \\alpha \\in \\kappa : k_g(\\alpha ) \\le k_f(\\alpha )\\rbrace \\notin \\mu $ .", "Define $k_g^{\\prime } : \\kappa \\rightarrow E$ by 
$k_g^{\\prime }(\\alpha ) = k_g(\\alpha )$ if $\\alpha \\notin C$ and $k_g^{\\prime }(\\alpha )$ is the next element of $E$ greater than $k_f(\\alpha )$ if $\\alpha \\in C$ .", "The purpose of defining $F$ in the manner above was to ensure that between two successive elements of $F$ there are at least $\\omega $ many points of $E$ .", "From this, one can verify that $k_g^{\\prime } \\in [E]^\\kappa $ and $k_f(\\alpha ) < k_g^{\\prime }(\\alpha ) < k_f(\\alpha + 1)$ for all $\\alpha < \\kappa $ .", "Note that $[k_g^{\\prime }]_\\mu = [k_g]_\\mu $ since $k_g$ and $k_g^{\\prime }$ agree off of $C \\notin \\mu $ .", "Let $F^{k_f,k_g^{\\prime }} : \\mathcal {T} \\rightarrow \\kappa $ be defined by $F^{k_f,k_g^{\\prime }}(\\alpha ,i) = {\\left\\lbrace \\begin{array}{ll}k_f(\\alpha ) & \\quad i = 0\\\\k_g^{\\prime }(\\alpha ) & \\quad i = 1\\end{array}\\right.", "}$ Again using the main property of $F$ , one can verify that $F^{k_f,k^{\\prime }_g} \\in [E]^{\\mathcal {T}}$ .", "Thus $P(F^{k_f,k_g^{\\prime }}) = 0$ .", "Thus $\\Psi (A) = \\Phi ([k_f]_\\mu ) < \\Phi ([k_g^{\\prime }]_\\mu ) = \\Phi ([k_g]_\\mu ) = \\Psi (B)$ .", "It has been shown that $\\Psi $ is order preserving.", "However, this is not possible: since $[h]_\\mu < [\\ell ]_\\mu $ , $\\prod _{\\kappa } h \\slash \\mu $ is a proper initial segment of the wellordering $\\prod _{\\kappa } \\ell \\slash \\mu $ , and no wellordering admits an order preserving map into a proper initial segment of itself.", "This completes the proof.", "Fact 5.3 Let $\\kappa $ be a measurable cardinal possessing a normal $\\kappa $ -complete measure $\\mu $ .", "Let $\\epsilon < \\kappa $ .", "Let $\\mathcal {T}^\\epsilon = (\\kappa \\times \\epsilon , \\sqsubset )$ where $\\sqsubset $ is the lexicographic ordering.", "For $F : \\mathcal {T}^\\epsilon \\rightarrow \\kappa $ which is order preserving and $\\alpha < \\epsilon $ , let $F^\\alpha \\in [\\kappa ]^\\kappa $ be defined by $F^\\alpha (\\gamma ) = F(\\gamma ,\\alpha )$ .", "Let $\\langle f_\\alpha : \\alpha < \\epsilon \\rangle $ be a sequence in $[\\kappa ]^\\kappa $ such that for all $\\alpha < \\beta < \\epsilon $ , $f_\\alpha <_\\mu f_\\beta $ .", "Then there is an 
$F \\in [\\kappa ]^{\\mathcal {T}^\\epsilon }$ so that for all $\\alpha < \\epsilon $ , $F^\\alpha =_\\mu f_\\alpha $ .", "Moreover, if $D \\subseteq \\kappa $ with $|D| = \\kappa $ and each $f_\\alpha \\in [D]^\\kappa $ , then one can find $F \\in [D]^{\\mathcal {T}^\\epsilon }$ with the above property.", "Define $F$ by recursion along $\\sqsubset $ : suppose $F(\\alpha ^{\\prime },\\beta ^{\\prime })$ has been defined for all $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )$ .", "Then let $F(\\alpha ,\\beta )$ be the least element in the range of $f_\\beta $ greater than $F(\\alpha ^{\\prime },\\beta ^{\\prime })$ for all $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )$ .", "The claim is that $F$ has the desired properties: This is proved by induction on $\\beta < \\epsilon $ .", "Consider $\\beta = 0$ .", "Suppose $A = \\lbrace \\alpha < \\kappa : F(\\alpha ,0) \\ne f_0(\\alpha )\\rbrace \\in \\mu $ .", "This means for each $\\alpha \\in A$ , there is some $\\alpha ^{\\prime } < \\alpha $ and some $\\beta < \\epsilon $ so that $F(\\alpha ^{\\prime },\\beta ) \\ge f_0(\\alpha )$ .", "Let $g : \\kappa \\rightarrow \\kappa $ be defined by $g(\\alpha )$ is the least such $\\alpha ^{\\prime }$ with the property above if $\\alpha \\in A$ and 0 if $\\alpha \\notin A$ .", "Note that $g(\\alpha ) < \\alpha $ for all $\\alpha \\in A$ .", "By the normality of $\\mu $ , there is some $A^{\\prime } \\subseteq A$ with $A^{\\prime } \\in \\mu $ and some $\\delta $ so that $g(\\alpha ) =\\delta $ for all $\\alpha \\in A^{\\prime }$ .", "This means for all $\\alpha \\in A^{\\prime } \\in \\mu $ , $f_0(\\alpha ) \\le \\sup \\lbrace F(\\delta ,\\beta ) : \\beta < \\epsilon \\rbrace $ .", "By the $\\kappa $ -completeness of $\\mu $ , one may even find a $\\beta ^*$ and an $A^{\\prime \\prime } \\in \\mu $ so that for all $\\alpha \\in A^{\\prime \\prime }$ , $f_0(\\alpha ) \\le F(\\delta ,\\beta ^*)$ .", "This is clearly impossible since $f_0$ is increasing and $A^{\\prime \\prime }$ is unbounded in $\\kappa $ .", "This shows that $F^0 =_\\mu f_0$ .", 
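The recursion defining $F$ is concrete enough to carry out by hand in a finite setting. As a hedged illustration (the function and variable names, and the use of strictly increasing functions on the naturals in place of the $f_\beta$, are this sketch's own assumptions), one can interleave finitely many increasing functions exactly as $F$ is built:

```python
from itertools import count

def interleave(fs, n_rows):
    """Finite analogue of the recursion defining F in Fact 5.3.

    fs is a list of strictly increasing functions on the naturals
    (standing in for the f_beta); F(alpha, beta) is chosen, following
    the lexicographic order on n_rows x len(fs), as the least element
    of ran(f_beta) greater than every previously chosen value.
    """
    F, last = {}, -1
    for alpha in range(n_rows):
        for beta, f in enumerate(fs):
            # least element of the range of f_beta above all earlier F-values;
            # this search terminates because f is strictly increasing
            last = next(f(k) for k in count() if f(k) > last)
            F[(alpha, beta)] = last
    return F

# with f_0(k) = 3k and f_1(k) = 3k + 1, the recursion recovers
# F(alpha, 0) = f_0(alpha) and F(alpha, 1) = f_1(alpha) exactly
F = interleave([lambda k: 3 * k, lambda k: 3 * k + 1], 3)
print(F)  # {(0, 0): 0, (0, 1): 1, (1, 0): 3, (1, 1): 4, (2, 0): 6, (2, 1): 7}
```

In the infinite setting the same greedy choice of least elements is what the normality and $\kappa$-completeness arguments above exploit to show $F^\beta =_\mu f_\beta$.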
"Suppose $\\beta < \\epsilon $ is such that for all $\\beta ^{\\prime } < \\beta $ , $F^{\\beta ^{\\prime }} =_\\mu f_{\\beta ^{\\prime }}$ .", "Suppose $\\lnot (F^\\beta =_\\mu f_\\beta )$ .", "Then $A = \\lbrace \\alpha : F^\\beta (\\alpha ) \\ne f_\\beta (\\alpha )\\rbrace \\in \\mu $ .", "Since $f_0(\\alpha ) = F^0(\\alpha )$ for almost all $\\alpha $ , it cannot be the case that for almost all $\\alpha \\in A$ , there is some $\\alpha ^{\\prime } < \\alpha $ and some $\\beta ^{\\prime } < \\epsilon $ so that $F^{\\beta ^{\\prime }}(\\alpha ^{\\prime }) \\ge f_\\beta (\\alpha )$ .", "Thus for almost all $\\alpha \\in A$ , there is some $\\beta ^{\\prime } < \\beta $ so that $F^{\\beta ^{\\prime }}(\\alpha ) \\ge f_\\beta (\\alpha )$ .", "By the $\\kappa $ -additivity of $\\mu $ , there is some $\\beta ^* < \\beta $ so that for almost all $\\alpha $ , $F^{\\beta ^*}(\\alpha ) \\ge f_\\beta (\\alpha )$ .", "By the induction hypothesis, one has that $F^{\\beta ^*} =_\\mu f_{\\beta ^*}$ .", "One has shown that $f_\\beta \\le _\\mu f_{\\beta ^*}$ despite the fact that $\\beta ^* < \\beta $ .", "Contradiction.", "The result has been established.", "Fact 5.4 (Martin) Let $\\kappa $ be a strong partition cardinal and let $\\mu $ be a normal $\\kappa $ -complete measure on $\\kappa $ .", "If ${}^\\kappa \\kappa \\slash \\mu $ is wellfounded, then ${}^\\kappa \\kappa \\slash \\mu $ is a regular cardinal.", "Assuming $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {DC}_\\mathbb {R}$ , if $\\kappa < \\Theta $ is a strong partition cardinal and $\\mu $ is a normal $\\kappa $ -complete measure on $\\kappa $ , then ${}^\\kappa \\kappa \\slash \\mu $ is a regular cardinal.", "By Fact REF , $\\delta = \\mathrm {ot}({}^\\kappa \\kappa \\slash \\mu )$ is a cardinal.", "Assume that ${}^\\kappa \\kappa \\slash \\mu $ is not a regular cardinal.", "There is some $\\gamma < \\delta $ and an increasing function $\\Phi : \\gamma \\rightarrow \\delta $ which is cofinal.", "Let $\\mathcal 
{T}^2$ , $F^0$ , $F^1$ come from the notation of Fact REF .", "Define a partition $P : [\\kappa ]^{\\mathcal {T}^2} \\rightarrow 2$ by $P(F) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad (\\exists \\beta < \\gamma )([F^0]_\\mu < \\Phi (\\beta ) < [F^1]_\\mu ) \\\\1 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Let $D \\subseteq \\kappa $ with $|D| = \\kappa $ be a homogeneous set for $P$ .", "(Case I) Assume $D$ is homogeneous for $P$ taking value 0.", "By Fact REF , every nonconstant function $f \\in {}^\\kappa D$ has some $g \\in [D]^\\kappa $ so that $f =_\\mu g$ .", "Thus one can show that $[D]^\\kappa \\slash \\mu $ has ordertype $\\delta $ .", "For each $\\alpha $ , let $f_\\alpha $ and $f_{\\alpha + 1}$ be two elements of $[D]^\\kappa $ which represent the elements of $[D]^\\kappa \\slash \\mu $ of rank $\\alpha $ and $\\alpha + 1$ respectively.", "By Fact REF , there is some $F \\in [D]^{\\mathcal {T}^2}$ so that $F^0 =_\\mu f_\\alpha $ and $F^1 =_\\mu f_{\\alpha + 1}$ .", "Then $P(F) = 0$ .", "Hence let $\\nu _\\alpha $ be the least ordinal less than $\\gamma $ so that $[f_\\alpha ]_\\mu = [F^0]_\\mu < \\Phi (\\nu _\\alpha ) < [F^1]_\\mu = [f_{\\alpha + 1}]_\\mu $ .", "One can check that $\\nu _\\alpha $ depends only on $\\alpha $ and not on the choice of $f_{\\alpha }$ and $f_{\\alpha + 1}$ .", "Also if $\\alpha < \\alpha ^{\\prime }$ , then $\\nu _{\\alpha } < \\nu _{\\alpha ^{\\prime }}$ .", "This gives an order preserving map from $\\delta $ into $\\gamma < \\delta $ which is impossible.", "(Case II) Assume $D$ is homogeneous for $P$ taking value 1.", "As argued above, $[D]^\\kappa \\slash \\mu $ has ordertype $\\delta $ so $[D]^\\kappa \\slash \\mu $ is a cofinal subset of ${}^\\kappa \\kappa \\slash \\mu $ .", "Fix $f^* \\in [D]^\\kappa $ .", "Since $\\Phi $ is a cofinal map, there is some $\\gamma ^* < \\gamma $ so that $[f^*]_\\mu < \\Phi (\\gamma ^*)$ .", "Then there is some $f^{\\prime } \\in 
[D]^\\kappa $ so that $[f^*]_\\mu < \\Phi (\\gamma ^*) < [f^{\\prime }]_{\\mu }$ .", "By Fact REF , there is some $F \\in [D]^{\\mathcal {T}^2}$ so that $F^0 =_\\mu f^*$ and $F^1 =_\\mu f^{\\prime }$ .", "Thus $P(F) = 0$ .", "This is impossible since $D$ is homogeneous for $P$ taking value 1.", "The failure of both cases would imply $P$ has no homogeneous subset of size $\\kappa $ , violating the assumption that $\\kappa $ is a strong partition cardinal.", "This completes the proof.", "Definition 5.5 Let $\\kappa $ be a cardinal.", "Let $\\mu $ be a $\\kappa $ -complete normal ultrafilter on $\\kappa $ .", "A function $f : \\kappa \\rightarrow \\kappa $ is a block function if and only if $\\lbrace \\alpha \\in \\kappa : f(\\alpha ) < |\\alpha |^+\\rbrace \\in \\mu $ .", "Let $\\Xi : \\kappa \\times \\kappa \\rightarrow \\kappa $ .", "For each $\\alpha < \\kappa $ , let $\\delta ^{\\Xi }_\\alpha = \\sup \\lbrace \\Xi (\\alpha ,\\beta ) : \\beta < \\alpha \\rbrace $ .", "Let $\\Xi _\\alpha : \\alpha \\rightarrow \\delta ^\\Xi _\\alpha $ be defined by $\\Xi _\\alpha (\\beta ) = \\Xi (\\alpha ,\\beta )$ .", "$\\Xi $ is a Kunen function for $f$ with respect to $\\mu $ if and only if $K^\\Xi _f = \\lbrace \\alpha < \\kappa : f(\\alpha ) \\le \\delta ^\\Xi _\\alpha \\wedge \\Xi _\\alpha \\text{ is a surjection}\\rbrace \\in \\mu $ .", "(Here, when one says that $\\Xi _\\alpha $ is a surjection, one is considering $\\Xi _\\alpha $ as a function $\\Xi _\\alpha : \\alpha \\rightarrow \\delta ^\\Xi _\\alpha $ .)", "$K^\\Xi _f$ is the set of $\\alpha $ on which $\\Xi $ provides a bound for $f$ .", "For $\\beta < \\kappa $ , let $\\Xi ^\\beta : \\kappa \\rightarrow \\kappa $ be defined by $\\Xi ^\\beta (\\alpha ) = \\Xi (\\alpha ,\\beta )$ where $\\alpha > \\beta $ and 0 otherwise.", "$\\Xi ^\\beta $ is a block function provided that $\\lbrace \\alpha < \\kappa : \\Xi _\\alpha \\text{ is a surjection}\\rbrace \\in \\mu $ .", "Fact 5.6 Assume $\\mathsf {ZF}$ .", "Let $\\mu $ be 
a normal measure on a cardinal $\\kappa $ .", "Suppose $f : \\kappa \\rightarrow \\kappa $ is a block function which possesses a Kunen function $\\Xi $ with respect to $\\mu $ .", "Suppose $G \\in \\prod _{\\alpha \\in \\kappa } f(\\alpha ) \\slash \\mu $ .", "Then there is a $\\beta < \\kappa $ so that $[\\Xi ^\\beta ]_\\mu = G$ .", "Take any $g \\in G$ .", "Let $A = \\lbrace \\alpha \\in K^\\Xi _f : g(\\alpha ) < f(\\alpha )\\rbrace $ .", "Since $g <_\\mu f$ and $K^\\Xi _f \\in \\mu $ , one has $A \\in \\mu $ .", "Note that for $\\alpha \\in A$ , $g(\\alpha ) < f(\\alpha ) \\le \\delta _\\alpha ^\\Xi $ .", "Define $\\Phi : A \\rightarrow \\kappa $ by $\\Phi (\\alpha )$ is the least $\\beta < \\alpha $ so that $g(\\alpha ) = \\Xi (\\alpha ,\\beta )$ which exists since $\\Xi _\\alpha $ is a surjection onto $\\delta _\\alpha ^\\Xi $ .", "Thus on $A$ , $\\Phi $ is a regressive function.", "By normality, there is some $\\beta < \\kappa $ so that $\\Phi (\\alpha ) = \\beta $ for $\\mu $ -almost all $\\alpha $ .", "Then $\\Xi ^\\beta =_\\mu g$ .", "Hence $[\\Xi ^\\beta ]_\\mu = G$ .", "One can check this $\\beta $ does not depend on the initial choice of $g \\in G$ .", "Definition 5.7 Let $\\kappa $ be a cardinal.", "Let $\\mu $ be a normal measure on $\\kappa $ .", "Let $h$ be a block function.", "Suppose $h$ possesses a Kunen function $\\Xi $ with respect to $\\mu $ .", "An ordinal $\\beta < \\kappa $ is a minimal code (relative to $\\Xi $ ) if and only if for all $\\gamma < \\beta $ , $\\lnot (\\Xi ^\\gamma =_\\mu \\Xi ^\\beta )$ .", "Let $J^\\Xi _h$ be the collection of $\\beta $ which are minimal codes and $\\Xi ^\\beta <_\\mu h$ .", "Define an ordering $\\prec ^\\Xi _h$ on $J^\\Xi _h$ by $\\alpha \\prec _h^\\Xi \\beta $ if and only if $\\Xi ^\\alpha <_\\mu \\Xi ^\\beta $ .", "By Fact REF , for every $G < [h]_\\mu $ , there is a unique $\\beta \\in J^\\Xi _h$ so that $\\Xi ^\\beta \\in G$ .", "In this way, one says that $\\beta $ is a minimal code for $G$ or for any $g \\in G$ with respect to $\\Xi $ .", "Fact 5.8 Let $\\mu $ be a normal measure 
on a cardinal $\\kappa $ .", "Let $f : \\kappa \\rightarrow \\kappa $ be a block function possessing a Kunen function $\\Xi $ with respect to $\\mu $ .", "Then $\\prod _{\\alpha \\in \\kappa } f(\\alpha ) \\slash \\mu $ , i.e.", "the initial segment of ${}^\\kappa \\kappa \\slash \\mu $ determined by $[f]_\\mu $ , is a wellordering.", "If every block function has a Kunen function, then $\\prod _{\\alpha < \\kappa } |\\alpha |^+ \\slash \\mu $ is wellfounded.", "For each $F \\in \\prod _{\\alpha < \\kappa }|\\alpha |^+\\slash \\mu $ , $F < \\kappa ^+$ .", "Thus $\\prod _{\\alpha < \\kappa }|\\alpha |^+ \\slash \\mu \\le \\kappa ^+$ .", "Let $f$ be a block function possessing a Kunen function $\\Xi $ .", "Every $G \\in \\prod _{\\alpha \\in \\kappa } f(\\alpha ) \\slash \\mu $ has a unique $\\beta \\in J^\\Xi _f$ so that $[\\Xi ^\\beta ]_\\mu = G$ .", "There is a bijection of $J^\\Xi _f$ with $\\prod _{\\alpha \\in \\kappa } f(\\alpha ) \\slash \\mu $ given by $\\beta \\mapsto [\\Xi ^\\beta ]_\\mu $ .", "This shows that $|\\prod _{\\alpha \\in \\kappa }f(\\alpha )\\slash \\mu | \\le \\kappa $ .", "Now suppose that $\\prod _{\\alpha \\in \\kappa }f(\\alpha ) \\slash \\mu $ is not wellfounded.", "Then $\\prec ^\\Xi _f$ is an illfounded linear ordering on $J^\\Xi _f \\subseteq \\kappa $ .", "Let $\\beta _0$ be the least (in the usual ordinal ordering) element of $J^\\Xi _f$ so that the initial segment determined by $\\beta _0$ in $(J^\\Xi _f,\\prec ^\\Xi _f)$ has no $\\prec ^\\Xi _f$ -minimal element.", "Suppose $\\beta _n$ has been defined; let $\\beta _{n + 1}$ be the least ordinal in $J^\\Xi _f$ which is $\\prec ^\\Xi _f$ -below $\\beta _n$ .", "This process defines a sequence of ordinals $\\langle \\beta _n : n \\in \\omega \\rangle $ in $J^\\Xi _f$ .", "Then $\\langle [\\Xi ^{\\beta _n}]_\\mu : n \\in \\omega \\rangle $ is an infinite decreasing sequence in $\\prod _{\\alpha < \\kappa } f(\\alpha ) \\slash 
\\mu $ .", "Let $D_n = \\lbrace \\alpha \\in \\kappa : \\Xi ^{\\beta _{n + 1}}(\\alpha ) < \\Xi ^{\\beta _n}(\\alpha )\\rbrace \\in \\mu $ .", "Then $\\bigcap _{n \\in \\omega } D_n \\in \\mu $ .", "Let $\\xi \\in \\bigcap _{n \\in \\omega } D_n$ .", "Then $\\langle \\Xi ^{\\beta _n}(\\xi ) : n \\in \\omega \\rangle $ is an infinite decreasing sequence of ordinals.", "This is impossible.", "Since $f$ was arbitrary, this shows that $\\prod _{\\alpha < \\kappa }|\\alpha |^+\\slash \\mu $ is wellfounded.", "By the first paragraph, each $F \\in \\prod _{\\alpha < \\kappa } |\\alpha |^+ \\slash \\mu $ has cardinality less than or equal to $\\kappa $ .", "Thus $F < \\kappa ^+$ .", "Thus $\\prod _{\\alpha < \\kappa } |\\alpha |^+ \\slash \\mu \\le \\kappa ^+$ .", "Definition 5.9 Let $\\mu $ be a normal measure on $\\kappa $ .", "Let $h : \\kappa \\rightarrow \\kappa $ be a block function.", "Let $\\Xi $ be a Kunen function for $h$ with respect to $\\mu $ .", "By Fact REF , for each $G < [h]_\\mu $ , there is a minimal code $\\beta \\in J^\\Xi _h$ so that $\\Xi ^\\beta \\in G$ .", "Thus $(J_h^\\Xi ,\\prec ^\\Xi _h)$ has the same ordertype as $[h]_\\mu $ .", "By Fact REF , $[h]_\\mu $ is a wellordering.", "Let $\\epsilon _h^\\Xi \\in \\mathrm {ON}$ denote the ordertype of $([h]_{\\mu },<)$ which is equal to the ordertype of $(J_h^\\Xi ,\\prec ^\\Xi _h)$ .", "Let $\\pi ^\\Xi _h : \\epsilon _h^\\Xi \\rightarrow (J_h^\\Xi , \\prec _h^\\Xi )$ be the unique order-preserving isomorphism.", "Note that every function $f : \\omega _1 \\rightarrow \\omega _1$ is (everywhere) a block function.", "Theorem 5.10 (Kunen) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Every function $f : \\omega _1 \\rightarrow \\omega _1$ has a Kunen function with respect to the club measure $\\mu $ .", "The Kunen function is derived from the Kunen tree which will be defined below.", "Recall that each $x \\in {{}^\\omega \\omega }$ codes a relation $R_x$ defined in Definition REF .", "Also $\\mathrm {WF}$ 
denotes the collection of $x$ so that $R_x$ is a wellfounded relation.", "Let $\\pi _\\mathrm {pair} : \\omega \\times \\omega \\rightarrow \\omega $ denote a fixed recursive bijection.", "Let $S$ be the tree on $\\omega \\times \\omega _1$ of partial rankings of relations on $\\omega $ which is defined as follows: $(s,\\bar{\\alpha }) \\in S$ if and only if $|s| = |\\bar{\\alpha }|$ and for all $i,j < |s|$ , if $\\pi _\\mathrm {pair}(i,j) < |s|$ and $s(\\pi _\\mathrm {pair}(i,j)) = 1$ , then $\\bar{\\alpha }(i) < \\bar{\\alpha }(j)$ .", "If $(x,f) \\in [S] = \\lbrace (y,g) : (\\forall n)((y \\upharpoonright n, g\\upharpoonright n) \\in S)\\rbrace $ , then $x \\in \\mathrm {WF}$ since $f$ is a ranking of $R_x$ into $\\omega _1$ .", "Conversely, if $x \\in \\mathrm {WF}$ , then $R_x$ has a ranking $f$ with image in $\\omega _1$ .", "Thus $(x,f) \\in [S]$ .", "It has been shown that $\\mathrm {WF} = \\pi _1[[S]]$ is the projection of $[S]$ onto the first coordinate.", "Let $T$ be a recursive tree on $\\omega \\times \\omega $ so that $\\pi _1[[T]] = \\lbrace a \\in {{}^\\omega \\omega }: (\\exists b)((a,b) \\in [T])\\rbrace = {{}^\\omega \\omega }\\setminus \\mathrm {WF}$ .", "The important observation is that ${{}^\\omega \\omega }\\setminus \\mathrm {WF}$ is a $\\Sigma _1^1$ set which is $\\mathbf {\\Sigma }_1^1$ -complete.", "Let $\\pi _\\mathrm {seq} : {}^{<\\omega }\\omega \\rightarrow \\omega $ be a recursive bijection.", "For each $x \\in {{}^\\omega \\omega }$ , let $\\sigma _x : {}^{<\\omega }\\omega \\rightarrow \\omega $ be a strategy in the usual integer game defined by $\\sigma _x(s) = n$ if and only if $x(\\pi _\\mathrm {seq}(s)) = n$ .", "If $p \\in {}^{<\\omega }\\omega $ and $s \\in {}^{<\\omega }\\omega $ , then let $\\sigma _{p} * s$ denote the partial play where Player 1 uses the partial strategy $\\sigma _{p}$ and Player 2 plays the bits of $s$ turn by turn.", "The game goes on for as long as $p$ codes a response to the partial play that is produced each turn.", "Define a tree $K$ 
on $\\omega \\times \\omega \\times \\omega _1 \\times \\omega \\times \\omega $ as follows: $(p,s,\\bar{\\alpha },t,u) \\in K$ if and only if the conjunction of the following holds: (1) $(s,\\bar{\\alpha }) \\in S$ .", "(2) $(t,u) \\in T$ .", "(3) $\\sigma _{p} * s$ is a substring of $t$ .", "The meaning of $K$ becomes clear if one looks at what a path through $K$ represents: If $(x,y,f,v,w) \\in [K]$ , then $\\sigma _x(y) = v$ , $(y,f) \\in [S]$ , and $(v,w) \\in [T]$ .", "Therefore, $y \\in \\mathrm {WF}$ and $v \\in \\mathbb {R}\\setminus \\mathrm {WF}$ since $S$ and $T$ are trees that project onto $\\mathrm {WF}$ and $\\mathbb {R}\\setminus \\mathrm {WF}$ , respectively.", "The above defines the tree $K$ .", "One now introduces an arbitrary function $f : \\omega _1 \\rightarrow \\omega _1$ .", "Consider the game $G_f$ in which Player 1 and Player 2 alternate turn by turn, producing $y, v \\in {{}^\\omega \\omega }$ : $\\begin{array}{c|ccccc|c}\\mathrm {I} & y(0) & y(1) & y(2) & y(3) & \\cdots & y \\\\\\mathrm {II} & v(0) & v(1) & v(2) & v(3) & \\cdots & v\\end{array}$ Player 2 wins if and only if $y \\in \\mathrm {WO}\\Rightarrow (v \\in \\mathrm {WF}\\wedge \\mathrm {rk}(T_v) > \\sup \\lbrace f(\\alpha ) : \\alpha \\le \\mathrm {ot}(y)\\rbrace )$ .", "Here $T_v = \\lbrace u \\in {}^{<\\omega }\\omega : (u,v\\upharpoonright |u|) \\in T\\rbrace $ .", "Note that since $T$ projects onto $\\mathbb {R}\\setminus \\mathrm {WF}$ , if $v \\in \\mathrm {WF}$ , then $T_v$ is a wellfounded tree.", "Player 2 must have a winning strategy: Suppose $\\rho $ is a Player 1 winning strategy.", "$\\rho [{{}^\\omega \\omega }] \\subseteq \\mathrm {WO}$ since otherwise Player 2 can win against $\\rho $ .", "Thus $\\rho [{{}^\\omega \\omega }]$ is a ${\\mathbf {\\Sigma }_1^1}$ subset of ${\\mathrm {WO}}$ .", "By the bounding principle, let $\\eta < \\omega _1$ be such that $\\mathrm {ot}(r) < \\eta $ for all $r \\in \\rho [{{}^\\omega \\omega }]$ .", "Let $\\zeta = \\sup \\lbrace f(\\alpha ) : \\alpha < \\eta \\rbrace $ .", "$\\zeta < \\omega _1$ because $\\omega _1$ is regular.", "Since the projection of $T$ is 
$\\mathbb {R}\\setminus \\mathrm {WF}$ which is a ${\\mathbf {\\Sigma }_1^1}$ -complete set, one must have that $\\sup \\lbrace \\mathrm {rk}(T_r) : r \\in \\mathrm {WF}\\rbrace = \\omega _1$ .", "Choose some $v \\in \\mathrm {WF}$ so that $\\mathrm {rk}(T_v) > \\zeta $ .", "If Player 2 plays $v$ against $\\rho $ , Player 2 will win.", "This contradicts the fact that $\\rho $ is a Player 1 winning strategy.", "Thus there is a winning strategy $\\sigma $ for Player 2 in the game $G_f$ .", "Let $x \\in {{}^\\omega \\omega }$ be such that $\\sigma _x = \\sigma $ .", "Let $K_x$ be the tree consisting of $(s,\\bar{\\alpha },t,u)$ so that there is an $n \\in \\omega $ such that $|s| = |\\bar{\\alpha }| = |t| = |u| = n$ and $(x\\upharpoonright n, s,\\bar{\\alpha },t,u) \\in K$ .", "For each $\\alpha < \\omega _1$ , let $K_x\\upharpoonright \\alpha $ denote the restriction of $K_x$ to ${}^{<\\omega }(\\omega \\times \\alpha \\times \\omega \\times \\omega )$ .", "Note that $K_x\\upharpoonright \\alpha $ is wellfounded.", "To see this, suppose otherwise.", "This means there is some $y,v,w \\in {{}^\\omega \\omega }$ and $f \\in {}^{\\omega }\\alpha $ so that $(x,y,f,v,w) \\in [K]$ .", "Thus $\\sigma _x(y) = \\sigma (y) = v$ , $y \\in \\mathrm {WO}$ , and $v \\in \\mathbb {R}\\setminus \\mathrm {WF}$ .", "However, $\\sigma $ is a Player 2 winning strategy and $y \\in \\mathrm {WO}$ , so one must have that $v \\in \\mathrm {WF}$ .", "This is a contradiction.", "Let $\\alpha \\ge \\omega $ .", "Suppose $y \\in \\mathrm {WO}$ with $\\mathrm {ot}(y) = \\alpha $ .", "Then there is a ranking $g$ of $R_y$ using ordinals below $\\alpha $ .", "Let $v = \\sigma _x(y) = \\sigma (y)$ .", "Since $\\sigma $ is a Player 2 winning strategy, $\\mathrm {rk}(T_v) > f(\\mathrm {ot}(y))$ .", "Note that $K_x\\upharpoonright \\alpha $ has a subtree which is isomorphic to $T_v$ .", "In particular, this subtree is $\\hat{T}_v = \\lbrace (y\\upharpoonright n, g\\upharpoonright n, t,u) \\in K_x : n \\in \\omega 
\\wedge |t| = |u| = n \\wedge (t,u) \\in T_v\\rbrace .$ This implies that $K_x\\upharpoonright \\alpha $ is a wellfounded tree of rank greater than $\\mathrm {rk}(T_v) > f(\\alpha )$ .", "Note that there is a club set of $\\alpha $ so that $\\alpha $ is closed under the Gödel pairing function.", "Using this pairing function on $\\alpha $ , one can define uniformly a bijection $\\pi _\\alpha : \\alpha \\rightarrow {}^{<\\omega }\\alpha $ .", "For all $\\alpha $ closed under the Gödel pairing function, define $\\Xi (\\alpha ,\\beta )$ to be the rank of $\\pi _\\alpha (\\beta )$ in $K_x\\upharpoonright \\alpha $ whenever $\\beta < \\alpha $ and $\\pi _\\alpha (\\beta ) \\in K_x\\upharpoonright \\alpha $ .", "(Technically in the definition of a Kunen function, $\\Xi $ should be a function on $\\omega _1 \\times \\omega _1$ ; however, the value of $\\Xi $ on $(\\alpha ,\\beta )$ where $\\beta \\ge \\alpha $ is not relevant in any applications.)", "If $\\pi _\\alpha (\\beta )$ does not belong to $K_x\\upharpoonright \\alpha $ then let $\\Xi (\\alpha ,\\beta ) = 0$ .", "$\\Xi $ is a Kunen function for $f$ .", "For any $\\alpha \\ge \\omega $ which is closed under the Gödel pairing function, $\\Xi _\\alpha $ is a surjection onto $\\mathrm {rk}(K_x\\upharpoonright \\alpha )$ .", "Using the notation of Definition REF , $\\mathrm {rk}(K_x\\upharpoonright \\alpha ) = \\delta _\\alpha ^\\Xi $ .", "Also, it was shown above that $\\mathrm {rk}(K_x\\upharpoonright \\alpha ) > f(\\alpha )$ and hence $\\delta _\\alpha ^\\Xi > f(\\alpha )$ .", "This verifies that $\\Xi $ is a Kunen function for $f$ with respect to $\\mu $ .", "Remark 5.11 The tree $K$ produced in Theorem REF is called the Kunen tree.", "There are some simplifications of $K$ that can be made.", "$K$ is a tree on $\\omega \\times \\omega \\times \\omega _1\\times \\omega \\times \\omega $ .", "One can merge the last four coordinates on $\\omega \\times \\omega _1\\times \\omega \\times \\omega $ into $\\omega _1$ 
to produce a tree on $\\omega \\times \\omega _1$ with the same features.", "One can also Kleene-Brouwer order the various trees to produce linear orderings which can be used to produce the desired Kunen functions.", "See [12] for more details on the Kunen tree.", "In this survey, one will only use the existence of Kunen functions for functions $f : \\omega _1 \\rightarrow \\omega _1$ ; thus, it is not important here how uniformly these Kunen functions are obtained.", "However, the proof shows that all Kunen functions are indexed uniformly by reals.", "For instance, there is a single tree $K$ , the Kunen tree, so that for any $f : \\omega _1 \\rightarrow \\omega _1$ , there is a section of this tree by some strategy which can be used to produce the Kunen function for $f$ .", "The Kunen tree $K$ is also $\\Delta _1^1$ in the codes.", "See [12] for the details and the precise meaning of these remarks.", "Corollary 5.12 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "$|\\prod _{\\omega _1}\\omega _1 \\slash \\mu | \\le \\omega _2$ .", "Fact 5.13 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "Then $\\prod _{\\omega _1} \\omega _1 \\slash \\mu = \\omega _2$ and is a regular cardinal.", "Note that the club filter on $\\omega _1$ , $\\mu $ , is a normal measure on $\\omega _1$ .", "Therefore by Fact REF , the ultrapower is a regular cardinal.", "Thus it must be greater than or equal to $\\omega _2$ .", "Then Corollary REF implies that the ultrapower is $\\omega _2$ .", "Definition 5.14 Let $\\mu $ be a normal measure on a cardinal $\\kappa $ .", "Let $h : \\kappa \\rightarrow \\kappa $ be a function so that $h(\\alpha ) > 0$ $\\mu $ -almost everywhere.", "Let $T^h = \\lbrace (\\alpha ,\\beta ) \\in \\kappa \\times \\kappa : \\beta < h(\\alpha )\\rbrace $ .", "Let $\\mathcal {T}^h = (T^h,\\sqsubset )$ where $\\sqsubset $ is the lexicographic ordering.", "Note that 
$\\mathrm {ot}(\\mathcal {T}^h) = \\kappa $ .", "Suppose $F : \\mathcal {T}^h \\rightarrow \\kappa $ is an order-preserving function.", "Let $g \\in {}^\\kappa \\kappa $ be such that $g <_\\mu h$ .", "Let $A^g = \\lbrace \\alpha : g(\\alpha ) < h(\\alpha )\\rbrace $ .", "Let $F^g \\in {}^\\kappa \\kappa $ be defined by $F^g(\\alpha ) = {\\left\\lbrace \\begin{array}{ll}F(\\alpha ,g(\\alpha )) & \\quad \\alpha \\in A^g \\\\F(\\alpha ,0) & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Note that if $g_1 <_\\mu g_2 <_\\mu h$ , then $F^{g_1} <_\\mu F^{g_2}$ .", "Fix a Kunen function $\\Xi $ for $h$ .", "Recall that $\\epsilon _h^\\Xi $ is the ordertype of the wellordering $(J_h^\\Xi ,\\prec _h^\\Xi )$ and $\\pi ^\\Xi _h : \\epsilon _h^\\Xi \\rightarrow (J_h^\\Xi ,\\prec ^\\Xi _h)$ is the unique order isomorphism.", "If $\\beta \\in \\epsilon _h^\\Xi $ , then let $F^{(\\beta )} = F^{\\Xi ^{\\pi ^\\Xi _h(\\beta )}}$ .", "Let $\\mathsf {funct}(F) : \\epsilon _h^\\Xi \\rightarrow \\mathrm {ON}$ be defined by $\\mathsf {funct}(F)(\\alpha ) = [F^{(\\alpha )}]_\\mu $ .", "$\\mathcal {T}^h = (T^h, \\sqsubset )$ is order isomorphic to $\\kappa $ .", "It is merely a reorganization of $\\kappa $ into successive blocks of length $h(\\alpha )$ .", "If $F : \\mathcal {T}^h \\rightarrow \\kappa $ is order preserving and $\\beta < \\epsilon ^\\Xi _h$ , then $F^{(\\beta )} : \\kappa \\rightarrow \\kappa $ is defined by letting $F^{(\\beta )}(\\alpha )$ be the value of $F$ at $(\\alpha ,\\Xi ^{\\pi ^\\Xi _h(\\beta )}(\\alpha ))$ , which can be construed to be the $\\Xi ^{\\pi ^\\Xi _h(\\beta )}(\\alpha )^\\text{th}$ element in the $\\alpha ^\\text{th}$ block of length $h(\\alpha )$ .", "Lemma 5.15 (Sliding lemma) Let $\\mu $ be a normal measure on a cardinal $\\kappa $ .", "Let $h : \\kappa \\rightarrow \\kappa $ be a block function possessing a Kunen function $\\Xi $ with respect to $\\mu $ .", "Let $\\langle g_\\alpha : \\alpha \\in \\epsilon _h^\\Xi \\rangle $ be an order 
preserving sequence of elements of ${}^\\kappa \\kappa $ which are non-constant $\\mu $ -almost everywhere: order preserving means that if $\\alpha < \\beta < \\epsilon _h^\\Xi $ , then $g_\\alpha <_\\mu g_\\beta $ .", "Then there exists an order-preserving $F : \\mathcal {T}^h \\rightarrow \\kappa $ so that for all $\\beta < \\epsilon _h^\\Xi $ , $F^{(\\beta )} =_\\mu g_\\beta $ .", "Moreover, if $X \\subseteq \\kappa $ and for all $\\beta < \\epsilon _h^\\Xi $ , $\\mathrm {rang}(g_\\beta ) \\subseteq X$ , then $\\mathrm {rang}(F) \\subseteq X$ .", "Fix some $X \\subseteq \\kappa $ so that $\\mathrm {rang}(g_\\beta ) \\subseteq X$ for all $\\beta < \\epsilon _h^\\Xi $ .", "Define $F : \\mathcal {T}^h \\rightarrow \\kappa $ by recursion as follows: Let $(\\alpha ,\\beta ) \\in T^h$ .", "Suppose $F(\\alpha ^{\\prime },\\beta ^{\\prime })$ has been defined for all $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )$ .", "(Recall $\\mathcal {T}^h = (T^h,\\sqsubset )$ where $\\sqsubset $ is the lexicographic ordering.)", "(Case I) There is a $\\gamma < \\epsilon _h^\\Xi $ so that $\\beta = \\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha )$ : Let $\\gamma $ be least with this property.", "Let $F(\\alpha ,\\beta )$ be the least element in the range of $g_\\gamma $ which is greater than or equal to $\\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "(Case II) There is no $\\gamma < \\epsilon _h^\\Xi $ so that $\\beta = \\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha )$ : Let $F(\\alpha ,\\beta )$ be the $\\omega ^\\text{th}$ -element of $X$ above $\\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "Claim: For all $\\gamma < \\epsilon _h^\\Xi $ , $F^{(\\gamma )} =_\\mu g_\\gamma $ .", "This will be proved by induction on $\\epsilon _h^\\Xi $ .", 
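Before the induction, the two cases of the recursion can be recorded in one display (a restatement of Cases I and II above; the abbreviation $s(\alpha,\beta)$ for the supremum of the previously defined values is introduced here and is not in the original):

```latex
% s(alpha, beta) abbreviates the sup of all F-values at lexicographically earlier pairs.
s(\alpha,\beta) = \sup\{F(\alpha',\beta') : (\alpha',\beta') \sqsubset (\alpha,\beta)\},
\qquad
F(\alpha,\beta) =
\begin{cases}
  \min\{\xi \in \mathrm{rang}(g_\gamma) : \xi \ge s(\alpha,\beta)\}
    & \text{if } \gamma \text{ is least with } \beta = \Xi^{\pi_h^\Xi(\gamma)}(\alpha),\\
  \text{the } \omega^{\text{th}}\text{ element of } X \text{ above } s(\alpha,\beta)
    & \text{if no such } \gamma \text{ exists.}
\end{cases}
```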
"It is clear that $B = \\lbrace \\alpha \\in \\kappa : \\Xi ^{\\pi _h^\\Xi (0)}(\\alpha ) = 0\\rbrace \\in \\mu $ since $\\Xi ^{\\pi _h^\\Xi (0)}$ represents the $<_\\mu $ -least element of $\\prod _{\\alpha \\in \\kappa } h(\\alpha ) \\slash \\mu $ , namely the class of the constant function with value 0.", "For $\\alpha \\in B$ , $F^{(0)}(\\alpha ) = F(\\alpha ,\\Xi ^{\\pi _h^\\Xi (0)}(\\alpha )) \\in \\mathrm {rang}(g_{0})$ .", "Thus on $B$ , one must have that $F^{(0)}(\\alpha ) \\ge g_{0}(\\alpha )$ .", "Suppose that $F^{(0)} >_\\mu g_{0}$ .", "Let $C = \\lbrace \\alpha \\in B : F^{(0)}(\\alpha ) > g_0(\\alpha )\\rbrace \\in \\mu $ .", "Since for $\\alpha \\in C$ , $\\Xi ^{\\pi _h^\\Xi (0)}(\\alpha ) = 0$ , there must be some $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,0)$ with $\\alpha ^{\\prime } < \\alpha $ so that $F(\\alpha ^{\\prime },\\beta ^{\\prime }) \\ge g_{0}(\\alpha )$ .", "Define $\\Phi : C \\rightarrow \\kappa $ by letting $\\Phi (\\alpha )$ be the least $\\alpha ^{\\prime } < \\alpha $ so that there exists some $\\beta ^{\\prime }$ with $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,0)$ and $F(\\alpha ^{\\prime },\\beta ^{\\prime }) \\ge g_{0}(\\alpha )$ .", "Thus $\\Phi $ is regressive on $C \\in \\mu $ .", "There is some $\\alpha ^{\\prime }$ and a $D \\in \\mu $ so that $\\Phi (\\alpha ) = \\alpha ^{\\prime }$ for all $\\alpha \\in D$ .", "Thus for all $\\alpha \\in D$ , $g_{0}(\\alpha ) \\le F(\\alpha ^{\\prime },\\beta ^{\\prime })$ for some $\\beta ^{\\prime } < h(\\alpha ^{\\prime })$ .", "By $\\kappa $ -completeness, there is some $\\bar{\\beta }< h(\\alpha ^{\\prime })$ and an $E \\subseteq D$ with $E \\in \\mu $ so that for all $\\alpha \\in E$ , $g_{0}(\\alpha ) \\le F(\\alpha ^{\\prime },\\bar{\\beta })$ .", "This is impossible since $g_{0}$ is not constant $\\mu $ -almost everywhere.", "It has been shown that $F^{(0)} =_\\mu g_{0}$ .", "Suppose $\\gamma < \\epsilon _h^\\Xi $ and that it has been shown that $F^{(\\gamma ^{\\prime })} =_{\\mu } g_{\\gamma ^{\\prime }}$ for all $\\gamma ^{\\prime } < \\gamma $ .", "One will seek to show that $F^{(\\gamma )} =_\\mu g_\\gamma $ .", "Let $A = \\lbrace \\alpha 
: (\\forall \\gamma ^{\\prime } \\in \\gamma )(\\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha ) \\ne \\Xi ^{\\pi ^\\Xi _h(\\gamma ^{\\prime })}(\\alpha ))\\rbrace $ .", "If $A \\notin \\mu $ , then by $\\kappa $ -completeness, there is some $\\gamma ^{\\prime } \\in \\gamma $ so that $\\Xi ^{\\pi ^\\Xi _h(\\gamma ^{\\prime })} =_{\\mu } \\Xi ^{\\pi ^\\Xi _h(\\gamma )}$ .", "This is impossible since $\\gamma ^{\\prime } < \\gamma $ implies $\\Xi ^{\\pi _h^\\Xi (\\gamma ^{\\prime })} <_\\mu \\Xi ^{\\pi ^\\Xi _h(\\gamma )}$ .", "Thus $A \\in \\mu $ .", "By induction, it has been shown that $F^{(0)} =_\\mu g_0$ .", "Let $B \\subseteq A$ with $B \\in \\mu $ have the property that for all $\\alpha \\in B$ , $\\Xi ^{\\pi ^\\Xi _h(0)}(\\alpha ) = 0$ , $F^{(0)}(\\alpha ) = g_{0}(\\alpha )$ , and $g_{\\gamma }(\\alpha ) > g_{0}(\\alpha )$ .", "Suppose it is not the case that $F^{(\\gamma )}(\\alpha ) = g_\\gamma (\\alpha )$ for $\\mu $ -almost all $\\alpha $ .", "For $\\alpha \\in B$ , $F^{(\\gamma )}(\\alpha ) \\in \\mathrm {rang}(g_\\gamma )$ .", "Thus one must have that there is a set $C \\subseteq B$ with $C \\in \\mu $ so that for all $\\alpha \\in C$ , $F^{(\\gamma )}(\\alpha ) > g_\\gamma (\\alpha )$ .", "For each $\\alpha \\in C$ , there must be some $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha ))$ so that $F(\\alpha ^{\\prime },\\beta ^{\\prime }) \\ge g_\\gamma (\\alpha )$ .", "However, $B$ was chosen so that for all $\\alpha \\in B$ , $\\Xi ^{\\pi _h^\\Xi (0)}(\\alpha ) = 0$ and $F^{(0)}(\\alpha ) = g_{0}(\\alpha ) < g_{\\gamma }(\\alpha )$ .", "Therefore, for $\\alpha \\in C$ , $F^{(0)}(\\alpha ) < g_{\\gamma }(\\alpha ) < F^{(\\gamma )}(\\alpha )$ .", "Thus the least $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha ))$ with $F(\\alpha ^{\\prime },\\beta ^{\\prime }) \\ge g_\\gamma (\\alpha )$ must have that $\\alpha ^{\\prime } = \\alpha $ .", "Therefore, for all $\\alpha \\in C$ , there is some $\\beta ^{\\prime } < \\Xi ^{\\pi _h^\\Xi 
(\\gamma )}(\\alpha )$ so that $F(\\alpha ,\\beta ^{\\prime }) \\ge g_\\gamma (\\alpha )$ .", "Let $\\Phi (\\alpha )$ be this $\\beta ^{\\prime }$ .", "Then $\\Phi <_\\mu \\Xi ^{\\pi _h^\\Xi (\\gamma )} <_\\mu h$ .", "Thus there is some $\\bar{\\gamma } < \\gamma $ so that $\\Phi =_\\mu \\Xi ^{\\pi _h^\\Xi (\\bar{\\gamma })}$ .", "By induction, $F^{(\\bar{\\gamma })} =_\\mu g_{\\bar{\\gamma }}$ .", "However by definition of $\\Phi $ , $F^\\Phi \\ge _\\mu g_\\gamma $ .", "Also $F^\\Phi =_\\mu F^{(\\bar{\\gamma })} =_\\mu g_{\\bar{\\gamma }}$ .", "Thus $g_{\\bar{\\gamma }} \\ge _\\mu g_\\gamma $ .", "Since $\\langle g_\\delta : \\delta \\in \\epsilon _h^\\Xi \\rangle $ is an increasing sequence and $\\bar{\\gamma } < \\gamma $ , one has that $g_{\\bar{\\gamma }} <_\\mu g_{\\gamma }$ .", "This is a contradiction.", "This shows that $F^{(\\gamma )} =_\\mu g_\\gamma $ .", "The lemma has been proved.", "It will be helpful to have the correct-type version of the sliding lemma.", "A sketch of the necessary modification will be given: Lemma 5.16 (Correct-type sliding lemma) Let $\\mu $ be a normal measure on a cardinal $\\kappa $ .", "Let $h : \\kappa \\rightarrow \\kappa $ be a block function possessing a Kunen function $\\Xi $ with respect to $\\mu $ .", "Let $\\langle g_\\alpha : \\alpha < \\epsilon ^\\Xi _h\\rangle $ be an increasing sequence of functions from $\\kappa $ to $\\kappa $ of the correct type which are $\\mu $ -almost everywhere non-constant.", "Further, suppose there is a sequence $\\langle G_\\alpha : \\alpha < \\epsilon _h^\\Xi \\rangle $ where each $G_\\alpha : \\kappa \\times \\omega \\rightarrow \\kappa $ is a function witnessing that $g_\\alpha $ has uniform cofinality $\\omega $ .", "Suppose $\\langle [g_\\alpha ]_\\mu : \\alpha < \\epsilon _h^\\Xi \\rangle $ is discontinuous everywhere.", "Then there exists an order preserving function $F : \\mathcal {T}^h \\rightarrow \\kappa $ which is of the correct type so that for all $\\alpha < \\epsilon _h^\\Xi $ , 
$F^{(\\alpha )} =_\\mu g_\\alpha $ .", "Moreover, if $C \\subseteq \\kappa $ is an $\\omega $ -club set such that for all $\\beta \\in \\epsilon ^\\Xi _h$ , $\\mathrm {rang}(g_\\beta ) \\subseteq C$ , then $\\mathrm {rang}(F) \\subseteq C$ .", "Let $C \\subseteq \\kappa $ be an $\\omega $ -club so that $\\mathrm {rang}(g_\\beta ) \\subseteq C$ for all $\\beta < \\epsilon _h^\\Xi $ .", "Define $F : \\mathcal {T}^h \\rightarrow \\kappa $ by recursion as follows: Let $(\\alpha ,\\beta ) \\in T^h$ .", "Suppose $F(\\alpha ^{\\prime },\\beta ^{\\prime })$ has been defined for all $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )$ .", "(Case I) There is some $\\gamma < \\epsilon ^\\Xi _h$ so that $\\beta = \\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha )$ : Let $\\gamma $ be least with this property.", "Let $F(\\alpha ,\\beta )$ be the least element in the range of $g_\\gamma $ which is greater than $\\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "(Case II) There is no $\\gamma < \\epsilon ^\\Xi _h$ so that $\\beta = \\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha )$ : Find $\\delta $ least so that $\\mathsf {enum}_C(\\delta ) > \\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "Let $F(\\alpha ,\\beta ) = \\mathsf {enum}_C(\\delta + \\omega )$ .", "Claim 1: $F : \\mathcal {T}^h \\rightarrow C$ is a function of the correct type.", "To prove this: Note that it is clear from the construction that $F$ is discontinuous everywhere.", "One will create a function $H : T^h \\times \\omega \\rightarrow \\kappa $ to witness the uniform cofinality of $F$ .", "Let $(\\alpha ,\\beta ) \\in T^h$ .", "Suppose Case I had occurred at $(\\alpha ,\\beta )$ with $\\gamma < \\epsilon _h^\\Xi $ least so that $\\beta = \\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha )$ .", "Let $\\delta < \\kappa $ be 
least so that $g_\\gamma (\\delta ) > \\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "Now let $n$ be least so that $G_\\gamma (\\delta ,n) > \\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "Define $H((\\alpha ,\\beta ),m) = G_\\gamma (\\delta ,n + m)$ .", "Suppose Case II had occurred at $(\\alpha ,\\beta )$ .", "Let $H((\\alpha ,\\beta ),m)$ be the $m^\\text{th}$ element of $C$ above $\\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\beta )\\rbrace $ .", "This function $H$ witnesses that $F$ has uniform cofinality $\\omega $ .", "$F$ has the correct type.", "Claim 2: For all $\\gamma < \\epsilon _h^\\Xi $ , $F^{(\\gamma )} =_\\mu g_\\gamma $ .", "To show Claim 2: This will be proved by induction on $\\epsilon ^\\Xi _h$ .", "One will indicate some of the necessary modification from the proof of Lemma REF .", "Suppose that $\\gamma < \\epsilon ^\\Xi _h$ and that it has been shown that $F^{(\\gamma ^{\\prime })} =_\\mu g_{\\gamma ^{\\prime }}$ for all $\\gamma ^{\\prime } < \\gamma $ .", "Let $A = \\lbrace \\alpha : (\\forall \\gamma ^{\\prime } \\in \\gamma )(\\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha ) \\ne \\Xi ^{\\pi ^\\Xi _h(\\gamma ^{\\prime })}(\\alpha ))\\rbrace $ .", "As in Lemma REF , $A \\in \\mu $ .", "Also just as before, one can show that $B \\in \\mu $ where $B \\subseteq A$ and has the property that for all $\\alpha \\in B$ , $\\Xi ^{\\pi ^\\Xi _h(0)}(\\alpha ) = 0$ , $F^{(0)}(\\alpha ) = g_0(\\alpha )$ , and $g_\\gamma (\\alpha ) > g_0(\\alpha )$ .", "Now suppose $F^{(\\gamma )}$ is not equal to $g_\\gamma $ for $\\mu $ -almost all $\\alpha $ .", "In the present situation, there are two ways this can happen: (i) There is a $D \\subseteq B$ with $D \\in \\mu $ so that for all $\\alpha \\in D$ 
, there exists some $(\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha ))$ so that $F(\\alpha ^{\\prime },\\beta ^{\\prime }) \\ge g_\\gamma (\\alpha )$ .", "As argued in Lemma REF , this can not occur.", "(ii) There is a $D \\subseteq B$ with $D \\in \\mu $ so that for all $\\alpha \\in D$ , $g_\\gamma (\\alpha ) = \\sup \\lbrace F(\\alpha ^{\\prime },\\beta ^{\\prime }) : (\\alpha ^{\\prime },\\beta ^{\\prime }) \\sqsubset (\\alpha ,\\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha ))\\rbrace $ .", "For each $n \\in \\omega $ , let $\\Phi _n : D \\rightarrow \\omega _1$ be defined as follows: Since $\\alpha \\in D \\subseteq B$ , one has that $F^{(0)}(\\alpha ) = g_0(\\alpha ) < g_\\gamma (\\alpha )$ .", "For each $n \\in \\omega $ and $\\alpha \\in D$ , let $\\Phi _n(\\alpha )$ be the least $\\beta ^{\\prime } < \\Xi ^{\\pi ^\\Xi _h(\\gamma )}(\\alpha )$ so that $F(\\alpha ,\\beta ^{\\prime }) > G_\\gamma (\\alpha ,n)$ .", "By Fact REF , there is some $\\gamma _n < \\gamma $ so that $\\Phi _n =_\\mu \\Xi ^{\\gamma _n}$ .", "By the induction hypothesis, $F^{\\Xi ^{\\pi _h^\\Xi (\\gamma _n)}} = F^{(\\gamma _n)} =_\\mu g_{\\gamma _n}$ .", "By construction, $\\sup \\lbrace [g_{\\gamma _n}]_\\mu : n \\in \\omega \\rbrace = [g_\\gamma ]_\\mu $ .", "However, by assumption, $\\langle [g_\\alpha ]_{\\mu } : \\alpha < \\epsilon _h^\\Xi \\rangle $ was discontinuous everywhere.", "This complete the proof of the lemma.", "Fact 5.17 Let $\\mu $ be a normal measure on a cardinal $\\kappa $ .", "Let $h : \\kappa \\rightarrow \\kappa $ be a block function possessing a Kunen function $\\Xi $ with respect to $\\mu $ .", "Suppose $F_0,F_1 \\in [\\omega _1]^{\\mathcal {T}^h}$ have the property that $F_0^{(\\beta )} =_\\mu F_1^{(\\beta )}$ for all $\\beta < \\epsilon ^\\Xi _h$ .", "Then for $\\mu $ -almost all $\\alpha $ , $F_0(\\alpha ,\\beta ) = F_1(\\alpha ,\\beta )$ for all $\\beta < h(\\alpha )$ .", "Suppose $A = \\lbrace \\alpha : 
(\\exists \\beta < h(\\alpha ))(F_0(\\alpha ,\\beta ) \\ne F_1(\\alpha ,\\beta ))\\rbrace \\in \\mu $ .", "Define $g : \\omega _1 \\rightarrow \\omega _1$ by $g(\\alpha )$ is the least $\\beta < h(\\alpha )$ so that $F_0(\\alpha ,\\beta ) \\ne F_1(\\alpha ,\\beta )$ .", "Since $g <_\\mu h$ , Fact REF implies there is some $\\gamma < \\epsilon ^\\Xi _h$ so that $g =_\\mu \\Xi ^{\\pi _h^\\Xi (\\gamma )}$ .", "Then $\\lnot (F_0^{(\\gamma )} =_\\mu F_1^{(\\gamma )})$ .", "This contradicts the assumptions.", "Fact 5.18 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "Let $\\alpha < \\omega _2$ .", "Let $A \\subseteq \\omega _1$ with $|A| = \\omega _1$ .", "Let $B = [A]^{\\omega _1} \\slash \\mu $ , which is a set of cardinality $\\omega _2$ .", "Let $h : \\omega _1 \\rightarrow \\omega _1$ have the property that $h(\\beta ) > 0$ for all $\\beta < \\omega _1$ and $[h]_\\mu = \\alpha $ .", "Let $\\Xi $ be a Kunen function for $h$ with respect to $\\mu $ .", "For all $I \\in [B]^\\alpha $ , there exists some $F: \\mathcal {T}^h \\rightarrow A$ which is order preserving and for all $\\gamma < \\alpha = \\epsilon _h^\\Xi $ , $[F^{(\\gamma )}]_\\mu = I(\\gamma )$ .", "Since $\\omega _2 = {}^{\\omega _1}\\omega _1\\slash \\mu $ is regular by Fact REF , $\\mathrm {rang}(I) < \\omega _2$ .", "Let $k \\in [\\omega _1]^{\\omega _1}$ be such that $\\mathrm {rang}(I) < [k]_\\mu $ .", "Let $\\Xi ^{\\prime }$ be a Kunen function for $k$ with respect to $\\mu $ .", "Let $J = \\lbrace \\beta \\in \\epsilon _k^{\\Xi ^{\\prime }} : (\\exists \\gamma < \\alpha )(I(\\gamma ) = [{\\Xi ^{\\prime }}^{\\pi _{k}^{\\Xi ^{\\prime }}(\\beta )}]_\\mu )\\rbrace $ .", "$J \\subseteq \\epsilon _k^{\\Xi ^{\\prime }}$ and $\\mathrm {ot}(J) = \\alpha = \\epsilon _h^\\Xi $ .", "Let $\\rho : \\epsilon ^\\Xi _h \\rightarrow J$ be the unique order isomorphism.", "Define $g^{\\prime }_\\gamma = {\\Xi ^{\\prime }}^{\\pi _k^{\\Xi ^{\\prime }}(\\rho (\\gamma ))}$ 
Since $[g'_\gamma]_\mu = I(\gamma)$ and $I(\gamma) \in B = [A]^{\omega_1} \slash \mu$, the set $B_\gamma = \lbrace \alpha \in \omega_1 : g'_\gamma(\alpha) \in A\rbrace \in \mu$. By Fact REF, $g'_\gamma =_\mu g'_\gamma \circ \mathsf{enum}_{B_\gamma}$. Let $g_\gamma = g'_\gamma \circ \mathsf{enum}_{B_\gamma}$. Then $[g_\gamma]_\mu = I(\gamma)$ and $\mathrm{rang}(g_\gamma) \subseteq A$. The result now follows from Lemma REF.

Theorem 5.19 (Martin-Paris) Assume $\mathsf{ZF} + \mathsf{AD}$. Let $\mu$ be the club measure on $\omega_1$. Then for all $\alpha < \omega_2$, the partition relation $\omega_2 \rightarrow (\omega_2)^\alpha_2$ holds. That is, $\omega_2$ is a weak partition cardinal.

By Fact REF, $\omega_2$ is isomorphic to ${}^{\omega_1}\omega_1 \slash \mu$. One will identify $\omega_2$ with ${}^{\omega_1}\omega_1 \slash \mu$. Let $\alpha < \omega_2$ and let $P : [\omega_2]^\alpha \rightarrow 2$ be a partition. Let $h : \omega_1 \rightarrow \omega_1$ be such that $[h]_\mu = \alpha$ and fix a Kunen function $\Xi$ for $h$. Define a partition $Q : [\omega_1]^{\mathcal{T}^h} \rightarrow 2$ by $Q(F) = P(\mathsf{funct}(F))$, where $\mathsf{funct}(F)$ is the $\alpha$-sequence of ordinals in $\omega_2$ defined in Definition REF (relative to $\Xi$). Since $\mathcal{T}^h$ is order isomorphic to $\omega_1$, $\omega_1 \rightarrow (\omega_1)^{\omega_1}_2$ implies that there is an $A \subseteq \omega_1$ with $|A| = \omega_1$ which is homogeneous for $Q$. Without loss of generality, suppose $A$ is homogeneous for $Q$ taking value 0. Note that $[A]^{\omega_1} \slash \mu$ has cardinality ${}^{\omega_1}\omega_1 \slash \mu = \omega_2$. Let $B$ denote $[A]^{\omega_1} \slash \mu$. Let $I \in [B]^\alpha$. By Fact REF, let $F : \mathcal{T}^h \rightarrow A$ be such that $[F^{(\gamma)}]_\mu = I(\gamma)$ for all $\gamma < \alpha$. Since $F \in [A]^{\mathcal{T}^h}$, one has $0 = Q(F) = P(\mathsf{funct}(F)) = P(I)$ since $I = \mathsf{funct}(F)$. Thus $B$ is homogeneous for $P$ taking value 0. The proof is complete.

By Fact REF, a suitable form of the ordinary partition property implies the corresponding correct type version of the partition property, which uses a club set as its version of the homogeneous set. In particular, the ordinary weak partition property for $\omega_2$ proved in Theorem REF implies the correct type club version of the weak partition property, that is, $\omega_2 \rightarrow_* (\omega_2)^\alpha_2$ for all $\alpha < \omega_2$.

Corollary 5.20 Assume $\mathsf{ZF} + \mathsf{AD}$. $W^{\omega_2}_\omega$ and $W^{\omega_2}_{\omega_1}$ are the only two $\omega_2$-complete normal ultrafilters on $\omega_2$.

By Fact REF.

Fact 5.21 Assume $\mathsf{ZF} + \mathsf{AD}$. Let $\mu$ be the club measure on $\omega_1$. If $C \subseteq \omega_1$ is a club subset of $\omega_1$, then $[C]^{\omega_1} \slash \mu$ is a club subset of $\omega_2$. If $D \subseteq \omega_2$ is club, then there is a club $C \subseteq \omega_1$ so that $[C]^{\omega_1} \slash \mu \subseteq D$.

Let $\epsilon < \omega_2$. Suppose $\langle \nu_\gamma : \gamma < \epsilon\rangle$ is an increasing sequence in $[C]^{\omega_1} \slash \mu$. Let $h : \omega_1 \rightarrow \omega_1$ with $h(\alpha) > 0$ for all $\alpha \in \omega_1$ be such that $\epsilon = [h]_\mu$. Let $\Xi$ be a Kunen function for $h$ with respect to $\mu$. By Fact REF, there is an order preserving function $F : \mathcal{T}^h \rightarrow C$ so that $[F^{(\gamma)}]_\mu = \nu_\gamma$. Let $\ell : \omega_1 \rightarrow \omega_1$ be defined by
$\\ell (\\alpha ) = \\sup \\lbrace F(\\alpha ,\\beta ) : \\beta < h(\\alpha )\\rbrace $ .", "Since $\\mathrm {rang}(F) \\subseteq C$ , one has that $\\ell (\\alpha ) \\in C$ for each $\\alpha \\in \\omega _1$ .", "Suppose $j : \\omega _1 \\rightarrow \\omega _1$ is such that $j <_\\mu \\ell $ .", "Let $B = \\lbrace \\alpha \\in \\omega : j(\\alpha ) < \\ell (\\alpha )\\rbrace \\in \\mu $ .", "For $\\alpha \\in B$ , let $p(\\alpha )$ be the least $\\beta < h(\\alpha )$ so that $j(\\alpha ) < F(\\alpha ,\\beta )$ .", "For $\\alpha \\notin B$ , let $p(\\alpha ) = 0$ .", "Since $p <_\\mu h$ , Fact REF implies there $\\gamma < \\alpha $ so that $p =_\\mu \\Xi ^{\\pi ^\\Xi _h(\\gamma )}$ .", "Then $j <_\\mu F^{(\\gamma )}$ .", "Thus $[j]_\\mu < \\nu _{\\gamma }$ .", "This establishes that $[\\ell ]_\\mu $ is the limit of $\\langle \\nu _\\gamma : \\gamma < \\epsilon \\rangle $ .", "Since $\\ell \\in [C]^{\\omega _1}$ , this shows the supremum belongs to $[C]^{\\omega _1}\\slash \\mu $ .", "This shows that $[C]^{\\omega _1} \\slash \\mu $ is closed.", "It is easy to see that $[C]^{\\omega _1} \\slash \\mu $ is unbounded.", "Now suppose $D \\subseteq \\omega _2$ is club.", "Let $\\mathcal {T}^2 = (\\omega _1 \\times 2, \\sqsubset )$ where $\\sqsubset $ is the lexicographic ordering.", "If $F : \\mathcal {T}^2 \\rightarrow \\omega _1$ is an increasing function, then let $F_0,F_1 \\in [\\omega _1]^{\\omega _1}$ be defined by $F_i(\\alpha ) = F(\\alpha ,i)$ .", "Note that $F_0 <_\\mu F_1$ .", "Define $P : [\\omega _1]^{\\mathcal {T}^2} \\rightarrow 2$ by $P(F) = 0$ if and only if $(\\exists \\alpha \\in D)([F_0]_\\mu < \\alpha < [F_1]_\\mu )$ .", "By $\\omega _1 \\rightarrow _* (\\omega _1)^{\\omega _1}_2$ (the correct type strong partition property), there is a club $C \\subseteq \\omega _1$ which is homogeneous for $P$ (for all functions $F \\in [C]^{\\mathcal {T}^2}_*$ of correct type).", "Suppose $C$ is homogeneous for $P$ taking value 1.", "Fix some $\\alpha _0 \\in 
[C]^{\\omega _1}_* \\slash \\mu $ .", "Let $f_0 \\in [C]^{\\omega _1}_*$ with $[f_0]_\\mu = \\alpha _0$ .", "Pick $\\beta < \\omega _2$ with $\\alpha _0 < \\beta $ .", "Since $[C]^{\\omega _1}_* \\slash \\mu $ is cofinal through $\\omega _2$ , pick some $\\alpha _1 \\in [C]^{\\omega _1}_* \\slash \\mu $ with $\\beta < \\alpha _1$ .", "Let $f_1 \\in [C]^{\\omega _1}_*$ be such that $[f_1]_\\mu = \\alpha _1$ .", "By an argument as in Fact REF with some additional care to maintain the correct type, there is a function $F \\in [C]^{\\mathcal {T}^2}_*$ so that $[F_0]_\\mu = \\alpha _0$ and $[F_1]_\\mu = \\alpha _1 > \\beta $ .", "Since $P(F) = 1$ , there is no element of $D$ between $\\alpha _0$ and $\\alpha _1$ .", "In particular, there are no elements of $D$ between $\\alpha _0$ and $\\beta $ .", "However since $\\beta $ is arbitrary, this implies $D \\subseteq \\alpha _0$ .", "This contradicts that $D$ is unbounded.", "This shows that $C$ is homogeneous for $P$ taking value 0.", "Let $\\tilde{C} = \\lbrace \\alpha \\in C : \\mathsf {enum}_C(\\alpha ) = \\alpha \\rbrace $ which is a club subset of $\\omega _1$ .", "Let $f \\in [\\tilde{C}]^{\\omega _1}$ .", "Suppose $h <_\\mu f$ .", "For each $\\alpha < \\omega _1$ , let $\\gamma _\\alpha $ be least so $h(\\alpha ) < \\mathsf {enum}_C(\\gamma _\\alpha )$ .", "Let $f_0(\\alpha ) = \\mathsf {enum}_C(\\gamma _\\alpha + \\omega )$ and $f_1 = \\mathsf {enum}_C(\\gamma _\\alpha + \\omega + \\omega )$ .", "Since $f(\\alpha ) \\in \\tilde{C}$ , $f_0(\\alpha ) < f_1(\\alpha ) < f(\\alpha )$ .", "Let $g_0(\\alpha , n) = \\mathsf {enum}_C(\\gamma _\\alpha + n)$ and $g_1(\\alpha , n) = \\mathsf {enum}_C(\\gamma _\\alpha + \\omega + n)$ .", "Note that $h <_\\mu f_0 <_\\mu f_1 <_\\mu f$ , $f_0, f_1 \\in [C]^{\\omega _1}_*$ , and $g_0,g_1$ witnesses that $f_0,f_1$ are functions of the correct type.", "As above, one can find some $F \\in [C]^{\\mathcal {T}^2}_*$ so that $F_0 =_\\mu f_0$ and $F_1 =_\\mu f_1$ .", "Since $P(F) = 0$ , 
there is some $\\alpha \\in D$ so that $[h]_\\mu < [f_0]_\\mu < \\alpha < [f_1]_\\mu < [f]_\\mu $ .", "Since $D$ is a club and $h<_\\mu f$ was arbitrary, this shows that $[f]_\\mu \\in D$ .", "It has been shown that for all $D \\subseteq \\omega _1$ , there is some club $\\tilde{C} \\subseteq \\omega _1$ so that $[\\tilde{C}]^{\\omega _1}\\slash \\mu \\subseteq D$ ." ], [ "Failure of the Strong Partition Property at $\\omega _2$", "Fact 6.1 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ be the club measure on $\\omega _1$ .", "Suppose $f \\in {}^{\\omega _1}\\omega _1$ is a function of uniform cofinality $\\omega $ .", "Then $\\mathrm {cof}([f]_\\mu ) = \\omega $ .", "Suppose $f$ has uniform cofinality $\\omega $ .", "Let $g : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ be such that for all $\\alpha < \\omega _1$ , $f(\\alpha ) = \\sup \\lbrace g(\\alpha , n) : n \\in \\omega \\rbrace $ .", "Let $f_n : \\omega _1 \\rightarrow \\omega _1$ be defined by $f_n(\\alpha ) = g(\\alpha ,n)$ .", "Then $m < n$ implies $f_m <_\\mu f_n$ .", "Suppose $h <_\\mu f$ .", "Let $A = \\lbrace \\alpha : h(\\alpha ) < f(\\alpha )\\rbrace $ .", "Define $k(\\alpha )$ to be the least $n \\in \\omega $ so that $h(\\alpha ) < f_n(\\alpha )$ whenever $\\alpha \\in A$ .", "By the countable additivity of $\\mu $ , there is some $n^*$ so that $k(\\alpha )=n^*$ for $\\mu $ -almost all $\\alpha \\in \\kappa $ .", "Then $h <_\\mu f_{n^*}$ .", "Thus $[f]_\\mu = \\sup \\lbrace [f_n]_\\mu : n \\in \\omega \\rbrace $ .", "So $\\mathrm {cof}([f]_\\mu ) = \\omega $ .", "Fact 6.2 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ be the club measure on $\\omega _1$ .", "Let $C \\subseteq \\omega _1$ be a club.", "Then $[C]^{\\omega _1}_* \\slash \\mu $ is an $\\omega $ -club subset of $\\omega _2$ .", "Moreover, for every $\\omega $ -club $D \\subseteq \\omega _2$ , there is a club $C \\subseteq \\omega _1$ so that $[C]^{\\omega _1}_* \\slash \\mu \\subseteq D$ .", "It is clear that 
$[C]^{\\omega _1}_* \\slash \\mu $ is unbounded.", "Suppose $\\langle \\nu _n : n \\in \\omega \\rangle $ is an increasing $\\omega $ -sequence in $[C]^{\\omega _1}_* \\slash \\mu $ .", "By $\\mathsf {AC}_\\mathbb {R}^\\omega $ , let $\\langle g_n : n \\in \\omega \\rangle $ be such that $\\nu _n = [g_n]_\\mu $ and each $g_n$ is of the correct type.", "Using $\\mathsf {AC}_\\omega ^\\mathbb {R}$ , one can also select a sequence $\\langle k_n : n \\in \\omega \\rangle $ so that $k_n$ witnesses that $g_n$ has uniform cofinality $\\omega $ .", "By Fact REF , one may assume $g_n : \\omega _1 \\rightarrow C$ .", "Using Lemma REF , let $F : \\mathcal {T}^\\omega \\rightarrow C$ be an order preserving map of the correct type so that for all $n \\in \\omega $ , $F^n =_\\mu g_n$ .", "Let $g : \\omega _1 \\rightarrow C$ be defined by $g(\\alpha ) = \\sup \\lbrace F^n(\\alpha ) : n \\in \\omega \\rbrace $ .", "Note that $g$ is of the correct type.", "As before, one can check that $[g]_\\mu = \\sup \\lbrace \\nu _n : n \\in \\omega \\rbrace $ and $[g]_\\nu \\in [C]^{\\omega _1}_* \\slash \\mu $ .", "This shows that $[C]^{\\omega _1}_* \\slash \\mu $ is an $\\omega $ -club subset of $\\omega _2$ .", "Let $D \\subseteq \\omega _2$ be an $\\omega $ -club subset of $\\omega _2$ .", "Define $P : [\\omega _1]^{\\omega _1}_* \\rightarrow 2$ by $P(f) = {\\left\\lbrace \\begin{array}{ll}0 & \\quad [f]_\\mu \\notin D \\\\1 & \\quad [f]_\\mu \\in D\\end{array}\\right.", "}$ By the correct type partition property $\\omega _1 \\rightarrow _* (\\omega _1)^{\\omega _1}_*$ , there is a club $C \\subseteq \\omega _1$ which is homogeneous for $P$ (in the correct type sense).", "Since it was shown that $[C]^{\\omega _1}_* \\slash \\mu $ is an $\\omega $ -club, $([C]^{\\omega _1}_* \\slash \\mu ) \\cap D \\ne \\emptyset $ .", "Thus $C$ must be homogeneous for $D$ taking value 1.", "It has been shown that every $\\omega $ -club $D \\subseteq \\omega _2$ contains an $\\omega $ -club of the form 
$[C]^{\\omega _1}_* \\slash \\mu $ .", "Fact 6.3 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "Let $C \\subseteq \\omega _1$ be club.", "Let $B = [C]^{\\omega _1}_* \\slash \\mu $ which is an $\\omega $ -club subset of $\\omega _2$ .", "Let $\\epsilon < \\omega _2$ .", "Let $h : \\omega _1 \\rightarrow \\omega _1$ with $h(\\alpha ) > 0$ for all $\\alpha < \\omega _1$ and $[h]_\\mu = \\epsilon $ .", "Let $\\Xi $ be a Kunen function for $h$ .", "Let $\\mathcal {F} \\in [B]^{\\epsilon }_*$ (be of correct type).", "Let $\\mathcal {G} : [\\omega _2]^{\\epsilon } \\times \\omega $ be a function witnessing that $\\mathcal {F}$ has uniform cofinality $\\omega $ with the property that for all $\\alpha _0 < \\alpha _1 < \\epsilon $ and $m,n \\in \\omega $ , $\\mathcal {G}(\\alpha _0,m) < \\mathcal {G}(\\alpha _1,n)$ .", "Then there is a sequence $\\langle G_n : n \\in \\omega \\rangle $ with each $G_n: \\mathcal {T}^{h} \\rightarrow \\omega _1$ so that $\\mathcal {G}(\\alpha ,n) = [G_n^{(\\alpha )}]_\\mu $ .", "There is an $F \\in [C]^{\\mathcal {T}^{h}}_*$ so that for all $\\alpha < \\epsilon $ , $[F^{(\\alpha )}]_\\mu = \\mathcal {F}(\\alpha )$ .", "For each $n$ , let $\\mathcal {G}_n : \\epsilon \\rightarrow \\omega _1$ be defined by $\\mathcal {G}_n(\\alpha ) = \\mathcal {G}(\\alpha ,n)$ .", "Using Fact REF on $h$ , $\\Xi $ , and $\\mathcal {G}_n$ , one obtains $G_n$ .", "Fix $\\gamma < \\epsilon $ .", "Define $g^{\\prime }_{\\gamma ,n} : \\omega _1 \\rightarrow \\omega _1$ by defined by $g^{\\prime }_{\\gamma ,n}(\\alpha ) = {\\left\\lbrace \\begin{array}{ll}G_n^{(\\gamma )}(\\alpha ) & \\quad \\Xi ^{\\pi _h^\\Xi (\\gamma )}(\\alpha ) < h(\\alpha )\\\\0 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Let $f^{\\prime }_\\gamma : \\omega _1 \\rightarrow \\omega _1$ be defined by $f^{\\prime }_\\gamma (\\alpha ) = \\sup \\lbrace g^{\\prime }_{\\gamma ,n}(\\alpha ) : n \\in \\omega \\rbrace $ .", "Note that 
$\\mathcal {F}(\\gamma ) = [f^{\\prime }_\\gamma ]_{\\mu }$ .", "Claim 1: The set $D_\\gamma $ of $\\alpha < \\omega _1$ so that $f^{\\prime }_\\gamma $ is discontinuous at $\\alpha $ belongs to $\\mu $ .", "Suppose $\\omega _1 \\setminus D_\\gamma \\in \\mu $ and hence contains a club $E_\\gamma $ .", "Since $\\mathcal {F}(\\gamma ) \\in [C]^{\\omega _1}_* \\slash \\mu $ , there is some $\\bar{f}_\\gamma \\in [C]^{\\omega _1}_*$ (of the correct type) so that $f^{\\prime }_\\gamma =_\\mu \\bar{f}_\\gamma $ .", "Let $J_\\gamma = \\lbrace \\alpha : \\bar{f}_\\gamma (\\alpha ) = f^{\\prime }_\\gamma (\\alpha )\\rbrace \\in \\mu $ .", "Let $K_\\gamma \\subseteq J_\\gamma $ be a club.", "Then $E_\\gamma \\cap K_\\gamma $ is a club subset of $\\omega _1$ .", "Let $\\langle \\lambda _n : n \\in \\omega \\rangle $ be an increasing sequence in $E_\\gamma \\cap K_\\gamma $ .", "Then $\\lambda = \\sup \\lbrace \\lambda _n : n \\in \\omega \\rangle \\in E_\\gamma \\cap K_\\gamma $ since $E_\\gamma \\cap K_\\gamma $ is a club.", "Since $\\lambda _n$ and $\\lambda $ belong to $K_\\gamma \\subseteq J_\\gamma $ , $\\bar{f}_\\gamma (\\lambda _n) = f^{\\prime }_\\gamma (\\lambda _n)$ and $\\bar{f}_\\gamma (\\lambda ) = f^{\\prime }_\\gamma (\\lambda )$ .", "Since $\\lambda \\in E_\\gamma $ , one has that $f^{\\prime }_\\gamma $ is continuous at $\\lambda $ .", "However, the previous statement implies that $\\bar{f}_\\gamma $ is also continuous at $\\lambda $ .", "But $\\bar{f}_\\gamma $ is discontinuous everywhere since it is of the correct type.", "This proves the claim.", "Since $[f^{\\prime }_\\gamma ]_\\mu = \\mathcal {F}(\\gamma ) \\in [C]^{\\omega _1}_*\\slash \\mu $ , there is some $\\bar{f}_\\gamma \\in [C]^{\\omega _1}_*$ so that $f^{\\prime }_\\gamma =_\\mu \\bar{f}_\\gamma $ .", "Thus $P_\\gamma = \\lbrace \\alpha \\in \\omega _1: f_\\gamma ^{\\prime }(\\alpha ) \\in C\\rbrace \\in \\mu $ .", "Then $Q_\\gamma = P_\\gamma \\cap D_\\gamma \\in \\mu $ .", "Let $f_\\gamma = 
f^{\\prime }_\\gamma \\circ \\mathsf {enum}_{Q_\\gamma }$ .", "By Fact REF , $f^{\\prime }_\\gamma =_\\mu f_\\gamma $ .", "Thus $[f_\\gamma ]_\\mu = \\mathcal {F}(\\gamma )$ .", "Note that $f_\\gamma $ is discontinuous everywhere.", "Define $g_\\gamma : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ by $g_\\gamma (\\alpha ,n) = g^{\\prime }_{\\gamma ,n}(\\mathsf {enum}_{Q_\\gamma }(\\alpha ))$ .", "$g_\\gamma $ witnesses that $f_\\gamma $ has uniform cofinality $\\omega $ .", "Thus $f_\\gamma $ is a function of the correct type.", "By the uniformity of the construction, one has now produced a sequence $\\langle f_\\gamma : \\gamma < \\epsilon \\rangle $ of functions of the correct type and $\\langle g_\\gamma : \\gamma < \\epsilon \\rangle $ so that $[f_\\gamma ]_\\mu = \\mathcal {F}(\\gamma )$ and $g_\\gamma $ witnesses the uniform cofinality of $f_\\gamma $ .", "Now apply Lemma REF to $h$ , $\\Xi $ , $\\langle f_\\gamma : \\gamma < \\epsilon \\rangle $ , and $\\langle g_\\gamma : \\gamma < \\epsilon \\rangle $ to obtained the desired function $F \\in [C]^{\\mathcal {T}^{h}}_*$ .", "Fact 6.4 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ be the club measure on $\\omega _1$ .", "Suppose $f$ has uniform cofinality $\\mathsf {id}$ .", "Then $[f]_\\mu $ (as an ordinal in $\\omega _2$ ) has cofinality $\\omega _1$ .", "Let $T^\\mathsf {id} = \\lbrace (\\alpha ,\\beta ) : \\beta < \\alpha \\rbrace $ .", "Let $g : T^\\mathsf {id} \\rightarrow \\omega _1$ witness that $f$ has uniform cofinality $\\mathsf {id}$ .", "For each $\\beta < \\omega _1$ , let $f_\\beta (\\gamma ) = {\\left\\lbrace \\begin{array}{ll}g(\\gamma ,\\beta ) & \\quad \\gamma > \\beta \\\\0 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ One can show that $\\langle [f_\\beta ]_\\mu : \\beta < \\omega _1\\rangle $ is an increasing sequence cofinal below $[f]_\\mu $ .", "Fact 6.5 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "Let $f : 
\\omega _1 \\rightarrow \\omega _1$ be a function so that $f(\\alpha )$ is a limit ordinal for $\\mu $ -almost all $\\alpha $ .", "Then either $f$ has uniform cofinality $\\omega $ or uniform cofinality $\\mathsf {id}$ $\\mu $ -almost everywhere but not both.", "Let $\\Xi $ be a Kunen function for $f$ .", "Let $A = \\lbrace \\alpha : f(\\alpha ) \\in \\mathrm {Lim}\\rbrace \\in \\mu $ .", "By Definition REF , for all $\\alpha \\in K^\\Xi _f$ , $f(\\alpha ) < \\delta _\\alpha ^\\Xi = \\sup \\lbrace \\Xi (\\alpha ,\\beta ) : \\beta < \\alpha \\rbrace $ .", "Fix $\\alpha \\in K^\\Xi _f \\cap A$ , define $\\epsilon _0^\\alpha $ be the least $\\epsilon < \\alpha $ so that $\\Xi (\\alpha ,\\epsilon ) < f(\\alpha )$ .", "Suppose $\\beta < \\alpha $ and $\\epsilon ^\\alpha _{\\beta ^{\\prime }}$ has been defined for all $\\beta ^{\\prime } < \\beta < \\alpha $ .", "If $\\sup \\lbrace \\Xi _\\alpha (\\epsilon ^\\alpha _{\\beta ^{\\prime }}) : \\beta ^{\\prime } < \\beta \\rbrace = f(\\alpha )$ , then let $\\alpha ^*$ denote this $\\beta $ and declare the construction to have terminated.", "Otherwise, let $\\epsilon ^\\alpha _\\beta $ be the least $\\epsilon < \\alpha $ so that $\\sup \\lbrace \\Xi _\\alpha (\\epsilon ^\\alpha _{\\beta ^{\\prime }}) : \\beta ^{\\prime } < \\beta \\rbrace < \\Xi _\\alpha (\\epsilon ) < f(\\alpha )$ .", "This defines a sequence $\\langle \\epsilon ^\\alpha _\\beta : \\beta < \\alpha ^*\\rangle $ where $\\alpha ^* \\le \\alpha $ .", "Note that $\\langle \\epsilon ^\\alpha _\\beta : \\beta < \\alpha ^*\\rangle $ is an increasing sequence and $\\sup \\lbrace \\Xi _\\alpha (\\epsilon _\\beta ^\\alpha ) : \\beta < \\alpha ^*\\rbrace = f(\\alpha )$ .", "Consider the function $\\Phi : K_f^\\Xi \\cap A \\rightarrow \\omega _1$ by $\\Phi (\\alpha ) = \\alpha ^*$ .", "Let $B = \\lbrace \\alpha \\in A \\cap K_f^\\Xi : \\Phi (\\alpha ) = \\alpha ^* < \\alpha \\rbrace $ .", "(Case I) $B \\in \\mu $ .", "Then $\\Phi $ is a regressive function.", "Since 
$\\mu $ is normal, there is a $C \\subseteq B$ and $\\delta < \\omega _1$ so that $C \\in \\mu $ and $\\Phi (\\alpha ) = \\delta $ for all $\\alpha \\in C$ .", "Let $\\phi : \\omega \\rightarrow \\delta $ be a cofinal sequence.", "Let $g : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ be defined by $g(\\alpha ,n) = {\\left\\lbrace \\begin{array}{ll}n & \\quad \\alpha \\notin C \\\\\\Xi _\\alpha (\\epsilon ^\\alpha _{\\phi (n)}) & \\quad \\alpha \\in C\\end{array}\\right.", "}$ The function $g$ witnesses that $f$ is $\\mu $ -almost everywhere a function of uniform cofinality $\\omega $ .", "(Case II) $B \\notin \\mu $ .", "Let $C = (K_f^\\Xi \\cap A)\\setminus B$ .", "For all $\\alpha \\in C$ , $\\Phi (\\alpha ) = \\alpha ^* = \\alpha $ .", "Let $T^\\mathsf {id} = \\lbrace (\\alpha ,\\beta ) : \\alpha < \\omega _1 \\wedge \\beta < \\alpha \\rbrace $ .", "Define $g : T^\\mathsf {id} \\rightarrow \\omega _1$ by $g(\\alpha ,\\beta ) = {\\left\\lbrace \\begin{array}{ll}\\beta & \\quad \\alpha \\notin C \\\\\\Xi _\\alpha (\\epsilon ^\\alpha _\\beta ) & \\quad \\alpha \\in C\\end{array}\\right.", "}$ Then $g$ witnesses that $f$ has uniform cofinality $\\mathsf {id}$ $\\mu $ -almost everywhere.", "Fact REF implies that if $f$ has uniform cofinality $\\omega $ , then $[f]_\\mu $ has cofinality $\\omega $ .", "Fact REF implies that if $f$ has uniform cofinality $\\mathsf {id}$ , then $[f]_\\mu $ has cofinality $\\omega _1$ .", "Since $\\omega _1$ is regular under $\\mathsf {AD}$ , $f$ cannot have both uniform cofinality.", "In the previous result, it is shown that $f$ cannot have uniform cofinality $\\omega $ and $\\mathsf {id}$ by showing these uniform cofinalities correspond to different cofinalities of $[f]_\\mu $ .", "This is however not the case in general.", "The following argument avoids considering the function in the ultrapower.", "This argument can be generalized to show functions from $f : [\\omega _1]^n \\rightarrow \\omega _1$ can only have one uniform 
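As a concrete illustration of the two uniform cofinalities in this dichotomy, consider the following added example (not from the original argument; compare Facts 6.1 and 6.4):

```latex
% Added illustrative example (not part of the original argument).
\begin{itemize}
  \item Let $f(\alpha) = \alpha + \omega$. Then $g(\alpha,n) = \alpha + n$
        witnesses that $f$ has uniform cofinality $\omega$, and one can check
        that $[f]_\mu = \omega_1 + \omega$, so $\mathrm{cof}([f]_\mu) = \omega$.
  \item Let $f = \mathsf{id}$, so that $f(\alpha) = \alpha$ is a limit ordinal
        for $\mu$-almost all $\alpha$. Then $g(\alpha,\beta) = \beta$ for
        $\beta < \alpha$ witnesses that $f$ has uniform cofinality $\mathsf{id}$,
        and by the normality of $\mu$, $[\mathsf{id}]_\mu = \omega_1$, so
        $\mathrm{cof}([f]_\mu) = \omega_1$.
\end{itemize}
```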
Fact 6.6 Assume $\mathsf{ZF} + \mathsf{AD}$. Every function $f : \omega_1 \rightarrow \omega_1$ has uniform cofinality $\omega$ or $\mathsf{id}$ $\mu$-almost everywhere, but not both.

This was already proved in Fact REF, so one will just give an argument, not involving the ultrapower, that no function can have both uniform cofinalities. Suppose that $f : \omega_1 \rightarrow \omega_1$ has both uniform cofinality $\omega$ and uniform cofinality $\mathsf{id}$ $\mu$-almost everywhere. Let $g_\omega : \omega_1 \times \omega \rightarrow \omega_1$ witness that $f$ has uniform cofinality $\omega$ $\mu$-almost everywhere. Let $g_\mathsf{id} : \omega_1 \times \omega_1 \rightarrow \omega_1$ witness that $f$ has uniform cofinality $\mathsf{id}$ $\mu$-almost everywhere. Let $A_\omega \in \mu$ be the set of $\alpha < \omega_1$ so that $f(\alpha) = \sup\lbrace g_\omega(\alpha,n) : n \in \omega\rbrace$. Let $A_\mathsf{id} \in \mu$ be the set of $\alpha < \omega_1$ so that $f(\alpha) = \sup\lbrace g_\mathsf{id}(\alpha,\beta) : \beta < \alpha\rbrace$. Let $A = A_\omega \cap A_\mathsf{id}$, which also belongs to $\mu$. For each $\beta \in A$ and $\alpha < \beta$, let $n_{\alpha,\beta}$ be the least $n \in \omega$ so that $g_\omega(\beta,n) > g_\mathsf{id}(\beta,\alpha)$, which exists since $\beta \in A$. Consider the function $\Phi : [A]^2 \rightarrow \omega$ defined by $\Phi(\alpha,\beta) = n_{\alpha,\beta}$. By the weak partition property and the countable additivity of $\mu$, there is some $B \subseteq A$ with $B \in \mu$ and some $n \in \omega$ so that for all $(\alpha,\beta) \in [B]^2$, $n_{\alpha,\beta} = n$. Now let $\beta \in B$ be a limit point of $B$, that is, $\sup(B \cap \beta) = \beta$. For each $\alpha < \beta$ with $\alpha \in B$, one has that $n_{\alpha,\beta} = n$. Then $f(\beta) = \sup\lbrace g_\mathsf{id}(\beta,\alpha) : \alpha \in \beta\rbrace \le g_\omega(\beta,n) < f(\beta)$. Contradiction.

Next one will consider the ultrapower $\mathrm{ult}(V,\mu)$, where $\mu$ is the club measure on $\omega_1$. One needs some care when working with this structure, as one may not have Łoś's theorem without $\mathsf{AC}$. It will be shown later that Łoś's theorem fails for this ultrapower and, in fact, that $\mathrm{ult}(V,\mu)$ is not a model of $\mathsf{ZF}$.

Fact 6.7 Assume $\mathsf{ZF} + \mathsf{AD}$. Let $\mu$ denote the club measure on $\omega_1$. Suppose $f : \omega_1 \rightarrow V$ is such that $\mathrm{ult}(V,\mu) \models [f]_\mu \subseteq \omega_2$. Then there is an $f' : \omega_1 \rightarrow \mathcal{P}(\omega_1)$ so that $[f']_\mu = [f]_\mu$.

By Corollary REF, one knows that $\omega_2 = \prod_{\omega_1}\omega_1 \slash \mu$. Thus in $\mathrm{ult}(V,\mu)$, each $\zeta < \omega_2$ is represented by some $h : \omega_1 \rightarrow \omega_1$. Now let $f'(\alpha) = f(\alpha) \cap \omega_1$. The claim is that $[f']_\mu = [f]_\mu$. Suppose $h : \omega_1 \rightarrow V$ is a function so that $\mathrm{ult}(V,\mu) \models [h]_\mu \in [f]_\mu$. Since $\mathrm{ult}(V,\mu) \models [f]_\mu \subseteq \omega_2$, one must have that $A = \lbrace \alpha \in \omega_1 : h(\alpha) \in f(\alpha) \cap \omega_1\rbrace \in \mu$. Let $h' : \omega_1 \rightarrow \omega_1$ be defined by $h'(\alpha) = h(\alpha)$ if $\alpha \in A$ and $h'(\alpha) = 0$ otherwise. Note that $[h]_\mu = [h']_\mu$ and $[h']_\mu \in [f']_\mu$. This shows that $[f]_\mu \subseteq [f']_\mu$. It is clear that $[f']_\mu \subseteq [f]_\mu$. Thus $[f]_\mu = [f']_\mu$.

Fact 6.8 Assume $\mathsf{ZF} + \mathsf{AD}$ and let $\mu$ be the club measure on $\omega_1$. Suppose $h : \omega_1 \rightarrow \mathcal{P}(\omega_1)$ has the property that for all $\alpha < \omega_1$, $|h(\alpha)| = \omega_1$ and $\mathsf{enum}_{h(\alpha)}$ is a function of the correct type. Suppose there is a function $G$ so that for each $\alpha < \omega_1$, $G(\alpha) : \omega_1 \times \omega \rightarrow \omega_1$ is a witness to $\mathsf{enum}_{h(\alpha)}$ having uniform cofinality $\omega$. Then $\mathsf{enum}_{[h]_\mu} : \omega_2 \rightarrow \omega_2$ is a function of the correct type.

For each $\xi < \omega_1$, let $g_\xi : \omega_1 \times \omega \rightarrow \omega_1$ be defined by $g_\xi = G(\xi)$. Hence $g_\xi$ is a witness to $\mathsf{enum}_{h(\xi)}$ having uniform cofinality $\omega$. For each $\xi < \omega_2$, let $p_\xi : \omega_1 \rightarrow \omega_1$ have the property that for all $\alpha < \omega_1$, $p_\xi(\alpha) \in h(\alpha)$ and $[p_\xi]_\mu$ represents the $\xi^\text{th}$ element of $[h]_\mu$. (Note that one does not have a uniform procedure for finding $p_\xi$ as $\xi$ ranges over the ordinals below $\omega_2$.) Define $k_{\xi,n} : \omega_1 \rightarrow \omega_1$ by $k_{\xi,n}(\alpha) = g_\alpha(\mathsf{enum}_{h(\alpha)}^{-1}(p_\xi(\alpha)),n)$. Let $\delta_{\xi,n} = [k_{\xi,n}]_\mu$. Note that if $p'_\xi =_\mu p_\xi$ and $k'_{\xi,n}$ is defined in the same manner using $p'_\xi$ instead of $p_\xi$, then $k'_{\xi,n} =_\mu k_{\xi,n}$. This shows that $\delta_{\xi,n}$ is well defined, independently of the choice of $p_\xi$. Define $r : \omega_2 \times \omega \rightarrow \omega_2$ by $r(\xi,n) = \delta_{\xi,n}$. One can check that
$r$ witnesses that $\\mathsf {enum}_{[h]_\\mu }$ has uniform cofinality $\\omega $ .", "Let $\\zeta \\in [h]_\\mu $ .", "Let $\\ell : \\omega _1 \\rightarrow \\omega _1$ be such that $[\\ell ]_\\mu = \\zeta $ .", "One may assume that for all $\\alpha $ , $\\ell (\\alpha ) \\in h(\\alpha )$ .", "Let $\\iota : \\omega _1 \\rightarrow \\omega _1$ be defined by $\\iota (\\alpha ) = \\sup (h(\\alpha ) \\cap \\ell (\\alpha ))$ .", "Note that $\\iota (\\alpha ) < \\ell (\\alpha )$ since $\\mathsf {enum}_{h(\\alpha )}$ was assumed to be discontinuous everywhere.", "One can check that every element of $[h]_\\mu $ which is below $[\\ell ]_\\mu $ is at most $[\\iota ]_\\mu < [\\ell ]_\\mu $ .", "Thus $\\mathsf {enum}_{[h]_\\mu }$ is discontinuous everywhere.", "Fact 6.9 Suppose $h : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ with $|h(\\alpha )| = \\omega _1$ for all $\\alpha < \\omega _1$ .", "Let $\\xi < \\omega _2$ and $\\ell : \\omega _1 \\rightarrow \\omega _1$ be such that $[\\ell ]_\\mu = \\xi $ .", "$\\mathsf {enum}_{[h]_\\mu }(\\xi )$ is represented by the function $g : \\omega _1 \\rightarrow \\omega _1$ defined by $g(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\ell (\\alpha ))$ .", "For each $\\ell : \\omega _1 \\rightarrow \\omega _1$ , let $g_\\ell : \\omega _1 \\rightarrow \\omega _1$ be defined by $g_\\ell (\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\ell (\\alpha ))$ .", "Note that if $\\ell =_\\mu \\ell ^{\\prime }$ , then $g_\\ell =_\\mu g_{\\ell ^{\\prime }}$ .", "Note also that $[g_\\ell ]_\\mu \\in [h]_\\mu $ .", "For each $\\xi < \\omega _2$ , let $\\gamma _\\xi \\in \\omega _2$ be defined by $\\gamma _\\xi = [g_\\ell ]_\\mu $ where $\\ell $ is any function so that $[\\ell ]_\\mu = \\xi $ .", "This is well defined by the previous paragraph.", "$\\langle \\gamma _\\xi : \\xi < \\omega _2\\rangle $ is an increasing sequence through $[h]_\\mu $ .", "Let $k : \\omega _1 \\rightarrow \\omega _1$ be such that $[k]_\\mu \\in [h]_\\mu $ .", "Note $A = \\lbrace 
\\alpha \\in \\omega _1 : k(\\alpha ) \\in h(\\alpha )\\rbrace \\in \\mu $ .", "By modifying $k$ off $A$ , one will assume without loss of generality that $k(\\alpha ) \\in h(\\alpha )$ for all $\\alpha < \\omega _1$ .", "Let $\\ell (\\alpha ) = \\mathsf {enum}_{h(\\alpha )}^{-1}(k(\\alpha ))$ .", "Then $k =_\\mu g_{\\ell }$ .", "This shows that $\\mathsf {enum}_{[h]_\\mu }(\\xi ) = \\gamma _\\xi $ .", "This completes the proof.", "Fact 6.10 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ and let $\\mu $ be the club measure on $\\omega _1$ .", "Suppose $E \\subseteq \\omega _2$ is such that $|E| = \\omega _1$ .", "Then $E \\in \\mathrm {ult}(V,\\mu )$ .", "Let $\\zeta = \\mathrm {ot}(E)$ .", "Let $h : \\omega _1 \\rightarrow \\omega _1$ be such that $[h]_\\mu = \\zeta $ and $h(\\alpha ) > 0$ for all $\\alpha < \\omega _1$ .", "Let $\\Xi $ be a Kunen function for $h$ .", "Using Fact REF , there is an $F : \\mathcal {T}^h \\rightarrow \\omega _1$ which is increasing so that for all $\\xi < \\zeta $ , $[F^{(\\xi )}]_\\mu = \\mathsf {enum}_E(\\xi )$ .", "Let $g : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ be defined by $g(\\alpha ) = \\lbrace F(\\alpha ,\\beta ) : \\beta < h(\\alpha )\\rbrace $ .", "The claim is that $[g]_\\mu = E$ .", "Suppose $p : \\omega _1 \\rightarrow \\omega _1$ is such that $[p]_\\mu \\in [g]_\\mu $ .", "Note $A = \\lbrace \\alpha : p(\\alpha ) \\in g(\\alpha )\\rbrace \\in \\mu $ .", "By modifying $p$ off $A$ , one may assume that for all $\\alpha $ , $p(\\alpha ) \\in g(\\alpha )$ .", "Let $f(\\alpha )$ be the $\\beta < h(\\alpha )$ so that $F(\\alpha ,\\beta ) = p(\\alpha )$ .", "Then $p = F^f$ and $[p]_\\mu = [F^{(\\xi )}]_\\mu = \\mathsf {enum}_E(\\xi ) \\in E$ for some $\\xi < \\zeta $ .", "Thus $[g]_\\mu \\subseteq E$ .", "It is straightforward to see that $E \\subseteq [g]_\\mu $ .", "Fact 6.11 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Suppose $D \\subseteq \\omega _2$ is such that there is some $g : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ so that $[g]_\\mu = D$ .", 
"Suppose $E \\subseteq \\omega _2$ is such that $\\sup E < \\min D$ , then $D \\cup E \\in \\mathrm {ult}(V,\\mu )$ .", "By Fact REF , $E \\in \\mathrm {ult}(V,\\mu )$ .", "Thus there is a $g^{\\prime } : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ so that $[g^{\\prime }]_\\mu = E$ .", "Let $g^{\\prime \\prime } : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ be defined by $g^{\\prime \\prime }(\\alpha ) = g(\\alpha ) \\cup g^{\\prime }(\\alpha )$ .", "One can check that $[g^{\\prime \\prime }]_\\mu = D \\cup E$ .", "Fact 6.12 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ and let $\\mu $ be the club measure on $\\omega _1$ .", "Let $D = \\lbrace \\alpha < \\omega _2 : \\mathrm {cof}(\\alpha ) = \\omega _1\\rbrace $ .", "Then $D \\notin \\mathrm {ult}(V,\\mu )$ .", "Suppose $D \\in \\mathrm {ult}(V,\\mu )$ .", "By Fact REF , there is some $h : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ so that $D = [h]_\\mu $ .", "(Case I) $A = \\lbrace \\alpha < \\omega _1 : h(\\alpha ) \\in \\mu \\rbrace \\in \\mu $ .", "By Fact REF , there is a sequence $\\langle C_\\alpha : \\alpha \\in A\\rangle $ of club subsets of $\\omega _1$ so that $C_\\alpha \\subseteq h(\\alpha )$ for all $\\alpha \\in A$ .", "Define $\\ell : \\omega _1 \\rightarrow \\omega _1$ by $\\ell (\\alpha ) = \\mathsf {enum}_{C_{\\mathsf {enum}_A(\\alpha )}}(\\omega )$ .", "Let $\\ell ^{\\prime } : \\omega _1 \\rightarrow \\omega _1$ be defined by $\\ell ^{\\prime }(\\alpha ) = {\\left\\lbrace \\begin{array}{ll}\\mathsf {enum}_{C_\\alpha }(\\omega ) & \\quad \\alpha \\in A \\\\\\min (h(\\alpha )) & \\quad \\alpha \\notin A\\end{array}\\right.}$ .", "Clearly $[\\ell ^{\\prime }]_\\mu \\in [h]_\\mu $ .", "Note that $\\ell = \\ell ^{\\prime } \\circ \\mathsf {enum}_A$ .", "By Fact REF , $\\ell =_\\mu \\ell ^{\\prime }$ .", "Define $g : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ by $g(\\alpha ,n) = \\mathsf {enum}_{C_{\\mathsf {enum}_A(\\alpha )}}(n)$ .", "The function $g$ witnesses that $\\ell $ has uniform cofinality 
$\\omega $ .", "Thus $\\mathrm {cof}([\\ell ]_\\mu ) = \\omega $ by Fact REF .", "Since $[\\ell ]_\\mu \\in D = \\lbrace \\alpha < \\omega _2 : \\mathrm {cof}(\\alpha ) = \\omega _1\\rbrace $ , one has a contradiction.", "(Case II) $A = \\lbrace \\alpha < \\omega _1 : h(\\alpha ) \\in \\mu \\rbrace \\notin \\mu $ .", "Then $B = \\omega _1 \\setminus A = \\lbrace \\alpha < \\omega _1 : \\omega _1 \\setminus h(\\alpha ) \\in \\mu \\rbrace \\in \\mu $ .", "By Fact REF , there is a sequence $\\langle C_\\alpha : \\alpha \\in B\\rangle $ of club subsets of $\\omega _1$ so that $C_\\alpha \\subseteq \\omega _1 \\setminus h(\\alpha )$ .", "Define $\\ell : \\omega _1 \\rightarrow \\omega _1$ by $\\ell (\\alpha ) = \\mathsf {enum}_{C_{\\mathsf {enum}_B(\\alpha )}}(\\alpha )$ .", "Let $\\ell ^{\\prime } : \\omega _1 \\rightarrow \\omega _1$ be defined by $\\ell ^{\\prime }(\\alpha ) = {\\left\\lbrace \\begin{array}{ll}\\mathsf {enum}_{C_\\alpha }(\\alpha ) & \\quad \\alpha \\in B \\\\\\min h(\\alpha ) & \\quad \\alpha \\notin B\\end{array}\\right.}$ .", "Note that $[\\ell ^{\\prime }]_\\mu \\notin [h]_\\mu $ .", "Observe $\\ell = \\ell ^{\\prime } \\circ \\mathsf {enum}_B$ .", "By Fact REF , $\\ell =_\\mu \\ell ^{\\prime }$ .", "Let $g: \\mathcal {T}^{\\mathsf {id}} \\rightarrow \\omega _1$ be defined by $g(\\alpha ,\\beta ) = \\mathsf {enum}_{C_{\\mathsf {enum}_{B}(\\alpha )}}(\\beta )$ if $\\beta < \\alpha $ .", "Then $g$ witnesses that $\\ell $ has uniform cofinality $\\mathsf {id}$ .", "Therefore, $\\mathrm {cof}([\\ell ]_\\mu ) = \\omega _1$ by Fact REF .", "However, $[\\ell ]_\\mu \\notin D = \\lbrace \\alpha < \\omega _2 : \\mathrm {cof}(\\alpha ) = \\omega _1\\rbrace $ , which yields a contradiction.", "The proof is complete.", "One can now show that $\\mu $ does not satisfy Łoś's theorem and in fact the ultrapower does not satisfy the $\\mathsf {ZF}$ axioms.", "Fact 6.13 Assume $\\mathsf {ZF}+ \\mathsf {AD}+ V = L(\\mathbb {R})$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ 
.", "Then $\\mathrm {ult}(L(\\mathbb {R}),\\mu )$ is not a model of $\\mathsf {ZF}$ .", "Thus Łoś's theorem fails for $\\mu $ .", "Note that $L(\\mathbb {R}) \\models \\mathsf {AD}$ implies $L(\\mathbb {R}) \\models \\mathsf {DC}_\\mathbb {R}$ and hence $L(\\mathbb {R}) \\models \\mathsf {DC}$ by a result of Kechris [15].", "Thus $\\mathrm {ult}(L(\\mathbb {R}),\\mu )$ may be considered a transitive inner model of $L(\\mathbb {R})$ .", "One can check that $\\mathbb {R}\\subseteq \\mathrm {ult}(L(\\mathbb {R}),\\mu )$ .", "If $\\mathrm {ult}(L(\\mathbb {R}),\\mu )$ is an inner model of $\\mathsf {ZF}$ containing all the reals of $L(\\mathbb {R})$ , then one must have that $\\mathrm {ult}(L(\\mathbb {R}),\\mu ) = L(\\mathbb {R})$ .", "This is impossible since Fact REF asserts that $\\mathrm {ult}(L(\\mathbb {R}),\\mu )$ is missing a subset of $\\omega _2$ which belongs to $L(\\mathbb {R})$ .", "Theorem 6.14 (Jackson) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $\\mu $ denote the club measure on $\\omega _1$ .", "Define a partition $P : [\\omega _2]^{\\omega _2}_* \\rightarrow 2$ by $P(f) = 0 \\Leftrightarrow \\mathrm {rang}(f) \\in \\mathrm {ult}(V,\\mu ).$ Then there is no club $D \\subseteq \\omega _2$ and no $i \\in 2$ so that $P(f) = i$ for all $f \\in [D]^{\\omega _2}_*$ .", "(Case I) Suppose there is a club $D \\subseteq \\omega _2$ so that $P(f) = 1$ for all $f \\in [D]^{\\omega _2}_*$ .", "By Fact REF , there is a club $C \\subseteq \\omega _1$ so that $[C]^{\\omega _1}\\slash \\mu \\subseteq D$ .", "Let $A = \\lbrace \\alpha : (\\exists \\gamma )(\\alpha = \\mathsf {enum}_{C}(\\gamma + \\omega ))\\rbrace $ .", "Note that $\\mathsf {enum}_A : \\omega _1 \\rightarrow \\omega _1$ is a function of the correct type.", "Let $g : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ witness that $\\mathsf {enum}_A$ has uniform cofinality $\\omega $ .", "For each $\\xi < \\omega _1$ , let $A_\\xi = \\lbrace \\alpha \\in A : \\alpha \\ge \\xi \\rbrace $ .", "Let 
$g_\\xi : \\omega _1 \\times \\omega \\rightarrow \\omega _1$ be defined by $g_\\xi (\\alpha ,n) = g(\\mathsf {enum}_A^{-1}(\\mathsf {enum}_{A_\\xi }(\\alpha )),n)$ .", "Then for each $\\xi < \\omega _1$ , $g_\\xi $ witnesses that $\\mathsf {enum}_{A_\\xi }$ has uniform cofinality $\\omega $ .", "Let $h : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ be defined by $h(\\xi ) = A_{\\xi }$ .", "By Fact REF , $\\mathsf {enum}_{[h]_\\mu } : \\omega _2 \\rightarrow \\omega _2$ is a function of the correct type.", "Let $f = \\mathsf {enum}_{[h]_\\mu }$ .", "One can check that $\\mathrm {rang}(f) = [h]_\\mu \\subseteq [C]^{\\omega _1} \\slash \\mu $ .", "Thus $f \\in [D]^{\\omega _2}_*$ .", "However, since $\\mathrm {rang}(f) = [h]_\\mu \\in \\mathrm {ult}(V,\\mu )$ , one must have that $P(f) = 0$ .", "Contradiction.", "(Case II) Suppose there is a club $D \\subseteq \\omega _2$ so that $P(f) = 0$ for all $f \\in [D]^{\\omega _2}_*$ .", "Fix any $f \\in [D]^{\\omega _2}_*$ .", "Since $P(f) = 0$ , there is some $h : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ so that $\\mathrm {rang}(f) = [h]_\\mu $ by Fact REF .", "Now let $A \\subseteq \\omega _2$ be arbitrary.", "$\\mathsf {enum}_{f[A]} : \\omega _2 \\rightarrow D$ is a function of the correct type since $f : \\omega _2 \\rightarrow D$ is a function of the correct type.", "Then $P(\\mathsf {enum}_{f[A]}) = 0$ .", "Thus $f[A] \\in \\mathrm {ult}(V,\\mu )$ .", "There is some $g : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ so that $[g]_\\mu = f[A]$ .", "Let $k : \\omega _1 \\rightarrow {{P}(\\omega _1)}$ be defined by $k(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}^{-1}[g(\\alpha )]$ .", "The claim is that $[k]_\\mu = A$ .", "To show $[k]_\\mu \\subseteq A$ : Suppose $\\xi \\in [k]_\\mu $ .", "There is a representative $\\ell : \\omega _1 \\rightarrow \\omega _1$ of $\\xi $ so that for $\\mu $ -almost all $\\alpha $ , $\\ell (\\alpha ) \\in k(\\alpha )$ .", "There is a function $p : \\omega _1 \\rightarrow \\omega _1$ so that for 
$\\mu $ -almost all $\\alpha $ , $p(\\alpha ) \\in g(\\alpha )$ and $\\ell (\\alpha ) = \\mathsf {enum}_{h(\\alpha )}^{-1}(p(\\alpha ))$ .", "In particular $[p]_\\mu \\in [g]_\\mu = f[A]$ .", "By Fact REF , the $\\xi ^\\text{th}$ element of $[h]_\\mu $ is represented by the function $q : \\omega _1 \\rightarrow \\omega _1$ defined by $q(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\ell (\\alpha ))$ .", "However $\\mu $ -almost everywhere, $q(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\mathsf {enum}_{h(\\alpha )}^{-1}(p(\\alpha ))) = p(\\alpha )$ .", "Thus $q =_\\mu p$ .", "It has been shown that the $\\xi ^\\text{th}$ element of $[h]_\\mu = \\mathrm {rang}(f)$ belongs to $[g]_\\mu = f[A]$ .", "Thus $\\xi \\in A$ .", "To show that $A \\subseteq [k]_\\mu $ : Let $\\xi \\in A$ .", "Let $\\ell : \\omega _1 \\rightarrow \\omega _1$ be such that $[\\ell ]_\\mu = \\xi $ .", "By Fact REF , the $\\xi ^\\text{th}$ -element of $[h]_\\mu $ is represented by the function $q(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\ell (\\alpha ))$ .", "Thus $[q]_\\mu \\in f[A] = [g]_\\mu $ .", "So $\\mu $ -almost everywhere, $q(\\alpha ) = \\mathsf {enum}_{h(\\alpha )}(\\ell (\\alpha )) \\in g(\\alpha )$ .", "Thus for $\\mu $ -almost all $\\alpha $ , $\\ell (\\alpha ) \\in \\mathsf {enum}_{h(\\alpha )}^{-1}[g(\\alpha )] = k(\\alpha )$ .", "It has been shown that $\\xi = [\\ell ]_\\mu \\in [k]_\\mu $ .", "The claim has been shown.", "Since $A$ was arbitrary, this implies that every subset of $\\omega _2$ belongs to $\\mathrm {ult}(V,\\mu )$ .", "This contradicts Fact REF .", "It has been shown that the partition $P$ has no club set which is homogeneous.", "Corollary 6.15 (Martin and Paris) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "The partition relation $\\omega _2 \\rightarrow (\\omega _2)^{\\omega _2}_2$ does not hold.", "Thus $\\omega _2$ is a weak partition cardinal which is not a strong partition cardinal.", "Remark 6.16 The example of Jackson from Theorem REF gives an explicit example of 
a partition $P : [\\omega _2]^{\\omega _2}_* \\rightarrow 2$ which has no homogeneous club subset.", "Martin and Paris's original argument roughly shows that if $\\omega _2 \\rightarrow (\\omega _2)^{\\omega _2}_2$ holds, then $\\omega _3$ would satisfy $\\omega _3 \\rightarrow (\\omega _3)^{\\alpha }_2$ for all $\\alpha < \\omega _1$ .", "Fact REF implies that $\\omega _3$ must be regular.", "However, it can be shown that $\\omega _3$ is a singular cardinal of cofinality $\\omega _2$ .", "For more information on the result of Martin and Paris, see [20] and specifically Lemma 5.19.", "Also see Section 13 of [14].", "Corollary 6.17 Let $\\sigma \\in [\\omega _2]^{<\\omega _2}_*$ .", "Define $P_\\sigma : [\\omega _2 \\setminus (\\sup (\\sigma ) + \\omega )]^{\\omega _2}_* \\rightarrow 2$ by $P_\\sigma (f) = P(\\sigma \\hat{\\ }f)$ , where $P$ is the partition from Theorem REF .", "$P_\\sigma $ also does not have a club homogeneous set.", "Essentially the same argument as Theorem REF with the assistance of Fact REF ."
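The increasing enumeration $\\mathsf {enum}_A$ and its inverse $\\mathsf {enum}_A^{-1}$ are used throughout the computations above, often in compositions such as $\\ell = \\ell ^{\\prime } \\circ \\mathsf {enum}_A$ . A minimal finite-set sketch in Python (the function names are ours, chosen for illustration only; the actual arguments concern uncountable sets) of these relabelling operations:

```python
def enum(A):
    """Increasing enumeration of a finite set A: enum(A)[i] is the i-th element."""
    return sorted(A)

def enum_inv(A, x):
    """The index i with enum(A)[i] == x, i.e. the inverse enumeration."""
    return sorted(A).index(x)

A = {3, 7, 10, 15}
assert enum(A) == [3, 7, 10, 15]
assert enum_inv(A, 10) == 2

# Relabelling a subset B of A through the inverse enumeration, as in the
# compositions used in the proofs above.
B = {7, 15}
assert [enum_inv(A, x) for x in enum(B)] == [1, 3]
```

The point of the finite analogue is only that enumeration and its inverse are order-preserving relabellings; the proofs above apply the same bookkeeping to subsets of $\\omega _1$ .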
], [ "$L(\\mathbb {R})$ as a Symmetric Collapse Extension of $\\mathrm {HOD}$", "Definition 7.1 Let $S \\subseteq \\mathrm {ON}$ be a set of ordinals.", "Let $\\varphi $ be a formula of set theory.", "The pair $(S,\\varphi )$ is called an $\\infty $ -Borel code.", "Let $n \\in \\omega $ .", "Define $\\mathfrak {B}^n_{(S,\\varphi )} = \\lbrace x \\in \\mathbb {R}^n : L[S,x] \\models \\varphi (S,x)\\rbrace $ .", "Let $A \\subseteq \\mathbb {R}^n$ .", "$(S,\\varphi )$ is said to be an $\\infty $ -Borel code for $A$ if and only if $A = \\mathfrak {B}^n_{(S,\\varphi )}$ .", "If a set $A \\subseteq \\mathbb {R}^n$ has an $\\infty $ -Borel code, then $A$ has a very absolute definition.", "That is, in order to determine membership of $x$ in $A$ , one needs only to ask whether $\\varphi (S,x)$ holds in $L[S,x]$ , which is the minimal inner model of $\\mathsf {ZFC}$ containing $x$ and the code set $S$ .", "Definition 7.2 Let $P$ be a set.", "Recall a set $A$ is ordinal definable from $P$ if and only if there is a formula $\\varphi $ of set theory, a finite tuple of ordinals $\\bar{\\alpha }$ , and a finite tuple $\\bar{p}$ of elements of $P$ so that $A = \\lbrace x : \\varphi (x,\\bar{\\alpha },\\bar{p})\\rbrace $ .", "Using the reflection theorem, one can show that a set $A$ is ordinal definable from $P$ if and only if there is some $\\xi \\in \\mathrm {ON}$ , tuple of ordinals $\\bar{\\alpha }$ , tuple $\\bar{p}$ from $P$ , and formula $\\varphi $ so that all these objects belong to $V_\\xi $ and $A = \\lbrace x \\in V_\\xi : V_\\xi \\models \\varphi (x,\\bar{\\alpha },\\bar{p})\\rbrace $ .", "This shows the collection $\\mathrm {OD}_P$ of all sets which are ordinal definable from $P$ forms a first order class.", "If $P$ has an $\\mathrm {OD}_P$ wellordering, then $\\mathrm {OD}_P$ has a wellordering which is definable with parameters from ordinals and $P$ .", "Thus there is a bijection of $\\mathrm {OD}_P$ with $\\mathrm {ON}$ .", "Let $\\mathrm {HOD}_P$ 
denote the subclass of $\\mathrm {OD}_P$ which is hereditarily $\\mathrm {OD}_P$ .", "That is, $\\mathrm {HOD}_P$ consists of those $x \\in \\mathrm {OD}_P$ such that $\\mathrm {tc}(\\lbrace x\\rbrace ) \\subseteq \\mathrm {OD}_P$ , where $\\mathrm {tc}$ refers to the transitive closure.", "As a matter of convention, if $S \\subseteq \\mathrm {ON}$ is a set of ordinals, one will often write $\\mathrm {OD}_S$ and $\\mathrm {HOD}_S$ for $\\mathrm {OD}_{\\lbrace S\\rbrace }$ and $\\mathrm {HOD}_{\\lbrace S\\rbrace }$ .", "Definition 7.3 Let $n \\in \\omega $ and $S \\subseteq \\mathrm {ON}$ be a set of ordinals.", "Let ${}_n\\mathbb {O}_S$ denote the forcing of nonempty $\\mathrm {OD}_S$ subsets of $\\mathbb {R}^n$ .", "Let the ordering be $\\le _{{}_n\\mathbb {O}_S} = \\subseteq $ .", "The largest element is $1_{{}_n\\mathbb {O}_S} = \\mathbb {R}^n$ .", "Since there is a definable (in $S$ ) bijection of the class $\\mathrm {OD}_S$ with the class $\\mathrm {ON}$ , one can identify ${}_n\\mathbb {O}_S$ as a set of ordinals in $\\mathrm {HOD}_S$ .", "In this way, ${}_n\\mathbb {O}_S \\in \\mathrm {HOD}_S$ .", "${}_n\\mathbb {O}_S$ is called the $n$ -dimensional $S$ -Vopěnka forcing.", "If $n = 1$ , ${}_1\\mathbb {O}_S$ will be denoted simply $\\mathbb {O}_S$ .", "Definition 7.4 Let $n \\in \\omega $ and $S \\subseteq \\mathrm {ON}$ be a set of ordinals.", "Let ${}_n\\mathbb {A}_S$ denote the forcing of nonempty subsets of $\\mathbb {R}^n$ which possess $\\mathrm {OD}_S$ $\\infty $ -Borel codes.", "${}_n\\mathbb {A}_S$ is ordered by $\\le _{{}_n\\mathbb {A}_S} = \\subseteq $ .", "It has a largest element $1_{{}_n\\mathbb {A}_S} = \\mathbb {R}^n$ .", "Since ${}_n\\mathbb {A}_S \\subseteq {}_n\\mathbb {O}_S$ , one can consider ${}_n\\mathbb {A}_S$ to be a forcing of $\\mathrm {HOD}_S$ .", "${}_n\\mathbb {A}_S$ will be called the $n$ -dimensional $\\mathrm {OD}_S$ $\\infty $ -Borel code forcing.", "Remark 7.5 One can be more specific about how ${}_n\\mathbb {A}_S$ 
is coded as a set of ordinals.", "One can identify ${}_n\\mathbb {A}_S$ with a (set sized) collection of pairs $(S^{\\prime },\\varphi )$ , where $S^{\\prime }$ is an $\\mathrm {OD}_S$ set of ordinals and $\\varphi $ is a formula.", "Using the canonical global wellordering of $\\mathrm {HOD}_S$ , let $\\langle (S_\\alpha , \\varphi _\\alpha ) : \\alpha < \\delta \\rangle $ , for some ordinal $\\delta $ , be an enumeration of $\\infty $ -Borel codes that includes at least one code for each element of ${}_n\\mathbb {A}_S$ .", "Fix $\\langle \\phi _n : n \\in \\omega \\rangle $ to be a coding of formulas of set theory by natural numbers.", "Using the Gödel pairing function, let $K = \\lbrace (\\alpha ,\\beta ,n) : \\beta \\in S_\\alpha \\wedge \\phi _n = \\varphi _\\alpha \\rbrace $ .", "One will often identify ${}_n\\mathbb {A}_S$ with this set of ordinals $K$ .", "If this is done, then from ${}_n\\mathbb {A}_S$ , one can obtain uniformly $\\infty $ -Borel codes for each condition in ${}_n\\mathbb {A}_S$ .", "In nearly every regard, the Vopěnka forcing is a more practical forcing than $\\mathbb {A}_S$ .", "It will be shown that in $\\mathsf {ZF}+ \\mathsf {AD}+ V = L(\\mathbb {R})$ , $\\mathbb {O}_S$ and $\\mathbb {A}_S$ are identical.", "To establish this, one will prove a structural theorem about $L(\\mathbb {R})$ due to Woodin that involves the forcing $\\mathbb {A}_S$ .", "The presentation of Woodin's result that $L(\\mathbb {R})$ is a symmetric collapse extension of its $\\mathrm {HOD}$ follows closely [21].", "Of particular importance to the study of cardinals and combinatorics in $L(\\mathbb {R}) \\models \\mathsf {AD}$ will be the existence of an ultimate $\\infty $ -Borel code which follows from the proof.", "For simplicity, $S = \\emptyset $ in the following results.", "The result can be appropriately relativized.", "The main benefit of $\\mathbb {A}$ over $\\mathbb {O}$ is the following result: Fact 7.6 Let $x \\in \\mathbb {R}^n$ .", "Then there is a generic 
filter $G_x \\subseteq {}_n\\mathbb {A}$ which is ${}_n\\mathbb {A}$ -generic over $\\mathrm {HOD}$ so that $\\mathrm {HOD}[G_x] = \\mathrm {HOD}[x]$ , where $\\mathrm {HOD}[x]$ refers to the smallest transitive model of $\\mathsf {ZF}$ extending $\\mathrm {HOD}$ and containing $x$ .", "For simplicity, let $n = 1$ .", "Let $x \\in \\mathbb {R}$ .", "Let $G_x = \\lbrace p \\in \\mathbb {A}: x \\in p\\rbrace $ .", "Using the convention of Remark REF , from $\\mathbb {A}\\in \\mathrm {HOD}$ , one can obtain an enumeration $\\langle (S_p,\\varphi _p) : p \\in \\mathbb {A}\\rangle $ so that $(S_p,\\varphi _p)$ is an $\\mathrm {OD}$ $\\infty $ -Borel code for the condition $p$ .", "First to show that $G_x$ is an $\\mathbb {A}$ -generic filter over $\\mathrm {HOD}$ : Let $A \\subseteq \\mathbb {A}$ be a maximal antichain which belongs to $\\mathrm {HOD}$ and is hence $\\mathrm {OD}$ .", "Considering $\\mathbb {A}$ as the set $K$ defined in Remark REF and using the fact that $A$ is $\\mathrm {OD}$ , one can find a formula $\\varphi $ so that $(\\mathbb {A},\\varphi )$ is an $\\mathrm {OD}$ $\\infty $ -Borel code for $\\bigcup A$ .", "Therefore, $\\bigcup A = \\mathbb {R}$ : otherwise, $\\mathbb {R}\\setminus \\bigcup A$ would be a nonempty set with an $\\mathrm {OD}$ $\\infty $ -Borel code, hence a condition of $\\mathbb {A}$ which is incompatible with every element of $A$ , contradicting the assumption that $A$ is a maximal antichain.", "Thus $x \\in \\bigcup A$ .", "There is some $a \\in A$ so that $x \\in a$ .", "Thus $a \\in G_x$ .", "It has been shown that $G_x \\cap A \\ne \\emptyset $ .", "$G_x$ is $\\mathbb {A}$ -generic over $\\mathrm {HOD}$ .", "Thinking of $\\mathbb {R}= {{}^\\omega 2}$ , let $b_n = \\lbrace x \\in \\mathbb {R}: x(n) = 1\\rbrace $ .", "Note that $b_n \\ne \\emptyset $ and $b_n$ clearly has an $\\mathrm {OD}$ $\\infty $ -Borel code.", "Thus $b_n \\in \\mathbb {A}$ .", "Note that $b_n \\in G_x$ if and only if $x(n) = 1$ .", "Thus 
$x \\in \\mathrm {HOD}[G_x]$ .", "Note that $p \\in G_x$ if and only if $x \\in p$ if and only if $L[S_p,x] \\models \\varphi _p(S_p,x)$ .", "Note that $V \\models L[S_p,x] \\models \\varphi _p(S_p,x)$ if and only if $\\mathrm {HOD}[x] \\models L[S_p,x]\\models \\varphi _p(S_p,x)$ .", "(This is an application of the important absoluteness property of $\\infty $ -Borel codes.)", "Thus $G_x$ can be defined in $\\mathrm {HOD}[x]$ as the set of $p$ such that $L[S_p,x] \\models \\varphi _p(S_p,x)$ .", "This shows $G_x \\in \\mathrm {HOD}[x]$ .", "It has been shown that $\\mathrm {HOD}[x] = \\mathrm {HOD}[G_x]$ .", "Definition 7.7 Suppose $n \\in \\omega $ .", "For each $s = (i,m) \\in n \\times \\omega $ , let $b_s = \\lbrace x \\in \\mathbb {R}^n : x(i)(m) = 1\\rbrace $ .", "Let $\\dot{x}_\\mathrm {gen}^n = \\lbrace (\\check{s},b_{s}) : s \\in n \\times \\omega \\rbrace $ .", "In light of the argument of Fact REF , if $x \\in \\mathbb {R}^n$ , then $\\dot{x}_\\mathrm {gen}^n[G_x] = x$ .", "Definition 7.8 Let $\\mathbb {P}$ and $\\mathbb {Q}$ be two forcings.", "A surjective map $\\pi : \\mathbb {Q}\\rightarrow \\mathbb {P}$ is a forcing projection if and only if the following hold: (1) For all $q_0,q_1 \\in \\mathbb {Q}$ , $q_0 \\le _\\mathbb {Q}q_1$ implies $\\pi (q_0) \\le _\\mathbb {P}\\pi (q_1)$ and $\\pi (1_\\mathbb {Q}) = 1_\\mathbb {P}$ .", "(2) For all $q \\in \\mathbb {Q}$ and $p^{\\prime } \\in \\mathbb {P}$ such that $p^{\\prime } \\le _\\mathbb {P}\\pi (q)$ , there exists some $q^{\\prime } \\le _\\mathbb {Q}q$ so that $\\pi (q^{\\prime }) = p^{\\prime }$ .", "Let $\\pi : \\mathbb {Q}\\rightarrow \\mathbb {P}$ be a projection.", "If $D \\subseteq \\mathbb {P}$ is dense, then $\\pi ^{-1}[D]$ is dense in $\\mathbb {Q}$ .", "This implies that if $G \\subseteq \\mathbb {Q}$ is $\\mathbb {Q}$ -generic over $V$ , then $\\pi [G]$ is $\\mathbb {P}$ -generic over $V$ .", "If $G \\subseteq \\mathbb {P}$ is $\\mathbb {P}$ -generic over $V$ , then let $\\mathbb {Q}\\slash G = \\lbrace 
q \\in \\mathbb {Q}: \\pi (q) \\in G\\rbrace $ .", "Let $\\le _{\\mathbb {Q}\\slash G} = \\le _\\mathbb {Q}\\upharpoonright \\mathbb {Q}\\slash G$ .", "Let $\\dot{G} \\in V^\\mathbb {P}$ denote the $\\mathbb {P}$ -name for the generic filter.", "Let $\\mathbb {Q}\\slash \\dot{G}$ denote the $\\mathbb {P}$ -name so that $1_\\mathbb {P}\\Vdash _\\mathbb {P}\\mathbb {Q}\\slash \\dot{G} = \\lbrace p \\in \\check{\\mathbb {Q}}: \\check{\\pi }(p) \\in \\dot{G}\\rbrace $ .", "It can be checked that $\\mathbb {Q}$ embeds densely into the iteration $\\mathbb {P}* (\\mathbb {Q}\\slash \\dot{G})$ .", "Therefore, these two forcings are equivalent as forcings.", "Also if $G$ is $\\mathbb {P}$ -generic over $V$ then $(\\mathbb {Q}\\slash \\dot{G})[G] = \\mathbb {Q}\\slash G$ .", "Suppose $H$ is $\\mathbb {Q}$ -generic over $V$ .", "Let $G = \\pi [H]$ be the associated $\\mathbb {P}$ -generic filter.", "One can check that $H$ is $(\\mathbb {Q}\\slash G)$ -generic over $V[G]$ , $G * H$ is $(\\mathbb {P}* \\mathbb {Q}\\slash \\dot{G})$ -generic over $V$ and $V[G][H] = V[G * H] = V[H]$ .", "Definition 7.9 For the moment, consider $\\mathbb {R}= {{}^\\omega 2}= {{P}(\\omega )}$ .", "If $x,y \\in \\mathbb {R}$ , then one writes $x \\le _T y$ if and only if there is a Turing machine (taking oracle input) so that $x$ can be computed from this Turing machine when given $y$ as its oracle.", "Since Turing programs can be coded by natural numbers, for any $x \\in \\mathbb {R}$ , there are only countably many $y \\in \\mathbb {R}$ so that $y \\le _T x$ .", "A Turing program is also absolute between models of $\\mathsf {ZF}$ with the same $\\omega $ .", "Define $x =_T y$ if and only if $x \\le _T y$ and $y \\le _T x$ .", "Let $\\mathcal {D}= {{P}(\\omega )} \\slash =_T$ denote the collection of $=_T$ equivalence classes.", "An element of $\\mathcal {D}$ is called a Turing degree.", "If $x \\in \\mathbb {R}$ , then one uses the notation $[x]_T$ rather than $[x]_{=_T}$ to denote the Turing 
degree of $x$ .", "As observed above, each Turing degree contains only countably many reals.", "If $X,Y \\in \\mathcal {D}$ , one defines $X \\le Y$ if and only if there exist $x \\in X$ and $y \\in Y$ so that $x \\le _T y$ .", "(One can check this is a well defined relation.)", "A Turing cone of reals with base $x$ is the collection $C_x = \\lbrace y \\in \\mathbb {R}: x \\le _T y\\rbrace $ .", "A Turing cone of degrees with base $X$ is the collection $C_X = \\lbrace Y \\in \\mathcal {D}: X \\le Y\\rbrace $ .", "Let $\\mu _\\mathcal {D}\\subseteq {{P}(\\mathcal {D})}$ consist of those subsets of $\\mathcal {D}$ which contain a Turing cone of degrees.", "$\\mu _\\mathcal {D}$ is a filter.", "Fact 7.10 (Martin) Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "$\\mu _\\mathcal {D}$ is a countably complete ultrafilter.", "Let $K \\subseteq \\mathcal {D}$ .", "Let $\\tilde{K} = \\lbrace x \\in \\mathbb {R}: [x]_T \\in K\\rbrace $ .", "Note that $\\tilde{K}$ is a $=_T$ -invariant subset of $\\mathbb {R}$ .", "Consider the usual game $G_{\\tilde{K}}$ in which Player 1 plays $a(0), a(1), a(2), \\ldots $ and Player 2 plays $b(0), b(1), b(2), \\ldots $ , the two players alternating turns to produce $a, b \\in \\mathbb {R}$ .", "Player 1 wins if and only if $a \\oplus b \\in \\tilde{K}$ , where $a \\oplus b$ is defined by $a \\oplus b(2n) = a(n)$ and $a \\oplus b(2n + 1) = b(n)$ .", "By $\\mathsf {AD}$ , one of the two players has a winning strategy.", "Suppose Player 1 has a winning strategy $\\sigma $ .", "Let $Z = [\\sigma ]_T$ be the Turing degree of $\\sigma $ .", "The claim is that $C_Z \\subseteq K$ : Let $Y \\in \\mathcal {D}$ be such that $Z \\le Y$ .", "Pick any $y \\in Y$ .", "Thus $\\sigma \\le _T y$ .", "Let $\\sigma * y$ denote the result of the play where Player 1 uses $\\sigma $ and Player 2 plays the bits of $y$ each turn.", 
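The coding $a \\oplus b$ is a concrete bit-level operation, and the later computation $\\sigma * y =_T y$ rests on the interleaving being computable in both directions. A finite-prefix Python sketch (the helper names are ours, for illustration only) of the coding and its inverse:

```python
def interleave(a, b):
    """The coding (a + b)(2n) = a(n), (a + b)(2n+1) = b(n), on finite prefixes."""
    out = []
    for an, bn in zip(a, b):
        out.append(an)  # even position 2n carries Player 1's move a(n)
        out.append(bn)  # odd position 2n+1 carries Player 2's move b(n)
    return out

def halves(c):
    """Recover the pair (a, b) from an interleaved sequence c."""
    return c[0::2], c[1::2]

a = [1, 0, 1, 1]  # Player 1's moves
b = [0, 0, 1, 0]  # Player 2's moves
c = interleave(a, b)
assert c == [1, 0, 0, 0, 1, 1, 1, 0]
assert halves(c) == (a, b)
```

Both directions are computable relative to the inputs, which is what underlies $\\sigma * y \\le _T y$ (given $\\sigma \\le _T y$ ) and $y \\le _T \\sigma * y$ in the proof.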
"Since $\\sigma $ is a Player 1 winning strategy, $\\sigma * y \\in \\tilde{K}$ .", "Since $\\sigma \\le _T y$ , $\\sigma * y \\le _T y$ and clearly $y \\le _T \\sigma * y$ .", "Thus $\\sigma * y =_T y$ .", "Since $\\tilde{K}$ is $=_T$ -invariant, $y \\in \\tilde{K}$ .", "Thus $[y]_T = Y \\in K$ .", "This shows that if Player 1 has a winning strategy in $G_{\\tilde{K}}$ , then $K \\in \\mu _\\mathcal {D}$ .", "A similar argument shows that if Player 2 has a winning strategy then $\\mathcal {D}\\setminus K \\in \\mu _\\mathcal {D}$ .", "Thus $\\mu _\\mathcal {D}$ is an ultrafilter.", "Next to show countable completeness: Suppose $\\langle K_n : n \\in \\omega \\rangle $ is a sequence in $\\mu _\\mathcal {D}$ .", "By $\\mathsf {AC}_\\omega ^\\mathbb {R}$ , let $x_n$ be such that the cone above $X_n = [x_n]_T$ is contained in $K_n$ .", "Let $x = \\bigoplus x_n = \\lbrace (n,m) : m \\in x_n\\rbrace $ and $X = [x]_T$ .", "Then the cone above $X$ is contained inside of $\\bigcap _{n \\in \\omega } K_n$ .", "Thus $\\bigcap _{n \\in \\omega } K_n \\in \\mu _{\\mathcal {D}}$ .", "Remark 7.11 Let $S$ be a set.", "Note that if $x =_T y$ , then $L[S,x] = L[S,y]$ .", "Therefore, if $X \\in \\mathcal {D}$ , one will often write $L[S,X]$ to denote $L[S,x]$ for any $x \\in X$ .", "The canonical constructibility wellordering is based on the hierarchy $\\lbrace L_\\alpha [S,x] : \\alpha \\in \\mathrm {ON}\\rbrace $ .", "Even if $x =_T y$ , the levels $L_\\alpha [S,x]$ and $L_\\alpha [S,y]$ can differ.", "Thus, the canonical constructibility wellordering on $L[S,x]$ is not invariant under $=_T$ .", "If $V$ is a model of $\\mathsf {ZF}$ (in the language $\\dot{\\in }$ ), then the canonical wellordering $<^{\\mathrm {HOD}_S^V}$ of $\\mathrm {HOD}^V_S$ depends only on $V$ .", "Since $x =_T y$ implies that $L[S,x] = L[S,y]$ , one has that $\\mathrm {HOD}_S^{L[S,x]} = \\mathrm {HOD}_S^{L[S,y]}$ and $<^{\\mathrm {HOD}_S^{L[S,x]}} = <^{\\mathrm {HOD}_S^{L[S,y]}}$ .", "(Note that although 
the constructibility hierarchy of $L[S,x]$ is naturally formulated in the language $\\lbrace \\dot{\\in }, \\dot{E}_0, \\dot{E}_1\\rbrace $ , where $\\dot{E}_0$ and $\\dot{E}_1$ are unary predicate symbols meant to interpret $S$ and $x$ , when constructing $\\mathrm {HOD}_S^{L[S,x]}$ , $L[S,x]$ is considered as merely a $\\lbrace \\dot{\\in }\\rbrace $ -structure.)", "Thus if $X$ is a Turing degree, one will often write $\\mathrm {HOD}_S^{L[S,X]}$ and $<^{\\mathrm {HOD}_S^{L[S,X]}}$ to refer to $\\mathrm {HOD}^{L[S,x]}_S$ and $<^{\\mathrm {HOD}_S^{L[S,x]}}$ for any $x \\in X$ .", "The invariance of the canonical wellordering of $\\mathrm {HOD}$ allows one to take ultraproducts of local $\\mathrm {HOD}$ 's (models of the form $\\mathrm {HOD}_S^{L[S,X]}$ ) by Martin's measure.", "This is a very powerful technique as indicated by the following results.", "The existence of the canonical wellorderings of the local $\\mathrm {HOD}_S^{L[S,X]}$ shows that the resulting ultraproduct satisfies Łoś's theorem: Fact 7.12 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $S \\subseteq \\mathrm {ON}$ be a set of ordinals.", "$\\prod _{X \\in \\mathcal {D}}\\mathrm {HOD}_S^{L[S,X]}\\slash \\mu _\\mathcal {D}$ satisfies Łoś's theorem: Let $\\varphi $ be a formula.", "Let $f_0,...,f_{n - 1}$ be functions on $\\mathcal {D}$ with the property that for all $X \\in \\mathcal {D}$ , $f_i(X) \\in \\mathrm {HOD}_S^{L[S,X]}$ .", "Then $\\prod _{X \\in \\mathcal {D}}\\mathrm {HOD}_S^{L[S,X]} \\slash \\mu _\\mathcal {D}\\models \\varphi ([f_0]_{\\mu _\\mathcal {D}},...,[f_{n - 1}]_{\\mu _\\mathcal {D}}) \\Leftrightarrow \\lbrace X \\in \\mathcal {D}: \\mathrm {HOD}_{S}^{L[S,X]} \\models \\varphi (f_0(X),...,f_{n - 1}(X))\\rbrace \\in \\mu _\\mathcal {D}.$ Only the existential quantification case requires a choice-like principle.", "One will give a sketch: Let $\\mathcal {M}$ denote this ultraproduct.", "Let $\\varphi $ be a formula and assume inductively one has already shown the 
result for $\\varphi $ .", "Suppose $K = \\lbrace X \\in \\mathcal {D}: \\mathrm {HOD}^{L[S,X]}_S \\models (\\exists v)\\varphi (v,f_0(X),...,f_{n - 1}(X))\\rbrace \\in \\mu _\\mathcal {D}.$ Let $g$ be defined on $K$ by letting $g(X)$ be the $\\mathrm {HOD}_S^{L[S,X]}$ -least $v$ so that $\\mathrm {HOD}_S^{L[S,X]}\\models \\varphi (v,f_0(X),...,f_{n - 1}(X))$ .", "Then using the induction hypothesis, one can show that $\\mathcal {M}\\models \\varphi ([g]_{\\mu _\\mathcal {D}},[f]_{\\mu _\\mathcal {D}}, ..., [f_{n - 1}]_{\\mu _\\mathcal {D}})$ .", "Thus $\\mathcal {M}\\models (\\exists v)\\varphi (v,[f]_{\\mu _\\mathcal {D}},...,[f_{n - 1}]_{\\mu _\\mathcal {D}})$ .", "There is no claim that $\\prod _{X \\in \\mathcal {D}}\\mathrm {HOD}_{S}^{L[S,X]} \\slash \\mu _\\mathcal {D}$ is a wellfounded model.", "This is true assuming $\\mathsf {DC}$ .", "It is open whether $\\mathsf {AD}$ implies this.", "Fact 7.13 (Woodin) Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {DC}_\\mathbb {R}$ .", "Suppose $A \\subseteq \\mathbb {R}^2$ has an $\\infty $ -Borel code $(S,\\varphi )$ , then $B(x) = (\\exists ^\\mathbb {R}y)A(x,y)$ has an $\\infty $ -Borel code which is $\\mathrm {OD}_S$ .", "Note that $L(S,\\mathbb {R}) \\models \\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {DC}$ .", "Work in $L(S,\\mathbb {R})$ .", "Let $\\mathcal {M}= \\prod _{X \\in \\mathcal {D}} \\mathrm {HOD}_S^{L[S,X]} \\slash \\mu _\\mathcal {D}$ .", "$\\mathcal {M}$ is an $S$ -definable class which is wellfounded by $\\mathsf {DC}$ .", "Implicitly, it will be assumed that all objects in this ultrapower have been Mostowski collapsed.", "By Fact REF , $\\mathcal {M}$ satisfies Łoś's theorem.", "Define $\\Phi _{\\mathbb {A}_S^\\infty }$ on $\\mathcal {D}$ by $\\Phi _{\\mathbb {A}_S^\\infty }(X) = \\mathbb {A}_S^{L[S,X]}$ .", "(Recall Remark REF concerning the convention on $\\mathbb {A}_S$ .)", "Let $\\mathbb {A}^\\infty _S = [\\Phi _{A^\\infty _S}]_{\\mu _\\mathcal {D}}$ .", "By Łoś's theorem, $\\mathbb 
{A}_S^\\infty $ is a forcing poset.", "Let $\\lambda _X = |\\mathbb {A}_S|^{\\mathrm {HOD}_S^{L[S,X]}}$ .", "Let $\\Phi _\\lambda $ be a function on $\\mathcal {D}$ defined by $\\Phi _\\lambda (X) = \\lambda _X$ .", "Let $\\lambda = [\\Phi _{\\lambda }]_{\\mu _\\mathcal {D}}$ .", "By Łoś's theorem, $\\mathcal {M}\\models \\lambda = |\\mathbb {A}^\\infty _S|$ .", "Let $\\Phi _{S^\\infty }$ be a function of $\\mathcal {D}$ defined by $\\Phi _{S^\\infty }(X) = S$ .", "Let $S^\\infty = [\\Phi _{S^\\infty }]_{\\mu _\\mathcal {D}}$ .", "Claim: For all $a \\in \\mathbb {R}$ , $a \\in B \\Leftrightarrow L[S^\\infty ,\\mathbb {A}_S^\\infty ,a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda )} \\Vdash (\\exists b)L[S^\\infty ,a,b] \\models \\varphi (S^\\infty ,a,b).$ To prove the claim: First observe that for all $X \\in \\mathcal {D}$ , $\\lambda _X$ is countable in $L(S,\\mathbb {R})$ which is a model of $\\mathsf {AD}$ .", "To see this: Note that $\\mathbb {R}^{L[S,X]}$ is countable since it is a wellorderable collection of reals; thus, there is a bijection of $\\omega $ with $\\mathbb {R}^{L[S,X]}$ .", "$\\mathbb {A}_S^{L[S,X]}$ is a collection of subsets of $\\mathbb {R}^{L[S,X]}$ in $L[S,X]$ .", "Identifying $\\mathbb {R}^{L[S,X]}$ with $\\omega $ , this collection $\\mathbb {A}_S^{L[S,X]}$ can be identified as a wellorderable collection of $\\mathbb {R}$ , as well.", "Thus $\\lambda _X = |\\mathbb {A}_S|^{L[S,X]}$ is countable in $L(S,\\mathbb {R})$ .", "By the same reasoning, $(2^{\\lambda _X})^{L[S,X]}$ is countable in $L(S,\\mathbb {R})$ .", "$(\\Leftarrow )$ For all $X \\in \\mathcal {D}$ so that $a \\in X$ , one can define $\\mathrm {HOD}_S^{L[S,X]}[a]$ as in Fact REF .", "Let $\\mathcal {M}[a] = \\prod _{X \\in \\mathcal {D}} \\mathrm {HOD}^{L[S,X]}_S[a] \\slash \\mu _\\mathcal {D}.$ Assume that $V \\models L[S^\\infty ,\\mathbb {A}_S^\\infty ,a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda )} \\Vdash (\\exists b)L[S^\\infty ,a,b] \\models \\varphi (S^\\infty 
,a,b).$ Thus $\mathcal {M}[a] \models L[S^\infty ,\mathbb {A}_S^\infty ,a] \models 1_{\mathrm {Coll}(\omega ,\lambda )} \Vdash (\exists b)L[S^\infty ,a,b] \models \varphi (S^\infty ,a,b).$ By the idea of the proof of Łoś's theorem (Fact REF ), for $\mu _\mathcal {D}$ -almost all $X \in \mathcal {D}$ with $[a]_T \le _T X$ , $L[S,\mathbb {A}_S^{L[S,X]},a] \models 1_{\mathrm {Coll}(\omega ,\lambda _X)}\Vdash (\exists b)L[S,a,b]\models \varphi (S,a,b).$ Fix such an $X$ .", "Since $(2^{\lambda _X})^{L[S,X]}$ is countable in $L(S,\mathbb {R})$ , there is a $g \in L(S,\mathbb {R})$ so that $g \subseteq \mathrm {Coll}(\omega ,\lambda _X)$ is $\mathrm {Coll}(\omega ,\lambda _X)$ -generic over $\mathrm {HOD}_{S}^{L[S,X]}[a]$ .", "Then $g$ is also generic over $L[S,\mathbb {A}_S^{L[S,X]},a]$ .", "By the forcing theorem, $L[S,\mathbb {A}_S^{L[S,X]},a][g] \models (\exists b)L[S,a,b] \models \varphi (S,a,b)$ .", "Pick some $b \in L[S,\mathbb {A}_S^{L[S,X]},a][g]$ witnessing the existential.", "Then one has in particular that $L[S,a,b] \models \varphi (S,a,b)$ .", "Since $(S,\varphi )$ is the $\infty $ -Borel code for $A$ , one has that $(a,b) \in A$ .", "Thus $a \in B$ .", "$(\Rightarrow )$ Suppose that $a \in B$ .", "Let $b \in \mathbb {R}$ be such that $(a,b) \in A$ .", "Let $X$ be a Turing degree such that $[a \oplus b]_T \le _T X$ .", "By Fact REF , let $G_{a \oplus b}$ be the $\mathbb {A}_{S}^{L[S,X]}$ -generic over $\mathrm {HOD}_S^{L[S,X]}$ filter derived from $a \oplus b$ .", "Note that $G_{a\oplus b}$ is also $\mathbb {A}_S^{L[S,X]}$ -generic over $L[S,\mathbb {A}_S^{L[S,X]}]$ .", "Using the convention from Remark REF and an argument similar to Fact REF , one can recover $a \oplus b$ from $G_{a\oplus b}$ in $L[S,\mathbb {A}_S^{L[S,X]}][G_{a\oplus b}]$ .", "Thus $L[S,\mathbb {A}_S^{L[S,X]}][G_{a\oplus b}] \models (\exists y)L[S,a,y]\models \varphi (S,a,y).$ Since $\mathbb {A}_S^{L[S,X]}$ is a forcing of size
$\\lambda _X$ , one has that $\\mathbb {A}_S^{L[S,X]}$ regularly embeds into $\\mathrm {Coll}(\\omega ,\\lambda _X)$ by [13] Corollary 26.8.", "There is some $g \\subseteq \\mathrm {Coll}(\\omega ,\\lambda _X)$ which is generic over $L[S,\\mathbb {A}_S^{L[S,X]}]$ so that $G_{a\\oplus b} \\in L[S,\\mathbb {A}_S^{L[S,X]}][g]$ .", "Thus $L[S,\\mathbb {A}_S^{L[S,X]}][g] \\models (\\exists y)L[S,a,y]\\models \\varphi (S,a,y).$ Note that $a \\in L[S,\\mathbb {A}_S^{L[S,X]}][g]$ .", "By one of the main properties of the collapse forcing ([13] Corollary 26.10), there is some $h \\subseteq \\mathrm {Coll}(\\omega ,\\lambda _X)$ which is $\\mathrm {Coll}(\\omega ,\\lambda _X)$ -generic over $L[S,\\mathbb {A}_S^{L[S,X]}][a]$ so that $L[S,\\mathbb {A}_S^{L[S,X]}][a][h] = L[S,\\mathbb {A}_S^{L[S,X]}][g]$ .", "Thus $L[S,\\mathbb {A}_S^{L[S,X]}][a][h] \\models (\\exists y)L[S,a,y]\\models \\varphi (S,a,y).$ There is some $p \\in \\mathrm {Coll}(\\omega ,\\lambda _X)$ which forces the inner statement.", "By the homogeneity of $\\mathrm {Coll}(\\omega ,\\lambda _X)$ , $1_{\\mathrm {Coll}(\\omega ,\\lambda _X)}$ forces this statement: $L[S,\\mathbb {A}_S^{L[S,X]}][a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda _X)} \\Vdash (\\exists y)L[S,a,y]\\models \\varphi (S,a,y).$ In particular, $\\mathrm {HOD}_{S}^{L[S,X]}[a] \\models L[S,\\mathbb {A}_S^{L[S,X]},a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda _X)} \\Vdash (\\exists y)L[S,a,y]\\models \\varphi (S,a,y).$ By Łoś's theorem (Fact REF ), one has $\\mathcal {M}[a] \\models L[S^\\infty ,\\mathbb {A}_S^\\infty ,a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda )} \\Vdash (\\exists y)L[S^\\infty ,a,y]\\models \\varphi (S^\\infty ,a,y).$ So in particular, $L[S^\\infty ,\\mathbb {A}_S^\\infty ,a] \\models 1_{\\mathrm {Coll}(\\omega ,\\lambda )} \\Vdash (\\exists y)L[S^\\infty ,a,y]\\models \\varphi (S^\\infty ,a,y).$ This completes the proof of the claim.", "Let $J$ be a set of ordinals that codes $S^\\infty $ and $\\mathbb 
{A}_S^\\infty $ in some fixed way.", "Let $\\psi (J,a)$ be the formula that asserts $1_{\\mathrm {Coll}(\\omega ,\\lambda )} \\Vdash (\\exists y)L[S^\\infty ,a,y] \\models \\varphi (S^\\infty ,a,y).$ $(J,\\psi )$ is an $\\infty $ -Borel code for $B$ .", "Fact 7.14 (Woodin) Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {DC}$ .", "Let $1 \\le m \\le n$ .", "Let $\\pi _{n,m} : \\mathbb {R}^n \\rightarrow \\mathbb {R}^m$ be defined by $\\pi _{n,m}(s) = s\\upharpoonright m$ .", "Define $\\pi _{n,m} : {}_n\\mathbb {A}\\rightarrow {}_m\\mathbb {A}$ by $\\pi _{n,m}(b) = \\pi _{n,m}[b]$ , where the latter $\\pi _{n,m}$ refers to the earlier function $\\pi _{n,m} : \\mathbb {R}^n \\rightarrow \\mathbb {R}^m$ .", "Then $\\pi _{n,m}$ is a forcing projection.", "By Fact REF , one can show that $\\pi _{n,m}(p) \\in {}_m\\mathbb {A}$ for all $p \\in {}_n\\mathbb {A}$ .", "Now suppose that $q \\in {}_n\\mathbb {A}$ .", "Suppose $p^{\\prime } \\in {}_m\\mathbb {A}$ is such that $p^{\\prime } \\le _{{}_m\\mathbb {A}} \\pi _{n,m}(q)$ .", "Let $q^{\\prime } = \\lbrace x \\in \\mathbb {R}^n : x\\upharpoonright n \\in \\pi _{n,m}(q)\\rbrace $ .", "Since $\\pi _{n,m}(q) \\in {}_m\\mathbb {A}$ (that is, it has an $\\mathrm {OD}$ $\\infty $ -Borel code), $q^{\\prime }$ has an $\\mathrm {OD}$ $\\infty $ -Borel code and hence belongs to ${}_n\\mathbb {A}$ .", "Clearly, $\\pi _{n,m}(q^{\\prime }) = p^{\\prime }$ .", "This establishes that $\\pi _{n,m}$ is a forcing projection.", "Now one will work in $L(\\mathbb {R}) \\models \\mathsf {AD}$ .", "Kechris ([15]) showed that if $\\mathsf {AD}$ holds, then $L(\\mathbb {R}) \\models \\mathsf {DC}_\\mathbb {R}$ .", "Thus, for the following results, the background theory $\\mathsf {ZF}+ \\mathsf {AD}$ is sufficient.", "Definition 7.15 Using the projections from Fact REF , let ${}_\\omega \\mathbb {A}$ be the finite support direct limit of $\\langle {}_n\\mathbb {A}: n \\in \\omega \\setminus \\lbrace 0\\rbrace \\rangle $ .", "That is, ${}_{\\omega 
}\mathbb {A}$ is the collection of $p : (\omega \setminus \lbrace 0\rbrace ) \rightarrow \bigcup _{n \in \omega }{}_n\mathbb {A}$ so that for all $m \le n$ , $\pi _{n,m}(p(n)) = p(m)$ and there exists an $N \in \omega $ so that for all $k \ge N$ , $p(k) = p(N) \times \mathbb {R}^{k - N}$ .", "The least such $N$ is denoted $\dim (p)$ , the dimension of $p$ .", "For $n \in \omega $ and $p \in {}_\omega \mathbb {A}$ , let $\pi _{\omega ,n}(p) = p(n)$ .", "Each $\pi _{\omega ,n} : {}_\omega \mathbb {A}\rightarrow {}_n\mathbb {A}$ is a forcing projection.", "Since each ${}_n\mathbb {A}\in \mathrm {HOD}$ is identified as a set of ordinals (see Remark REF ) and the projection maps are in $\mathrm {HOD}$ , ${}_\omega \mathbb {A}$ belongs to $\mathrm {HOD}$ and may be identified as a set of ordinals having the property expressed in Remark REF .", "For $m \le n$ , let $\tau _m$ be a ${}_n\mathbb {A}$ -name for the last real of $\dot{x}_\mathrm {gen}^m$ , which is a name for the generic $m$ -tuple coming from a ${}_n\mathbb {A}$ -generic filter.", "(Technically, $\tau _m$ is different for each $n$ , but the projection can be used to interpret it in suitable $n$ 's.)", "Let ${\dot{\mathbb {R}}_\mathrm {sym}}$ be a ${}_\omega \mathbb {A}$ -name so that $1_{{}_\omega \mathbb {A}}\Vdash {\dot{\mathbb {R}}_\mathrm {sym}}= \lbrace \tau _n : n \in \omega \setminus \lbrace 0\rbrace \rbrace $ .", "Observe that from Fact REF , every $z \in \mathbb {R}^n$ induces a ${}_n\mathbb {A}$ -generic filter $G_z$ over $\mathrm {HOD}$ so that $\mathrm {HOD}[z] = \mathrm {HOD}[G_z]$ .", "Note that $\dot{x}_\mathrm {gen}[G_z] = z$ and $\tau _m[G_z] = z(m)$ for all $m < n$ .", "Theorem 7.16 (Woodin) Assume $\mathsf {ZF}+ \mathsf {AD}+ \mathsf {V = L(\mathbb {R})}$ .", "Suppose $g \subseteq \mathrm {Coll}(\omega ,\mathbb {R})$ is generic over $L(\mathbb {R})$ .", "From $g$ , one can derive a ${}_\omega \mathbb {A}$
-generic over $\\mathrm {HOD}^{L(\\mathbb {R})}$ filter $G_g$ .", "Let $g \\subseteq \\mathrm {Coll}(\\omega ,\\mathbb {R})$ be a generic over $\\mathrm {HOD}^{L(\\mathbb {R})}$ .", "Let $G_g \\subseteq {}_\\omega \\mathbb {A}$ be the collection of condition $p \\in {}_\\omega \\mathbb {A}$ so that $g \\upharpoonright \\dim (p) \\in p(\\dim (p))$ .", "The claim is that $G_g$ is ${}_\\omega \\mathbb {A}$ -generic over $\\mathrm {HOD}^{L(\\mathbb {R})}$ .", "Suppose $D \\subseteq {}_\\omega \\mathbb {A}$ belongs to $\\mathrm {HOD}^{L(\\mathbb {R})}$ and is dense.", "Let $\\tilde{D} \\subseteq \\mathrm {Coll}(\\omega ,\\mathbb {R})$ be the collection of $s \\in \\mathrm {Coll}(\\omega ,\\mathbb {R})$ so that there is some $p \\in {}_\\omega \\mathbb {A}$ with $\\dim (p) = |s|$ , $s \\in p(|s|)$ , and $p \\in D$ .", "(Note that $s$ as a condition of $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ is a finite tuple of reals.)", "One will show that $\\tilde{D}$ is dense in $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ .", "Let $s \\in \\mathrm {Coll}(\\omega ,\\mathbb {R})$ .", "Let $n = |s|$ .", "Let $E = \\lbrace p \\in {}_n\\mathbb {A}: (\\exists q \\in {}_\\omega \\mathbb {A})(\\dim (q) \\ge n \\wedge q \\in D \\wedge \\pi _{\\omega ,n}(q) = p)\\rbrace .$ First, one will show $E$ is dense in ${}_n\\mathbb {A}$ .", "Let $r \\in {}_n\\mathbb {A}$ .", "Since $D$ is dense in ${}_\\omega \\mathbb {A}$ , there is some $q \\in {}_\\omega \\mathbb {A}$ with $\\dim (q) \\ge n$ so that $q \\le _{{}_\\omega \\mathbb {A}} r$ and $q \\in D$ .", "Then $p = \\pi _{\\omega ,n}(q)$ belongs to $E$ and $p \\le _{{}_n\\mathbb {A}} r$ .", "Thus $E$ is dense in ${}_n\\mathbb {A}$ .", "Let $G_s^n$ be the ${}_n\\mathbb {A}$ -generic over $\\mathrm {HOD}^{L(\\mathbb {R})}$ filter derived from $s$ .", "By genericity, pick some $p \\in G_s^n \\cap E$ .", "Thus there is some $q \\in D$ so that $\\pi _{\\omega ,n}(q) = p$ .", "Let $s^{\\prime } \\supseteq s$ be such that $s^{\\prime } \\in q$ .", "This 
means that $s^{\\prime } \\le _{\\mathrm {Coll}(\\omega ,\\mathbb {R})} s$ and $s^{\\prime } \\in \\tilde{D}$ .", "It has been shown that $\\tilde{D}$ is dense in $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ .", "Now since $g \\subseteq \\mathrm {Coll}(\\omega ,\\mathbb {R})$ is $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ -generic over $L(\\mathbb {R})$ , $g \\cap \\tilde{D} \\ne \\emptyset $ .", "There is some $n \\in \\omega $ so that $g \\upharpoonright n \\in \\tilde{D}$ .", "By definition, there is some $p \\in D$ so that $g\\upharpoonright n \\in p(n)$ .", "Thus $p \\in G_g \\cap D$ .", "This shows that $G_g$ is ${}_\\omega \\mathbb {A}$ -generic over $\\mathrm {HOD}$ .", "Fact 7.17 (Woodin) Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "$\\mathrm {HOD}^{L(\\mathbb {R})} \\models 1_{{}_\\omega \\mathbb {A}} \\Vdash $ “the reals of $L({\\dot{\\mathbb {R}}_\\mathrm {sym}})$ is ${\\dot{\\mathbb {R}}_\\mathrm {sym}}$ ”.", "Let $p \\in {}_\\omega \\mathbb {A}$ be some condition and let $n = \\dim (p)$ .", "Let $s \\in \\mathbb {R}^n$ with $s \\in p(n)$ .", "Consider $s$ as a condition of $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ .", "Let $g \\subseteq \\mathrm {Coll}(\\omega )$ be $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ -generic over $L(\\mathbb {R})$ containing $s$ .", "An easy density argument shows that if $g$ is considered as a function from $\\omega $ to $\\mathbb {R}$ , $g$ must be a surjection onto $\\mathbb {R}^{L(\\mathbb {R})}$ .", "Let $G_g$ be the ${}_\\omega \\mathbb {A}$ -generic filter over $\\mathrm {HOD}^{L(\\mathbb {R})}$ derived from $g$ .", "Then $\\mathrm {HOD}^{L(\\mathbb {R})}[G_g] \\models $ “the reals of $L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g])$ is ${\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]$ ” since $L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]) = L(\\mathbb {R})$ .", "Thus there is some $q \\le _{{}_\\omega \\mathbb {A}} p$ so that $\\mathrm {HOD}^{L(\\mathbb {R})} \\models q \\Vdash _{{}_\\omega \\mathbb {A}}$ 
“the reals of $L({\dot{\mathbb {R}}_\mathrm {sym}})$ is ${\dot{\mathbb {R}}_\mathrm {sym}}$ ”.", "Since $p$ was an arbitrary condition, one has that $1_{{}_\omega \mathbb {A}}$ forces this same statement.", "Theorem 7.18 (Woodin) Assume $\mathsf {ZF}+ \mathsf {AD}+ \mathsf {V = L(\mathbb {R})}$ .", "Let $s \in \mathbb {R}^n$ .", "Suppose $G^n_s$ is the ${}_n\mathbb {A}$ -generic filter over $\mathrm {HOD}^{L(\mathbb {R})}$ derived from $s$ .", "Let $\varphi $ be a formula.", "Suppose $z \in \mathrm {HOD}^{L(\mathbb {R})}[G^n_s]$ .", "Then $\mathrm {HOD}^{L(\mathbb {R})}[G_s^n] \models 1_{{}_\omega \mathbb {A}\slash G_s^n} \Vdash _{{}_\omega \mathbb {A}\slash G^n_s} L({\dot{\mathbb {R}}_\mathrm {sym}}) \models \varphi (\check{z},\dot{x}_\mathrm {gen}^n)$ or $\mathrm {HOD}^{L(\mathbb {R})}[G_s^n] \models 1_{{}_\omega \mathbb {A}\slash G_s^n} \Vdash _{{}_\omega \mathbb {A}\slash G_s^n} L({\dot{\mathbb {R}}_\mathrm {sym}}) \models \lnot \varphi (\check{z},\dot{x}_\mathrm {gen}^n)$ .", "In $L(\mathbb {R})$ , one either has $L(\mathbb {R}) \models \varphi (z,s)$ or $L(\mathbb {R}) \models \lnot \varphi (z,s)$ .", "The claim is that whichever case occurs, this is the side that is forced over $\mathrm {HOD}^{L(\mathbb {R})}[G_s^n]$ .", "So without loss of generality, suppose $L(\mathbb {R}) \models \varphi (z,s)$ .", "Take any $q \in {}_\omega \mathbb {A}\slash G_s^n$ with $\dim (q) \ge n$ .", "Let $m = \dim (q)$ .", "This means that $\pi _{\omega ,n}(q) \in G_s^n$ .", "Let $r \in q(m)$ be an $\mathbb {R}^m$ -sequence extending $s$ .", "Let $g \subseteq \mathrm {Coll}(\omega ,\mathbb {R})$ be a $\mathrm {Coll}(\omega ,\mathbb {R})$ -generic over $L(\mathbb {R})$ filter so that $r \subseteq g$ .", "Let $G_g$ be the ${}_\omega \mathbb {A}$ -generic over $\mathrm {HOD}^{L(\mathbb {R})}$ filter derived from $g$ .", "Note that $G_g$ is a ${}_\omega \mathbb {A}\slash G_s^n$ -generic filter
over $\\mathrm {HOD}^{L(\\mathbb {R})}[G_s^n]$ (see the basic properties of projection from Definition REF ) and that $q \\in G_g$ .", "Using Fact REF and the fact that $\\mathrm {HOD}[G^n_s][G_g] = \\mathrm {HOD}[G_g]$ (also see Definition REF ), $\\mathrm {HOD}[G_s^n][G_g] \\models L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g])\\models \\varphi (z, \\dot{x}_\\mathrm {gen}^n[G_s^n])$ since $L(\\mathbb {R}) \\models \\varphi (z,s)$ .", "Hence there is some $q^{\\prime } \\le _{{}_\\omega \\mathbb {A}\\slash G_s^n} q$ so that $\\mathrm {HOD}[G_s^n] \\models q^{\\prime } \\Vdash _{{}_\\omega \\mathbb {A}\\slash G_s^n} L({\\dot{\\mathbb {R}}_\\mathrm {sym}}) \\models \\varphi (\\check{z},\\dot{x}_\\mathrm {gen}^n).$ Since $q \\in {}_\\omega \\mathbb {A}\\slash G_s^n$ was arbitrary, $1_{{}_\\omega \\mathbb {A}\\slash G_s^n}$ forces this statement.", "This completes the proof.", "Theorem 7.19 (Woodin) Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "Let $s \\in \\mathbb {R}^n$ , $z \\in L[{}_\\omega \\mathbb {A},s]$ , and $\\varphi $ be a formula.", "Then $L(\\mathbb {R}) \\models \\varphi (s,z)$ if and only if $L[{}_\\omega \\mathbb {A},s] \\models 1_{{}_\\omega \\mathbb {A}\\slash G_s^n} \\Vdash _{{}_\\omega \\mathbb {A}\\slash G_s^n} L({\\dot{\\mathbb {R}}_\\mathrm {sym}}) \\models \\varphi (\\dot{x}_\\mathrm {gen}^n, \\check{z})$ Recall ${}_\\omega \\mathbb {A}\\in \\mathrm {HOD}^{L(\\mathbb {R})}$ .", "Using the convention from Remark REF , one may assume that from a $p \\in {}_\\omega \\mathbb {A}$ , one can obtain an $\\infty $ -Borel code for this condition.", "From this, one can see that given $s$ and ${}_\\omega \\mathbb {A}$ , one can reconstruct $G_s^n$ .", "Also the names $\\dot{x}_\\mathrm {gen}^n$ and ${\\dot{\\mathbb {R}}_\\mathrm {sym}}$ belong to $L[{}_\\omega \\mathbb {A}]$ .", "$(\\Rightarrow )$ Using the idea from Fact REF , one can show that $L[{}_\\omega \\mathbb {A},s] = L[{}_\\omega \\mathbb {A}][G_s^n]$ .", "Take any $p 
\\in {}_\\omega \\mathbb {A}\\slash G^n_s$ .", "In particular, $p \\in {}_\\omega \\mathbb {A}$ .", "Pick some $r \\in \\mathrm {Coll}(\\omega ,\\mathbb {R})$ with $r \\supseteq s$ and $r \\in p(|r|)$ .", "Let $g \\subseteq \\mathrm {Coll}(\\omega ,\\mathbb {R})$ be $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ -generic over $L(\\mathbb {R})$ and $r \\subseteq g$ .", "Using ${}_\\omega \\mathbb {A}$ , one can reconstruct $G_g$ which is a ${}_\\omega \\mathbb {A}$ -generic filter over $\\mathrm {HOD}^{L(\\mathbb {R})}$ .", "Thus $G_g$ is $({}_\\omega \\mathbb {A}\\slash G^n_s)$ -generic over $\\mathrm {HOD}^{L(\\mathbb {R})}[G^n_s]$ and hence generic over $L[{}_\\omega \\mathbb {A},s]$ .", "Since $L(\\mathbb {R}) \\models \\varphi (s,z)$ , $L[{}_\\omega \\mathbb {A},s][G_g] \\models L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]) \\models \\varphi (s,\\check{z})$ because $L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]) = L(\\mathbb {R})$ .", "Pick a $q \\le _{{}_\\omega \\mathbb {A}} p$ so that $L[{}_\\omega \\mathbb {A},s] \\models q \\Vdash _{{}_\\omega \\mathbb {A}\\slash G_s^n} L({\\dot{\\mathbb {R}}_\\mathrm {sym}})\\models \\varphi (s,\\check{z}).$ Since $p$ was arbitrary, $1_{{}_\\omega \\mathbb {A}\\slash G_s^n}$ forces this statement.", "$(\\Leftarrow )$ Let $g \\supseteq s$ be a $\\mathrm {Coll}(\\omega ,\\mathbb {R})$ -generic filter over $L(\\mathbb {R})$ .", "Let $G_g$ be the ${}_\\omega \\mathbb {A}$ -generic filter of $\\mathrm {HOD}^{L(\\mathbb {R})}$ derived from $g$ .", "It is $({}_\\omega \\mathbb {A}\\slash G_s^n)$ -generic over $\\mathrm {HOD}^{L(\\mathbb {R})}[G^n_s]$ and hence generic over $L[{}_\\omega \\mathbb {A},s]$ .", "Then one has $L[{}_\\omega \\mathbb {A},s][G_g] \\models L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]) \\models \\varphi (s,z).$ However, $L({\\dot{\\mathbb {R}}_\\mathrm {sym}}[G_g]) = L(\\mathbb {R})$ .", "Thus $L(\\mathbb {R}) \\models \\varphi (s,z)$ .", "Corollary 7.20 Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = 
L(\\mathbb {R})}$ .", "Every set of reals has an $\\infty $ -Borel code.", "Moreover, if $A$ is $\\mathrm {OD}_{s}$ where $s$ is a finite tuple of reals, then $A$ has an $\\infty $ -Borel code in $L[{}_{\\omega }\\mathbb {A},s] \\subseteq \\mathrm {HOD}_s$ .", "Suppose $A \\subseteq \\mathbb {R}$ .", "In $L(\\mathbb {R})$ , every set is definable from ordinals and some reals.", "Let $s \\in \\mathbb {R}^n$ , $\\bar{\\alpha }$ be a tuple of ordinals, and $\\varphi $ be a formula so that $r \\in A \\Leftrightarrow \\varphi (s,r,\\bar{\\alpha })$ .", "By Theorem REF , one has that $r \\in A$ if and only if $L(\\mathbb {R}) \\models L[{}_\\omega \\mathbb {A},s,r] \\models 1_{{}_\\omega \\mathbb {A}\\slash G_{s\\hat{\\ }r}^{n + 1}} \\Vdash L({\\dot{\\mathbb {R}}_\\mathrm {sym}}) \\models \\varphi (s,r,\\bar{\\alpha }).$ Let $S$ be a set of ordinals coding $({}_\\omega \\mathbb {A},s,\\bar{\\alpha })$ .", "Let $\\psi $ be the formula expressing the inner forcing statement.", "One has that $(S,\\psi )$ is an $\\infty $ -Borel code for $A$ .", "Note that ${}_\\omega \\mathbb {A}$ can be considered an ultimate $\\infty $ -Borel code in the sense that for any $A \\subseteq \\mathbb {R}$ , there is some tuple of reals, tuple of ordinals, and a formula so that together with ${}_\\omega \\mathbb {A}$ they form an $\\infty $ -Borel code for $A$ .", "This is very useful in diagonalization arguments.", "Corollary 7.21 Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "Then $\\mathrm {HOD}^{L(\\mathbb {R})} = L[{}_\\omega \\mathbb {A}]$ .", "Since ${}_\\omega \\mathbb {A}\\in \\mathrm {HOD}^{L(\\mathbb {R})}$ , $L[{}_\\omega \\mathbb {A}]\\subseteq \\mathrm {HOD}^{L(\\mathbb {R})}$ .", "$\\mathrm {HOD}^{L(\\mathbb {R})} \\subseteq L[{}_\\omega \\mathbb {A}]$ by Theorem REF .", "Corollary 7.22 Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "$\\mathbb {A}= \\mathbb {O}$ , that is the forcing with nonempty subsets of $\\mathbb {R}$ 
possessing $\\mathrm {OD}$ $\\infty $ -Borel codes is the same as the Vopénka forcing.", "This follows from Corollary REF .", "Therefore, in the following, one will use the more practical Vopénka forcing $\\mathbb {O}$ rather than $\\mathbb {A}$ ." ], [ "The Vopénka Forcing", "Fact 8.1 ([9] Theorem 2.4.)", "Let $M$ be a transitive inner model of $\\mathsf {ZF}$ .", "Let $S \\in M$ be a set of ordinals.", "Suppose $K$ is an $\\mathrm {OD}^{M}_S$ set of ordinals and $\\varphi $ is a formula.", "Let $N$ be some transitive inner model with $\\mathrm {HOD}_S^M \\subseteq N$ .", "Suppose $p = \\lbrace x \\in \\mathbb {R}^M : L[K,x] \\models \\varphi (K,x)\\rbrace $ is a condition of $\\mathbb {O}^M_S$ (i.e.", "is nonempty).", "Then $N \\models p \\Vdash _{\\mathbb {O}_S^M} L[\\check{K},\\dot{x}_\\mathrm {gen}] \\models \\varphi (\\check{K}, \\dot{x}_\\mathrm {gen})$ .", "Suppose not.", "Then there is some $q^{\\prime } \\le _{\\mathbb {O}^M_S} p$ so that $N \\models q^{\\prime } \\Vdash _{\\mathbb {O}^M_S} L[\\check{K},\\dot{x}_\\mathrm {gen}] \\models \\lnot \\varphi (\\check{K},\\dot{x}_\\mathrm {gen})$ .", "Because every $\\mathbb {O}_S^M$ -generic filter over $N$ is also generic over $\\mathrm {HOD}_S^M$ , one can find some $q \\le _{\\mathbb {O}_S^M} q^{\\prime }$ so that $\\mathrm {HOD}_S^M \\models q \\Vdash _{\\mathbb {O}_S^M} L[\\check{K},\\dot{x}_\\mathrm {gen}] \\models \\lnot \\varphi (\\check{K},\\dot{x}_\\mathrm {gen})$ .", "Since $q \\ne \\emptyset $ , let $y \\in q$ .", "Let $G_y$ be the $\\mathbb {O}_S^M$ -generic filter over $\\mathrm {HOD}_S^M$ derived from $y$ .", "Note that $q \\in G_y$ .", "Thus $\\mathrm {HOD}_S^M[G_y] \\models L[K,y] \\models \\lnot \\varphi (K,y)$ .", "Hence $L[K,y] \\models \\lnot \\varphi (K,y)$ .", "This contradicts $q \\subseteq p$ .", "Fact 8.2 Let $M$ be an inner model of $\\mathsf {ZF}$ .", "Let $S \\in M$ be a set of ordinals.", "Let $N$ be an inner model of $\\mathsf {ZF}$ such that $N \\supseteq \\mathrm {HOD}_S^M$ .", 
"Let $n \\ge 1$ be a natural number.", "Suppose $(g_0,...,g_{n - 1})$ is an ${}_n\\mathbb {O}_S^M$ -generic reals over $N$ .", "Then each $g_0$ ,...,$g_{n - 1}$ is $\\mathbb {O}^M_S$ -generic over $N$ .", "Here $g$ is a $\\mathbb {O}_S^M$ -generic real over $N$ if and only if there is a filter $G$ which $\\mathbb {O}_S^M$ -generic over $N$ so that $g = \\dot{x}_\\mathrm {gen}[G]$ .", "For simplicity assume $n = 2$ .", "Let $\\pi _1 : \\mathbb {R}^2 \\rightarrow \\mathbb {R}$ be the projection onto the first coordinate.", "For each $p \\in {}_2\\mathbb {O}_S^M$ , $\\pi _1[p] \\in \\mathbb {O}_S^M$ .", "Let $(g_0,g_1)$ be ${}_2\\mathbb {O}_S^M$ -generic over $N$ .", "Let $G_{(g_0,g_1)}$ be the ${}_2\\mathbb {O}^M_S$ -generic filter over $N$ which adds $(g_0,g_1)$ .", "Let $G = \\lbrace \\pi _1(p) : p \\in G_{(g_0,g_1)}\\rbrace $ .", "$G$ is a filter on $\\mathbb {O}_S^M$ .", "Suppose $D \\subseteq \\mathbb {O}_S^M$ belongs to $N$ and is dense open.", "Let $D^{\\prime } = \\lbrace p \\in {}_2\\mathbb {O}_S^M : \\pi _1(p) \\in D\\rbrace $ .", "Let $r \\in {}_2\\mathbb {O}_S^M$ .", "Since $D$ is dense, there is some $r^{\\prime } \\le _{\\mathbb {O}_S^M} \\pi _1(r)$ with $r^{\\prime } \\in D$ .", "Let $s = (r^{\\prime } \\times \\mathbb {R}) \\cap r$ .", "Note $s \\in {}_2\\mathbb {O}_S^M$ , $\\pi _1(s) = r^{\\prime } \\in D$ , and $s \\le _{{}_2\\mathbb {O}_S^M} r$ .", "Hence $s \\in D^{\\prime }$ .", "This shows that $D^{\\prime }$ is dense in ${}_2\\mathbb {O}_S^M$ .", "By genericity, there is some $r \\in D^{\\prime } \\cap G_{(g_0,g_1)}$ .", "Then $\\pi _1(r) \\in D \\cap G$ .", "This shows that $G$ is $\\mathbb {O}^M_S$ -generic over $N$ .", "Since $g_0$ is the real added by $G$ , $g_0$ is a $\\mathbb {O}_S^M$ -generic real over $N$ .", "First, one will give a proof of Woodin's countable section uniformization.", "This follows a presentation of this result in [19].", "It is not known whether countable section uniformization is provable in $\\mathsf {AD}$ alone.", 
"Lemma 8.3 Assume $\\mathsf {ZF}+ \\mathsf {AD}$ .", "Let $R \\subseteq \\mathbb {R}\\times \\mathbb {R}$ be a relation.", "Suppose there is a set of ordinals $S$ with the property that for all $x \\in \\mathbb {R}$ so that $R_x = \\lbrace y \\in \\mathbb {R}: R(x,y)\\rbrace \\ne \\emptyset $ (i.e.", "$x \\in \\mathrm {dom}(R)$ ), for $\\mu _\\mathcal {D}$ -almost all Turing degrees $Z$ , $R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]} \\ne \\emptyset $ .", "Then $R$ has a uniformization.", "Think of $\\mathbb {R}= {{}^\\omega 2}$ .", "Let $x \\in \\mathrm {dom}(R)$ .", "Let $E^x = \\lbrace Z \\in \\mathcal {D}: R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]} \\ne \\emptyset \\rbrace $ .", "By the assumption, $E^x \\in \\mu _\\mathcal {D}$ .", "For each $n \\in \\omega $ and $i \\in 2$ , let $E^x_{n,i}$ be the collection of $Z \\in E^x$ so that $z(n) = i$ , where $z$ is the $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ -least element of $R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ according to the canonical wellordering of $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ .", "Note that $E^x = E^x_{n,0} \\cup E^x_{n,1}$ .", "Since $\\mu _\\mathcal {D}$ is an ultrafilter and $E^x \\in \\mu _\\mathcal {D}$ , for each $n \\in \\omega $ , there is a unique $a^x_n \\in 2$ so that $E^x_{n,a_n^x} \\in \\mu _\\mathcal {D}$ .", "Define a function $\\Phi : \\mathrm {dom}(R) \\rightarrow {{}^\\omega 2}$ by $\\Phi (x)(n) = a_n^x$ .", "The claim is that $\\Phi $ is a uniformization for $R$ .", "Suppose $x \\in \\mathrm {dom}(R)$ .", "Since $\\mu _\\mathcal {D}$ is a countably complete ultrafilter on $\\mathcal {D}$ , $\\bigcap _{n \\in \\omega } E^x_{n,a^x_n} \\in \\mu _\\mathcal {D}$ .", "Pick any $Z \\in \\bigcap _{n \\in \\omega } E^x_{n,a^x_n}$ .", "Then $\\Phi (x)$ is the $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ -least element of $R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ .", "In particular, $\\Phi (x) \\in R_x$ .", "$\\Phi $ is a uniformization.", "Theorem 8.4 (Woodin's countable section uniformization) Assume 
$\\mathsf {ZF}+ \\mathsf {AD}$ and that all sets of reals have an $\\infty $ -Borel code.", "Let $R \\subseteq \\mathbb {R}\\times \\mathbb {R}$ be such that for all $x \\in \\mathrm {dom}(R)$ , $R_x = \\lbrace y \\in \\mathbb {R}: R(x,y)\\rbrace $ is countable.", "Then $R$ has a uniformization function, that is some function $F : \\mathrm {dom}(R) \\rightarrow \\mathbb {R}$ so that for all $x \\in \\mathrm {dom}(R)$ , $R(x,F(x))$ .", "In particular, countable section uniformization holds in $L(\\mathbb {R}) \\models \\mathsf {AD}$ .", "Let $(S,\\varphi )$ be an $\\infty $ -Borel code for $R$ , that is $\\mathfrak {B}^2_{(S,\\varphi )} = R$ .", "Let $x \\in \\mathrm {dom}(R)$ .", "Thus there is some $y^*$ such that $R(x,y^*)$ .", "Let $Z \\in \\mathcal {D}$ be such that $Z \\ge [y^*]_T$ .", "In the model $L[S,x,Z]$ , define $p = \\lbrace y \\in \\mathbb {R}: L[S,x,y] \\models \\varphi (S,x,y)\\rbrace .$ This condition is clearly $\\mathrm {OD}_{S,x}^{L[S,x,Z]}$ .", "Note $[y^*]_T \\le Z$ implies that $y^* \\in L[S,x,Z]$ .", "Since in the real world $V$ , $R(x,y^*)$ holds, one has that $V \\models L[S,x,y^*] \\models \\varphi (S,x,y^*)$ .", "Thus $L[S,x,Z] \\models L[S,x,y^*] \\models \\varphi (S,x,y^*)$ .", "So $y^* \\in p$ .", "Hence $p \\ne \\emptyset $ .", "It has been shown that $p \\in \\mathbb {O}_{S,x}^{L[S,x,Z]}$ .", "Claim 1: There is a dense below $p$ set of conditions in $\\mathbb {O}_{S,x}^{L[S,x,Z]}$ which forces a value for $\\dot{x}_\\mathrm {gen}$ .", "To prove Claim 1: Suppose otherwise.", "As argued in the proof of Fact REF , since $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}\\models \\mathsf {AC}$ , ${P}(\\mathbb {O}_{S,x}^{L[S,x,Z]})^{\\mathrm {HOD}_{S,x}^{L[S,x,Z]}}$ is countable in the real world $V$ .", "Let $\\langle D_n : n \\in \\omega \\rangle $ enumerate (in the real world) all the dense open subsets of $\\mathbb {O}_{S,x}^{L[S,x,Z]}$ that belong to $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ .", "Let $p_\\emptyset $ be any condition below $p$ that meets 
$D_0$ .", "Let $m_\\emptyset = 0$ .", "Suppose for some $\\sigma \\in {{}^{<\\omega }2}$ , $p_\\sigma $ and $m_\\sigma $ have been defined.", "First find some $p^{\\prime } \\le p_\\sigma $ so that $p^{\\prime } \\in D_{|\\sigma | + 1}$ .", "Since $p^{\\prime }$ can not determine $\\dot{x}_\\mathrm {gen}$ , there is some $N \\ge m_\\sigma $ and two conditions $q_0$ and $q_1$ below $p^{\\prime }$ so that (1) $q_0$ and $q_1$ determine $\\dot{x}_\\mathrm {gen} \\upharpoonright N + 1$ and (2) For $i \\in 2$ , $q_i \\Vdash \\dot{x}_\\mathrm {gen}(\\check{N}) = \\check{i}$ .", "(That is, $q_i$ forces the generic real to take value $i$ at $N$ .)", "Let $q_0$ and $q_1$ be the $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ -least pair with the above property.", "Now let $p_{\\sigma \\hat{\\ }i} = q_i$ and $m_{\\sigma \\hat{\\ }i} = N + 1$ .", "Observe that $p_{\\sigma \\hat{\\ }i} \\in D_{n + 1}$ (as $D_{n + 1}$ is dense open).", "This completes the construction of a sequence $\\langle p_{\\sigma } : \\sigma \\in {{}^{<\\omega }2}\\rangle $ of conditions in $\\mathbb {O}^{L[S,x,Z]}_{S,x}$ .", "For each $f \\in {{}^\\omega 2}$ , let $G_f$ be the $\\le _{\\mathbb {O}_{S,x}^{L[S,x,Z]}}$ upward closure of the set $\\lbrace p_{f\\upharpoonright n} : n \\in \\omega \\rbrace $ .", "By construction, $G_f$ is $\\mathbb {O}_{S,x}^{L[S,x,Z]}$ -generic over $\\mathrm {HOD}^{L[S,x,Z]}_{S,x}$ .", "Also by construction, if $f \\ne g$ , then $\\dot{x}_\\mathrm {gen}[G_f] \\ne \\dot{x}_\\mathrm {gen}[G_g]$ .", "Hence $A = \\lbrace \\dot{x}_\\mathrm {gen}[G_f] : f \\in {{}^\\omega 2}\\rbrace $ is an uncountable set of reals.", "Subclaim 1.1: $A \\subseteq R_x$ .", "To see Subclaim 1.1: Note that for all $f \\in {{}^\\omega 2}$ , $p \\in G_f$ .", "Note that the condition $p$ takes the form specified in Fact REF .", "Hence the fact asserts that $\\mathrm {HOD}_{S,x}^{L[S,x,Z]} \\models p \\Vdash _{\\mathbb {O}_{S,x}^{L[S,x,Z]}} L[S,x,\\dot{x}_\\mathrm {gen}] \\models \\varphi 
(\\check{S},\\check{x},\\dot{x}_\\mathrm {gen}).$ Therefore, $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}[G_f] \\models L[S,x,\\dot{x}_\\mathrm {gen}[G_f]] \\models \\varphi (S,x,\\dot{x}_\\mathrm {gen}[G_f]).$ So in particular, $V \\models L[S,x,\\dot{x}_\\mathrm {gen}[G_f]] \\models \\varphi (S,x,\\dot{x}_\\mathrm {gen}[G_f]).$ Since $R = \\mathfrak {B}^2_{(S,\\varphi )}$ , $R(x,\\dot{x}_\\mathrm {gen}[G_f])$ for each $f \\in {{}^\\omega 2}$ .", "This shows that $A \\subseteq R_x$ , which completes the proof of the subclaim.", "However, $A$ is uncountable and $R_x$ was assumed to be countable.", "Contradiction.", "This completes the proof of Claim 1.", "Let $D \\subseteq \\mathbb {O}_{S,x}^{L[S,x,Z]}$ be the dense below $p$ set that witnesses Claim 1.", "Take any $q \\in D$ with $q \\le p$ .", "Define $t \\in {{}^\\omega 2}$ by $t(n) = i \\Leftrightarrow q \\Vdash \\dot{x}_\\mathrm {gen}(\\check{n}) = \\check{i}$ .", "It is clear that $t \\in \\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ .", "Again since ${P}(\\mathbb {O}_{S,x}^{L[S,x,Z]})^{\\mathrm {HOD}_{S,x}^{L[S,x,Z]}}$ is countable in $V$ , let $G$ be any $\\mathbb {O}_{S,x}^{L[S,x,Z]}$ -generic filter over $\\mathrm {HOD}^{L[S,x,Z]}_{S,x}$ with $q \\in G$ (and hence $p \\in G$ ).", "By the forcing theorem, $\\dot{x}_\\mathrm {gen}[G] = t$ .", "Using Fact REF as above, one has $\\mathrm {HOD}_{S,x}^{L[S,x,Z]}[G]\\models L[S,x,t] \\models \\varphi (S,x,t)$ .", "Hence $V \\models L[S,x,t] \\models \\varphi (S,x,t)$ .", "Thus $R(x,t)$ .", "So $t \\in R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]}$ .", "It has been shown that $R_x \\cap \\mathrm {HOD}_{S,x}^{L[S,x,Z]} \\ne \\emptyset $ for any $Z \\in \\mathcal {D}$ with $Z \\ge _T [y^*]_T$ .", "Lemma REF implies that $R$ has a uniformization.", "In $\\mathsf {ZF}$ , Silver's dichotomy [22] states that for every ${\\mathbf {\\Pi }_1^1}$ equivalence relation $E$ on $\\mathbb {R}$ , $\\mathbb {R}\\slash E$ injects into $\\omega $ or $\\mathbb {R}$ injects into $\\mathbb {R}\\slash E$ .", "Burgess 
[1] showed that if $E$ is a ${\\mathbf {\\Sigma }_1^1}$ equivalence relation, $\\mathbb {R}\\slash E$ injects into $\\omega $ , $\\mathbb {R}\\slash E$ is in bijection with $\\omega _1$ , or $\\mathbb {R}$ injects into $\\mathbb {R}\\slash E$ .", "Note that both results imply that either $\\mathbb {R}\\slash E$ is wellorderable or $\\mathbb {R}$ injects into $\\mathbb {R}\\slash E$ .", "Woodin's perfect set dichotomy is an extension of this property to all equivalence relations on $\\mathbb {R}$ assuming $\\mathsf {AD}$ , all sets of reals have $\\infty $ -Borel codes, and the ultrapower, $\\prod _{X \\in \\mathcal {D}} \\omega _1 \\slash \\mu _\\mathcal {D}$ , is wellfounded.", "The wellfoundedness of the ultrapower is certainly a consequence of $\\mathsf {DC}$ .", "The proof of Woodin's perfect set dichotomy presented below follows the outline of Hjorth's [9] generalization of the [8] $E_0$ -dichotomy in $L(\\mathbb {R}) \\models \\mathsf {AD}$ and Harrington's proof of Silver's dichotomy.", "Harrington's proof uses the Gandy-Harrington forcing of nonempty $\\Sigma _1^1$ definable subsets of $\\mathbb {R}$ .", "The Vopěnka forcing is simply the $\\mathrm {OD}$ version of the Gandy-Harrington forcing.", "The argument presented below appears in [6] where the uniformity of this proof is needed to make further conclusions about wellordered disjoint unions of quotients of “smooth” equivalence relations with countable classes.", "Theorem 8.5 (Woodin's perfect set dichotomy) Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "Let $E$ be an equivalence relation on $\\mathbb {R}$ .", "Then either (i) $\\mathbb {R}\\slash E$ is wellorderable, or", "(ii) $\\mathbb {R}$ injects into $\\mathbb {R}\\slash E$ .", "Let $E$ be an equivalence relation on $\\mathbb {R}$ .", "An $E$ -component is a nonempty set $A$ so that for all $x,y \\in A$ , $x \\ E \\ y$ .", "That is, an $E$ -component is simply a nonempty subset of an $E$ -class.", "By Corollary REF , every set of 
reals has an $\\infty $ -Borel code.", "Let $(S,\\varphi )$ be an $\\infty $ -Borel code for $E$ ; that is, $E = \\mathfrak {B}^2_{(S,\\varphi )}$ .", "Throughout this argument, $E$ will always be considered as the set defined by the $\\infty $ -Borel code $(S,\\varphi )$ .", "(Case I) For all $X \\in \\mathcal {D}$ , for all $a \\in \\mathbb {R}^{L[S,X]}$ , there is some $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component containing $a$ .", "In other words, for all degrees $X \\in \\mathcal {D}$ , in the local model $L[S,X]$ , every real belongs to an $\\mathrm {OD}_S$ $E$ -component.", "For each $\\alpha \\in \\prod _{X \\in \\mathcal {D}} \\omega _1 \\slash \\mu _\\mathcal {D}$ , let $f : \\mathcal {D}\\rightarrow \\omega _1$ be such that $f$ is a representative for $\\alpha $ .", "Define $A_\\alpha \\subseteq \\mathbb {R}$ as follows: $a \\in A_\\alpha $ if and only if for $\\mu _\\mathcal {D}$ -almost all $X$ , $a$ belongs to the $f(X)^\\text{th}$ $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component according to the canonical wellordering of $\\mathrm {HOD}_S^{L[S,X]}$ .", "(If there is no $f(X)^\\text{th}$ $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component, then one just lets this set be $\\emptyset $ .)", "One can check that $A_\\alpha $ is well defined, independent of the choice of $f$ representing $\\alpha $ .", "$A_\\alpha $ is an $E$ -component: To see this, let $a,b \\in A_\\alpha $ .", "Thus there is a set $K \\in \\mu _\\mathcal {D}$ so that for all $X \\in K$ , $[a\\oplus b]_T \\le X$ and $a$ and $b$ both belong to the $f(X)^\\text{th}$ $\\mathrm {OD}_S$ $E$ -component of $\\mathrm {HOD}_{S}^{L[S,X]}$ .", "Thus, $L[S,X] \\models a \\ E \\ b$ .", "Since $E$ is always defined by its $\\infty $ -Borel code, one has $L[S,X] \\models L[S,a,b] \\models \\varphi (S,a,b)$ .", "Thus $V \\models L[S,a,b] \\models \\varphi (S,a,b)$ .", "Therefore, in $V$ , $a \\ E \\ b$ .", "It has been shown that $A_\\alpha $ is an $E$ -component.", "For every $a \\in \\mathbb {R}$ , there is some $\\alpha 
\\in \\prod _{X \\in \\mathcal {D}}\\omega _1 \\slash \\mu _\\mathcal {D}$ such that $a \\in A_\\alpha $ .", "To see this, define $f: \\mathcal {D}\\rightarrow \\omega _1$ by letting $f(X)$ be the least $\\beta $ so that $a$ belongs to the $\\beta ^\\text{th}$ $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component of $L[S,X]$ according to the canonical wellordering of $\\mathrm {HOD}_{S}^{L[S,X]}$ .", "This $\\beta $ exists due to the Case I assumption.", "Let $\\alpha = [f]_{\\mu _\\mathcal {D}}$ .", "Then $a \\in A_\\alpha $ .", "Since $L(\\mathbb {R}) \\models \\mathsf {DC}$ , $\\prod _{X \\in \\mathcal {D}} \\omega _1 \\slash \\mu _\\mathcal {D}$ is a wellordered set.", "Thus $\\langle A_\\alpha : \\alpha \\in \\prod _{X \\in \\mathcal {D}} \\omega _1 \\slash \\mu _\\mathcal {D}\\rangle $ is a wellordered sequence of $E$ -components with the property that every real belongs to some $A_\\alpha $ .", "One can now wellorder $\\mathbb {R}\\slash E$ as follows: For two $E$ -classes $u,v \\in \\mathbb {R}\\slash E$ , $u \\prec v$ if and only if the least $\\alpha $ so that $A_\\alpha \\subseteq u$ is less than the least $\\alpha $ so that $A_\\alpha \\subseteq v$ .", "Thus $\\prec $ wellorders $\\mathbb {R}\\slash E$ .", "(Case II) There exists an $X \\in \\mathcal {D}$ and an $a \\in \\mathbb {R}^{L[S,X]}$ so that there is no $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component containing $a$ .", "In other words, there is a particular local model $L[S,X]$ so that within this model, there is a real $a$ which does not belong to any $\\mathrm {OD}_S$ $E$ -component.", "Fix such a degree $X \\in \\mathcal {D}$ .", "One will always work in this local model $L[S,X]$ .", "In $L[S,X]$ , define $u$ to be the collection of reals of $L[S,X]$ that do not belong to any $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component.", "Note that $u \\ne \\emptyset $ by the Case II assumption and $u$ is $\\mathrm {OD}_S^{L[S,X]}$ .", "Thus $u$ is a condition of $\\mathbb {O}_S^{L[S,X]}$ .", "Let $\\dot{x}_\\mathrm {left}$ and 
$\\dot{x}_\\mathrm {right}$ be $\\mathbb {O}_S^{L[S,X]} \\times \\mathbb {O}_S^{L[S,X]}$ -names so that $\\dot{x}_\\mathrm {left}$ and $\\dot{x}_\\mathrm {right}$ are the evaluations of the $\\mathbb {O}_S^{L[S,X]}$ -name $\\dot{x}_\\mathrm {gen}$ according to the left and right $\\mathbb {O}_S^{L[S,X]}$ -generic filters, respectively, coming from an $\\mathbb {O}_S^{L[S,X]} \\times \\mathbb {O}_S^{L[S,X]}$ -generic filter.", "Claim 1: $\\mathrm {HOD}_{S}^{L[S,X]} \\models (u,u) \\Vdash _{\\mathbb {O}_S^{L[S,X]}\\times \\mathbb {O}_S^{L[S,X]}} \\lnot (\\dot{x}_\\mathrm {left} \\ E \\ \\dot{x}_\\mathrm {right})$ .", "To see Claim 1: Suppose not.", "Then there is some $(v,w) \\le _{\\mathbb {O}^{L[S,X]}_S \\times \\mathbb {O}_S^{L[S,X]}} (u,u)$ so that $\\mathrm {HOD}_S^{L[S,X]} \\models (v,w) \\Vdash _{\\mathbb {O}_S^{L[S,X]}\\times \\mathbb {O}_S^{L[S,X]}} \\dot{x}_\\mathrm {left} \\ E \\ \\dot{x}_\\mathrm {right}$ .", "Subclaim 1.1: Suppose $G_0$ and $G_1$ are $\\mathbb {O}_S^{L[S,X]}$ -generic filters over $\\mathrm {HOD}_S^{L[S,X]}$ which belong to $V$ and contain the condition $v$ ; then $\\dot{x}_\\mathrm {gen}[G_0] \\ E \\ \\dot{x}_\\mathrm {gen}[G_1]$ .", "(Note that $G_0$ and $G_1$ are not necessarily mutually generic.)", "To see Subclaim 1.1: As before, since $\\mathrm {HOD}_S^{L[S,X]} \\models \\mathsf {AC}$ , ${P}(\\mathbb {O}_S^{L[S,X]})^{\\mathrm {HOD}^{L[S,X]}_S}$ is countable in the real world, which is a model of $\\mathsf {AD}$ .", "Therefore, one can find in the real world $V$ an $H \\subseteq \\mathbb {O}_S^{L[S,X]}$ which is $\\mathbb {O}_S^{L[S,X]}$ -generic over both $\\mathrm {HOD}_S^{L[S,X]}[G_0]$ and $\\mathrm {HOD}_S^{L[S,X]}[G_1]$ with $w \\in H$ .", "Applying the forcing theorem, one has that $\\mathrm {HOD}_{S}^{L[S,X]}[G_0][H] \\models \\dot{x}_\\mathrm {gen}[G_0] \\ E \\ \\dot{x}_\\mathrm {gen}[H]$ and $\\mathrm {HOD}_S^{L[S,X]}[G_1][H] \\models \\dot{x}_\\mathrm {gen}[G_1] \\ E \\ \\dot{x}_\\mathrm {gen}[H]$ .", "As $E$ is defined by its 
$\\infty $ -Borel code $(S,\\varphi )$ , one has $\\mathrm {HOD}_S^{L[S,X]}[G_0][H] \\models L[S,\\dot{x}_\\mathrm {gen}[G_0],\\dot{x}_\\mathrm {gen}[H]] \\models \\varphi (S,\\dot{x}_\\mathrm {gen}[G_0], \\dot{x}_\\mathrm {gen}[H])$ $\\mathrm {HOD}_S^{L[S,X]}[G_1][H] \\models L[S,\\dot{x}_\\mathrm {gen}[G_1],\\dot{x}_\\mathrm {gen}[H]] \\models \\varphi (S,\\dot{x}_\\mathrm {gen}[G_1], \\dot{x}_\\mathrm {gen}[H])$ In particular $V\\models L[S,\\dot{x}_\\mathrm {gen}[G_0],\\dot{x}_\\mathrm {gen}[H]] \\models \\varphi (S,\\dot{x}_\\mathrm {gen}[G_0], \\dot{x}_\\mathrm {gen}[H])$ $V\\models L[S,\\dot{x}_\\mathrm {gen}[G_1],\\dot{x}_\\mathrm {gen}[H]] \\models \\varphi (S,\\dot{x}_\\mathrm {gen}[G_1], \\dot{x}_\\mathrm {gen}[H])$ Thus in the real world, $\\dot{x}_\\mathrm {gen}[G_0] \\ E \\ \\dot{x}_\\mathrm {gen}[H]$ and $\\dot{x}_\\mathrm {gen}[G_1] \\ E \\ \\dot{x}_\\mathrm {gen}[H]$ .", "By the transitivity of the equivalence relation $E$ , $\\dot{x}_\\mathrm {gen}[G_0] \\ E \\ \\dot{x}_\\mathrm {gen}[G_1]$ .", "This proves Subclaim 1.1.", "Observe that there must be some $a,b \\in v$ so that $\\lnot (a \\ E \\ b)$ since otherwise $v$ would be an $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component containing $a$ .", "This is impossible since $v \\le _{\\mathbb {O}^{L[S,X]}_S} u$ implies that $v \\subseteq u$ and $u$ consists of those $a \\in \\mathbb {R}$ which do not belong to any $\\mathrm {OD}_S^{L[S,X]}$ $E$ -component.", "Thus $p = v \\times v \\setminus E = \\lbrace (a,b) \\in v \\times v : \\lnot (a \\ E \\ b)\\rbrace $ is a nonempty $\\mathrm {OD}_S^{L[S,X]}$ subset of $\\mathbb {R}^2$ .", "Thus $p \\in {}_2\\mathbb {O}_S^{L[S,X]}$ .", "Let $\\dot{x}_\\mathrm {gen}^2$ denote the generic element of $\\mathbb {R}^2$ added by ${}_2\\mathbb {O}_S^{L[S,X]}$ .", "Let $\\tau _0$ and $\\tau _1$ be the ${}_2\\mathbb {O}_S^{L[S,X]}$ -names for the first and second coordinate of $\\dot{x}_\\mathrm {gen}^2$ .", "Using the $\\infty $ -Borel code $(S,\\varphi )$ , the condition 
$q = \\lbrace (a,b) : \\lnot (a \\ E \\ b)\\rbrace = \\lbrace (a,b) : L[S,a,b] \\models \\lnot \\varphi (S,a,b)\\rbrace $ takes the form specified in Fact REF .", "Therefore, Fact REF implies that $q \\Vdash _{{}_2\\mathbb {O}_S^{L[S,X]}} L[S,\\tau _0,\\tau _1] \\models \\lnot \\varphi (S,\\tau _0,\\tau _1)$ .", "As before, in the real world, find some $G \\subseteq {}_2\\mathbb {O}_S^{L[S,X]}$ containing $p$ which is ${}_2\\mathbb {O}_S^{L[S,X]}$ -generic over $\\mathrm {HOD}_S^{L[S,X]}$ .", "Since $p \\le _{{}_2\\mathbb {O}_S^{L[S,X]}} q$ , one has that $q \\in G$ .", "Hence $\\mathrm {HOD}_{S}^{L[S,X]}[G] \\models L[S,\\tau _0[G],\\tau _1[G]] \\models \\lnot \\varphi (S,\\tau _0[G],\\tau _1[G]).$ In particular, $V \\models L[S,\\tau _0[G],\\tau _1[G]] \\models \\lnot \\varphi (S,\\tau _0[G],\\tau _1[G]).$ Since $(S,\\varphi )$ is the $\\infty $ -Borel code for $E$ , one has that $\\lnot (\\tau _0[G] \\ E \\ \\tau _1[G])$ .", "However, $\\tau _0[G]$ and $\\tau _1[G]$ are just the two coordinates of $\\dot{x}^2_\\mathrm {gen}[G]$ , which is the generic element of $\\mathbb {R}^2$ added by $G$ .", "Fact REF implies that $\\tau _0[G]$ and $\\tau _1[G]$ are individually $\\mathbb {O}_S^{L[S,X]}$ -generic filters over $\\mathrm {HOD}_S^{L[S,X]}$ .", "Then Subclaim 1.1 implies that $\\tau _0[G] \\ E \\ \\tau _1[G]$ .", "Contradiction.", "This shows Claim 1.", "As before, ${P}(\\mathbb {O}_S^{L[S,X]} \\times \\mathbb {O}_S^{L[S,X]})^{\\mathrm {HOD}_S^{L[S,X]}}$ is countable in the real world.", "Let $\\langle D_n : n \\in \\omega \\rangle $ enumerate all the dense open subsets of $\\mathbb {O}_S^{L[S,X]} \\times \\mathbb {O}_S^{L[S,X]}$ which belong to $\\mathrm {HOD}_S^{L[S,X]}$ .", "By intersecting, one may assume that $D_{n + 1} \\subseteq D_n$ for all $n \\in \\omega $ .", "Now in the usual manner, one will build a perfect tree of conditions whose distinct branches correspond to mutually generic filters.", "The details follow: Let $p_\\emptyset $ be any condition 
below $u$ .", "Suppose for some $n \\in \\omega $ , $p_\\sigma $ has been defined for all $\\sigma \\in {}^n 2$ .", "For each $\\sigma \\in {}^n 2$ , $(p_\\sigma ,p_\\sigma ) \\le _{\\mathbb {O}_S^{L[S,X]}\\times \\mathbb {O}_S^{L[S,X]}} (u,u)$ and Claim 1 implies that there must be some $q_0 \\le _{\\mathbb {O}_S^{L[S,X]}} p_\\sigma $ and $q_1 \\le _{\\mathbb {O}_S^{L[S,X]}} p_\\sigma $ which are incompatible conditions.", "Let $(\\rho _{\\sigma \\hat{\\ }0},\\rho _{\\sigma \\hat{\\ }1})$ denote the $\\mathrm {HOD}_S^{L[S,X]}$ -least such pair $(q_0,q_1)$ .", "For $\\sigma \\in {}^{n + 1} 2$ , let $\\rho _\\sigma ^{-1} = \\rho _\\sigma $ .", "Let $\\langle \\sigma _i : i < 2^{n + 1}\\rangle $ enumerate ${}^{n + 1}2$ .", "Now suppose that $\\rho _\\sigma ^j$ has been defined for all $\\sigma \\in {}^{n + 1}2$ and $-1 \\le j < 2^{n + 1}$ .", "By considering all possible pairs and extending by density, find a collection $\\lbrace \\rho _{\\sigma _i}^{j + 1} : 0 \\le i < 2^{n + 1}\\rbrace $ with the property that (1) $\\rho _\\sigma ^{j + 1} \\le _{\\mathbb {O}_s^{L[S,X]}} \\rho _\\sigma ^j$ (2) $(\\rho _{\\sigma _{j + 1}}^{j + 1},\\rho _{\\sigma _{\\ell }}^{j + 1}) \\in D_n$ and $(\\rho _{\\sigma _{\\ell }}^{j + 1},\\rho _{\\sigma _{j + 1}}^{j + 1}) \\in D_n$ whenever $\\ell \\ne j + 1$ .", "For $\\sigma \\in {}^{n + 1}2$ , let $p_\\sigma = \\rho _\\sigma ^{2^{n + 1} - 1}$ .", "This defines a sequence $\\langle p_\\sigma : \\sigma \\in {{}^{<\\omega }2}\\rangle $ .", "For each $f \\in {{}^\\omega 2}$ , let $G_f$ be the $\\le _{\\mathbb {O}_S^{L[S,X]}}$ -upward closure of $\\lbrace p_{f \\upharpoonright n} : n \\in \\omega \\rbrace $ .", "By construction, if $f \\ne g$ , then $G_f \\times G_g$ is an $\\mathbb {O}_S^{L[S,X]} \\times \\mathbb {O}_S^{L[S,X]}$ -generic filter over $\\mathrm {HOD}_S^{L[S,X]}$ containing the condition $(u,u)$ .", "Therefore, by Claim 1, for any $f,g \\in {{}^\\omega 2}$ with $f \\ne g$ , $\\lnot (\\dot{x}_\\mathrm {gen}[G_f] \\ E \\ 
\\dot{x}_{\\mathrm {gen}}[G_g])$ .", "Thus $A = \\lbrace \\dot{x}_\\mathrm {gen}[G_f] : f \\in {{}^\\omega 2}\\rbrace $ is a perfect set of $E$ -inequivalent reals.", "Thus $\\Phi : \\mathbb {R}\\rightarrow \\mathbb {R}\\slash E$ defined by $\\Phi (f) = [\\dot{x}_\\mathrm {gen}[G_f]]_E$ is an injection.", "Corollary 8.6 Assume $\\mathsf {ZF}+ \\mathsf {AD}+ \\mathsf {V = L(\\mathbb {R})}$ .", "If $X$ is a surjective image of $\\mathbb {R}$ (equivalently $X \\in L_\\Theta (\\mathbb {R})$ ), then $X$ is either wellorderable or $\\mathbb {R}$ injects into $X$ .", "In fact, as observed by [2], in $L(\\mathbb {R})$ , this dichotomy actually holds for all sets $X$ in $L(\\mathbb {R})$ not just $X \\in L_\\Theta (\\mathbb {R})$ ." ] ]
1906.04344
[ [ "Indecomposable $0$-Hecke modules for extended Schur functions" ], [ "Abstract The extended Schur functions form a basis of quasisymmetric functions that contains the Schur functions.", "We provide a representation-theoretic interpretation of this basis by constructing $0$-Hecke modules whose quasisymmetric characteristics are the extended Schur functions.", "We further prove these modules are indecomposable." ], [ "Introduction", "The Schur basis of the algebra of symmetric functions $\\mathrm {Sym}$ has applications in wide-ranging areas of mathematics, including representation theory of the symmetric group and the geometry of Grassmannians.", "Important generalizations of $\\mathrm {Sym}$ include the quasisymmetric functions $\\mathrm {QSym}$ and the noncommutative symmetric functions $\\mathrm {NSym}$ , which are dual to one another as Hopf algebras.", "The symmetric functions $\\mathrm {Sym}$ can be realised as a subalgebra of $\\mathrm {QSym}$ and as a quotient algebra of $\\mathrm {NSym}$ .", "There has been significant recent interest in constructing bases of $\\mathrm {QSym}$ and $\\mathrm {NSym}$ that generalize or share properties with the Schur functions.", "Central and well-studied examples include the quasisymmetric Schur [10] and dual immaculate [2] bases of $\\mathrm {QSym}$ , and their dual bases in $\\mathrm {NSym}$ : the noncommutative Schur [4] and immaculate bases [2], respectively.", "The extended Schur basis of $\\mathrm {QSym}$ was defined in [1] as the stable limits of polynomials arising from application of Kohnert's algorithm [13] to certain cell diagrams.", "The nomenclature comes from the fact the extended Schur basis contains the Schur basis of $\\mathrm {Sym}$ , thus extends the Schur basis to a basis of $\\mathrm {QSym}$ .", "It was proved in [1] that extended Schur functions expand positively in the fundamental basis [8] of $\\mathrm {QSym}$ ; dual immaculate and quasisymmetric Schur functions also expand positively in this 
basis.", "The dual basis to the extended Schur functions is known as the shin basis of $\\mathrm {NSym}$ , introduced in [6].", "The noncommutative Schur, immaculate and shin bases are described in [5] as the three canonical Schur-like bases of $\\mathrm {NSym}$ .", "The interpretation of Schur functions as characters of irreducible representations of the symmetric group raises a natural question about potential representation-theoretic interpretations of generalisations of the Schur basis.", "Recently, in [3], modules of the 0-Hecke algebra were constructed whose quasisymmetric characteristics [7] are exactly the dual immaculate quasisymmetric functions.", "Then in [16], 0-Hecke modules were constructed whose quasisymmetric characteristics are exactly the quasisymmetric Schur functions.", "In addition, 0-Hecke modules were constructed in [16] for the skew quasisymmetric Schur functions of [4], and 0-Hecke actions on a family of tableaux related to the generalized Demazure atoms of [11] were defined in [17].", "Our motivation is to complete the picture for the canonical Schur-like bases by providing a representation-theoretic interpretation of the extended Schur functions.", "In this paper, we accomplish this by constructing a 0-Hecke action on standard extended tableaux, and proving that the quasisymmetric characteristics of the corresponding modules are exactly the extended Schur functions.", "We additionally prove these modules are indecomposable.", "In comparison, the modules for the dual immaculate quasisymmetric functions are also indecomposable [3], while the modules for the quasisymmetric Schur functions are not in general: a direct sum decomposition is given in [16] whose components are proved to be indecomposable in [14]." 
], [ "Quasisymmetric functions and noncommutative symmetric functions", "A composition is a finite sequence $\\alpha = (\\alpha _1, \\ldots , \\alpha _k)$ of positive integers.", "We call $\\alpha _1,\\ldots , \\alpha _k$ the parts of $\\alpha $ , and when $\\alpha $ has $k$ parts we say the length $\\ell (\\alpha )$ of $\\alpha $ is $k$ .", "When the parts of $\\alpha $ sum to $n$ , we say that $\\alpha $ is a composition of $n$ , written $\\alpha \\vDash n$ .", "Compositions of $n$ are in bijection with subsets of $[n-1] = \\lbrace 1,\\ldots , n-1\\rbrace $ ; given a composition $\\alpha = (\\alpha _1, \\ldots , \\alpha _k)\\vDash n$ , define $\\mathbb {S}(\\alpha )$ to be the subset $\\lbrace \\alpha _1, \\alpha _1+\\alpha _2, \\ldots , \\alpha _1+\\cdots + \\alpha _{k-1}\\rbrace $ of $[n-1]$ .", "We say a composition $\\beta $ refines a composition $\\alpha $ if $\\alpha $ can be obtained by summing consecutive entries of $\\beta $ .", "Example 2.1 Let $\\alpha = (2,1,3) \\vDash 6$ .", "Then $\\mathbb {S}(\\alpha ) = \\lbrace 2,3\\rbrace \\subset [5]$ .", "The composition $(2,1,3)$ refines the composition $(2,4)$ but does not refine $(4,2)$ .", "Let $\\mathbb {C}[[x_1, x_2, \\dots ]]$ denote the Hopf algebra of formal power series of bounded degree in infinitely many commuting variables.", "The Hopf algebra $\\mathrm {QSym}$ of quasisymmetric functions [8] is the subalgebra of $\\mathbb {C}[[x_1, x_2, \\dots ]]$ consisting of those formal power series $f$ such that for every composition $\\alpha = (\\alpha _1, \\ldots , \\alpha _k)$ , $[x_{i_1}^{\\alpha _1} \\cdots x_{i_{k}}^{\\alpha _{k}} \\mid f] = [x_{j_1}^{\\alpha _1} \\cdots x_{j_{k}}^{\\alpha _{k}} \\mid f]$ for any two sequences $1 \\le i_1< \\cdots < i_{k}$ and $1 \\le j_1< \\cdots < j_{k}$ , where $[x_{i_1}^{\\alpha _1} \\cdots x_{i_{k}}^{\\alpha _{k}} \\mid f]$ is the coefficient of the monomial $x_{i_1}^{\\alpha _1} \\cdots x_{i_{k}}^{\\alpha _k}$ in the monomial expansion of $f$ .", "The monomial and fundamental quasisymmetric
functions $M_\\alpha $ and $F_\\alpha $ are additive bases of $\\mathrm {QSym}$ introduced in [8].", "They are indexed by compositions, and defined by $M_\\alpha = \\sum _{i_1< i_2 < \\cdots < i_k} x_{i_1}^{\\alpha _1} \\cdots x_{i_{k}}^{\\alpha _{k}} \\qquad \\mbox{ and } \\qquad F_\\alpha = \\sum _{\\beta \\text{ refines }\\alpha } M_\\beta .$ Example 2.2 Let $\\alpha = (1,3,1)$ .", "We have $M_{(1,3,1)} = \\sum _{i<j<k}x_ix_j^3x_k$ and $F_{(1,3,1)} = M_{(1,3,1)} + M_{(1,2,1,1)} + M_{(1,1,2,1)} + M_{(1,1,1,1,1)}.$ The Hopf algebra $\\mathrm {NSym}$ of noncommutative symmetric functions [9] is an analogue of the symmetric functions in noncommuting variables.", "It is generated by elements $H_1, H_2, \\ldots $ with no relations, and has additive basis $\\lbrace H_\\alpha \\rbrace $ indexed by compositions $\\alpha =(\\alpha _1, \\ldots , \\alpha _k)$ , where the complete homogeneous function $H_\\alpha $ is defined to be the product $H_{\\alpha _1}\\cdots H_{\\alpha _k}$ .", "As Hopf algebras, $\\mathrm {NSym}$ is dual to $\\mathrm {QSym}$ via the pairing $\\langle H_\\alpha , M_\\beta \\rangle = \\delta _{\\alpha , \\beta }$ .", "The dual basis in $\\mathrm {NSym}$ to the fundamental basis $\\lbrace F_\\alpha \\rbrace $ of $\\mathrm {QSym}$ is the ribbon Schur functions $\\lbrace {\\bf r}_\\alpha \\rbrace $ ." 
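The composition–subset dictionary and the refinement sum defining $F_\\alpha $ above are easy to check computationally. The following Python sketch (function names are our own, not from the paper) enumerates compositions via subsets of $[n-1]$ and recovers Examples 2.1 and 2.2:

```python
from itertools import combinations

def S(alpha):
    """S(alpha): the set of partial sums a_1, a_1+a_2, ..., excluding n."""
    sums, total = [], 0
    for part in alpha[:-1]:
        total += part
        sums.append(total)
    return set(sums)

def compositions(n):
    """All compositions of n, via the bijection with subsets of [n-1]."""
    comps = []
    for k in range(n):
        for cut in combinations(range(1, n), k):
            cuts = (0,) + cut + (n,)
            comps.append(tuple(cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1)))
    return comps

def refines(beta, alpha):
    """beta refines alpha iff they have the same sum and S(alpha) is a subset of S(beta)."""
    return sum(beta) == sum(alpha) and S(alpha) <= S(beta)

# Example 2.1: S((2,1,3)) = {2,3}; (2,1,3) refines (2,4) but not (4,2).
print(S((2, 1, 3)))
print(refines((2, 1, 3), (2, 4)), refines((2, 1, 3), (4, 2)))

# Example 2.2: the refinements of (1,3,1) indexing the M-expansion of F_{(1,3,1)}
print(sorted(b for b in compositions(5) if refines(b, (1, 3, 1))))
```

Since $\\beta $ refines $\\alpha $ exactly when $\\mathbb {S}(\\alpha ) \\subseteq \\mathbb {S}(\\beta )$, the refinement test reduces to a subset check on descent sets.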
], [ "Standard extended tableaux and extended Schur functions", "The diagram $D(\\alpha )$ of a composition $\\alpha $ is the array of boxes in the plane with $\\alpha _i$ boxes in row $i$ , left-justified.", "We depict composition diagrams in French notation, i.e., the bottom row is row 1.", "Example 2.3 The diagram $D(\\alpha )$ of $\\alpha =(2,1,3)$ is shown below.", "$\\begin{array}{ccc} \\square & \\square & \\square \\\\ \\square & & \\\\ \\square & \\square & \\end{array}$ A standard extended tableau [1] of shape $\\alpha $ is a bijective assignment of the integers $\\lbrace 1,2, \\ldots , n\\rbrace $ to the boxes of $D(\\alpha )$ , such that the entries in each row of $D(\\alpha )$ increase from left to right and the entries in each column of $D(\\alpha )$ increase from bottom to top.", "If $\\alpha $ is a partition, i.e., $\\alpha _1\\ge \\alpha _2 \\ge \\cdots \\ge \\alpha _{\\ell (\\alpha )}$ , then the standard extended tableaux of shape $\\alpha $ are exactly the standard Young tableaux of shape $\\alpha $ .", "We denote the collection of all standard extended tableaux of shape $\\alpha $ by $\\mathrm {SET}(\\alpha )$ .", "Remark 2.4 The standard extended tableaux defined above are a vertical reflection of the standard extended tableaux defined in [1], which are fillings of right-justified composition diagrams in which entries decrease from left to right along rows and decrease down columns.", "Example 2.5 The standard extended tableaux of shape $(2,1,3)$ are shown below.", "$T_1 = \\begin{array}{ccc} 4 & 5 & 6 \\\\ 3 & & \\\\ 1 & 2 & \\end{array} \\qquad T_2 = \\begin{array}{ccc} 4 & 5 & 6 \\\\ 2 & & \\\\ 1 & 3 & \\end{array} \\qquad T_3 = \\begin{array}{ccc} 3 & 5 & 6 \\\\ 2 & & \\\\ 1 & 4 & \\end{array}$ We say an entry $i$ of a standard extended tableau $T$ is a descent of $T$ if $i$ is weakly to the right of $i+1$ in $T$ .", "Define the descent composition $\\mathrm {Des}(T)$ of $T$ to be the composition $\\alpha $ such that $\\mathbb {S}(\\alpha )$ is the set of all descents of $T$ .", "Example 2.6 Consider the three standard extended tableaux from Example REF .", "The descents of $T_1$ are 2 and 3, the descents of $T_2$ are 1 and 3, and the descents of $T_3$ are 1, 2 and 4.", "Hence $\\mathrm {Des}(T_1) = (2,1,3)$ , $\\mathrm {Des}(T_2) = (1,2,3)$ and $\\mathrm {Des}(T_3) = (1,1,2,2)$ .", "Let $\\alpha $ be a composition.", "In [1], the extended Schur functions $\\mathcal {E}_\\alpha $ were defined as the stable limits of polynomials obtained by applying Kohnert's algorithm [13] to right-justified cell diagrams.", "The extended Schur functions are quasisymmetric and in fact expand positively in the fundamental basis of $\\mathrm {QSym}$ [1].", "We take the formula for this expansion as definitional for the extended Schur functions.", "Theorem 2.7 [1] Let $\\alpha $ be a composition.", "Then $\\mathcal {E}_\\alpha = \\sum _{T\\in \\mathrm {SET}(\\alpha )}F_{\\mathrm {Des}(T)}.$ Example 2.8 By Examples REF and REF , we have $\\mathcal {E}_{(2,1,3)} = F_{(2,1,3)} + F_{(1,2,3)} + F_{(1,1,2,2)}.$ Theorem 2.9 [1] The extended Schur functions $\\lbrace \\mathcal {E}_\\alpha \\rbrace $ form a basis of $\\mathrm {QSym}$ .", "Every Schur function is in fact an extended Schur function.", "We may take the following result as definitional for the celebrated Schur functions: Proposition 2.10 [1] If $\\alpha $ is a partition, then the extended Schur function $\\mathcal {E}_\\alpha $ is equal to the Schur function $s_\\alpha $ .", "The extended Schur functions are thus a basis of $\\mathrm {QSym}$ that contains the Schur basis of symmetric functions.", "We note that other
important and well-studied bases of $\\mathrm {QSym}$ such as the quasisymmetric Schur functions, fundamental quasisymmetric functions and dual immaculate quasisymmetric functions do not contain the Schur functions.", "The extended Schur functions are dual to the shin basis $\\mathcal {E}^*_\\alpha $ of noncommutative symmetric functions introduced and studied in [6].", "The shin functions have the property that the image of $\\mathcal {E}^*_\\alpha $ under the natural projection from $\\mathrm {NSym}$ to $\\mathrm {Sym}$ is the Schur function $s_\\alpha $ if $\\alpha $ is a partition, and 0 otherwise.", "Complete homogeneous functions expand positively in the shin basis [6], which then implies via duality that extended Schur functions expand positively into the monomial basis of quasisymmetric functions.", "Since extended Schur functions expand positively into the fundamental basis of quasisymmetric functions (Theorem REF ), duality implies the following result for shin functions.", "Proposition 2.11 The ribbon Schur functions expand positively in the shin basis of $\\mathrm {NSym}$ via the formula ${\\bf r}_\\beta = \\sum _\\alpha K_{\\alpha , \\beta } \\mathcal {E}^*_\\alpha $ where $K_{\\alpha ,\\beta }$ is the number of $T\\in \\mathrm {SET}(\\alpha )$ such that $\\mathrm {Des}(T)=\\beta $ .", "By Theorem REF and the definition of $K_{\\alpha ,\\beta }$ , we have $\\mathcal {E}_\\alpha = \\sum _\\beta K_{\\alpha ,\\beta }F_\\beta .$ Hence, by the fact that the ribbon Schur functions are dual to the fundamental quasisymmetric functions, we have $\\langle \\mathcal {E}_\\alpha , {\\bf r}_\\beta \\rangle = K_{\\alpha ,\\beta }.$ Therefore, since the shin functions are dual to the extended Schur functions, we have ${\\bf r}_\\beta = \\sum _\\alpha K_{\\alpha , \\beta } \\mathcal {E}^*_\\alpha .$" ], [ "0-Hecke algebras and quasisymmetric characteristic", "The 0-Hecke algebra $H_n(0)$ is defined to be the algebra over $\\mathbb {C}$ with generators $T_1, \\ldots , T_{n-1}$ subject to
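Because $K_{\\alpha ,\\beta }$ is a purely combinatorial count, it can be computed by brute force for small shapes. The sketch below (helper names are ours, not from the paper) enumerates $\\mathrm {SET}(\\alpha )$ and its descent compositions, recovering the expansion of Example 2.8:

```python
from itertools import permutations

def set_tableaux(alpha):
    """Brute-force enumeration of SET(alpha): rows (listed bottom-to-top)
    increase left to right, and each column increases bottom to top."""
    n = sum(alpha)
    tableaux = []
    for perm in permutations(range(1, n + 1)):
        rows, k = [], 0
        for a in alpha:
            rows.append(perm[k:k + a])
            k += a
        row_ok = all(r[j] < r[j + 1] for r in rows for j in range(len(r) - 1))
        cols = [[r[j] for r in rows if len(r) > j] for j in range(max(alpha))]
        col_ok = all(c[j] < c[j + 1] for c in cols for j in range(len(c) - 1))
        if row_ok and col_ok:
            tableaux.append(tuple(rows))
    return tableaux

def descent_composition(rows):
    """Des(T): i is a descent when i is weakly right of i+1; return the
    composition whose partial sums give the descent set."""
    n = sum(len(r) for r in rows)
    col = {v: j for r in rows for j, v in enumerate(r)}
    descents = sorted(i for i in range(1, n) if col[i] >= col[i + 1])
    cuts = [0] + descents + [n]
    return tuple(cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1))

# Example 2.8: E_{(2,1,3)} = F_{(2,1,3)} + F_{(1,2,3)} + F_{(1,1,2,2)}
for T in set_tableaux((2, 1, 3)):
    print(T, descent_composition(T))
```

For a partition shape, the same enumeration returns exactly the standard Young tableaux, consistent with Proposition 2.10.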
the relations $T_i^2 = T_i$ for all $1\\le i \\le n-1$ , $T_iT_j = T_jT_i$ for all $i, j$ with $|i-j|\\ge 2$ , and $T_iT_{i+1}T_i = T_{i+1}T_iT_{i+1}$ for all $1\\le i \\le n-2$ . For any permutation $\\sigma \\in S_n$ , one can define an element $T_\\sigma \\in H_n(0)$ by $T_\\sigma = T_{i_1}T_{i_2} \\cdots T_{i_r}$ where $s_{i_1} s_{i_2} \\cdots s_{i_r}$ is any reduced word for $\\sigma $ .", "Then $\\lbrace T_\\sigma : \\sigma \\in S_n\\rbrace $ is an additive basis for $H_n(0)$ .", "The Grothendieck group $\\mathcal {G}_0(H_n(0))$ is the linear span of the isomorphism classes of the finite-dimensional representations of $H_n(0)$ , subject to the relation $[Y]=[X]+[Z]$ whenever one has a short exact sequence $0\\rightarrow X\\rightarrow Y\\rightarrow Z\\rightarrow 0$ of $H_n(0)$ -representations $X,Y,Z$ .", "There are $2^{n-1}$ irreducible representations of $H_n(0)$ ; these may be indexed by the $2^{n-1}$ compositions of $n$ .", "Let $\\mathcal {F}_\\alpha $ denote the irreducible representation corresponding to the composition $\\alpha $ .", "By [15], $\\mathcal {F}_\\alpha $ is one-dimensional, hence equal to the span of some nonzero vector $v_\\alpha $ .", "The structure of $\\mathcal {F}_\\alpha $ as an $H_n(0)$ -representation is given by the following action of the generators $T_i$ of $H_n(0)$ : $T_i(v_\\alpha ) = {\\left\\lbrace \\begin{array}{ll} v_\\alpha & \\mbox{ if } i\\notin \\mathbb {S}(\\alpha ) \\\\0 & \\mbox{ if } i \\in \\mathbb {S}(\\alpha ).\\end{array}\\right.}$ Define $\\mathcal {G} = \\bigoplus _{n\\ge 0} \\mathcal {G}_0(H_n(0)).$ The set $\\lbrace \\mathcal {F}_\\alpha \\rbrace $ as $\\alpha $ ranges over all compositions is an additive basis of $\\mathcal {G}$ .", "Moreover, $\\mathcal {G}$ has a ring structure via the induction product.", "There is a ring isomorphism $ch:\\mathcal {G}\\rightarrow \\mathrm {QSym}$ [7] defined by setting
$ch([\\mathcal {F}_\\alpha ]) = F_\\alpha $ .", "For any $H_n(0)$ -module $X$ , the image $ch([X])$ is called the quasisymmetric characteristic of $X$ ." ], [ "Modules for extended Schur functions", "The immaculate basis, noncommutative Schur basis and the shin basis have been described as the canonical Schur-like bases of $\\mathrm {NSym}$ [5].", "Interpretations of the dual bases of the first two as quasisymmetric characteristics of certain $H_n(0)$ -modules are given in [3] and [16] respectively.", "We complete this picture by constructing $H_n(0)$ -modules whose quasisymmetric characteristics are the extended Schur functions.", "Specifically, in this section we construct a $H_n(0)$ -module $X_\\alpha $ for each composition $\\alpha $ of $n$ , and prove that the quasisymmetric characteristic $ch([X_\\alpha ])$ is equal to the extended Schur function $\\mathcal {E}_\\alpha $ .", "Additionally, we prove that these modules $X_\\alpha $ are indecomposable for all compositions $\\alpha $ ." 
], [ "0-Hecke actions and modules", "Given a composition $\\alpha $ of $n$ , define a standard row-increasing tableau of shape $\\alpha $ to be a bijective assignment of the integers $1, \\ldots , n$ to the boxes of $D(\\alpha )$ such that entries increase from left to right along rows.", "We note that no condition is imposed on columns.", "Let $\\mathrm {SRIT}(\\alpha )$ denote the set of standard row-increasing tableaux of shape $\\alpha $ .", "For $T\\in \\mathrm {SRIT}(\\alpha )$ and $1\\le i \\le n-1$ , define $\\pi _i(T) = {\\left\\lbrace \\begin{array}{ll} T & \\mbox{ if } i \\mbox{ is weakly above } i+1 \\mbox{ in } T \\\\s_i(T) & \\mbox{ otherwise }\\end{array}\\right.", "}$ where $s_i(T)$ denotes the filling of $D(\\alpha )$ obtained from $T$ by swapping the entries $i$ and $i+1$ .", "Example 3.1 Let $\\alpha = (4,2,3)$ and let $T = \\begin{array}{cccc} 4 & 6 & 7 \\\\ 1 & 5 \\\\ 2 & 3 & 8 & 9 \\end{array}\\in \\mathrm {SRIT}(\\alpha ).$ Then $\\pi _4(T) = \\pi _8(T)=T$ , while $\\pi _5(T) = s_5(T) = \\begin{array}{cccc} 4 & 5 & 7 \\\\ 1 & 6 \\\\ 2 & 3 & 8 & 9 \\end{array}\\in \\mathrm {SRIT}(\\alpha ).$ Let $V_\\alpha $ denote the $\\mathbb {C}$ -vector space spanned by $\\mathrm {SRIT}(\\alpha )$ .", "Proposition 3.2 The operators $\\pi _i$ define a $H_n(0)$ -action on $V_\\alpha $ .", "Specifically, we have $\\pi _i(T)\\in V_\\alpha $ for all $T\\in V_\\alpha $ and all $1\\le i \\le n-1$ , and the $\\pi _i$ satisfy the relations for the generators $T_i$ of the 0-Hecke algebra.", "Let $T\\in \\mathrm {SRIT}(\\alpha )$ .", "First we note that $\\pi _i(T)\\in V_\\alpha $ , since $\\pi _i$ can exchange the entries $i$ and $i+1$ only if they are in different rows, in which case exchanging $i$ and $i+1$ does not affect the relative order of the entries in either of the two rows containing $i$ or $i+1$ .",
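The operator $\\pi _i$ and the 0-Hecke relations asserted in Proposition 3.2 can be spot-checked numerically. This is a minimal Python sketch of our own (tableaux stored as tuples of rows, bottom row first, matching the diagrams; `row_of`, `s`, and `pi` are hypothetical helper names):

```python
def row_of(T, value):
    """Row index (1 = bottom row) of an entry in T, a tuple of row tuples."""
    for r, row in enumerate(T, start=1):
        if value in row:
            return r
    raise ValueError(value)

def s(i, T):
    """s_i(T): the filling obtained by swapping the entries i and i+1."""
    return tuple(tuple(i + 1 if x == i else (i if x == i + 1 else x) for x in row)
                 for row in T)

def pi(i, T):
    """pi_i: fix T if i is weakly above i+1 in T, otherwise swap i and i+1."""
    return T if row_of(T, i) >= row_of(T, i + 1) else s(i, T)

# Example 3.1: alpha = (4,2,3); rows are listed bottom-to-top.
T = ((2, 3, 8, 9), (1, 5), (4, 6, 7))
assert pi(4, T) == T and pi(8, T) == T
assert pi(5, T) == ((2, 3, 8, 9), (1, 6), (4, 5, 7))

# Spot-check the 0-Hecke relations on this tableau (n = 9):
for i in range(1, 9):
    assert pi(i, pi(i, T)) == pi(i, T)                                   # pi_i^2 = pi_i
for i in range(1, 8):
    assert pi(i, pi(i + 1, pi(i, T))) == pi(i + 1, pi(i, pi(i + 1, T)))  # braid relation
for i in range(1, 9):
    for j in range(i + 2, 9):
        assert pi(i, pi(j, T)) == pi(j, pi(i, T))                        # commuting relation
```

Checking the relations on a single tableau is of course no substitute for the case analysis in the proof, but it catches transcription errors cheaply.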
"If $i$ is weakly above $i+1$ in $T$ , then $\\pi _i(T)=T$ so $\\pi _i^2(T)=T = \\pi _i(T)$ .", "Otherwise, $\\pi _i(T)=s_i(T)$ , and then $\\pi _i^2(T)=\\pi _i(s_i(T)) = s_i(T)=\\pi _i(T)$ .", "Hence $\\pi _i^2=\\pi _i$ .", "If $|i-j|\\ge 2$ , then $\\lbrace i,i+1\\rbrace \\cap \\lbrace j,j+1\\rbrace = \\emptyset $ , so $\\pi _i$ and $\\pi _j$ affect disjoint pairs of boxes and thus it is clear that $\\pi _i\\pi _j(T) = \\pi _j\\pi _i(T)$ .", "Finally, we show $\\pi _i\\pi _{i+1}\\pi _i = \\pi _{i+1}\\pi _i\\pi _{i+1}$ .", "We check the following cases: (1) $i$ is weakly above $i+1$ and $i+1$ is weakly above $i+2$ ; (2) $i$ is strictly below $i+1$ and $i+1$ is strictly below $i+2$ ; (3) $i$ is weakly above $i+1$ and $i+1$ is strictly below $i+2$ , with subcases (3a) $i$ weakly above $i+2$ and (3b) $i$ strictly below $i+2$ ; (4) $i$ is strictly below $i+1$ and $i+1$ is weakly above $i+2$ , with subcases (4a) $i$ weakly above $i+2$ and (4b) $i$ strictly below $i+2$ . (1): Here we have $\\pi _i(T)=\\pi _{i+1}(T)=T$ , hence $\\pi _i\\pi _{i+1}\\pi _i(T) = \\pi _{i+1}\\pi _i\\pi _{i+1}(T)$ .", "(2): Here it is straightforward to check $\\pi _i\\pi _{i+1}\\pi _i(T) = s_is_{i+1}s_i(T) = s_{i+1}s_is_{i+1}(T) = \\pi _{i+1}\\pi _i\\pi _{i+1}(T)$ .", "(3): In this case, we have $\\pi _i(T)=T$ and $\\pi _{i+1}(T) = s_{i+1}(T)$ .", "Hence $\\pi _i\\pi _{i+1}\\pi _i(T) = \\pi _is_{i+1}(T)$ and $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= \\pi _{i+1}\\pi _is_{i+1}(T)$ .", "Then we have: (3a): Here, $\\pi _i(s_{i+1}(T)) = s_{i+1}(T)$ .", "So $\\pi _i\\pi _{i+1}\\pi _i(T) = s_{i+1}(T)$ and $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= \\pi _{i+1}s_{i+1}(T) = s_{i+1}(T)$ (since $s_{i+1}(T)$ has $i+1$ weakly above $i+2$ ).", "(3b): Here, $\\pi _i(s_{i+1}(T)) = s_is_{i+1}(T)$ .", "So $\\pi _i\\pi _{i+1}\\pi _i(T) = s_is_{i+1}(T)$ and $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= \\pi _{i+1}s_is_{i+1}(T)$ .", "But $\\pi _{i+1}s_is_{i+1}(T)= s_is_{i+1}(T)$ ; this is because $s_is_{i+1}$ sends $i+1$ to the original position of $i$ in $T$ and $i+2$ to the original position of $i+1$ in $T$ ,
meaning that $i+1$ is weakly above $i+2$ in $s_is_{i+1}(T)$ .", "(4): In this case, we have $\\pi _{i+1}(T) = T$ and $\\pi _i(T) = s_i(T)$ .", "Hence $\\pi _i\\pi _{i+1}\\pi _i(T) = \\pi _i\\pi _{i+1}s_i(T)$ and $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= \\pi _{i+1}s_i(T)$ .", "Then we have: (4a): Here, $\\pi _{i+1}s_i(T) = s_i(T)$ .", "So $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= s_i(T)$ and $\\pi _i\\pi _{i+1}\\pi _i(T) = \\pi _i s_i(T) = s_i(T)$ (since $s_i(T)$ has $i$ weakly above $i+1$ ).", "(4b): Here $\\pi _{i+1}s_i(T) = s_{i+1}s_i(T)$ .", "So $\\pi _{i+1}\\pi _i\\pi _{i+1}(T)= s_{i+1}s_i(T)$ and $\\pi _i\\pi _{i+1}\\pi _i(T) = \\pi _i s_{i+1}s_i(T)$ .", "But $\\pi _i s_{i+1}s_i(T)=s_{i+1}s_i(T)$ ; this is because $s_{i+1}s_i$ sends $i$ to the original position of $i+1$ in $T$ and $i+1$ to the original position of $i+2$ in $T$ , meaning that $i$ is weakly above $i+1$ in $s_{i+1}s_i(T)$ .", "Remark 3.3 This action is equivalent to the $H_n(0)$ -action defined on words of content $\\alpha $ in [3].", "We prefer to work directly with tableaux of shape $D(\\alpha )$ , and include the proof of Proposition REF above for completeness.", "Let $\\mathrm {NSET}(\\alpha )$ denote $\\mathrm {SRIT}(\\alpha )\\setminus \\mathrm {SET}(\\alpha )$ , i.e., those elements of $\\mathrm {SRIT}(\\alpha )$ in which entries do not increase up some column.", "Let $Y_\\alpha $ denote the vector subspace of $V_\\alpha $ spanned by $\\mathrm {NSET}(\\alpha )$ .", "Lemma 3.4 The vector space $Y_\\alpha $ is an $H_n(0)$ -submodule of $V_\\alpha $ .", "Suppose $T\\in \\mathrm {NSET}(\\alpha )$ .", "Then $T$ has a pair of entries $j<k$ such that $j$ is above $k$ in the same column.", "If $k>j+1$ , then for any $1\\le i \\le n-1$ , $\\pi _i$ can change only one of $j,k$ , and by at most 1, so $\\pi _i(T)\\in Y_\\alpha $ .", "If $k=j+1$ , then $j$ is above $j+1=k$ , so $\\pi _j(T) = T \\in Y_\\alpha $ .", "It remains to observe that $\\pi _i$ for $i\\ne j$ either has no effect on the boxes with entries $j$
or $j+1$ , or it replaces $j$ with $j-1$ or $j+1$ with $j+2$ , which does not change the relative order of the entries of these two boxes.", "Hence $\\pi _i(T)\\in \\mathrm {NSET}(\\alpha )$ for all $1\\le i \\le n-1$ .", "Define $X_\\alpha $ to be the quotient module $V_\\alpha /Y_\\alpha $ .", "Then $\\mathrm {SET}(\\alpha )$ is a basis of $X_\\alpha $ .", "Theorem 3.5 For any $1\\le i \\le n-1$ and any composition $\\alpha $ of $n$ , the action of $\\pi _i$ on $X_\\alpha $ is given by $\\pi _i(T) = {\\left\\lbrace \\begin{array}{ll} T & \\mbox{ if } i \\mbox{ is strictly left of } i+1 \\mbox{ in } T \\\\0 & \\mbox{ if } i \\mbox{ and } i+1 \\mbox{ are in the same column of } T \\\\s_i(T) & \\mbox{ if } i \\mbox{ is strictly right of } i+1 \\mbox{ in } T\\end{array}\\right.", "}$ for any $T\\in \\mathrm {SET}(\\alpha )$ .", "Let $T\\in \\mathrm {SET}(\\alpha )$ .", "First suppose $i$ is strictly left of $i+1$ in $T$ .", "Since entries increase in both rows and columns of $T$ , $i$ cannot be strictly below $i+1$ in $T$ ; if it were, the entry in the row of $i+1$ and column of $i$ would have to be strictly larger than $i$ and strictly smaller than $i+1$ , which is impossible.", "Hence $\\pi _i(T)=T\\in \\mathrm {SET}(\\alpha )$ .", "Now suppose $i$ and $i+1$ are in the same column of $T$ .", "Since $T\\in \\mathrm {SET}(\\alpha )$ , $i$ is strictly below $i+1$ .", "Then $\\pi _i(T) = s_i(T)$ , in which $i$ is strictly above $i+1$ .", "Hence $\\pi _i(T)\\in \\mathrm {NSET}(\\alpha )$ , i.e.", "$\\pi _i(T) = 0$ in $X_\\alpha = V_\\alpha /Y_\\alpha $ .", "Finally suppose $i$ is strictly right of $i+1$ in $T$ .", "Since entries increase along rows and up columns of $T$ , $i$ cannot also be weakly above $i+1$ in $T$ .", "Hence $\\pi _i(T)=s_i(T)$ .", "Since $i$ and $i+1$ are in different rows and different columns, $\\pi _i(T)=s_i(T)\\in \\mathrm {SET}(\\alpha )$ .", "Remark 3.6 It is also possible, though tedious, to show directly that the operators $\\pi _i$ on
$\\mathrm {SET}(\\alpha )$ defined in Theorem REF satisfy the 0-Hecke relations.", "Example 3.7 Let $\\alpha = (4,2,3)$ and let $T = \\begin{array}{cccc} 4 & 8 & 9 \\\\ 3 & 7 \\\\ 1 & 2 & 5 & 6 \\end{array}\\in \\mathrm {SET}(\\alpha ).$ Then $\\pi _5(T)=T$ , $\\pi _7(T)=0$ , and $\\pi _6(T) = s_6(T) = \\begin{array}{cccc} 4 & 8 & 9 \\\\ 3 & 6 \\\\ 1 & 2 & 5 & 7 \\end{array}\\in \\mathrm {SET}(\\alpha ).$ Define a relation $\\preceq $ on $\\mathrm {SET}(\\alpha )$ by setting $S\\preceq T$ if we can obtain $S$ from $T$ by applying a (possibly empty) sequence of the $\\pi _i$ operators.", "Lemma 3.8 The relation $\\preceq $ is a partial order on $\\mathrm {SET}(\\alpha )$ .", "It is clear from the definition that $\\preceq $ is reflexive and transitive.", "To see that it is antisymmetric, let $T\\in \\mathrm {SET}(\\alpha )$ and define a vector $d_T$ by letting its $j$ th entry $(d_T)_j$ be the sum of the entries in the first $j$ rows of $T$ , for $1\\le j \\le \\ell (\\alpha )$ .", "Suppose $\\pi _i(T)=s_i(T)$ .", "Then $i$ is strictly lower than $i+1$ in $T$ and $\\pi _i$ exchanges $i$ and $i+1$ .", "Consequently, $(d_{\\pi _i(T)})_j\\ge (d_T)_j $ for all $1\\le j \\le \\ell (\\alpha )$ , and if $k$ is the index of the row in which $i$ appears in $T$ , we have $(d_{\\pi _i(T)})_k>(d_T)_k$ .", "Therefore, if $S$ is obtained from $T$ via a sequence of the operators $\\pi _i$ , then either $S=T$ or there is some entry of $d_S$ that is strictly larger than the corresponding entry of $d_T$ .", "Since application of the $\\pi _i$ operators to $S$ cannot decrease entries of $d_S$ , it is not possible to also obtain $T$ from $S$ via a sequence of these operators.", "Extend the partial order $\\preceq $ on $\\mathrm {SET}(\\alpha )$ to a total order $\\preceq ^t$ arbitrarily.", "Suppose $\\preceq ^t$ orders the $m$
elements of $\\mathrm {SET}(\\alpha )$ as $T_1 \\preceq ^t T_2 \\preceq ^t \\cdots \\preceq ^t T_m$ .", "For each $1\\le j \\le m$ , define $X_j$ to be the $\\mathbb {C}$ -linear span of all $T_k\\in \\mathrm {SET}(\\alpha )$ such that $k\\le j$ .", "It is immediate from the definitions of $\\preceq $ , $\\preceq ^t$ and $X_j$ that $X_j$ is an $H_n(0)$ -module for each $1\\le j\\le m$ .", "We therefore have a filtration of $X_\\alpha $ given by $0:=X_0\\subset X_1 \\subset X_2 \\subset \\cdots \\subset X_m = X_\\alpha .$ By definition, each quotient module $X_j/X_{j-1}$ is one-dimensional and spanned by $T_j\\in \\mathrm {SET}(\\alpha )$ .", "Lemma 3.9 For any $1\\le i \\le n-1$ and any $1\\le j \\le m$ , we have $\\pi _i(T_j) = {\\left\\lbrace \\begin{array}{ll}T_j & \\mbox{ if } i \\mbox{ is strictly left of } i+1 \\mbox{ in } T_j \\\\0 & \\mbox{ otherwise.", "}\\end{array}\\right.", "}$ If $i$ is strictly left of $i+1$ in $T_j$ , then by Theorem REF we have $\\pi _i(T_j)=T_j$ .", "If $i$ is not strictly left of $i+1$ , then by Theorem REF $\\pi _i(T_j)$ is either 0 or $s_i(T_j)$ .", "However, $s_i(T_j) \\in X_{j-1}$ , so $s_i(T_j)=0$ in $X_j/X_{j-1}$ .", "We may now prove our main result.", "Theorem 3.10 Let $\\alpha $ be a composition of $n$ .", "The quasisymmetric characteristic of the $H_n(0)$ -module $X_\\alpha $ is the extended Schur function $\\mathcal {E}_\\alpha $ .", "The quotient module $X_j/X_{j-1}$ is one-dimensional, thus irreducible.", "Lemma REF implies that $\\pi _i(T_j) = {\\left\\lbrace \\begin{array}{ll}T_j & \\mbox{ if } i \\notin \\mathbb {S}(\\mathrm {Des}(T_j)) \\\\0 & \\mbox{ if } i \\in \\mathbb {S}(\\mathrm {Des}(T_j)).\\end{array}\\right.", "}$ Therefore, by (REF ), $X_j/X_{j-1}$ is isomorphic as $H_n(0)$ -modules to $\\mathcal {F}_{\\mathrm {Des}(T_j)}$ .", "Hence we have $ch([X_j/X_{j-1}]) = F_{\\mathrm {Des}(T_j)}$ .", "Therefore, $ch([X_\\alpha ]) = \\sum _{j=1}^m ch([X_j/X_{j-1}]) = \\sum _{j=1}^mF_{\\mathrm {Des}(T_j)}= \\sum _{T\\in
\\mathrm {SET}(\\alpha )}F_{\\mathrm {Des}(T)} = \\mathcal {E}_\\alpha ,$ where the last equality follows from Theorem REF ." ], [ "Indecomposability", "As is the case for the dual immaculate quasisymmetric functions, but not the case for the quasisymmetric Schur functions, the modules $X_\\alpha $ for the extended Schur functions are indecomposable.", "We devote the remainder of the paper to establishing this fact, following the approach of [3] and [16].", "Let $T^{{\\rm sup}}_\\alpha $ be the standard extended tableau of shape $\\alpha $ whose entries in the $i$ th row are the first $\\alpha _i$ integers larger than $\\alpha _1+\\cdots + \\alpha _{i-1}$ .", "We call $T^{{\\rm sup}}_\\alpha $ the super-standard extended tableau of shape $\\alpha $ .", "In Example REF , $T_1$ is the super-standard extended tableau of shape $(2,1,3)$ .", "Lemma 3.11 The module $X_\\alpha $ is cyclically generated by $T^{{\\rm sup}}_\\alpha $ .", "Let $S\\in \\mathrm {SET}(\\alpha )$ , $S\\ne T^{{\\rm sup}}_\\alpha $ .", "Let $\\mathfrak {b}$ be the earliest box of $D(\\alpha )$ in which $S$ and $T^{{\\rm sup}}_\\alpha $ disagree, where the boxes are ordered by reading rows left to right, starting with the bottom row and proceeding upwards.", "Suppose $\\mathfrak {b}$ has entry $j$ in $S$ .", "Then in $S$ , the entry $j-1$ must appear in a later box than $\\mathfrak {b}$ , and since entries increase along rows and up columns, $j-1$ is strictly above and strictly left of $j$ in $S$ .", "Hence $S = \\pi _{j-1}(S^{\\prime })$ for $S^{\\prime }\\in \\mathrm {SET}(\\alpha )$ where $S^{\\prime }$ is $S$ with the entries $j$ and $j-1$ swapped.", "If the entry ($j-1$ ) of $\\mathfrak {b}$ in $S^{\\prime }$ does not agree with the entry of $\\mathfrak {b}$ in $T^{{\\rm sup}}_\\alpha $ , then repeat the process, resulting in $S^{\\prime \\prime }\\in \\mathrm {SET}(\\alpha )$ with $S^{\\prime }=\\pi _{j-2}(S^{\\prime \\prime })$ , where $S^{\\prime \\prime }$ has $j-2$ in $\\mathfrak {b}$ .", "Since the entry in $\\mathfrak {b}$ decreases by one
each time, eventually we obtain $S^*\\in \\mathrm {SET}(\\alpha )$ which agrees with $T^{{\\rm sup}}_\\alpha $ on all boxes up to and including $\\mathfrak {b}$ , and $S$ is obtained from $S^*$ via a sequence of the operators.", "Repeating the process on the next box in which $S^*$ and $T^{{\\rm sup}}_\\alpha $ disagree, etc., eventually we obtain $S$ from $T^{{\\rm sup}}_\\alpha $ via a sequence of the operators.", "Lemma 3.12 Suppose $T\\in \\mathrm {SET}(\\alpha )$ has the property that $\\pi _i(T)=T$ for all $i$ such that $i\\ne \\alpha _1+\\cdots + \\alpha _r$ for any $1\\le r \\le \\ell (\\alpha )$ .", "Then $T=T^{{\\rm sup}}_\\alpha $ .", "The first entry of the first row of $T$ must be 1, by the increasing row and column conditions.", "Suppose the first $j$ entries of the first row of $T$ are $1,\\ldots , j$ for some $1\\le j \\le \\alpha _1-1$ .", "If $j+1$ is not in the first row of $T$ , the increasing row and column conditions force $j+1$ to be weakly left of $j$ and thus $\\pi _j(T)\\ne T$ , contradicting the assumption.", "Hence the entries of the first row of $T$ are $1, \\ldots , \\alpha _1$ .", "A similar argument then ensures the entries of the second row of $T$ are $\\alpha _1+1, \\ldots , \\alpha _1+\\alpha _2$ , and continuing thus we obtain $T=T^{{\\rm sup}}_\\alpha $ .", "Theorem 3.13 Let $\\alpha \\vDash n$ .", "Then $X_\\alpha $ is an indecomposable $H_n(0)$ -module.", "Let $f$ be an idempotent module endomorphism of $X_\\alpha $ .", "We will show $f$ is either zero or the identity, which by [12] implies $X_\\alpha $ is indecomposable.", "Suppose $f(T^{{\\rm sup}}_\\alpha ) = \\sum _{T\\in \\mathrm {SET}(\\alpha )}b_TT.$ It follows from Lemma REF that for any $S\\in \\mathrm {SET}(\\alpha )$ such that $S \\ne T^{{\\rm sup}}_\\alpha $ , there exists some $1\\le i \\le n-1$ such that $\\pi _i(T^{{\\rm sup}}_\\alpha ) = T^{{\\rm sup}}_\\alpha $ but $\\pi _i(S)\\ne S$ .", "For such an $i$ , we have $f(T^{{\\rm sup}}_\\alpha ) = f(\\pi _i(T^{{\\rm sup}}_\\alpha )) =
\\pi _if(T^{{\\rm sup}}_\\alpha ) = \\pi _i( \\sum _{T\\in \\mathrm {SET}(\\alpha )}b_TT) = \\sum _{T\\in \\mathrm {SET}(\\alpha )}b_T\\pi _i(T).$ The coefficient of $S\\ne T^{{\\rm sup}}_\\alpha $ on the right-hand side of the expression above is zero, since if there was $S^{\\prime }\\in \\mathrm {SET}(\\alpha )$ such that $\\pi _i(S^{\\prime })=S$ , we would have $\\pi _i(S^{\\prime }) = \\pi _i^2(S^{\\prime }) = \\pi _i(S) \\ne S$ , a contradiction.", "Therefore $b_S = 0$ for all $S\\ne T^{{\\rm sup}}_\\alpha $ , and we have $f(T^{{\\rm sup}}_\\alpha ) = b_{T^{{\\rm sup}}_\\alpha }T^{{\\rm sup}}_\\alpha $ .", "Since $f^2=f$ , we must have $b_{T^{{\\rm sup}}_\\alpha }^2 = b_{T^{{\\rm sup}}_\\alpha }$ , which forces $b_{T^{{\\rm sup}}_\\alpha }=0$ or $b_{T^{{\\rm sup}}_\\alpha }=1$ .", "Since $X_\\alpha $ is cyclically generated by $T^{{\\rm sup}}_\\alpha $ (Lemma REF ), we conclude $f$ is either zero or the identity on $X_\\alpha $ , as required." ] ]
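For small shapes, Theorem 3.10 can be verified by brute force: enumerate the standard extended tableaux of $\\alpha $ and collect one fundamental quasisymmetric function per tableau. The Python sketch below is ours (hypothetical helper names; rows are stored bottom-to-top). Following Lemma 3.9, we take $i$ to be a descent of $T$ exactly when $i$ is not strictly left of $i+1$ , and we impose the column-increasing condition on every pair of boxes in a column, an assumption consistent with how $\\mathrm {SET}(\\alpha )$ is used here.

```python
from itertools import permutations

def is_set_tableau(rows):
    """Standard extended tableau test: entries increase along rows and up columns."""
    for row in rows:
        if any(a >= b for a, b in zip(row, row[1:])):
            return False
    for r1 in range(len(rows)):
        for r2 in range(r1 + 1, len(rows)):
            for c in range(min(len(rows[r1]), len(rows[r2]))):
                if rows[r1][c] >= rows[r2][c]:
                    return False
    return True

def fillings(alpha):
    """All bijective fillings of D(alpha) by 1..n, rows listed bottom-to-top."""
    n = sum(alpha)
    for perm in permutations(range(1, n + 1)):
        rows, k = [], 0
        for part in alpha:
            rows.append(perm[k:k + part])
            k += part
        yield tuple(rows)

def descent_composition(T, n):
    """Des(T): i is a descent iff i is not strictly left of i+1 (cf. Lemma 3.9)."""
    col = {v: c for row in T for c, v in enumerate(row)}
    descents = [i for i in range(1, n) if not col[i] < col[i + 1]]
    parts, prev = [], 0
    for d in descents + [n]:
        parts.append(d - prev)
        prev = d
    return tuple(parts)

alpha, n = (2, 1, 3), 6
SET = [T for T in fillings(alpha) if is_set_tableau(T)]
# Fundamental expansion of E_alpha: one F_{Des(T)} per tableau (Theorem 3.10).
expansion = sorted(descent_composition(T, n) for T in SET)
# The super-standard tableau contributes the leading term F_alpha:
T_sup = ((1, 2), (3,), (4, 5, 6))
assert T_sup in SET and descent_composition(T_sup, n) == alpha
```

This also illustrates Lemma 3.12: the super-standard tableau is the unique element of $\\mathrm {SET}(\\alpha )$ whose descent composition is $\\alpha $ itself.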
1906.04383
[ [ "Probing the top quark flavor-changing couplings at CEPC" ], [ "Abstract We propose to study the flavor properties of the top quark at the future Circular Electron Positron Collider (CEPC) in China.", "We systematically consider the full set of 56 real parameters that characterize the flavor-changing neutral interactions of the top quark, which can be tested at CEPC in the single top production channel.", "Compared with the current bounds from the LEP2 data and the projected limits at the high-luminosity LHC, we find that CEPC could improve the limits of the four-fermion flavor-changing coefficients by one to two orders of magnitude, and would also provide similar sensitivity for the two-fermion flavor-changing coefficients.", "Overall, CEPC could explore a large fraction of currently allowed parameter space that will not be covered by the LHC upgrade.", "We show that the $c$-jet tagging capacity at CEPC could further improve its sensitivity to top-charm flavor-changing couplings.", "If a signal is observed, the kinematic distribution as well as the $c$-jet tagging could be exploited to pinpoint the various flavor-changing couplings, providing valuable information about the flavor properties of the top quark." 
], [ "Introduction", "After the discovery of the Higgs boson [1], [2], the focus of high energy physics turned to the study of its detailed properties.", "While the Higgs measurements at the Large Hadron Collider (LHC) could reach a precision level of about 5%$\\sim $ 10% [3] (except for the Higgs trilinear coupling), precision measurements of Higgs couplings could benefit from the cleaner environment of a future $e^+e^-$ collider.", "Among several proposals, the Circular Electron Positron Collider (CEPC) in China [4], [5] is proposed to run as a Higgs factory at 240 GeV, which maximizes the $e^+e^-\\rightarrow HZ$ cross-section, producing at least a million Higgs bosons over a period of 7 years.", "Apart from the Higgs boson, the top quark could play an equally important role in the electroweak symmetry breaking mechanism [6].", "By virtue of its large mass, it is often thought of as a window to new physics.", "Producing top quark pairs at a lepton collider would, however, require a minimum center-of-mass energy of about $2m_{top}\\approx $ 345 GeV, beyond the currently planned CEPC energy.", "While an energy upgrade above the $t\\bar{t}$ threshold remains an option, an interesting question to ask is whether we could still learn something about the top quark at an energy below the production threshold.", "One possibility, for instance, would be to study virtual top quarks, which appear in almost all electroweak processes due to quantum corrections [7], [8], [9].", "In this work, we study a different possibility: instead of producing pairs of top quarks on shell, single top quark can be produced in association with a light quark.", "The process $e^+e^-\\rightarrow t(\\bar{t})j$ is possible with $E_{cm}=240$ GeV.", "This process is highly suppressed by the Glashow-Iliopoulos-Maiani (GIM) mechanism [10] in the Standard Model (SM), but if physics beyond SM exists and gives rise to the so called top quark flavor-changing neutral (FCN) interactions, this production mode 
could happen via an $s$ -channel $Z$ or photon, or via a contact four-fermion FCN interaction.", "The top quark FCN couplings have been searched for at the LHC, Tevatron, LEP2 and HERA experiments [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52].", "Currently, the best constraints of the two-fermion FCN couplings come from the LHC [15], [34], [39], [40], [24], while the four-fermion contact interactions have received much less attention, even though they are indispensable for a complete description of FCN couplings, and are also motivated by the explicit models beyond SM [53], [54], [55], [56].", "Interestingly, it was shown that the best sensitivity for the $eetq$ contact interactions is still given by the LEP2 experiments, despite its much lower integrated luminosity [53], [57].", "LHC and LEP2 thus provide complementary constraints of the theory space spanned by the two types of FCN interactions.", "This immediately implies that a future $e^+e^-$ collider could further improve our knowledge of the top quark flavor properties.", "The goal of this paper is to study the prospects of top FCN couplings at CEPC, to demonstrate that a similar complementarity is expected between CEPC and the high-luminosity LHC (HL-LHC), and to provide input for CEPC experiments.", "Similar prospects have been provided previously for TESLA, FCC-ee, and CLIC [58], [59], [60], but only the CLIC report [60] has considered the four-fermion interactions.", "The paper is organized as follows.", "In Section , we describe the theory background with focus on the two-fermion FCN and four-fermion $eetq$ FCN interactions, and their different sensitivities at a hadron collider and a $e^+e^-$ collider.", "In Section , we give the details of our simulation and our analysis strategy.", "In Section , we 
present our results and discuss possible improvements.", "Section  is devoted to our conclusion.", "Some additional results can be found in Appendix ." ], [ "Flavor changing effective operators", "FCN interaction of the top quark is highly suppressed by the GIM mechanism.", "The branching ratios for two-body top FCN decays in SM are of the order of $10^{-12}$ –$10^{-15}$ [61], [62], [63].", "Any hint for such processes would thus immediately point to physics beyond SM.", "A wide variety of limits have been set on these couplings.", "For example, flavor changing decay modes $t\\rightarrow qZ$ and $t\\rightarrow q\\gamma $ were searched for at the Tevatron by CDF [11], [12], [13] and D0 [14], and at the LHC by ATLAS [15], [16], [17], [18], [19] and CMS [20], [21], [22].", "At the LHC, $t\\rightarrow qH$ was also searched for [23], [24], [25], [26], [27], [28], [29], [30], [31], [32].", "Direct top production, $pp\\rightarrow t$ , was considered at the Tevatron by CDF [33] and at the LHC by ATLAS [34], [35], [36], while a similar production with an additional jet in the final state was considered by D0 [37], [38] and CMS [39].", "Single top production in association with a photon and a $Z$ were searched for by CMS [40] and ATLAS [41].", "At LEP2, $e^+e^-\\rightarrow tj$ was investigated by all four collaborations [42], [43], [44], [45], [46], [47], while at HERA, the single-top $e^-p\\rightarrow e^-t$ production was considered by ZEUS [48], [49] and H1 [50], [51], [52].", "The most constraining limits were recently collected and summarized in Table 33 of Ref. 
[57].", "The sensitivities in terms of the two-body branching ratios are roughly of the order $10^{-4}$ to $10^{-3}$ , approaching the expected values from typical new physics models [64].", "A complete and systematic description of the top quark FCN couplings based on the Standard Model Effective Field Theory (SMEFT) [65], [66], [67] was discussed and documented in the LHC TOP Working Group note [68].", "The idea is that starting from the Warsaw basis operators [69], one defines linear combinations of Wilson coefficients that give independent contributions in a given measurement.", "For the $e^+e^-\\rightarrow tj$ process, the relevant basis operators are the following two-fermion operators: $&O_{\\varphi q}^{1(ij)} = (\\varphi ^\\dagger i\\!\\!\\overleftrightarrow{D}_\\mu \\varphi )\\left( \\bar{q}_i \\gamma ^\\mu q_j \\right),\\\\&O_{\\varphi q}^{3(ij)} = (\\varphi ^\\dagger i\\!\\!\\overleftrightarrow{D}^I_\\mu \\varphi )\\left( \\bar{q}_i \\gamma ^\\mu \\tau ^I q_j \\right),\\\\&O_{\\varphi u}^{(ij)} = (\\varphi ^\\dagger i\\!\\!\\overleftrightarrow{D}_\\mu \\varphi )\\left( \\bar{u}_i \\gamma ^\\mu u_j \\right),\\\\&O_{uW}^{(ij)}=\\left( \\bar{q}_i\\sigma ^{\\mu \\nu }\\tau ^I u_j \\right)\\tilde{\\varphi }W_{\\mu \\nu }^I,\\\\&O_{uB}^{(ij)}=\\left( \\bar{q}_i\\sigma ^{\\mu \\nu } u_j \\right)\\tilde{\\varphi }B_{\\mu \\nu },$ where $\\varphi $ is the Higgs doublet, $\\tilde{\\varphi }=i\\sigma _2\\varphi $ , $\\tau ^I$ are the Pauli matrices, $B_{\\mu \\nu }$ and $W_{\\mu \\nu }^I$ are the $U(1)_Y$ and $SU(2)_L$ gauge field strength tensors, $(\\varphi ^\\dagger i\\!\\!\\overleftrightarrow{D}_\\mu \\varphi )\\equiv i\\varphi ^\\dagger \\left( D_\\mu -\\overleftarrow{D}_\\mu \\right)\\varphi $ , and $(\\varphi ^\\dagger i\\!\\!\\overleftrightarrow{D}^I_\\mu \\varphi )\\equiv i\\varphi ^\\dagger \\left( \\tau ^ID_\\mu -\\overleftarrow{D}_\\mu \\tau ^I\\right)\\varphi $ .", "The following four-fermion basis operators are also relevant: $&O_{lq}^{1(ijkl)}=\\left(
\\bar{l}_i \\gamma _\\mu l_j \\right)\\left( \\bar{q}_k \\gamma ^\\mu q_l \\right),\\\\&O_{lq}^{3(ijkl)}=\\left( \\bar{l}_i \\gamma _\\mu \\tau ^I l_j \\right)\\left( \\bar{q}_k \\gamma ^\\mu \\tau ^I q_l \\right),\\\\&O_{lu}^{(ijkl)}=\\left( \\bar{l}_i \\gamma _\\mu l_j \\right)\\left( \\bar{u}_k \\gamma ^\\mu u_l \\right),\\\\&O_{eq}^{(ijkl)}=\\left( \\bar{e}_i \\gamma _\\mu e_j \\right)\\left( \\bar{q}_k \\gamma ^\\mu q_l \\right),\\\\&O_{eu}^{(ijkl)}=\\left( \\bar{e}_i \\gamma _\\mu e_j \\right)\\left( \\bar{u}_k \\gamma ^\\mu u_l \\right),\\\\&O_{lequ}^{1(ijkl)}=\\left( \\bar{l}_i e_j \\right)\\varepsilon \\left( \\bar{q}_k u_l \\right),\\\\&O_{lequ}^{3(ijkl)}=\\left( \\bar{l}_i \\sigma _{\\mu \\nu } e_j \\right)\\varepsilon \\left( \\bar{q}_k \\sigma ^{\\mu \\nu } u_l \\right),$ where $i,j,k,l$ are flavor indices.", "For the four-fermion operators, only the $i=j=1$ components are relevant for the $e^+e^-\\rightarrow t(\\bar{t})j$ process.", "Other operators such as $O_{u\\varphi }^{(ij)}\\equiv \\left( \\varphi ^\\dagger \\varphi \\right)\\left( \\bar{q}_iu_j\\tilde{\\varphi }\\right)$ and $O_{uG}^{(ij)} \\equiv \\left( \\bar{q}_i\\sigma ^{\\mu \\nu } T^a u_j\\right)\\tilde{\\varphi }G_{\\mu \\nu }^a$ could lead to FCN couplings $tqH$ and $tqg$ , but they cannot be probed in the single top channel.", "The following linear combinations of Wilson coefficients can be defined as independent degrees of freedom that enter this process: Two-fermion degrees of freedom: $& c_{\\varphi q}^{-[I](3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{\\varphi q}^{1(3a)} - C_{\\varphi q}^{3(3a)} \\right\\rbrace ,\\\\& c_{\\varphi u}^{[I](3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{\\varphi u}^{(3a)} \\right\\rbrace ,\\\\& c_{uA}^{[I](3a)}\\equiv \\left\\lbrace c_WC_{uB}^{(3a)}+s_WC_{uW}^{(3a)} \\right\\rbrace ,\\\\& c_{uA}^{[I](a3)}\\equiv \\left\\lbrace c_WC_{uB}^{(a3)}+s_WC_{uW}^{(a3)} \\right\\rbrace ,\\\\& c_{uZ}^{[I](3a)}\\equiv \\left\\lbrace
-s_WC_{uB}^{(3a)}+c_WC_{uW}^{(3a)} \\right\\rbrace ,\\\\& c_{uZ}^{[I](a3)}\\equiv \\left\\lbrace -s_WC_{uB}^{(a3)}+c_WC_{uW}^{(a3)} \\right\\rbrace .$ Four-fermion $eetq$ degrees of freedom: $&c_{lq}^{-[I](1,3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lq}^{1(113a)}-C_{lq}^{3(113a)} \\right\\rbrace ,\\\\&c_{eq}^{[I](1,3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{eq}^{(113a)} \\right\\rbrace ,\\\\&c_{lu}^{[I](1,3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lu}^{(113a)} \\right\\rbrace ,\\\\&c_{eu}^{[I](1,3+a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{eu}^{(113a)} \\right\\rbrace ,\\\\&c_{lequ}^{S[I](1,3a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lequ}^{1(113a)} \\right\\rbrace ,\\\\&c_{lequ}^{S[I](1,a3)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lequ}^{1(11a3)} \\right\\rbrace ,\\\\&c_{lequ}^{T[I](1,3a)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lequ}^{3(113a)} \\right\\rbrace ,\\\\&c_{lequ}^{T[I](1,a3)}\\equiv {}_{\\Re }^{[\\Im ]}\\!\\left\\lbrace C_{lequ}^{3(11a3)} \\right\\rbrace ,$ where quark generation indices ($a=1,2$ ) and lepton generation indices are enclosed in parentheses.", "An $I$ in the superscript represents the imaginary part of the coefficient, denoted by $\\Im $ on the right hand side, while without $I$ only the real part is taken, represented by $\\Re $ on the right hand side.", "In total, one collects the following 28 real and independent degrees of freedom for each $a$ (and thus 56 in total): $\\footnotesize \\begin{array}{lllllll}c_{\\varphi q}^{-(3+a)} & c_{uZ}^{(a3)} & c_{uA}^{(a3)} &c_{lq}^{-(1,3+a)} & c_{eq}^{(1,3+a)} &c_{lequ}^{S(1,a3)} & c_{lequ}^{T(1,a3)}\\\\c_{\\varphi u}^{(3+a)} & c_{uZ}^{(3a)} & c_{uA}^{(3a)} &c_{lu}^{(1,3+a)} & c_{eu}^{(1,3+a)} &c_{lequ}^{S(1,3a)} & c_{lequ}^{T(1,3a)}\\\\c_{\\varphi q}^{-I(3+a)} & c_{uZ}^{I(a3)} & c_{uA}^{I(a3)} &c_{lq}^{-I(1,3+a)} & c_{eq}^{I(1,3+a)} &c_{lequ}^{SI(1,a3)} & c_{lequ}^{TI(1,a3)}\\\\c_{\\varphi u}^{I(3+a)} & c_{uZ}^{I(3a)} & 
c_{uA}^{I(3a)} &c_{lu}^{I(1,3+a)} & c_{eu}^{I(1,3+a)} &c_{lequ}^{SI(1,3a)} & c_{lequ}^{TI(1,3a)}\\end{array}$ Among the seven columns, the first three come from the two-fermion operators.", "$c^-_{\\varphi q}$ and $c_{\\varphi u}$ give rise to $tqZ$ coupling with a vector-like Lorentz structure, while $c_{uA}$ and $c_{uZ}$ give rise to the $tq\\gamma $ and $tqZ$ dipole interactions.", "The last four come from the $eetq$ four-fermion operators.", "$c^-_{lq}$ , $c_{lu}$ , $c_{eq}$ , and $c_{eu}$ coefficients give rise to interactions between two vector currents, while $c^S_{lequ}$ and $c^T_{lequ}$ give rise to interactions between two scalar and two tensor currents, respectively.", "We note that the first two rows are CP-even while the last two rows are CP-odd.", "The first and the third rows involve a left-handed light quark, while the second and the fourth rows involve a right-handed light quark.", "The interference between coefficients from different rows in the limit of massless quarks vanishes for this reason.", "Furthermore, the signatures of the degrees of freedom in the first row are identical to those in the third row, and similarly the second row is identical to the fourth row.", "This is due to the absence of an SM amplitude that interferes with the FCN coefficients, which leads to cross-sections that are invariant under a change of phase: $c_i+ic^I_i\\rightarrow e^{i\\delta } (c_i+ic^I_i)$ .", "It is therefore sufficient to focus on the degrees of freedom in the first two rows, and in the rest of the paper we will refer to them simply as coefficients.", "We also note that the $e^+e^-\\rightarrow tj$ signals of the coefficients from the first two rows are similar, up to a $\\theta \\rightarrow \\pi -\\theta $ transformation of the scattering angle in the $tj$ production.", "The decay of the top quark, however, breaks this similarity.", "This is because the two coefficients produce left-handed and right-handed top quarks respectively, while the lepton momentum from
the top decay is correlated with the top helicity.", "This leads to a difference in signal efficiencies between the first two rows.", "Two-fermion FCN interactions in the first three columns are considered in almost all experimental searches.", "Four-fermion FCN interactions, on the other hand, have unduly been neglected.", "They were proposed in Ref.", "[70], and searched for at LEP2 by the L3 and DELPHI collaborations [45], [47], but the three-body decays through four-fermion FCN interactions have never been searched for at the Tevatron or LHC, except for the lepton-flavor violating case.", "As for the prospects at future $e^+e^-$ colliders, four-fermion couplings were also neglected in the studies of single top at TESLA and FCC-ee [58], [59], although the recent CLIC yellow report has included them [60].", "However, the four-fermion operators are indispensable for a complete characterization of the top quark flavor properties.", "They could arise, for example, in the presence of a heavy mediator coupling to one top quark and one light quark, or in the cases where the equation of motion (EOM) is used to remove redundant two-fermion operators in terms of the basis operators.", "Their existence also guarantees the correctness of the effective description when particles go off-shell or in loops, see [53] for a detailed discussion.", "The three-body decay $t\\rightarrow cf\\bar{f}$ was calculated in several explicit models [54], [55], [56], giving a further motivation for considering the $tcll$ contact operators.", "Ref.", "[71] recasted the LHC constraints of $t\\rightarrow qZ$ to provide bounds.", "Finally, the lepton-and-quark-flavor violating top decay through contact interactions was studied in [72], and recently searched for by the ATLAS collaboration [73].", "An interesting fact about the $eetq$ four-fermion FCN interaction is that the most stringent limits are still coming from the LEP2 experiments.", "In Ref.", "[57], a global analysis based on the current 
bounds was performed within the SMEFT framework.", "The result clearly showed that the LHC is more sensitive to the two-fermion operator coefficients, while LEP2 is more sensitive to the four-fermion ones.", "Hence, their results are currently complementary in the full parameter space, as demonstrated in Figure 59 in Section 8.1 of Ref. [57].", "The complementarity persists even with HL-LHC (see Figure 59 right of Ref.", "[57]), despite an order of magnitude difference between the LEP2 and HL-LHC luminosities.", "Clearly, this implies that an $e^+e^-$ collider with higher luminosity could continue to provide valuable information about the top FCN interactions, and explore parameter space that will not be covered by the HL-LHC.", "The difference in sensitivities between the two types of colliders can be understood as follows.", "The two-fermion operators can be searched for at the LHC by the flavor-changing decay of the top quark, but the same decay through a four-fermion operator is a three-body decay, and will be suppressed by an additional phase space factor.", "As an illustration, the decay rates of $t\\rightarrow ce^+e^-$ through $c_{\\varphi u}$ , $c_{uZ}$ and $c_{eu}$ are $8.1\\times 10^{-5}$ GeV, $2.4\\times 10^{-4}$ GeV and $3.2\\times 10^{-6}$ GeV, respectively, for $c/\\Lambda ^2=1$ TeV$^{-2}$ .", "Furthermore, the $e^+e^-$ mass spectrum is a continuum, and thus the best sensitivity requires a dedicated search without a mass window cut (see discussions in Refs.", "[53], [71]).", "Searching for four-fermion operators in single top channels at a hadron collider suffers from the same phase-space suppression.", "The situation in an $e^+e^-$ collider is, however, different.", "The two-fermion operators can be searched for through single top $e^+e^-\\rightarrow Z^*/\\gamma ^* \\rightarrow tj$ (or through top decay if the center-of-mass energy allows for top quark pair production, though typically the former has a better sensitivity [58]).", "In the case of a
four-fermion operator, instead of a suppression effect, the production rate is actually enhanced due to the fact that there is one less propagator than in the two-fermion case.", "As an illustration, the single top production cross-sections at $E_{cm}=240$ GeV for $c_{\\varphi u}$ , $c_{uZ}$ and $c_{eu}$ are 0.0018 pb, 0.020 pb and 0.12 pb, respectively, for $c/\\Lambda ^2=1$ TeV$^{-2}$ , and this enhancement effect increases with energy.", "The comparison of the two cases is illustrated in Figure .", "Another advantage of a lepton collider is that one can reconstruct the missing momentum.", "This is not relevant for the problem at hand, but could be important for setting bounds on four-fermion operators with neutrinos, see Ref. [74].", "Figure: NO_CAPTION Figure: NO_CAPTION (top) The flavor-changing decay at the LHC.", "The four-fermion operator contribution is suppressed by an additional phase space factor compared with the two-fermion contribution.", "(bottom) The flavor-changing single top at an $e^+e^-$ collider.", "The four-fermion operator contribution is enhanced due to having one less $s$ -channel propagator than in the two-fermion case.", "Green dots and blue squares represent two- and four-fermion operator insertions."
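The contrast between the two production modes can be made concrete by converting the cross-sections quoted above into raw event yields. The short Python sketch below is our own illustration, using only numbers taken from this paper (the quoted cross-sections for $c/\Lambda^2 = 1$ TeV$^{-2}$ and the 5.6 ab$^{-1}$ CEPC luminosity assumed later in the Simulation section); the counts are produced events before any selection.

```python
# Raw e+e- -> tj event yields N = sigma * L, before any selection.
# Cross-sections (pb) at E_cm = 240 GeV for c/Lambda^2 = 1 TeV^-2 and the
# 5.6 ab^-1 luminosity are both taken from the text of this paper.
XSEC_PB = {"c_phi_u": 0.0018, "c_uZ": 0.020, "c_eu": 0.12}
LUMI_PB = 5.6e6  # 5.6 ab^-1 expressed in pb^-1

def expected_events(xsec_pb, lumi_pb=LUMI_PB):
    """Expected number of produced signal events: N = sigma * L."""
    return xsec_pb * lumi_pb

for name, xs in sorted(XSEC_PB.items()):
    print(f"{name:8s} -> {expected_events(xs):>9,.0f} events")
```

The four-fermion coefficient $c_{eu}$ yields roughly 65 times more produced events than the two-fermion coefficient $c_{\varphi u}$, reflecting the propagator enhancement discussed above.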
], [ "Simulation", "To study the prospects of top FCN couplings, we consider the scenario of CEPC running with a center-of-mass energy $E_{cm}=240$ GeV and an integrated luminosity of 5.6 ab$^{-1}$ .", "We simulate the signal and background at leading order with parton showering, using MadGraph5_aMC@NLO [75] and Pythia8 [76], [77].", "The signal is generated with the UFO model [78], [79], dim6top, which follows the LHC TopWG EFT recommendation [68] and is available at https://feynrules.irmp.ucl.ac.be/wiki/dim6top.", "The detector-level simulation is performed with Delphes with the default CEPC card [80].", "Jets are reconstructed using the FastJet package [81] with the anti-$k_t$ algorithm [82] with a radius parameter of 0.5.", "The automatic calculation of QCD corrections for processes involving only two-fermion FCN operators was developed in Ref.", "[83] (see also Refs.", "[84], [85], [86], [87], [88], [89], [90], [91], [92] where the results for the other top flavor-changing channels have been presented).", "The corrections for four-fermion operators were given in the appendix of Ref.
[53].", "The sizes are below 20%, corresponding to less than 10% change in the coefficients, and therefore we neglect these corrections in this work.", "The dominant background comes from the $W$ -pair and $Z$ -pair production, and we do not expect a significant change at the next-to-leading order in QCD.", "We consider the semi-leptonic top quark decays.", "The signal final state is $bjl\\nu $ , where $j$ is an up or charm quark jet.", "The dominant background is $qq^{\\prime }l\\nu $ , with one light or charm quark jet misidentified as a $b$ -jet.", "A large fraction comes from the $W$ pair production with one $W$ decaying hadronically and the other leptonically, while the diagrams with only one $W$ resonance decaying leptonically also make an important contribution.", "We thus take into account the full contribution from the $e^+e^-\\rightarrow Wq\\bar{q}^{\\prime }$ process with $W$ decaying leptonically.", "Adding all the diagrams from $e^+e^-\\rightarrow l\\nu q\\bar{q}^{\\prime }$ does not make a sizable change to the background [58], and so they are not taken into account.", "Another source of background comes from $b\\bar{b}ll$ and $c\\bar{c}ll$ , where one of the jets is mistagged and one of the leptons is missed by the detector.", "This is included in our simulation, but the contribution is subdominant.", "Selected diagrams for the signal and background are shown in Figure .", "Figure: NO_CAPTION Figure: NO_CAPTION Based on the expected signature of the signal process, we select events with exactly one charged lepton (electron or muon) and at least two jets.", "The charged lepton must have $p_\\text{T}>10$ GeV and $|\\eta |<3.0$ .", "All jets are required to have $p_\\text{T}>20$ GeV and $|\\eta |<3.0$ .", "Exactly one jet should be $b$ -tagged.", "If more than one non-$b$ -tagged jet is present, the one with the highest $p_\\text{T}$ is selected as the up or charm quark jet candidate.", "We have chosen a $b$ -tagging working point with 80% efficiency 
for $b$ -jets and a mistagging rate of 10% (0.1%) from $c$ -jets (light jets) [93].", "A missing energy greater than 30 GeV is also required due to the presence of a neutrino.", "The $W$ boson candidate is reconstructed from the charged lepton and the missing energy.", "The top quark candidate is reconstructed by combining the $W$ boson candidate with the $b$ -jet.", "At the parton level, we expect the non-$b$ -tagged jet from the signal to have $E_j=\\frac{s-m_{top}^2}{2\\sqrt{s}}\\approx 58\\ \\mbox{GeV}$ .", "For the background, if the contribution comes from the diboson production (e.g.", "Figure  down left), we expect the dijet mass to peak at $m_W=80.4$ GeV.", "The contribution from the non-resonant diagrams (e.g.", "Figure  down right) cannot, however, be neglected and gives rise to a continuum spectrum in the dijet mass distribution.", "At the reconstruction level, it turns out that the energy of the non-$b$ -tagged jet $E_j$ , the invariant mass of the $b$ -jet and the non-$b$ -tagged jet $m_{jj}$ , and the invariant mass of the top quark candidate $m_{top}$ are the most useful variables to discriminate the signal from background.", "In Figure , we plot these variables at the reconstruction level, for the background as well as for the signals from the two typical operator coefficients, $c_{uZ}$ and $c_{eq}$ , for illustration.", "As our baseline analysis, we impose the following kinematic cuts at the reconstruction level $&E_j< 60\\ \\mathrm {GeV}\\,,\\\\&m_{jj}>100 \\ \\mathrm {GeV}\\,,\\\\&m_{top}<180 \\ \\mathrm {GeV}\\,.$ These cuts are motivated by Figure .", "The expected number of background events after event selection is about 1400 with an integrated luminosity of 5.6 ab$^{-1}$ , corresponding to a statistical uncertainty of about 2.7%.", "We assume that the systematic uncertainty will be under control below this level.", "The impact of the systematic uncertainty can be easily estimated, e.g.", "a $3\\%$ systematic uncertainty will weaken the 
bound on the cross-section by a factor of about 1.5, which corresponds to a factor of 1.2 on the value of the coefficients.", "In the rest of the paper we simply ignore the systematic effects.", "We will see that this simple baseline scenario already allows us to obtain reasonable sensitivities.", "In the absence of any FCN signal, the 95% confidence level (CL) upper bound on the fiducial cross-section is $0.0134$ fb.", "Alternatively, the 5$\\sigma $ discovery limit of the signal cross-section, determined by $S/\\sqrt{B}=5$ , is a function of the integrated luminosity $L_\\text{int}$ : $\\sigma = \\frac{5\\sqrt{\\sigma _{B}}}{\\sqrt{L_\\text{int}}}= \\frac{2.51\\ \\mbox{fb}}{\\sqrt{L_\\text{int}/\\mbox{fb}^{-1}}}$ Figure: NO_CAPTIONFigure: NO_CAPTIONFigure: NO_CAPTION$m_{top}$ , $E_j$ , and $m_{jj}$ are shown for signals from $c_{uZ}^{(23)}$ and $c_{eq}^{(1,3+2)}$ .", "The cross-section is a quadratic function of the operator coefficients.", "Including the interference effects, such a function has 28 independent terms for the 7 coefficients in each row of Eq.", "(REF ).", "These terms for the first two rows are the same as those for the last two rows, because they only differ by a CP phase which would never show up in the cross-section (without any possible interference with SM).", "Thus, only 56 independent terms need to be determined for the first two rows for each $a$ .", "We sample the parameter space with 56 points and simulate the fiducial cross-section for each of them.", "The results are fitted with the following form: $\\sigma =\\sum _{a=1,2}\\frac{(1\\ \\mathrm {TeV})^4}{\\Lambda ^4}\\left(\\vec{C_1^a}\\cdot M_1^a\\cdot \\vec{C_1^a}^{T}+\\vec{C_2^a}\\cdot M_2^a\\cdot \\vec{C_2^a}^{T}\\right)$ where $\\vec{C}_{1,2}$ denote the vectors formed by the coefficients in the first and second rows of Eq.", "(REF ).", "$a$ is the light quark generation.", "$M_{1,2}^a$ are $7\\times 7$ matrices.", "The above result allows us to convert the upper bound and discovery limit of
the cross-section into a 56-dimensional coefficient space.", "We have verified the relations between signatures from different rows in Eq.", "(REF ): the 1st (2nd) and the 3rd (4th) rows always give the same signatures; the 1st (3rd) and the 2nd (4th) rows at the production level are identical up to a $\\theta \\rightarrow \\pi -\\theta $ transformation in the production angle, but differ if the top decays.", "In Appendix , a comparison between the signals from $c_{uZ}^{(23)}$ , $c_{uZ}^{(32)}$ and $c_{uZ}^{I(23)}$ is shown in Figure .", "A comparison between the signals from $c_{eq}^{(1,3+2)}$ , $c_{eu}^{(1,3+2)}$ and $c_{eq}^{I(1,3+2)}$ is shown in Figure .", "Our baseline analysis could be improved by exploiting additional features of the signal with a template fit.", "One possibility is to make use of heavy flavor tagging.", "For the operators with $a=2$ , requiring a tagged $c$ -jet in the signal definition could largely suppress the background, as most background comes from events with one charm and one strange quark in the final state, with the charm mistagged as a $b$ .", "The clean environment of CEPC allows for a precise determination of displaced vertices and an excellent $c$ -jet tagging capability [5].", "We assume a working point with a 70% tagging efficiency for $c$ -jets and a 20% (12%) mistagging rate from $b$ -jets (light jets) [93].", "To constrain the coefficients with $a=2$ , we require a $c$ -jet in the signal definition, while to constrain the $a=1$ coefficients we veto the events with a $c$ -jet, although the latter is not expected to significantly change the sensitivity as most background events do not have an extra $c$ -jet except the one that fakes the $b$ -jet.", "Another useful piece of information is the angular distribution of the single top, which is determined by the specific Lorentz structure of the operator.", "In Figure REF , we show the distribution of the top scattering angle from all 7 coefficients in the first row at the parton level and
the reconstruction level.", "The scattering angle $\\theta $ is defined as the angle between the momentum of the $e^+$ beam and $t$ or $\\bar{t}$ .", "The distributions for the top and anti-top are related by $\\theta \\rightarrow \\pi -\\theta $ , and this is illustrated by comparing the first two plots in Figure REF .", "Furthermore, this holds even for the reconstructed top and anti-top candidates from the background due to the CP symmetry.", "For this reason, we consider the observable $c=Q_l\\times \\cos \\theta $ , i.e.", "the lepton charge times the cosine of the scattering angle.", "The discrimination power of this observable is illustrated in the right plot of Figure REF , at the reconstruction level.", "We perform a template fit by further dividing the signal region into 4 bins, defined as $c\\in (-1,-0.5)$ , $[-0.5,0)$ , $[0,0.5)$ , and $[0.5,1)$ .", "To construct a $\\chi ^2$ fit, we take $\\sqrt{B}$ in each bin as the experimental uncertainty.", "The smallest number of events in one bin is 24 even after requiring a $c$ -jet, and so the Gaussian distribution is a good approximation.", "We simulate the Gaussian fluctuation in all bins by generating a large number of pseudo-measurement samples and compute the average $\\chi ^2$ for each point in the coefficient space.", "Our 95% CL bound is determined by $\\left<\\chi ^2\\right><9.49$ .", "Figure: NO_CAPTION 2" ], [ "Results", "Following our baseline analysis, the 95% CL limits of the individual coefficients in the first row are given in Figure , where they are compared with the current limits from LHC+LEP2 and with the HL-LHC projection.", "FCC-ee projection at the center-of-mass energy of 240 GeV is given in Ref.", "[59], but only for the 3 two-fermion coefficients, and we show them in the same plot.", "Note that Ref.", "[71] suggested that the current signal region of the $t\\rightarrow ql^+l^-$ decay mode, designed for the search of $t\\rightarrow qZ$ mode, can be extended by including the “off-shell” 
region with $\\left|m_{l^+l^-}-m_Z\\right|>15$ GeV.", "This could lead to HL-LHC prospects that are slightly better than Figure  for some of the four-fermion coefficients.", "The CLIC bounds, on the other hand, are only available with higher center-of-mass energy runs and are not shown in the plot.", "For example, the expected limits of the four-fermion coefficients from a 380 GeV run with an integrated luminosity of 500 fb$^{-1}$ , are about a factor of $3\\sim 4$ better than those from CEPC, due to the higher beam energy and beam polarization [60].", "Looking at the 3 two-fermion coefficients on the left, the limits are either weaker than or comparable to HL-LHC.", "Still, we emphasize that even in this case the CEPC measurement provides an important consistency check with the existing results.", "The most interesting result, however, is the improvement of the other four four-fermion coefficients.", "As expected, we see that they are 1$\\sim $ 2 orders of magnitude better than the current limits and the combination of HL-LHC and LEP2.", "Similar results are observed for the second row operators and are displayed in Figure REF in Appendix .", "In Figure , we show the two-dimensional bound of the two-fermion coefficient $c_{\\varphi q}^{-(3+a)}$ and the four-fermion coefficient $c_{eq}^{(1,3+a)}$ , compared with LHC, HL-LHC, and LEP2.", "Clearly, a large fraction of the currently allowed parameter space will be probed by CEPC.", "A similar plot for the operators in the second row of Eq.", "(REF ) is given in Figure  in Appendix .", "In Figure  we plot the discovery limits of the seven coefficients in the first row of Eq.", "(REF ) in terms of $\\Lambda /\\sqrt{c}$ , as a function of integrated luminosity.", "The scale is roughly that of new physics, assuming that the coupling is of the order of one.", "The plot shows that new physics at a few TeV leading to four-fermion FCN interactions can be discovered already at an early stage of CEPC running.", "The improvement 
with luminosity is, however, less significant.", "Note that the two curves corresponding to $c_{eq}^{(1,3+2)}$ and $c_{lq}^{-(1,3+2)}$ overlap with each other.", "This is because they give rise to four-fermion couplings that only differ in the chirality of the electron fields, and thus have the same rate in the signal region defined by our baseline analysis.", "The results for the coefficients of the second row are given in Appendix , Figure , where a similar degeneracy between $c_{eu}^{(1,3+2)}$ and $c_{lu}^{(1,3+2)}$ can be observed.", "Figure: The 95% CL limits on individual coefficients in the first row of Eq.", "(REF ), as expected from CEPC, compared with the existing LHC+LEP2 bounds and the projected limits from HL-LHC+LEP2 and FCC-ee with 3 ab$^{-1}$ luminosity at 240 GeV (only for the first three coefficients), see Refs.", "[57], [59].", "The results for both generations $a=1,2$ are displayed.", "The orange column “CEPC baseline” is the expected limit following our baseline analysis, which applies to both flavors (a=1,2).", "The red column “CEPC template fit” uses $c$ -jet tagging for signal definition and only applies to $a=2$ operators.", "Figure: Two-dimensional bound of the two-fermion coefficient $c_{\\varphi q}^{-(3+2)}$ and the four-fermion coefficient $c_{eq}^{(1,3+2)}$ at 95% CL.", "Other operators are fixed to 0.", "The allowed regions from HL-LHC and LEP2 are similar to Figure 59 in Section 8.1 of Ref.", "[57], except that there all coefficients are marginalized over.", "The blue region (“CEPC B”) is the bound expected from CEPC following our baseline analysis.", "The yellow region (“CEPC T”) is obtained with a template fit approach, see the discussion in Section .", "Figure: Discovery limits in terms of $\\Lambda /\\sqrt{c}$ , which is roughly the scale of new physics, for the coefficients in the first row of Eq.", "(REF ), as a function of the integrated luminosity of CEPC.", "The template fit method described in the previous section leads to improvements in two respects.", "First, if SM is assumed, the 95% CL limits on the operator coefficients for $a=2$ are improved.",
"This is mostly due to the $c$ -tagging requirement.", "The results are shown in Figure  (red columns), and Figure  (the yellow region), where the improvements are seen clearly.", "The same effects on the other four-fermion operators are displayed and compared in Figure .", "The second improvement is from the discrimination power between the different kinds of signals, which comes from both the angular distribution and the $c$ -tagging information.", "This is particularly important when an excess is found, in which case we need to understand the FCN operator that leads to it.", "The baseline approach can only give the overall magnitude of the flavor-changing effects, while the template fit helps to pin down the actual form of the operator.", "This is illustrated in Figures , where we consider two hypothetical scenarios, with $c_{eq}^{(1,3+a)}=c_{lq}^{-(1,3+a)}=0.05$ , and $c_{lequ}^{S(1,a3)}=0.065$ , $c_{lequ}^{T(1,a3)}=0.025$ ($\\Lambda =1$ TeV).", "These values are consistent with the current bounds, but are around the sensitivity expected at CEPC.", "Assuming that the other coefficients vanish, with the baseline approach we are able to identify the overall flavor-changing effect, but not the value of each coefficient.", "The allowed region in the two-dimensional parameter space is a ring, giving no information about the actual form of new physics.", "The template fit, on the other hand, can pinpoint with more precision the value of each coefficient.", "This holds also for the $a=1$ case, even though the precision is slightly worse.", "A four-fold degeneracy shows up in the first scenario.", "This is because the overall sign of the coefficients does not have a visible effect (due to the absence of SM interference), and the relative sign between $c_{eq}^{(1,3+a)}$ and $c_{lq}^{-(1,3+a)}$ cannot be observed because the two operators do not interfere.", "In the second case this is reduced to a two-fold degeneracy.", "This is because the interference between 
$c_{lequ}^{S(1,a3)}$ and $c_{lequ}^{T(1,a3)}$ is proportional to $\\cos \\theta $ , so the opposite sign can be excluded by the angular distribution.", "In fact, due to the shape of the background (see Figure REF right), the template fit has a better discrimination power when $c_{lequ}^{S(1,a3)}$ and $c_{lequ}^{T(1,a3)}$ have opposite signs.", "This effect can be seen even with the SM hypothesis, see the right plot in Figure .", "The discrimination between $a=1$ and $a=2$ operators is also possible with the help of $c$ -tagging.", "This is demonstrated in Figure , where we consider three hypothetical scenarios, with $\\left(c_{lq}^{-(1,3+1)},c_{lq}^{-(1,3+2)}\\right)=(0,0.05),\\ (0.05,0)$ , and $(0.35,0.35)$ .", "By using events with and without a $c$ -jet, we can resolve the light-quark flavor involved in the FCN coupling with some precision.", "This is unlike LHC, where one has to combine the production and decay measurements to disentangle the two light-quark flavors in the flavor-changing signal by using the fact that the production channel depends on the light-quark parton distribution function.", "As an additional remark, we note that a flat direction exists between the three coefficients $c_{\\varphi q}^{-(3+a)}$ , $c_{lq}^{-(1,3+a)}$ and $c_{eq}^{(1,3+a)}$ , which cannot be constrained by a single run at 240 GeV.", "A second working point with larger energy would be useful to lift the degeneracy, as the two-fermion and four-fermion contributions depend differently on energy.", "All other directions can be constrained simultaneously at 240 GeV.", "Figure: NO_CAPTION Figure: NO_CAPTION$c_{eq}^{(1,3+a)}=c_{lq}^{-(1,3+a)}=0.05$ .", "Right: $c_{lequ}^{S(1,a3)}=0.065$ , $c_{lequ}^{T(1,a3)}=0.025$ .", "Both points are labeled by a black dot in the plots.", "The template fit helps to pinpoint the coefficients.", "Better precision is obtained for operators involving a charm-quark (i.e.", "$a=2$ ).", "Figure: NO_CAPTION$c_{lq}^{-(1,3+a)}$ coefficients with $a=1$ and 
$a=2$ , at 95% CL.", "The other coefficients are turned off.", "Three hypotheses are considered.", "The template fit helps to identify the light-quark flavor involved in the FCN coupling.", "A more comprehensive study can further improve these results in several aspects.", "The QCD correction of the four-fermion operators can be implemented in the analysis, although we expect the correction to be similar to the two-fermion case.", "Kinematic features of the signals from different operators can be fully exploited by using a multivariate analysis.", "Alternatively, one could also construct the covariance matrix directly, following the statistically optimal observable approach [94], [95], which in theory guarantees the best sensitivity.", "However, the nonlinear form of the cross-section in the parameter space and the non-analytic nature of the detector effects need to be carefully dealt with.", "The same approach has been used to study the FCN couplings at CLIC [60], where the detector effects were taken into account by an efficiency parameter.", "Finally, useful information may also come from the study of flavor-changing decays of the top quark, depending on the possibility of an energy upgrade above the 350 GeV threshold, which in addition could also provide access to the Higgs and gluon FCN couplings.", "We defer these studies to future work."
], [ "Conclusion", "The CEPC collider, proposed as a Higgs factory, is also an ideal machine to study the flavor properties of the top quark.", "The FCN interactions of the top quark can be searched for in the single top production $e^+e^-\\rightarrow tj$ .", "The results from LEP2, Tevatron and LHC experiments suggest that a future lepton collider would provide the best sensitivity for the four-fermion $eetq$ FCN interactions, complementary to a hadron collider which mainly constrains the two-fermion FCN interactions.", "In this work, we derived the expected sensitivity at CEPC, with an energy of 240 GeV and integrated luminosity of 5.6 ab$^{-1}$ , of the full set of 56 FCN operators that are relevant for the $e^+e^-\\rightarrow tj$ channel, and showed that an improvement of about 1-2 orders of magnitude of the four-fermion FCN couplings could be expected.", "Our main results are displayed in Figures  and , where one can clearly see that a large fraction of the currently allowed FCN parameters could be tested at CEPC.", "We also showed that the capability of $c$ -jet tagging at CEPC further improves the sensitivity for the flavor-changing couplings between the top and charm quarks.", "In case a signature is established, we showed that kinematic observables could be used to pinpoint the values of the coefficients, which in turn would give information about the new physics behind the discovery.", "Note added: After this work was posted on arXiv, Ref.", "[96] appeared, where the authors discussed the expected limits of the four-fermion coefficients at the Large Hadron-Electron Collider.", "The results are of the same order of magnitude as what we gave in Figures  and REF .", "We would like to thank M. Chala, B. Fuks, G. Durieux, Z. Liang and H.-S. 
Shao for helpful discussions and suggestions." ], [ "Additional results", "Here, we list some additional results mentioned in the previous sections.", "In Figures  and , we compare the signals from $c_{uZ}^{(23)}$ , $c_{uZ}^{(32)}$ , $c_{uZ}^{I(23)}$ , and from $c_{eq}^{(1,3+2)}$ , $c_{eu}^{(1,3+2)}$ , $c_{eq}^{I(1,3+2)}$ , illustrating the relations between the coefficients in different rows of Eq.", "(REF ).", "In Figure REF , we show the individual limits and prospects for the coefficients from the second row of Eq.", "(REF ), similar to Figure .", "In Figure , we present the two-dimensional bound of the two-fermion coefficient $c_{\\varphi u}^{-(3+2)}$ and the four-fermion coefficient $c_{eu}^{(1,3+2)}$ , similar to Figure .", "Finally, in Figure , we show the discovery limits of the coefficients of the second row of Eq.", "(REF ), similar to Figure .", "Figure: NO_CAPTIONFigure: NO_CAPTIONFigure: NO_CAPTION$c_{uZ}^{(23)}$ , $c_{uZ}^{(32)}$ and $c_{uZ}^{I(23)}$ at the parton level.", "Distributions of the scattering angle, the lepton energy, and the lepton pseudorapidity are compared.", "Figure: NO_CAPTIONFigure: NO_CAPTIONFigure: NO_CAPTION$c_{eq}^{(1,3+2)}$ , $c_{eu}^{(1,3+2)}$ and $c_{eq}^{I(1,3+2)}$ at the parton level.", "Distributions of the scattering angle, the lepton energy, and the lepton pseudorapidity are compared.", "Figure: The 95% CL limits on individual coefficients in the second row of Eq.", "(), expected from the CEPC, compared with the existing LHC+LEP2 bounds, and the projected limits from HL-LHC+LEP2 and from FCC-ee with 3 ab$^{-1}$ luminosity at 240 GeV (only for the first three coefficients), see Refs.", ", .", "Results for both generations $a=1,2$ are displayed.", "The orange column “CEPC baseline” is the expected limits following our baseline analysis, which applies to both flavors (a=1,2).", "The red column “CEPC template fit” uses the $c$ -jet tagging in its signal definition and only applies to $a=2$ operators.", "Figure:
The two-dimensional bound of the two-fermion coefficient $c_{\\varphi u}^{-(3+2)}$ and the four-fermion coefficient $c_{eu}^{(1,3+2)}$ , at the 95% CL.", "Other operators are fixed at 0.", "The allowed regions from HL-LHC and LEP2 are similar to Figure 59 in Section 8.1 of Ref.", "[57], except that there all coefficients are marginalized over.", "The blue region (“CEPC B”) is the bound expected from the CEPC, following our baseline analysis.", "The yellow region (“CEPC T”) is obtained with a template fit approach, see the discussion in Section .", "Figure: Discovery limits in terms of $\\Lambda /\\sqrt{c}$ , which is roughly the scale of new physics, for coefficients in the second row of Eq.", "(REF ), as a function of the integrated luminosity at the CEPC." ] ]
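As a quick numerical cross-check of the statistical reach quoted in the Simulation section, the Python sketch below reproduces the quoted numbers from simple Gaussian counting statistics, assuming the roughly 1400 expected background events at 5.6 ab$^{-1}$ given in the text; the small residual difference from the quoted 0.0134 fb upper bound presumably reflects a more refined statistical treatment in the paper.

```python
import math

B_EVENTS = 1400.0   # expected background after selection (from the text)
LUMI_FB = 5600.0    # 5.6 ab^-1 in fb^-1
sigma_B = B_EVENTS / LUMI_FB  # fiducial background cross-section, ~0.25 fb

# Gaussian 95% CL upper bound on the signal yield: S < 1.96 * sqrt(B)
ul95_fb = 1.96 * math.sqrt(B_EVENTS) / LUMI_FB

def discovery_xsec_fb(lumi_fb):
    """5-sigma discovery cross-section from S / sqrt(B) = 5 at luminosity L."""
    return 5.0 * math.sqrt(sigma_B) / math.sqrt(lumi_fb)

print(f"95% CL upper bound : {ul95_fb:.4f} fb (quoted: 0.0134 fb)")
print(f"discovery numerator: {5.0 * math.sqrt(sigma_B):.2f} fb (quoted: 2.51 fb)")
```

The computed numerator of the discovery-limit formula agrees with the 2.51 fb given in the text to within rounding of the background yield.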
1906.04573
[ [ "Corrected overlap weight and clustering coefficient" ], [ "Abstract We discuss two well-known network measures: the overlap weight of an edge and the clustering coefficient of a node.", "For both of them it turns out that they are not very useful for the data analytic task of identifying important elements (nodes or links) of a given network.", "The reason for this is that they attain their largest values on maximal subgraphs of relatively small size, which are more likely to appear in a network than larger ones.", "We show how the definitions of these measures can be corrected in such a way that they give the expected results.", "We illustrate the proposed corrected measures by applying them to the US Airports network using the program Pajek." ], [ "Network element importance measures", "To identify important / interesting elements (nodes, links) in a network we often try to express our intuition about their importance using an appropriate measure (node index, link weight), following the scheme: the larger the measure value of an element, the more important / interesting the element.", "Too often, in the analysis of networks, researchers uncritically pick some measure from the literature (degrees, closeness, betweenness, hubs and authorities, clustering coefficient, etc.", "[10], [9]) and apply it to their network.", "In this paper we discuss two well-known network local density measures: the overlap weight of an edge [8] and the clustering coefficient of a node [4], [11].", "For both of them it turns out that they are not very useful for the data analytic task of identifying important elements of a given network.", "The reason for this is that they attain their largest values on maximal subgraphs of relatively small size – such subgraphs are more likely to appear in a network than larger ones.", "We show how their definitions can be corrected in such a way that they give the expected results.", "We illustrate the proposed corrected measures by applying them to the US
Airports network using the program Pajek.", "We will limit our attention to undirected simple graphs $\\mathbf {G}= ({\\cal V},{\\cal E})$ .", "Many similar indices and weights were proposed by the graph drawing community for disentanglement in the visualization of hairball networks [5], [6], [7].", "When searching for important subnetworks in a given network, we often assume a model in which, during the evolution of the network, increased activity in a part of the network creates new nodes and edges in that part, increasing its local density.", "We expect from a local density measure $ld(x,\\mathbf {G})$ for an element (node/link) $x$ of a network $\\mathbf {G}$ the following properties: ld1.", "adding an edge $e$ to the local neighborhood $\\mathbf {G}^{(1)}$ does not decrease the local density: $ld(x,\\mathbf {G}) \\le ld(x,\\mathbf {G}\\cup e)$ .", "ld2.", "normalization:    $0 \\le ld(x,\\mathbf {G}) \\le 1$ .", "ld3.", "$ld(x,\\mathbf {G})$ can attain value 1, $ld(x,\\mathbf {G}) = 1$ , on the largest subnetwork of a certain type in the network."
], [ "Overlap weight", "A direct measure of the overlap of an edge $e=(u:v) \\in {\\cal E}$ in an undirected simple graph $\\mathbf {G}= ({\\cal V},{\\cal E})$ is the number of common neighbors of its end nodes $u$ and $v$ (see Figure REF ).", "It is equal to $t(e)$ – the number of triangles (cycles of length 3) to which the edge $e$ belongs.", "The edge neighbors subgraph is labeled $T(\\deg (u)-t(e)-1, t(e), \\deg (v) - t(e)-1)$ – the subgraph in Figure REF is labeled $T(4,5,3)$ .", "There are two problems with this measure: it is not normalized (bounded to $[0,1]$ ); it does not consider the `potentiality' of nodes $u$ and $v$ to form triangles – there are $\\min (\\deg (u),\\deg (v)) - 1 - t(e)$ nodes in the smaller set of neighbors that are not in the other set of neighbors.", "Figure: Neighbors of $e(u:v)$", "Two simple normalizations are: $ \\frac{t(e)}{n-2}\\qquad \\mbox{ or } \\qquad \\frac{t(e)}{\\mu }$ where $n=|{\\cal V}|$ is the number of nodes, and $\\mu = \\max _{e \\in {\\cal E}} t(e)$ is the maximum number of triangles on an edge in the graph $\\mathbf {G}$ .", "The (topological) overlap weight of an edge $e=(u:v) \\in {\\cal E}$ also considers the degrees of the edge's end nodes and is defined as $ o(e) = \\frac{t(e)}{(\\deg (u)-1)+(\\deg (v)-1) - t(e)} $ In the case $\\deg (u)=\\deg (v)=1$ we set $o(e) = 0$ .", "It somehow resolves both problems.", "The overlap weight is essentially a Jaccard similarity index [12] $ J(X,Y) = \\frac{|X \\cap Y|}{|X \\cup Y|}$ for $X = N(u) \\setminus \\lbrace v \\rbrace $ and $Y = N(v) \\setminus \\lbrace u \\rbrace $ , where $N(z)$ is the set of neighbors of a node $z$ .", "In this case we have $|X \\cap Y| = t(e)$ and $ |X \\cup Y| = |X| + |Y| - |X \\cap Y| = (\\deg (u)-1)+(\\deg (v)-1) - t(e) .$", "Note also that $h(X,Y) = 1- J(X,Y) = \\frac{|X \\oplus Y|}{|X \\cup Y|}$ is the normalized Hamming distance [12].", "The operation $\\oplus $ denotes the symmetric difference $X \\oplus Y = (X \\cup Y) \\setminus
(X \\cap Y)$ .", "Another normalized overlap measure is the overlap index [12] $ O(e) = O(X,Y) = \\frac{|X \\cap Y|}{\\max (|X|,|Y|)} = \\frac{t(e)}{\\max (\\deg (u),\\deg (v)) - 1}.$ Both measures $J$ and $O$ , applied to networks, have some nice properties.", "For example: a pair of nodes $u$ and $v$ are structurally equivalent iff $J(X,Y) = O(X,Y) = 1$ .", "Therefore the overlap weight measures the substitutability of one edge's end node by the other.", "Figure: US Airports 1997 network, a North-East cut-out", "Introducing two auxiliary quantities $ m(e) = \\min (\\deg (u),\\deg (v)) - 1 \\quad \\mbox{and} \\quad M(e) = \\max (\\deg (u),\\deg (v)) - 1 $ we can rewrite the definition of the overlap weight as $ o(e) = \\frac{t(e)}{m(e)+M(e) - t(e)}, \\quad M(e) > 0 $ and if $M(e)=0$ then $o(e)=0$ .", "For every edge $e \\in {\\cal E}$ it holds that $ 0 \\le t(e) \\le m(e) \\le M(e)$ .", "Therefore $ m(e)+M(e)-t(e) \\ge t(e)+t(e)-t(e) = t(e) $ showing that $0 \\le o(e) \\le 1$ .", "The value $o(e)=1$ is attained exactly when $M(e)=t(e)$ ; and the value $o(e)=0$ exactly when $t(e)=0$ .", "In simple directed graphs without loops different types of triangles exist over an arc $a(u,v)$ .", "We can define overlap weights for each type.", "For example: the transitive overlap weight $ o_t(a) = \\frac{t_t(a)}{(\\mathop {\\rm outdeg}\\nolimits (u)-1)+(\\mathop {\\rm indeg}\\nolimits (v)-1) - t_t(a)} $ and the cyclic overlap weight $ o_c(a) = \\frac{t_c(a)}{\\mathop {\\rm indeg}\\nolimits (u)+\\mathop {\\rm outdeg}\\nolimits (v) - t_c(a)} $ where $t_t(a)$ and $t_c(a)$ are the numbers of transitive / cyclic triangles containing the arc $a$ .", "In this paper we will limit our discussion to overlap weights in undirected graphs."
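The overlap weight $o(e)$ and its Jaccard reformulation can be sketched in a few lines of Python (the paper's computations combined Pajek with Python and R; the neighbor-set representation and the toy graph below are our own illustration, not the paper's code or data):

```python
# Sketch of the topological overlap weight
# o(e) = t(e) / ((deg(u)-1) + (deg(v)-1) - t(e)),
# assuming the graph is given as a dict mapping each node to its neighbor set.

def overlap_weight(adj, u, v):
    t = len((adj[u] - {v}) & (adj[v] - {u}))          # t(e): triangles on the edge (u:v)
    denom = (len(adj[u]) - 1) + (len(adj[v]) - 1) - t
    return t / denom if denom > 0 else 0.0            # o(e) = 0 when deg(u) = deg(v) = 1

def jaccard(adj, u, v):
    X, Y = adj[u] - {v}, adj[v] - {u}                 # X = N(u)\{v}, Y = N(v)\{u}
    return len(X & Y) / len(X | Y) if X | Y else 0.0

# Toy graph: K4 minus the edge (2:4).
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(overlap_weight(adj, 1, 3))   # two triangles (1,2,3), (1,3,4): o = 2/(2+2-2) = 1.0
print(overlap_weight(adj, 1, 2))   # one triangle: o = 1/(2+1-1) = 0.5
assert overlap_weight(adj, 1, 3) == jaccard(adj, 1, 3)  # o(e) is exactly J(X,Y)
```

On the edge (1:3) the end nodes are structurally equivalent, which matches the property $J(X,Y)=1$ stated above.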
], [ "US Airports links with the largest overlap weight", "Let us apply the overlap weight to the network of US Airports 1997 [1].", "It consists of 332 airports and 2126 edges among them.", "There is an edge linking a pair of airports iff in the year 1997 there was a flight company providing flights between those two airports.", "The size of a circle representing an airport in Figure REF is proportional to its degree – the number of airports linked to it.", "The airports with the largest degree are: Table: NO_CAPTION", "For the overlap weight the edge cut at level 0.8 (the subnetwork of all edges with overlap weight at least 0.8) is presented in Figure REF .", "It consists of two triangles, a path of length 2, and 17 separate edges.", "Figure: Edges with the largest overlap – cut at 0.8", "A tetrahedron (Kwigillingok, Kongiganak, Tuntutuliak, Bethel), see Figure REF , gives the first triangle in Figure REF – attached by the node Bethel to the rest of the network.", "Figure: Zoom in", "From this example we see that in real-life networks the edges with the largest overlap weight tend to be edges whose end nodes have relatively small degrees ($o(e)=1$ implies $\\deg (u) = \\deg (v) = t(e)+1$ ) – the overlap weight does not satisfy the condition ld3.", "Because of this the overlap weight is not very useful for data analytic tasks of searching for important elements of a given network.", "We would like to emphasize here that there are many applications in which the overlap weight proves to be useful and appropriate; we question only its appropriateness for determining the most overlapped edges.", "We will try to improve the definition of the overlap weight to better suit the data analytic goals."
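The edge cut used above is straightforward to reproduce; the sketch below uses a small hypothetical graph mimicking the tetrahedron-with-a-tail pattern, not the US Airports data:

```python
# Edge cut at a given level: keep only the edges whose overlap weight
# reaches the threshold. Graph encoding (dict of neighbor sets) is an assumption.

def overlap_weight(adj, u, v):
    t = len((adj[u] - {v}) & (adj[v] - {u}))
    denom = (len(adj[u]) - 1) + (len(adj[v]) - 1) - t
    return t / denom if denom > 0 else 0.0

def edge_cut(adj, level):
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return [tuple(sorted(e)) for e in edges if overlap_weight(adj, *e) >= level]

# A tetrahedron {1,2,3,4} attached by node 4 to a pendant path 4-5-6.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4},
       4: {1, 2, 3, 5}, 5: {4, 6}, 6: {5}}
print(sorted(edge_cut(adj, 0.8)))
# [(1, 2), (1, 3), (2, 3)] -- only the edges among the small-degree nodes survive;
# the extra neighbor of node 4 lowers the overlap of every edge incident to it.
```

This reproduces, in miniature, the observation that edges with the largest overlap weight sit on nodes of relatively small degree.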
], [ "Corrected overlap weight", "We define a corrected overlap weight as $ o^{\\prime }(e) = \\frac{t(e)}{\\mu +M(e) - t(e)} $ By the definition of $\\mu $ , for every $e \\in {\\cal E}$ it holds that $t(e) \\le \\mu $ .", "Since $M(e) - t(e) \\ge 0$ , also $ \\mu +M(e)-t(e) \\ge \\mu \\ge t(e)$ , and therefore ld2 holds: $0 \\le o^{\\prime }(e) \\le 1$ .", "$o^{\\prime }(e)=0$ exactly when $t(e)=0$ , and $o^{\\prime }(e)=1$ exactly when $\\mu = M(e) = t(e)$ .", "For ld3, the corresponding maximal edge neighbors subgraph contains $T(0,\\mu ,0)$ .", "The end nodes of the edge $e$ are structurally equivalent.", "To show that ld1 also holds let $\\mathbf {G}^{(1)}(e)$ denote the edge neighbors subgraph of the edge $e$ .", "Let $f$ be the edge added to $\\mathbf {G}^{(1)}(e)$ .", "We may assume that $\\deg (u) \\ge \\deg (v)$ , $e = (u:v)$ .", "Therefore $M(e) = \\deg (u) - 1$ .", "We have to consider several cases: a.", "$f \\in {\\cal E}(\\mathbf {G}^{(1)}(e))$ : then $\\mathbf {G}\\cup f = \\mathbf {G}$ and $o^{\\prime }(e,\\mathbf {G}\\cup f) = o^{\\prime }(e,\\mathbf {G})$ .", "b.", "$f \\notin {\\cal E}(\\mathbf {G}^{(1)}(e))$ : b1.", "$f = (u:t)$ : then $t \\in N(v)\\setminus T(e) \\setminus e$ .", "It creates a new triangle $(u,v,t)$ .", "We have $t^{\\prime }(e) = t(e)+1$ and $M^{\\prime }(e) = M(e)+1$ .", "We get $ o^{\\prime }(e,\\mathbf {G}\\cup f) = \\frac{t^{\\prime }(e)}{\\mu + M^{\\prime }(e) - t^{\\prime }(e)} = \\frac{t(e)+1}{\\mu + M(e) - t(e)} > o^{\\prime }(e,\\mathbf {G}) $ b2.", "$f = (v:t)$ : then $t \\in N(u)\\setminus T(e) \\setminus e$ .", "It creates a new triangle $(u,v,t)$ .", "We have $t^{\\prime }(e) = t(e)+1$ and $M^{\\prime }(e) = M(e)$ .", "We get $ o^{\\prime }(e,\\mathbf {G}\\cup f) = \\frac{t^{\\prime }(e)}{\\mu + M^{\\prime }(e) - t^{\\prime }(e)} = \\frac{t(e)+1}{\\mu + M(e) - t(e)-1} > \\frac{t(e)+1}{\\mu + M(e) - t(e)} > o^{\\prime }(e,\\mathbf {G}) $ b3.", "$f = (t:w)$ and $t,w \\in N(u) \\cup N(v) \\setminus \\lbrace u,v\\rbrace $ : No new triangle on $e$ is 
created.", "We have $t^{\\prime }(e) = t(e)$ and $M^{\\prime }(e) = M(e)$ .", "Therefore $o^{\\prime }(e,\\mathbf {G}\\cup f) = o^{\\prime }(e,\\mathbf {G})$ .", "The corrected overlap weight $o^{\\prime }$ is a kind of local density measure, but it is primarily a substitutability measure.", "To get a better local density measure we have to consider, besides triangles, also quadrilaterals (4-cycles)." ], [ "US Airports 1997 links with the largest corrected overlap weight", "For the US Airports 1997 network we get $\\mu = 80$ .", "For the corrected overlap weight the edge cut at level 0.5 is presented in Figure REF .", "Six links with the largest triangular weights are given in Table REF .", "Figure: US Airports links – $o^{\\prime }(\\mbox{WB Hartsfield Atlanta, Charlotte/Douglas Intl}) = 0.7308$", "Table: Largest triangular weights in US Airports 1997 network", "In Figure REF all the neighbors of the end nodes WB Hartsfield Atlanta and Charlotte/Douglas Intl of the link with the largest corrected overlap weight value are presented.", "They have 76 common (triangular) neighbors.", "The node WB Hartsfield Atlanta has 11 and the node Charlotte/Douglas Intl has 25 additional neighbors.", "Note (see Table REF ) that there are some links with a higher triangular weight, but also with a much higher number of additional neighbors – and therefore with smaller corrected overlap weights."
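The corrected weight $o^{\prime}(e) = t(e)/(\mu + M(e) - t(e))$ differs from $o(e)$ only in its denominator, so it can be sketched alongside the graph-wide constant $\mu$; the toy graph below is hypothetical, chosen so that $o^{\prime}$ reaches its maximum on a genuinely dense edge:

```python
# Sketch of the corrected overlap weight o'(e) = t(e) / (mu + M(e) - t(e)),
# where mu = max_e t(e) and M(e) = max(deg(u), deg(v)) - 1.
# Graph encoding (dict of neighbor sets) is an assumption.

def t_edge(adj, u, v):
    return len((adj[u] - {v}) & (adj[v] - {u}))   # t(e)

def corrected_overlap(adj):
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    mu = max(t_edge(adj, *e) for e in edges)      # mu: max triangles on a single edge
    out = {}
    for e in edges:
        u, v = tuple(e)
        t = t_edge(adj, u, v)
        M = max(len(adj[u]), len(adj[v])) - 1
        out[tuple(sorted(e))] = t / (mu + M - t) if mu + M - t > 0 else 0.0
    return out

# K4 on {1,2,3,4} with a pendant node 5 attached to 4; here mu = 2.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
w = corrected_overlap(adj)
print(max(w.values()))   # bounded by 1, by property ld2
```

The edge (1:2) attains $o^{\prime}=1$ because $\mu = M(e) = t(e) = 2$, exactly the equality case derived above.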
], [ "Comparisons", "In Figure REF the set $\\lbrace (o(e), o^{\\prime }(e)) : e \\in {\\cal E}\\rbrace $ is displayed for the US Airports 1997 network.", "For most edges it holds that $ o^{\\prime }(e) \\le o(e)$ .", "It is easy to see that $ o(e) < o^{\\prime }(e) \\Leftrightarrow \\mu < m(e)$ .", "Edges with the overlap value $o(e) > 0.8$ have the corrected overlap weight $o^{\\prime }(e) < 0.2$ .", "Figure: Comparison (overlap, corrected overlap)", "In Figure REF the sets $\\lbrace (m(e), o(e)) : e \\in {\\cal E}\\rbrace $ and $\\lbrace (m(e), o^{\\prime }(e)) : e \\in {\\cal E}\\rbrace $ are displayed for the US Airports 1997 network.", "As $m(e)$ increases, the corresponding overlap weight $o(e)$ decreases, while the corresponding corrected overlap weight $o^{\\prime }(e)$ increases.", "We can observe similar tendencies if we compare both weights with respect to the number of triangles $t(e)$ (see Figure REF ).", "Figure: Comparison – $\\min \\deg (e)$", "Figure: Comparison – # of triangles" ], [ "Clustering coefficient", "For a node $u \\in {\\cal V}$ in an undirected simple graph $\\mathbf {G}= ({\\cal V}, {\\cal E})$ its (local) clustering coefficient [12] measures the local density in the node $u$ and is defined as the proportion of the number of existing edges between $u$ 's neighbors to the number of all possible edges between $u$ 's neighbors $ cc(u) = \\frac{|{\\cal E}(N(u))|}{|{\\cal E}(K_{\\deg (u)})|} = \\frac{2\\cdot E(u)}{\\deg (u)\\cdot (\\deg (u)-1)}, \\quad \\deg (u)>1 $ where $E(u) = |{\\cal E}(N(u))|$ .", "If $\\deg (u) \\le 1$ then $cc(u) = 0$ .", "It is easy to see that $ E(u) = \\frac{1}{2} \\sum _{e \\in S(u)} t(e)$ where $S(u)=\\lbrace e(u:v) : e \\in {\\cal E}\\rbrace $ is the star in the node $u$ .", "It holds that $0 \\le cc(u) \\le 1$ ; $cc(u) = 1$ exactly when ${\\cal E}(N(u))$ is isomorphic to $K_{\\deg (u)}$ – a complete graph on $\\deg (u)$ nodes.", "Therefore it seems that the clustering coefficient could be used to identify nodes with 
the densest neighborhoods.", "The notion of clustering coefficient can be extended also to simple directed graphs (with loops)." ], [ "US Airports with the largest clustering coefficient", "Let us also apply the clustering coefficient to the US Airports 1997 network.", "Table: US Airports 1997 with clustering coefficient $= 1$", "In Table REF the airports with the clustering coefficient equal to 1 and the degree at least 4 are listed.", "There are 28 additional such airports with degree 3, and 38 with degree 2.", "Again we see that the clustering coefficient attains its largest value in nodes with relatively small degree.", "The probability that we get a complete subgraph on $N(u)$ decreases very fast as $\\deg (u)$ increases.", "The clustering coefficient does not satisfy the condition ld3." ], [ "Corrected clustering coefficient", "To get a corrected version of the clustering coefficient we proposed in Pajek [3] to replace $\\deg (u)$ in the denominator with $\\Delta = \\max _{v \\in {\\cal V}} \\deg (v)$ .", "In this paper we propose another solution – we replace $\\deg (u)-1$ with $\\mu $ : $ cc^{\\prime }(u) = \\frac{2\\cdot E(u)}{\\mu \\cdot \\deg (u)}, \\quad \\deg (u) > 0 $ If $\\deg (u)=0$ then $cc^{\\prime }(u)=0$ .", "Note that if $\\Delta > 0$ then $\\mu < \\Delta $ .", "To verify the property ld1 we add to $\\mathbf {G}(u)$ a new edge $f$ with its end nodes in $\\mathbf {G}(u)$ .", "Then $E^{\\prime }(u) = E(u)+1$ and $\\deg ^{\\prime }(u) = \\deg (u)$ .", "Therefore $ cc^{\\prime }(u,\\mathbf {G}\\cup f) = \\frac{2\\cdot E^{\\prime }(u)}{\\mu \\cdot \\deg ^{\\prime }(u)} = \\frac{2\\cdot ( E(u)+1)}{\\mu \\cdot \\deg (u)} > cc^{\\prime }(u,\\mathbf {G})$ To show the property ld2, $0 \\le cc^{\\prime }(u) \\le 1$ , we have to consider two cases: a.", "$\\deg (u)\\ge \\mu $ : then for $v \\in N(u)$ we have $\\deg _{N(u)}(v) \\le \\mu $ and therefore $ 2\\cdot E(u) =\\sum _{v \\in N(u)} \\deg _{N(u)}(v) \\le \\sum _{v \\in N(u)} \\mu = \\mu \\cdot 
\\deg (u) $ b.", "$\\deg (u) < \\mu $ : then $\\deg (u)-1 \\le \\mu $ and therefore $ 2\\cdot E(u) \\le \\deg (u)\\cdot (\\deg (u)-1) \\le \\mu \\cdot \\deg (u) $ For the property ld3, the value $cc^{\\prime }(u)=1$ is attained in case a on a $\\mu $ -core, and in case b on $K_{\\mu +1}$ ." ], [ "US Airports nodes with the largest corrected clustering coefficient", "In Table REF the US Airports with the largest corrected clustering coefficient are listed.", "The largest value 0.3739 is attained for the Cleveland-Hopkins Intl airport.", "In Figure REF the adjacency matrix of the subnetwork on its 45 neighbors is presented.", "The subnetwork is relatively complete.", "The relatively small value of the corrected clustering coefficient is due to the relatively small degree $\\deg = 45$ with respect to $\\mu = 80$ .", "Figure: Links among Cleveland-Hopkins Intl neighbors" ], [ "Comparisons", "In Figure REF the set $\\lbrace (cc(u), cc^{\\prime }(u)) : u \\in {\\cal V}\\rbrace $ is displayed for the US Airports 1997 network.", "The correlation between the two coefficients is very small.", "An important observation is that nodes with the largest value of the clustering coefficient have relatively small values of the corrected clustering coefficient.", "We also see that the number of edges in a node's neighborhood is almost functionally dependent on its degree.", "Figure: Comparison – ordinary and corrected clustering coefficients; degrees and number of edges", "Figure: Comparison – degrees", "From Figure REF we see that the clustering coefficient decreases with increasing degree.", "Nodes with large degree have small values of the clustering coefficient.", "The values of the corrected clustering coefficient are large for nodes of large degree."
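The ranking reversal between $cc$ and $cc^{\prime}$ can be demonstrated on a tiny example (the toy graph below is hypothetical, not the airports network): a plain clustering coefficient favors a small isolated triangle, while the corrected version favors a hub with a dense neighborhood.

```python
# Sketch contrasting cc(u) = 2E(u)/(deg(u)(deg(u)-1)) with the proposed
# correction cc'(u) = 2E(u)/(mu*deg(u)). Graph encoding is an assumption.

def E(adj, u):
    """E(u): number of edges among the neighbors of u."""
    N = adj[u]
    return sum(len(adj[v] & N) for v in N) // 2

def cc(adj, u):
    d = len(adj[u])
    return 2 * E(adj, u) / (d * (d - 1)) if d > 1 else 0.0

def cc_corrected(adj, u, mu):
    d = len(adj[u])
    return 2 * E(adj, u) / (mu * d) if d > 0 else 0.0

def mu_of(adj):
    """mu: maximum number of triangles on a single edge."""
    return max(len((adj[u] - {v}) & (adj[v] - {u}))
               for u in adj for v in adj[u])

# Hub 1 with a fairly dense neighborhood, plus a separate triangle {6,7,8}.
adj = {1: {2, 3, 4, 5}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {1, 4},
       6: {7, 8}, 7: {6, 8}, 8: {6, 7}}
mu = mu_of(adj)                                             # mu = 2 here
print(cc(adj, 6), cc(adj, 1))                               # 1.0 0.5
print(cc_corrected(adj, 6, mu), cc_corrected(adj, 1, mu))   # 0.5 0.75
```

The small triangle wins under $cc$ but the hub wins under $cc^{\prime}$, mirroring the comparison plots described above.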
], [ "Conclusions", "In this paper we showed that two network measures, the overlap weight and the clustering coefficient, are not suitable for the data analytic task of determining important elements in a given network.", "We proposed corrected versions of these two measures that give the expected results.", "Because $\\mu \\le \\Delta $ , we can replace $\\mu $ with $\\Delta $ in the corrected measures.", "The advantage is that $\\Delta $ is easier to compute; but the corresponding corrected index is less `sensitive'.", "An interesting task for future research is a comparison of the proposed measures with measures from graph drawing [5], [6], [7]." ], [ "Acknowledgments", "The computations were done combining Pajek [3] with short programs in Python and R [2].", "This work is supported in part by the Slovenian Research Agency (research program P1-0294 and research projects J1-9187, and J7-8279) and by the Russian Academic Excellence Project '5-100'.", "The paper is a detailed and extended version of the talk presented at the CMStatistics (ERCIM) 2015 Conference.", "The author's attendance at the conference was partially supported by the COST Action IC1408 – CRoNoS." ] ]
1906.04581
[ [ "Extremal problems for convex geometric hypergraphs and ordered hypergraphs" ], [ "Abstract An ordered hypergraph is a hypergraph whose vertex set is linearly ordered, and a convex geometric hypergraph is a hypergraph whose vertex set is cyclically ordered.", "Extremal problems for ordered and convex geometric graphs have a rich history with applications to a variety of problems in combinatorial geometry.", "In this paper, we consider analogous extremal problems for uniform hypergraphs, and determine the order of magnitude of the extremal function for various ordered and convex geometric paths and matchings.", "Our results generalize earlier works of Braß-Károlyi-Valtr, Capoyleas-Pach and Aronov-Dujmovič-Morin-Ooms-da Silveira.", "We also provide a new generalization of the Erdős-Ko-Rado theorem in the ordered setting." ], [ "Introduction", "An ordered graph is a graph together with a linear ordering of its vertex set.", "Extremal problems for ordered graphs have a long history, and were studied extensively in papers by Pach and Tardos [17], Tardos [21] and Korándi, Tardos, Tomon and Weidert [14].", "Let ${\\rm {ex}}_{\\rightarrow }(n,F)$ denote the maximum number of edges in an $n$ -vertex ordered graph that does not contain the ordered graph $F$ .", "This extremal problem is phrased in [14] in terms of pattern-avoiding matrices.", "Marcus and Tardos [16] showed that if the forbidden pattern is a permutation matrix, then the answer is in fact linear in $n$ , and thereby solved the Stanley-Wilf Conjecture, as well as a number of other well-known open problems.", "A central open problem in the area was posed by Pach and Tardos [17], in the form of the following conjecture.", "An ordered graph has interval chromatic number two if it is bipartite with bipartition $A \\cup B$ and $A$ precedes $B$ in the ordering of the vertices.", "Conjecture A Let $F$ be an ordered acyclic graph with interval chromatic number two.", "Then ${\\rm 
{ex}}_{\\rightarrow }(n,F) = O(n\\cdot \\mbox{\\rm polylog} \\,n)$ .", "In support of Conjecture A, Korándi, Tardos, Tomon and Weidert [14] proved for a wide class of forests $F$ that ${\\rm {ex}}_{\\rightarrow }(n,F) = n^{1 + o(1)}$ .", "This conjecture is related to a question of Braß in the context of convex geometric graphs.", "A convex geometric (cg) graph is a graph together with a cyclic ordering of its vertex set.", "Given a convex geometric graph $F$ , let ${\\rm {ex}}_{\\circlearrowright }(n,F)$ denote the maximum number of edges in an $n$ -vertex convex geometric graph that does not contain $F$ .", "Extremal problems for geometric graphs have a fairly long history, going back to theorems on disjoint line segments [13], [20], [15], and more recent results on crossing matchings [3], [5].", "Motivated by the famous Erdős unit distance problem, the first author [8] showed that the maximum number of unit distances between points of a convex $n$ -gon is $O(n\\log n)$ .", "In the vein of Conjecture REF , Braß [2] asked for the determination of all acyclic graphs $F$ such that ${\\rm {ex}}_{\\circlearrowright }(n,F)$ is linear in $n$ , and this problem remains open (recently it was solved for trees [10]).", "In this paper, we study extremal problems for ordered and convex geometric uniform hypergraphs.", "An ordered (convex geometric) $r$ -graph is an $r$ -uniform hypergraph whose vertex set is linearly (cyclically) ordered.", "Although the theory of cg (hyper)graphs can be studied independently of any geometric context, extremal problems for both cg graphs and hypergraphs are frequently motivated by problems in discrete geometry [4], [18], [2], [1].", "Instances of the extremal problem for two disjoint triangles in the convex geometric setting are connected to the well-known triangle-removal problem [12].", "In [9] we show that certain types of paths in the convex geometric setting give the current best bounds for the notorious extremal problem for tight paths 
in uniform hypergraphs.", "One of the goals of this paper is to study extremal problems simultaneously in the ordered and cg settings and compare and contrast their behaviors." ], [ "Results", "We denote by ${\\rm {ex}}_{\\rightarrow }(n,F)$ (${\\rm {ex}}_{\\circlearrowright }(n,F)$ ) the maximum number of edges in an $n$ -vertex ordered (cg) $r$ -graph that does not contain $F$ , and let ${\\rm {ex}}(n,F)$ denote the usual (unordered) extremal function.", "Let $P$ be the linearly ordered path with three edges with ordered vertex set $1<2<3<4$ and edge set $\\lbrace 13, 32, 24\\rbrace $ .", "In the convex geometric setting we use $P$ to denote the unique cg graph isomorphic to the path with three edges where the edges 13 and 24 cross.", "We then have $ {\\rm {ex}}_{\\rightarrow }(n, P) = 2n - 3={\\rm {ex}}_{\\circlearrowright }(n,P) \\qquad \\hbox{ for $n \\ge 3$ }$ where the former is a folklore result and the latter is due to Braß, Károlyi and Valtr [3].", "To our knowledge, (REF ) are the only known nontrivial exact results for connected ordered or convex geometric graphs that have crossings in their embedding.", "These two simple exact results therefore provide a good launchpad for further investigation in the hypergraph case.", "This is the direction we take, extending (REF ) to longer paths and to the hypergraph setting.", "In the process, we will also discover some subtle differences between the ordered and convex geometric cases which are not visible in (REF ).", "There are many ways to extend the definition of a path to hypergraphs and we choose one of the most natural ones, namely tight paths.", "There are also many possibilities for the ordering of the vertices of the path and again we make a rather natural choice, namely crossing paths which are defined below (a similar notion was studied by Capoyleas and Pach [5] who considered the corresponding question for matchings in a cg graph).", "A tight $k$ -path is an $r$ -graph whose edges have the form 
$\\lbrace v_i,v_{i + 1},\\dots ,v_{i + r - 1}\\rbrace $ for $0 \\le i < k$ .", "Typically, we list the vertices $v_0v_1\\dots v_{k+r-2}$ in a tight $k$ -path.", "We let $<$ denote the underlying ordering of the vertices of an ordered hypergraph.", "In the case of convex geometric hypergraphs, we slightly abuse the same notation so that $u_1<u_2<\\cdots < u_{\\ell }$ is shorthand for $u_1<u_2<\\cdots < u_{\\ell } < u_1$ which means that moving clockwise in the cyclic ordering of the vertices from $u_1$ we first encounter $u_2$ , then $u_3$ , and so on until we finally encounter $u_{\\ell }$ and then $u_1$ again.", "In other words, $u_1, \\ldots , u_{\\ell }$ is a cyclic interval where the vertices are listed in clockwise order.", "When needed, we use the notation $\\Omega _n$ to denote the vertex set of a generic $n$ -vertex convex geometric hypergraph, with the clockwise ordering of the vertices.", "Definition 1 (Crossing paths in ordered and convex geometric hypergraphs) An $r$ -uniform crossing $k$ -path $P_k^r$ in an ordered or convex geometric hypergraph is a tight $k$ -path $v_0v_1\\dots v_{r+k-2}$ with the ordering Table: NO_CAPTION", "An ordered $P_5^2$ (Figure 1) and a convex geometric $P_7^2$ and $P_5^3$ (Figure 2) are shown below.", "Figure: Ordered $P_5^2$", "Figure: Convex geometric $P_7^2$ and $P_5^3$", "Our first result generalizes ${\\rm {ex}}_{\\rightarrow }(n, P_3^2) = 2n - 3$ to larger $k$ and $r$ .", "Theorem 2.1 Fix $k \\ge 1$ , $r \\ge 2$ and let $n\\ge r+k$ .", "Then ${\\rm {ex}}_{\\rightarrow }(n, P^r_k)= {\\left\\lbrace \\begin{array}{ll}{n \\atopwithdelims ()r} - {n-k+1 \\atopwithdelims ()r} &\\mbox{ for } k \\le r + 1 \\\\\\Theta (n^{r - 1}\\log n) & \\mbox{ for }k \\ge r + 2.\\end{array}\\right.", "}$ Our second theorem generalizes the Braß, Károlyi and Valtr [3] result ${\\rm {ex}}_{\\circlearrowright }(n, P_3^2) = 2n - 3$ to larger $k$ and $r$ .", "Theorem 2.2 Fix $k \\ge 1$ , $r \\ge 2$ and let $n \\ge 2r+1$ .", "Then $ {\\rm 
{ex}}_{\\circlearrowright }(n, P^r_k) = {\\left\\lbrace \\begin{array}{ll}\\Theta (n^{r - 1}) & \\mbox{ for } 3 \\le k \\le 2r-1 \\\\{n \\atopwithdelims ()r} - {n - r \\atopwithdelims ()r} & \\mbox{ for }k = r + 1 \\\\\\Theta (n^{r - 1}\\log n) & \\mbox{ for }k \\ge 2r.\\end{array}\\right.", "}$ For short paths we have the following better bounds, which improve the previous results on this problem by Aronov et al. [1] when $k=2$.", "Theorem 2.3 For fixed $2\\le k \\le r$ , $(1+o(1))\\frac{k-1}{3 \\ln 2r}{n\\atopwithdelims ()r-1}<{\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\le \\dfrac{(k-1)(r-1)}{r}\\dbinom{n}{r-1}.$ Furthermore, when $k \\in \\lbrace 2,r\\rbrace $ , the following sharper bounds hold: $ {\\rm {ex}}_{\\circlearrowright }(n,P_2^r) &\\le & \\frac{1}{2}{n \\atopwithdelims ()r-1} \\\\ {\\rm {ex}}_{\\circlearrowright }(n, P_r^r) &\\ge & (1-o(1))(r-2){n \\atopwithdelims ()r-1}.$ The lower bound in () is close to the upper bound in (REF ), since the upper bound is $(r - 2 + 1/r){n \\atopwithdelims ()r - 1}$ .", "We remark that it remains open to prove or disprove that for every $r \\ge 2$ , there exists $c_r$ such that $c_r \\rightarrow 0$ as $r \\rightarrow \\infty $ and ${\\rm {ex}}_{\\circlearrowright }(n, P_2^r) \\le c_r {n \\atopwithdelims ()r-1} + o(n^{r-1}).$ Theorems REF and REF reveal a discrepancy between the ordered setting and the convex geometric setting: in the convex geometric setting, crossing paths of length up to $2r - 1$ have extremal function of order $n^{r - 1}$ , whereas this phenomenon only occurs for crossing paths of length up to $r + 1$ in the ordered setting.", "In fact, we know that ${\\rm {ex}}_{\\circlearrowright }(n, P^r_k)={\\rm {ex}}_{\\rightarrow }(n, P^r_k)$ iff $k \\in \\lbrace 1, r+1\\rbrace $ ."
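As a quick numeric sanity check (not part of the paper's proofs), the closed form in Theorem 2.1 for $k \le r+1$ reduces at $r=2$, $k=3$ to the classical value ${\rm ex}_{\rightarrow}(n, P_3^2) = 2n-3$ from (1):

```python
# The k <= r+1 case of Theorem 2.1: ex_->(n, P_k^r) = C(n, r) - C(n-k+1, r).
from math import comb

def ex_ordered_crossing_path(n, r, k):
    """Exact value of ex_->(n, P_k^r), valid for k <= r + 1 and n >= r + k."""
    assert k <= r + 1
    return comb(n, r) - comb(n - k + 1, r)

# r = 2, k = 3: C(n,2) - C(n-2,2) = 2n - 3, matching (1).
for n in range(5, 12):
    assert ex_ordered_crossing_path(n, 2, 3) == 2 * n - 3

print(ex_ordered_crossing_path(10, 3, 4))  # r=3, k=r+1: C(10,3) - C(7,3) = 85
```

Note that for $k = r+1$ the formula becomes $\binom{n}{r} - \binom{n-r}{r}$, consistent with the $k=r+1$ case of Theorem 2.2.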
], [ "Crossing matchings", "Let $M_k^2$ denote the cg graph consisting of $k$ pairwise crossing line segments.", "In other words, there is a labelling of the vertices such that the edges of the matching are $v_i v_{k + i}$ for $1 \\le i \\le k$ , and $v_1 < v_2 < \\dots < v_{2k}$ .", "Capoyleas and Pach [5] proved the following theorem which extended a result of Ruzsa (he proved the case $k=3$ ) and settled a question of Gärtner and a conjecture of Perles: Theorem 2.4 (Capoyleas-Pach [5]) For all $n \\ge 2k - 1$ , ${\\rm {ex}}_{\\circlearrowright }(n,M_k^2) = 2(k-1)n - {2k - 1 \\atopwithdelims ()2}$ .", "As mentioned earlier, a related open problem of Braß [2] is to determine all acyclic graphs $F$ such that ${\\rm {ex}}_{\\circlearrowright }(n,F) = O(n)$ .", "For $r \\ge 2$ , an $r$ -uniform crossing $k$ -matching $M_k^r$ has vertex set $v_1,v_2,\\dots ,v_{rk}$ on a convex $n$ -gon in clockwise order and consists of the edges $\\lbrace v_i,v_{i+k},\\dots ,v_{i+(r-1)k}\\rbrace $ for $1 \\le i \\le k$ .", "Note that crossing paths have the property that if we take every $r$ th edge of the path, we obtain a crossing matching.", "One can similarly define a crossing $k$ -matching $M_k^r$ in ordered $r$ -graphs: it has vertex set $v_1,v_2,\\cdots ,v_{rk}$ with $v_1<v_2<\\ldots <v_{rk}$ and consists of the edges $\\lbrace v_i,v_{i+k},\\dots ,v_{i+(r-1)k}\\rbrace $ for $1 \\le i \\le k$ .", "However, if we consider a cg $r$ -graph $G_1$ and an ordered $r$ -graph $G_2$ with the same set of vertices and the same set of edges (only the ordering in $G_1$ is circular and in $G_2$ is linear), then with our definitions a set $F$ of edges is a crossing matching in $G_1$ if and only if it is a crossing matching in $G_2$ .", "It follows that ${\\rm {ex}}_{\\circlearrowright }(n,M_k^r)={\\rm {ex}}_{\\rightarrow }(n,M_k^r) \\qquad \\hbox{ for all $k,r,n$}.$ Aronov, Dujmovič, Morin, Ooms and da Silveira [1] considered the case $k=2$ , $r=3$ and determined the order of magnitude in those 
cases; our result below provides better bounds.", "The $k=2$ case of Theorem REF could be viewed as an ordered version of the Erdős-Ko-Rado Theorem.", "Theorem 2.5 For $n>r>1$ , ${\\rm {ex}}_{{\\circlearrowright }}(n, M^r_2) = {n \\atopwithdelims ()r} - {n-r \\atopwithdelims ()r} $ and for fixed $k, r > 2$ , $(1-o(1)) (k-1)r {n \\atopwithdelims ()r - 1} \\le {\\rm {ex}}_{\\circlearrowright }(n,M_k^r) \\le 2(k - 1)(r-1){n \\atopwithdelims ()r - 1}.$ Note that, unlike the results on the paths, there are no extra $\\log n$ factors in the formulas for crossing matchings.", "We were unable to determine the asymptotic behavior of ${\\rm {ex}}_{\\circlearrowright }(n,M_k^r)$ for any pair $(k,r)$ with $k,r > 2$ ." ], [ "Upper bound for $k \\le r + 1$", "Observe that ${\\rm {ex}}_{\\rightarrow }(n,P^1_2)=1$ for all $n \\ge 1$ .", "We then have the following recurrence: Proposition 3.1 Let $2\\le k\\le r+1$ and $n\\ge r+k$ .", "Then ${\\rm {ex}}_{\\rightarrow }(n, P^r_k)\\le {n-2\\atopwithdelims ()r-2}+{\\rm {ex}}_{\\rightarrow }(n-2, P^{r-1}_{k-1})+{\\rm {ex}}_{\\rightarrow }(n-1, P^r_k).$ Let $G$ be an $n$ -vertex ordered $r$ -graph not containing $P^r_k$ with $e(G)={\\rm {ex}}_{\\rightarrow }(n, P^r_k)$ .", "We may assume $V(G)=[n]$ with the natural ordering.", "Let $G_1=\\lbrace e\\in G: \\lbrace 1,2\\rbrace \\subset e\\rbrace $ and $G_2=\\lbrace e\\in G: 1\\in e, 2\\notin e, e-\\lbrace 1\\rbrace \\cup \\lbrace 2\\rbrace \\in G\\rbrace $ .", "Let $G_3$ be obtained from $G-E(G_1)-E(G_2)$ by gluing vertex 1 with vertex 2 into a new vertex $2^{\\prime }$ .", "Since we have deleted the edges of $G_1$ , our $G_3$ is an $r$ -graph, and since we have deleted the edges of $G_2$ , $G_3$ has no multiple edges.", "Thus $e(G)=e(G_1)+e(G_2)+e(G_3)$ .", "We view $G_3$ as an ordered $r$ -graph with vertex set $\\lbrace 2^{\\prime },3,\\ldots ,n\\rbrace $ .", "If $G_3$ contains a crossing ordered path $P$ with edges $e^{\\prime }_1,e^{\\prime }_2,\\ldots ,e^{\\prime }_k$ , then only 
$e^{\\prime }_1$ may contain $2^{\\prime }$ , and all other edges are edges of $G$ .", "Thus either $P$ itself is in $G$ or the path obtained from $P$ by replacing $e^{\\prime }_1$ with $e^{\\prime }_1-\\lbrace 2^{\\prime }\\rbrace +\\lbrace 1\\rbrace $ or with $e^{\\prime }_1-\\lbrace 2^{\\prime }\\rbrace +\\lbrace 2\\rbrace $ is in $G$ , a contradiction.", "Thus $G_3$ contains no $P^r_k$ and hence $e(G_3)\\le {\\rm {ex}}_{\\rightarrow }(n-1, P^r_k).$ By definition, $e(G_1)\\le {n-2\\atopwithdelims ()r-2}$ .", "We can construct an ordered $(r-1)$ -graph $H_2$ with vertex set $\\lbrace 3,4,\\ldots ,n\\rbrace $ from $G_2$ by deleting from each edge vertex 1.", "If $H_2$ contains a crossing ordered path $P^{\\prime }$ with edges $e^{\\prime \\prime }_1,e^{\\prime \\prime }_2,\\ldots ,e^{\\prime \\prime }_{k-1}$ , then the set of edges $\\lbrace e_1,\\ldots ,e_k\\rbrace $ where $e_1=e^{\\prime \\prime }_1+\\lbrace 1\\rbrace $ and $e_i=e^{\\prime \\prime }_{i-1}+\\lbrace 2\\rbrace $ for $i=2,\\ldots ,k$ forms a $P^r_k$ in $G$ , a contradiction.", "Summarizing, we get ${\\rm {ex}}_{\\rightarrow }(n, P^r_k)=e(G) &=& e(G_1)+e(G_2)+e(G_3) \\\\&\\le & {n-2\\atopwithdelims ()r-2}+{\\rm {ex}}_{\\rightarrow }(n-2, P^{r-1}_{k-1})+ {\\rm {ex}}_{\\rightarrow }(n-1, P^r_k),$ as claimed.", "$\\Box $ We are now ready to prove the upper bound in Theorem REF for $k \\le r + 1$ : We are to show that ${\\rm {ex}}_{\\rightarrow }(n, P^r_k) \\le {n \\atopwithdelims ()r} - {n - k + 1 \\atopwithdelims ()r}$ .", "We use induction on $k+n$ .", "Since $P^r_1$ is simply an edge, ${\\rm {ex}}_{\\rightarrow }(n, P^r_1)=0$ for any $n$ and $r$ , and the theorem holds for $k=1$ .", "Suppose now the upper bound in the theorem holds for all $(k^{\\prime },n^{\\prime },r^{\\prime })$ with $k^{\\prime }+n^{\\prime }<k+n$ and we want to prove it for $(k,n,r)$ .", "By the previous paragraph, it is enough to consider the case $k\\ge 2$ .", "Then by Proposition REF and the induction assumption, ${\\rm 
{ex}}_{\\rightarrow }(n, P^r_k) &\\le & {n-2\\atopwithdelims ()r-2}+\\left[{n-2 \\atopwithdelims ()r-1} - {n-k \\atopwithdelims ()r-1}\\right]+\\left[ {n-1 \\atopwithdelims ()r} - {n-k \\atopwithdelims ()r}\\right] \\\\&=&\\left[{n-2\\atopwithdelims ()r-2} +{n-2 \\atopwithdelims ()r-1}+{n-1 \\atopwithdelims ()r}\\right] -\\left[ {n-k \\atopwithdelims ()r}+ {n-k \\atopwithdelims ()r-1}\\right] \\\\&=& {n \\atopwithdelims ()r} - {n-k+1 \\atopwithdelims ()r},$ as required.", "This proves the upper bound in Theorem REF for $k \\le r + 1$ .", "$\\Box $" ], [ "Lower bound for $k \\le r + 1$", "For the lower bound in Theorem REF for $k \\le r + 1$ , we provide the following construction.", "For $1\\le k\\le r$ , let $G(n,r,k)$ be the family of $r$ -tuples $(a_1,\\ldots ,a_r)$ of positive integers such that Table: NO_CAPTIONAlso, let $G(n,r,r+1)=G(n,r,r)\\cup \\lbrace (a_1,\\ldots ,a_r): a_1<a_2<\\ldots <a_r=n\\rbrace $ .", "Suppose $G(n,r,k)$ has an ordered crossing $P_k^r$ with edges $e_1,\\ldots ,e_k$ .", "Let $e_1=(a_1,\\ldots ,a_r)$ where $1\\le a_1<a_2<\\ldots <a_r\\le n$ .", "By the definition of a crossing ordered path, for each $2\\le j\\le k$ , $e_j$ has the form $\\mbox{\\em $e_j=(a_{j,1},\\ldots ,a_{j,r})$ where $a_i< a_{j,i}<a_{i+1}$ for $1\\le i\\le j-1$ and $a_{j,i}=a_{i}$ for $j\\le i\\le r$.", "}$ By the definition of $G(n,r,k)$ , either there is $1\\le i\\le k-1$ such that $a_{i+1}=a_i+1$ or $k=r+1$ and $a_r=n$ .", "In the first case, we get a contradiction with (REF ) for $j=i+1$ .", "In the second case, we get a contradiction with (REF ) for $j=r+1$ .", "In order to calculate $|G(n,r,k)|$ , consider the following procedure $\\Pi (n,r,k)$ of generating all $r$ -tuples of elements of $[n]$ not in $G(n,r,k)$ : take an $r$ -tuple $(a_1,\\ldots ,a_r)$ of positive integers such that $1\\le a_1<a_2<\\ldots <a_r\\le n-k+1$ and then increase $a_j$ by $j-1$ if $1\\le j\\le k$ and by $k-1$ if $k\\le j\\le r$ .", "By definition, the number of outcomes of this 
procedure is ${n-k+1\\atopwithdelims ()r}$ .", "Also $\\Pi (n,r,k)$ never generates a member of $G(n,r,k)$ and generates each other $r$ -subset of $[n]$ exactly once.", "$\\Box $" ], [ "Upper bound for $k \\ge r + 2$", "An ordered $r$ -graph has interval chromatic number $r$ if it is $r$ -partite with $r$ -partition $A_1, \\ldots , A_r$ and $A_i$ precedes $A_{i+1}$ in the ordering of the vertices for all $i\\in [r-1]$ .", "Let $z_{\\rightarrow }(n,F)$ denote the maximum number of edges in an $n$ -vertex ordered $r$ -graph of interval chromatic number $r$ that does not contain the ordered graph $F$ .", "Pach and Tardos [17] showed that every $n$ -vertex ordered graph may be written as the union of at most $\\lceil \\log n\\rceil $ edge disjoint subgraphs each of whose components is a graph of interval chromatic number two, and deduced that ${\\rm {ex}}_{\\rightarrow }(n,F) = O(z_{\\rightarrow }(n,F)\\log n)$ for every ordered graph $F$ .", "They also observed that the log factor is not present when $z_{\\rightarrow }(n,F)=\\Omega (n^c)$ and $c>1$ .", "Unsurprisingly, this phenomenon also holds for ordered $r$ -graphs when $r>2$ .", "We will use the following result which is a rephrasing of [11], Theorem 1.1.", "Theorem 3.1 ([11], Theorem 1.1) Fix $r \\ge c\\ge r-1\\ge 1$ and an ordered $r$ -graph $F$ with $z_{\\rightarrow }(n, F)=\\Omega (n^{c})$ .", "Then $ {\\rm {ex}}_{\\rightarrow }(n, F) = \\left\\lbrace \\begin{array}{ll}O(z_{\\rightarrow }(n, F) \\log n) & \\mbox{ if } c =r-1 \\\\O(z_{\\rightarrow }(n, F))& \\mbox{ if }c >r-1.\\end{array}\\right.$ By Theorem REF , the following claim yields ${\\rm {ex}}_{\\rightarrow }(n, P_k^r) = O(n^{r-1}\\log n)$ for all $k \\ge 2$ , i.e., the upper bound in Theorem REF for $k \\ge r + 2$ .", "Proposition 3.2 For $k \\ge 1$ , $r \\ge 2$ , $z_{\\rightarrow }(n,P_k^r) = O(n^{r - 1})$ .", "Proof.", "We prove a stronger statement by induction on $k$ : if $H$ is an ordered $n$ -vertex $r$ -graph of interval chromatic number $r$ 
with $r$ -partition $X_1,X_2,\\dots ,X_r$ of sizes $n_1,n_2,\\dots ,n_r$ respectively, and $H$ has no crossing $k$ -path, then $e(H) \\le k P$ where $P=\\prod _{i = 1}^r n_i \\cdot \\sum _{i=1}^r \\frac{1}{n_i}.$ The base case $k=1$ is trivial.", "For the induction step, assume the result holds for paths of length at most $k-1$ , and suppose $e(H) > kP$ .", "For each $(r-1)$ -set $S$ of vertices mark the edge $S \\cup \\lbrace w\\rbrace $ where $w$ is maximum.", "Let $H^{\\prime }$ be the $r$ -graph of unmarked edges.", "Since we marked at most $P$ edges, $e(H^{\\prime }) > (k-1)P$ .", "By the induction assumption there exists a $P^r_{k-1} =v_1 v_2 \\ldots v_{k+r-2} \\subset H^{\\prime }$ and we can extend this to a $P^r_k$ in $H$ using the marked edge obtained from the $(r-1)$ -set $\\lbrace v_{k}, \\ldots , v_{k+r-2}\\rbrace $ .", "This proves the proposition.", "$\\Box $" ], [ "Lower bound for $k \\ge r + 2$", "We now turn to the lower bound in Theorem REF .", "Let $G(n,r,r+2)$ be the family of $r$ -tuples $(a_1,\\ldots ,a_r)$ of positive integers satisfying the conditions of the defining table [table omitted].", "The number of choices of $a_1\\le n/4$ is $n/4$ ; then the number of choices of $a_2$ is $\\log _2 (n/4)$ , and the number of choices of the remaining $(r-2)$ -tuple $(a_3,\\ldots ,a_r)$ is at least ${n/2\\atopwithdelims ()r-2}$ .", "Thus if $r\\ge 3$ and $n>20r$ , then $|G(n,r,r+2)|\\ge \\frac{n^{r-1}}{(r-2)!3^{r}}\\log _2 n.$ Suppose $G(n,r,r+2)$ contains a $P_{r+2}^r$ with vertex set $\\lbrace a_1, \\ldots , a_{2r+1}\\rbrace $ and edge set $\\lbrace a_i\\ldots a_{i+r-1}: 1 \\le i \\le r+2\\rbrace $ .", "By the definition of ordered path, the vertices are in the following order on $[n]$ : $a_1<a_{r+1}<a_{2r+1}<a_2<a_{r+2}<a_3<a_{r+3}<\\ldots <a_r<a_{2r}.$ Hence the 2nd, $(r+1)$ st and $(r+2)$ nd edges are $\\lbrace a_{r+1},a_2,a_3, \\ldots , a_{r}\\rbrace , \\qquad \\lbrace a_{r+1}, a_{r+2}, \\ldots , a_{2r}\\rbrace , \\qquad \\lbrace a_{2r+1},a_{r+2}, \\ldots , a_{2r}\\rbrace .$ The differences 
between the second and the first coordinates in these three vectors are $d_1=a_{2}-a_{r+1} , \\qquad d_2= a_{r+2}-a_{r+1}, \\qquad d_3= a_{r+2}-a_{2r+1}.$ By (REF ), we have $d_1,d_3<d_2<d_1+d_3$ , so it is impossible that all three differences $d_1, d_2, d_3$ are powers of two.", "This yields the lower bound in Theorem REF for $k\\ge r+2$ .", "$\\Box $" ], [ "Proof of Theorem ", "We begin with the upper bounds when $r+1< k \\le 2r-1$ .", "Definition 2 An ordered $r$ -graph $F$ is a split hypergraph if there is a partition of $V(F)$ into intervals $X_1<X_2<\\dots <X_{r - 1}$ and there exists $i \\in [r-1]$ such that every edge of $F$ has two vertices in $X_i$ and one vertex in every $X_j$ for $j \\ne i$ .", "Every $r$ -graph of interval chromatic number $r$ is a split hypergraph (but not vice versa).", "We write $e(H)$ for the number of edges in a hypergraph $H$ , $v(H) = \\bigl |\\bigcup _{e \\in H} e\\bigr |$ and $d(H) = e(H)/v(H)^{r - 1}$ .", "The function $d(H)$ could be viewed as a normalized average degree of $H$ .", "We require the following nontrivial result about split hypergraphs.", "Theorem 4.1 ([11], Theorem 1.2) For $r\\ge 3$ there exists $c=c_r>0$ such that every ordered $r$ -graph $H$ contains a split subgraph $G$ with $d(G) \\ge c\\, d(H)$ .", "Proposition 4.1 For $r\\ge 3$ there exists $C=C_r>0$ such that, if $r+1<k\\le 2r-1$ , then ${\\rm {ex}}_{\\circlearrowright }(n, P_k^r)\\le k C\\, n^{r-1}$ .", "Let $c=c_r$ be the constant from Theorem REF and let $C=1/c$ .", "Given a convex geometric $r$ -graph $H$ with $e(H) > k \\,C n^{r-1}$ , we view $H$ as a linearly ordered $r$ -graph (by “opening up\" the circular ordering between any two vertices) and apply Theorem REF to obtain a split subgraph $G \\subset H$ with $e(G) > k m^{r-1}$ , where $m=v(G)$ .", "Now, viewing $H$ once again as a convex geometric $r$ -graph, let $X_0 < X_1 < \\dots < X_{r-3} < X$ be cyclic intervals such that every edge of $G$ contains two vertices in $X$ and one vertex in 
each $X_i : 0 \\le i \\le r - 3$ .", "Our main assertion is the following: For $k \\in [2r-1]$ , $G$ contains a crossing $k$ -path $v_0 v_1 \\ldots v_{k+r-2}$ such that $\\bullet $ $v_i \\in X_{i \\bmod r}$ for $i\\not\\equiv -1, -2 \\mod {r}$ and $\\bullet $ $v_i \\in X$ for $i \\equiv -1, -2 \\mod {r}$ .", "To prove this assertion we proceed by induction on $k$ , where the base case $k = 1$ is trivial.", "For the induction step, suppose that $1\\le k \\le 2r-2$ , and we have proved the result for $k$ and we wish to prove it for $k+1$ .", "Suppose that $k \\equiv i \\not\\equiv 0, -1$ (mod $r$ ) where $0 \\le i<r$ .", "For each $f \\in \\partial G$ that has no vertex in $X_{i-1}$ , delete the edge $f \\cup \\lbrace v\\rbrace \\in G$ where $v$ is the largest vertex in $X_{i-1}$ in clockwise order.", "Let $G^{\\prime }$ be the subgraph that remains after deleting these edges.", "Then $e(G^{\\prime })\\ge e(G)-m^{r-1}>(k+1)m^{r-1}-m^{r-1}=km^{r-1},$ so by induction $G^{\\prime }$ contains a $P_{k}^r$ with vertices $v_0, v_1, \\ldots , v_{k-1},\\ldots , v_{k+r-2}$ , where $v_i \\in X_{i \\bmod r}$ for $i\\not\\equiv -1, -2$ (mod $r$ ) and $v_i \\in X$ for $i \\equiv -1, -2$ (mod $r$ ).", "Our goal is to add a new vertex $v$ to the end of the path where $v \\in X_{i-1}$ .", "Let $v=v_{k+r-1}$ be the vertex in $X_{i-1}$ for which the edge $e_k=v_k v_{k+1} \\ldots v_{k+r-1}$ was deleted in forming $G^{\\prime }$ .", "Note that $v$ exists as $v_{k-1} v_k \\ldots v_{k+r-2} \\in E(G)$ and so $v_k \\ldots v_{k+r-2} \\in \\partial G.$ Adding vertex $v$ and edge $e_k$ to our copy of $P_k^r$ yields a copy of $P_{k+1}^r$ as required.", "Next suppose that $i \\equiv 0,-1$ (mod $r$ ).", "Proceed exactly as before except we modify the definition of $G^{\\prime }$ slightly as follows: for every $f \\in \\partial G$ which has exactly one vertex in each $X_i$ and in $X$ , if $w$ is the vertex of $f$ in $X$ , then delete $f \\cup \\lbrace v\\rbrace \\in G$ where $v$ is the largest such vertex in $X$ satisfying $v<w$ .", "By induction, 
$G^{\\prime }$ contains a $P_{k}^r$ with vertices $v_0, v_1, \\ldots , v_{k-1},\\ldots , v_{k+r-2}$ , where $v_i \\in X_{i \\bmod r}$ for $i\\not\\equiv -1, -2$ (mod $r$ ) and $v_i \\in X$ for $i \\equiv -1, -2$ (mod $r$ ).", "Our goal is to add a new vertex $v$ to the end of the path where $v \\in X$ so we may assume that $k\\in \\lbrace r-1, r\\rbrace $ , and we are trying to find vertex $v$ which we will label as $v_{k+r-1} \\in \\lbrace v_{2r-2}, v_{2r-1}\\rbrace $ as above with $v \\in X$ .", "Note that we already have the two vertices $v_{r-2} < v_{r-1}$ in $X$ .", "So we either want to add $v_{2r-2}$ satisfying $v_{r-2} < v_{2r-2}< v_{r-1}$ or we want to add $v_{2r-1}$ satisfying $v_{r-2} < v_{2r-2}< v_{r-1} < v_{2r-1}$ .", "Suppose that $k=r-1$ so that we are in the first case.", "Since $v_{r-2} \\ldots v_{2r-3} \\in E(G^{\\prime })$ , the $(r-1)$ -set $f =v_{r-1} \\ldots v_{2r-3}$ has exactly one vertex $v_{r-1} \\in X$ .", "Since $f \\cup \\lbrace v_{r-2}\\rbrace = v_{r-2}v_{r-1} \\ldots v_{2r-3} \\in E(G^{\\prime })$ , we have $f \\in \\partial G$ and moreover $v_{r-2}$ was not deleted from $f \\cup \\lbrace v_{r-2}\\rbrace $ in forming $G^{\\prime }$ .", "Hence there is a vertex $v \\in X$ with $v_{r-2}<v<v_{r-1}$ such that the edge $f \\cup \\lbrace v\\rbrace = v_{r-1} \\ldots v_{2r-3}v \\in E(G)$ and the vertex $v$ and edge $f \\cup \\lbrace v\\rbrace $ can be used to extend the $P_k^r$ to a $P_{k+1}^r$ .", "For the case $k=r$ , we choose $v$ to be the largest vertex in $X$ in defining $G^{\\prime }$ and apply an identical argument to that when $i\\not\\equiv -1, -2$ (mod $r$ ).", "$\\Box $ Next we give lower bounds for $k \\ge 2r$ .", "Proposition 4.2 For $k \\ge 2r\\ge 4$ we have ${\\rm {ex}}_{\\circlearrowright }(n, P_k^r) =\\Omega (n^{r-1} \\log n).$ We take the same family $G(n,r,r+2)$ as used for ordered hypergraphs (see Section REF ), but with the cyclic ordering of the vertex set.", "When we have a $k$ -edge crossing path $P=w_1w_2\\ldots w_{r+k-1}$ 
, the vertex $w_1$ does not need to be the leftmost in the first edge $w_1\\ldots w_r$ , so the argument in Section REF does not go through for $k=r+2$ .", "In fact, $G(n,r,r+2)$ does contain $P_k^r$ for $k \\le 2r-1$ .", "However, suppose $G(n,r,r+2)$ has a crossing $2r$ -edge path $P=w_1\\ldots w_{3r-1}$ , and the $i$ th edge of the path is $A_i=w_iw_{i+1}\\ldots w_{i+r-1}$ .", "Suppose vertex $w_{r+j}$ is the leftmost in the set $\\lbrace w_{r},w_{r+1},\\ldots ,w_{2r-1}\\rbrace $ .", "Then writing the edges $A_{j+1}, A_{j+r}$ and $A_{j+r+1}$ as vectors with increasing coordinates, we have $A_{j+1}=\\lbrace w_{j+r}, w_{j+1},w_{j+2}, \\ldots , w_{j+r-1}\\rbrace , \\quad A_{j+r}= \\lbrace w_{j+r}, w_{j+r+1}, \\ldots , w_{j+2r-1}\\rbrace , $ $\\mbox{and }\\quad A_{j+r+1}=\\lbrace w_{j+2r},w_{j+r+1},w_{j+r+2}, \\ldots , w_{j+2r-1}\\rbrace .$ The differences between the second and the first coordinates in these three vectors are $d_1=w_{j+1}-w_{j+r} , \\qquad d_2= w_{j+r+1}-w_{j+r}, \\qquad d_3= w_{j+r+1}-w_{j+2r}.$ As at the end of Section REF , it is impossible that all the differences $d_1, d_2, d_3$ are powers of two.", "$\\Box $ Proof of Theorem REF .", "Proposition REF yields $C=C_r$ such that $ {\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\le kC\\, n^{r-1}$ for $k \\le 2r-1$ .", "Since the family of all $r$ -subsets of $[n]$ containing 1 witnesses that for $k\\ge 3$ , $r \\ge 2$ , ${\\rm {ex}}(n,P_k^r) = \\Omega (n^{r - 1})$ , and ${\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\ge {\\rm {ex}}(n,P_k^r)$ , we get ${\\rm {ex}}_{\\circlearrowright }(n,P_k^r) = \\Theta (n^{r - 1})$ for $3\\le k \\le 2r - 1$ .", "In the case $k = r + 1$ , Theorem REF gives ${\\rm {ex}}_{\\circlearrowright }(n,P_{r+1}^r) \\le {\\rm {ex}}_{\\rightarrow }(n,P_{r+1}^r) = {n \\atopwithdelims ()r} - {n - r \\atopwithdelims ()r}.$ On the other hand, since $P_{r+1}^r\\supseteq M_r^2$ and $G(n, r, r+1) \\not\\supseteq M_r^2$ , ${\\rm {ex}}_{\\circlearrowright }(n,P_{r+1}^r) \\ge {\\rm 
{ex}}_{\\circlearrowright }(n,M_r^2) ={\\rm {ex}}_{\\rightarrow }(n, M_r^2) \\ge |G(n, r, r+1)| = {n \\atopwithdelims ()r} - {n - r \\atopwithdelims ()r},$ so the second statement in Theorem REF follows.", "It remains to consider $k \\ge 2r$ , and here we have ${\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\le {\\rm {ex}}_{\\rightarrow }(n,P_k^r) = O(n^{r - 1}\\log n)$ from Theorem REF and a lower bound from Proposition REF .", "$\\Box $" ], [ "Upper bound in Theorem ", "Let us first prove the upper bound ${\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\le \\frac{(k-1)(r-1)}{r}{n \\atopwithdelims ()r-1} \\qquad \\hbox{($2 \\le k \\le r$)}.$ Recall that our notation for a crossing $k$ -path $P_k^r$ ($k \\le r$ ) on a cyclically ordered vertex set $\\Omega _n$ is the following: the vertices $v_1, v_2, \\ldots , v_{r+k-1}$ form a tight path with edges $e_i=\\lbrace v_i, \\ldots , v_{i+r-1}\\rbrace $ , $i \\in [k]$ and the (clockwise) ordering of the vertices on $\\Omega _n$ is $v_1<v_{r+1}< v_2<v_{r+2}<\\cdots <v_{k-1}<v_{r+k-1}<v_k<v_{k+1} <\\cdots < v_r (< v_1).$ We define $T_k(H)$ to be the set of $(v_{k}, \\ldots , v_{r+k-1}) \\in V(H)^{r}$ for which there is a $P_k^r$ in $H$ with vertices $v_1, \\ldots , v_{r+k-1}$ as ordered above.", "In other words, $T_k(H)$ is the set of ending edges for a $P_k^r$ in $H$ .", "Theorem 5.1 Let $r \\ge 2$ and $1\\le k\\le r$ .", "Then for any cg $r$ -graph $H$ on $\\Omega _n$ , $ |T_k(H)| \\ge r \\cdot e(H) - (r - 1)(k - 1)\\cdot |\\partial H|.$ In particular, if $H$ contains no $P_k^r$ , then $ e(H) \\le \\frac{(k-1)(r-1)}{r}|\\partial H| \\le \\frac{(k - 1)(r - 1)}{r}{n \\atopwithdelims ()r - 1}.$ We proceed by induction on $k$ .", "For $k = 1$ , and each edge $e \\in E(H)$ , the number of copies of $P_1^r$ with edge set $\\lbrace e\\rbrace $ is $r$ , since after choosing which vertex of $e$ to label with $v_1$ , the order of the remaining vertices of $e$ is determined (they are cyclically ordered).", "Therefore $|T_1(H)| \\ge 
re(H)$ .", "Suppose $k \\ge 2$ and assume by induction that $|T_{k-1}(H)| \\ge r e(H) - (r - 1)(k - 2)|\\partial H|$ .", "Let $L$ be the collection of $r$ -sets in $T_{k-1}(H)$ with the following property: The elements of $L$ are $e= x_{r+1}< \\cdots < x_{r+k-1}<x_k<\\cdots <x_r$ where $e \\in E(H)$ and there does not exist any vertex $x$ such that $x_k<x<x_{k+1}$ and $e-\\lbrace x_k\\rbrace \\cup \\lbrace x\\rbrace \\in E(H)$ .", "Observe that $|L|\\le (r-1)|\\partial H|$ since for each ordered $(r-1)$ set $e-\\lbrace x_k\\rbrace \\in \\partial H$ there must be a unique $x_k$ satisfying $x_{r+k-1}<x_k<x_{k+1}$ such that $e \\in L$ (the vertex closest to $x_{k+1}$ ).", "Our goal is to prove that $|T_k(H)| \\ge |T_{k-1}(H) \\backslash L|$ via an injection.", "Then, using the fact that $|L|\\le (r-1)|\\partial H|$ and the induction hypothesis, we have $ |T_k(H)| \\ge |T_{k-1}(H) \\backslash L| \\ge r \\cdot e(H) - (k - 2)(r - 1) \\cdot |\\partial H| - |L| \\ge r \\cdot e(H) - (k - 1)(r - 1) \\cdot |\\partial H|.$ We must give an injection $f : T_{k-1}(H) \\backslash L \\rightarrow T_k(H)$ .", "Suppose that $e= v_{r+1}< \\cdots < v_{r+k-1}<v_k<\\cdots <v_r \\in T_{k-1}(H) \\backslash L$ .", "Then there exists a vertex $x$ such that $v_k<x<v_{k+1}$ and $e-\\lbrace v_k\\rbrace \\cup \\lbrace x\\rbrace \\in E(H)$ .", "Let $A$ be the set of all such vertices $x$ .", "Consider the vertex $y \\in A$ such that $y \\le x$ for all $x \\in A$ .", "In other words, $y$ is the closest vertex to $v_k$ among all vertices of $A$ .", "Let $f(e)=e-\\lbrace v_k\\rbrace \\cup \\lbrace y\\rbrace $ .", "Since $k \\le r$ , we clearly have $f(e) \\in T_k(H)$ as we obtain a $P_k^r$ that ends in $f(e)$ by taking the copy of $P_{k-1}^r$ that ends in $e$ and just adding the edge $f(e)$ .", "Moreover, $f$ is an injection, as if there is an $e^{\\prime }=e-\\lbrace v_k\\rbrace \\cup \\lbrace y^{\\prime }\\rbrace $ such that $f(e^{\\prime })=f(e)$ , then, assuming that $v_k<y^{\\prime }<y$ , $y$ 
would not have been the closest vertex to $v_k$ in $A$ .", "This contradiction shows that $f$ is indeed an injection and the proof is complete." ], [ "Lower bound in Theorem ", "Our next goal is to prove the following lower bound in Theorem REF for $r \\ge k \\ge 2$ : $ {\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\ge (1+o(1))\\frac{k-1}{3 \\ln 2r}{n\\atopwithdelims ()r-1}.", "$ A gap of an $r$ -element subset $R$ of $\\Omega _n$ is a segment of $\\Omega _n$ between two clockwise consecutive vertices of $R$ .", "We say $R$ has $(k,m)$ -gaps if some $k - 1$ consecutive gaps of $R$ all have length more than $m$ – in other words, there are at least $m$ vertices of $\\Omega _n$ in each gap.", "For $n>r$ , let $K_n^r$ be the family of all $r$ -element subsets of $\\Omega _n$ .", "For $n>r\\ge k$ , let $H(n,r,k,m)$ be the family of the members of $K_n^r$ that have $(k,m)$ -gaps, and $\\overline{H}(n,r,k,m)$ be the family of the members of $K_n^r$ that do not have $(k,m)$ -gaps.", "For a hypergraph $H$ and $v\\in V(H)$ , let $H\\lbrace v\\rbrace $ denote the set of edges of $H$ containing $v$ .", "Lemma 5.2 If $m\\ge \\frac{(n-1)\\ln 2r}{(r-1)(k-1)},$ then $|H(n,r,k,m)|\\le \\frac{1}{2} {n\\atopwithdelims ()r}.", "\\quad \\mbox{ Equivalently,}\\; |\\overline{H}(n,r,k,m)|\\ge \\frac{1}{2} {n\\atopwithdelims ()r}.$ Instead of proving (REF ) directly, it will be easier to prove that $\\mbox{\\em for every $j\\in \\Omega _n$,}\\quad |H(n,r,k,m)\\lbrace j\\rbrace |\\le \\frac{1}{2} |K^r_n\\lbrace j\\rbrace |= \\frac{1}{2}{n-1\\atopwithdelims ()r-1};$ and (REF ) implies (REF ) because $|H(n,r,k,m)|=\\frac{n}{r}|H(n,r,k,m)\\lbrace j\\rbrace |$ and ${n\\atopwithdelims ()r}=\\frac{n}{r} |K^r_n\\lbrace j\\rbrace |$ .", "Recall the vertex set of $\\Omega $ is $\\lbrace 0,1,2,\\dots ,n-1\\rbrace $ .", "By symmetry, it is enough to prove (REF ) for $j=n-1$ .", "First, we show that $|H(n,r,k,m)\\lbrace n-1\\rbrace |\\le r |K^r_{n-(k-1)m}\\lbrace n-1-(k-1)m\\rbrace |.$ Indeed, from each 
$F\\in H(n,r,k,m)\\lbrace n-1\\rbrace $ , we can get an $F^{\\prime }\\in K^r_{n-(k-1)m}\\lbrace n-1-(k-1)m\\rbrace $ by deleting the first $m$ vertices in $k-1$ consecutive gaps of length at least $m+1$ , and renumbering the remaining $n-(k-1)m$ vertices so that the vertex $n-1$ of $\\Omega $ will be $(n-1)-(k-1)m$ .", "On the other hand, each $F^{\\prime }\\in K^r_{n-(k-1)m}\\lbrace n-1-(k-1)m\\rbrace $ can be obtained this way from $r$ distinct $F\\in H(n,r,k,m)\\lbrace n-1\\rbrace $ .", "This proves (REF ).", "Now, using $1 - x \\le e^{-x}$ , (REF ) and (REF ) yield $|H(n,r,k,m)\\lbrace n-1\\rbrace |\\le r {n-1-(k-1)m\\atopwithdelims ()r-1}= r {n-1\\atopwithdelims ()r-1} \\prod _{i = 1}^{r-1} \\frac{n - (k - 1)m - i}{n - i}$ $\\le r {n-1\\atopwithdelims ()r-1}\\exp \\Bigl (-\\frac{(k - 1)m(r-1)}{n -1 }\\Bigr ) \\le r {n-1\\atopwithdelims ()r-1}\\frac{1}{2r},$ yielding (REF ).", "We are ready to prove (REF ).", "Let $t=t(r,k)=\\left\\lceil \\frac{(r-1)(k-1)}{\\ln 2r}\\right\\rceil .$ Suppose $n>r\\ge k\\ge 2$ .", "If $r=2$ , then $k=2$ , and the bound is trivial; so let $r\\ge 3$ .", "Suppose first that $t$ divides $n$ and let $m=n/t$ .", "Then $m$ satisfies (REF ).", "By rotating $\\Omega $ we find a subgraph $H^{\\prime }$ of $\\overline{H}(n,r,k,m)$ with at least $|\\overline{H}(n,r,k,m)|/m$ edges such that every edge of $H^{\\prime }$ adds up to zero modulo $m$ .", "We claim that $\\mbox{\\em $H^{\\prime }$ does not contain crossing $P_k^r$.", "}$ Indeed, assume $H^{\\prime }$ contains a crossing $P_k^r$ with the vertices $v_0,v_1,\\dots ,v_{k+r-2}$ .", "By the definition of crossing paths, $v_0 < v_r < v_1 < v_{1+r} < \\dots < v_{k-1} < v_{k -1+ r} < v_k$ .", "Since the set $\\lbrace v_1,v_2,\\dots ,v_{r-1}\\rbrace $ forms an edge together with both $v_0$ and $v_r$ , $v_r \\equiv v_0 \\mod {m}$ .", "Similarly, $v_{r + i} \\equiv v_i \\mod {m}$ for all $i < k$ .", "But this means that the edge $\\lbrace v_0,v_1,\\dots ,v_{r-1}\\rbrace $ has $k - 1$ 
consecutive gaps of length more than $m$ , thus it does not belong to $\\overline{H}(n,r,k,m)$ .", "This contradiction proves (REF ).", "Thus if $r\\ge 3$ , $2\\le k\\le r$ are fixed, $n$ is a large number divisible by $t$ and $m=n/t$ , then by (REF ) and (REF ), $H^{\\prime }$ is a cg $r$ -graph not containing crossing $P_k^r$ with $|H^{\\prime }|\\ge \\frac{1}{2m}{n\\atopwithdelims ()r}\\ge \\frac{t}{2r}{n-1\\atopwithdelims ()r-1}\\ge \\frac{(k-1)(r-1)}{2r \\ln 2r}{n-1\\atopwithdelims ()r-1}\\ge (1+o(1))\\frac{k-1}{3 \\ln 2r}{n\\atopwithdelims ()r-1}.$ If $n$ is not divisible by $t$ , then let $n^{\\prime }$ be the largest positive integer divisible by $t$ such that $n^{\\prime }\\le n$ .", "Then $ {\\rm {ex}}_{\\circlearrowright }(n,P_k^r) \\ge {\\rm {ex}}_{\\circlearrowright }(n^{\\prime },P_k^r) \\ge (1+o(1))\\frac{k-1}{3 \\ln 2r}{n^{\\prime }\\atopwithdelims ()r-1}=(1+o(1))\\frac{k-1}{3 \\ln 2r}{n\\atopwithdelims ()r-1}.\\hfill \\quad \\Box $" ], [ "The case $k = 2$", "Here we prove the upper bound (REF ), namely: ${\\rm {ex}}_{\\circlearrowright }(n,P_2^r) \\le \\frac{1}{2}{n \\atopwithdelims ()r-1}.$ Recall that $P_2^r$ on $\\Omega _n$ has a vertex set $v_1<v_{r+1}< v_2<v_{3}<\\cdots < v_r (< v_1), $ and edges $\\lbrace v_1, \\dots , v_r\\rbrace $ and $\\lbrace v_2, \\dots , v_{r+1}\\rbrace $ .", "Consider a $P_2^r$ -free cgh $H$ on the vertex set $\\Omega _n$ .", "Label the vertices of an $e\\in H$ as $1\\le a_1< a_2<\\cdots < a_r \\le n, $ and define $T_1(e):= e\\setminus \\lbrace a_1\\rbrace $ and $T_2(e):= e\\setminus \\lbrace a_r\\rbrace $ .", "Since $H$ is $P_2^r$ -free, we have $T_\\alpha (e)\\ne T_\\alpha (e^{\\prime })$ for $e\\ne e^{\\prime }\\in H$ (and $\\alpha =1,2$ ).", "Indeed, if we take (in case of $\\alpha =1$ ) $v_2, \\dots , v_r= a_2, \\dots , a_r$ and $\\lbrace v_1, v_{r+1}\\rbrace = \\lbrace a_1, a_1^{\\prime }\\rbrace $ then we obtain a $P_2^r$ .", "We also have $T_1(e)\\ne T_2(e^{\\prime })$ , otherwise we define $\\lbrace v_1, 
v_{r+1}\\rbrace = \\lbrace a_1, a_r^{\\prime }\\rbrace $ and again obtain a forbidden path.", "This way we associated two $(r-1)$ -sets to each member of $H$ , yielding (REF ).", "$\\Box $" ], [ "The case $k = r$", "Here we prove (REF ), namely: ${\\rm {ex}}_{\\circlearrowright }(n, P_r^r) > (1-o(1))(r-2){n \\atopwithdelims ()r-1}.$ Recall that $P_r^r$ on $\\Omega _n$ has a vertex set $v_1<v_{r+1}< v_2<v_{r+2}< v_{3}<\\cdots < v_{r-1}< v_{2r-1}< v_r (< v_1),$ and edges $e_1,\\ldots ,e_r$ , where for $i=1,\\ldots ,r$ , $e_i=\\lbrace v_i, v_{i+1},\\dots , v_{r+i-1}\\rbrace $ .", "By (REF ), $\\mbox{\\em for every $1\\le i\\le r$, the only vertices in $e_i$ that can be consecutive on $\\Omega _n$ are $v_{i+r-1}$ and $v_i$.", "}$ Recall that the $n$ vertices of $\\Omega _n$ are arranged in clockwise order as $1<2<3< \\dots <n$ .", "Let $H$ be the following family of $r$ -sets of $\\Omega _n$ .", "Label the vertices of an $r$ -set $e$ as $1 < a_1< a_2<\\cdots < a_r < n,$ and put $e$ into $H$ if there exists $2\\le i\\le r-1$ with $a_{i-1}+1=a_{i}$ .", "The number of such $e\\in H$ is asymptotically $(r-2){n \\atopwithdelims ()r-1}+O(n^{r-2})$ .", "We claim that $H$ does not contain a $P_r^r$ .", "Suppose, on the contrary, that $F\\subset H$ is a copy of $P_r^r$ as it is described in (REF ).", "Choose $i\\in [r-1]$ such that the largest number in $\\lbrace v_1,\\ldots ,v_{2r-1}\\rbrace $ is either $v_i$ or $v_{r+i-1}$ .", "Consider $e_i$ in the form $(a_1,\\ldots ,a_r)$ as in (REF ).", "Since $e_i=\\lbrace v_i, v_{i+1},\\dots , v_{r+i-1}\\rbrace $ , by the choice of $i$ , $v_{i+r-1}\\in \\lbrace a_{r-1},a_{r}\\rbrace $ .", "This together with (REF ) contradicts the definition of $H$ .", "$\\Box $" ], [ "Proof of Theorem ", "We are to show that for $k,r > 2$ , $(k-1)r {n \\atopwithdelims ()r - 1} - O(n^{r - 2}) \\le {\\rm {ex}}_{\\circlearrowright }(n,M_k^r) = {\\rm {ex}}_{\\rightarrow }(n,M_k^r)< 2(k - 1)(r-1){n \\atopwithdelims ()r - 1}.$ A simple construction demonstrating 
the lower bound in Theorem REF is the following cgh : let $A$ be the set of $r$ -gons that contain at least one vertex from a fixed set of $k-1$ vertices of a convex $n$ -gon, and let $B$ be the set of $r$ -gons that have a side of length at most $k-1$ .", "The cgh $A \\cup B$ has $(k - 1)r{n \\atopwithdelims ()r - 1} + O(n^{r - 2})$ edges and does not contain $M_k^r$ .", "For the upper bound, let $H$ be a largest $r$ -uniform $n$ -vertex family of sets with vertices on a convex polygon of $n$ points with no $M_{k}^r$ .", "For each edge $A$ , choose a shortest chord $ch(A)$ , say $v_rv_1$ and view the vertices of $A$ as $v_1,v_2,\\ldots ,v_r$ in clockwise order.", "Define the type of $A$ to be the vector ${\\bf t}(A)=(t_1,\\ldots ,t_{r-1})$ where $\\mbox{$t_i=v_{i+1}-v_i$ for $i=1,\\ldots ,r-2$ and$t_{r-1}=n-(t_1+\\ldots +t_{r-2})=v_1-v_{r-1}$.", "}$ The coordinates of each vector ${\\bf t}(A)$ are positive integers, $t_{r-1}(A)\\ge 2$ , and $t_1(A)+\\ldots +t_{r-1}(A)=n$ for each $A$ by definition.", "The number of such vectors is exactly $\\binom{n-2}{r-2}$ (because this is the number of ways to mark $r-2$ out of the $n-1$ separators in an ordered set of $n$ dots so that the last separator is not marked).", "For every given type ${\\bf t}=(t_1,\\ldots ,t_{r-1})$ , the family $H({\\bf t})$ of the chords $ch(A)$ of the edges $A$ of type ${\\bf t}$ does not contain $k$ crossing chords.", "Thus by Theorem REF , $|H({\\bf t})|< 2(k-1)n$ .", "Hence, using $r\\ge 3$ , $|H|<2(k-1)n\\binom{n-2}{r-2}= 2(k-1)\\frac{(r-1)(n-r+1)}{n-1}\\binom{n}{r-1}<2(k - 1)(r-1){n \\atopwithdelims ()r - 1},$ as claimed.", "$\\Box $" ], [ "Concluding remarks", "$\\bullet $ A hypergraph $F$ is a forest if there is an ordering of the edges $e_1,e_2,\\dots ,e_t$ of $F$ such that for all $i \\in \\lbrace 2,3,\\dots ,t\\rbrace $ , there exists $h < i$ such that $e_i \\cap \\bigcup _{j < i} e_j \\subseteq e_h$ .", "It is not hard to show that ${\\rm {ex}}(n,F) = O(n^{r - 1})$ for each $r$ -uniform 
forest $F$ .", "It is therefore natural to extend the Pach-Tardos Conjecture REF to $r$ -graphs as follows: Conjecture B Let $r \\ge 2$ .", "Then for any ordered $r$ -uniform forest $F$ with interval chromatic number $r$ , ${\\rm {ex}}_{\\rightarrow }(n,F) = O(n^{r-1} \\cdot \\mbox{\\rm polylog} \\, n)$ .", "Theorem REF shows that to prove Conjecture REF , it is enough to consider the setting of $r$ -graphs of interval chromatic number $r$ .", "Theorem REF verifies this conjecture for crossing paths, and also shows that the $\\log n$ factor in Theorem REF is necessary.", "It would be interesting to find other general classes of ordered $r$ -uniform forests for $r \\ge 3$ for which Conjecture REF can be proved.", "A related problem is to determine for which ordered forests $F$ we have ${\\rm {ex}}_{\\rightarrow }(n, F)= O(n^{r-1})$ ?", "This is a hypergraph generalization of Braß' question [2] which was solved recently for trees [10].", "$\\bullet $ It appears to be substantially more difficult to determine the exact value of the extremal function for $r$ -uniform crossing $k$ -paths in the convex geometric setting than in the ordered setting.", "It is possible to show that for $k \\le 2r - 1$ , $ c(k,r) = \\lim _{n \\rightarrow \\infty } \\frac{{\\rm {ex}}_{\\circlearrowright }(n,P_k^r)}{{n \\atopwithdelims ()r-1}}$ exists.", "We do not as yet know the value of $c(k,r)$ for any pair $(k,r)$ with $2 \\le k \\le r$ , even though in the ordered setting Theorem REF captures the exact value of the extremal function for all $k\\le r+1$ , and $c(r+1,r) = r$ .", "$\\bullet $ One can consider more general orderings of tight paths, namely instead of the vertices whose subscripts are congruent to $a$ modulo $r$ increasing within an interval (conditions (i), (ii), (iii) in Definition REF ), we can specify which congruence classes of vertices are increasing within their interval and which are decreasing.", "Our methods can handle such situations as well." 
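Both lower-bound constructions above rest on the same arithmetic observation: if $d_1,d_3<d_2<d_1+d_3$ , then $d_1,d_2,d_3$ cannot all be powers of two, since powers of two smaller than $d_2$ are at most $d_2/2$ , forcing $d_1+d_3\le d_2$ . This fact can also be confirmed by exhaustive search; the following Python sketch (our helper names, not from the paper) does so over a small range.

```python
from itertools import product

def is_power_of_two(d: int) -> bool:
    # d > 0 with a single set bit in binary
    return d > 0 and d & (d - 1) == 0

def all_powers_of_two_possible(limit: int) -> bool:
    # Search for d1, d3 < d2 < d1 + d3 with all three powers of two.
    for d1, d2, d3 in product(range(1, limit), repeat=3):
        if d1 < d2 and d3 < d2 and d2 < d1 + d3:
            if is_power_of_two(d1) and is_power_of_two(d2) and is_power_of_two(d3):
                return True
    return False

# If d1, d3 < d2 are powers of two then d1, d3 <= d2/2, so d1 + d3 <= d2,
# contradicting d2 < d1 + d3; the search therefore finds nothing.
assert not all_powers_of_two_possible(65)
```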
], [ "Acknowledgement.", "This research was partly conducted during AIM SQuaRes (Structured Quartet Research Ensembles) workshops, and we gratefully acknowledge the support of AIM." ] ]
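The type-vector count used in the proof of the matching bound above — vectors $(t_1,\ldots ,t_{r-1})$ of positive integers with $t_{r-1}\ge 2$ and $t_1+\ldots +t_{r-1}=n$ , claimed to number exactly $\binom{n-2}{r-2}$ — can be sanity-checked by enumeration. A small Python sketch (the function name is ours):

```python
from itertools import product
from math import comb

def count_type_vectors(n: int, r: int) -> int:
    # Brute-force count of (t_1, ..., t_{r-1}) with all t_i >= 1,
    # t_{r-1} >= 2, and sum equal to n.
    count = 0
    for t in product(range(1, n), repeat=r - 1):
        if sum(t) == n and t[-1] >= 2:
            count += 1
    return count

# Substituting t_{r-1}' = t_{r-1} - 1 reduces this to compositions of n - 1
# into r - 1 positive parts, of which there are C(n - 2, r - 2).
for n in range(4, 12):
    for r in range(3, 6):
        assert count_type_vectors(n, r) == comb(n - 2, r - 2)
```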
1906.04575
[ [ "A fast solver for the narrow capture and narrow escape problems in the\n sphere" ], [ "Abstract We present an efficient method to solve the narrow capture and narrow escape problems for the sphere.", "The narrow capture problem models the equilibrium behavior of a Brownian particle in the exterior of a sphere whose surface is reflective, except for a collection of small absorbing patches.", "The narrow escape problem is the dual problem: it models the behavior of a Brownian particle confined to the interior of a sphere whose surface is reflective, except for a collection of small patches through which it can escape.", "Mathematically, these give rise to mixed Dirichlet/Neumann boundary value problems of the Poisson equation.", "They are numerically challenging for two main reasons: (1) the solutions are non-smooth at Dirichlet-Neumann interfaces, and (2) they involve adaptive mesh refinement and the solution of large, ill-conditioned linear systems when the number of small patches is large.", "By using the Neumann Green's functions for the sphere, we recast each boundary value problem as a system of first-kind integral equations on the collection of patches.", "A block-diagonal preconditioner together with a multiple scattering formalism leads to a well-conditioned system of second-kind integral equations and a very efficient approach to discretization.", "This system is solved iteratively using GMRES.", "We develop a hierarchical, fast multipole method-like algorithm to accelerate each matrix-vector product.", "Our method is insensitive to the patch size, and the total cost scales with the number N of patches as O(N log N), after a precomputation whose cost depends only on the patch size and not on the number or arrangement of patches.", "We demonstrate the method with several numerical examples, and are able to achieve highly accurate solutions with 100,000 patches in one hour on a 60-core workstation." 
], [ "Introduction", "We consider the numerical solution of two related problems which arise in the study of Brownian diffusion by a particle in the exterior or interior of a porous sphere.", "We denote the open unit ball centered at the origin in $\\mathbb {R}^3$ by $\\Omega $ , and assume that the sphere $\\partial \\Omega $ is partially covered by $N$ small patches of radius $\\varepsilon $ , measured in arclength (Fig.", "REF ).", "For the sake of simplicity, we assume that the patches are disk-shaped and comment briefly on more general shapes in the conclusion.", "Figure: A sphere partially covered by disk-shaped patches.", "We assume each patch is of radius $\\varepsilon $ .", "We also assume that distinct patches are separated by a distance of at least $\\varepsilon $ .", "In the figure, this means that the regions bounded by the dashed lines do not overlap.", "The union of the patches is referred to as the absorbing boundary and denoted by $\\Gamma _A$ .", "The remainder of the boundary, $\\Gamma _R = \\partial \\Omega \\backslash \\Gamma _A$ , is referred to as the reflecting boundary.", "The first problem, called the narrow capture problem, is to calculate the concentration $\\bar{u}(x)$ , at equilibrium, of Brownian particles at $x\\in \\mathbb {R}^3 \\backslash \\overline{\\Omega }$ with a given fixed concentration far from the origin, assuming that particles are absorbed (removed) at $\\Gamma _A$ .", "The second problem, called the narrow escape problem, is to calculate the mean first passage time (MFPT) in $\\Omega $ , namely the expected time $\\bar{v}(x)$ for a Brownian particle released at $x \\in \\Omega $ to first reach $\\Gamma _A$ .", "In both settings, particles are reflected from $\\Gamma _R$ .", "In this paper, we sometimes refer to the narrow capture problem as the exterior problem, and the narrow escape problem as the interior problem.", "These problems have received quite a lot of attention in the mathematics and biophysics communities since the seminal 
work of Berg and Purcell [1].", "We do not seek to review the biophysical background here, but note that the absorbing patches serve as a simplified model for either surface receptors (the capture mechanism) or pores (the escape mechanism) in an otherwise impermeable membrane.", "We refer the reader to [1], [2], [3], [4], [5], [6], [7] for more detailed discussions of applications and a selection of work on related biophysical models.", "Standard arguments from stochastic analysis show that both $\\bar{u}$ and $\\bar{v}$ satisfy a Poisson equation with mixed Dirichlet-Neumann boundary conditions [8], [9].", "More precisely, for the capture problem, if the far-field particle concentration is set to be 1, then $\\bar{u}$ satisfies the exterior Laplace equation: ${\\left\\lbrace \\begin{array}{ll}\\Delta \\bar{u}= 0 & x \\in \\mathbb {R}^3 \\backslash \\overline{\\Omega } \\\\\\bar{u}= 0 & x \\in \\Gamma _A \\\\\\frac{\\partial \\bar{u}}{\\partial n} = 0 & x \\in \\Gamma _R \\\\\\bar{u}(x) \\rightarrow 1 & |x| \\rightarrow \\infty .", "\\\\\\end{array}\\right.", "}$ A scalar quantity of interest is the total equilibrium flux $J$ of particles through $\\Gamma _A$ : $J = \\int _{\\Gamma _A} \\frac{\\partial \\bar{u}}{\\partial n} \\, dS.$ This is sometimes referred to as the capacitance of the system (see Remark REF ).", "For the escape problem, the MFPT $\\bar{v}$ satisfies the interior Poisson equation: ${\\left\\lbrace \\begin{array}{ll}\\Delta \\bar{v}= -1 & x \\in \\Omega \\\\\\bar{v}= 0 & x \\in \\Gamma _A \\\\\\frac{\\partial \\bar{v}}{\\partial n} = 0 & x \\in \\Gamma _R.\\end{array}\\right.", "}$ Here, the quantity of interest is the average MFPT $\\mu $ - that is the average, over all possible initial particle positions, of the expected time to escape from $\\Omega $ through $\\Gamma _A$ : $\\mu = \\frac{1}{|\\Omega |} \\int _\\Omega \\bar{v}\\, dV.$ Here, and in the remainder of the paper, $\\frac{\\partial }{\\partial n}$ refers to the derivative in the 
outward normal direction; $n$ points towards the interior of $\\Omega $ for the exterior problem, and towards the exterior of $\\Omega $ for the interior problem.", "In order to understand how the distribution of absorbing patches on the surface affects $\\bar{u}(x)$ , $\\bar{v}(x)$ and the associated quantities $J$ and $\\mu $ , a variety of asymptotic and numerical methods have been developed (see [1], [10], [11], [12], [13], [4], [5] and the references therein).", "Remark 1 The total flux $J$ defined in (REF ) is sometimes referred to as the capacitance because of a connection to electrostatics.", "Imagine that the ball $\\Omega $ is a dielectric with low permittivity, and that $\\Gamma _A$ is a collection of perfectly conducting patches on its surface, connected by infinitesimally thin wires so that they act as a single conductor.", "Suppose also that this object is surrounded by a dielectric with high permittivity and that the outer dielectric is enclosed by an infinitely large perfectly conducting sphere, with a unit voltage drop from the outer conductor to the conducting patches.", "Then, letting the ratio of the permittivity of the outer dielectric to that of the inner dielectric approach $\\infty $ , the electrostatic potential outside $\\overline{\\Omega }$ satisfies (REF ), and the electrostatic capacitance of the system is given by $J$ .", "Remark 2 The total flux $J$ is computed directly from the Neumann data on $\\Gamma _A$ , as seen from (REF ).", "Likewise, the average MFPT $\\mu $ can be computed directly from the Dirichlet data $\\bar{v}$ on $\\Gamma _R$ .", "For this, we use Green's second identity, $\\int _\\Omega \\left( \\psi \\Delta \\varphi - \\varphi \\Delta \\psi \\right) \\, dV = \\int _{\\partial \\Omega } \\left( \\psi \\frac{\\partial \\varphi }{\\partial n} - \\varphi \\frac{\\partial \\psi }{\\partial n} \\right)\\, dS$ with $\\psi (x) \\equiv \\bar{v}(x)$ and $\\varphi (x) \\equiv \\frac{|x|^2}{6}$ .", "Using that $\\Delta 
\\frac{|x|^2}{6} = 1$ , $\\int _\\Omega \\frac{|x|^2}{6} dV(x) =\\frac{2 \\pi }{15}$ , and that for $|x| = 1$ , $n\\equiv x$ and $\\frac{\\partial }{\\partial n} \\frac{|x|^2}{6} = \\frac{1}{3}$ , we obtain $\\int _\\Omega \\bar{v}\\, dV = \\frac{1}{3} \\int _{\\partial \\Omega } \\bar{v}\\, dS - \\frac{1}{6}\\int _{\\partial \\Omega } \\frac{\\partial \\bar{v}}{\\partial n} \\, dS - \\frac{2\\pi }{15}.$ Applying the divergence theorem to the second term, dividing by $|\\Omega |$ , and using that $|\\Omega | = \\frac{4 \\pi }{3}$ , $|\\partial \\Omega | = 4 \\pi $ gives an alternative expression for $\\mu $ : $\\mu = \\frac{1}{|\\partial \\Omega |}\\int _{\\partial \\Omega } \\bar{v}\\, dS + \\frac{1}{15} \\equiv \\frac{1}{|\\partial \\Omega |}\\int _{\\Gamma _R} \\bar{v}\\, dS + \\frac{1}{15}.$ Thus the average MFPT over $\\Omega $ may be obtained from the average MFPT on $\\partial \\Omega $ .", "Given an arrangement of patches, we present here a fast, high-order accurate numerical scheme for the evaluation of $\\bar{u}$ , $J$ , $\\bar{v}$ , and $\\mu $ , of particular use when $N$ is large and $\\varepsilon $ is small.", "Such computations are numerically challenging, partly because solutions of elliptic boundary value problems of mixed type are singular near Dirichlet-Neumann interfaces [14], [15].", "Direct discretization, using either PDE-based methods or integral equation methods, would require many degrees of freedom to resolve the singularities in $\\bar{u}$ and $\\bar{v}$ .", "Further, the resulting linear systems would be large and ill-conditioned, especially in cases involving large numbers of small patches.", "The formulation presented here is well-conditioned, is nearly identical for the capture and escape problems, and suffers no loss in accuracy or increase in computational cost as $\\varepsilon $ is decreased.", "To make large-scale problems practical, we have developed a fast algorithm, so that the cost per GMRES iteration [16] is of the order 
$\\mathcal {O}(N \\log N)$ , rather than $\\mathcal {O}(N^2)$ .", "Our method involves the following ingredients: We make use of the Neumann Green's functions for the interior and exterior of the sphere to recast (REF ) and (REF ) as first-kind integral equations for a density $\\sigma $ on $\\Gamma _A$ .", "Given a patch radius $\\varepsilon $ , we precompute the solution operator for the corresponding one-patch integral equation, assuming smooth Dirichlet data which is expanded in a rapidly converging series of Zernike polynomials.", "We analytically incorporate a square root singularity in the induced density at the Dirichlet/Neumann interface.", "To solve the many-patch integral equation, we use the solution operator for the one-patch integral equation as a block-diagonal “right preconditioner”.", "This yields a second-kind Fredholm system of equations which, upon discretization, is well-conditioned and has a small number of degrees of freedom per patch.", "We solve the resulting linear system by iteration, using GMRES, and accelerate each matrix-vector product by means of a fast algorithm modeled after the fast multipole method (FMM).", "The fast algorithm uses the interpolative decomposition [17] to derive a compressed representation of the outgoing field induced by the density on a patch, a hierarchical organization of patches into groups at different length scales, and a spectral representation of the smooth incoming field due to densities on distant patches.", "Though most of the past work on the narrow capture and narrow escape problems is based on asymptotics, we wish to highlight the numerical work of Bernoff and Lindsay, who also proposed an integral equation method for the narrow capture problem for the sphere and the plane based on the Neumann Green's function [12].", "Our approach to discretization shares several characteristics with theirs: both methods incorporate a square root singularity into the density on each patch analytically, and both use 
a representation in terms of Zernike polynomials for smooth Dirichlet data on each patch.", "The paper is organized as follows.", "In Section , we introduce the analytical framework for our method, reformulate the boundary value problems as first-kind integral equations using single layer potentials, and explain how to calculate the scalar quantities $J$ and $\\mu $ directly as functionals of the layer potential densities.", "In Section , we show how to transform the first-kind integral equations into Fredholm equations of the second kind, using the solution operator for the one-patch integral equation as a preconditioner.", "In Sections , , and we describe our discretization approach for the full system of equations, and in Section we introduce the technical tools involved in our fast algorithm.", "In Section we describe the full method, including our fast algorithm to accelerate the application of the system matrix.", "In Section , we provide a detailed description of the solver for the one-patch integral equation.", "We demonstrate the performance of the method with numerical experiments in Section .", "Figure: MFPT $\\bar{v}$ plotted just inside the unit sphere for an example with $N = 100\\,000$ random well-separated patches of radius $\\varepsilon \\approx 0.00141$ .", "The integral equation associated with this problem was solved in 63 minutes on a 60-core workstation, to an $L^2$ residual error of approximately $2.2 \\times 10^{-8}$ .", "Further details are given in Section ." ], [ "Analytical setup", "Our approach to solving the exterior and interior problems (REF ) and (REF ) uses a representation of each solution as an integral involving the corresponding Neumann Green's function.", "This representation leads to an integral equation, and the scalar quantity of interest - $J$ or $\\mu $ - can be calculated directly from its solution."
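The scalar identities derived in Remark 2 can be sanity-checked in the one configuration with a closed-form solution: a fully absorbing sphere ($\Gamma _A = \partial \Omega $ ), where the MFPT is $\bar{v}(x) = (1 - |x|^2)/6$ , its boundary trace vanishes, and the remark predicts $\mu = 1/15$ . The following sketch (Python with NumPy, purely illustrative and not part of the solver described in this paper) verifies this, together with the constant $\int _\Omega |x|^2/6 \, dV = 2\pi /15$ used in the derivation, by Gauss-Legendre quadrature in the radius.

```python
import numpy as np

# Closed-form check of Remark 2 for the fully absorbing unit sphere:
# there v(x) = (1 - |x|^2)/6, the boundary average of v is 0, and the
# identity mu = (boundary average of v) + 1/15 predicts mu = 1/15.

nodes, weights = np.polynomial.legendre.leggauss(20)
r = 0.5 * (nodes + 1.0)   # radial nodes mapped from [-1, 1] to [0, 1]
w = 0.5 * weights

# Volume integrals of radial functions over the unit ball:
# int_Omega g(|x|) dV = 4*pi * int_0^1 g(r) r^2 dr
ball_volume = 4.0 * np.pi / 3.0
mean_mfpt = (4.0 * np.pi / ball_volume) * np.sum(w * (1.0 - r**2) / 6.0 * r**2)
assert abs(mean_mfpt - 1.0 / 15.0) < 1e-12

# The constant int_Omega |x|^2/6 dV = 2*pi/15 appearing in Remark 2
const = 4.0 * np.pi * np.sum(w * (r**2 / 6.0) * r**2)
assert abs(const - 2.0 * np.pi / 15.0) < 1e-12
```

A 20-point rule is far more than needed here, since the integrands are low-degree polynomials and Gauss-Legendre quadrature is exact for them.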
], [ "Neumann Green's functions for the sphere", "Let us first consider the exterior Neumann problem: ${\\left\\lbrace \\begin{array}{ll}\\Delta u = 0 & x \\in \\mathbb {R}^n \\backslash \\overline{\\Omega } \\\\\\frac{\\partial u}{\\partial n} = g & x \\in \\partial \\Omega \\\\u(x) \\rightarrow 0 & |x| \\rightarrow \\infty .\\end{array}\\right.", "}$ Here $\\Omega $ is a bounded domain, and $g$ a given continuous function on $\\partial \\Omega $ .", "This problem has a unique solution, and if $\\Omega $ is the unit ball in $\\mathbb {R}^3$ , it may be obtained using the exterior Neumann Green's function $G_E(x,x^{\\prime })$ , which is known analytically [18], [19].", "$G_E$ is symmetric, and satisfies ${\\left\\lbrace \\begin{array}{ll}-\\Delta G_E(x,x^{\\prime }) = 4 \\pi \\delta (x-x^{\\prime }) &x,x^{\\prime } \\in \\mathbb {R}^3 \\backslash \\Omega \\\\\\frac{\\partial }{\\partial n_{x^{\\prime }}} G_E(x,x^{\\prime }) = 0 &x \\in \\mathbb {R}^3\\backslash \\Omega , \\, x^{\\prime } \\in \\partial \\Omega , \\, x \\ne x^{\\prime }, \\\\\\end{array}\\right.", "}$ with $G_E(x,x^{\\prime }) = \\mathcal {O}\\left( |x|^{-1} \\right)$ as $|x| \\rightarrow \\infty $ for fixed $x^{\\prime } \\in \\mathbb {R}^3 \\backslash \\Omega $ .", "It can be shown, using Green's second identity, that $u(x) = \\frac{1}{4 \\pi } \\int _{\\partial \\Omega } G_E(x,x^{\\prime }) g(x^{\\prime }) \\, dS(x^{\\prime })$ solves the exterior Neumann problem (REF ).", "When $x^{\\prime } \\in \\partial \\Omega $ , $G_E$ is given explicitly by $G_E(x,x^{\\prime }) = \\frac{2}{|x-x^{\\prime }|} + \\log \\left( \\frac{|x| - x \\cdot x^{\\prime }}{1 - x \\cdot x^{\\prime } + |x -x^{\\prime }|} \\right).$ If, in addition, $x \\in \\partial \\Omega $ , then $G_E(x,x^{\\prime }) = \\frac{2}{|x-x^{\\prime }|} - \\log \\left( \\frac{2}{|x-x^{\\prime }|} \\right) -\\log \\left(1 + \\frac{1}{2} |x-x^{\\prime }| \\right).$ The interior Neumann problem is given by ${\\left\\lbrace 
\\begin{array}{ll}\\Delta v = 0 & x \\in \\Omega \\\\\\frac{\\partial v}{\\partial n} = g & x \\in \\partial \\Omega ,\\end{array}\\right.", "}$ where $\\Omega $ is a bounded domain and $g$ is a continuous function defined on the boundary, with the additional constraint that $g$ must satisfy the consistency condition $\\int _{\\partial \\Omega } g \\, dS = 0.$ This problem has a solution which is unique up to an additive constant.", "The consistency condition precludes the existence of an interior Green's function with zero Neumann data.", "Rather, for $\\Omega $ the unit ball in $\\mathbb {R}^3$ , we have an interior Neumann Green's function $G_I(x,x^{\\prime })$ , also known analytically [18], [19].", "It is again symmetric and satisfies ${\\left\\lbrace \\begin{array}{ll}-\\Delta G_I(x,x^{\\prime }) = 4 \\pi \\delta (x-x^{\\prime }) &x,x^{\\prime } \\in \\Omega \\\\\\frac{\\partial }{\\partial n_{x^{\\prime }}} G_I(x,x^{\\prime }) = -1 &x \\in \\overline{\\Omega }, \\, x^{\\prime } \\in \\partial \\Omega , \\, x \\ne x^{\\prime }.", "\\\\\\end{array}\\right.", "}$ As before, $v(x) = \\frac{1}{4 \\pi } \\int _{\\partial \\Omega } G_I(x,x^{\\prime }) g(x^{\\prime }) \\, dS(x^{\\prime })$ solves the interior Neumann problem (REF ).", "When $x^{\\prime } \\in \\partial \\Omega $ , $G_I$ is given by $G_I(x,x^{\\prime }) = \\frac{2}{|x-x^{\\prime }|} + \\log \\left( \\frac{2}{1 - x \\cdot x^{\\prime } +|x - x^{\\prime }|} \\right).$ If, in addition, $x \\in \\partial \\Omega $ , this reduces to $G_I(x,x^{\\prime }) = \\frac{2}{|x-x^{\\prime }|} + \\log \\left( \\frac{2}{|x-x^{\\prime }|} \\right) -\\log \\left(1 + \\frac{1}{2} |x-x^{\\prime }| \\right).$ This is the same as (REF ) except for the sign of the second term.", "In other words, the restrictions of the interior and exterior Green's functions to the boundary $\\partial \\Omega $ are nearly identical.", "The following lemma, which we will require in the next section, follows from the second property in (REF ) 
and the symmetry of $G_I$ .", "Lemma 1 Let $\\Gamma $ be an open subset of $\\partial \\Omega $ and let $\\sigma $ be continuous on $\\Gamma $ .", "Then for $x \\in \\partial \\Omega \\backslash \\bar{\\Gamma }$ , $\\frac{\\partial }{\\partial n_x} \\int _{\\Gamma } G_I(x,x^{\\prime })\\sigma (x^{\\prime }) \\, dS(x^{\\prime }) = -\\int _{\\Gamma } \\sigma (x^{\\prime }) \\, dS(x^{\\prime }).$", "Figure: MFPT $\\bar{v}$ plotted just inside the unit sphere for an example with $N = 10\\,000$ uniformly distributed patches of radius $\\varepsilon \\approx 0.00447$ .", "The integral equation associated with this problem was solved in 114 seconds on a 60-core workstation, and in 15 minutes on a four-core, eight-thread laptop, to an $L^2$ residual error of approximately $6.4 \\times 10^{-8}$ .", "Further details are given in Section ." ], [ "The narrow capture problem", "We turn now to the narrow capture problem, which is the simpler of the two.", "We first modify the BVP (REF ) by defining $u = 1 - \\bar{u}$ , so that solutions decay as $|x| \\rightarrow \\infty $ .", "The function $u$ satisfies the modified equations ${\\left\\lbrace \\begin{array}{ll}\\Delta u = 0 & x \\in \\mathbb {R}^3 \\backslash \\Omega \\\\u = 1 & x \\in \\Gamma _A \\\\\\frac{\\partial u}{\\partial n} = 0 & x \\in \\Gamma _R \\\\u(x) \\rightarrow 0 & |x| \\rightarrow \\infty .", "\\\\\\end{array}\\right.", "}$ Let us denote the unknown Neumann data on $\\Gamma _A$ by $\\sigma (x^{\\prime })$ .", "Then (REF ) implies that for $x \\in \\mathbb {R}^3\\backslash \\overline{\\Omega }$ , we have $u(x) = \\frac{1}{4 \\pi } \\int _{\\Gamma _A} G_E(x,x^{\\prime }) \\frac{\\partial u}{\\partial n}(x^{\\prime }) \\, dS(x^{\\prime }) \\equiv \\int _{\\Gamma _A} G_E(x,x^{\\prime }) \\sigma (x^{\\prime })\\, dS(x^{\\prime }).$ By analogy with classical potential theory, we refer to this as a single layer potential representation with density $\\sigma $ supported on $\\Gamma _A$ .", "Since the
dominant singularity of the kernel $G_E$ is that of the free-space Green's function for the Laplace equation, this single layer potential is continuous up to $\\partial \\Omega $ .", "Taking the limit as $x \\rightarrow \\Gamma _A$ and using the second condition in (REF ), we obtain the first-kind integral equation $\\int _{\\Gamma _A} G_E(x,x^{\\prime }) \\sigma (x^{\\prime }) \\, dS(x^{\\prime }) = f(x), \\quad x \\in \\Gamma _A,$ where $f(x) \\equiv 1$ , with the weakly singular kernel $G_E$ .", "Assuming that we can solve (REF ) for $\\sigma $ , it follows that $u(x)$ , given by (REF ), is the solution to (REF ), and that $\\bar{u}= 1-u$ solves (REF ).", "Furthermore, since $\\sigma \\equiv \\frac{\\partial u}{\\partial n} \\equiv -\\frac{\\partial \\bar{u}}{\\partial n}$ on $\\Gamma _A$ , the total flux $J$ from (REF ) will be given by $J = -I_\\sigma $ where we have introduced the shorthand $I_\\sigma := \\int _{\\Gamma _A} \\sigma \\, dS.$ We will not prove the existence of a solution to (REF ), but sketch a possible approach.", "If we replace the kernel $G_E$ in (REF ) with its first term $\\frac{2}{|x-x^{\\prime }|}$ , which is the free-space Green's function for the Laplace equation (up to a constant scaling factor), we obtain the first-kind integral equation for the Dirichlet problem on an open surface, which we can denote in operator form by $ \\mathcal {S}_0 \\sigma = f. 
$ This is a well-studied problem, which has a unique solution in the Sobolev space $H^{-\\frac{1}{2}}(\\Gamma _A)$ given data in $H^{\\frac{1}{2}}(\\Gamma _A)$ [20].", "Writing the full single layer potential operator in the form $\\mathcal {S}_0 + K$ , where $K$ is a compact pseudodifferential operator of order $-2$ , we may rewrite (REF ) in the form of a Fredholm integral equation of the second kind: $(I + \\mathcal {S}_0^{-1}K) \\sigma = \\mathcal {S}_0^{-1} \\, f.$", "Thus, to prove existence and uniqueness for the single patch equation, one can apply the Fredholm alternative to (REF ).", "That is, one need only show that the homogeneous version of the single patch equation has no nontrivial solutions.", "This is straightforward to prove when $\\varepsilon $ is sufficiently small, since the norm of $K$ goes to zero as $\\varepsilon $ goes to zero and the corresponding Neumann series converges.", "We conjecture that the result holds for any $\\varepsilon $ ." ], [ "The narrow escape problem", "The analytical formulation of the narrow escape problem is somewhat more complicated than that of the narrow capture problem, largely because of the non-uniqueness of the interior Neumann problem, but it leads to a similar integral equation.", "We first recast the Poisson problem (REF ) as a Laplace problem with inhomogeneous boundary conditions.", "Assume that $v$ satisfies ${\\left\\lbrace \\begin{array}{ll}\\Delta v = 0 & x \\in \\Omega \\\\v = 1 & x \\in \\Gamma _A \\\\\\frac{\\partial v}{\\partial n} = D & x \\in \\Gamma _R,\\end{array}\\right.", "}$ for some non-zero constant $D$ .", "Then $\\bar{v}$ given by $\\bar{v}= \\frac{v - 1}{3D} + \\frac{1 - |x|^2}{6}$ solves (REF ).", "We will therefore seek a method to produce a solution of (REF ) for some $D \\ne 0$ .", "Figure: MFPT $\\bar{v}$ plotted just inside the unit sphere for an example with $N = 10\\,000$ random, clustered patches of radius $\\varepsilon \\approx 0.0035$ .", "The integral equation
associated with this problem was solved in 269 seconds on a 60-core workstation, and in 35 minutes on a four-core, eight-thread laptop, to an $L^2$ residual error of approximately $6.5 \\times 10^{-8}$ .", "Further details are given in Section .", "Lemma 2 Let $v(x) = \\int _{\\Gamma _A} G_I(x,x^{\\prime }) \\sigma (x^{\\prime }) \\, dS(x^{\\prime }),$ where $\\sigma $ satisfies the first-kind integral equation $\\int _{\\Gamma _A} G_I(x,x^{\\prime }) \\sigma (x^{\\prime }) \\, dS(x^{\\prime }) = 1$ for $x \\in \\Gamma _A$ .", "Then $v$ solves (REF ) with $D = -I_\\sigma $ , for $I_\\sigma $ defined as in (REF ), and $I_\\sigma \\ne 0$ .", "Proof: The function $v(x)$ is harmonic in $\\Omega $ , and by Lemma REF , it satisfies the third condition of (REF ) with $D \\equiv -I_\\sigma $ , as long as $I_\\sigma \\ne 0$ .", "Taking $x$ to $\\Gamma _A$ and using the continuity of the single layer potential up to $\\Gamma _A$ , we find that $v$ will satisfy the second condition of (REF ) as long as $\\sigma $ satisfies (REF ).", "It remains only to show that if $\\sigma $ satisfies (REF ), then $I_\\sigma \\ne 0$ .", "If not, then $v$ given by (REF ) satisfies (REF ) with $D = 0$ , as does the constant function 1.", "It follows from Green's identity that solutions to (REF ) with the same value of $D$ are unique, so we must have $v \\equiv 1$ .", "The formula (REF ) for $G_I$ shows that if $|x^{\\prime }| = 1$ , then $G_I(0,x^{\\prime })= 2$ , so if $v \\equiv 1$ we have $ 1 = v(0) = 2 \\int _{\\Gamma _A} \\sigma (x^{\\prime }) \\, dS(x^{\\prime }) = 2 I_\\sigma , $ a contradiction.", "$\\Box $ The question of the existence of a solution to (REF ) is analogous to that for (REF ), which was discussed in Section REF .", "To calculate the average MFPT $\\mu $ directly from $\\sigma $ , we plug (REF ) into (REF ) to obtain $\\mu = \\frac{1}{3 D|\\partial \\Omega |} \\int _{\\partial \\Omega } v \\, dS - \\frac{1}{3D} +\\frac{1}{15}.$", "To calculate $\\frac{1}{|\\partial \\Omega
|} \\int _{\\partial \\Omega } v \\, dS$ , we use the representation (REF ): $\\frac{1}{|\\partial \\Omega |} \\int _{\\partial \\Omega } v \\, dS &= \\frac{1}{|\\partial \\Omega |} \\int _{\\partial \\Omega } \\int _{\\Gamma _A}G_I(x,x^{\\prime }) \\sigma (x^{\\prime }) \\, dS(x^{\\prime }) \\, dS(x) \\\\&= \\int _{\\Gamma _A} \\sigma (x^{\\prime }) \\left( \\frac{1}{|\\partial \\Omega |} \\int _{\\partial \\Omega }G_I(x,x^{\\prime }) \\, dS(x) \\right) \\, dS(x^{\\prime }).$ A calculation using the explicit form (REF ) of $G_I$ gives $\\frac{1}{|\\partial \\Omega |} \\int _{\\partial \\Omega } G_I(x,x^{\\prime }) \\, dS(x) = 2$ for any $x^{\\prime } \\in \\partial \\Omega $ .", "We therefore have $\\frac{1}{|\\partial \\Omega |} \\int _{\\partial \\Omega } v \\, dS = 2 I_\\sigma .$", "Plugging this into (REF ) and replacing $D$ by $-I_\\sigma $ gives $\\mu = \\frac{1}{3 I_\\sigma } - \\frac{3}{5}.$" ], [ "A multiple scattering formalism", "We have shown that the solutions of the two boundary value problems of interest, as well as the associated scalars $J$ and $\\mu $ , may be obtained by solving (REF ) and (REF ), respectively, on the collection of absorbing patches.", "These integral equations differ only by the sign of one term in their respective kernels, as seen in Section REF .", "Since our treatment of the two cases is the same, we drop the subscripts on $G_E$ and $G_I$ , and discuss the solution of $ \\int _{\\Gamma _A} G(x,x^{\\prime }) \\sigma (x^{\\prime }) \\, dS(x^{\\prime }) = 1 \\quad x \\in \\Gamma _A,$ where $\\sigma $ is an unknown density on $\\Gamma _A$ .", "Letting $\\Gamma _A = \\cup _{i=1}^N \\Gamma _{i}$ , where $\\Gamma _{i}$ is the $i$ th patch, and letting $\\sigma _i$ be the restriction of $\\sigma $ to $\\Gamma _{i}$ , we write this equation in the form $\\sum _{j=1}^N \\int _{\\Gamma _{j}} G(x,x^{\\prime }) \\sigma _j(x^{\\prime }) \\, dS(x^{\\prime }) = 1 \\quad x \\in \\Gamma _{i}, \\, i = 1,\\ldots ,N.$ For the sake of simplicity, we
assume that each patch has the same radius $\\varepsilon $ .", "We also assume that the patches are well-separated, in the sense that the distance between the centers of any two patches in arc length along the surface of the sphere is at least $3 \\varepsilon $ .", "That is, any two patches are separated by a distance greater than or equal to their own radius.", "For $x \\in \\Gamma _i$ , we define $\\mathcal {S}_{ij}$ by $(\\mathcal {S}_{ij} \\sigma _j)(x) := \\int _{\\Gamma _j} G(x,x^{\\prime }) \\sigma _j(x^{\\prime })\\, dS(x^{\\prime }).", "$ More specifically, we define each such operator in a coordinate system fixed about the center of $\\Gamma _j$ .", "Since all the patches have the same radius, the operators $\\mathcal {S}_{ii}$ are therefore identical, and we denote $\\mathcal {S}_{ii}$ by $\\mathcal {S}$ .", "Thus we may rewrite the many-patch integral equation (REF ) in the form $\\mathcal {S}\\sigma _i + \\sum _{j\\ne i}^N \\mathcal {S}_{ij} \\sigma _j = 1 \\quad i = 1,\\ldots ,N.$ The aim of this section is to reformulate (REF ) as a Fredholm system of the second kind in an efficient basis.", "Definition 1 Let $f$ be a smooth function on some patch $\\Gamma _i$ .", "The one-patch integral equation with data $f$ is defined by $\\mathcal {S}\\sigma _i = f,$ where $\\sigma _i$ is an unknown density on $\\Gamma _i$ .", "Remark 3 Writing (REF ) in the form $\\mathcal {S}\\sigma _i = 1 - \\sum _{j\\ne i}^N \\mathcal {S}_{ij} \\sigma _j,$ and observing that $\\mathcal {S}_{ij} \\sigma _j$ is a smooth function for $\\Gamma _j$ well-separated from $\\Gamma _i$ , we see that each $\\sigma _i$ satisfies a one-patch integral equation with smooth data.", "Conversely, if $\\sigma _1,\\ldots ,\\sigma _N$ satisfy (REF ), then each $\\mathcal {S}\\sigma _i$ is smooth on $\\Gamma _i$ .", "It is convenient to make use of an orthonormal basis $\\lbrace q_1,q_2,\\dots \\rbrace $ of smooth functions on each patch, so that for smooth $f$ on $\\Gamma _i$ we have $ f(x) = 
\\sum _{n=1}^\\infty \\hat{f}_n q_n(x),$ in the usual $L^2$ sense, with $ \\hat{f}_n = \\int _{\\Gamma _i} f(x) q_n(x) \\, dx.$ We postpone until Section  a discussion of our particular choice of the basis $\\lbrace q_n\\rbrace $ , which will be constructed using Zernike polynomials.", "We will denote by $\\hat{f}^K$ the vector of the first $K$ coefficients: $ \\hat{f}^K = (\\hat{f}_1,\\hat{f}_2,\\ldots ,\\hat{f}_K)^T.$ Definition 2 Let $f$ be a smooth function on $\\Gamma $ defined by (REF ), with $\\hat{f}$ , $\\hat{f}^K$ computed as above.", "The projection operators $\\mathcal {P}$ and $\\mathcal {P}^K$ are defined by $ \\left( \\mathcal {P}[f] \\right)_n = \\hat{f}_n, $ with $\\mathcal {P}^K$ defined in the same manner for $n \\le {K}$ .", "The synthesis operators $\\mathcal {Q}$ and $\\mathcal {Q}^K$ are defined by $ \\mathcal {Q}[\\hat{f}](x) = \\sum _{n=1}^\\infty \\hat{f}_n q_n(x),\\quad \\mathcal {Q}^K[\\hat{f}^{K}](x) = \\sum _{n=1}^{K} \\hat{f}_n q_n(x).$ $\\mathcal {P}$ and $\\mathcal {P}^K$ are left inverses of $\\mathcal {Q}$ and $\\mathcal {Q}^K$ , respectively.", "Finally, we define $b_n$ to be the solution of the one-patch integral equation with data given by the basis element $q_n$ : $b_n = \\mathcal {S}^{-1} q_n.$ Thus, if a smooth function $f$ on $\\Gamma _i$ is expanded as $f = \\sum _{n=1}^\\infty \\hat{f}_n q_n$ , then the solution of the one-patch integral equation with data $f$ is given by $\\mathcal {S}^{-1} f = \\sum _{n=1}^\\infty \\hat{f}_n b_n$ .", "This motivates the following definition.", "Definition 3 We denote the solution operator of the one-patch integral equation in the basis $\\lbrace q_n\\rbrace $ by $\\mathcal {B}= \\mathcal {S}^{-1} \\mathcal {Q}.$ For $\\hat{f} = \\lbrace \\hat{f}_1,\\hat{f}_2,\\ldots \\rbrace $ and $f(x) =\\sum _{n=1}^\\infty \\hat{f}_n q_n(x)$ , $\\mathcal {B}$ satisfies $\\mathcal {B}[\\hat{f}](x) = \\sum _{n=1}^\\infty \\hat{f}_n b_n(x).$ We denote the solution operator of the one-patch integral equation in
the truncated basis $\\lbrace q_n\\rbrace _{n=1}^K$ by $\\mathcal {B}^K= \\mathcal {S}^{-1} \\mathcal {Q}^K.$ For $\\hat{f} = (\\hat{f}_1,\\hat{f}_2,\\ldots ,\\hat{f}_K)$ and $f(x) =\\sum _{n=1}^K \\hat{f}_n q_n(x)$ , $\\mathcal {B}^K$ satisfies $\\mathcal {B}^K[\\hat{f}](x) = \\sum _{n=1}^K \\hat{f}_n b_n(x).$ Note that the construction of $\\mathcal {B}$ requires solving the one-patch integral equations with data $q_1,q_2,\\ldots $ to obtain $b_1,b_2,\\ldots $ , and that the construction of $\\mathcal {B}^K$ requires solving the first $K$ of these equations.", "For a fixed patch radius $\\varepsilon $ , these solutions are universal and do not depend on the number or arrangement of patches in the full problem.", "Given $\\mathcal {B}$ , we are now able to rewrite the integral equation (REF ) as a well-conditioned Fredholm system of the second kind in the basis $\\lbrace q_n\\rbrace $ .", "On $\\Gamma _i$ , we define a function $f_i$ by $ f_i = \\mathcal {S}\\sigma _i.", "$ Substituting into (REF ), we have $f_i + \\sum _{j\\ne i}^N \\mathcal {S}_{ij} \\mathcal {S}^{-1} f_j = 1\\quad i = 1,\\ldots ,N.
$ To transform to the basis $\\lbrace q_n\\rbrace $ , we write $f_i$ in the form $f_i = \\mathcal {Q}\\hat{f}_i$ and multiply on the left by $\\mathcal {P}$ to obtain $\\hat{f}_i + \\mathcal {P}\\sum _{j\\ne i}^N \\mathcal {S}_{ij}\\mathcal {B}\\hat{f}_j =\\mathcal {P}\\, 1 \\quad i = 1,\\ldots ,N.$ Since the patches $\\Gamma _i$ and $\\Gamma _j$ are well-separated, $\\mathcal {P}\\mathcal {S}_{ij} \\mathcal {B}$ is a compact operator for $i \\ne j$ , so that (REF ) is a Fredholm system of the second kind.", "The corresponding truncated system takes the form $\\hat{f}_i^K + \\mathcal {P}^K\\sum _{j\\ne i}^N\\mathcal {S}_{ij}\\mathcal {B}^K\\hat{f}_j^K =\\mathcal {P}^K\\, 1 \\quad i = 1,\\ldots ,N,$ where we have used the approximation $f_i \\approx \\mathcal {Q}^K\\hat{f}_i^K$ .", "Remark 4 We refer to the approach described above as a multiple scattering formalism by analogy with the problem of wave scattering from multiple particles in a homogeneous medium.", "In the language of scattering theory, one would say that for the $i$ th patch, the boundary data is the known data ($\\mathcal {S}\\sigma _i = 1$ ), perturbed by the potential “scattered\" from all other patches, namely $\\sum _{j\\ne i}^N \\mathcal {S}_{ij} \\sigma _j$ .", "Solving the system (REF ) corresponds to determining how the collection of uncoupled single patch solutions $\\mathcal {S}\\sigma _i = 1$ needs to be perturbed to account for the “multiple scattering\" effects.", "The approach developed above, where $f_i = \\mathcal {S}\\sigma _i$ are the unknowns, has many advantages over solving (REF ) directly, even with ${\\mathcal {S}}^{-1}$ as a left preconditioner.", "By working in the spectral basis, we avoid the need to discretize $\\sigma _i$ on each patch, the number of degrees of freedom per patch is significantly reduced, and the linear system is a well-conditioned Fredholm equation of the second kind.", "Remark 5 The original unknowns $\\sigma _i$ may be recovered from the solution of (REF 
) or (REF ) using the formula $\\sigma _i = \\mathcal {B}\\hat{f}_i \\approx \\mathcal {B}^K \\hat{f}_i^K.$ Thus, we may think of the unknowns $\\hat{f}_i$ as a representation of the unknown density $\\sigma _i$ in the basis $\\lbrace b_n\\rbrace $ .", "We turn now to the construction of an orthonormal basis $\\lbrace q_n \\rbrace $ for smooth functions on a patch, the construction of the singular solutions $b_n = \\mathcal {S}^{-1} q_n$ , and the efficient solution of the discretized multiple scattering system (REF )." ], [ "A basis for smooth functions on a patch", "It is well-known that the Zernike polynomials are a spectrally accurate, orthogonal basis for smooth functions on the disk.", "For a thorough discussion of these functions, we refer the reader to [21].", "Here, we simply summarize their relevant properties.", "The Zernike polynomials on the unit disk $0 \\le r \\le 1$ , $0 \\le \\theta < 2 \\pi $ are given by ${\\left\\lbrace \\begin{array}{ll}Z_n^m(r,\\theta ) &= R_n^m(r) \\cos (m \\theta ) \\\\Z_n^{-m}(r,\\theta ) &= R_n^m(r) \\sin (m \\theta ),\\end{array}\\right.", "}$ with $0 \\le m < \\infty $ , $m \\le n < \\infty $ , and $R_n^m(r) = (-1)^{(n-m)/2} r^m P_{(n-m)/2}^{m,0}(1 - 2 r^2), $ where $P_n^{\\alpha ,\\beta }(x)$ is a Jacobi polynomial on $[-1,1]$ .", "The Jacobi polynomials are orthogonal on $[-1,1]$ with respect to the weight function $(1-x)^\\alpha (1+x)^\\beta $ .", "Thus, for fixed $m$ , the functions $R_n^m(r)$ are orthogonal on $[0,1]$ with respect to the weight function $r$ .", "This gives the orthogonality relation $ \\int _0^{2 \\pi } \\int _0^1 Z_{n_1}^{m_1}(r,\\theta )Z_{n_2}^{m_2}(r,\\theta ) r \\, dr \\, d\\theta = \\frac{(1 + \\delta _{m_1,0}) \\pi }{2n_1 + 2} \\delta _{n_1,n_2} \\delta _{m_1,m_2}.$ The natural truncation of this basis is to fix a cutoff mode $M$ in both the radial and angular variables, and to let $0 \\le m \\le n \\le M$ .", "This yields $K = (M+1)(M+2)/2$ basis functions.", "To use this basis on a generic 
patch $\\Gamma _i$ , we define a polar coordinate system $(r,\\theta )$ about the patch center, for which $r$ is the distance in arc length along the sphere from the center, and $\\theta $ is the polar angle.", "We rescale the radial variable from $[0,1]$ to $[0,\\varepsilon ]$ , transforming the Zernike polynomials to functions on $\\Gamma _i$ .", "Finally, the basis functions $q_1,\\ldots ,q_K$ discussed in Section  can be defined as the scaled Zernike polynomials up to mode $M$ .", "From the orthogonality relation (REF ), the projection operators $\\mathcal {P}$ and $\\mathcal {P}^K$ are obtained as normalized inner products against Zernike polynomials in polar coordinates.", "This Zernike transform can be implemented numerically using a tensor product quadrature with a Gauss-Legendre rule in the radial variable and a trapezoidal rule in the angular variable.", "The number of grid points required to obtain the exact Zernike coefficients of a function in the space spanned by $q_1,\\dots ,q_K$ is $\\mathcal {O}(K)$ ; we denote this number by $K^*$ .", "We refer to these points as the Zernike sampling nodes $x_1^z,\\ldots ,x_{K^*}^z$ (see [21] for further details).", "Remark 6 Rewriting (REF ) in the form $\\hat{f}_i^K =\\mathcal {P}^K\\, \\left(1 - \\sum _{j\\ne i} \\mathcal {S}_{ij}\\mathcal {B}^K\\hat{f}_j^K \\right),$ we see that the truncation error compared with (REF ) depends on how well the smooth function $1 - \\sum _{j\\ne i} \\mathcal {S}_{ij}\\mathcal {B}^K\\hat{f}_j^K$ is represented in the space spanned by $q_1,\\ldots ,q_K$ .", "In the one-patch case, the summation term vanishes, and $K=1$ is sufficient.", "For multiple patches, the choice of $K$ depends largely on how well-separated the patches are.", "Since the Zernike basis is spectrally accurate, $M$ grows only logarithmically with the desired precision.", "In practice, a posteriori estimates are easily obtained for any fixed configuration by inspection of the decay of the Zernike coefficients
$\\hat{f}_i^K$ in the computed solution." ], [ "Informal description of the one-patch solver", "While the details of our solver for the one-patch integral equation $ \\mathcal {S}\\sigma _i = f $ are deferred to Section , we outline the general approach here.", "First, we note that in the absence of curvature (i.e.", "a flat disk on a half-space) and with the associated terms of the Green's function removed, the solution $\\sigma _i$ is known to have a square root singularity at the disk edge [12], [14], [15], [20], [22].", "In our case, we will explicitly include this square root singularity in the representation of $\\sigma _i$ , but also allow for weaker singularities - which we have observed and will demonstrate in Section REF - by using a discretization that is adaptively refined toward the edge $\\partial \\Gamma _i$ .", "Assume then that we have discretized the patch $\\Gamma _i$ using a suitable polar mesh with $n_f$ fine grid points, denoted by $x_{i,1}^f,\\ldots ,x_{i,n_f}^f$ .", "The fine grid points for different patches are identical relative to the coordinate systems of their own patches.", "We denote the corresponding samples of the right-hand side $f$ and $\\sigma _i$ by $\\begin{aligned}\\vec{f} &= (f(x_{i,1}^f),\\ldots ,f(x_{i,n_f}^f))^T, \\\\\\vec{\\sigma }_i &= ((\\vec{\\sigma }_i)_1,\\ldots ,(\\vec{\\sigma }_i)_{n_f})^T\\approx (\\sigma _i(x_{i,1}^f),\\ldots ,\\sigma _i(x_{i,n_f}^f))^T.\\end{aligned}$ We assume that $\\mathcal {S}$ is discretized to high-order accuracy by a matrix $S$ with $\\mathcal {S}[\\sigma _i](x_{i,k}^f)\\approx \\sum _{l=1}^{n_f} S(k,l) (\\vec{\\sigma _i})_l,$ so that the discretized system takes the form $S \\vec{\\sigma }_i = \\vec{f}.$ We will also require a set of quadrature weights, denoted by $w_1^f,\\ldots ,w_{n_f}^f$ and identical for each patch, that permit the accurate integration over $\\Gamma _i$ of the product of an arbitrary smooth function with the discretized density $\\vec{\\sigma }_i$ , taking into 
account the fact that $\\sigma _i$ has an edge singularity.", "That is, we assume that $\\int _{\\Gamma _i} g(x) \\sigma _i(x) \\, dS(x) \\approx \\sum _{l=1}^{n_f}g(x_l^f) (\\vec{\\sigma }_i)_l w_l^f$ for any smooth $g$ , with high-order accuracy.", "In the next section, we will use this quadrature to discretize the operators $\\mathcal {S}_{ij}$ .", "The solutions of the $K$ one-patch integral equations (REF ) may be obtained in a precomputation, after which we have access to the functions $b_1,\\ldots ,b_K$ sampled on the fine grid.", "We assemble these functions into an $n_f \\times K$ matrix $B$ with $B(n,m) = b_m(x_n^f).$ $B$ is then the discretization of the operator $\\mathcal {B}^K$ , mapping the first $K$ Zernike coefficients of a smooth function to the solution of the corresponding one-patch integral equation sampled on the fine grid.", "If we denote by $Q$ the discretization of the synthesis operator $\\mathcal {Q}^K$ as an $n_f\\times K$ matrix, $Q(i,j) = q_j(x_i^f),$ then we have, as in Definition REF , $SB = Q.$ In short, the precomputation amounts to solving this matrix system for $B$ ." 
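The precomputation $SB = Q$ can be mimicked in a few lines of code. Below is a minimal numpy sketch under stated assumptions: the "operator" $S$ is a well-conditioned stand-in (identity plus a Gaussian kernel) rather than the true discretized single-layer operator, Legendre polynomials on a 1-D grid play the role of the Zernike basis $q_1,\ldots ,q_K$, and all sizes are illustrative.

```python
import numpy as np

# Toy analogue of the one-patch precomputation S B = Q. The "operator" S
# is a well-conditioned stand-in (identity plus a Gaussian kernel), and
# Legendre polynomials on a 1-D grid play the role of the Zernike basis
# q_1, ..., q_K; sizes are illustrative, not the paper's discretization.
rng = np.random.default_rng(0)
n_f, K = 60, 6

t = np.linspace(0.0, 1.0, n_f)
S = np.eye(n_f) + 0.1 * np.exp(-np.subtract.outer(t, t) ** 2)

# Columns of Q sample the K basis functions on the fine grid.
Q = np.polynomial.legendre.legvander(2.0 * t - 1.0, K - 1)

# Precomputation: one solve with K right-hand sides gives the sampled
# "singular solutions" b_1, ..., b_K as the columns of B.
B = np.linalg.solve(S, Q)

# Afterwards, solving S sigma = f for any f in the span of the basis is
# just a matrix-vector product with B.
fhat = rng.standard_normal(K)   # basis coefficients of a right-hand side
sigma = B @ fhat
print(np.allclose(S @ sigma, Q @ fhat))  # True
```

Once $B$ is stored, every subsequent right-hand side costs only a matrix-vector product, which is the point of performing the expensive solves once per patch radius.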
], ["Discretization of the multiple scattering system", "We return now to the multiple scattering system (REF ).", "The unknowns on $\Gamma _i$ are defined in the truncated Zernike basis as $\hat{f}_i^K$ .", "We will need as intermediate variables the fine grid samples of $\sigma _i(x)$ .", "From Remark REF , we define the sampling vector $\vec{\sigma _i}$ by $ \vec{\sigma _i} = B \hat{f}_i^K \approx \mathcal {B}^K \hat{f}_i^K.", "$ In order to discretize the integral operators $\mathcal {S}_{ij}$ for $i \ne j$ , we note that $G(x,x^{\prime })$ is smooth for $x \in \Gamma _i$ , $x^{\prime } \in \Gamma _j$ , and use the quadrature (REF ).", "This yields $\int _{\Gamma _{j}} G(x,x^{\prime }) \sigma _j(x^{\prime }) \, dS(x^{\prime }) \approx \sum _{l=1}^{n_f} G(x,x_{j,l}^f) (\vec{\sigma _j})_l w_l^f.$ Setting $x = x_{i,k}^z$ to be the $k$ th Zernike sampling node on $\Gamma _{i}$ , we define the matrix $S_{ij}$ by $S_{ij}(k,l) = G(x_{i,k}^z,x_{j,l}^f) w_l^f.", "$ Thus, $S_{ij}$ maps a density sampled on the fine grid on $\Gamma _j$ to the smooth field it induces at the Zernike sampling nodes on $\Gamma _i$ .", "Lastly, we discretize the truncated Zernike transform $\mathcal {P}^K$ as a $K\times K^*$ matrix $P$ using the trapezoidal-Legendre scheme described in Section .", "Definition 4 The discrete Zernike transform $P$ is defined to be the mapping of a smooth function sampled on the $K^*$ Zernike sampling nodes to its $K$ Zernike coefficients.", "We can now write the multiple scattering system (REF ) in a fully discrete form, $\hat{f}_i^K + P \sum _{j \ne i} S_{ij} B\hat{f}_j^K = P \vec{1} \quad i=1,\ldots ,N,$ where $\vec{1}$ is the vector of length $K^*$ with all entries equal to 1.", "Since $P \in \mathbb {R}^{K \times K^*}$ , $S_{ij} \in \mathbb {R}^{K^* \times n_f}$ , and $B \in \mathbb {R}^{n_f \times K}$ , this is a linear system of dimensions $K N \times K N$ , with $K \ll n_f$ degrees of freedom per 
patch.", "As a discretization of a Fredholm system of the second kind, it is amenable to rapid solution using an iterative method such as GMRES [16].", "We now describe how to calculate the constants $J$ and $\\mu $ from the solution of (REF ).", "We saw in Sections REF and REF that these can be computed directly from $I_\\sigma =\\sum _{i=1}^N \\int _{\\Gamma _{i}} \\sigma _i \\, dS$ .", "Using the fine grid quadrature (REF ), we have $I_\\sigma = \\sum _{i=1}^N\\int _{\\Gamma _{i}} \\sigma _i \\, dS \\approx \\sum _{i=1}^N \\sum _{k=1}^{n_f} (B\\hat{f}_i^K)_kw_k^f = (w_1^f,\\ldots ,w_{n_f}^f) B \\sum _{i=1}^N \\hat{f}_i^K.$ Since we may precompute the row vector $I := (w_1^f,\\ldots ,w_{n_f}^f) B$ of length $K$ , the cost to compute $I_\\sigma $ is $\\mathcal {O}(NK)$ .", "When the system (REF ) is solved iteratively, each matrix-vector product is dominated by the computation of the “multiple scattering events” $P \\sum _{j \\ne i} S_{ij} B \\hat{f}_j^K$ for $i=1,\\ldots ,N$ .", "That is, for each patch $\\Gamma _i$ , we must compute the Zernike coefficients of the field induced on that patch by the densities on all other patches.", "Note that if we were to calculate the above sums by simple matrix-vector products, the cost would be $\\mathcal {O}(n_f K N^2)$ .", "We turn now to the description of a scheme that permits the computation of these sums using $\\mathcal {O}(K N \\log N)$ operations, with a constant which depends only on the desired precision, but not on $n_f$ ." 
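To make the structure of the discrete system concrete, the following toy numpy/scipy sketch applies the block matvec $\hat{f}_i^K + P \sum_{j \ne i} S_{ij} B \hat{f}_j^K$ matrix-free and hands it to GMRES. All matrices here are random stand-ins with the dimensions quoted above (weak off-diagonal coupling keeps the toy system second-kind-like); in the actual solver $P$, $S_{ij}$, and $B$ come from the quadratures described in this section.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy version of the discrete multiple scattering system
#   fhat_i + P sum_{j != i} S_ij B fhat_j = P 1,   i = 1..N.
# P, S_ij, and B are random stand-ins with the dimensions from the text.
rng = np.random.default_rng(1)
N, K, Kstar, n_f = 8, 6, 12, 40
P = rng.standard_normal((K, Kstar)) / Kstar
B = rng.standard_normal((n_f, K))
S_off = 0.02 * rng.standard_normal((N, N, Kstar, n_f))  # weak coupling

def matvec(x):
    """Apply the block system matrix to the stacked coefficient vector."""
    F = x.reshape(N, K)
    out = F.copy()
    for i in range(N):
        incoming = sum(S_off[i, j] @ (B @ F[j]) for j in range(N) if j != i)
        out[i] += P @ incoming
    return out.ravel()

A = LinearOperator((N * K, N * K), matvec=matvec)
rhs = np.tile(P @ np.ones(Kstar), N)   # the P*1 right-hand side
sol, info = gmres(A, rhs)              # info == 0 signals convergence
print(info, np.linalg.norm(A.matvec(sol) - rhs))
```

Because the matrix is applied only through `matvec`, the fast $\mathcal{O}(KN\log N)$ evaluation of the multiple scattering sums can be substituted for the naive loop without changing anything else.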
], ["Efficient representation of outgoing and incoming fields", "Our fast algorithm relies on what is variously referred to as a compressed, skeletonized, or sparsified representation of the far field induced by a source density $\sigma _i$ on a single patch $\Gamma _i$ (Fig.", "REF ).", "We define the far field region $\Theta _i$ for a patch $\Gamma _i$ to be the set of points whose distance from the center of $\Gamma _i$ (measured in arc length along the surface of the sphere) is greater than $2\varepsilon $ .", "In light of our restriction on the minimum patch separation distance, this ensures that the far field region of a particular patch contains every other patch.", "Figure: For a patch $\Gamma _i$ , the far field region $\Theta _i$ is defined as the complement on the surface of the sphere of a disk of radius $2\varepsilon $ , measured in arc length, about the center of $\Gamma _i$ .", "The black dots in the figure represent the subset of the fine grid points used to efficiently represent the outgoing field induced by the density $\sigma _i$ .", "We start from (REF ), which was used to define the matrix $S_{ij}$ .", "We will show that there is a subset of $p$ fine grid points with $p \ll n_f$ and modified source strengths $\vec{\rho }_i =(\rho _{i,1},\rho _{i,2},\dots ,\rho _{i,p})^T$ so that $\int _{\Gamma _i} G(x,x^{\prime }) \sigma _i(x^{\prime }) \, dS(x^{\prime }) \approx \sum _{l=1}^{n_f} G(x,x_{i,l}^f) (\vec{\sigma }_i)_l w_l^f\approx \sum _{m=1}^pG(x,x_{i,\pi (m)}^f) \rho _{i,m},$ for any $x \in \Theta _i$ .", "Moreover, there is a stable algorithm for obtaining this compressed or skeletonized outgoing representation.", "Here, $\pi (m)$ is an indexing function which maps $\lbrace 1,\ldots ,p\rbrace \rightarrow \lbrace 1,\ldots ,n_f\rbrace $ , and identifies which of the original fine grid points are used in the representation.", "The number $p$ represents the numerical rank, to a specified precision, of the $n_f$ 
functions $\\lbrace G(x,x_{i,l}^f)\\rbrace $ on $\\Theta _i$ .", "Remark 7 The existence of such low-rank factorizations is discussed in detail in [23], [24], [25].", "For the purposes of computation, we will use the interpolative decomposition (ID) [17], [23], [26], described briefly below.", "The ID and related compression schemes are essential and widely used in hierarchical, fast algorithms for applying and inverting dense matrices (see for example [27], [28], [29], [30], [31], [32], [33], [34], [35], [36] and the references therein)." ], [ "The interpolative decomposition", "We consider a generic patch $\\Gamma _i$ and, for simplicity, drop the patch index $i$ on all quantities.", "We first discretize $\\Theta $ on a training grid $x_1^t,\\ldots ,x_{n_t}^t$ of $n_t$ points chosen to be sufficiently fine to accurately represent smooth functions on $\\Theta $ .", "We can then obtain a matrix $A$ of size $n_t \\times n_f$ , with entries $A_{jl} =G(x_j^t,x_l^f)$ , so that the $l$ th column of $A$ is a discretization of the function $G(x,x_l^f)$ on the training grid.", "Given a user-specified tolerance $\\epsilon $ , the ID takes as input a matrix $A$ , and returns the factorization $\\widetilde{A} \\Pi $ with $\\Vert A - \\widetilde{A} \\Pi \\Vert _2 = O(\\epsilon ),$ where $\\widetilde{A}$ is $n_t \\times p$ and $\\Pi $ is $p\\times n_f$ .", "The parameter $p$ is the numerical rank of $A$ determined by the ID as part of the factorization.", "The columns of $\\widetilde{A}$ are a $p$ -column subset of the original matrix $A$ , chosen so that the column space of $\\widetilde{A}$ approximates that of $A$ .", "The matrix $\\Pi $ contains the coefficients needed to approximately reconstruct the columns of $A$ from those of $\\widetilde{A}$ .", "If we define the indexing function $\\pi $ so that the $m$ th column of $\\widetilde{A}$ is the $\\pi (m)$ th column of $A$ , then the approximation (REF ) implies that $G(x_j^t,x_l^f) \\approx \\sum _{m=1}^p G(x_j^t,x_{\\pi 
(m)}^f) \\Pi _{ml}$ for $l=1,\\ldots ,n_f$ .", "Since the columns of $A$ represent the functions $\\lbrace G(x,x_l^f)\\rbrace $ on a fine training grid, the expression above holds not just for $x \\in \\lbrace x_j^t\\rbrace $ , but more generally for $x \\in \\Theta $ .", "That is, $G(x,x_l^f) \\approx \\sum _{m=1}^p G(x,x_{\\pi (m)}^f) \\Pi _{ml}.$ Summing both sides of this expression against $(\\vec{\\sigma })_l w_l^f$ and rearranging yields $\\sum _{l=1}^{n_f} G(x,x_l^f) (\\vec{\\sigma })_l w_l^f \\approx \\sum _{l=1}^{n_f}\\sum _{m=1}^p G(x,x_{\\pi (m)}^f) \\Pi _{ml} (\\vec{\\sigma })_l w_l^f =\\sum _{m=1}^p G(x,x_{\\pi (m)}^f) (\\Pi W \\vec{\\sigma })_m$ where $W$ is a diagonal $n_f \\times n_f$ matrix with $W_{ll} = w_l^f$ .", "Since $\\vec{\\sigma } = B \\hat{f}^K$ , we let $T := \\Pi W B$ to obtain the representation (REF ) with $ \\vec{\\rho } = T \\hat{f}^K.$ $T$ is a generic $p \\times K$ matrix which may be formed and stored once $\\Pi $ , $W$ , and $B$ are available.", "We emphasize that each of these matrices is identical for all patches of a given radius $\\varepsilon $ and may therefore be precomputed.", "$\\Pi $ is obtained from a single interpolative decomposition, $W$ is a simply a matrix of quadrature weights, and $B$ is computed by solving a sequence of one-patch integral equations as explained in Section .", "Using this compression scheme alone, it is straightforward to reduce the cost of computing the sums (REF ) from $\\mathcal {O}(K n_f N^2)$ to $\\mathcal {O}(K p N^2)$ .", "The tools introduced in the remainder of this section will allow us to reduce the cost further to $\\mathcal {O}(K p N \\log N)$ ." 
], ["Quadtree on the sphere", "We now describe a data structure which will enable us to organize groups of patches in a hierarchical fashion.", "We first inscribe the sphere in a cube (see Fig.", "REF ).", "We then project each patch center onto the surface of the cube via the ray from the origin through the patch center (indicated by the arrows in the figure).", "This defines a set of points on the surface of the cube.", "We then build a quadtree on each face of the cube, subdividing boxes until there is only one point per box, and pruning empty boxes in the process.", "The union of these six quadtrees is an FMM-like full tree data structure, which provides a subdivision of the sphere itself into a hierarchy of levels.", "The patches assigned to a particular box in the full tree will be said to form a patch group.", "Each patch is a member of one patch group at each level of the full tree.", "At the leaf level, each group consists of a single patch.", "We define parent, child, and neighbor boxes in the full tree in the same way as in an ordinary quadtree.", "The only modification to the definition of a neighbor box is that it wraps across cube edges and corners.", "Thus, a box adjacent to an edge has eight neighbors (like an interior box) unless it is a corner box, in which case it has seven neighbors.", "Well-separatedness and the interaction list for boxes or their corresponding patch groups are defined as in the usual FMM.", "Two boxes at a given level are well-separated if they are not neighbors, and the interaction list for a particular box is comprised of the well-separated children of its parent's neighbors.", "We will sometimes refer to a patch $\Gamma _i$ as being in the interaction list of some patch group $\gamma $ , by which we mean that $\Gamma _i$ is contained in a group which is in the interaction list of $\gamma $ .", "Figure: The sphere is inscribed in a cube and each patch center is projected to a face of the cube by a ray emanating from the 
sphere center (left).", "An adaptive quadtree is then built on each face until, at the finest level, there is one patch in every non-empty leaf node in the quadtree (right)." ], ["The representation of incoming fields on patch groups", "Since the incoming field due to remote source patches in the interaction list of a patch group $\gamma $ is smooth, it can be efficiently represented on a spectral polar grid (see Fig.", "REF ).", "This requires the construction of a bounding circle on the surface of the sphere, enclosing all of the patches in $\gamma $ , which circumscribes the grid.", "Incoming field values can then be obtained at arbitrary points inside the bounding circle by interpolation.", "We refer to the grid samples of the incoming field as an incoming representation.", "Figure: For a group of $m$ patches, the field due to well-separated source patches may be captured with high order accuracy on a polar grid which covers all $m$ patches.", "The bounding circle is straightforward to construct using a “smallest circle algorithm” for a collection of points in the plane, suitably adapted to the sphere (see [37], [38], [39] and the references therein for discussion of the smallest circle problem).", "Given a bounding circle for a patch group, we can build a local polar coordinate system $(r,\theta )$ , for which $r = 0$ corresponds to the center of the patch group, and $r = R$ corresponds to the bounding circle.", "We must select an incoming grid in these coordinates which can represent a smooth incoming field in a high order manner with as few grid points as possible.", "For this, we will use a parity-restricted Chebyshev-Fourier basis, formed by taking products of scaled Chebyshev polynomials in the radial variable $r \in [-R,R]$ with trigonometric functions in the angular variable $\theta \in [0, 2 \pi )$ .", "The coefficients of an expansion in these basis functions corresponding to Chebyshev and Fourier modes of different parity can be shown to be zero, 
hence the name of the basis.", "This is an efficient and spectrally accurate basis with a simple associated grid [21].", "Namely, the coefficients of the expansion may be computed from function samples on a polar grid comprised of the scaled Chebyshev nodes in $r \in [0,R]$ and equispaced nodes in $\theta \in [0,2 \pi )$ .", "The desired field may then be evaluated at any point inside a patch group's bounding circle by evaluating the resulting Chebyshev-Fourier expansion.", "It is straightforward to verify that the number of grid points and coefficients required to obtain an accuracy $\epsilon $ is $\mathcal {O}(\log ^2(1/\epsilon ))$ ."], ["Solution of the multiple scattering system", "We now describe our method to solve the discretized many-patch system (REF ), including the fast algorithm for accelerating the computation of the multiple scattering interactions (REF ) within a GMRES iteration.", "Step 1: Precomputation (for each choice of $\varepsilon $ ) Given the patch radius $\varepsilon $ , select the Zernike truncation parameter $K$ and form the matrix $Q$ .", "(a) Solve the system $S B = Q$ described in Section .", "(b) Construct the matrix $T$ defined in Section REF by building and composing the matrices $\Pi $ , $W$ , and $B$ .", "$\Pi $ need not be stored after $T$ is formed.", "(c) Construct the vector $I = (w_1^f,\ldots ,w_{n_f}^f) B$ , used to obtain the quantities $J$ and $\mu $ in (REF ).", "At this point we no longer need to store $B$ , only the $p \times K$ matrix $T$ and the $1 \times K$ vector $I$ .", "The storage associated with the outputs of the precomputation phase is therefore negligible.", "Step 2: Construction of hierarchical data structure Let $N$ denote the number of patches on the surface of the sphere, assumed to satisfy the minimum patch separation condition introduced in Section .", "(a) Form the quadtree on the sphere described in Section REF .", "The data structure should associate each patch with its group 
at every level, and identify the interaction list of every patch group.", "(b) For each patch group, construct the incoming grid described in Section REF .", "For each patch, construct the Zernike sampling grid described in Section .", "Step 3: Iteration We use GMRES to solve the system (REF ).", "At each iteration, we must apply the system matrix; that is, we must compute $\\hat{f}_i^K + P \\sum _{j \\ne i} S_{ij} B \\hat{f}_j^K$ for $i=1,\\ldots ,N$ , where here $(\\hat{f}_1^K,\\ldots ,\\hat{f}_N^K)^T \\in \\mathbb {R}^{KN}$ is the input vector at a given iteration.", "The following algorithm computes this expression in $\\mathcal {O}(N \\log N)$ operations.", "Compute and store the outgoing coefficients $\\vec{\\rho }_i = T \\hat{f}_i^K$ for each patch, $i=1,\\ldots ,N$ .", "Cost: Approximately $p K N$ .", "Loop through every patch group in every level.", "For each patch group $\\gamma $ , loop through all patches in its interaction list.", "For each such patch $\\Gamma _i$ , evaluate the field induced by the density on $\\Gamma _i$ on the incoming grid of $\\gamma $ , using the outgoing representation (REF ).", "Add together all such field values to obtain the total incoming field on the incoming grid.", "Cost: If $q$ is an upper bound on the number of points in each incoming grid, the cost of evaluating a single outgoing representation on an incoming grid is at most $q p$ .", "At each level, the outgoing representation corresponding to each patch must be evaluated on at most 27 incoming grids, since the interaction list of each patch's group at that level contains at most 27 other groups.", "There are approximately $\\log _4 N$ levels.", "Therefore, the cost of this step is approximately $27 q p N \\log _4 N$ .", "At the leaf level of the tree, each patch group $\\gamma $ contains a single patch, say $\\Gamma _i$ .", "Though we have already evaluated the outgoing representation for $\\Gamma _i$ on the incoming grids of all (single-patch) groups in the 
interaction list of $\\gamma $ , we now do so also for the neighbors of $\\gamma $ , which are also single-patch groups but are not contained in the interaction list of $\\gamma $ .", "We add these contributions to the field values already stored on the incoming grids of these neighbor patches.", "Cost: Since each leaf-level single-patch group has at most 8 neighbors, the cost of this step is approximately $8 q p N$ .", "Note: For each patch $\\Gamma _i$ , the incoming field due to every other patch has now been stored in the incoming grid of exactly one patch-group of which $\\Gamma _i$ is a member.", "Indeed, every other patch is either a neighbor of $\\Gamma _i$ at the leaf level, or it is contained in exactly one of the interaction lists of the patch groups containing $\\Gamma _i$ .", "Loop through each patch group.", "For every patch $\\Gamma _i$ in a group $\\gamma $ , evaluate the interpolant of the incoming field stored on the incoming grid of $\\gamma $ at the Zernike sampling nodes on $\\Gamma _i$ .", "Cost: There are $\\mathcal {O}(K)$ Zernike sampling nodes, so the cost of each interpolation is approximately $q^2$ to form the interpolant and $Kq$ to evaluate it.", "Each patch is a member of a single group at each level, so we must carry out approximately $N \\log _4 N$ such interpolations.", "The total cost is therefore approximately $(q^2+ Kq) N \\log _4 N$ .", "(For large $q$ , this step could be accelerated with fast transform methods but $q$ is generally too small for this to provide any significant benefit.)", "At this point, we have computed the field due to all other patches on the Zernike sampling grid on each patch.", "That is, we have computed the sums $\\sum _{j \\ne i} S_{ij} B \\hat{\\sigma }_j$ for $i =1,\\ldots ,N$ .", "Apply the matrix $P$ to the values stored on the Zernike sampling grid on each patch and add $\\hat{f}_i^K$ to the result to obtain (REF ).", "Cost: Approximately $K^2 N$ .", "The total cost of each iteration is therefore 
$\\mathcal {O}(N \\log N)$ , with asymptotic constants which involve the parameters $K$ , $q$ , and $p$ associated with the resolution of smooth functions on spectral grids.", "The singular character of the problem is dealt with entirely during the precomputation phase." ], [ "Optimizations and parallelization", "While the algorithm described above has the desired computational complexity, there are several practical considerations that are worth discussing to optimize its performance.", "Selection of incoming grid parameters: Rather than making a uniform choice of the radial and azimuthal truncation parameters for the incoming grid, we can compute these adaptively as follows.", "For each patch group $\\gamma $ , we determine the distance from its bounding circle to the nearest patch in its interaction list.", "We then adaptively construct an incoming grid which accurately interpolates a collection of point sources $G(x,x^{\\prime })$ at points $x^{\\prime }$ this distance away.", "This adaptive interpolation is carried out by increasing the incoming grid truncation parameters until the last few Legendre-Fourier coefficients of the interpolant fall below some specified tolerance.", "Additional compression of the outgoing representation: Instead of using the same outgoing coefficients $\\vec{\\rho }_i$ for each level of the quadtree, we can associate with each patch a different outgoing representation for each level.", "Recall that the far field regions $\\Theta _i$ were constructed identically for each patch $\\Gamma _i$ to be as large as possible, consistent with the minimum patch separation.", "This way, one could build a single generic matrix $T$ taking a density on a patch to its outgoing representation.", "$T$ was built by compressing the outgoing field due to a generic patch $\\Gamma $ against a grid on a generic far field region $\\Theta $ .", "Instead, we can build one such matrix for each level of the quadtree by constructing a generic far field region for 
each level.", "Each such far field region is an annulus or disk on the surface of the sphere.", "For each level, it is taken to be just large enough so that for any $i =1,\\ldots ,N$ , in the coordinate system of $\\Gamma _i$ , it covers the bounding circle of every group $\\gamma $ containing $\\Gamma _i$ in its interaction list at that level.", "Using the interpolative decomposition, we can then recompress the outgoing representation for a generic patch against training grids on each of the approximately $\\log _4 N$ new far field regions.", "We obtain one matrix $T$ per level, each of which has fewer rows and therefore yields fewer outgoing coefficients than the original.", "Parallelization: Each step of the algorithm to compute (REF ) may be straightforwardly parallelized.", "Steps (1) and (5) are parallelized over all patches; steps (2) and (4) are parallelized over all patch groups at all levels; step (3) is parallelized over all patch groups at the leaf level." ], [ "The one-patch\nintegral equation", "In this section, we describe in detail a solver for the integral equation (REF ), as well as the construction of the far-field quadrature nodes $x_{i,1}^f,\\ldots ,x_{i,n_f}^f$ and weights $w_1^f,\\ldots ,w_{n_f}^f$ discussed in Section .", "We assume that a patch $\\Gamma $ has radius $\\varepsilon $ and make use of cylindrical coordinates $(r,\\theta ,z)$ .", "If we take the center of the patch to be the north pole of the sphere, then $r = 0$ corresponds to the $z$ -axis, $r = 0$ and $z = \\pm 1$ to the north and south poles, respectively, and $\\theta =0$ to the $x$ -axis.", "Following the approach of [40], [41], we use the rotational symmetry of $\\Gamma $ to reduce the integral equation over the patch to a sequence of one-dimensional integral equations, each corresponding to a Fourier mode in the variable $\\theta $ .", "More precisely, we denote by $C$ the arc which generates $\\Gamma $ via rotation about the $z$ -axis: $C(t) \\equiv (r(t),z(t)) =(\\sin 
(t),\\cos (t))$ for $t \\in [0,\\varepsilon ]$ .", "In this parametrization, $t$ is simply the arclength along the sphere.", "Let $x = (r,\\theta ,z)$ and $x^{\\prime } = (r^{\\prime },\\theta ^{\\prime },z^{\\prime })$ .", "Since $G_E$ and $G_I$ are functions of $|x-x^{\\prime }|$ and $|x-x^{\\prime }| = \\sqrt{r^2 + r^{\\prime 2} + (z-z^{\\prime })^2 -2rr^{\\prime }\\cos (\\theta -\\theta ^{\\prime })},$ we can write the dependence of the Green's function in cylindrical coordinates as $G(x-x^{\\prime }) = G(r,r^{\\prime },z-z^{\\prime },\\theta -\\theta ^{\\prime })$ .", "In these coordinates, the one-patch integral equation (REF ) takes the form $\\int _0^\\varepsilon \\int _0^{2 \\pi } G(r(t),r^{\\prime }(t^{\\prime }),z(t)-z^{\\prime }(t^{\\prime }),\\theta -\\theta ^{\\prime })\\sigma (r^{\\prime }(t^{\\prime }),z^{\\prime }(t^{\\prime }),\\theta ^{\\prime }) r^{\\prime }(t^{\\prime }) \\, dt^{\\prime } \\, d \\theta ^{\\prime } = f(r(t),z(t),\\theta ).$ Representing $\\sigma $ as a Fourier series in $\\theta $ , $\\sigma (r(t),z(t),\\theta ) = \\sum _{n=-\\infty }^\\infty \\sigma _n(t) e^{i n\\theta },$ and taking the Fourier transform of both sides of this equation, upon rearrangement, gives the following integral equation for the Fourier modes: $2 \\pi \\int _0^\\varepsilon G_n(t,t^{\\prime })\\sigma _n(t^{\\prime }) \\sin (t^{\\prime }) \\, dt^{\\prime } = f_n(t).$ Here $G_n(t,t^{\\prime })$ , $\\sigma _n(t)$ , and $f_n(t)$ are the Fourier transforms of $G(r(t),r^{\\prime }(t^{\\prime }),z(t)-z^{\\prime }(t^{\\prime }),\\theta )$ , $\\sigma (r(t),z(t),\\theta )$ and $f(r(t),z(t),\\theta )$ with respect to $\\theta $ .", "Thus, after solving the one-dimensional modal equations (REF ), we can recover $\\sigma (r(t),z(t),\\theta )$ from its Fourier series.", "Note that the Fourier series is spectrally convergent because $\\sigma (r(t),z(t),\\theta )$ is smooth as a function of $\\theta $ , even though it is singular as a function of $t$ at the edge $t = 
\\varepsilon $ ." ], [ "Evaluation of the modal kernels", "Let $G_n^{(1)}(t,t^{\\prime }) &= \\frac{1}{\\pi } \\int _0^\\pi \\frac{2}{|x-x^{\\prime }|}\\cos (n \\tilde{\\theta }) \\, d \\tilde{\\theta } \\\\G_n^{(2)}(t,t^{\\prime }) &= \\frac{1}{\\pi } \\int _0^\\pi \\log \\left(\\frac{2}{|x-x^{\\prime }|} \\right) \\cos (n \\tilde{\\theta }) \\, d \\tilde{\\theta } \\\\G_n^{(3)}(t,t^{\\prime }) &= \\frac{1}{\\pi } \\int _0^\\pi \\log \\left( 1 +\\frac{1}{2} |x-x^{\\prime }| \\right) \\cos (n \\tilde{\\theta }) \\, d \\tilde{\\theta }.$ Then, using the formulae (REF ) and (REF ), it is straightforward to show that $G_n = G_n^{(1)} + G_n^{(2)} - G_n^{(3)}$ for $G_E(x,x^{\\prime })$ and $G_n = G_n^{(1)} - G_n^{(2)} - G_n^{(3)}$ for $G_I(x,x^{\\prime })$ .", "We can write $|x-x^{\\prime }|$ in terms of $t$ , $t^{\\prime }$ and $\\tilde{\\theta } = \\theta -\\theta ^{\\prime }$ as $|x-x^{\\prime }| = \\sqrt{2 \\left( 1-\\cos (t)\\cos (t^{\\prime }) -\\sin (t)\\sin (t^{\\prime })\\cos (\\tilde{\\theta }) \\right)}.$ The integrands are not smooth at $t = t^{\\prime }$ , $\\tilde{\\theta } = 0$ , so we must use specialized methods to evaluate each kernel.", "$G_n^{(1)}(t,t^{\\prime })$ is simply the cosine transform of the Coulomb kernel and arises in boundary integral equations for electrostatics on axisymmetric surfaces.", "In [41], an efficient evaluation algorithm is described which involves writing the modal kernel in terms of Legendre functions of half-integer order and using their associated three-term recurrence.", "We refer the reader to this paper for further details.", "The kernel $G_n^{(2)}(t,t^{\\prime })$ is weakly singular and may be evaluated by adaptive Gaussian quadrature.", "However, the following formula, discovered by a combination of analytical manipulation and symbolic calculation with Mathematica, has been numerically verified for a wide range of values and is significantly faster: $\\frac{1}{\\pi }\\int _0^\\pi \\log \\left( 
\\frac{2}{|x-x^{\\prime }|} \\right) \\cos (n\\tilde{\\theta }) \\, d \\tilde{\\theta } ={\\left\\lbrace \\begin{array}{ll}-\\log \\left(\\cos (t_1/2)\\sin (t_2/2)\\right) & n = 0 \\\\\\frac{1}{2 n} \\left(\\tan (t_1/2) \\cot (t_2/2)\\right)^n & n > 0 \\\\t_1 = \\min (t,t^{\\prime }), t_2 = \\max (t,t^{\\prime }).\\end{array}\\right.", "}$ The integrand in the expression for $G_n^{(3)}(t,t^{\\prime })$ is even more weakly singular, so $G_n^{(3)}(t,t^\\prime )$ may be evaluated relatively quickly by adaptive Gaussian quadrature." ], [ "Discretization of the modal integral equations", "Since (REF ) is a singular integral equation, care must be taken to discretize it accurately.", "The dominant singularity of the kernel $G_n(t,t^{\\prime })$ at $t = t^{\\prime }$ is the logarithmic singularity of $G_n^{(1)}(t,t^{\\prime })$ .", "An analogous classical problem is therefore the first-kind integral equation arising from the solution of the Dirichlet problem on an open arc in two dimensions by a single layer potential.", "Stable and accurate numerical schemes for this problem can be found, for example, in [42], [43], [44].", "As described in [44], when the domain is the interval $[-1,1]$ , the solution of $\\int _{-1}^1 \\log |t-s| \\sigma (s) \\, ds = f(t)$ can be computed with spectral accuracy in the form $\\sigma (t) = g(t)/\\sqrt{(1+t)(1-t)}$ , where $g$ is a smooth function whose Chebyshev coefficients depend in a simple manner on those of $f$ .", "For an open arc, the corresponding integral equation can be preconditioned using the solution of (REF ).", "This procedure results in a Fredholm equation of the second kind for which the density may be represented as a Chebyshev expansion and computed stably with high order accuracy.", "In the present context, the inclusion of the additional weakly singular kernels $G_n^{(2)}$ and $G_n^{(3)}$ cause the singularity of $\\sigma _n(t)$ to be more complex, but our numerical evidence suggests that there is still a dominant 
square root singularity at $t = \\varepsilon $ .", "To be more precise, if we represent $\\sigma _n$ by $\\sigma _n(t) = g_n(t)/\\sqrt{\\varepsilon - t}$ near $t = \\varepsilon $ , we can investigate the effectiveness of representing $g_n$ in a basis of orthogonal polynomials.", "While the exact behavior of $g_n(t)$ is not understood analytically, the numerical results presented in Section REF suggest that it is only mildly non-smooth.", "We note that there is no singularity at the endpoint $t = 0$ , since this point corresponds to the patch center, at which there is no physical singularity.", "To resolve the endpoint singularity of $\\sigma _n$ , we discretize it on a set of panels $[a_0,a_1],[a_1,a_2],\\ldots ,[a_{m-1},a_m]$ on $[0,\\varepsilon ]$ which are dyadically refined towards $t = \\varepsilon $ : $ a_0 = 0,\\ a_1 = \\frac{\\varepsilon }{2},\\ a_2 = \\frac{3 \\varepsilon }{4},\\dots ,\\ a_{m-1} = \\frac{(2^{m-1}-1) \\varepsilon }{2^{m-1}},\\ a_{m} =\\varepsilon .$ On each panel, except the last, $\\sigma _n$ is represented as a Legendre series of fixed order $k$ .", "Since $\\sigma _n$ is smooth on each such panel and separated from its singularity by a distance equal to the panel length, it can be shown that this representation has an error of size $\\mathcal {O}(e^{-k} \\log _2 (1/\\varepsilon ))$ .", "This argument is widely used in handling endpoint and corner singularities in the context of boundary integral equations [45], [46], [47], [48], [49], [50].", "On the last panel, we analytically incorporate a square root singularity into our representation of $\\sigma _n$ as above, and expand $g_n(t) =\\sigma _n(t) \\sqrt{\\varepsilon - t}$ as a series of Jacobi polynomials with $\\alpha = -\\frac{1}{2}$ and $\\beta = 0$ .", "If the singularity of $\\sigma _n$ at $t = \\varepsilon $ were exactly of square root type, this would yield a spectrally accurate representation of $\\sigma _n$ .", "Instead, as we will show in Section REF , we obtain a 
representation which is finite order but resolves the solution quite well even for modest truncation parameters.", "Thus we have rewritten (REF ) as $f_n(t)= 2 \\pi \\sum _{j=1}^{m-1} \\int _{a_{j-1}}^{a_j} G_n(t,t^{\\prime })\\sigma _n(t^{\\prime }) \\sin (t^{\\prime }) \\, dt^{\\prime } \\\\+ 2 \\pi \\int _{a_{m-1}}^{\\varepsilon }\\frac{G_n(t,t^{\\prime })}{\\sqrt{\\varepsilon - t^{\\prime }}} \\left( \\sigma _n(t^{\\prime })\\sqrt{\\varepsilon - t^{\\prime }} \\right) \\sin (t^{\\prime }) \\, dt^{\\prime } \\,$ and discretized $\\sigma _n$ by Legendre polynomials for the first $m-1$ panels and by Jacobi polynomials for the last.", "Sampling the resulting equations at the corresponding quadrature nodes - Gauss-Legendre for the first $m-1$ panels and Gauss-Jacobi for the last - yields a collocation method for $\\sigma _n$ , in which $\\sigma _n$ is determined by its piecewise polynomial basis coefficients.", "For each collocation node $t_i$ , we compute the system matrix entries by adaptively integrating $G_n(t_i,t^{\\prime })$ in $t^{\\prime }$ against the piecewise polynomial basis functions.", "We compute the values $f_n(t_i)$ by discretizing the Fourier transform of $f(r(t_i),z(t_i),\\theta )$ in $\\theta $ by the trapezoidal rule, which is spectrally accurate for smooth, periodic functions.", "We solve the resulting set of linear systems - one for each Fourier mode - by $LU$ factorization and back substitution.", "The factorizations may be reused, since we must solve a one-patch integral equation for many different right hand sides.", "We can now define the fine grid points and the smooth quadrature weights introduced in Section .", "The points $x_{i,1}^f,\\ldots ,x_{i,n_f}^f$ are the tensor products of the collocation nodes in the radial direction with equispaced points - the trapezoidal rule quadrature nodes - in the azimuthal direction.", "$w_1^f,\\ldots ,w_{n_f}^f$ are the corresponding quadrature weights - products of the panel-wise Gauss weights with 
the trapezoidal rule weight." ], [ "Numerical investigation of the singularity of\n$\\sigma _n$", "In this section, we contrast two strategies for representing $\\sigma _n$ in (REF ).", "In the first, we use $m = 1$ panels, and represent $g_n$ in a basis of Jacobi polynomials, which takes into account the square root singularity in $\\sigma _n$ .", "This approach would yield spectral accuracy with respect to $g_n$ if $\\sigma _n$ only contained a square root singularity.", "The second strategy is the one described above; we use $m > 1$ panels with a Jacobi polynomial basis of fixed degree only in the last panel.", "These experiments give us some insight into the nature of the true singularity in $\\sigma _n$ , and justify our discretization choice.", "In both cases, we solve the interior one-patch integral equation by the method described above for a basis of Zernike polynomials with truncation parameter $M = 15$ .", "The results do not change significantly if we solve the exterior equation instead.", "We do this for several different choices of $\\varepsilon $ .", "The Fourier series truncation is fixed sufficiently large to resolve the highest azimuthal Zernike mode.", "For each solution, we measure the residual error in $L^2$ , normalized by the patch size: $\\left\\Vert \\mathcal {S}\\sigma - f\\right\\Vert _{L^2(\\Gamma )} / |\\Gamma |.$ Here $|\\Gamma |$ is the surface area of the patch, and $f$ is a Zernike polynomial.", "This measures the extent to which the computed solution of the one-patch BVP satisfies the Dirichlet boundary condition.", "This solution automatically satisfies the Neumann boundary condition and the PDE, because of its representation as a single layer potential with the Neumann Green's function, so a small $L^2$ residual error corresponds to a solution which nearly satisfies the boundary value problem.", "This error is computed by quadrature on a Legendre-Fourier grid which does not overlap with the grid on which the integral equation is 
solved, so it is not the same as the residual of the solution to the discrete linear system.", "Using the first strategy ($m=1$ ), we measure the error (REF ) for each Zernike polynomial, as the number of Jacobi basis functions is increased.", "The error is defined to be the maximum taken over all Zernike polynomials.", "The results are presented in the left panel of Fig.", "REF .", "We observe an initial regime of rapid convergence, followed by much slower convergence.", "Indeed, 15 basis functions are required to resolve the highest Zernike modes we have used as data.", "Afterward, the slow regime of convergence suggests that $\sigma _n$ has a dominant square root singularity and a subdominant term which is nonsmooth, but much smaller.", "We also notice that performance improves as $\varepsilon $ is decreased, which is not surprising since as $\varepsilon \rightarrow 0$ , we approach the flat case in which $\sigma _n$ has a pure square root singularity.", "The second strategy is explored in the right panel of Fig.", "REF .", "Here, we fix 20 basis functions per panel - sufficient to begin with a good error constant, according to the first experiment.", "We then increase the number $m$ of panels.", "Although we can already obtain quite good accuracy using the first strategy, the second allows us to reach near-machine precision.", "The improvement is particularly dramatic for larger choices of $\varepsilon $ .", "Figure: Left panel: $g_n$ is represented by a basis of Jacobi polynomials on a single panel.", "We plot the maximum residual error (REF ) vs. the number of Jacobi basis functions.", "Right panel: $g_n$ is represented in a Legendre basis on every panel except the last, where a Jacobi basis is used.", "We plot the maximum residual error vs. the number of panels."
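The closed-form expression for $G_n^{(2)}$ quoted in the evaluation of the modal kernels can be checked directly against adaptive quadrature. A minimal sketch, assuming SciPy is available and using arbitrary test values $t = 0.7$ , $t^{\prime } = 1.1$ :

```python
import numpy as np
from scipy.integrate import quad

def chord(t, tp, th):
    # |x - x'| on the unit sphere in terms of t, t' and theta~ = theta - theta'
    return np.sqrt(2.0 * (1.0 - np.cos(t) * np.cos(tp)
                          - np.sin(t) * np.sin(tp) * np.cos(th)))

def G2_direct(n, t, tp):
    # (1/pi) * int_0^pi log(2/|x - x'|) cos(n theta~) d theta~ by quadrature
    val, _ = quad(lambda th: np.log(2.0 / chord(t, tp, th)) * np.cos(n * th),
                  0.0, np.pi, limit=200)
    return val / np.pi

def G2_closed(n, t, tp):
    # closed-form expression for the modal kernel G_n^(2)
    t1, t2 = min(t, tp), max(t, tp)
    if n == 0:
        return -np.log(np.cos(t1 / 2.0) * np.sin(t2 / 2.0))
    return (np.tan(t1 / 2.0) / np.tan(t2 / 2.0)) ** n / (2.0 * n)

for n in range(5):
    assert abs(G2_direct(n, 0.7, 1.1) - G2_closed(n, 0.7, 1.1)) < 1e-7
```

For $t \ne t^{\prime }$ the integrand is smooth on $[0,\pi ]$ , so the quadrature reference is itself accurate; the singular case $t = t^{\prime }$ is where the closed form is most valuable in practice.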
], [ "Numerical experiments", "An important parameter in studying narrow escape and narrow capture problems is the patch area fraction $f_{N,\\varepsilon }$ .", "Since the surface area of a single patch of radius $\\varepsilon $ is given by $A_\\varepsilon = 4 \\pi \\sin ^2(\\varepsilon /2), $ we have $f_{N,\\varepsilon } = N \\sin ^2(\\varepsilon /2).$ Assuming $\\varepsilon $ is sufficiently small, we may write $f_{N,\\varepsilon } \\approx \\varepsilon ^2 N / 4.$ Given $N$ , we will use (REF ) to compute the patch radius $\\varepsilon $ for a given patch area fraction." ], [ "Convergence with respect to the Zernike basis", "We first investigate the convergence of the solution with respect to the Zernike truncation parameter $M$ , which determines the largest radial and azimuthal Zernike modes used to represent the smooth incoming field on each patch.", "We fix the patch area fraction at $f_{N,\\varepsilon } = 0.05$ and carry out experiments with $N = 10$ , 100, and 1000 patches.", "$\\varepsilon $ is computed from (REF ).", "The patch locations are drawn from a uniform random distribution on the sphere, with a minimal patch separation of $2\\varepsilon $ enforced.", "In each case, we solve the one-patch problems with the truncation parameter $M$ set to $1,3,5,\\ldots ,15$ .", "The one-patch solutions are obtained, guided by the results in Fig.", "REF , using 13 panels with 20 basis functions per panel, and the number of Fourier modes set equal to the number of azimuthal modes in the Zernike basis.", "The ID and GMRES tolerances are set to $10^{-15}$ , and the incoming grid tolerance is set to $10^{-12}$ .", "We measure error in two ways.", "The first, as in (REF ), is to examine the relative $L^2$ residual of the multiple scattering system (REF ) (the discrepancy of the computed boundary values with the Dirichlet data) on a random patch $\\Gamma _i$ : $\\frac{1}{|\\Gamma _i|} \\left\\Vert \\left(\\mathcal {S}\\sigma _i + \\sum _{j \\ne i}^N \\mathcal {S}_{ij} 
\\sigma _j\\right) - 1\\right\\Vert _{L^2(\\Gamma _i)}.$ The second is to examine the difference between the computed average mean first passage time (MFPT) $\\mu $ and a reference value, denoted by $\\mu _{\\text{ref}}$ .", "We obtain $\\mu _{\\text{ref}}$ by carrying out a more refined simulation, with $M = 17$ on each patch, while also increasing the number of panels and basis functions used to solve the one-patch problem to 19 and 30, respectively, and doubling the numbers of both radial and azimuthal modes used in the incoming grids of all patch groups.", "This is a self-consistent convergence test for $\\mu $ .", "The results are presented in Fig.", "REF .", "In all cases, we observe the expected spectral convergence with respect to $M$ , and can reach errors of approximately $10^{-12}$ or less.", "We also find that the residual error appears to provide a good upper bound on the error of $\\mu $ until convergence is reached.", "Figure: L 2 L^2 residual error and self-consistent convergence errorof the average MFPT μ\\mu for random patches withf N,ε =0.05f_{N,\\varepsilon } = 0.05.", "Left panel: N=10N = 10, ε≈0.141\\varepsilon \\approx 0.141.", "Middle panel: N=100N =100, ε≈0.0447\\varepsilon \\approx 0.0447.", "Right panel: N=1000N = 1000,ε≈0.0141\\varepsilon \\approx 0.0141." 
], [ "Large scale simulations", "We next study the performance of our solver as $N$ is increased and $\varepsilon $ is decreased.", "The error is measured by computing the $L^2$ residual (REF ) on a random patch.", "The parameters for the one-patch solver are set as in the previous section with $M = 15$ , but we fix the ID tolerance at $10^{-11}$ , the GMRES tolerance at $10^{-10}$ , and the incoming grid truncation tolerance at $10^{-8}$ .", "This selection of parameters yields errors in the range $10^{-7}-10^{-10}$ for all of our experiments.", "Our calculations are performed on either a laptop with a 4-core Intel i7-3630QM 2.40GHz processor or a workstation with four Intel Xeon E7-4880 2.50GHz processors, each of which has 15 cores.", "The algorithm has been implemented in Fortran, and in both cases, the hierarchical fast algorithm is parallelized over all available cores using OpenMP.", "We consider randomly located patches, uniformly located patches, and patches that are highly clustered.", "For each experiment we report $N$ , $\varepsilon $ , the computed value of the average MFPT $\mu $ , truncated at 8 significant digits, the $L^2$ residual error on a random patch, the total number of GMRES iterations, the total solve time, and the time per GMRES iteration.", "We also compute the parallel scaling factor - namely, the ratio of the time to compute the matrix-vector product (REF ) using a single core to the time required using all cores on the 60-core workstation."
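The randomly located configurations mentioned above can be generated by simple rejection sampling. A minimal sketch, assuming the minimum separation of $2\varepsilon $ is measured as the angular (geodesic) distance between patch centers:

```python
import numpy as np

def random_patch_centers(N, eps, seed=0):
    # uniform random centers on the unit sphere, rejecting any candidate
    # closer than 2*eps (in angle) to an already accepted center
    rng = np.random.default_rng(seed)
    centers = []
    while len(centers) < N:
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        if all(np.arccos(np.clip(v @ c, -1.0, 1.0)) > 2.0 * eps
               for c in centers):
            centers.append(v)
    return np.array(centers)

pts = random_patch_centers(50, 0.05)
cos_sep = np.clip(pts @ pts.T, -1.0, 1.0)
np.fill_diagonal(cos_sep, -1.0)             # ignore self-separations
assert np.arccos(cos_sep).min() > 2 * 0.05  # every pair is more than 2*eps apart
```

This naive placement and check is $\mathcal {O}(N^2)$ ; for the largest runs reported below a grid- or tree-based neighbor search would be used instead, but the accepted configurations are statistically the same.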
], [ "Example 1: Random patches with area fraction\n$f_{N,\varepsilon } = 0.05$", "Fixing the patch area fraction at $f_{N,\varepsilon } = 0.05$ , we let $\varepsilon $ be given by (REF ) for $N = 10,100,1000,10\, 000,100\, 000$ , with patches randomly distributed on the sphere with a minimum patch separation of $2\varepsilon $ .", "The corresponding results are given in Table REF .", "In the left panel of Fig.", "REF , we plot the time per GMRES iteration as a function of $N$ using the 4-core laptop and the 60-core workstation, as well as a reference curve with $\mathcal {O}(N \log N)$ scaling.", "In Fig.", "REF , we also plot the computed MFPT $\bar{v}$ just inside the unit sphere - on a sphere of radius $1-\varepsilon /5$ - for $N= 10,100,1000,10\, 000$ .", "The case $N = 100\, 000$ was plotted earlier, in Fig.", "REF .", "Note that the number of GMRES iterations increases with $N$ , as one would expect from the increased complexity of the problem, but slowly.", "The computation with $N = 100\, 000$ required just over an hour to complete using the 60-core workstation.", "The computation with $N = 10\, 000$ required just over 45 minutes to solve on the 4-core laptop, and the computation with $N = 1000$ required approximately one minute.", "(The case $N = 100\, 000$ was not attempted on the laptop because of memory requirements.)", "Note from the data in Table REF that we achieve approximately $85\%$ parallel efficiency at $N=1000$ and an efficiency near $90\%$ for the largest calculation.", "Note also from Fig.", "REF that the complexity of the fast algorithm is consistent with the expected $\mathcal {O}(N \log N)$ scaling.", "Table: Narrow escape problem with random patches at patch area fraction $f_{N,\varepsilon } = 0.05$ .", "Figure: Time per GMRES iteration for the 4-core laptop and 60-core workstation.", "A reference curve with $\mathcal {O}(N \log N)$ scaling is also plotted."
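Example 2 places the patch centers at Fibonacci spiral points. A common construction of these points (one of several variants in the literature; this one offsets each of the $N$ equal-area slices in $z$ by its midpoint) is:

```python
import numpy as np

def fibonacci_sphere(N):
    # Fibonacci spiral points: approximately uniform on the unit sphere
    i = np.arange(N)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - (2.0 * i + 1.0) / N          # midpoints of N equal slices in z
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

pts = fibonacci_sphere(1000)
assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)
```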
], [ "Example 2: Uniform patches with area fraction\n$f_{N,\varepsilon } = 0.05$", "Using the same patch area fraction as in the previous example, we let $N$ take the same values, but place the patch centers at the Fibonacci spiral points, which are approximately uniform on the sphere [12].", "Results are shown in Table REF and the middle panel of Fig.", "REF .", "The computed MFPT $\bar{v}$ on the sphere of radius $1-\varepsilon /5$ was plotted in Fig.", "REF for the case $N = 10\, 000$ .", "The MFPT is plotted for the $N = 100$ and $N = 1000$ cases in Fig.", "REF .", "Table: Narrow escape problem with uniform patches at patch area fraction $f_{N,\varepsilon } = 0.05$ ." ], [ "Example 3: Clustered patches", "In our final example, we configure the patches to form a collection of 20 clusters.", "Each cluster is contained within a disk on the surface of the sphere centered at the vertices of a dodecahedron inscribed in the sphere, and the radii of the disks are chosen so that all 20 disks cover one quarter of the area of the sphere.", "Patch centers are placed randomly on the sphere, and a proposed center is accepted if it falls within one of the disks, while enforcing a minimum patch separation distance of $2 \varepsilon $ .", "We choose $\varepsilon $ empirically to be as large as possible so that our random placement process yields the desired number $N$ of patches in a reasonable amount of time.", "For sufficiently large $N$ , this results in a much denser packing of patches within each cluster than we had in our previous examples.", "The results of our simulations are provided in Table REF and the right panel of Fig.", "REF .", "The MFPT is plotted on a sphere of radius $1-\varepsilon /5$ in Fig.", "REF for the $N =10\, 000$ case and in Fig.", "REF for the $N = 100$ and $N =1000$ cases.", "The denser packing of patches leads to a greater number of GMRES iterations than in the previous examples and longer computation times, but the difference is
mild.", "The case with $N = 100\, 000$ required just over an hour and a half to solve on our 60-core workstation.", "The simulation with $N = 10\, 000$ required 75 minutes on a laptop, and the simulation with $N =1000$ required about one minute.", "Table: Narrow escape problem with clustered patches.", "Figure: Plots of the MFPT $\bar{v}$ on a sphere of radius $1- \varepsilon /5$ for the experiments described in Section .", "The first two rows correspond to Example 1 with $N =10,100,1000,10\, 000$ .", "The third row corresponds to Example 2 with $N = 100, 1000$ .", "The final row corresponds to Example 3 with $N = 100, 1000$ .", "Remark 8 We carried out the simulations above for the corresponding exterior problem as well (the narrow capture problem).", "As expected (since the integral equations are nearly identical), the timings and errors are similar and are therefore omitted." ], [ "Conclusions", "We have developed a fast solver for the narrow capture and narrow escape problems on the sphere with arbitrarily-distributed well-separated disk-shaped patches.", "We solve the corresponding mixed boundary value problems by an integral equation scheme derived using the Neumann Green's functions for the sphere.", "Our numerical method combines a high order accurate solver for the one-patch problem, a multiple scattering formalism, and a hierarchical fast algorithm.", "We have demonstrated the scheme on examples with $N$ as large as $100\, 000$ , significantly larger than previously accessible.", "The ability to carry out such large-scale simulations will permit a systematic study of the asymptotic approaches described, for example, in [10] and [11].", "Possible extensions of our method include the consideration of narrow escape and narrow capture problems when the patches are asymmetric and have multiple shapes.", "Assuming some separation between patches, the multiple scattering formalism still applies, but the single patch integral equation
will not be solvable by separation of variables and the compressed representation of outgoing fields will need to be computed for each distinct patch type.", "Neither of these extra steps, however, affects the asymptotic $\\mathcal {O}(N \\log N)$ scaling of the fast algorithm.", "Exterior problems involving multiple spheres with different arrangements of patches could also be simulated by a simple modification of our multiple scattering approach.", "A more challenging problem is to extend our method to non-spherical geometries.", "For this, one would either have to discretize the entire domain surface, rather than just the absorbing patches, or construct the Neumann Green's function for such a domain numerically.", "In the latter case, aspects of our multiple scattering approach would carry over.", "We are currently investigating these issues and will report on our progress at a later date." ], [ "Acknowledgments", "We would like to thank Michael Ward for suggesting this problem and for several valuable insights.", "We would also like to thank Mike O'Neil for many useful conversations.", "J.K. was supported in part by the Research Training Group in Modeling and Simulation funded by the National Science Foundation via grant RTG/DMS-1646339." ] ]
1906.04209
[ [ "Principled Training of Neural Networks with Direct Feedback Alignment" ], [ "Abstract The backpropagation algorithm has long been the canonical training method for neural networks.", "Modern paradigms are implicitly optimized for it, and numerous guidelines exist to ensure its proper use.", "Recently, synthetic gradient methods - where the error gradient is only roughly approximated - have garnered interest.", "These methods not only better portray how biological brains are learning, but also open new computational possibilities, such as updating layers asynchronously.", "Even so, they have failed to scale past simple tasks like MNIST or CIFAR-10.", "This is in part due to a lack of standards, leading to ill-suited models and practices forbidding such methods from performing to the best of their abilities.", "In this work, we focus on direct feedback alignment and present a set of best practices justified by observations of the alignment angles.", "We characterize a bottleneck effect that prevents alignment in narrow layers, and hypothesize it may explain why feedback alignment methods have yet to scale to large convolutional networks."
], [ "Introduction", "The architectures and optimization methods for neural networks have undergone considerable changes.", "Yet, the training phase still relies on the backpropagation (BP) algorithm to compute gradients, designed some 30 years ago [1].", "The main pitfalls of BP are its biological implausibility and computational limitations.", "On the biological side, the weight transport problem [2], [3] forbids the feedback weights from sharing information with the feedforward ones.", "On the practical side, as the update of the parameters of a given layer depends on downstream layers, parallelization of the backward pass is impossible.", "This phenomenon is known as backward locking [4].", "Finally, BP prevents the use of non-differentiable operations, even if some workarounds are possible [5].", "These issues have motivated the development of alternative training algorithms." ], [ "Related work", "A number of such methods have focused on enhanced biological realism.", "Boltzmann machine learning [6], Contrastive Hebbian Learning [7], and Generalized Recirculation [8] all use local learning signals that do not require propagation of a gradient through the network.", "Target Propagation [9], [10], [11], Decoupled Neural Interfaces [4], [12], and Local Error Signals [13] use trainable modules from which they derive a learning signal.", "These methods not only enable asynchronous processing of the backward pass, but alleviate some limitations of BP.", "For instance, local learning signals do not exhibit vanishing gradients, allowing for deeper architectures.", "They are also inherently regularized and thus less sensitive to over-fitting – as they don't use a precise gradient on the training set.", "Feedback Alignment (FA) [14] leverages the general structure of BP but uses independent feedforward and feedback paths (see Figure REF ).", "Instead of having the backward weights be the transpose of the forward ones, they are fixed random matrices.", "Surprisingly, 
learning still occurs, as the network learns to make the teaching signal useful.", "This finding has motivated further work around BP with asymmetric feedbacks.", "While FA comes with a performance penalty, simply keeping a sign-concordant feedback [15] ensures learning with performance on par with BP [16].", "More recently, [17] introduced a method inspired by alignment where the backward weights are tuned throughout training to improve their agreement with the forward weights, without a direct path between the two to share information.", "All of these methods solve the weight transport problem, but do not present computational advantages.", "Direct Feedback Alignment (DFA) [18] was introduced as a variant of FA with a direct feedback path from the error to each layer, allowing for layer-wise training (see Figure REF ).", "As this method presents both heightened biological realism and backward unlocking, it is the focus of this paper.", "FA and DFA have also been demonstrated with sparse feedback matrices [19].", "However, most of these synthetic gradient methods have yet to scale to harder tasks like ImageNet [20].", "Figure: Forward and backward flow in BP, FA, and DFA (from left to right)."
], [ "Our motivations and contributions", "Best practices are well established for training deep neural networks with BP.", "On the other hand, alternative methods can't rely on such an extensive body of research around their use.", "Standard implementations for such methods are scarce.", "As most deep learning software libraries are designed to perform BP, optimization problems abound when attempting to use them with a different paradigm.", "Understanding which best practices devised for BP still apply to these different methods can be challenging: quantifying how much of the performance observed is due to the training algorithm and to the architecture chosen is not straightforward.", "Thus, we argue that for DFA and other synthetic gradient methods to be properly evaluated past toy examples, standards must be defined for their use.", "Only then will we be able to assert their inherent limits.", "BP did not scale from MNIST to ImageNet immediately; it required years of careful research and considerations.", "Accordingly, we take the next step towards scaling DFA by presenting a comprehensive set of rules on how to best design and train a deep neural network with this method." ], [ "Alignment angles", "Alignment is key to ensuring useful weight updates in DFA.", "Consequently, properly measuring alignment angles allows us to access novel insights.", "In Section , we explain our methodology to extend the classic measurement of angles in FA to DFA." ], [ "Baselines", "In section , we review past claims on best practices for DFA in the light of alignment angle measurements.", "This allows us to better understand how classic techniques like batch normalization and dropout influence DFA.", "We also highlight standards for more efficient implementations of DFA, helping alleviate memory issues."
], [ "Convolutions", "In section we show that convolutional layers systematically fail to align.", "Thus, DFA is unable to train deep convolutional architectures, a result missed by previous papers.", "We further hypothesize this may be due to a bottlenecking effect in convolutional layers.", "Our implementation of DFA and code to reproduce the results of this paper are available on GitHub: https://github.com/lightonai/principled-dfa-training." ], [ "Method", "At layer $i$ out of $N$ , with $\mathbf {W}_i$ its weight matrix, $\mathbf {b}_i$ its biases, $f_i$ its activation function, and $\mathbf {h}_i$ its activations, the forward pass can be written as: $\forall i \in \left[1, \ldots , N \right]:\ \mathbf {a}_i = \mathbf {W}_i \mathbf {h}_{i-1} + \mathbf {b}_i,\ \mathbf {h}_i = f_i \left( \mathbf {a}_i\right)$ $\mathbf {h}_0 = \mathbf {X}$ is the input data and $\mathbf {h}_N = f(\mathbf {a}_N) = \mathbf {\hat{y}}$ are the predictions." ], [ "Learning with BP", "Parameter updates are propagated through the network using the chain-rule of derivatives, allowing for blame to be assigned precisely to each neuron of each layer.", "Comparing $\mathbf {\hat{y}}$ and the ground truth $\mathbf {y}$ , a loss function $\mathcal {L} = \mathcal {L} \left( \mathbf {\hat{y}}, \mathbf {y} \right)$ adapted to the task being learned is computed.", "Disregarding the learning rate, we can write the equation for the update of parameters as: $\delta \mathbf {W}_i = -\frac{\partial \mathcal {L}}{\partial \mathbf {W}_{i}} = -\left[ \left( \mathbf {W}_{i+1}^\top \delta \mathbf {a}_{i+1} \right) \odot f_i^{\prime }(\mathbf {a}_i) \right] \mathbf {h}_{i-1}^\top ,\ \delta \mathbf {a}_{i} = \frac{\partial \mathcal {L}}{\partial \mathbf {a}_{i}}$" ], [ "Learning with DFA", "In DFA, the gradient signal $\mathbf {W}_{i+1}^\top \delta \mathbf {a}_{i+1}$ coming from the $(i+1)$ -th layer is replaced with a random projection of the global error
signal.", "With $\mathbf {e} = \mathbf {\hat{y}} - \mathbf {y}$ the global error vector, and $\mathbf {B}_i$ a fixed random matrix of appropriate shape drawn at initialization for each layer: $\delta \mathbf {W}_i = -\left[ \left( \mathbf {B}_i \mathbf {e} \right) \odot f_i^{\prime }(\mathbf {a}_i) \right] \mathbf {h}_{i-1}^\top $ The update is thus independent of other layers, enabling parallel processing of the backward pass." ], [ "Learning and alignment", "In BP, the learning signal is defined as: $\mathbf {c}_i = \mathbf {W}_{i+1}^\top \delta \mathbf {a}_{i+1}$ Geometrically, for an arbitrary learning signal $\mathbf {t}_i$ to be useful, it must lie within $90^\circ $ of the signal $\mathbf {c}_i$ .", "This means that, on average, $\mathbf {t}_i^\top \mathbf {c}_i > 0$ .", "In the case of feedback alignment: $\mathbf {t}_i = \mathbf {\delta h}_i = \mathbf {B}_i \mathbf {e}$ Evidently, this means we can ensure learning by tweaking either $\mathbf {B}_i$ or $\mathbf {W}_{i + 1}$ .", "In DFA, as $\mathbf {B}_i$ is fixed, this means the feedforward weights $\mathbf {W}_{i + 1}$ will learn to make the teaching signal useful by bringing it in alignment with the ideal BP teaching signal.", "This process is key to allowing learning in asymmetric feedback methods."
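The per-layer DFA update above can be written in a few lines. A NumPy sketch with arbitrary layer sizes, assuming a $\tanh $ non-linearity for illustration; note that the update depends only on the global error $\mathbf {e}$ and on local quantities, never on downstream layers:

```python
import numpy as np

rng = np.random.default_rng(0)
l_prev, l_i, e_dim, batch = 100, 50, 10, 32

h_prev = rng.normal(size=(l_prev, batch))   # activations h_{i-1}
a_i = rng.normal(size=(l_i, batch))         # pre-activations a_i
e = rng.normal(size=(e_dim, batch))         # global error e = y_hat - y
B_i = rng.normal(size=(l_i, e_dim))         # fixed random feedback matrix

f_prime = 1.0 - np.tanh(a_i) ** 2           # f_i'(a_i) for f_i = tanh
dW_i = -((B_i @ e) * f_prime) @ h_prev.T    # DFA update for W_i
assert dW_i.shape == (l_i, l_prev)
```

Because no term references $\mathbf {W}_{i+1}$ or $\delta \mathbf {a}_{i+1}$ , every layer's update can be computed as soon as the error is available.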
], [ "Measuring alignment", "This alignment phenomenon can be quantified as the angle between the DFA and BP signals.", "We can thus compute the alignment angle $\beta _i$ : $\cos (\beta _i) = \frac{\mathbf {\delta h}_i^\top \mathbf {c}_i}{\vert \vert \mathbf {\delta h}_i\vert \vert \vert \vert \mathbf {c}_i\vert \vert }$ Accordingly, if $\cos (\beta _i) >0$ , then $\vert \beta _i \vert < 90^\circ $ , and we have learning occurring with DFA.", "From a practical point of view, it is therefore possible to measure alignment angles in DFA, only at the expense of having to perform a standard BP backward pass as well.", "In the common case of mini-batch gradient descent, the alignment angle is the mean of the diagonal of the matrix resulting from the product of the DFA and BP updates.", "This is interesting, because by quantifying how certain practices affect the alignment angle, we can better separate which part of the final change in performance observed is due to the practice itself, or to its interaction with DFA.", "While alignment angles can be calculated on any part of the network, we recommend measuring them after the non-linearities of each layer.", "For simple tasks where very low training errors can be reached, such as MNIST, it is common for alignment towards the end of training to decrease and spread: this is because as the error gets small, so does the DFA training signal.", "Accordingly, a small change in the distribution of the weights will be enough to generate a large change in alignment."
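The measurement above can be sketched as follows, assuming the per-layer DFA and BP teaching signals are stored with one mini-batch sample per column. Here the per-sample cosines are averaged before taking the arccosine - one reasonable convention, equivalent up to normalization to taking the mean of the diagonal of the product of the two signals:

```python
import numpy as np

def alignment_angle(dh_dfa, c_bp):
    # cosine similarity between the DFA signal B_i e and the BP signal
    # c_i = W_{i+1}^T da_{i+1}; columns index mini-batch samples
    num = np.sum(dh_dfa * c_bp, axis=0)
    den = np.linalg.norm(dh_dfa, axis=0) * np.linalg.norm(c_bp, axis=0)
    cos_beta = np.clip(num / den, -1.0, 1.0)
    return np.degrees(np.arccos(cos_beta.mean()))

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 32))
assert alignment_angle(x, x) < 1e-3                          # perfectly aligned
assert abs(alignment_angle(x, -x) - 180.0) < 1e-3            # anti-aligned
assert alignment_angle(x, rng.normal(size=(50, 32))) > 45.0  # random: near 90
```

Angles below $90^\circ $ indicate that the DFA update is, on average, a descent direction for the BP loss.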
], [ "Implementation and infrastructure details", "Our implementation of DFA leverages PyTorch [21].", "For all the experiments of the paper, memory requirements were around 1 GB; even consumer-grade hardware should have no problem reproducing our results.", "We ran our experiments on a machine with 4 RTX 2080 Ti, allowing us to run multiple experiments in parallel.", "For larger architectures, our code is able to store the feedback weights and perform the random projections on a separate GPU.", "Similarly, for angle measurements, the model running backpropagation can be stored on a separate GPU as well.", "Combined with using a unique feedback matrix (see Section REF for details) this allows us to scale DFA to networks with an unprecedented number of parameters.", "In experiments unreported in this paper, we were able to train a model of more than 217 million parameters on ImageNet with DFA using two V100s." ], [ "Establishing best practices for DFA", "Past works have made a number of statements regarding best practices for feedback alignment methods.", "However, most of these claims have been derived simply from intuition or the observation of loss values.", "This makes it challenging to discern between the influence of the recommended practices on DFA itself, and the learning capacity they add to the network as a whole.", "Using angle measurements, we formulate principled recommendations for training neural networks with DFA.", "We also present some implementation details to make DFA leaner in terms of memory usage." 
], [ "Enabling DFA to scale to very large networks by using a unique feedback matrix", "The computational advantage of backward unlocking in DFA comes at the price of an increased memory demand.", "Large random matrices need to be stored in memory to compute the feedback signals.", "For each layer, the corresponding projection matrix is of size $e \times l_i$ where $e$ is the length of the error vector (e.g.", "the number of classes in a classification problem), and $l_i$ is the output size of layer $i$ .", "Increasing the output size of the network by a factor $k$ has little effect on the memory needs of BP, while it will scale the size of each feedback matrix by $k$ in DFA, with drastic effects on the memory requirements.", "Figure: From the feedback matrix of the largest layer with output size $l_\text{max}$ , the feedback matrices of all other layers (here $l_i$ , $l_k$ , $l_j$ , $l_l$ ) can be obtained by slicing.", "Thus, a naive implementation of DFA will incur a tremendous memory cost.", "This has prevented researchers in the past from scaling their experiments to datasets like ImageNet [20].", "We notice that in the theorem justifying DFA convergence [18] no assumption is made about the independence of the random matrices.", "In fact, it is even suggested to use the same matrices for layers of the same size.", "We expand on this idea and implement the backward pass of DFA by drawing just one random matrix for the largest layer.", "For the smaller layers we take fixed slices of this unique larger random matrix (see Figure REF ).", "We thus reduce the memory requirements, especially for very deep networks, as the memory needed for the synthetic gradients computation will depend only on the size of the largest layer and of the error vector.", "Table REF showcases the savings in memory achieved using our technique.", "In section , this allows us to use DFA on ImageNet to train VGG-16.", "Table: Comparison of memory cost of the feedback weights in the 
backward pass between a naive implementation of DFA and our unified feedback matrix implementation.", "Assuming a network with $N$ layers of output sizes $l_i$ for the $i$ -th layer and an error vector of length $e$ .", "$l_\text{max}$ is the output size of the largest layer.", "VGG-16 architecture for ImageNet taken from ." ], [ "Normalization of the backward weights", "The importance of normalization for the backward weights in asymmetric feedback methods is well-known.", "However, practices diverge, from methods depending on the feed-forward weights [16], to manually tuned parameters [14].", "Using information on the weights of the forward path goes against the idea of fixing the weight transport problem.", "Instead, we propose to simply scale the feedback weights depending on their dimensions, in a way that is compatible with the use of a unique feedback matrix.", "With $\mathbf {U}$ a random matrix such that $\mathbf {U}_{ij} \sim \mathcal {N}(0, 1)$ , $e$ the length of the error vector, and $l_\text{max}$ the output size of the largest layer, our unique random matrix $\mathbf {B}$ is defined as: $\mathbf {B} = \frac{\mathbf {U}}{\sqrt{l_\text{max} e}}$ Then, we define for each layer $\mathbf {B}_i \in \mathbb {R}^{l_i \times e}$ by slicing $\mathbf {B}$ to the appropriate dimensions $\mathbf {B}_{1:l_i,:}$ , where $l_i$ is the output size of the $i$ -th layer.", "The subscript $1:l_i,:$ means that we take only the first $l_i$ rows and all the columns of the matrix.", "We adjust the normalization accordingly: $\mathbf {B_i} = \sqrt{\frac{l_\text{max}}{l_i}}\mathbf {B}_{1:l_i,1:e}$ Tables REF and REF show that this normalization significantly improves accuracy and alignment angles, in both fully-connected and convolutional architectures.", "Table: Performance of a network of 3 layers of 800 neurons on CIFAR-10 with different practices.", "Learning rate is set to $5 \cdot 10^{-4}$ for SGD with no momentum, except for $\vert x \vert$ 
and lReLU(-0.5) where it is $10^{-4}$ .", "Weights are initialized using He initialization .", "If unspecified, non-linearities are tanh.", "When dropout is used, it is applied with probability 0.1 on the input, and with the value reported in parentheses for each layer but the output.", "Accuracies reported are the average of 10 random runs.", "Statistics for alignment angles are computed on a batch of 128 samples from the best run of the 10.", "Values in parentheses are standard deviations." ], [ "Batch Normalization", "Batch Normalization (BN) [24] is an essential component of modern architectures.", "It has repeatedly proven its effectiveness, even though there has been much discussion about the exact underlying mechanisms [25], [26].", "In [15] it was reported as being essential to good performance with FA, while [18] and [14] did not use it.", "We argue that for DFA, BN is not as critical as it is for FA.", "For fully-connected networks (Table REF ), we confirm that BN is not necessary for good performance.", "In our specific test case, BN gives slightly worse performance and penalizes the average alignment angle significantly, also increasing its dispersion around the mean.", "For convolutional networks (Table REF ), adding BN leads to a catastrophic loss of performance.", "As DFA is not actually training convolutional layers (see section ), it is hard to draw a systematic conclusion regarding BN and DFA for such architectures." 
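The unique-feedback-matrix scheme with its dimension-dependent normalization, described in the preceding sections, can be sketched as follows (a NumPy sketch with made-up layer sizes; the paper's implementation uses PyTorch):

```python
import numpy as np

def make_feedback_matrices(layer_sizes, error_dim, seed=0):
    """Draw one random matrix for the largest layer and slice it for the rest.

    Only the l_max x e matrix is stored; each layer's feedback matrix is a
    rescaled slice of its first l_i rows, so memory no longer grows with depth.
    """
    rng = np.random.default_rng(seed)
    l_max = max(layer_sizes)
    # Unique matrix B = U / sqrt(l_max * e), with U_ij ~ N(0, 1)
    big = rng.normal(size=(l_max, error_dim)) / np.sqrt(l_max * error_dim)
    feedbacks = {}
    for i, l_i in enumerate(layer_sizes):
        # Slice and re-normalize: B_i = sqrt(l_max / l_i) * B[:l_i, :e]
        feedbacks[i] = np.sqrt(l_max / l_i) * big[:l_i, :]
    return feedbacks

B = make_feedback_matrices([800, 800, 400], error_dim=10)
```

The rescaling keeps the entries of every $\mathbf{B}_i$ at variance $1/(l_i e)$ regardless of how the slice was taken from the shared matrix.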
], [ "Dropout", "Dropout [27] is a widely used technique to reduce overfitting in deep networks.", "Intuitively, DFA appears less sensitive to this issue, as it does not follow a precise gradient.", "Yet, our experiments in Table REF show that it still occurs.", "Accordingly, it is no surprise to see dropout enhancing performance by limiting overfitting – bringing the test and train accuracies closer together.", "However, dropout simultaneously reduces alignment: because it zeroes neurons randomly, it makes it harder for the feedforward weights to properly align.", "By reducing the dropout rate to 0.1 instead of the recommended 0.5, we can still benefit from its regularizing effect, while reducing its negative influence on DFA.", "Thus, while we recommend the use of dropout for DFA, it should be employed with lower rates than with BP.", "Moreover, if possible, other methods should be considered to mitigate overfitting, such as data augmentation.", "For convolutional architectures (Table REF ), as the tendency to overfit is more limited, there are no benefits to using dropout.", "The penalty on alignment is simply too severe and not worth the additional regularization." 
], [ "Activation functions", "In Table REF , we observe that ReLU heavily penalizes both performance and alignment for DFA, with a large dispersion of angle values.", "In layers close to the output, we observe angles close to $90^\circ $ , denoting a complete failure to align.", "The same analysis holds for LeakyReLU (lReLU) [28] with negative slope $0.01$ .", "The experiments with $\tanh {(x)}$ suffered from vanishing gradient problems.", "Indeed, synthetic gradients of DFA can still vanish either when the error signal becomes zero, or when the derivative of the activation function does.", "Interestingly, a less common choice of activation function such as $\vert x \vert $ yielded high alignment values and good accuracy.", "Motivated by this observation, we tried lReLU with a negative slope of $-0.5$ , to mimic a "weak" version of the absolute value.", "This function yielded the best results in our experiments.", "These observations point to the need to rethink activation functions designed for BP in the context of training with DFA." ], [ "Convolutional networks and the bottleneck effect", "While simple convolutional architectures can be trained with FA or DFA, the gap in performance compared to BP is far larger.", "This remains unexplained so far, and has prevented DFA from scaling to harder computer vision tasks like ImageNet [20].", "On the other hand, previous research has briefly mentioned the inability of FA and DFA to train architectures with narrow layers.", "We hypothesize that these two phenomena are one and the same." 
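The activation functions compared in the section above can be written down directly; in particular, lReLU with negative slope $-0.5$ acts as a "weak" absolute value, halving the magnitude of negative inputs instead of fully reflecting them (a small NumPy sketch, not the paper's code):

```python
import numpy as np

def lrelu(x, negative_slope=0.01):
    """LeakyReLU: identity for x >= 0, scaled identity for x < 0."""
    return np.where(x >= 0, x, negative_slope * x)

x = np.array([-2.0, -0.5, 0.0, 1.0])
standard = lrelu(x)                        # usual small negative slope
weak_abs = lrelu(x, negative_slope=-0.5)   # half-strength |x| for x < 0
full_abs = np.abs(x)
```

For `x = -2.0` this gives `-0.02`, `1.0`, and `2.0` respectively, showing how the $-0.5$ slope interpolates between ReLU-like behaviour and the absolute value.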
], [ "DFA fails to train the convolutional part of CNNs", "In [18], it was observed that DFA was significantly worse than BP at training convolutional architectures – more so than for fully-connected networks.", "Nevertheless, this did not lead to further investigations, as CNNs trained with DFA were still outperforming their fully-connected counterparts as expected.", "In [20], DFA was not scaled to classic convolutional networks like VGG-16 due to memory concerns.", "Using our new unique feedback matrix method, we are able to train VGG-16 with DFA on both CIFAR-100 and ImageNet.", "Results are reported in Table REF : it is obvious that DFA completely fails to train these large architectures.", "To better understand these results, we generated visualizations of the learned convolutional filters (Figure REF ).", "The objective was not to derive precise visualizations, but rather to be able to roughly compare the filters learned by BP and DFA.", "The results are quite clear: while BP creates meaningful filters, potentially extracting complex features, the filters learned by DFA are completely random.", "Decent results for CNNs trained with DFA are reported by [18] and ourselves in Table REF because these are simple tasks.", "Most of the performance can then be attributed to the classifier.", "Performance is higher than that of a simpler fully-connected network, because random convolutional filters can still extract useful features [29]." 
], [ "Bottlenecks and alignment", "Narrower layers lack the degrees of freedom necessary to allow the feedforward weights to both learn the task at hand and align with the random feedback weights.", "To better quantify this effect, we devise an experiment in which the second layer of a three-layer fully-connected classification network is artificially bottlenecked.", "Doing so by simply reducing the number of neurons in the layer would also hamper the forward pass, by preventing information flow.", "Instead, we keep its size constant, but zero out a certain percentage of the elements of its gradients.", "By doing so, a fixed number of weights will remain constant throughout training, and will not be able to contribute to alignment – but will still let information flow through in the forward pass.", "The parts of the gradient that are zeroed out are selected at initialization time with a random mask, and that mask is kept the same for all of the training.", "The results obtained are reported in Figure REF .", "We observe an initial phase, where an increase in the number of neurons able to align leads to an increase in accuracy and in alignment in the bottleneck.", "Eventually, this settles out, and performance and alignment remain more or less constant.", "The high base performance at the beginning can be explained by the relative simplicity of the task: the last layer is still able to make decent classification guesses as information can freely flow through the bottlenecked layer in our scheme.", "We expect the threshold beyond which performance remains constant – here located at around 100 neurons – to change depending on the difficulty of the task.", "Figure: Accuracy and alignment for various bottlenecks.", "The size of the bottleneck is the number of neurons whose gradients are not zeroed out.", "The architecture is a network of 3 layers of 800 neurons trained on CIFAR-10.", "The second layer is bottlenecked.", "Learning rate is set to $5 \cdot 10^{-3}$ for SGD with no 
momentum.", "Weights are initialized using He initialization and non-linearities are tanh.", "Statistics for alignment are computed on a batch of 128 samples." ], [ "Convolutions as bottlenecks", "Convolutional layers have fewer degrees of freedom than fully-connected ones, as they must obey a certain structure.", "Accordingly, convolutions are not properly trained by DFA because they create bottlenecks in the network.", "This is corroborated by results in Table REF : the alignment cosines in deep convolutional layers are zero.", "This means the updates derived by DFA are orthogonal to those of BP; they are essentially random." ], [ "Conclusion and outlooks", "A thorough and principled analysis of the effects of dropout and batch normalization for the training of deep neural networks with DFA has shown that the recommendations that apply for BP do not translate easily to this synthetic gradient method.", "Similarly, the design and choice of activation functions in this context need adaptation and will be the subject of future work.", "These new insights have been made possible by direct measurement of the alignment angles, allowing for a better comprehension of how different practices affect the learning mechanics of DFA.", "We have more precisely characterized a bottleneck effect, which prevents learning with DFA in narrower layers.", "Due to their structured nature, convolutional layers suffer from this effect, preventing DFA from properly training them – as is verified by measurement of the alignment angles.", "For DFA to be scaled to harder computer vision tasks, this issue needs to be tackled, either by a change in the formulation of DFA, or by adapting convolutional layers to it.", "In the future we plan to expand our analysis to different types of structured layers, like [30] and [31], and delve deeper into the interactions between DFA and convolutional layers.", "We also see paths to further computational optimizations in this method, for example by 
using a Fastfood transform [32] in place of the dense random matrix multiplication in the backward pass." ] ]
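The gradient-masking procedure used in the "Bottlenecks and alignment" experiment above can be sketched as follows (a NumPy sketch with invented sizes; a fixed random mask zeroes part of a layer's gradient for the whole run, while the forward pass is untouched):

```python
import numpy as np

def make_gradient_mask(n_neurons, bottleneck_size, seed=0):
    """Fixed mask selecting which neurons' gradients survive.

    Masked neurons keep their initial weights for the whole run, so they
    cannot contribute to alignment, yet still pass information forward.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(n_neurons, size=bottleneck_size, replace=False)
    mask = np.zeros(n_neurons)
    mask[idx] = 1.0
    return mask

def masked_update(weights, grad, mask, lr=5e-3):
    """SGD step where only the unmasked neurons' rows are updated."""
    return weights - lr * mask[:, None] * grad

mask = make_gradient_mask(800, 100)   # effective bottleneck of 100 neurons
W = np.ones((800, 32))
grad = np.ones((800, 32))
W_new = masked_update(W, grad, mask)
```

The mask is drawn once at initialization and reused for every step, matching the description in the text; rows with a zero mask entry are bit-for-bit unchanged after the update.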
1906.04554
[ [ "A cosmic shadow on CSL" ], [ "Abstract The Continuous Spontaneous Localisation (CSL) model solves the measurement problem of standard quantum mechanics, by coupling the mass density of a quantum system to a white-noise field.", "Since the mass density is not uniquely defined in general relativity, this model is ambiguous when applied to cosmology.", "We however show that most natural choices of the density contrast already make current measurements of the cosmic microwave background incompatible with other laboratory experiments." ], [ "The CSL Master Equations", "The CSL equation is given by (see, for instance, Eq.", "(4) of Ref.", "[60]) $\\mathrm {d}\\left|\\Psi \\left[ v\\right]\\right\\rangle &=\\biggl \\lbrace - i \\hat{H} \\mathrm {d}t + \\frac{\\sqrt{\\gamma }}{m_0}\\int \\mathrm {d}\\mathbf {x}_\\mathrm {p} \\left[\\hat{C}\\left(\\mathbf {x}_\\mathrm {p}\\right)- \\left\\langle \\hat{C}\\left(\\mathbf {x}_\\mathrm {p}\\right) \\right\\rangle \\right]\\mathrm {d}W_t\\left(\\mathbf {x}_\\mathrm {p}\\right)\\nonumber \\\\ &-\\frac{\\gamma }{2m_0^2}\\int \\mathrm {d}\\mathbf {x}_\\mathrm {p}\\left[\\hat{C}\\left(\\mathbf {x}_\\mathrm {p}\\right)- \\left\\langle \\hat{C}\\left(\\mathbf {x}_\\mathrm {p}\\right)\\right\\rangle \\right]^2\\mathrm {d}t\\biggr \\rbrace \\left|\\Psi \\left[ v \\right]\\right\\rangle \\, ,$ where $\\gamma $ is a free parameter, $m_0$ a reference mass (usually the mass of a nucleon), $\\hat{H}$ the Hamiltonian of the system, $\\hat{C}$ the collapse operator and $W_t(\\mathbf {x}_\\mathrm {p})$ is an ensemble of independent Wiener processes satisfying $\\mathbb {E} \\left[\\mathrm {d}W_t({\\mathbf {x}}_\\mathrm {p}) \\mathrm {d}W_{t^{\\prime }}({\\mathbf {x}}^{\\prime }_\\mathrm {p})\\right] =\\delta ({\\mathbf {x}}_{\\rm p}-{\\mathbf {x}}^{\\prime }_{\\rm p})\\delta (t-t^{\\prime })\\mathrm {d}t^2$ .", "This equation is written in physical coordinates ${\\mathbf {x}}_\\mathrm {p}$ .", "However, in cosmology, it is more convenient 
to work in terms of comoving coordinates defined by ${\\mathbf {x}}_\\mathrm {p}=a{\\mathbf {x}}$ , where $a$ is the time-dependent scale factor and describes how the size of the universe evolves with time.", "Comoving coordinates are coordinates for which the motion related to the expansion of the universe is subtracted out.", "In terms of these coordinates, the CSL equation reads $\\mathrm {d}\\left|\\Psi \\left[v\\right]\\right\\rangle &=\\biggl \\lbrace - i \\hat{H} \\mathrm {d}t + \\frac{1}{m_0}\\sqrt{\\frac{\\gamma }{a^3}}\\int \\mathrm {d}\\mathbf {x} \\, a^3\\left[\\hat{C}\\left(\\mathbf {x}\\right)- \\left\\langle \\hat{C}\\left(\\mathbf {x}\\right) \\right\\rangle \\right]\\mathrm {d}W_t\\left(\\mathbf {x}\\right)\\nonumber \\\\ &-\\frac{\\gamma }{2m_0^2}\\int \\mathrm {d}\\mathbf {x} \\, a^3\\left[\\hat{C}\\left(\\mathbf {x}\\right)- \\left\\langle \\hat{C}\\left(\\mathbf {x}\\right)\\right\\rangle \\right]^2\\mathrm {d}t\\biggr \\rbrace \\left|\\Psi \\left[v\\right]\\right\\rangle \\, ,$ with $\\mathrm {d}W_t({\\mathbf {x}}_\\mathrm {p})=a^{-3/2} \\mathrm {d}W_t({\\mathbf {x}})$ and $\\mathbb {E}\\left[\\mathrm {d}W_t({\\mathbf {x}}) \\mathrm {d}W_{t^{\\prime }}({\\mathbf {x}}^{\\prime })\\right] =\\delta ({\\mathbf {x}}-{\\mathbf {x}}^{\\prime })\\delta (t-t^{\\prime })\\mathrm {d}t^2$ , this last result coming from the fact that $\\mathbb {E}\\left[\\mathrm {d}W_t({\\mathbf {x}}_\\mathrm {p})\\mathrm {d}W_{t^{\\prime }}({\\mathbf {x}}^{\\prime }_\\mathrm {p})\\right]=\\delta (a{\\mathbf {x}}-a{\\mathbf {x}}^{\\prime })\\delta (t-t^{\\prime }) \\mathrm {d}t^2\\nonumber =a^{-3}\\delta ({\\mathbf {x}}-{\\mathbf {x}}^{\\prime })\\delta (t-t^{\\prime })\\mathrm {d}t^2$ .", "Notice that other implementations of the spontaneous localization model have been considered [11], [16], [17], [18], where the collapse is phenomenologically described.", "In this framework, collapse instantaneously occurs on space-like hypersurfaces, when the wavelength of a given mode 
crosses out a certain threshold.", "In our case, the dynamics of the collapse is fully resolved, but it is interesting to notice that these effective implementations already found modifications to the scalar and tensor power spectra.", "In the CSL theory, the collapse operator is taken to be the energy density.", "Moreover, in cosmological perturbation theory, one writes $\hat{\rho } = \bar{\rho }+\widehat{\delta \rho }$ , where $\bar{\rho }$ is the background energy density, and only the fluctuating part is quantised.", "As a consequence, the classical background part does not contribute to the CSL equation since $\hat{C}({\mathbf {x}})-\langle \hat{C}\left(\mathbf {x}\right)\rangle =\bar{\rho }+\widehat{\delta \rho }-\langle \bar{\rho }+\widehat{\delta \rho }\rangle =\widehat{\delta \rho }-\langle \widehat{\delta \rho }\rangle $ .", "The collapse operator also needs to be coarse-grained over the distance $r_\mathrm {c}$ , where $r_\mathrm {c}$ is the other free parameter in the model.", "One therefore introduces the Gaussian coarse-graining procedure $f_{\mathrm {cg}}\left(\mathbf {x}\right) = \left(\frac{a}{r_\mathrm {c}}\right)^3\frac{1}{\left(2\pi \right)^{3/2}} \int \mathrm {d}\mathbf {y}f\left(\mathbf {x}+\mathbf {y}\right)e^{-\frac{\left|\mathbf {y}\right|^2 a^2}{2 r_\mathrm {c}^2}}\, .$ This implies that the collapse operator used in the CSL equation reads $\hat{C}\left(\mathbf {x}\right) = \bar{\rho } \left.\widehat{\frac{\delta \rho }{\bar{\rho }}}\right|_{\mathrm {cg}}\left(\mathbf {x}\right)=3 M_{\scriptscriptstyle {\mathrm {Pl}}}^2\frac{\mathcal {H}^2}{a^2} \left.\widehat{\frac{\delta \rho }{\bar{\rho }}}\right|_{\mathrm {cg}}\left(\mathbf {x}\right),$ where we have used the Friedmann equation relating $\mathcal {H}=a^{\prime }/a$ to $\bar{\rho }$ .", "In cosmology, perturbation theory is usually formulated in Fourier space.", "In the CSL context, 
this leads to one CSL equation for each mode, namely [61] $\\mathrm {d}\\left|\\Psi _{\\mathbf {k}}^s\\left(t\\right)\\right\\rangle &=\\biggl \\lbrace - i \\hat{H}_{\\mathbf {k}}^s \\mathrm {d}t + \\frac{\\sqrt{\\gamma a^3}}{m_0}\\left[\\hat{C}^s\\left(\\mathbf {k}\\right) - \\left\\langle \\hat{C}^s\\left(\\mathbf {k}\\right) \\right\\rangle \\right] \\mathrm {d}W_t^s({\\mathbf {k}})-\\frac{\\gamma a^3}{2m_0^2}\\left[\\hat{C}^s\\left(\\mathbf {k}\\right)- \\left\\langle \\hat{C}^s\\left(\\mathbf {k}\\right)\\right\\rangle \\right]^2\\mathrm {d}t\\biggr \\rbrace \\left|\\Psi _{\\mathbf {k}}^s\\left(t\\right)\\right\\rangle \\, ,$ the index $s$ designating the real and imaginary parts, $s=\\mathrm {R},\\mathrm {I}$ .", "The correlation functions of the noise in Fourier space are given by $\\mathbb {E}\\left[\\mathrm {d}W_t^{\\mathrm {R}}({\\mathbf {k}})\\,\\mathrm {d}W_{t^{\\prime }}^{\\mathrm {R}}({\\mathbf {k}}^{\\prime })\\right]&=\\mathbb {E}\\left[\\mathrm {d}W_t^{\\mathrm {I}}({\\mathbf {k}})\\,\\mathrm {d}W_{t^{\\prime }}^{\\mathrm {I}}({\\mathbf {k}}^{\\prime })\\right]=\\delta ({\\mathbf {k}}-{\\mathbf {k}}^{\\prime })\\delta (t-t^{\\prime })\\mathrm {d}t^2,\\quad \\mathbb {E}\\left[\\mathrm {d}W_t^{\\mathrm {R}}({\\mathbf {k}})\\,\\mathrm {d}W_{t^{\\prime }}^{\\mathrm {I}}({\\mathbf {k}}^{\\prime })\\right]=0,$ and the Fourier transform of the collapse operator reads $\\hat{C}\\left(\\mathbf {k}\\right) & = 3 M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\frac{\\mathcal {H}^2}{a^2}e^{-\\frac{k^2 r_\\mathrm {c}^2}{2a^2}} \\widehat{\\frac{\\delta \\rho }{\\bar{\\rho }}}\\left(\\mathbf {k}\\right)\\, .$ The CSL equation can also be cast into a Lindblad equation, see for instance Eq.", "(21) of Ref.", "[60], which takes the form $\\frac{\\mathrm {d}\\hat{\\rho }}{\\mathrm {d}t} = -i\\left[\\hat{{H}},\\hat{\\rho }\\right]-\\frac{\\gamma }{2m_0^2}\\int \\mathrm {d}\\mathbf {x} \\, a^3\\left[\\hat{C}\\left(\\mathbf {x}\\right),\\left[\\hat{C}\\left(\\mathbf 
{x}\\right),\\hat{\\rho }\\right]\\right]$ for the mean density matrix $\\hat{\\rho } = \\mathbb {E}(\\vert \\Psi \\rangle \\langle \\Psi \\vert )$ .", "In Fourier space, this gives rise to one equation per Fourier mode, which can be written as $\\frac{\\mathrm {d}\\hat{\\rho }_{\\mathbf {k}}^s}{\\mathrm {d}t} =-i\\left[\\hat{{H}}_{\\mathbf {k}}^s,\\hat{\\rho }_{\\mathbf {k}}^s\\right]-\\frac{\\gamma }{2m_0^2}a^3\\left[\\hat{C}^s\\left(\\mathbf {k}\\right),\\left[\\hat{C}^s\\left(\\mathbf {k}\\right),\\hat{\\rho }_{\\mathbf {k}}^s\\right]\\right].$" ], [ "Solving the Lindblad equation", "The stochastic mean of the quantum expectation value of some observable $\\hat{O}_{\\mathbf {k}}^s$ is given by $\\mathbb {E}\\left(\\left\\langle \\hat{O}_{\\mathbf {k}}^s\\right\\rangle \\right) =\\mathrm {Tr}\\left(\\hat{\\rho }_{\\mathbf {k}}^s \\hat{O}_{\\mathbf {k}}^s\\right)$ , where $\\hat{\\rho }_{\\mathbf {k}}^s$ obeys Eq.", "(REF ).", "Differentiating this expression with respect to conformal time (we recall that conformal time $\\eta $ is related to cosmic time $t$ by $\\mathrm {d}t=a\\mathrm {d}\\eta $ ) and making use of Eq.", "(REF ), one obtains $\\frac{\\mathrm {d}}{\\mathrm {d}\\eta } \\mathbb {E}\\left(\\left\\langle \\hat{O}_{\\mathbf {k}}^s\\right\\rangle \\right) = \\mathbb {E}\\left(\\left\\langle \\frac{\\partial }{\\partial \\eta } \\hat{O}_{\\mathbf {k}}^s\\right\\rangle \\right)- i \\mathbb {E} \\left(\\left\\langle \\left[\\hat{O}_{\\mathbf {k}}^s,\\hat{\\mathcal {H}}_{\\mathbf {k}}^s\\right]\\right\\rangle \\right)- \\frac{\\gamma a^4}{2m_0^2}\\mathbb {E} \\left(\\left[\\left[\\hat{O}_{\\mathbf {k}}^s,\\hat{C}_{\\mathbf {k}}^s\\right],\\hat{C}_{\\mathbf {k}}^s\\right]\\right).$ For one-point correlators, $\\hat{O}_{\\mathbf {k}}^s=v_{\\mathbf {k}}^s$ and $\\hat{O}_{\\mathbf {k}}^s=p_{\\mathbf {k}}^s$ , this gives rise to $\\frac{\\mathrm {d}\\mathbb {E} \\left( \\left\\langle \\hat{v}_{\\mathbf {k}}^s\\right\\rangle \\right)}{\\mathrm {d}\\eta } 
=\\mathbb {E} \\left(\\left\\langle \\hat{p}_{\\mathbf {k}}^s \\right\\rangle \\right)\\, , \\qquad \\frac{\\mathrm {d}\\mathbb {E} \\left( \\left\\langle \\hat{p}_{\\mathbf {k}}^s \\right\\rangle \\right)}{\\mathrm {d}\\eta } =-\\omega ^2(k,\\eta ) \\mathbb {E} \\left(\\left\\langle \\hat{v}_{\\mathbf {k}}^s \\right\\rangle \\right)\\, ,$ which is nothing but the Ehrenfest theorem.", "For two-point correlators, denoting $P_{vv}(k)=\\mathbb {E} (\\langle \\hat{v}_{\\mathbf {k}}^s{}^2 \\rangle ) $ , $P_{pp}(k)=\\mathbb {E} (\\langle \\hat{p}_{\\mathbf {k}}^s{}^2 \\rangle ) $ , $P_{vp}(k)=\\mathbb {E} (\\langle \\hat{v}_{\\mathbf {k}}^s\\hat{p}_{\\mathbf {k}}^s \\rangle ) $ and $P_{pv}(k)=\\mathbb {E} (\\langle \\hat{p}_{\\mathbf {k}}^s\\hat{v}_{\\mathbf {k}}^s \\rangle ) $ , one obtains $\\frac{\\mathrm {d}P_{vv}(k)}{\\mathrm {d}\\eta }&=P_{vp}(k)+P_{pv}(k)+\\frac{\\gamma }{m_0^2} a^4 \\beta _{\\mathbf {k}}^2, \\\\\\frac{\\mathrm {d}\\left[P_{vp}(k)+P_{pv}(k)\\right]}{\\mathrm {d}\\eta } &=2P_{pp}(k)-2w^2(k,\\eta )P_{vv}(k)- 2a^4\\frac{\\gamma }{m_0^2} \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}, \\\\\\frac{\\mathrm {d}P_{pp}(k)}{\\mathrm {d}\\eta }&= -\\omega ^2(k,\\eta )\\left[P_{pv}(k)+P_{vp}(k)\\right]+ a^4\\frac{\\gamma }{m_0^2} \\alpha _{\\mathbf {k}}^2\\, ,$ where the coefficients $\\alpha _{\\mathbf {k}}$ and $\\beta _{\\mathbf {k}}$ have been defined in the main text, see Eqs.", "(REF )-() and (REF )-().", "These equations can be combined into a single third-order equation for $P_{vv}$ only, which reads $\\frac{\\mathrm {d}^3P_{vv}}{\\mathrm {d}\\eta ^3}+4\\omega ^2(k,\\eta )\\frac{\\mathrm {d}P_{vv}}{\\mathrm {d}\\eta }+4 \\omega \\frac{\\mathrm {d}\\omega }{\\mathrm {d}\\eta }P_{vv}=S\\, ,$ where $S$ is the source function given by $S=\\frac{\\gamma }{m_0^2}\\left[2 a^4 \\left(\\alpha _{\\mathbf {k}}^2+\\omega ^2 \\beta _{\\mathbf {k}}^2\\right)-2\\left(a^4 \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}\\right)^{\\prime }+\\left(a^4\\beta _{\\mathbf 
{k}}^2\\right)^{\\prime \\prime }\\right] .$ As we will show below, this source function encodes both the modifications to the power spectrum and the collapsing time.", "Let us note that it is invariant under phase-space canonical transforms, so the results derived hereafter would be the same if other canonical variables than $v_{\\mathbf {k}}$ and $p_{\\mathbf {k}}$ were used.", "As shown in Ref.", "[55], Eq.", "(REF ) can be solved by introducing the Green function of the free theory, $G(\\eta ,\\bar{\\eta })=\\frac{1}{W}\\left[g_{\\mathbf {k}}^0{}^*(\\bar{\\eta })g_{\\mathbf {k}}^0(\\eta )-g_{\\mathbf {k}}^0(\\bar{\\eta })g_{\\mathbf {k}}^0{}^*(\\eta )\\right]\\Theta \\left(\\eta -\\bar{\\eta }\\right),$ where $g_{\\mathbf {k}}^0$ is a solution of the Mukhanov-Sasaki equation, $(g_{\\mathbf {k}}^0)^{\\prime \\prime }+\\omega ^2(k,\\eta )g_{\\mathbf {k}}^0=0$ , $W=g_{\\mathbf {k}}^0{}^{\\prime }g_{\\mathbf {k}}^0{}^*-g_{\\mathbf {k}}^0g_{\\mathbf {k}}^0{}^*{}^{\\prime }$ is its Wronskian, and where $\\Theta (x)=1$ if $x\\ge 0$ and 0 otherwise is the Heaviside function.", "By construction, given the mode equation obeyed by $g_{\\mathbf {k}}^0$ , it is a constant.", "Then, the solution to Eq.", "(REF ) reads $P_{vv}(k) = g_{\\mathbf {k}}^0\\left(\\eta \\right)g^0_{\\mathbf {k}}{}^*\\left(\\eta \\right) +\\frac{1}{2}\\int _{-\\infty }^\\eta S\\left(\\bar{\\eta }\\right)G^2(\\eta ,\\bar{\\eta })\\mathrm {d}\\bar{\\eta }\\, .$" ], [ "Inflation", "During inflation $a\\simeq -1/(H\\eta )$ , and at leading order in the Hubble-flow parameters, Eq.", "(REF ) gives rise to $S_{\\mathrm {inf}}&\\simeq \\frac{4\\gamma }{m_0^2} \\epsilon _1 H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2 k^2e^{-\\left(r_\\mathrm {c}/\\lambda \\right)^2}\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{-6}\\biggl [126\\epsilon _1^2-75 \\epsilon _1\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{2}+81\\epsilon _1^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2+18\\left(\\frac{\\ell 
_{_{\\rm H}}}{\\lambda }\\right)^{4}-48\\epsilon _1\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{2}\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2\\nonumber \\\\ &+18\\epsilon _1^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4+\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{6}+7\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{4}\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2-12\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{2}\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4+2\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{4}\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4\\biggr ],$ where $\\ell _{_{\\rm H}}=H^{-1}$ is the Hubble radius and $\\lambda =a(\\eta )/k$ the wavelength of the Fourier mode with comoving wavenumber $k$ .", "The quantity $\\ell _{_{\\rm H}}/\\lambda $ can also be written as $\\ell _{_{\\rm H}}/\\lambda =k/(aH)=-k\\eta $ .", "We see that the amplitude of the source is controlled by the energy density during inflation, $\\bar{\\rho }_{\\rm inf}=3H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2$ , and by the first Hubble-flow parameter $\\epsilon _1$ (at next-to-leading order in slow roll, higher-order Hubble flow parameters would appear).", "The limits we are interested in are $\\ell _{_{\\rm H}}/\\lambda \\ll 1$ (super Hubble limit) and $r_\\mathrm {c}/\\lambda \\ll 1$ (otherwise the exponential term turns the source off, see the discussion in the main text).", "In this regime, the dominant term is the first one, proportional to $126\\epsilon _1^2$ (although it is slow-roll suppressed).", "Normalising the mode function in the Bunch-Davies vacuum, at leading order in slow roll one has $g_{\\mathbf {k}}^0(\\eta )=\\frac{e^{ik\\eta }}{\\sqrt{2k}}\\left(1+\\frac{i}{k\\eta }\\right),$ from which Eq.", "(REF ) gives $G_{\\mathrm {inf}}(\\eta ,\\bar{\\eta }) =\\frac{\\left(1+k^2\\eta \\bar{\\eta }\\right)\\sin \\left[k\\left(\\eta - \\bar{\\eta }\\right)\\right]-k\\left(\\eta - \\bar{\\eta }\\right)\\cos 
\\left[k\\left(\\eta - \\bar{\\eta }\\right)\\right]}{k^3\\eta \\bar{\\eta }}\\Theta \\left(\\eta -\\bar{\\eta }\\right)\\simeq \\frac{\\eta ^3-\\bar{\\eta }^3}{3\\eta \\bar{\\eta }}\\Theta \\left(\\eta -\\bar{\\eta }\\right)\\, .$ The second expression is valid in the super-Hubble limits $-k\\eta \\rightarrow 0$ (since the power spectrum is computed on super-Hubble scales) and $-k \\bar{\\eta } \\rightarrow 0$ (since we assume $Hr_\\mathrm {c}\\gg 1$ , so any mode is super Hubble when it crosses out $r_\\mathrm {c}$ ).", "Plugging Eqs.", "(REF ) and (REF ) into Eq.", "(REF ), one obtains at leading order $P_{vv}(k) &\\simeq \\vert v_{\\mathbf {k}}\\vert ^2_{\\rm standard}+\\frac{18\\gamma }{m_0^2k}H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\epsilon _1^3\\left(\\frac{k}{aH}\\right)^{-3}=\\vert v_{\\mathbf {k}}\\vert ^2_{\\rm standard}\\left[1+36\\frac{\\gamma }{m_0^2} H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\epsilon _1^3\\left(\\frac{k}{aH}\\right)^{-1}\\right],$ where $\\vert v_{\\mathbf {k}}\\vert ^2_{\\rm standard}=\\vert g_{\\mathbf {k}}^0\\vert ^2$ , which is the result used in the main text." 
], [ "Radiation-dominated epoch", "Let us now study what happens during the radiation dominated era.", "In that case the scale factor is given by $a(\\eta )=a_{\\mathrm {r}}\\left(\\eta -\\eta _r\\right)$ and, as a consequence, $\\mathcal {H}(\\eta )=a^{\\prime }/a=(\\eta -\\eta _{\\mathrm {r}})^{-1}$ .", "Requiring the scale factor and its derivative (or, equivalently, the Hubble parameter) to be continuous, which is equivalent to the continuity of the first and second fundamental forms, gives $\\eta _{\\mathrm {r}} = 2 \\eta _\\mathrm {end}$ and $a_{\\mathrm {r}} = 1/(H_\\mathrm {end}\\eta _\\mathrm {end}^2)$ .", "Using the coefficients $\\alpha _{\\mathbf {k}}$ and $\\beta _{\\mathbf {k}}$ given in Eqs.", "(REF ) and (), the source function (REF ) reads $\\nonumber S_{\\rm rad}&=8\\frac{\\gamma }{m_0^2} H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2 k^2e^{-\\left(r_\\mathrm {c}/\\lambda \\right)^2}\\left(\\frac{a_\\mathrm {end}}{a}\\right)^4\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^{-6}\\biggl [3024-414\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^2+\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^6-1836\\left(\\frac{a_\\mathrm {end}}{a}\\right)^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2_\\mathrm {end}\\\\ \\nonumber &+216\\left(\\frac{a_\\mathrm {end}}{a}\\right)^4\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4_\\mathrm {end}-72\\left(\\frac{a_\\mathrm {end}}{a}\\right)^4\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4_\\mathrm {end}+432\\left(\\frac{a_\\mathrm {end}}{a}\\right)^2\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2_\\mathrm {end}\\\\ &+6\\left(\\frac{a_\\mathrm {end}}{a}\\right)^2\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda }\\right)^2\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^4_\\mathrm {end}-21\\left(\\frac{a_\\mathrm {end}}{a}\\right)^2\\left(\\frac{\\ell _{_{\\rm H}}}{\\lambda 
}\\right)^4\\left(\\frac{r_\\mathrm {c}}{\\lambda }\\right)^2_\\mathrm {end}\\biggr ].$ Its form is similar to that of the source during inflation, see Eq.", "(REF ), although the amplitude is now proportional to the energy density at the end of inflation, $\\bar{\\rho }_\\mathrm {end}=3H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2$ , and is no longer slow-roll suppressed as is expected in the radiation-dominated era.", "The coefficients of the expansion depend on $\\left(r_\\mathrm {c}/\\lambda \\right)_\\mathrm {end}$ , the ratio between the CSL scale and the mode wavelength evaluated at the end of inflation.", "This dependence on quantities evaluated at the end of inflation comes from the matching procedure.", "At the perturbative level, the Mukhanov-Sasaki variable now obeys ${g_{\\mathbf {k}}^0}^{\\prime \\prime }+(c_{_{\\rm S}}^2k^2-z^{\\prime \\prime }/z)g_{\\mathbf {k}}^0=0$ with $c_{_{\\rm S}}^2=1/3$ and $z= aM_{\\scriptscriptstyle {\\mathrm {Pl}}}\\sqrt{2 \\epsilon _1}/c_{_\\mathrm {S}}= 2\\sqrt{3}aM_{\\scriptscriptstyle {\\mathrm {Pl}}}$ .", "The solution reads $g_{\\mathbf {k}}^0(\\eta )= A_{\\mathbf {k}} e^{-ik\\frac{\\eta -\\eta _{\\mathrm {r}}}{\\sqrt{3}}} + B_{\\mathbf {k}} e^{ik\\frac{\\eta -\\eta _{\\mathrm {r}}}{\\sqrt{3}}}\\, ,$ On super-Hubble scales, continuity of the first and second fundamental forms is equivalent to the continuity of $\\zeta $ and the Bardeen potential $\\Phi $ .", "At leading order in $k \\eta _\\mathrm {end}$ , this leads to $g_{\\mathbf {k}}^0(\\eta ) =-\\frac{3i}{\\sqrt{k \\epsilon _1}(k\\eta _\\mathrm {end})^2}\\sin \\left[\\frac{k}{\\sqrt{3}}\\left(\\eta -\\eta _{\\mathrm {r}}\\right)\\right] .$ Plugging this expression into Eq.", "(REF ), one obtains $G_\\mathrm {rad}(\\eta ,\\bar{\\eta })=\\frac{\\sqrt{3}}{k}\\sin \\left[\\frac{k}{\\sqrt{3}}\\left(\\eta -\\bar{\\eta }\\right)\\right]\\Theta \\left(\\eta -\\bar{\\eta }\\right)\\simeq \\left(\\eta -\\bar{\\eta }\\right)\\Theta \\left(\\eta -\\bar{\\eta 
}\\right).$ At this stage, one must distinguish between two situations: either the Fourier mode under consideration crosses out the scale $r_\\mathrm {c}$ during inflation or during the radiation-dominated era." ], [ "Case where the mode crosses out $r_\\mathrm {c}$ during inflation", "In the standard situation, the power spectrum of $\\zeta $ computed at the end of inflation is frozen on super Hubble scales and can be directly propagated to the last scattering surface.", "Here, however, a priori, the power spectrum continues to evolve during the radiation-dominated era even on large scales.", "The integral appearing in Eq.", "(REF ) can be split in two parts: one for which $-\\infty <\\bar{\\eta }<\\eta _\\mathrm {end}$ , which was already calculated above during inflation, and one for which $\\eta _\\mathrm {end}<\\bar{\\eta }<\\eta $ that we now calculate.", "If the scale $r_\\mathrm {c}$ is crossed out during inflation, then $(r_\\mathrm {c}/\\lambda )_\\mathrm {end}\\ll 1$ and all the terms in the source but the one proportional to 3024 can be ignored.", "At leading order in $\\ell _{\\mathrm {H}}/\\lambda _\\mathrm {end}$ and $r_\\mathrm {c}/\\lambda _\\mathrm {end}$ , one obtains that, after a few $e$ -folds, the power spectrum freezes to $P_{vv}(k)=\\left|v_{\\mathbf {k}}^s\\right|^2_{\\rm standard}\\left[1+448\\frac{\\gamma }{m_0^2}H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2 \\epsilon _1 \\left(\\frac{k}{aH}\\right)^{-1}_\\mathrm {end}\\right].$" ], [ "Case where the mode crosses out $r_\\mathrm {c}$ during the radiation-dominated era", "The mode crosses out $r_\\mathrm {c}$ when $a_{\\mathrm {cross}}=kr_\\mathrm {c}$ , i.e.", "at $\\eta _{\\mathrm {cross}}=\\eta _\\mathrm {r}+k\\eta _\\mathrm {end}^2H_\\mathrm {end}r_\\mathrm {c}$ , which implies that $(a_\\mathrm {end}/a_{\\mathrm {cross}})(r_\\mathrm {c}/\\lambda )_\\mathrm {end}=1$ .", "As a consequence, in the source (REF ), the terms proportional to 3024, $-1836$ and 216 are of the 
same order of magnitude initially, while the others are negligible since suppressed by powers of $\\ell _{_{\\rm H}}/\\lambda $ and can be safely neglected.", "This gives rise to $P_{vv}(k)=\\left|v_{\\mathbf {k}}^s\\right|^2_{\\rm standard}\\left[1+\\frac{35408}{143}\\frac{\\gamma }{m_0^2}H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2 \\epsilon _1 \\left(\\frac{r_\\mathrm {c}}{\\ell _{_{\\rm H}}}\\right)_{\\rm end}^{-9}\\left(\\frac{k}{aH}\\right)^{-10}_\\mathrm {end}\\right].$" ], [ "Solving the CSL Equation", "The CSL equation (REF ) admits Gaussian solutions [as revealed e.g.", "from the fact that its Lindblad counterpart (REF ) is linear mode by mode].", "Therefore, since the initial vacuum state, the Bunch-Davies state, is Gaussian, it remains so at any time and the stochastic wave function can be written as $\\Psi _{\\mathbf {k}}^s\\left(\\eta ,v_{\\mathbf {k}}^s\\right)&=\\vert N_{\\mathbf {k}}\\left(\\eta \\right)\\vert \\exp \\Bigl \\lbrace -\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\left(\\eta \\right)\\left[v_{\\mathbf {k}}^s-\\bar{v}_{\\mathbf {k}}^s\\left(\\eta \\right)\\right]^2+i\\sigma _{\\mathbf {k}}^s(\\eta )+i\\chi _{\\mathbf {k}}^s(\\eta )v_{\\mathbf {k}}^s-i\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}(\\eta )\\left(v_{\\mathbf {k}}^s\\right)^2\\Bigr \\rbrace \\, ,$ where, for the state to be normalised, one has $\\vert N_{\\mathbf {k}}\\vert =\\left(\\frac{2\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}}{\\pi }\\right)^{1/4}\\, .$ In the standard picture, the quantum state evolves into a two-mode strongly squeezed state.", "Here, one has $\\langle \\hat{v}_{\\mathbf {k}}^s \\rangle = \\bar{v}_{\\mathbf {k}}^s$ and $\\langle \\hat{p}_{\\mathbf {k}}^s \\rangle = -i\\langle \\partial /\\partial \\hat{v}_{\\mathbf {k}}^s \\rangle = \\chi _{\\mathbf {k}}^s- 2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\bar{v}_{\\mathbf {k}}^s $ , giving rise to $\\langle \\hat{C}^s\\left(\\mathbf {k}\\right) \\rangle =(\\alpha _{\\mathbf {k}}- 2 \\Im 
\\mathrm {m}\\,\\Omega _{\\mathbf {k}} \\beta _{\\mathbf {k}})\\bar{v}_{\\mathbf {k}}^s+\\beta _{\\mathbf {k}}\\chi _{\\mathbf {k}}^s $ .", "For convenience, let us rewrite the CSL equation (REF ) in terms of conformal time, $\\mathrm {d}\\left|\\Psi _{\\mathbf {k}}^s\\left(\\eta \\right)\\right\\rangle &= \\biggl \\lbrace - i \\hat{\\mathcal {H}}_{\\mathbf {k}}^s \\mathrm {d}\\eta + \\frac{\\sqrt{ {\\gamma a^4}}}{m_0}\\left[\\hat{C}^s\\left(\\mathbf {k}\\right)- \\left\\langle \\hat{C}^s\\left(\\mathbf {k}\\right) \\right\\rangle \\right] \\mathrm {d}W_\\eta -\\frac{\\gamma a^4}{2m_0^2} \\left[\\hat{C}^s\\left(\\mathbf {k}\\right)- \\left\\langle \\hat{C}^s\\left(\\mathbf {k}\\right) \\right\\rangle \\right]^2\\mathrm {d}\\eta \\biggr \\rbrace \\left|\\Psi _{\\mathbf {k}}^s\\left(\\eta \\right)\\right\\rangle ,$ where $\\hat{\\mathcal {H}}_{\\mathbf {k}}^s = (\\hat{p}_{\\mathbf {k}}^s)^2/2 +\\omega ^2(k,\\eta ) (\\hat{v}_{\\mathbf {k}}^s)^2/2 $ and where the noise $\\mathrm {d}W_\\eta $ is defined by $\\mathrm {d}W_{t}^s=a^{1/2}\\mathrm {d}W_\\eta ^s$ such that $\\mathbb {E}\\left[\\mathrm {d}W_\\eta ^s({\\mathbf {k}})\\,\\mathrm {d}W_{\\eta ^{\\prime }}^{s^{\\prime }}({\\mathbf {k}}^{\\prime })\\right]=\\delta ({\\mathbf {k}}-{\\mathbf {k}}^{\\prime })\\, \\delta ^{ss^{\\prime }}\\delta (\\eta -\\eta ^{\\prime })\\mathrm {d}\\eta ^2.$ Making use of the representation $\\hat{C}^s\\left(\\mathbf {k}\\right) = \\alpha _{\\mathbf {k}} \\hat{v}_{\\mathbf {k}}^s -\\beta _{\\mathbf {k}} i \\partial /\\partial \\hat{v}_{\\mathbf {k}}^s $ , the CSL equation becomes $\\frac{\\mathrm {d}\\left|\\Psi _{\\mathbf {k}}^s\\left(\\eta \\right)\\right\\rangle }{\\mathrm {d}\\eta }&=\\Biggl \\lbrace -\\left[\\frac{i}{2}\\omega ^2(k,\\eta ) + \\frac{\\gamma }{2m_0^2} a^4\\alpha _{\\mathbf {k}}^2 \\right]\\left(v_{\\mathbf {k}}^s\\right)^2+ \\left(\\frac{i}{2}+ \\frac{\\gamma }{2m_0^2} a^4 \\beta _{\\mathbf {k}}^2 \\right)\\frac{\\partial ^2}{\\partial (v_{\\mathbf {k}}^s)^2 
}+i\\frac{\\gamma }{m_0^2} a^4 \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}v_{\\mathbf {k}}^s\\frac{\\partial }{\\partial v_{\\mathbf {k}}^s }\\nonumber \\\\& +\\alpha _{\\mathbf {k}} \\left[ \\frac{\\sqrt{\\gamma }}{m_0}a^2\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta }+\\frac{\\gamma }{m_0^2} a^4 \\left(\\alpha _{\\mathbf {k}}\\bar{v}_{\\mathbf {k}}^s- 2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}} \\beta _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s+\\beta _{\\mathbf {k}}\\chi _{\\mathbf {k}}^s\\right)\\right] v_{\\mathbf {k}}^s\\nonumber \\\\& -i\\beta _{\\mathbf {k}} \\left[ \\frac{\\sqrt{\\gamma }}{m_0}a^2\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta }+\\frac{\\gamma }{m_0^2} a^4 \\left(\\alpha _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s- 2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\beta _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s+\\beta _{\\mathbf {k}}\\chi _{\\mathbf {k}}^s\\right)\\right]\\frac{\\partial }{\\partial v_{\\mathbf {k}}^s }\\nonumber \\\\& -\\frac{\\sqrt{\\gamma }}{m_0}a^2\\left(\\alpha _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s- 2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\beta _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s+\\beta _{\\mathbf {k}}\\chi _{\\mathbf {k}}^s\\right)\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta }\\nonumber \\\\& -\\frac{\\gamma }{2m_0^2}a^4 \\left(\\alpha _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s- 2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}} \\beta _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^s+\\beta _{\\mathbf {k}}\\chi _{\\mathbf {k}}^s\\right)^2 +i\\frac{\\gamma }{2m_0^2}a^4\\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}\\Biggr \\rbrace \\left|\\Psi _{\\mathbf {k}}^s\\left(\\eta \\right)\\right\\rangle $ Plugging Eq.", "(REF ) into Eq.", "(REF ) and making use of Itô calculus, one can identify terms proportional to $({v_{\\mathbf {k}}^s})^2$ , $v_{\\mathbf {k}}^s$ and 1.", "This gives rise to the set of differential equations $\\frac{\\mathrm {d}\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}}{\\mathrm {d}\\eta } 
&=\\frac{\\gamma }{m_0^2} a^4\\alpha _{\\mathbf {k}}^2-4\\frac{\\gamma }{m_0^2} a^4\\beta _{\\mathbf {k}}^2\\left[\\left(\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\right)^2-\\left(\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\right)^2\\right]+4 \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}} \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}-4\\frac{\\gamma }{m_0^2} a^4 \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}} \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}},\\\\\\frac{\\mathrm {d}\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}}{\\mathrm {d}\\eta } &=\\frac{1}{2}\\omega ^2(k,\\eta )-2\\left[\\left(\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\right)^2-\\left(\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\right)^2\\right]-8\\frac{\\gamma }{m_0^2} a^4 \\beta _{\\mathbf {k}}^2 \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}+4\\frac{\\gamma }{m_0^2} a^4 \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}},\\\\\\frac{\\mathrm {d}\\ln \\left|N_{\\mathbf {k}}\\left(\\eta \\right)\\right|}{\\mathrm {d}\\eta }& = \\frac{1}{4 \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}}\\frac{\\mathrm {d}\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}}{\\mathrm {d}\\eta },\\\\\\frac{\\mathrm {d}\\bar{v}_{\\mathbf {k}}}{\\mathrm {d}\\eta }& = \\chi _{\\mathbf {k}}-2\\bar{v}_{\\mathbf {k}}\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}+\\frac{\\sqrt{\\gamma } a^2}{2m_0\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}}\\left(\\alpha _{\\mathbf {k}} - 2 \\beta _{\\mathbf {k}} \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\right)\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta },\\\\\\frac{\\mathrm {d}\\sigma _{\\mathbf {k}}}{\\mathrm {d}\\eta }&=-\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}+2\\left(\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\right)^2\\bar{v}^2_{\\mathbf {k}}-\\frac{\\chi ^2_{\\mathbf {k}}}{2}+\\frac{\\gamma a^4}{2m_0^2}\\beta _{\\mathbf {k}}\\left(\\alpha _{\\mathbf {k}}-2\\beta _{\\mathbf {k}}\\Im \\mathrm {m}\\,\\Omega _{\\mathbf 
{k}}\\right)\\left(1-8\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}^2\\right)\\\\ &-2\\frac{\\sqrt{\\gamma }}{m_0}a^2\\beta _{\\mathbf {k}} \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta }, \\nonumber \\\\\\frac{\\mathrm {d}\\chi _{\\mathbf {k}}}{\\mathrm {d}\\eta }&=2 \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\chi _{\\mathbf {k}}- 4 \\left(\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\right)^2\\bar{v}_{\\mathbf {k}}+8\\frac{\\gamma }{m_0^2} a^4\\beta _{\\mathbf {k}} \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}} \\bar{v}_{\\mathbf {k}}\\left(\\alpha _{\\mathbf {k}}-2\\beta _{\\mathbf {k}} \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\right)+2\\frac{\\sqrt{\\gamma }}{m_0}a^2\\beta _{\\mathbf {k}} \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\frac{\\mathrm {d}W_\\eta }{\\mathrm {d}\\eta }.$ Two remarkable properties are to be noticed: $\\Omega _{\\mathbf {k}}$ decouples from the other parameters of the wavefunction, and its dynamics is not stochastic, though it is modified by the CSL terms.", "Combining the first two equations above, one can derive an equation for $\\Omega _{\\mathbf {k}} = \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}} + i \\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}$ , namely $\\Omega _{\\mathbf {k}}^{\\prime } = -2\\left(i+2\\frac{\\gamma }{m_0^2} a^4 \\beta _{\\mathbf {k}}^2\\right)\\Omega _{\\mathbf {k}}^2+4i\\frac{\\gamma }{m_0^2} a^4\\alpha _{\\mathbf {k}}\\beta _{\\mathbf {k}} \\Omega _{\\mathbf {k}}+\\frac{\\gamma }{m_0^2} a^4\\alpha _{\\mathbf {k}}^2 + \\frac{i}{2}\\omega ^2(k,\\eta )\\, .$ This is a Riccati equation that can be made linear by introducing the function $g_{\\mathbf {k}}(\\eta )$ defined by the following expression $\\Omega _{\\mathbf {k}} = \\frac{1}{2\\left(i+2\\gamma a^4 \\beta _{\\mathbf {k}}^2/m_0^2\\right)}\\left(\\frac{g_{\\mathbf {k}}^{\\prime }}{g_{\\mathbf {k}}}-\\frac{1}{2} C_1\\right)\\, ,$ and obeying $g_{\\mathbf {k}}^{\\prime \\prime
}+\\left(-\\frac{1}{2} C_1^{\\prime }-\\frac{1}{4}C_1^2+C_2\\right)g_{\\mathbf {k}}=0.$ The coefficients $C_1$ and $C_2$ are given by $C_1\\equiv - 2 i \\frac{\\gamma }{m_0^2} \\left[2 a^4 \\alpha _{\\mathbf {k}} \\beta _{\\mathbf {k}}-\\frac{\\left(a^4 \\beta _{\\mathbf {k}}^2\\right)^{\\prime }}{1-2 i\\gamma a^4 \\beta _{\\mathbf {k}}^2/m_0^2}\\right],\\quad C_2\\equiv \\left(1-2i\\frac{\\gamma }{m_0^2} a^4 \\beta _{\\mathbf {k}}^2\\right)\\left[\\omega ^2(k,\\eta )-2i\\frac{\\gamma }{m_0^2} a^4 \\alpha _{\\mathbf {k}}^2\\right],$ from which it follows that $-C_1^{\\prime }/2-C_1^2/4+C_2=\\omega ^2(k,\\eta )+\\Delta \\omega ^2_\\gamma (k,\\eta )$ , where $\\Delta \\omega ^2_\\gamma (k,\\eta )$ is a function which vanishes when $\\gamma =0$ and can easily be determined from the expressions of $C_1$ and $C_2$ .", "Quite remarkably, one has $\\Delta \\omega ^2_\\gamma (k,\\eta )=-iS+{\\cal O}\\left(\\gamma ^2\\right),$ where $S$ is the source function introduced in Eq.", "(REF ), and computed in Eqs.", "(REF ) and (REF ) for inflation and radiation respectively.", "Solving Eq.", "(REF ) exactly is difficult but can be done perturbatively in $\\gamma $ .", "The perturbed solution can be written as $g_{\\mathbf {k}}(\\eta )=g_{\\mathbf {k}}^0(\\eta )+\\frac{\\gamma }{m_0^2} h_{\\mathbf {k}}(\\eta )+{\\cal O}\\left(\\gamma ^2\\right),$ where $g_{\\mathbf {k}}^0(\\eta )$ is the solution of the mode equation for $\\gamma =0$ introduced above.", "Plugging this expansion into Eq.", "(REF ), the function $h_{\\mathbf {k}}(\\eta )$ obeys $h_{\\mathbf {k}}^{\\prime \\prime }+\\omega ^2(k,\\eta ) h_{\\mathbf {k}}=i\\frac{m_0^2S}{\\gamma }g_{\\mathbf {k}}^0,$ which is solved as $h_{\\mathbf {k}}(\\eta )=i\\int _{-\\infty }^\\eta G(\\eta ,\\bar{\\eta })\\frac{m_0^2S(\\bar{\\eta })}{\\gamma }g_{\\mathbf {k}}^0(\\bar{\\eta })\\mathrm {d}\\bar{\\eta },$ where the Green function $G(\\eta ,\\bar{\\eta })$ has been introduced in Eq.", "(REF ).", "Let us recall that the quantity 
$m_0^2S/\\gamma $ is of order ${\\cal O}(\\gamma ^0)$ at leading order.", "Inserting the expansion (REF ) into Eq.", "(REF ) finally leads to $\\Omega _{\\mathbf {k}}=\\frac{1}{2i}\\frac{g_{\\mathbf {k}}^0{}^{\\prime }}{g_{\\mathbf {k}}^0}\\left\\lbrace 1-\\frac{\\gamma }{m_0^2}\\left(\\frac{h_{\\mathbf {k}}}{g_{\\mathbf {k}}^0}-\\frac{h_{\\mathbf {k}}^{\\prime }}{g_{\\mathbf {k}}^0{}^{\\prime }}\\right)+i\\frac{\\gamma }{m_0^2}\\frac{g_{\\mathbf {k}}^0}{g_{\\mathbf {k}}^0{}^{\\prime }}\\left[2a^4\\alpha _{\\mathbf {k}}\\beta _{\\mathbf {k}}-\\left(a^4\\beta _{\\mathbf {k}}^2\\right)^{\\prime }\\right]+2i\\frac{\\gamma }{m_0^2}a^4\\beta _{\\mathbf {k}}^2+{\\cal O}\\left(\\gamma ^2\\right)\\right\\rbrace .$" ], [ "Inflation", "We now apply these general considerations to the case of inflation, where the Green function is given by Eq.", "(REF ) and the free source function by the expression above that equation.", "As already mentioned, the first term in the inflationary source $S_\\mathrm {inf}$ given in Eq.", "(REF ), i.e.", "the one proportional to $126\\epsilon _1^2$ , is the dominant one.", "Keeping only this term in Eq.", "(REF ), Eq.", "(REF ) leads to the explicit expression of $h_{\\mathbf {k}}(\\eta )$ which can then be used to calculate the first correction in Eq.", "(REF ).", "The next step consists in calculating the two additional contributions in Eq.", "(REF ).", "Using the expressions of $\\alpha _{\\mathbf {k}}$ and $\\beta _{\\mathbf {k}}$ during inflation, see Eqs.", "(REF ) and (), one obtains at leading order in slow roll $2a^4\\alpha _{\\mathbf {k}}\\beta _{\\mathbf {k}} -\\left(a^4\\beta _{\\mathbf {k}}^2\\right)^{\\prime }\\simeq 108 H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\epsilon _1^3/[(k\\eta )^4 \\eta ]$ and $2a^4\\beta _{\\mathbf {k}}^2\\simeq 36H^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\epsilon _1^3/(k\\eta )^4$ .", "Inserting these results into Eq.", "(REF ), one finds an exact cancellation, meaning that it is necessary to go to 
next-to-leading order in slow roll, where the result takes the following form $\\Omega _{\\mathbf {k}}=\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\left[1+4i\\frac{\\gamma }{m_0^2} \\epsilon _1^3{\\cal O}(\\epsilon )\\bar{\\rho }_\\mathrm {inf}(-k\\eta )^{-4}+{\\cal O}\\left(\\gamma ^2\\right)\\right].$ Here, ${\\cal O}(\\epsilon )$ is a linear combination of the Hubble flow parameters.", "Given that $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}=k(k\\eta )^2/2$ and $\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}=1/(2\\eta )$ , one finally obtains $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}=\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\left[1+4\\frac{\\gamma }{m_0^2}\\epsilon _1^3{\\cal O}(\\epsilon )\\bar{\\rho }_\\mathrm {inf}(-k\\eta )^{-7}\\right].$ We notice that the relative correction to $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}$ increases with time, which is what is needed in order for the collapse to occur, $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\gg \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}$ .", "If one requires the collapse to happen during inflation, a lower bound can be placed on the parameter $\\gamma $ , defined to be the value for which the relative correction evaluated at $\\eta _\\mathrm {end}$ becomes larger than one.", "Of course, this limit depends on the unknown factor ${\\cal O}(\\epsilon )$ .", "However, as discussed below, the collapse is more efficient during the radiation-dominated era, and the precise value of that quantity plays no role."
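The lower bound just mentioned can be made explicit; requiring the relative correction quoted above to be of order one at $\eta =\eta _\mathrm {end}$ gives (a sketch keeping the undetermined ${\cal O}(\epsilon )$ factor explicit):

```latex
4\frac{\gamma}{m_0^2}\,\epsilon_1^3\,\mathcal{O}(\epsilon)\,\bar{\rho}_\mathrm{inf}\,
\left(-k\eta_\mathrm{end}\right)^{-7}\gtrsim 1
\quad\Longrightarrow\quad
\frac{\gamma}{m_0^2}\gtrsim
\left[4\,\epsilon_1^3\,\mathcal{O}(\epsilon)\,\bar{\rho}_\mathrm{inf}\right]^{-1}
\left(-k\eta_\mathrm{end}\right)^{7},
```

which carries the same $(-k\eta_\mathrm{end})^{7}$ scaling as the radiation-era bound obtained below for modes crossing out $r_\mathrm{c}$ during inflation.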
], [ "Radiation dominated epoch", "During radiation, the Green function is given by Eq.", "(REF ) and the free mode function by Eq.", "(REF ).", "Using the expressions of $\\alpha _{\\mathbf {k}}$ and $\\beta _{\\mathbf {k}}$ during the radiation-dominated era, one also has, for the last two terms in Eq.", "(REF ), $2a^4\\alpha _{\\mathbf {k}}\\beta _{\\mathbf {k}} -\\left(a^4\\beta _{\\mathbf {k}}^2\\right)^{\\prime }\\simeq 864\\eta _\\mathrm {end}^4H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\left[3(\\eta -\\eta _\\mathrm {r})^2-k^2\\eta _\\mathrm {end}^4H_\\mathrm {end}^2r_\\mathrm {c}^2\\right]/[k^4(\\eta -\\eta _\\mathrm {r})^{11}]$ and $2a^4\\beta _{\\mathbf {k}}^2\\simeq 864 \\eta _\\mathrm {end}^4H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2/[k^4(\\eta -\\eta _\\mathrm {r})^8]$ ." ], [ "Case where the mode crosses out $r_\\mathrm {c}$ during inflation", "As explained above, the first term in Eq.", "(REF ) for $S_\\mathrm {rad}$ is the dominant one in that case, and at leading order in $r_\\mathrm {c}/\\lambda _\\mathrm {end}$ , one obtains $\\Omega _{\\mathbf {k}}&\\simeq \\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\biggl [1+i\\frac{\\gamma }{m_0^2}1152 \\frac{\\bar{\\rho }_\\mathrm {end}}{k^4(-\\eta _\\mathrm {end})^3(\\eta -\\eta _\\mathrm {r})}+{\\cal O}\\left(\\gamma ^2\\right)\\biggr ].$ Given that $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}=(k\\eta _\\mathrm {end})^4/[2k(\\eta -\\eta _\\mathrm {r})^2]$ and $\\Im \\mathrm {m}\\,\\Omega _{\\mathbf {k}}\\vert _{\\mathbf {k}}=-[2(\\eta -\\eta _\\mathrm {r})]^{-1}$ , we notice that the correction has the same time dependence as $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}$ , so its relative value is frozen to $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\simeq \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\left[1+1152\\frac{\\gamma }{m_0^2}\\bar{\\rho }_\\mathrm {end}(-k\\eta _\\mathrm {end})^{-7}+{\\cal 
O}\\left(\\gamma ^2\\right)\\right].$ This correction is larger than in Eq.", "(REF ), which justifies the statement that the collapse is more efficient in the radiation-dominated epoch.", "The condition for the collapse, i.e.", "having a relative correction of order one, is then $\\frac{\\gamma }{m_0^2}>(1152\\bar{\\rho }_\\mathrm {end})^{-1}(-k\\eta _\\mathrm {end})^7.$" ], [ "Case where the mode crosses out $r_\\mathrm {c}$ during the radiation-dominated era", "As already discussed, three terms must be kept in the expansion (REF ) of $S_\\mathrm {rad}$ , namely the terms proportional to the coefficients 3024, 216 and 1836.", "This gives rise to $\\Omega _{\\mathbf {k}}&\\simeq \\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\biggl [1+i\\frac{\\gamma }{m_0^2} \\frac{21792}{11}\\frac{H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2}{k(-k\\eta _\\mathrm {end})^{10}(H_\\mathrm {end}r_\\mathrm {c})^7(\\eta -\\eta _\\mathrm {r})}-i\\frac{\\gamma }{m_0^2}864\\eta _\\mathrm {end}^4H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2\\frac{3(\\eta -\\eta _\\mathrm {r})^2-k^2\\eta _\\mathrm {end}^4H_\\mathrm {end}^2r_\\mathrm {c}^2}{k^4(\\eta -\\eta _\\mathrm {r})^{12}}\\nonumber \\\\ &+i\\frac{\\gamma }{m_0^2}\\frac{864 \\eta _\\mathrm {end}^4H_\\mathrm {end}^2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2}{k^4(\\eta -\\eta _\\mathrm {r})^8}+{\\cal O}\\left(\\gamma ^2\\right)\\biggr ].$ We see that the last two terms are subdominant.", "In this approximation, the relative correction is again time-independent and given by $\\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\simeq \\Re \\mathrm {e}\\,\\Omega _{\\mathbf {k}}\\vert _{\\gamma =0}\\left[1+\\frac{7264}{11}\\frac{\\gamma }{m_0^2}\\bar{\\rho }_\\mathrm {end}(k\\eta _\\mathrm {end})^{-14}(H_\\mathrm {end}r_\\mathrm {c})^{-7}+{\\cal O}\\left(\\gamma ^2\\right)\\right].$ The lower bound on the parameter $\\gamma $ can therefore be expressed as $\\frac{\\gamma }{m_0^2}>\\left(\\frac{7264}{11}\\bar{\\rho }_\\mathrm
{end}\\right)^{-1}(-k\\eta _\\mathrm {end})^{14}(H_\\mathrm {end}r_\\mathrm {c})^7.$" ], [ "Density Contrasts and the Parameter $p$", "In the theory of cosmological perturbations, in the scalar sector, the most general perturbed metric tensor reads $\\mathrm {d}s^2 = a^2(\\eta )\\left\\lbrace - \\left(1+2\\phi \\right)\\mathrm {d}\\eta ^2 + 2\\partial _iB\\mathrm {d}x^i \\mathrm {d}\\eta + \\left[\\left(1-2\\psi \\right)\\delta _{ij}+2\\partial _i\\partial _jE\\right]\\mathrm {d}x^i{\\rm d}x^j\\right\\rbrace ,$ where $\\phi $ , $B$ , $\\psi $ and $E$ are four scalar functions of space and time.", "As is well-known, the theory features a “gauge symmetry”, meaning that the quantities appearing in Eq.", "(REF ) are in general not invariant under (small) space-time diffeomorphisms and, therefore, cannot be considered as observables.", "The cure is then either to specify a particular system of coordinates or to work in terms of “gauge-invariant” quantities, that is to say quantities that are invariant under a small change of coordinates.", "The most general change of coordinates that can be constructed with scalar functions [given here by the scalar functions $\\xi ^0(\\eta ,{\\mathbf {x}})$ and $\\xi (\\eta ,{\\mathbf {x}})$ ] is $\\eta \\rightarrow \\tilde{\\eta }=\\eta +\\xi ^0\\left(\\eta ,{\\mathbf {x}}\\right),\\quad x^i \\rightarrow \\tilde{x}^i=x^i+\\delta ^{ij}\\partial _j\\xi \\left(\\eta ,{\\mathbf {x}}\\right)\\, .$ Then, we find that the four scalar functions used to construct the scalar perturbed metric given by Eq.", "(REF ) transform, under the above change of coordinates (REF ), according to $\\tilde{\\phi }=\\phi +\\xi ^{0^{\\prime }}+\\frac{a^{\\prime }}{a}\\xi ^0, \\quad \\tilde{B}=B-\\xi ^0+\\xi ^{\\prime }, \\quad \\tilde{\\psi }=\\psi -\\frac{a^{\\prime }}{a}\\xi ^0,\\quad \\tilde{E}=E+\\xi \\, .$ As a consequence, if we now consider the two following combinations $\\Phi \\equiv \\phi +\\frac{1}{a}\\left[a\\left(B-E^{\\prime 
}\\right)\\right]^{\\prime },\\quad \\Psi \\equiv \\psi -\\frac{a^{\\prime }}{a}\\left(B-E^{\\prime }\\right)\\, ,$ then it is easy to establish that these two quantities are gauge-invariant: $\\tilde{\\Phi }=\\Phi $ and $\\tilde{\\Psi }=\\Psi $ .", "They are called the Bardeen potentials [52].", "Of course, for consistency, the stress-energy tensor describing matter must also be perturbatively expanded and, as a consequence, one needs to construct gauge-invariant combinations for the scalar quantities appearing in $\\delta T_{\\mu \\nu }$ , in particular for the density contrast.", "From the rule of transformation of rank-two tensors, one obtains $\\tilde{\\delta }=\\delta +\\frac{\\rho ^{\\prime }}{\\rho }\\, \\xi ^0\\,, \\quad \\tilde{v}=v-\\xi ^{\\prime }\\,, \\quad \\tilde{\\delta p}=\\delta p+p^{\\prime }\\xi ^0\\, ,$ where $\\delta \\equiv \\delta \\rho /\\rho $ is the density contrast, $v$ the peculiar velocity and $\\delta p$ the perturbed pressure.", "As is well-known, it is possible to build various density contrasts that are gauge invariant.", "Two prototypical examples are given by $\\delta _\\mathrm {g}\\equiv \\delta +\\frac{\\rho ^{\\prime }}{\\rho }(B-E^{\\prime })\\, ,\\quad \\delta _\\mathrm {m} \\equiv \\delta +\\frac{\\rho ^{\\prime }}{\\rho }(v+B)\\, .$ More generally, a gauge-invariant density contrast can always be introduced by considering the following definition $\\delta _\\mathrm {inv}=\\delta +\\frac{\\rho ^{\\prime }}{\\rho }{\\cal D}(\\phi ,\\psi ,B,E),$ where ${\\cal D}$ is an arbitrary function of $\\phi $ , $\\psi $ , $B$ and $E$ and their derivatives, provided it satisfies $\\tilde{\\cal D}\\rightarrow {\\cal D}-\\xi ^0$ .", "One easily checks that this is the case for ${\\cal D}_\\mathrm {g}=B-E^{\\prime }$ or ${\\cal D}_\\mathrm {m}=v+B$ .", "Using the fact that the behaviour of $\\delta _{\\rm g}$ is given by the time-time Einstein equation, namely $\\delta _{\\rm g}=-2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2[3{\\cal H}({\\cal
H}\\Phi +\\Phi ^{\\prime })+k^2\\Phi ]/(\\rho a^2)$ , one can also write $\\delta _\\mathrm {inv}=-\\frac{2M_{\\scriptscriptstyle {\\mathrm {Pl}}}^2}{\\rho a^2}\\left[3{\\cal H}({\\cal H}\\Phi +\\Phi ^{\\prime })+k^2\\Phi \\right]+\\frac{\\rho ^{\\prime }}{\\rho }\\left[{\\cal D}(\\phi ,\\psi ,B,E)-B+E^{\\prime }\\right].$ We are then interested in the limit $k\\rightarrow 0$ , namely the large-scale limit for which different situations can occur.", "The most generic one is that ${\\cal D}$ is a function of $\\phi $ , $\\psi $ , $B$ and $E$ where positive powers of $k$ appear.", "In the large-scale limit, the scale-dependent terms will be negligible and the scale dependence of $\\delta _\\mathrm {inv}$ will be that of $\\delta _\\mathrm {g}$ , namely $p=0$ for the parameter $p$ introduced in the main text.", "Of course, one exception is when ${\\cal D}$ is such that it cancels out all the scale-independent terms in Eq.", "(REF ), leaving $k^2\\Phi $ as the leading term.", "This case is nothing but $\\delta _\\mathrm {inv}=\\delta _\\mathrm {m}$ and corresponds to $p=2$ .", "Clearly, this case exists but is “fine-tuned”.", "Note that the only way to modify these conclusions is to incorporate negative powers of $k$ in ${\\cal D}$ .", "However, this would correspond to having a non-local function in real space, which is not very realistic.", "In brief, we have shown that either $p=0$ or $p=2$ , with this last case being in some sense of “zero measure”." ] ]
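As a quick consistency check on the gauge invariance of the Bardeen potential $\Phi$ used throughout this appendix, one can combine the transformation rules quoted above (a sketch; only the rules for $\phi$, $B$ and $E$ are needed):

```latex
\tilde{B}-\tilde{E}^{\prime}
=\left(B-\xi^0+\xi^{\prime}\right)-\left(E^{\prime}+\xi^{\prime}\right)
=B-E^{\prime}-\xi^0\, ,
```
```latex
\tilde{\Phi}=\tilde{\phi}+\frac{1}{a}\left[a\left(\tilde{B}-\tilde{E}^{\prime}\right)\right]^{\prime}
=\phi+\xi^{0\prime}+\frac{a^{\prime}}{a}\xi^0
+\frac{1}{a}\left[a\left(B-E^{\prime}\right)\right]^{\prime}
-\xi^{0\prime}-\frac{a^{\prime}}{a}\xi^0=\Phi\, .
```

The same cancellation, using the rule for $\psi$, shows $\tilde{\Psi}=\Psi$.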
1906.04405
[ [ "Adaptively Preconditioned Stochastic Gradient Langevin Dynamics" ], [ "Abstract Stochastic Gradient Langevin Dynamics infuses isotropic gradient noise to SGD to help navigate pathological curvature in the loss landscape for deep networks.", "The isotropic nature of the noise leads to poor scaling, and adaptive methods based on higher order curvature information such as Fisher Scoring have been proposed to precondition the noise in order to achieve better convergence.", "In this paper, we describe an adaptive method to estimate the parameters of the noise and conduct experiments on well-known model architectures to show that the adaptively preconditioned SGLD method achieves convergence with the speed of adaptive first order methods such as Adam, AdaGrad etc.", "and achieves generalization equivalent to that of SGD on the test set." ], [ "Introduction", "Generalizability is the ability of a model to perform well on unseen examples [13].", "Neural networks are known to overfit the data, and mechanisms such as regularization are employed to constrain a model's ability to learn in order to reduce the generalization gap.", "Various schemes such as dropout [31], weight decay [16] and early stopping [28], [6], [37] have been proposed to regularize neural network models.", "Regularization in neural networks can be roughly categorized into implicit methods [24] and explicit methods [25].", "The ability of Stochastic Gradient Descent (SGD) to generalize better than other adaptive optimization methods is often attributed to its role as an implicit regularization mechanism.", "Various adaptive optimization methods such as RMSProp [34], Adam [14], AdaGrad [9] and AMSGrad [29] have been proposed to speed up the training of deep networks.", "First order adaptive methods typically have a faster training speed, but Stochastic Gradient Descent is often found to achieve better generalization on the test set [36], [18].", "Stochastic Gradient Langevin Dynamics (SGLD) [35] adds an isotropic
noise to SGD to help it navigate out of saddle points and suboptimal local minima.", "SGLD has a powerful Bayesian interpretation, and is often used in Markov Chain Monte Carlo methods to sample the posterior for inference [19].", "The slow convergence of SGD while training is due to the uniform scaling in the parameter space.", "Adaptive methods conventionally speed up training by applying an element-wise scaling scheme.", "Various approaches to pre-condition the noise in SGLD on the basis of higher order information such as Fisher Scoring [1] have been shown to achieve better generalizability than SGD, but such higher order methods have a high computational complexity and hence are not scalable to very deep networks.", "In this paper, we propose a method to adaptively estimate the parameters of noise in SGLD using first order information in order to achieve high training speed and better generalizability." ], [ "Related Work", "Adding noise to the input, the model structure or the gradient updates itself is a well-studied topic [4].", "The success of mini-batch gradient descent over batch gradient descent is attributed to the variance introduced by the constrained sampling procedure.", "Methods such as weight noise [32] and adaptive weight noise [10], [5] infuse noise by perturbing the weights with Gaussian noise.", "Dropout randomly drops neurons with a given probability, and it mimics training an ensemble of neural networks.", "Hamiltonian Monte Carlo [8], [23] works by sampling a posterior using noise to explore the state space.", "A mini-batch variant of Hamiltonian Monte Carlo is Stochastic Gradient Langevin Dynamics (SGLD).", "The noise in SGLD helps it better explore the loss landscape and also helps it navigate out of malformed curvature such as saddle points and sub-optimal local minima.", "Various adaptive optimization algorithms have been proposed to improve the speed of training of neural networks such as Adam, AdaGrad and AMSGrad.", "The adaptive methods apply
an element-wise scaling on the gradients to allow for faster convergence.", "The adaptive algorithms perform well in convex settings, but are not able to generalize as well as SGD for non-convex problems.", "Similar to SGD, the slow convergence in SGLD is attributed to uniform scaling in the parameter space.", "The speed of convergence and generalizability of the method can be improved using an adaptive preconditioner on the noise.", "Scaling of the noise in SGLD can be performed by using a pre-conditioner [17].", "Second order pre-conditioners encoding inverse Hessians [21] and Fisher Information [20], [22] have been used to establish better generalizability but suffer from high computational complexity.", "First order methods based on RMSProp [17] use the second order moments of the gradients to inversely scale the noise, thereby increasing noise in sensitive dimensions and dampening noise in dimensions with large gradients.", "We propose a method to scale the noise proportionally to the second order moment of the gradients in order to achieve a higher training speed by increasing the noise for dimensions with larger gradients.", "In this paper we describe a method to create a pre-conditioner for SGLD which possesses the training speed of adaptive methods and the generalizability of SGD with minimal computational overhead." 
], [ "Adaptively Preconditioned SGLD", "Consider a supervised learning problem, where we have identically distributed data and label pairs $ (x_1,y_1),\dots , (x_n,y_n) \in \mathbb {R}^{d+1} $ .", "Our goal is to optimize the distribution $ p(y|x) $ by minimizing an approximate loss function $ \mathcal {L} (y_i| x_i, \Theta ) $ with respect to $ \Theta $ , where the distribution $ p(y|x) $ is parametrized by $ \Theta $ .", "Finding the optimal parameters for a neural network is a known NP-hard problem [24], [2].", "The parameters of a probability distribution occupy a Riemannian manifold [3], and greedy optimization methods such as Stochastic Gradient Descent exploit curvature information in the manifold to find the optimal parameters of the distribution in a convex case.", "Stochastic Gradient Descent optimizes the loss function using gradients of the loss function with respect to the parameters at each step $\hat{g}_{s}(\Theta _t) \leftarrow \nabla _{\Theta } \hat{\mathcal {L}}_{s}(\Theta _t)$ where $ \hat{\mathcal {L}}_{s}(\Theta ) $ is the stochastic estimate of the loss function computed over a mini-batch of size $s $ sampled uniformly from the data.", "The parameter updates can be written as $\Theta _{t+1} \leftarrow \Theta _t - \eta (\hat{g}_{s}(\Theta _t))$ SGD with decreasing step sizes provably converges to the optimum of a convex function, and to a local optimum in the case of a non-convex function [30].", "The loss landscape of very deep neural networks is often ill-behaved and non-convex in nature.", "To navigate out of sub-optimal local minima, strategies such as momentum [27], [33] are employed $\mu _{t} \leftarrow \rho \mu _{t-1} + (1 - \rho )\hat{g}_{s}(\Theta _t)$ $\Theta _{t+1} \leftarrow \Theta _t - \eta (\mu _{t})$ Algorithm: Adaptively Preconditioned SGLD. Input: $ \Theta _0 $ , step size $ \eta $ , momentum $\rho $ , noise $\psi $ . Set $ \mu _0 =0 $ and $ C_0=0 $ . For t = 1 to T: $ \hat{g}_{s}(\Theta _t) 
\\leftarrow \\nabla _{\\Theta } \\hat{\\mathcal {L}}_{s}(\\Theta _t) $ $ \\mu _{t} \\leftarrow \\rho \\mu _{t-1} + (1 - \\rho )\\hat{g}_{s}(\\Theta _t) $ $ C_{t} \\leftarrow \\rho C_{t-1}+ (1-\\rho )(\\hat{g}_{s}(\\Theta _t)- \\mu _t )(\\hat{g}_{s}(\\Theta _t)- \\mu _{t-1} ) $ $ \\xi _t \\sim N(\\mu _t,C_t) $ $ \\Theta _{t+1} \\leftarrow \\Theta _t- \\eta ( \\hat{g}_{s}(\\Theta _t) + \\psi \\xi _t) $ Figure: ASGLD vs adaptive methodsFigure: ASGLD vs adaptive methodsStochastic Gradient Langevin Dynamics (SGLD) further extends SGD by adding additional Gaussian noise to help it escape sub-optimal minima.", "We can approximate SGLD using $\\xi _t \\sim N(0,\\epsilon )$ $\\Theta _{t+1} \\leftarrow \\Theta _t- \\eta ( \\hat{g}_{s}(\\Theta _t) + \\xi _t )$ SGLD can also be provably shown to converge to the optimal minima in a convex case when limit $ \\epsilon , \\eta \\rightarrow 0 $ holds.", "[19].", "Stochastic Gradient Hamiltonian Monte Carlo Stochastic Gradient [7] adds momentum to SGLD $\\Theta _{t+1} \\leftarrow \\Theta _t- \\eta ( \\mu _t + \\xi _t )$ The equi-scaled nature of noise leads to poor scaling of parameter updates, leading to a slower training speed and risk of converging to a sub-optimal minima [18].", "Noise can be adaptively pre-conditioned to help traverse pathological curvature $\\xi _t \\sim N(0,C)$ Preconditioners based on higher order information use the inverse of Hessian or Fisher Information matrix to help traverse the curvature better.", "Unfortunately, such higher order approaches are computationally infeasible for large and deep networks.", "Adaptive pre-conditioners based on popular adaptive methods such as RMSProp use a diagonal approximation of the inverse of second order moments of the gradient updates.", "Adaptive pre-conditioning methods yield similar or better generalization performance versus SGD, but still possess a rather slower speed of convergence with respect to adaptive first order methods [26].", "We propose an adaptive 
preconditioner based on a diagonal approximation of the second order moment of gradient updates, which possesses the generalizability of SGD and the training speed of adaptive first order methods.", "The Adaptively Preconditioned SGLD (ASGLD) method scales the noise in a directly proportional manner to allow for a faster training speed $C_{t} \leftarrow \rho C_{t-1}+ (1-\rho )(\hat{g}_{s}(\Theta _t)- \mu _t )(\hat{g}_{s}(\Theta _t)- \mu _{t-1} )$ $\xi _t \sim N(\mu _t,C_t)$ $\Theta _{t+1} \leftarrow \Theta _t- \eta ( \hat{g}_{s}(\Theta _t) + \psi \xi _t)$ where $\psi $ is the noise parameter.", "The noise covariance preconditioner scales the noise proportionally in dimensions with larger gradients, helping it escape suboptimal minima and saddle points and thus converge faster and to a better solution.", "As the algorithm approaches a wide minimum, the dampened second order moment starts shrinking, allowing for convergence to the optimum." ], [ "Experiments", "In this section, we examine the impact of using the ASGLD method on the ResNet-34 [11] and DenseNet-121 [12] architectures on the CIFAR-10 dataset [15].", "The CIFAR-10 dataset contains 60,000 images from ten classes, sampled from the Tiny Images dataset.", "Training was performed for a fixed schedule of 200 epochs over the training set, and we plot the training and test accuracy in fig 1. 
and fig 2.", "We reduce the learning rate by a factor of 10 at the 150th epoch.", "We performed hyperparameter tuning in accordance with the methods defined in [36] and [18].", "For learning rate tuning, we implement a logarithmically spaced grid with five step sizes, and we try new grid points if the best performing parameter setting is found at one end of the grid.", "We match the settings for other hyperparameters such as batch size, weight decay and dropout probability with the respective base architectures.", "In the ResNet-34 architecture, we observe in fig 1.a) that ASGLD performs better in terms of training speed than SGD and achieves similar accuracy on the held-out set.", "We also observe in fig 1.b) that ASGLD has a training speed similar to first order adaptive methods early in training, but ASGLD begins to significantly outperform adaptive methods in generalization error by the time the learning rates are decayed.", "We also observe that ASGLD is more stable than SGD at the end of the training.", "We see similar trends for the DenseNet-121 architecture, as evident in fig 2.", "We also observe that ASGLD has a lower generalization error than SGD at the end of the training." ], [ "Discussion", "To investigate the ability of our method, we conducted experiments on CIFAR-10 using well-known neural network architectures such as ResNet-34 and DenseNet-121.", "Based on the results in the experiments section, we observe that ASGLD performs as well as adaptive methods and much better than SGD early in training, but by the time the learning rate is decayed, the ASGLD method performs as well as SGD and significantly outperforms first order adaptive methods.", "Furthermore, we also observed similar wall-clock times for training with the ASGLD method versus adaptive methods." 
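The ASGLD update described above combines an exponential moving average of the gradients $\mu_t$, the covariance recursion for $C_t$, and Gaussian noise $\xi_t \sim N(\mu_t, C_t)$ scaled by $\psi$. A minimal NumPy sketch of one such update step follows; this is our own illustration, not the authors' code: it keeps only a diagonal covariance estimate (in the spirit of the diagonal approximation the paper discusses) and clamps negative elementwise variance estimates to zero, which is an added guard not specified in the paper.

```python
import numpy as np

def asgld_step(theta, grad, mu, c, eta=0.01, rho=0.9, psi=0.1, rng=np.random):
    """One ASGLD update with a diagonal covariance preconditioner.

    theta, grad, mu, c are same-shape arrays; mu is the EMA of the gradients
    and c the EMA estimate of their (diagonal) second central moment.
    """
    mu_new = rho * mu + (1 - rho) * grad
    # Diagonal covariance estimate: (g - mu_t) * (g - mu_{t-1}), as in the paper
    c_new = rho * c + (1 - rho) * (grad - mu_new) * (grad - mu)
    # Noise drawn around the momentum mean with the adaptive diagonal covariance;
    # clamping at zero is our assumption, since this estimator can go negative
    xi = mu_new + np.sqrt(np.maximum(c_new, 0.0)) * rng.standard_normal(theta.shape)
    theta_new = theta - eta * (grad + psi * xi)
    return theta_new, mu_new, c_new
```

With `psi=0` the step reduces to plain SGD, which makes the role of the preconditioned noise term explicit.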
], [ "Future Work", "Future work would include exploring the impact of ASGLD on other regularization mechanisms such as batch normalization.", "We would also like to investigate the effectiveness of ASGLD in other domains such as natural language processing and speech." ], [ "Conclusion", "We propose a new method, ASGLD, based on adaptively preconditioning the noise covariance matrix in SGLD using estimated second order moments of gradient updates for optimizing a non-convex function, and demonstrate its effectiveness on well-known datasets using popular neural network architectures.", "We observe that the ASGLD method significantly outperforms adaptive methods in generalizability and SGD in terms of speed of convergence and stability.", "We also observe the increased effectiveness of ASGLD in deeper networks." ], [ "Acknowledgment", "We thank Zoran Kostic for providing valuable comments and computing resources." ] ]
1906.04324
[ [ "Object-aware Aggregation with Bidirectional Temporal Graph for Video\n Captioning" ], [ "Abstract Video captioning aims to automatically generate natural language descriptions of video content, and has drawn a lot of attention in recent years.", "Generating accurate and fine-grained captions requires not only understanding the global content of the video, but also capturing the detailed object information.", "Meanwhile, video representations have a great impact on the quality of generated captions.", "Thus, it is important for video captioning to capture salient objects with their detailed temporal dynamics, and represent them using discriminative spatio-temporal representations.", "In this paper, we propose a new video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG), which captures detailed temporal dynamics for salient objects in video, and learns discriminative spatio-temporal representations by performing object-aware local feature aggregation on detected object regions.", "The main novelties and advantages are: (1) Bidirectional temporal graph: A bidirectional temporal graph is constructed along and reversely along the temporal order, which provides complementary ways to capture the temporal trajectories for each salient object.", "(2) Object-aware aggregation: Learnable VLAD (Vector of Locally Aggregated Descriptors) models are constructed on object temporal trajectories and the global frame sequence, which perform object-aware aggregation to learn discriminative representations.", "A hierarchical attention mechanism is also developed to distinguish different contributions of multiple objects.", "Experiments on two widely-used datasets demonstrate that our OA-BTG achieves state-of-the-art performance in terms of BLEU@4, METEOR and CIDEr metrics." 
], [ "Introduction", "As the task of automatically generating natural language descriptions for video content, video captioning takes a crucial step toward high-level video understanding and artificial intelligence.", "It supports various potential applications, such as human-robot interaction or assisting the visually-impaired.", "Recently, it has received increasing attention in both the computer vision and artificial intelligence communities.", "Previous works have explored modeling the temporal information of video content by temporal attention mechanisms [1], [2] or hierarchical encoder-decoder structures [3], [4].", "However, they mainly work on the global frame or salient regions without discrimination on specific object instances, and thus cannot well capture the detailed temporal dynamics of each object.", "To obtain accurate captioning descriptions for complex video content, capturing the salient objects with their detailed temporal dynamics plays a key but challenging role.", "As shown in Fig.", "REF , the reference captioning sentence “A Chinese man shoots a basketball into the basket” involves three salient objects in the example video, namely the boy, the basketball and the basket, which requires object-aware video understanding.", "Besides, the reference sentence also describes the action that the boy is performing, “shoots a basketball”, which requires understanding the detailed temporal dynamics of the boy and the basketball.", "In addition, video representations have a great impact on the quality of generated captions.", "Therefore, how to describe the video content using discriminative spatio-temporal representations is also important for video captioning.", "Many works directly extract global features on video frames from the fully-connected layer or global pooling layer of a CNN, which may lose much fine spatial information.", "NetVLAD [5] shows its local information encoding ability by embedding a trainable VLAD (vector of locally 
aggregated descriptors) encoding model into the CNN, which aggregates the local features to encode local spatial information.", "Following it, SeqVLAD [6] was recently proposed to combine the trainable VLAD encoding model with the sequence learning process, which explores both the local spatial information and temporal information of video.", "However, the above methods do not consider object-specific information, and thus cannot distinguish the fine spatio-temporal information corresponding to a specific object instance.", "To address the above two problems, in this paper, we propose a novel video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG), which captures detailed temporal dynamics for the salient objects in video via a bidirectional temporal graph, and learns discriminative spatio-temporal video representations by performing object-aware local feature aggregation on object regions.", "Its main novelties and advantages are: Bidirectional temporal graph: The bidirectional temporal graph is constructed on salient objects and global frames to capture the detailed temporal dynamics in video.", "The bidirectional temporal graph includes a forward graph along the temporal order and a backward graph reversely along the temporal order, which provide different ways to construct the temporal trajectories with complementary information for each salient object instance.", "In this way, detailed temporal dynamics for objects and global context are captured to generate accurate and fine-grained captions.", "Object-aware aggregation: For encoding the fine spatio-temporal information, we construct learnable VLAD models on object temporal trajectories and global frame temporal sequences, which perform object-aware aggregation for each salient object instance as well as the global frame to learn discriminative representations.", "We also utilize a hierarchical attention mechanism to distinguish different contributions 
of different object instances.", "In this way, we learn discriminative spatio-temporal video representations for boosting the captioning performance.", "We conduct experiments on two widely-used datasets, MSVD and MSR-VTT, which demonstrate that our proposed OA-BTG approach achieves state-of-the-art performance in terms of BLEU@4, METEOR and CIDEr metrics for video captioning." ], [ "Related Works", "In the early stage, video captioning methods were mainly template-based language models [7], [8], [9].", "These methods follow a bottom-up paradigm, first predicting semantic concepts or words, like objects, scenes and activities, and then generating sentences according to pre-defined language templates.", "These methods heavily rely on the template definition and the predicted video concepts, which limits the diversity of generated sentences.", "Recently, inspired by the development of deep learning and neural machine translation (NMT) [10], many sequence learning based models [4], [11], [12], [13] have been proposed to address the video captioning problem.", "Regarding video captioning as a “translating” process, these methods construct encoder-decoder structures to directly generate sentences from the video content.", "Venugopalan et al.", "[14] make the early attempt to generate video descriptions using an encoder-decoder structure, but they simply apply mean pooling over individual frame features to obtain the video representation, which ignores the temporal information of ordered video frames.", "To address this issue, the following works [1], [4], [11] make advances by using temporal attention mechanisms, as well as taking LSTM-based encoders to learn long-term temporal structures.", "Yang et al.", "[2] achieve progress by further considering the different characteristics of video frames.", "They propose to adaptively capture the regions-of-interest in each frame, then learn discriminative features based on these regions-of-interest for better video 
captioning.", "Xu et al.", "[6] propose the SeqVLAD method, which performs feature aggregation on frame features to exploit fine spatial information in video content.", "However, these methods mainly work on the global frame or salient regions without discrimination on specific object instances, and thus cannot well capture the temporal evolution of each object in video.", "In this work, we propose the OA-BTG approach, which constructs a bidirectional temporal graph over the objects across video frames to capture their temporal trajectories.", "In addition, we also perform representation learning on the temporal trajectories, which exploits object-awareness to boost video captioning.", "There are also some works [12], [15], [16], [17], [18] that exploit multi-modal features for video captioning.", "Besides frame features extracted by popular 2D CNNs, they also exploit motion features extracted by C3D [19] or audio features [20], where they mine the complementarities among multi-modal information to boost the video captioning performance.", "Different from them, our OA-BTG approach takes only visual features, and mainly focuses on capturing detailed temporal evolutions of objects by the bidirectional temporal graph and learning discriminative features through object-aware feature aggregation." 
], [ "Object-aware Aggregation with Bidirectional Temporal Graph", "In this section, we present the proposed video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG) in detail, which follows the encoder-decoder framework.", "As shown in Figure REF , our OA-BTG consists of three components.", "(1) Bidirectional Temporal Graph: For the input video, we first extract frames and multiple object regions.", "Then we construct a bidirectional temporal graph to capture detailed temporal dynamics along and reversely along the temporal order.", "(2) Object-aware Aggregation: Based on the bidirectional temporal graph, we perform object-aware aggregation to aggregate the local features of object regions and global frames into discriminative VLAD representations using learnable VLAD models.", "(3) Decoder: The above two components form the encoding stage.", "In the decoding stage, the learned object and frame VLAD representations are integrated and fed into the GRU units to generate descriptions.", "In particular, hierarchical attention is applied to distinguish the different contributions of multiple objects.", "In the following subsections, we will introduce the bidirectional temporal graph, object-aware aggregation and decoder respectively.", "Figure: Overview of the proposed OA-BTG approach." 
], [ "Bidirectional Temporal Graph", "The bidirectional temporal graph (BTG) is constructed based on detected object regions and global frames.", "For object regions, a bidirectional temporal graph is constructed to group the same object instances or similar regions together along and reversely along the temporal order.", "It obtains the forward and backward temporal trajectories of each detected object instance, thus capturing the detailed temporal dynamics of salient objects, which are important for generating accurate and fine-grained language descriptions.", "For the global frames, we organize them into sequences along and reversely along the temporal order to capture the forward and backward temporal dynamics for global context.", "The bidirectional temporal graph is constructed according to the similarities among object regions in different frames, including a forward graph and a backward graph to obtain the object trajectories along and reversely along the temporal order respectively.", "Specifically, for each frame $v_t$ in the input video $V$ , we extract $N$ object regions, $R^{(t)}=\lbrace r^{(t)}_i\rbrace $ , where $i=1,2,\cdots ,N$ , $t=1,2,\cdots , T$ , and $T$ is the number of sampled frames of $V$ .", "For constructing the forward graph, we take the object regions in frame $v_1$ as anchors to compute the similarities with object regions in all other frames, where the similarity score is defined to indicate whether two object regions belong to the same object instance.", "Jointly considering the appearance information and relative spatial location information between two object regions $r_i$ and $r_j$ , we define their similarity score $s(i,j)$ as follows: $s(i,j) = (s_{app}(i,j)+s_{iou}(i,j)+s_{area}(i,j))/3$ As for the three terms in the above equation, $s_{app}$ indicates the similarity in visual appearance between $r_i$ and $r_j$ , which is computed according to the Euclidean distance of their visual features: $s_{app}(i,j) = 
exp\\left(-\\frac{L_2(g_i, g_j)}{max_{p,q}(L_2(g_p,g_q))}\\right)$ where $L_2$ denotes the Euclidean distance, and $max_{p,q}(L_2(g_p,g_q))$ computes the maximal Euclidean distance of all object region pairs between the corresponding two frames, which is utilized as a normalization factor.", "$g$ indicates the extracted visual feature for object region using pretrained CNN model, taking the cropped object region image as input.", "$s_{iou}$ and $s_{area}$ indicate the rates of overlap area and area sizes between $r_i$ and $r_j$ , respectively, which are computed as follows: $s_{iou}(i,j) = \\frac{area(r_i)\\cap area(r_j)}{area(r_i)\\cup area(r_j)}$ $s_{area}(i,j) = exp\\left(-\\left|\\frac{min(A_i, A_j)}{max(A_i, A_j)}-1\\right|\\right)$ where $area$ denotes the spatial area of the object region and $A$ mean its area size.", "The construction of the backward graph is similar to the forward graph, while the anchors to compute the similarity scores are composed of the object regions in frame $v_T$ .", "Then the forward graph and the backward graph are combined to compose the bidirectional temporal graph.", "Aiming to group the object regions in different frames but belonging to the same object instance together, we compare object regions in all the other frames with the anchor object regions, and then align them to the anchor object regions with a nearest neighbor (NN) strategy according to the bidirectional temporal graph.", "Specifically, for the object region $r^{(1)}_i$ in anchor frame $v_1$ and $N$ object regions $R^{(t)}=\\lbrace r^{(t)}_j\\rbrace $ in frame $v_t$ , $t=2,\\cdots ,T$ , the object region $\\mathop {argmax}_{r^{(t)}_j}(s(i,j))$ is aligned to the object region $r^{(1)}_i$ , which means they are considered to belong to the same object instance.", "We also align object regions in other frames to the anchor object regions in frame $v_T$ using the same NN strategy.", "After above alignment process, we have obtained $2N$ groups of aligned object 
regions.", "Then, $N$ groups on the forward graph are organized along the temporal order to obtain the forward temporal trajectories of detected object instances, while the other $N$ groups on the backward graph are organized reversely along the temporal order to obtain the backward temporal trajectories.", "The forward and backward temporal trajectories are complementary in capturing the detailed temporal dynamics of salient objects in video in the following two aspects: (1) Organizing the temporal sequence along and reversely along the temporal order provides two different ways to represent the temporal dynamics of video content, and thus provides complementary information.", "(2) Good temporal trajectories usually cannot be obtained for all the salient objects on the forward or backward graph alone, since not all the objects occur throughout the whole video.", "Thus, we resort to both forward and backward temporal trajectories on the bidirectional temporal graph for capturing the temporal dynamics better.", "We denote the forward and backward temporal trajectories as $O^f_i=\lbrace o^f_{it}\rbrace $ and $O^b_i=\lbrace o^b_{it}\rbrace $ , $t=1,\cdots ,T$ , $i=1,\cdots ,N$ , respectively.", "Then, we also organize the global frames along and reversely along the temporal order as $V^f=\lbrace v^f_t\rbrace $ and $V^b=\lbrace v^b_t\rbrace $ , $t=1,\cdots ,T$ , to capture comprehensive temporal dynamics on global context." 
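The similarity score $s(i,j)$ that drives the graph construction averages the three terms defined above: appearance distance, bounding-box IoU, and an area-size ratio. A small self-contained sketch follows; it is our own illustration rather than the authors' code, and the `(x1, y1, x2, y2)` box format and function names are assumptions.

```python
import numpy as np

def region_similarity(feat_i, feat_j, box_i, box_j, feat_dist_max):
    """Similarity s(i, j) between two object regions, averaging the appearance,
    IoU and area terms. feat_dist_max is the maximal Euclidean feature distance
    over all region pairs between the two frames (the normalization factor)."""
    # Appearance term: normalized Euclidean distance between CNN features
    s_app = np.exp(-np.linalg.norm(feat_i - feat_j) / feat_dist_max)
    # IoU term: intersection over union of the two bounding boxes
    ix1, iy1 = max(box_i[0], box_j[0]), max(box_i[1], box_j[1])
    ix2, iy2 = min(box_i[2], box_j[2]), min(box_i[3], box_j[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_i = (box_i[2] - box_i[0]) * (box_i[3] - box_i[1])
    area_j = (box_j[2] - box_j[0]) * (box_j[3] - box_j[1])
    s_iou = inter / (area_i + area_j - inter)
    # Area term: exp(-|min(A_i, A_j)/max(A_i, A_j) - 1|)
    s_area = np.exp(-abs(min(area_i, area_j) / max(area_i, area_j) - 1.0))
    return (s_app + s_iou + s_area) / 3.0
```

For two identical regions all three terms equal 1, so the score is 1; the nearest-neighbor alignment then picks, per anchor region, the region in each other frame with the highest score.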
], [ "Object-aware Aggregation", "For object-aware aggregation (OA), we devise two learnable VLAD models to learn the spatio-temporal correlations for object region sequences and the global frame sequence, as well as aggregate the local features of object regions and frames into discriminative VLAD representations.", "Local features are first extracted for global frames and the detected objects of the input video $V$ .", "We feed the global frames and cropped object region images into the pretrained CNN model and take the feature maps from the convolutional layer as local features.", "Each feature map has size $H \times W \times D$ , where $H$ , $W$ and $D$ denote the height, width and number of channels.", "We organize the local features of object regions and global frames according to the forward and backward temporal sequences obtained based on the BTG, and denote them as $X^{of}_i=\lbrace x^{of}_{it}\rbrace $ and $X^{ob}_i=\lbrace x^{ob}_{it}\rbrace $ for object sequences, as well as $X^f=\lbrace x^f_t\rbrace $ and $X^b=\lbrace x^b_t\rbrace $ for frame sequences.", "Inspired by NetVLAD [5] and SeqVLAD [6], we utilize a convolutional gated recurrent unit (C-GRU) architecture to construct the learnable VLAD model, where the C-GRU aims to learn the soft assignments for VLAD encodings.", "Taking the learnable VLAD model on forward temporal sequences of object regions as an example, the C-GRU takes the local feature $x^{of}_{t}$ (here the subscript $i$ is omitted for simplicity) at time $t$ and the hidden state $a_{t-1}$ at time $t-1$ as inputs, and then updates its hidden state $a_{t}$ as follows: $z_t = \sigma (W_z * x^{of}_{t} + U_z * a_{t-1})$ $r_t = \sigma (W_r * x^{of}_{t} + U_r * a_{t-1})$ $\widetilde{a}_t = tanh(W_a * x^{of}_{t} + U_a * (r_t \odot a_{t-1}))$ $a_t = (1 - z_t) \odot a_{t-1} + z_t \odot \widetilde{a}_t$ where $W_z, W_r, W_a$ and $U_z, U_r, U_a$ denote the 2D-convolutional kernels.", "Note that all $N$ groups of object region 
sequences share the same C-GRU parameters.", "$*$ denotes the convolution operation, $\sigma $ denotes the sigmoid activation function, and $\odot $ denotes the element-wise multiplication.", "The output hidden state $a_t \in \mathbb {R}^{H\times W\times K}$ denotes the learned assignments between local features $X^{of}$ and $K$ cluster centers, which are also learnable and introduced below.", "VLAD is a feature encoding method that learns $K$ cluster centers as visual words, denoted as $C = \lbrace c_k\rbrace $ , $k=1,\cdots ,K$ , and then maps each local feature to the nearest $c_k \in \mathbb {R}^D$ .", "Its key idea is to accumulate the differences between local features and the corresponding cluster center.", "Inspired by SeqVLAD [6], we set the $K$ cluster centers as learnable parameters and adopt a “soft assignment” strategy that assigns the local features to the cluster centers according to the learned assignment parameters $a_t$ : $vl^{of}_t = \sum _{h=1}^H \sum _{w=1}^W a_t(h, w, k)(x^{of}_t(h, w)-c_k)$ Then we obtain the VLAD representations for forward temporal sequences of object regions as $VL^{of}_{i}=\lbrace vl^{of}_{it}\rbrace $ , $t=1,\cdots ,T$ , and $i=1,\cdots ,N$ .", "Similarly, the VLAD representations for backward sequences of object regions as well as the two temporal sequences of global frames can be obtained as $VL^{ob}_{i}=\lbrace vl^{ob}_{it}\rbrace $ , $VL^{f}=\lbrace vl^{f}_{t}\rbrace $ and $VL^{b}=\lbrace vl^{b}_{t}\rbrace $ ." 
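The soft-assignment VLAD encoding for one time step, $vl_t(k) = \sum_{h,w} a_t(h,w,k)\,(x_t(h,w) - c_k)$, can be written compactly with broadcasting. The NumPy sketch below is our own illustration of that accumulation only; in the paper the assignments $a_t$ come from the C-GRU and the centers $c_k$ are learned end-to-end.

```python
import numpy as np

def soft_vlad(x, a, centers):
    """Soft-assignment VLAD encoding for one time step.

    x:       local feature map, shape (H, W, D)
    a:       soft assignments (e.g. from the C-GRU), shape (H, W, K)
    centers: cluster centers, shape (K, D)
    returns: VLAD descriptor of shape (K, D), accumulating
             a[h, w, k] * (x[h, w] - centers[k]) over all spatial positions.
    """
    residuals = x[:, :, None, :] - centers[None, None, :, :]   # (H, W, K, D)
    return (a[:, :, :, None] * residuals).sum(axis=(0, 1))     # (K, D)
```

Hard VLAD is the special case where each `a[h, w]` is a one-hot vector selecting the nearest center; the learned soft assignments relax this so the whole encoding stays differentiable.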
], [ "Decoder", "In the decoding stage, we process the forward and backward temporal sequences respectively.", "Taking the forward temporal sequences as an example, the decoder is constructed by GRU units with an attention mechanism, which utilizes VLAD representations of objects $\lbrace VL^{of}_{i}\rbrace $ and frames $VL^{f}$ to generate words for captioning.", "As shown in Fig.", "REF , the attention model for object VLAD representations has a hierarchical structure including temporal attention and object attention.", "The temporal attention is applied to highlight object regions at discriminative time steps when merging $T$ object VLAD representations into one representation, while the object attention is designed to distinguish the different contributions of $N$ different object instances.", "The temporal attention mechanism is formulated as follows: $e_{lt} = w_{att}^T tanh(W_{att}h_{l-1} + U_{att}vl^{of}_{it} + b_{att})$ where $w_{att}, W_{att}, U_{att}$ and $b_{att}$ are parameters.", "$e_{lt}$ computes the relevance score between the visual feature $vl^{of}_{it}$ and the hidden state $h_{l-1}$ of the GRU decoder at time $l-1$ , where $l \le L$ indicates the time step at the decoding stage.", "Then the relevance scores are normalized as attention scores: $\beta _{lt} = exp(e_{lt}) / \sum _{n=1}^T exp(e_{ln})$ Then the $T$ object visual features are merged according to the above attention scores: $\phi ^{of}_{li} = \sum _{t=1}^T \beta _{lt}vl^{of}_{it}$ The object attention takes the same mechanism as temporal attention, which is applied on $\lbrace \phi ^{of}_{li}\rbrace $ , $i=1,\cdots ,N$ to distinguish the different contributions of $N$ different object instances: $e^o_{li} = w_{att}^{oT} tanh(W^o_{att}h_{l-1} + U^o_{att}\phi ^{of}_{li} + b^o_{att})$ $\beta ^o_{li} = exp(e^o_{li}) / \sum _{n=1}^N exp(e^o_{ln})$ $\phi ^{of}_{l} = \sum _{i=1}^N \beta ^o_{li}\phi ^{of}_{li}$ where $w_{att}^{o}, W^o_{att}, U^o_{att}$ and $b^o_{att}$ are parameters, and $\phi 
^{of}_{l}$ denotes the discriminative spatio-temporal feature that indicates the object information.", "For the $T$ frame VLAD representations $\lbrace vl^{f}_{t}\rbrace $ , the temporal attention mechanism is applied on them to obtain the discriminative spatio-temporal feature $\phi ^{f}_{l}$ that indicates the global context information.", "At time $l$ , the obtained features $\phi ^{of}_{l}$ and $\phi ^{f}_{l}$ are fed into the GRU unit to update the hidden state and generate the word: $z^d_l = \sigma (W_{vz} \phi ^{f}_{l} + W_{oz} \phi ^{of}_{l} + W_{dz} x^w_l + U_{dz} h_{l-1})$ $r^d_l = \sigma (W_{vr} \phi ^{f}_{l} + W_{or} \phi ^{of}_{l} + W_{dr} x^w_l + U_{dr} h_{l-1})$ $\widetilde{h}_l = tanh(W_{vh} \phi ^{f}_{l} + W_{oh} \phi ^{of}_{l} + W_{dh} x^w_l + U_{dh}(r^d_l \odot h_{l-1}))$ $h_{l} = (1 - z^d_l) \odot h_{l-1} + z^d_l \odot \widetilde{h}_l$ where $\sigma $ denotes the sigmoid function and $x^w_l$ denotes the word embedding for the input word at time $l$ .", "$W_{v*}, W_{o*}, W_{d*}$ and $U_{d*}$ denote the parameters to learn.", "After obtaining the hidden state $h_{l}$ , we apply a linear layer and a softmax layer to compute the probability distribution over all the vocabulary words.", "In the training stage, we utilize the cross-entropy loss to optimize all the learnable parameters.", "In the testing stage, we take the beam search method to generate the captioning descriptions.", "We take a simple but effective way to exploit the complementarity of the forward and backward temporal sequences (corresponding to the forward and backward graphs respectively).", "At each time step, we fuse the obtained predicted scores of words based on the forward and backward graphs, and then the word is generated according to the fused scores.", "In this way, we mine the complementarity between the forward and backward temporal sequences to boost the video captioning performance." 
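The hierarchical attention above applies the same additive form $e = w^{T}\tanh(W h + U v + b)$ twice: first over the $T$ time steps of each object trajectory, then over the $N$ objects. A minimal NumPy sketch of the two levels follows; this is our own illustration with assumed array shapes, not the authors' implementation.

```python
import numpy as np

def softmax(e):
    """Numerically stable softmax over a 1-D score vector."""
    z = np.exp(e - e.max())
    return z / z.sum()

def attend(h_prev, feats, w, W, U, b):
    """Additive attention: e_t = w^T tanh(W h + U v_t + b), softmax-normalized,
    returning the attention-weighted sum of the feature rows in feats."""
    e = np.array([w @ np.tanh(W @ h_prev + U @ v + b) for v in feats])
    beta = softmax(e)
    return (beta[:, None] * feats).sum(axis=0)

def hierarchical_attention(h_prev, obj_vlads, params_t, params_o):
    """obj_vlads: (N, T, D) object VLAD features. Temporal attention merges
    the T steps of each object, then object attention merges the N objects."""
    phi = np.stack([attend(h_prev, obj_vlads[i], *params_t)
                    for i in range(obj_vlads.shape[0])])   # (N, D)
    return attend(h_prev, phi, *params_o)                  # (D,)
```

Because both levels condition on the previous decoder state `h_prev`, the attended feature changes at every decoding step, which is what lets different objects dominate for different generated words.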
], [ "Datasets.", "We evaluate our proposed OA-BTG approach on two widely-used datasets, including Microsoft Video Description Corpus (MSVD) [21] and Microsoft Research-Video to Text (MSR-VTT) [22].", "MSVD is an open-domain dataset for video captioning that covers various topics including sports, animals and music.", "It contains 1,970 video clips from Youtube and collects multi-lingual descriptions by Amazon Mechanical Turk (AMT).", "There are about 8,000 English descriptions in total, with roughly 40 descriptions per video.", "Following [23], [16], we only consider the English descriptions, taking 1,200, 100 and 670 clips for training, validation and testing respectively.", "MSR-VTT is a large-scale benchmark used in the video-to-language challenge (we adopt the dataset of the 2016 challenge; the corresponding competition results can be found at http://ms-multimedia-challenge.com/2016/leaderboard).", "It contains 10,000 video clips with 200,000 clip-sentence pairs in total, and covers comprehensive video categories, diverse video content as well as language descriptions.", "There are 20 categories in total, such as music, sports, movie, etc.", "The descriptions are also collected by AMT, and each video clip is annotated with 20 natural language sentences.", "Following the splits in [22], there are 6,513 clips for training, 497 clips for validation, and 2,990 clips for testing." 
], [ "Evaluation Metrics.", "For quantitative evaluation of our proposed approach, we adopt the following common metrics in our experiments: BLEU@4 [24], METEOR [25], and CIDEr [26].", "BLEU@4 measures the fraction of $n$ -grams (here $n=4$ ) shared between the generated sentence and the ground-truth descriptions.", "METEOR measures uni-gram precision and recall between the generated sentence and the ground-truth references, extending exact word matching to include similar words.", "CIDEr is a voting-based metric, which to some extent is robust to incorrect ground-truth descriptions.", "Following [11], [4], [2], all the metrics are computed by using the Microsoft COCO evaluation server [27]." ], [ "Video and sentence preprocessing.", "We sample 40 ($T=40$ ) frames for each input video and extract 5 ($N=5$ ) object regions for each frame empirically.", "We utilize Mask R-CNN [28] to detect objects, which is based on ResNet-101 [29] and pre-trained on the Microsoft COCO dataset [30].", "All the object regions are cropped into images and fed into ResNet-200 to extract local features.", "The global frames are also fed into ResNet-200 to obtain local features.", "We take the output of the $res5c$ layer in ResNet-200 as the local feature map, with the size of $7 \\times 7\\times 2048$ .", "All the reference captioning sentences are tokenized and converted to lower case.", "After removing the punctuations, we collect $12,593$ word tokens for the MSVD dataset and $27,891$ word tokens for the MSR-VTT dataset." 
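The clipped $n$-gram matching that underlies the BLEU@4 metric above can be illustrated with a minimal sketch; the official scores in the experiments come from the Microsoft COCO evaluation server, and full BLEU@4 additionally combines $n = 1, \dots, 4$ with a brevity penalty, so this helper only shows the modified $n$-gram precision for a single $n$:

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, references, n):
    # Candidate n-gram counts are clipped by the maximum count of that
    # n-gram in any single reference, then divided by the candidate total.
    cand = Counter(ngrams(candidate, n))
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
    return clipped / sum(cand.values())
```

For example, the degenerate candidate "the the the" against the reference "the cat" scores only 1/3 on uni-grams, which is exactly the clipping that penalizes word repetition.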
], [ "Training details.", "For the training video/sentence pairs, we filter out the sentences with more than 16 words, and adopt a zero padding strategy to complement the sentences that have fewer than 16 words.", "During training, a begin-of-sentence $<$ BOS$>$ tag and an end-of-sentence $<$ EOS$>$ tag are added at the beginning and end of each sentence.", "Words unseen in the vocabulary are set to $<$ UNK$>$ tags.", "Each word is encoded as a one-hot vector.", "The hidden units of the encoder and decoder are set to 512.", "The word embedding size and attention size are set to 512 and 100 respectively.", "For the trainable VLAD models, we set the cluster center number $K$ to 64 for the MSVD dataset and 128 for the MSR-VTT dataset.", "The reason is that MSR-VTT is a large-scale dataset with diverse video content, thus a larger number of cluster centers is necessary to fully represent the video content.", "During the training stage, all the parameters are randomly initialized, and we utilize the Adam algorithm to optimize the captioning model.", "The learning rate is fixed to $1 \\times 10^{-4}$ , and the training batch size is 16.", "Dropout is applied on the output of the decoder GRU with a rate of $0.5$ to avoid overfitting.", "We also apply gradient clipping to $[-10,10]$ to prevent gradient explosion.", "In the testing stage, we adopt beam search to generate descriptions, where the beam size is set to 5.", "Table: Comparisons with state-of-the-art methods on MSVD dataset.", "All the results are reported as percentage (%).Table: Comparisons with state-of-the-art methods on MSR-VTT dataset.", "All the results are reported as percentage (%)." 
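The sentence preprocessing just described (16-word cap, BOS/EOS tags, UNK for unseen words, zero padding) can be sketched as follows; the `<PAD>` index 0 and the helper name are illustrative assumptions rather than details given in the paper:

```python
def encode_sentence(tokens, vocab, max_len=16):
    # Sentences longer than max_len words are filtered out during training.
    if len(tokens) > max_len:
        return None
    ids = [vocab["<BOS>"]]
    ids += [vocab.get(w, vocab["<UNK>"]) for w in tokens]  # unseen -> <UNK>
    ids.append(vocab["<EOS>"])
    ids += [vocab["<PAD>"]] * (max_len + 2 - len(ids))     # zero padding
    return ids
```

Each resulting id would then be expanded to a one-hot vector before entering the decoder.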
], [ "Comparisons with State-of-the-art Methods", "Tables REF and REF show comparative results between OA-BTG and the state-of-the-art methods on the MSVD and MSR-VTT datasets respectively.", "We can see that OA-BTG outperforms all the compared methods on the popular evaluation metrics, which verifies the effectiveness of the bidirectional temporal graph and the object-aware aggregation proposed in our approach.", "Among the compared methods, STAT [36] combines spatial and temporal attention, where the spatial attention selects relevant objects while the temporal attention selects important frames.", "hLSTMat [37] utilizes an adjusted temporal attention mechanism to distinguish visual words and non-visual words during sentence generation.", "LSTM-GAN [23] introduces adversarial learning for video captioning.", "Different from them, our OA-BTG approach focuses on capturing detailed temporal trajectories for objects in video, as well as learning discriminative visual representations.", "OA-BTG constructs a bidirectional temporal graph and performs object-aware feature aggregation to achieve the above goals, which helps to generate accurate and fine-grained captions for better performance.", "TSA-ED [34] also utilizes trajectory information, introducing a trajectory-structured attentional encoder-decoder network that explores fine-grained motion information.", "Although it extracts dense point trajectories, it loses the object-aware information.", "In contrast, the trajectory extraction in our OA-BTG approach is applied to object regions, and thus captures the object semantics and their temporal dynamics, which play a key role in generating accurate sentences and improving the video captioning performance.", "Similarly, the recent work SeqVLAD [6] also incorporates a trainable VLAD process into the sequence learning framework to mine fine motion information in successive video frames.", "Our OA-BTG approach obtains higher performance for the following two reasons: (1) OA-BTG applies the 
aggregation process on object regions, which can capture the object-aware semantic information.", "(2) OA-BTG also constructs a bidirectional temporal graph to extract the temporal trajectories of each object instance, which captures the detailed temporal dynamics in the video content.", "Thus, OA-BTG achieves better captioning performance." ], [ "Ablation Study", "In this subsection, we study in detail the impact of each component of our proposed OA-BTG.", "The corresponding results are shown in Table REF .", "The baseline method (denoted as BASELINE) only applies a learnable VLAD model on global frame sequences.", "The methods in the second row refer to methods with object-aware aggregation and a single-directional temporal graph constructed along, or reversely along, the temporal order.", "It can be observed that both methods with object-aware aggregation outperform the baseline on the popular metrics.", "For example, “OA with Backward TG” improves the performance on BLEU@4, METEOR, CIDEr scores by 0.6%, 1.3%, 1.7% respectively on the MSVD dataset, and 1.2%, 1.0%, 3.0% respectively on the MSR-VTT dataset.", "These results verify the effectiveness of object-aware aggregation in our proposed approach.", "Comparing OA-BTG with the single-directional variants (OA + Forward/Backward TG), it can be observed that OA-BTG achieves better performance.", "Taking the MSVD dataset for example, OA-BTG obtains average improvements of 3.25%, 1.15%, 5.45% on BLEU@4, METEOR, CIDEr scores, respectively, which indicates the effectiveness of the bidirectional temporal graph.", "Similar improvements can also be found on the MSR-VTT dataset.", "Finally, OA-BTG augments the baseline method with both innovations, object-aware aggregation and the bidirectional temporal graph, and the comparison between OA-BTG and the baseline method verifies the overall effectiveness of our proposed approach.", "Figure: Successful cases of the description examples generated by our OA-BTG approach.", "Examples of 
the baseline method are presented for comparison.Figure: Failure cases of the description examples generated by our OA-BTG approach." ], [ "Qualitative Analysis", "Figures REF and REF present some successful and failure cases of the descriptions generated by our OA-BTG.", "From Figure REF , it can be seen that our approach indeed improves the video captioning by capturing objects and their detailed temporal information.", "For instance, the example in the top-left demonstrates that our approach can generate accurate depictions of actions by modeling the temporal trajectories of each object.", "The example in the top-right shows that our approach not only expresses the correct semantics, but also is capable of capturing detailed actions so as to generate a fine-grained description (“cutting a piece of bread”) rather than a general one (“cooking”).", "Overall, all these comparisons verify the effectiveness of our proposed method.", "Figure REF shows two failure cases, where our OA-BTG approach fails to describe “on a couch” and “chase”.", "This requires not only modeling salient objects with their trajectories, but also understanding the interaction relationships among objects, which is very challenging.", "However, our approach still successfully describes “playing with a dog”, “a group of people” and “playing soccer” by modeling object-aware temporal information." 
], [ "Conclusion", "In this paper, we have proposed a novel video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG), which captures the detailed temporal dynamics of salient object instances, as well as learns discriminative spatio-temporal representations for complex video content by aggregating fine local information on object-aware regions and frames.", "First, a bidirectional temporal graph is constructed to capture temporal trajectories for each object instance in two complementary directions, which exploits the detailed temporal dynamics in video for generating accurate and fine-grained captions.", "Then, object-aware aggregation is performed to encode the fine spatio-temporal information for salient objects and global context.", "Our full model, combining the contributions of the bidirectional temporal graph and object-aware aggregation, captures crucial spatial and temporal cues simultaneously, and thus boosts the performance.", "In the future, we will explore how to construct a more effective graph to model the relations among different object instances, as well as the interactions between the forward and backward temporal sequences in an end-to-end model." ], [ "Acknowledgments", "This work was supported by the National Natural Science Foundation of China under Grant 61771025 and Grant 61532005." ] ]
1906.04375
[ [ "Orthogonal Cocktail BPSK: Exceeding Shannon Capacity of QPSK Input" ], [ "Abstract Shannon channel capacity of an additive white Gaussian noise channel is the highest reliable transmission bit rate (RTBR) with arbitrarily small error probability.", "However, the authors find that the concept is correct only when the channel input and output are treated as a single signal-stream.", "Hence, this work reveals a possibility for increasing the RTBR further by transmitting two independent signal-streams in a parallel manner.", "The gain is obtained by separating the two signals at the receiver without any inter-stream interference.", "For doing so, we borrow the QPSK constellation to layer the two independent signals and create the partial decoding method to work with the signal separation from Hamming to Euclidean space.", "The theoretical derivations prove that the proposed method can exceed the conventional QPSK in terms of RTBRs." ], [ "Introduction", "Shannon channel capacity underlies the communication principle for achieving the highest bit rate at which information can be transmitted with arbitrarily small error probability.", "A standard way to model the input and output relations is the memoryless additive white Gaussian noise (AWGN) channel $\\begin{array}{l}y = x + n,\\end{array}$ where $y$ is the received signal, $x$ is the transmitted signal and $n$ is the received AWGN component from a normally distributed ensemble of power $\\sigma _N^2$ denoted by $n \\sim \\mathcal {N}(0,\\sigma _N^2)$ [1] .", "In the previous literature, the channel capacities of finite alphabet inputs have been calculated in terms of the reliable transmission bit rates (RTBRs) by $\\begin{array}{l}\\rm {I}(X;Y) =\\rm {H}(Y) - \\rm {H}(N)\\end{array}$ where $\\rm {I}(X;Y)$ is the mutual information, ${\\rm {H}}(Y)$ is the entropy of the received signal and ${\\rm {H}}(N) = {\\log _2} (\\sqrt{2 \\pi e \\sigma _N^2})$ is that of the AWGN.", "Some numerical results of (REF ) have 
been calculated as shown in Fig.", "1, where the capacity of Gaussian type signal input is also plotted as a reference.", "Figure: Reliable transmission bit rates for BPSK, QPSK and Gaussian type.Though the capacity concept has held for the last decades, there have still been some considerations on the possibility of going beyond these capacities [2].", "A mathematical incentive can be found from the downward concavity of the mutual information curves shown in Fig.1, from which one can conclude $\\begin{array}{l}\\tilde{\\rm {I}}_{x}\\left[(E_1+E_2)/\\sigma _N^2\\right] < \\tilde{\\rm {I}}_{x_1}(E_1/\\sigma _N^2)+\\tilde{\\rm {I}}_{x_2}(E_2/\\sigma _N^2)\\end{array}$ when $x=x_1+x_2$ is the signal superposition, $x_1$ and $x_2$ are two independent signals, and $E$ , $E_1$ and $E_2$ are the symbol energies of $x$ , $x_1$ and $x_2$ , respectively.", "In contrast to the conventional signal superposition methods, obtaining a gain from (REF ) requires no inter-symbol interference, i.e., no interference between $x_1$ and $x_2$ .", "Nevertheless, great difficulty is encountered when one tries to organize a signal superposition that allows the separation needed to extract a contribution from (REF ).", "This paper pursues, however, the inequality (REF ) by creating a new method, referred to as the orthogonal cocktail BPSK, that works in Hamming- and Euclidean space for separating the parallel transmission of the independent signals.", "The derivations are done with the assumption of using ideal channel codes that allow the error-free transmission of BPSK and QPSK as well.", "Throughout the present paper, we use a capital letter to express a vector and a small letter to indicate its component, e.g., $A = \\lbrace a_1,a_2, ...., a_M\\rbrace $ , where $A$ represents the vector and $a_i$ the $ith$ component.", "In addition, we use $\\hat{y}$ to express the estimate of $y$ at the receiver and $\\tilde{\\rm {I}}(\\gamma )$ to express the mutual information $\\rm {I}(X;Y)$ with SNR, $\\gamma 
$ , as the argument [3].", "The details are introduced in the following sections." ], [ "Signal Superposition- and Separation Scheme", "Let us consider a binary information source bit sequence which is partitioned into two independent subsequences expressed in the vector form $C^{(i)} = \\lbrace c^{(i)}_1,c^{(i)}_2,.....,c^{(i)}_{K_i}\\rbrace $ , where $K_i$ is the length of the source subsequence and $i=1,2$ indicates the two source subsequences.", "The two source subsequences are separately encoded, in Hamming space, by two different channel code matrices $\\begin{array}{l}v^{(i)}_{m} = \\sum \\limits _{k_i}g^{(i)}_{mk_i}c^{(i)}_{k_i}\\end{array}$ where $v^{(i)}_m$ is the $mth$ component of the channel code $V^{(i)}$ , and $g^{(i)}_{mk_i}$ is the element of the code matrix $G^{(i)}$ for $i=1,2$ and $m=1,2,....,M$ , respectively.", "We note that $M$ is the length of the channel code word, and $R_1=K_1/M$ and $R_2=K_2/M$ are the two code rates, which are not necessarily equal.", "For the signal modulations, we borrow the QPSK constellation to map the two channel codes, $V^{(1)}$ and $V^{(2)}$ , into the Euclidean space specified by $s^{(1)}=\\lbrace \\sqrt{2}\\alpha ,j0\\rbrace $ , $s^{(2)}=\\lbrace 0, j\\sqrt{2}\\alpha \\rbrace $ , $s^{(3)}=\\lbrace -\\sqrt{2}\\alpha ,j0\\rbrace $ and $s^{(4)}=\\lbrace 0, -j\\sqrt{2}\\alpha \\rbrace $ , where $\\alpha >0$ and $j=\\sqrt{-1}$ , as shown in Fig.2.", "In contrast to the conventional QPSK modulation, the proposed method allows $V^{(1)}$ to be demodulated and decoded separately from $V^{(2)}$ .", "This decoding scheme is defined as the partial decoding in this approach because only one source subsequence, i.e., $C^{(1)}$ , is decoded.", "Consequently, using the decoding results of $C^{(1)}$ allows a reliable separation of $V^{(2)}$ from $V^{(1)}$ .", "Then, $V^{(2)}$ can be demodulated over two perpendicular BPSKs: one is constructed by $s^{(1)}$ and $s^{(3)}$ and the other by $s^{(2)}$ and $s^{(4)}$ .", "More 
importantly, the Euclidean distance between the two signal points within each BPSK of the decoupled pair is larger than $2\\alpha $ , which results, eventually, in an RTBR gain, as found later.", "Thus, we refer to the proposed method as the orthogonal cocktail BPSK (OCB), as explained in the following paragraphs.", "Figure: Constellation of the proposed method.The OCB modulation is classified into two cases with respect to the bit values $v^{(1)}_m =0$ or 1.", "Case I corresponds to $v^{(1)}_m=0$ , whereby we map $v^{(1)}_m=0$ and $v^{(2)}_m=0$ onto $s^{(1)}_m$ , and $v^{(1)}_m=0$ and $v^{(2)}_m=1$ onto $s^{(3)}_m$ .", "Actually, one can regard the BPSK in the horizontal direction as being used for the signal mapping of case I.", "Case II corresponds to $v^{(1)}_m=1$ , whereby $v^{(1)}_m=1$ and $v^{(2)}_m=0$ are mapped onto $s^{(2)}_m$ , and $v^{(1)}_m=1$ and $v^{(2)}_m=1$ onto $s^{(4)}_m$ , where one can find the BPSK in the vertical direction.", "For expressing the OCB modulations more intuitively, the signal mapping of the two cases is listed in Table I, in which $s^{(\\kappa )}_m$ for $\\kappa =1,2,3, 4$ is the QPSK constellation with the sequential index $m$ added.", "Table: Signal modulation results.Then, the transmitter inputs one symbol after another into the AWGN channel by $\\begin{array}{l}y_m = s^{(\\kappa )}_m + n_m, \\ \\ \\ for \\ \\ m\\ =\\ 1, \\ 2,\\, ....,M\\end{array}$ where $y_m$ is the received signal, $s^{(\\kappa )}_m$ is the transmitted symbol and $n_m$ is the Gaussian noise statistically equivalent to that in (REF ).", "At the receiver, all received signals in Euclidean space are recorded sequentially.", "The demodulation starts from $V^{(1)}$ by $\\begin{array}{l}\\hat{y}_m = s^{(1)}_m \\ \\ \\ or \\ \\ s^{(3)}_m\\ \\ \\ \\ for \\ \\ \\ v^{(1)}_m=0\\end{array}$ and $\\begin{array}{l}\\hat{y}_m = s^{(2)}_m \\ \\ or \\ \\ s^{(4)}_m\\ \\ \\ \\ for \\ \\ \\ v^{(1)}_m=1\\end{array}$ where $ \\hat{y}_m $ is the estimate of $ y_m $ .", "Then, we work on the partial decoding scheme 
defined above by using the estimates of (REF ) and (REF ) to obtain $\\hat{C}^{(1)}$ .", "Once $\\hat{C}^{(1)}$ has been obtained, the receiver reconstructs the channel code $V^{(1)}$ by $\\begin{array}{l}\\hat{v}^{(1)}_m = \\sum \\limits _{k_i}g^{(1)}_{mk_1} \\hat{c}^{(1)}_{k_1},\\end{array}$ which can be used to decouple the QPSK into the two perpendicular BPSKs in Euclidean space.", "Figure: The mapping symbols for (a) case I and (b) case II.The results of (REF ) can be regarded as a reliable reconstruction, whereby $\\hat{v}^{(1)}_m = 0 $ indicates that the recorded signal belongs to case I, while $\\hat{v}^{(1)}_m = 1$ to case II.", "Thus, the two perpendicular BPSKs can be decoupled as shown in Fig.3(a)(b), respectively.", "This allows the detection of $v^{(2)}_m$ as follows.", "If $v^{(1)}_m=0$ , the receiver detects the recorded signal $s^{(\\kappa )}_m$ by $\\begin{array}{l}\\hat{y}_m = s^{(1)}_m \\ \\ for\\ \\ v^{(2)}_m=0\\end{array}$ and $\\begin{array}{l}\\hat{y}_m = s^{(3)}_m\\ \\ for\\ \\ v^{(2)}_m=1\\end{array}$ If $v^{(1)}_m=1$ , the receiver detects $v^{(2)}_m$ by $\\begin{array}{l}j\\hat{y}_m = js^{(2)}_m \\ \\ for\\ \\ v^{(2)}_m=0\\end{array}$ and $\\begin{array}{l}j\\hat{y}_m = js^{(4)}_m\\ \\ for\\ \\ v^{(2)}_m=1\\end{array}$ Then, by taking the estimates of (REF ), (REF ), (REF ) and (REF ) to the decoding of $C^{(2)}$ , we can obtain $\\hat{C}^{(2)}$ .", "In a practical situation, when an error is present in the reconstruction of $\\hat{v}^{(1)}_m$ , the detection of $v^{(2)}_m$ can be wrong with $50\\%$ probability.", "Then, the decoding can suffer from error propagation.", "However, when working with an ideal low-density block code, the infinitesimally small error probability of $\\hat{C}^{(1)}$ leads to an infinitesimally small error probability in the reconstruction of $\\hat{V}^{(1)}$ .", "Thus, the error rate problem in the signal separation can be neglected when we are studying the capacity issue." 
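A noiseless sketch of the Table I mapping and of the decoupled detection of $v^{(2)}_m$ once $\hat{v}^{(1)}_m$ is available may clarify the scheme; the sign-based decision rule is an assumption consistent with the constellation of Fig. 2, not code from the paper:

```python
import numpy as np

def ocb_modulate(v1, v2, alpha=1.0):
    # Table I: (0,0)->s1, (0,1)->s3 (horizontal BPSK);
    #          (1,0)->s2, (1,1)->s4 (vertical BPSK).
    a = np.sqrt(2) * alpha
    if v1 == 0:
        return complex(a if v2 == 0 else -a, 0.0)
    return complex(0.0, a if v2 == 0 else -a)

def ocb_detect_v2(y, v1_hat):
    # Decouple into the horizontal (v1 = 0) or vertical (v1 = 1) BPSK,
    # then decide v2 from the sign of the corresponding component of y.
    proj = y.real if v1_hat == 0 else y.imag
    return 0 if proj >= 0 else 1
```

In the noiseless case the detection recovers $v^{(2)}_m$ exactly for all four constellation points, which is the separation property the partial decoding relies on.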
], [ "Up-Bound Issue", "Assuming that we are working with ideal channel codes that allow error-free transmissions of QPSK and BPSK, the RTBR of the OCB method is found to be higher than that of the QPSK input, as proved in the following paragraphs.", "First, we prove that the RTBR of $C^{(1)}$ is at half that of the QPSK input: $\\begin{array}{l}\\mathbb {R}_{c1} = \\frac{1}{2}\\tilde{\\rm {I}}_q(2\\alpha ^2/\\sigma _N^2)\\end{array}$ where $\\mathbb {R}_{c1}$ is the RTBR of $C^{(1)}$ and $\\tilde{\\rm {I}}_q$ is the mutual information of the QPSK input.", "Proof: In order to prove this issue, we first recall the following theorem: when the Euclidean distance $\\tilde{d}(\\xi ,\\xi ^{\\prime })$ is the same, a larger Hamming distance of the channel codes leads to a smaller BER.", "This is true when we compare the OCB with the conventional BPSK, since the source code, i.e., $C^{(1)}$ , can be regarded as a QPSK coded modulation that deletes half of the source bits.", "The OCB can thus have a smaller BER in comparison with that of the QPSK input.", "Thus, for infinitely long channel codes, whenever the transmission of the QPSK input has an infinitesimally small error probability, the partial decoding with $C^{(1)}$ applies as well.", "The RTBR of $C^{(1)}$ is at half of the QPSK input because the former transmits one channel bit per symbol, while the latter transmits two channel bits.", "Once $C^{(1)}$ is transmitted to the receiver without error, the demodulation of $V^{(2)}$ can be done by using the two BPSKs, in each of which the Euclidean distance between the signal points is $2\\sqrt{2}\\alpha $ .", "Thus, the symbol energy is found at $2\\alpha ^2$ , which should be used to calculate the mutual information $\\begin{array}{l}\\mathbb {R}_{c2}=\\tilde{\\rm {I}}_b(2\\alpha ^2/\\sigma _N^2)\\end{array}$ where $\\mathbb {R}_{c2}$ is the RTBR of $C^{(2)}$ , and $\\tilde{\\rm {I}}_b$ is the mutual information of BPSK.", "Finally, the summation of the RTBRs in (REF ) and (REF ) yields $\\begin{array}{l}\\mathbb {R}_J = 
\\frac{1}{2}\\tilde{\\rm {I}}_q(2\\alpha ^2/\\sigma _N^2)+\\tilde{\\rm {I}}_b(2\\alpha ^2/\\sigma _N^2)\\end{array}$ where $\\mathbb {R}_J$ is the RTBR of this approach.", "The numerical results of (REF ) are plotted in Fig.4, where one can see that the curve of OCB lies on the left side of that of QPSK.", "This indicates that the RTBR exceeds that of the QPSK input.", "Figure: ADRs of OCB compared with QPSK and BPSK versus the linear ratio $E_s/\\sigma _N^2$ ." ], [ "Conclusion", "In this paper, we proposed the OCB method for increasing the RTBR further beyond that of the QPSK input and, even, the Shannon capacity of Gaussian type signals.", "The proposed method works in Hamming and Euclidean space to separate the two independent signals transmitted in parallel over an AWGN channel.", "Theoretical derivations prove this approach based on the assumption of using ideal channel codes." ] ]
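The mutual-information terms in the rate expressions above can be evaluated numerically from $\rm {I}(X;Y) = \rm {H}(Y) - \rm {H}(N)$. The sketch below computes the BPSK term for real AWGN with unit noise variance by plain grid integration; it is a simplification for illustration, not the authors' computation:

```python
import numpy as np

def bpsk_mutual_info(snr, grid=4001, span=8.0):
    # I(X;Y) = H(Y) - H(N) in bits, for equiprobable x = +/- sqrt(snr)
    # over real AWGN with unit noise variance (trapezoidal integration).
    a = np.sqrt(snr)
    y = np.linspace(-span - a, span + a, grid)
    gauss = lambda m: np.exp(-(y - m) ** 2 / 2) / np.sqrt(2 * np.pi)
    p = 0.5 * (gauss(a) + gauss(-a))            # mixture density of Y
    f = p * np.log2(p + 1e-300)
    h_y = -np.sum((f[1:] + f[:-1]) * np.diff(y)) / 2
    h_n = 0.5 * np.log2(2 * np.pi * np.e)       # H(N) of unit-variance noise
    return h_y - h_n
```

Under the two-perpendicular-BPSK view of QPSK used in the paper, the QPSK mutual information equals twice the BPSK mutual information at the same per-dimension SNR, which is how the curves of Fig. 1 and Fig. 4 can be reproduced.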
1906.04440
[ [ "Simultaneously preperiodic integers for quadratic polynomials" ], [ "Abstract In this article, we study the set of parameters $c \\in \\mathbb{C}$ for which two given complex numbers $a$ and $b$ are simultaneously preperiodic for the quadratic polynomial $f_{c}(z) = z^{2} +c$.", "Combining complex-analytic and arithmetic arguments, Baker and DeMarco showed that this set of parameters is infinite if and only if $a^{2} = b^{2}$.", "Recently, Buff answered a question of theirs, proving that the set of parameters $c \\in \\mathbb{C}$ for which both $0$ and $1$ are preperiodic for $f_{c}$ is equal to $\\lbrace -2, -1, 0 \\rbrace$.", "Following his approach, we complete the description of these sets when $a$ and $b$ are two given integers with $\\lvert a \\rvert \\neq \\lvert b \\rvert$." ], [ "Introduction", "For $c \\in \\mathbb {C}$ , let $f_{c} \\colon \\mathbb {C} \\rightarrow \\mathbb {C}$ be the complex quadratic map $ f_{c} \\colon z \\mapsto z^{2} +c \\, \\text{.}", "$ Given a point $z \\in \\mathbb {C}$ , we study the sequence $\\left( f_{c}^{\\circ n}(z) \\right)_{n \\ge 0}$ of iterates of $f_{c}$ at $z$ .", "The set $\\left\\lbrace f_{c}^{\\circ n}(z) : n \\ge 0 \\right\\rbrace $ is called the forward orbit of $z$ under $f_{c}$ .", "The point $z$ is said to be periodic for $f_{c}$ if there exists an integer $p \\ge 1$ such that $f_{c}^{\\circ p}(z) = z$ .", "The least such integer $p$ is called the period of $z$ .", "The point $z$ is said to be preperiodic for $f_{c}$ if its forward orbit is finite or, equivalently, if there is an integer $k \\ge 0$ such that $f_{c}^{\\circ k}(z)$ is periodic for $f_{c}$ .", "The smallest integer $k$ with this property is called the preperiod of $z$ .", "Definition 1 For $a \\in \\mathbb {C}$ , let $\\mathcal {S}_{a}$ be the set defined by $ \\mathcal {S}_{a} = \\left\\lbrace c \\in \\mathbb {C} : a \\text{ is preperiodic for } f_{c} \\right\\rbrace \\, \\text{.}", "$ In this paper, we wish to examine these sets of 
parameters.", "For $n \\ge 0$ , let $F_{n} \\in \\mathbb {Z}[c, z]$ be the polynomial given by $ F_{n}(c, z) = f_{c}^{\\circ n}(z) \\, \\text{.}", "$ The sequence $\\left( F_{n} \\right)_{n \\ge 0}$ satisfies $F_{0}(c, z) = z$ and the recursion formulas $ F_{n}(c, z) = F_{n -1}\\left( c, z^{2} +c \\right) = F_{n -1}(c, z)^{2} +c \\quad \\text{for} \\quad n \\ge 1 \\, \\text{.}", "$ In particular, when $n \\ge 1$ , the polynomial $F_{n}$ is monic in $c$ of degree $2^{n -1}$ and monic in $z$ of degree $2^{n}$ .", "Now, given a point $a \\in \\mathbb {C}$ , define – for $k \\ge 0$ and $p \\ge 1$  – the set $ \\mathcal {S}_{a}^{k, p} = \\left\\lbrace c \\in \\mathbb {C} : F_{k +p}(c, a) = F_{k}(c, a) \\right\\rbrace \\, \\text{.}", "$ For all $k \\ge 0$ and $p \\ge 1$ , the set $\\mathcal {S}_{a}^{k, p}$ contains at most $2^{k +p -1}$ elements and consists of the parameters $c \\in \\mathbb {C}$ for which the point $a$ is preperiodic for $f_{c}$ with preperiod less than or equal to $k$ and period dividing $p$ .", "In particular, it follows that the set $ \\mathcal {S}_{a} = \\bigcup _{k \\ge 0, \\, p \\ge 1} \\mathcal {S}_{a}^{k, p} $ is countable.", "Proposition 2 For every $a \\in \\mathbb {C}$ , the set $\\mathcal {S}_{a}$ is infinite.", "To obtain a contradiction, suppose that $\\mathcal {S}_{a}$ contains finitely many elements.", "Then, since the sequence $\\left( \\mathcal {S}_{a}^{n, 1} \\right)_{n \\ge 0}$ is increasing, there exists an integer $N \\ge 0$ such that $\\mathcal {S}_{a}^{n +1, 1} = \\mathcal {S}_{a}^{n, 1}$ for all $n \\ge N$ .", "Now, note that, for every $n \\ge 0$ , we have $ F_{n +2}(c, a) -F_{n +1}(c, a) = \\left( F_{n +1}(c, a) -F_{n}(c, a) \\right) \\left( F_{n +1}(c, a) +F_{n}(c, a) \\right) \\, \\text{.}", "$ It follows that, if $n \\ge N$ and $\\gamma $ is a root of the polynomial $F_{n +1}(c, a) +F_{n}(c, a)$ , then $ F_{n +1}(\\gamma , a) -F_{n}(\\gamma , a) = F_{n +1}(\\gamma , a) +F_{n}(\\gamma , a) = 0 \\, \\text{,} $ and hence 
$F_{n +1}(\\gamma , a) = F_{n}(\\gamma , a) = 0$ , which yields $\\gamma = 0$ .", "Therefore, we have $F_{n}(0, a) = 0$ and $F_{n +1}(c, a) +F_{n}(c, a) = c^{2^{n}}$ for all $n \\ge N$ .", "In particular, we get $ \\frac{\\partial \\left( F_{N +2} +F_{N +1} \\right)}{\\partial c}(0, a) = 2 \\frac{\\partial F_{N +1}}{\\partial c}(0, a) F_{N +1}(0, a) +2 \\frac{\\partial F_{N}}{\\partial c}(0, a) F_{N}(0, a) +2 = 2 \\, \\text{,} $ which contradicts the fact that $F_{N +2}(c, a) +F_{N +1}(c, a) = c^{2^{N +1}}$ .", "Remark 3 Note that, if $a \\in \\mathbb {C}$ , then $f_{c}(a) = f_{c}(-a)$ for all $c \\in \\mathbb {C}$ .", "Consequently, we have $\\mathcal {S}_{a} = \\mathcal {S}_{-a}$ and $\\mathcal {S}_{a}^{k, p} = \\mathcal {S}_{-a}^{k, p}$ for all $k \\ge 1$ and $p \\ge 1$ .", "Example 4 Assume that $a \\in \\mathbb {C}$ .", "Then (see Figure REF ) we have $\\mathcal {S}_{a}^{0, 1} & = \\left\\lbrace -a^{2} +a \\right\\rbrace \\, \\text{,}\\\\\\mathcal {S}_{a}^{1, 1} & = \\left\\lbrace -a^{2} -a, -a^{2} +a \\right\\rbrace \\, \\text{,}\\\\\\mathcal {S}_{a}^{0, 2} & = \\left\\lbrace -a^{2} -a -1, -a^{2} +a \\right\\rbrace \\, \\text{,}\\\\\\mathcal {S}_{a}^{1, 2} & = \\left\\lbrace -a^{2} -a -1, -a^{2} -a, -a^{2} +a -1, -a^{2} +a \\right\\rbrace \\, \\text{.", "}$ Figure: Some parameters c∈ℂc \\in \\mathbb {C} for which a given complex number aa is preperiodic for f c f_{c}.Here, the problem we are interested in is the description of the sets $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b}$ when $a$ and $b$ are two given complex numbers.", "Example 5 Suppose that $a \\in \\mathbb {C}$ .", "Then (see Figure REF ) we have $ -a^{2} -a -1 = -(a +1)^{2} +(a +1) -1 \\in \\mathcal {S}_{a}^{0, 2} \\cap \\mathcal {S}_{a +1}^{1, 2} $ and $ -a^{2} -a = -(a +1)^{2} +(a +1) \\in \\mathcal {S}_{a}^{1, 1} \\cap \\mathcal {S}_{a +1}^{0, 1} \\, \\text{.}", "$ Figure: Two parameters c∈ℂc \\in \\mathbb {C} for which aa and a+1a +1 are simultaneously preperiodic for f c f_{c} when aa is a 
given complex number.Example 6 We have $-2 \\in \\mathcal {S}_{0}^{2, 1} \\cap \\mathcal {S}_{1}^{1, 1}$ , $-1 \\in \\mathcal {S}_{0}^{0, 2} \\cap \\mathcal {S}_{1}^{1, 2}$ and $0 \\in \\mathcal {S}_{0}^{0, 1} \\cap \\mathcal {S}_{1}^{0, 1}$ (see Figure REF ).", "Figure: Three parameters c∈ℂc \\in \\mathbb {C} for which both 0 and 1 are preperiodic for f c f_{c}.Since the sets $\\mathcal {S}_{a}$ are countably infinite (see Proposition REF ), we may wonder whether the sets $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b}$ are infinite.", "This question was answered by Baker and DeMarco in [1].", "Using potential theory and an equidistribution result for points of small height with respect to an adelic height function, they proved that the set $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b}$ is infinite if and only if $a^{2} = b^{2}$ .", "As they pointed out, their proof is not effective and does not provide any estimate on the cardinality of these sets when they are finite.", "In their article, Baker and DeMarco conjectured that $-2$ , $-1$ and 0 were the only parameters $c \\in \\mathbb {C}$ for which 0 and 1 are simultaneously preperiodic for $f_{c}$ (see Example REF ).", "Using localization properties of the set of parameters $c \\in \\mathbb {C}$ for which both 0 and 1 have bounded forward orbit under $f_{c}$ and the fact that 0 is the only parameter $c \\in \\mathbb {C}$ that is contained in the main cardioid of the Mandelbrot set and for which 0 is preperiodic for $f_{c}$ , Buff gave an elementary proof of their conjecture in [2].", "Following his approach, we complete the description of the sets $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b}$ when $a$ and $b$ are two given integers with $\\vert a \\vert \\ne \\vert b \\vert $ .", "More precisely, we prove the following theorem, which asserts that Example REF and Example REF present all the parameters $c \\in \\mathbb {C}$ for which two given distinct and non-opposite integers are simultaneously preperiodic for the 
polynomial $f_{c}$ : Theorem 7 Assume that $a$ and $b$ are two integers with $\\vert b \\vert > \\vert a \\vert $ .", "Then either $a = 0$ , $\\vert b \\vert = 1$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\lbrace -2, -1, 0 \\rbrace $ , or $a = 0$ , $\\vert b \\vert = 2$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\lbrace -2 \\rbrace $ , or $\\vert a \\vert \\ge 1$ , $\\vert b \\vert = \\vert a \\vert +1$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\left\\lbrace -a^{2} -\\vert a \\vert -1, -a^{2} -\\vert a \\vert \\right\\rbrace $ , or $\\vert b \\vert > \\max \\left\\lbrace 2, \\vert a \\vert +1 \\right\\rbrace $ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\varnothing $ .", "Our proof is elementary and uses only basic analytic and arithmetic arguments.", "In particular, the reader does not need to be familiar with complex dynamics.", "In Section , we reprove some well-known results on the dynamics of the polynomials $f_{c}$ .", "In Section , we go back to the study of the parameter space and give a proof of Theorem REF .", "Acknowledgments The author would like to thank his Ph.D. advisors, Xavier Buff and Jasmin Raissy, for helpful discussions without which this paper would not exist." 
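The parameter memberships listed above are easy to check mechanically: with the normalization $f_{c}(z) = z^{2} +c$ used throughout, an integer starting point either revisits a value (and is then preperiodic) or its orbit escapes to infinity. A minimal sketch in Python (the helper name, iteration cap, and escape bound are our own choices):

```python
# Check preperiodicity of integer points under f_c(z) = z^2 + c,
# using exact integer arithmetic: an orbit either repeats a value
# (preperiodic) or eventually exceeds any bound (not preperiodic).

def is_preperiodic(c, z, max_steps=100, bound=10**6):
    seen = set()
    for _ in range(max_steps):
        if z in seen:
            return True          # orbit revisits a value: eventually periodic
        if abs(z) > bound:
            return False         # orbit escapes: |z| grows without bound
        seen.add(z)
        z = z * z + c            # one iteration of f_c
    return False

# Example 6: c = -2, -1, 0 make both 0 and 1 preperiodic for f_c.
assert all(is_preperiodic(c, 0) and is_preperiodic(c, 1) for c in (-2, -1, 0))

# Theorem 7, case |a| >= 1 and |b| = |a| + 1, with a = 1, b = 2:
# S_1 and S_2 meet at -a^2 - |a| - 1 = -3 and -a^2 - |a| = -2.
assert all(is_preperiodic(c, 1) and is_preperiodic(c, 2) for c in (-3, -2))

# A non-example: for c = 1 the orbit of 0 is 0, 1, 2, 5, 26, ... and escapes.
assert not is_preperiodic(1, 0)
```

Scanning integer parameters in the range $\vert c \vert \le R_{a}$ of Proposition 22 recovers exactly these integer intersections; the real content of Theorem 7 is that it also rules out non-integer algebraic parameters, which no finite scan can do.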
], [ "The dynamics of the quadratic polynomials", "We shall investigate here the dynamics of the quadratic maps $f_{c} \\colon \\mathbb {C} \\rightarrow \\mathbb {C}$ .", "Given a parameter $c \\in \\mathbb {C}$ , let $\\mathcal {X}_{c}$ be the set $ \\mathcal {X}_{c} = \\left\\lbrace z \\in \\mathbb {C} : z \\text{ is preperiodic for } f_{c} \\right\\rbrace \\, \\text{,} $ and, for $k \\ge 0$ and $p \\ge 1$ , let $\\mathcal {X}_{c}^{k, p}$ be the set $ \\mathcal {X}_{c}^{k, p} = \\left\\lbrace z \\in \\mathbb {C} : F_{k +p}(c, z) = F_{k}(c, z) \\right\\rbrace \\, \\text{.}", "$ For all $k \\ge 0$ and $p \\ge 1$ , the set $\\mathcal {X}_{c}^{k, p}$ contains at most $2^{k +p}$ elements, is invariant under $f_{c}$ and consists of the preperiodic points for $f_{c}$ with preperiod less than or equal to $k$ and period dividing $p$ .", "In particular, we have $ \\mathcal {X}_{c} = \\bigcup _{k \\ge 0, \\, p \\ge 1} \\mathcal {X}_{c}^{k, p} \\, \\text{.}", "$ Moreover, the set $\\mathcal {X}_{c}$ is completely invariant under $f_{c}$  – that is, for every $z \\in \\mathbb {C}$ , $f_{c}(z) \\in \\mathcal {X}_{c}$ if and only if $z \\in \\mathcal {X}_{c}$ .", "Remark 8 Note that, if $c \\in \\mathbb {C}$ , then $f_{c}(z) = f_{c}(-z)$ for all $z \\in \\mathbb {C}$ .", "Therefore, the sets $\\mathcal {X}_{c}$ and $\\mathcal {X}_{c}^{k, p}$ , with $k \\ge 1$ and $p \\ge 1$ , are symmetric with respect to the origin.", "Proposition 9 For every $c \\in \\mathbb {C}$ , we have $ \\mathcal {X}_{c} \\subset \\bigcap _{n \\ge 0} \\left\\lbrace z \\in \\mathbb {C} : \\left|f_{c}^{\\circ n}(z) \\right|\\le \\rho _{c} \\right\\rbrace \\, \\text{,} $ where $\\rho _{c} = \\frac{1 +\\sqrt{1 +4 \\vert c \\vert }}{2}$ .", "For every $z \\in \\mathbb {C}$ , we have $\\left|f_{c}(z) \\right|\\ge \\vert z \\vert ^{2} -\\vert c \\vert $ , and $\\vert z \\vert ^{2} -\\vert c \\vert > \\vert z \\vert $ if and only if $\\vert z \\vert > \\rho _{c}$ .", "It follows by induction that, if $z \\in 
\\mathbb {C}$ satisfies $\\vert z \\vert > \\rho _{c}$ , then $\\left|f_{c}^{\\circ (k +p)}(z) \\right|> \\left|f_{c}^{\\circ k}(z) \\right|$ for all $k \\ge 0$ and $p \\ge 1$ , and hence $z$ is not preperiodic for $f_{c}$ .", "As the set $\\mathcal {X}_{c}$ is invariant under $f_{c}$ , this completes the proof of the proposition.", "Now, let us study the dynamics of the polynomial $f_{c}$ when $c$ is a real parameter.", "Suppose that $c \\in \\left( -\\infty , \\frac{1}{4} \\right]$ .", "Then the map $f_{c} \\colon \\mathbb {R} \\rightarrow \\mathbb {R}$ is even and strictly increasing on $\\mathbb {R}_{\\ge 0}$ , has two fixed points $\\alpha _{c} \\le \\beta _{c}$  – with equality if and only if $c = \\frac{1}{4}$  – given by $ \\alpha _{c} = \\frac{1 -\\sqrt{1 -4 c}}{2} \\quad \\text{and} \\quad \\beta _{c} = \\frac{1 +\\sqrt{1 -4 c}}{2} $ and satisfies $f_{c}(z) > z$ for all $z \\in \\left( \\beta _{c}, +\\infty \\right)$ .", "In particular, we have $ f_{c}\\left( \\left[ -\\beta _{c}, \\beta _{c} \\right] \\right) = \\left[ c, \\beta _{c} \\right] $ and the sequence $\\left( f_{c}^{\\circ n}(z) \\right)_{n \\ge 0}$ diverges to $+\\infty $ for all $z \\in \\left( -\\infty , -\\beta _{c} \\right) \\cup \\left( \\beta _{c}, +\\infty \\right)$ .", "It follows that, if $c \\in \\left[ -2, \\frac{1}{4} \\right]$ , then $ f_{c}\\left( \\left[ -\\beta _{c}, \\beta _{c} \\right] \\right) \\subset \\left[ -\\beta _{c}, \\beta _{c} \\right] \\, \\text{,} $ and hence, for every $z \\in \\mathbb {R}$ , the point $z$ has bounded forward orbit under $f_{c}$ if and only if $z \\in \\left[ -\\beta _{c}, \\beta _{c} \\right]$ .", "Remark 10 Note that, for every $c \\in \\mathbb {C}$ , we have $\\rho _{c} = \\beta _{-\\vert c \\vert }$ .", "Let us examine more thoroughly the dynamics of the map $f_{c}$ when $c \\in (-\\infty , -2]$ .", "It is related to the dynamics of the shift map in the space of sign sequences.", "Let $\\sigma \\colon \\lbrace -1, 1 \\rbrace ^{\\mathbb 
{Z}_{\\ge 0}} \\rightarrow \\lbrace -1, 1 \\rbrace ^{\\mathbb {Z}_{\\ge 0}}$ denote the shift map, which sends any sequence $\\varepsilon = \\left( \\epsilon _{n} \\right)_{n \\ge 0}$ of $\\pm 1$ to the sequence $\\left( \\epsilon _{n +1} \\right)_{n \\ge 0}$ .", "A sign sequence $\\varepsilon $ is said to be periodic with period $p \\ge 1$ if $\\sigma ^{\\circ p}(\\varepsilon ) = \\varepsilon $ and $p$ is the least such integer.", "The sequence $\\varepsilon $ is said to be preperiodic with preperiod $k \\ge 0$ if the sequence $\\sigma ^{\\circ k}(\\varepsilon )$ is periodic and $k$ is minimal with this property.", "For $k \\ge 0$ and $p \\ge 1$ , define $ \\Sigma ^{k, p} = \\left\\lbrace \\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{\\mathbb {Z}_{\\ge 0}} : \\sigma ^{\\circ (k +p)}(\\varepsilon ) = \\sigma ^{\\circ k}(\\varepsilon ) \\right\\rbrace $ to be the set of all preperiodic sign sequences with preperiod less than or equal to $k$ and period dividing $p$ , and define $ \\Sigma = \\bigcup _{k \\ge 0, \\, p \\ge 1} \\Sigma ^{k, p} $ to be the collection of all preperiodic sign sequences.", "For all $k \\ge 0$ and $p \\ge 1$ , the set $\\Sigma ^{k, p}$ contains exactly $2^{k +p}$ elements – each of them being completely determined by the choice of its first $k +p$ terms – and is invariant under the shift map.", "Moreover, the set $\\Sigma $ is completely invariant under the shift map – that is, any sign sequence $\\varepsilon $ is preperiodic if and only if the sequence $\\sigma (\\varepsilon )$ is preperiodic.", "Theorem 11 For every $c \\in (-\\infty , -2]$ , there exists a unique map $ \\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R} $ that makes the diagram below commute and satisfies $\\epsilon _{0} \\psi _{c}(\\varepsilon ) \\ge 0$ for all $\\varepsilon \\in \\Sigma $ .", "[Commutative diagram: $\\sigma \\colon \\Sigma \\rightarrow \\Sigma $ on top, $f_{c} \\colon \\mathbb {R} \\rightarrow \\mathbb {R}$ on the bottom and $\\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R}$ on both sides, so that $f_{c} \\circ \\psi _{c} = \\psi _{c} \\circ \\sigma $ .]", "Furthermore, for every $\\varepsilon \\in \\Sigma $ , we have $ \\epsilon _{0} \\psi _{c}(\\varepsilon ) \\in \\left[ \\sqrt{-\\beta _{c} -c}, \\beta _{c} \\right] \\, \\text{,} $ for all $c \\in (-\\infty , -2]$ , and the map $\\zeta _{\\varepsilon } \\colon (-\\infty , -2] \\rightarrow \\mathbb {R}$ defined by $ \\zeta _{\\varepsilon }(c) = \\psi _{c}(\\varepsilon ) $ is continuous.", "Before proving Theorem REF , observe that $c \\le -\\beta _{c}$ for all $c \\in (-\\infty , -2]$ , with equality if and only if $c = -2$ .", "Consequently, for $c \\in (-\\infty , -2]$ and $\\epsilon = \\pm 1$ , the partial inverse $g_{c}^{\\epsilon } \\colon [c, +\\infty ) \\rightarrow \\mathbb {R}$ of $f_{c}$ given by $ g_{c}^{\\epsilon }(z) = \\epsilon \\sqrt{z -c} $ is well defined on $\\left[ -\\beta _{c}, \\beta _{c} \\right]$ , and we have $ g_{c}^{\\epsilon }\\left( \\left[ -\\beta _{c}, \\beta _{c} \\right] \\right) = \\left[ \\epsilon \\sqrt{-\\beta _{c} -c}, \\epsilon \\beta _{c} \\right] \\subset \\left[ -\\beta _{c}, \\beta _{c} \\right] \\, \\text{.}", "$ Lemma 12 For all $c \\in (-\\infty , -2]$ and all $\\varepsilon = \\left( \\epsilon _{0}, \\cdots , \\epsilon _{p -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{p}$ , with $p \\ge 1$ , the map $g_{c}^{\\varepsilon } \\colon \\left[ -\\beta _{c}, \\beta _{c} \\right] \\rightarrow \\left[ -\\beta _{c}, \\beta _{c} \\right]$ defined by $ g_{c}^{\\varepsilon }(z) = g_{c}^{\\epsilon _{0}} \\circ \\cdots \\circ g_{c}^{\\epsilon _{p -1}}(z) $ has a unique fixed point $\\mathfrak {z}_{\\varepsilon }(c)$ .", "Moreover, for every finite sequence $\\varepsilon $ of $\\pm 1$ , the map $c \\mapsto \\mathfrak {z}_{\\varepsilon }(c)$ is continuous.", "Claim 13 If $c \\in (-\\infty , -2]$ , $\\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{p}$ , with $p \\ge 1$ , and $\\mathfrak {z}$ is a fixed point of 
$g_{c}^{\\varepsilon }$ , then $\\mathfrak {z} \\in \\mathcal {X}_{c}^{0, p}$ and $\\epsilon _{j} f_{c}^{\\circ j}(\\mathfrak {z}) > 0$ for all $j \\in \\lbrace 0, \\cdots , p -1 \\rbrace $ .", "We have $f_{c}^{\\circ p}(\\mathfrak {z}) = \\mathfrak {z}$ and the set $\\mathcal {X}_{c}^{0, p}$ is invariant under $f_{c}$ .", "Therefore, for all $j \\in \\lbrace 0, \\cdots , p -1 \\rbrace $ , we have $ f_{c}^{\\circ j}(\\mathfrak {z}) = g_{c}^{\\epsilon _{j}} \\circ \\cdots \\circ g_{c}^{\\epsilon _{p -1}}(\\mathfrak {z}) \\in g_{c}^{\\epsilon _{j}}\\left( \\left[ -\\beta _{c}, \\beta _{c} \\right] \\right) \\cap \\mathcal {X}_{c}^{0, p} \\, \\text{,} $ which yields $ \\epsilon _{j} f_{c}^{\\circ j}(\\mathfrak {z}) \\in \\left( \\sqrt{-\\beta _{c} -c}, \\beta _{c} \\right] \\subset \\mathbb {R}_{> 0} $ since $\\epsilon _{j} \\sqrt{-\\beta _{c} -c}$ is preperiodic for $f_{c}$ with preperiod 2 and period 1.", "Fix $c \\in (-\\infty , -2]$ and $p \\ge 1$ .", "For every $\\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{p}$ , the map $g_{c}^{\\varepsilon }$ has a fixed point $\\mathfrak {z}_{\\varepsilon }(c)$ by the intermediate value theorem.", "Now, note that $\\mathfrak {z}_{\\varepsilon }(c)$ is not a fixed point of $g_{c}^{\\varepsilon ^{\\prime }}$ whenever $\\varepsilon \\ne \\varepsilon ^{\\prime } \\in \\lbrace -1, 1 \\rbrace ^{p}$ by Claim REF .", "Therefore, the points $\\mathfrak {z}_{\\varepsilon }(c)$ , with $\\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{p}$ , are pairwise distinct, and, since $\\mathcal {X}_{c}^{0, p}$ contains at most $2^{p}$ elements, it follows that $ \\mathcal {X}_{c}^{0, p} = \\left\\lbrace \\mathfrak {z}_{\\varepsilon }(c) : \\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{p} \\right\\rbrace \\, \\text{.}", "$ Thus, for every $\\varepsilon \\in \\lbrace -1, 1 \\rbrace ^{p}$ , $\\mathfrak {z}_{\\varepsilon }(c)$ is the unique fixed point of the map $g_{c}^{\\varepsilon }$ .", "Now, fix $p \\ge 1$ , $\\varepsilon = \\left( \\epsilon _{0}, \\cdots , 
\\epsilon _{p -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{p}$ and $c \\in (-\\infty , -2]$ .", "It remains to verify that the map $c^{\\prime } \\mapsto \\mathfrak {z}_{\\varepsilon }\\left( c^{\\prime } \\right)$ is continuous at $c$ .", "For each $c^{\\prime } \\in (-\\infty , -2]$ , choose $\\varepsilon _{c^{\\prime }} \\in \\lbrace -1, 1 \\rbrace ^{p}$ such that $\\left|\\mathfrak {z}_{\\varepsilon }(c) -\\mathfrak {z}_{\\varepsilon _{c^{\\prime }}}\\left( c^{\\prime } \\right) \\right|$ is minimal.", "Then we have $ \\left|\\mathfrak {z}_{\\varepsilon }(c) -\\mathfrak {z}_{\\varepsilon _{c^{\\prime }}}\\left( c^{\\prime } \\right) \\right|\\le \\left( \\prod _{\\varepsilon ^{\\prime } \\in \\lbrace -1, 1 \\rbrace ^{p}} \\left|\\mathfrak {z}_{\\varepsilon }(c) -\\mathfrak {z}_{\\varepsilon ^{\\prime }}\\left( c^{\\prime } \\right) \\right|\\right)^{\\frac{1}{2^{p}}} = \\left|F_{p}\\left( c^{\\prime }, \\mathfrak {z}_{\\varepsilon }(c) \\right) -\\mathfrak {z}_{\\varepsilon }(c) \\right|^{\\frac{1}{2^{p}}} $ for all $c^{\\prime } \\in (-\\infty , -2]$ , and so $\\mathfrak {z}_{\\varepsilon _{c^{\\prime }}}\\left( c^{\\prime } \\right)$ tends to $\\mathfrak {z}_{\\varepsilon }(c)$ as $c^{\\prime }$ approaches $c$ .", "By Claim REF , it follows that, whenever $c^{\\prime }$ is close enough to $c$ , we have $\\epsilon _{j} f_{c^{\\prime }}^{\\circ j}\\left( \\mathfrak {z}_{\\varepsilon _{c^{\\prime }}}\\left( c^{\\prime } \\right) \\right) > 0$ for all $j \\in \\lbrace 0, \\cdots , p -1 \\rbrace $ , which yields $\\varepsilon _{c^{\\prime }} = \\varepsilon $ .", "Thus, the limit of $\\mathfrak {z}_{\\varepsilon }\\left( c^{\\prime } \\right)$ as $c^{\\prime }$ approaches $c$ is $\\mathfrak {z}_{\\varepsilon }(c)$ , and the lemma is proved.", "We may now deduce Theorem REF from Lemma REF .", "Fix $c \\in (-\\infty , -2]$ .", "Assume that $\\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R}$ is a map that satisfies $f_{c} \\circ \\psi _{c} = \\psi _{c} \\circ 
\\sigma $ and $\\epsilon _{0} \\psi _{c}(\\varepsilon ) \\ge 0$ for all $\\varepsilon \\in \\Sigma $ .", "Then, for all $\\varepsilon \\in \\Sigma $ and all $n \\ge 0$ , we have $ \\psi _{c}(\\varepsilon ) = g_{c}^{\\epsilon _{0}} \\circ \\cdots \\circ g_{c}^{\\epsilon _{n}}\\left( \\psi _{c}\\left( \\sigma ^{\\circ (n +1)}(\\varepsilon ) \\right) \\right) \\, \\text{.}", "$ It follows that, if $\\varepsilon $ is a periodic sign sequence with period $p \\ge 1$ , then $\\psi _{c}(\\varepsilon )$ is a fixed point of the map $g_{c}^{\\varepsilon _{p}}$ , where $\\varepsilon _{p} = \\left( \\epsilon _{0}, \\cdots , \\epsilon _{p -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{p}$ , and hence $\\psi _{c}(\\varepsilon ) = \\mathfrak {z}_{\\varepsilon _{p}}(c)$ .", "Therefore, for every $\\varepsilon \\in \\Sigma $ with preperiod $k \\ge 0$ and period $p \\ge 1$ , we have $\\psi _{c}(\\varepsilon ) = g_{c}^{\\varepsilon _{pp}}\\left( \\mathfrak {z}_{\\varepsilon _{p}}(c) \\right)$ , where $\\varepsilon _{pp} = \\left( \\epsilon _{0}, \\cdots , \\epsilon _{k -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{k}$ and $\\varepsilon _{p} = \\left( \\epsilon _{k}, \\cdots , \\epsilon _{k +p -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{p}$ , adopting the convention that $g_{c}^{\\varnothing }$ denotes the identity map of $\\left[ -\\beta _{c}, \\beta _{c} \\right]$ .", "In particular, there is at most one map $\\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R}$ that satisfies the conditions above.", "For $\\varepsilon = \\left( \\epsilon _{n} \\right)_{n \\ge 0}$ a preperiodic sign sequence with preperiod $k \\ge 0$ and period $p \\ge 1$ , define $\\varepsilon _{pp} = \\left( \\epsilon _{0}, \\cdots , \\epsilon _{k -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{k}$ , $\\varepsilon _{p} = \\left( \\epsilon _{k}, \\cdots , \\epsilon _{k +p -1} \\right) \\in \\lbrace -1, 1 \\rbrace ^{p}$ and $\\psi _{c}(\\varepsilon ) = g_{c}^{\\varepsilon _{pp}}\\left( \\mathfrak {z}_{\\varepsilon _{p}}(c) 
\\right)$ .", "If $\\varepsilon $ is a periodic sign sequence with period $p \\ge 1$ , then $f_{c} \\circ \\psi _{c}(\\varepsilon )$ is a fixed point of the map $g_{c}^{\\sigma (\\varepsilon )_{p}}$ since $\\sigma (\\varepsilon )_{p} = \\left( \\epsilon _{1}, \\cdots , \\epsilon _{p -1}, \\epsilon _{0} \\right)$ , and hence $f_{c} \\circ \\psi _{c}(\\varepsilon ) = \\psi _{c} \\circ \\sigma (\\varepsilon )$ .", "Similarly, if $\\varepsilon \\in \\Sigma $ has preperiod $k \\ge 1$ and period $p \\ge 1$ , then $f_{c} \\circ \\psi _{c}(\\varepsilon ) = \\psi _{c} \\circ \\sigma (\\varepsilon )$ since $\\sigma (\\varepsilon )_{pp} = \\left( \\epsilon _{1}, \\cdots , \\epsilon _{k -1} \\right)$ and $\\sigma (\\varepsilon )_{p} = \\varepsilon _{p}$ .", "Moreover, for all $\\varepsilon \\in \\Sigma $ , we have $\\psi _{c}(\\varepsilon ) \\in g_{c}^{\\epsilon _{0}}\\left( \\left[ -\\beta _{c}, \\beta _{c} \\right] \\right)$ , which yields $ \\epsilon _{0} \\psi _{c}(\\varepsilon ) \\in \\left[ \\sqrt{-\\beta _{c} -c}, \\beta _{c} \\right] \\subset \\mathbb {R}_{\\ge 0} \\, \\text{.}", "$ Thus, the map $\\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R}$ so defined has the required properties.", "Furthermore, for every $\\varepsilon \\in \\Sigma $ , the map $\\zeta _{\\varepsilon } \\colon c \\mapsto \\psi _{c}(\\varepsilon )$ is clearly continuous.", "Remark 14 Observe that, if $c \\in (-\\infty , -2]$ and $\\varepsilon , \\varepsilon ^{\\prime } \\in \\Sigma $ satisfy $\\epsilon _{0} = -\\epsilon _{0}^{\\prime }$ and $\\sigma (\\varepsilon ) = \\sigma \\left( \\varepsilon ^{\\prime } \\right)$ , then $\\psi _{c}(\\varepsilon ) = -\\psi _{c}\\left( \\varepsilon ^{\\prime } \\right)$ .", "Note that the proof of Theorem REF provides explicit formulas for the maps $\\zeta _{\\varepsilon }$ with $\\varepsilon \\in \\Sigma ^{k, 1}$ and $k \\ge 0$ .", "Example 15 Suppose that $\\epsilon = \\pm 1$ .", "Then for $\\varepsilon \\in \\Sigma ^{1, 1}$ given by $\\epsilon _{0} = 
\\epsilon $ and $\\epsilon _{1} = -1$ , we have $ \\zeta _{\\varepsilon } \\colon c \\mapsto \\psi _{c}(\\varepsilon ) = -\\epsilon \\alpha _{c} \\, \\text{;} $ for $\\varepsilon \\in \\Sigma ^{1, 1}$ given by $\\epsilon _{0} = \\epsilon $ and $\\epsilon _{1} = 1$ , we have $ \\zeta _{\\varepsilon } \\colon c \\mapsto \\psi _{c}(\\varepsilon ) = \\epsilon \\beta _{c} \\, \\text{;} $ for $\\varepsilon \\in \\Sigma ^{2, 1}$ given by $\\epsilon _{0} = \\epsilon $ , $\\epsilon _{1} = 1$ and $\\epsilon _{2} = -1$ , we have $ \\zeta _{\\varepsilon } \\colon c \\mapsto \\psi _{c}(\\varepsilon ) = \\epsilon \\sqrt{-\\alpha _{c} -c} \\, \\text{;} $ for $\\varepsilon \\in \\Sigma ^{2, 1}$ given by $\\epsilon _{0} = \\epsilon $ , $\\epsilon _{1} = -1$ and $\\epsilon _{2} = 1$ , we have $ \\zeta _{\\varepsilon } \\colon c \\mapsto \\psi _{c}(\\varepsilon ) = \\epsilon \\sqrt{-\\beta _{c} -c} \\, \\text{.}", "$ Proposition 16 Assume that $c \\in (-\\infty , -2]$ .", "Then we have $ \\mathcal {X}_{c}^{k, p} = \\psi _{c}\\left( \\Sigma ^{k, p} \\right) \\subset \\left[ -\\beta _{c}, \\beta _{c} \\right] $ for all $k \\ge 0$ and $p \\ge 1$ (see Figure REF ).", "Furthermore, if $c \\in (-\\infty , -2)$ , then the map $\\psi _{c} \\colon \\Sigma \\rightarrow \\mathbb {R}$ is injective.", "Figure: Graphs of the maps $z \\mapsto F_{n}(c, z)$ , with $n \\in \\lbrace 0, \\cdots , 3 \\rbrace $ , when $c \\in (-\\infty , -2]$ . For all $n \\ge 0$ , we have $f_{c}^{\\circ n} \\circ \\psi _{c} = \\psi _{c} \\circ \\sigma ^{\\circ n}$ .", "Consequently, $\\psi _{c}\\left( \\Sigma ^{k, p} \\right) \\subset \\mathcal {X}_{c}^{k, p}$ for all $k \\ge 0$ and $p \\ge 1$ .", "Now, suppose that $c \\in (-\\infty , -2)$ .", "Then, for all $\\varepsilon \\in \\Sigma $ and all $n \\ge 0$ , we have $ \\epsilon _{n} f_{c}^{\\circ n}\\left( \\psi _{c}(\\varepsilon ) \\right) \\in \\left[ \\sqrt{-\\beta _{c} -c}, \\beta _{c} \\right] \\subset \\mathbb {R}_{> 0} \\, \\text{.}", "$ 
Therefore, the map $\\psi _{c}$ is injective, and, since $\\mathcal {X}_{c}^{k, p}$ contains at most $2^{k +p}$ elements, it follows that $\\psi _{c}\\left( \\Sigma ^{k, p} \\right) = \\mathcal {X}_{c}^{k, p}$ , for all $k \\ge 0$ and $p \\ge 1$ .", "It remains to prove that $\\mathcal {X}_{-2}^{k, p} \\subset \\psi _{-2}\\left( \\Sigma ^{k, p} \\right)$ for all $k \\ge 0$ and $p \\ge 1$ .", "Fix $k \\ge 0$ and $p \\ge 1$ , and suppose that $z \\in \\mathcal {X}_{-2}^{k, p}$ .", "Then, for all $c \\in (-\\infty , -2)$ , we have $ \\min _{\\varepsilon \\in \\Sigma ^{k, p}} \\left|z -\\psi _{c}(\\varepsilon ) \\right|\\le \\left( \\prod _{\\varepsilon \\in \\Sigma ^{k, p}} \\left|z -\\psi _{c}(\\varepsilon ) \\right|\\right)^{\\frac{1}{2^{k +p}}} = \\left|F_{k +p}(c, z) -F_{k}(c, z) \\right|^{\\frac{1}{2^{k +p}}} \\, \\text{.}", "$ As the maps $\\zeta _{\\varepsilon }$ , with $\\varepsilon \\in \\Sigma ^{k, p}$ , are continuous at $-2$ , it follows that $z \\in \\psi _{-2}\\left( \\Sigma ^{k, p} \\right)$ .", "Thus, the proposition is proved.", "Remark 17 Applying Montel's theorem, it follows from Proposition REF that, for every $c \\in (-\\infty , -2]$ , the filled-in Julia set of $f_{c}$  – that is, the set of points $z \\in \\mathbb {C}$ that have bounded forward orbit under $f_{c}$  – is also contained in $\\left[ -\\beta _{c}, \\beta _{c} \\right]$ .", "Note that the map $\\psi _{-2}$ is not injective.", "More precisely, we have the following: Proposition 18 For all $\\varepsilon \\ne \\varepsilon ^{\\prime } \\in \\Sigma $ , $\\psi _{-2}(\\varepsilon ) = \\psi _{-2}\\left( \\varepsilon ^{\\prime } \\right)$ if and only if there exists an integer $k \\ge 2$ such that $\\varepsilon , \\varepsilon ^{\\prime } \\in \\Sigma ^{k, 1}$ , $\\epsilon _{j} = \\epsilon _{j}^{\\prime }$ for all $j \\in \\lbrace 0, \\cdots , k -3 \\rbrace $ , $\\epsilon _{k -2} = -\\epsilon _{k -2}^{\\prime }$ , $\\epsilon _{k -1} = \\epsilon _{k -1}^{\\prime } = -1$ and $\\epsilon _{k} = 
\\epsilon _{k}^{\\prime } = 1$ .", "Suppose that $\\varepsilon \\ne \\varepsilon ^{\\prime } \\in \\Sigma $ satisfy $\\psi _{-2}(\\varepsilon ) = \\psi _{-2}\\left( \\varepsilon ^{\\prime } \\right)$ .", "Then, for all $n \\ge 0$ , we have $ \\epsilon _{n} f_{-2}^{\\circ n}\\left( \\psi _{-2}(\\varepsilon ) \\right) \\ge 0 \\quad \\text{and} \\quad \\epsilon _{n}^{\\prime } f_{-2}^{\\circ n}\\left( \\psi _{-2}(\\varepsilon ) \\right) \\ge 0 \\, \\text{.}", "$ Since $\\varepsilon \\ne \\varepsilon ^{\\prime }$ , it follows that there is an integer $k \\ge 0$ , which we may assume minimal, such that $f_{-2}^{\\circ k}\\left( \\psi _{-2}(\\varepsilon ) \\right) = 0$ .", "For all $j \\in \\lbrace 0, \\cdots , k -1 \\rbrace $ , the inequalities above are strict, and hence $\\epsilon _{j} = \\epsilon _{j}^{\\prime }$ .", "Moreover, we have $f_{-2}^{\\circ (k +1)}\\left( \\psi _{-2}(\\varepsilon ) \\right) = -2$ and $f_{-2}^{\\circ n}\\left( \\psi _{-2}(\\varepsilon ) \\right) = 2$ for all $n \\ge k +2$ , which yields $\\epsilon _{k +1} = \\epsilon _{k +1}^{\\prime } = -1$ and $\\epsilon _{n} = \\epsilon _{n}^{\\prime } = 1$ for all $n \\ge k +2$ .", "Thus, the sign sequences $\\varepsilon $ and $\\varepsilon ^{\\prime }$ have the desired form.", "Conversely, observe that, for $\\varepsilon \\in \\Sigma ^{2, 1}$ with $\\epsilon _{1} = -1$ and $\\epsilon _{2} = 1$ , we have $ \\psi _{-2}(\\varepsilon ) = \\epsilon _{0} \\sqrt{-\\beta _{-2} -(-2)} = 0 \\, \\text{.}", "$ Therefore, if $k \\ge 2$ and $\\varepsilon \\in \\Sigma ^{k, 1}$ satisfies $\\epsilon _{k -1} = -1$ and $\\epsilon _{k} = 1$ , then $ \\psi _{-2}(\\varepsilon ) = g_{-2}^{\\left( \\epsilon _{0}, \\cdots , \\epsilon _{k -3} \\right)}\\left( \\psi _{-2}\\left( \\sigma ^{\\circ (k -2)}(\\varepsilon ) \\right) \\right) = g_{-2}^{\\left( \\epsilon _{0}, \\cdots , \\epsilon _{k -3} \\right)}(0) $ does not depend on $\\epsilon _{k -2}$ .", "This completes the proof of the proposition.", "Remark 19 It follows from 
Proposition REF and Proposition REF that, for all $k \\ge 0$ and $p \\ge 1$ , the set $\\mathcal {X}_{-2}^{k, p}$ contains exactly $2^{p}$ elements if $k = 0$ and $2^{k +p} -2^{k -1} +1$ elements if $k \\ge 1$ .", "Remark 20 Note that we can actually describe the map $\\psi _{-2} \\colon \\Sigma \\rightarrow \\mathbb {R}$ explicitly.", "For $\\varepsilon \\in \\Sigma $ , define the sequence $\\left( \\delta _{n}(\\varepsilon ) \\right)_{n \\ge 0} \\in \\lbrace 0, 1 \\rbrace ^{\\mathbb {Z}_{\\ge 0}}$ by $ \\delta _{n}(\\varepsilon ) = {\\left\\lbrace \\begin{array}{ll} \\delta _{n -1}(\\varepsilon ) & \\text{if } \\epsilon _{n} = 1\\\\ 1 -\\delta _{n -1}(\\varepsilon ) & \\text{if } \\epsilon _{n} = -1 \\end{array}\\right.}", "\\, \\text{,} $ where $\\delta _{-1}(\\varepsilon ) = 0$ by convention.", "Then the map $\\psi _{-2} \\colon \\Sigma \\rightarrow \\mathbb {R}$ is given by $ \\psi _{-2}(\\varepsilon ) = 2 \\cos \\left( \\pi \\sum _{n = 0}^{+\\infty } \\frac{\\delta _{n}(\\varepsilon )}{2^{n +1}} \\right) \\, \\text{.}", "$" ], [ "Back to the parameter space", "We shall now exploit the statements given in Section  to get results concerning the parameter space.", "Remark 21 By definition, for every point $a \\in \\mathbb {C}$ and every parameter $c \\in \\mathbb {C}$ , $c \\in \\mathcal {S}_{a}$ if and only if $a \\in \\mathcal {X}_{c}$ and, for all $k \\ge 0$ and $p \\ge 1$ , $c \\in \\mathcal {S}_{a}^{k, p}$ if and only if $a \\in \\mathcal {X}_{c}^{k, p}$ .", "Proposition 22 For every $a \\in \\mathbb {C}$ , we have $ \\mathcal {S}_{a} \\subset \\left\\lbrace c \\in \\mathbb {C} : \\vert c \\vert \\le R_{a} \\right\\rbrace \\, \\text{,} $ where $R_{a} = \\vert a \\vert ^{2} +\\sqrt{\\vert a \\vert ^{2} +1} +1$ .", "Suppose that $c \\in \\mathcal {S}_{a}$ .", "Then, by Proposition REF , we have $ \\vert c \\vert -\\vert a \\vert ^{2} \\le \\left|f_{c}(a) \\right|\\le \\rho _{c} \\, \\text{,} $ and hence $\\varphi \\left( \\vert c \\vert \\right) \\le \\vert a 
\\vert ^{2}$ , where $\\varphi \\colon \\mathbb {R}_{\\ge 0} \\rightarrow \\mathbb {R}$ is given by $ \\varphi (x) = x -\\frac{1 +\\sqrt{1 +4 x}}{2} \\, \\text{.}", "$ The map $\\varphi $ is strictly increasing and satisfies $\\varphi \\left( R_{a} \\right) = \\vert a \\vert ^{2}$ .", "Thus, the proposition is proved.", "Now, let us give a more extensive description of $\\mathcal {S}_{a}$ when $a \\in (-\\infty , -2] \\cup [2, +\\infty )$ .", "Given $\\epsilon = \\pm 1$ , let $\\Sigma _{\\epsilon }^{k, p}$  – with $k \\ge 0$ and $p \\ge 1$  – be the set defined by $ \\Sigma _{\\epsilon }^{k, p} = \\left\\lbrace \\varepsilon = \\left( \\epsilon _{n} \\right)_{n \\ge 0} \\in \\Sigma ^{k, p} : \\epsilon _{0} = \\epsilon \\right\\rbrace \\, \\text{,} $ and let $\\Sigma _{\\epsilon }$ be the set defined by $ \\Sigma _{\\epsilon } = \\bigcup _{k \\ge 0, \\, p \\ge 1} \\Sigma _{\\epsilon }^{k, p} = \\left\\lbrace \\varepsilon \\in \\Sigma : \\epsilon _{0} = \\epsilon \\right\\rbrace \\, \\text{.}", "$ For all $k \\ge 0$ and $p \\ge 1$ , the set $\\Sigma _{\\epsilon }^{k, p}$ contains exactly $2^{k +p -1}$ elements – each of them being completely determined by the choice of its terms with index in $\\lbrace 1, \\cdots , k +p -1 \\rbrace $ .", "Suppose that $a \\in (-\\infty , -2] \\cup [2, +\\infty )$ .", "Then for $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}^{2, 1}$ given by $\\epsilon _{1} = -1$ and $\\epsilon _{2} = 1$ , the map $ \\operatorname{sgn}(a) \\zeta _{\\varepsilon } \\colon c \\mapsto \\sqrt{-\\beta _{c} -c} $ is strictly decreasing on $(-\\infty , -2]$ and we have $\\zeta _{\\varepsilon }\\left( c_{a}^{-} \\right) = a$ , where $c_{a}^{-}$ is the parameter defined by $ c_{a}^{-} = -a^{2} -\\sqrt{a^{2} +1} -1 \\in \\mathcal {S}_{a}^{2, 1} \\, \\text{;} $ for $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}^{1, 1}$ given by $\\epsilon _{1} = 1$ , the map $ \\operatorname{sgn}(a) \\zeta _{\\varepsilon } \\colon c \\mapsto \\beta _{c} $ is strictly 
decreasing on $(-\\infty , -2]$ and we have $\\zeta _{\\varepsilon }\\left( c_{a}^{+} \\right) = a$ , where $c_{a}^{+}$ is the parameter defined by $ c_{a}^{+} = -a^{2} +\\vert a \\vert \\in \\mathcal {S}_{a}^{1, 1} \\, \\text{.}", "$ Remark 23 Note that, for every $a \\in \\mathbb {C}$ with $\\vert a \\vert \\ge 2$ , we have $R_{a} = -c_{\\vert a \\vert }^{-}$ .", "Theorem 24 Assume that $a \\in (-\\infty , -2] \\cup [2, +\\infty )$ .", "Then there is a unique map $ \\gamma _{a} \\colon \\Sigma _{\\operatorname{sgn}(a)} \\rightarrow (-\\infty , -2] $ that satisfies $\\zeta _{\\varepsilon }\\left( \\gamma _{a}(\\varepsilon ) \\right) = a$ for all $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}$ (see Figure REF ).", "Furthermore, we have $ \\mathcal {S}_{a}^{k, p} = \\gamma _{a}\\left( \\Sigma _{\\operatorname{sgn}(a)}^{k, p} \\right) \\subset \\left[ c_{a}^{-}, c_{a}^{+} \\right] \\, \\text{,} $ for all $k \\ge 0$ and $p \\ge 1$ (see Figure REF ), and the map $\\gamma _{a}$ is injective.", "Figure: Graphs of the maps $\\zeta _{\\varepsilon }$ , with $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}^{2, 1}$ , when $a \\in [2, +\\infty )$ . Figure: Graphs of the maps $c \\mapsto F_{n}(c, a)$ , with $n \\in \\lbrace 0, \\cdots , 3 \\rbrace $ , when $a \\in [2, +\\infty )$ . Claim 25 If $a \\in (-\\infty , -2] \\cup [2, +\\infty )$ and $\\gamma \\in (-\\infty , -2]$ , then $a$ has at most one preimage under $\\psi _{\\gamma }$ .", "If $\\gamma \\in (-\\infty , -2)$ , then the map $\\psi _{\\gamma }$ is injective.", "If $\\gamma = -2$ and $\\varepsilon \\in \\Sigma $ satisfies $\\psi _{\\gamma }(\\varepsilon ) = a$ , then we have $ 2 \\le \\vert a \\vert = \\left|\\psi _{-2}(\\varepsilon ) \\right|\\le \\beta _{-2} = 2 \\, \\text{,} $ so $\\psi _{-2}(\\varepsilon ) = \\operatorname{sgn}(a) \\beta _{-2}$ , and, by Proposition REF , it follows that $\\varepsilon $ is the sign sequence in $\\Sigma _{\\operatorname{sgn}(a)}^{1, 
1}$ given by $\\epsilon _{1} = 1$ .", "Thus, the claim is proved.", "For every $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}$ , we have $ \\operatorname{sgn}(a) \\zeta _{\\varepsilon }\\left( c_{a}^{-} \\right) \\ge \\sqrt{-\\beta _{c_{a}^{-}} -c_{a}^{-}} = \\vert a \\vert \\quad \\text{and} \\quad \\operatorname{sgn}(a) \\zeta _{\\varepsilon }\\left( c_{a}^{+} \\right) \\le \\beta _{c_{a}^{+}} = \\vert a \\vert \\, \\text{,} $ and hence, by the intermediate value theorem, there exists $\\gamma _{a}(\\varepsilon ) \\in \\left[ c_{a}^{-}, c_{a}^{+} \\right]$ such that $\\zeta _{\\varepsilon }\\left( \\gamma _{a}(\\varepsilon ) \\right) = a$ .", "Now, note that, if $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}^{k, p}$  – with $k \\ge 0$ and $p \\ge 1$  – and $\\gamma \\in (-\\infty , -2]$ satisfy $\\zeta _{\\varepsilon }(\\gamma ) = a$ , then $\\varepsilon $ is a preimage of $a$ under $\\psi _{\\gamma }$ , and in particular $\\gamma \\in \\mathcal {S}_{a}^{k, p}$ .", "Therefore, by Claim REF , the map $\\gamma _{a}$ so defined is injective, and, as $\\mathcal {S}_{a}^{k, p}$ contains at most $2^{k +p -1}$ elements, it follows that $\\gamma _{a}\\left( \\Sigma _{\\operatorname{sgn}(a)}^{k, p} \\right) = \\mathcal {S}_{a}^{k, p}$ , for all $k \\ge 0$ and $p \\ge 1$ .", "Thus, for every $\\varepsilon \\in \\Sigma _{\\operatorname{sgn}(a)}$ , $\\gamma _{a}(\\varepsilon )$ is the unique parameter $\\gamma \\in (-\\infty , -2]$ that satisfies $\\zeta _{\\varepsilon }(\\gamma ) = a$ .", "This completes the proof of the theorem.", "Remark 26 Applying Montel's theorem, it follows from Theorem REF that, for every $a \\in (-\\infty , -2] \\cup [2, +\\infty )$ , the set of parameters $c \\in \\mathbb {C}$ for which the point $a$ has bounded forward orbit under $f_{c}$ is also contained in the line segment $\\left[ c_{a}^{-}, c_{a}^{+} \\right]$ .", "Note that, when $a$ is an integer, the set $\\mathcal {S}_{a}$ has the following arithmetic property: Proposition 27 
For every $a \\in \\mathbb {Z}$ , the set $\\mathcal {S}_{a}$ is contained in the set of algebraic integers and is invariant under the action of $\\operatorname{Gal}\\left( \\overline{\\mathbb {Q}} / \\mathbb {Q} \\right)$ .", "For all $k \\ge 0$ and $p \\ge 1$ , the polynomial $F_{k +p}(c, a) -F_{k}(c, a)$ is monic with integer coefficients since $a \\in \\mathbb {Z}$ .", "Thus, the proposition is proved.", "We shall now prove Theorem REF , which we recall below.", "Theorem  REF Assume that $a$ and $b$ are two integers with $\\vert b \\vert > \\vert a \\vert $ .", "Then either $a = 0$ , $\\vert b \\vert = 1$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\lbrace -2, -1, 0 \\rbrace $ , or $a = 0$ , $\\vert b \\vert = 2$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\lbrace -2 \\rbrace $ , or $\\vert a \\vert \\ge 1$ , $\\vert b \\vert = \\vert a \\vert +1$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\left\\lbrace -a^{2} -\\vert a \\vert -1, -a^{2} -\\vert a \\vert \\right\\rbrace $ , or $\\vert b \\vert > \\max \\left\\lbrace 2, \\vert a \\vert +1 \\right\\rbrace $ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\varnothing $ .", "Lemma 28 Assume that $m \\in \\mathbb {Z}$ and $c$ is an algebraic integer all of whose Galois conjugates lie in the interval $(m -2, m]$ .", "Then $c = m -1$ or $c = m$ .", "Set $\\alpha = c -m +1$ .", "Then $\\alpha $ is an algebraic integer all of whose Galois conjugates $\\alpha _{1}, \\cdots , \\alpha _{d}$ lie in the interval $(-1, 1]$ .", "Therefore, we have $ \\prod _{j = 1}^{d} \\alpha _{j} \\in (-1, 1] \\cap \\mathbb {Z} = \\lbrace 0, 1 \\rbrace \\, \\text{,} $ and it follows that either $\\alpha _{j} = 0$ for some $j \\in \\lbrace 1, \\cdots , d \\rbrace $ , which yields $\\alpha = 0$ , or $\\alpha _{j} = 1$ for all $j \\in \\lbrace 1, \\cdots , d \\rbrace $ .", "Thus, either $c = m -1$ or $c = m$ .", "For a proof of the case $a = 0$ and $\\vert b \\vert = 1$ , we refer the reader to [2].", "Thus, we may assume that 
$\\vert b \\vert \\ge 2$ .", "By Proposition REF , Theorem REF and Proposition REF , the set $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b}$ is contained in the set of algebraic integers, is invariant under the action of $\\operatorname{Gal}\\left( \\overline{\\mathbb {Q}} / \\mathbb {Q} \\right)$ and satisfies $ \\mathcal {S}_{a} \\cap \\mathcal {S}_{b} \\subset \\left\\lbrace c \\in \\mathbb {C} : \\vert c \\vert \\le R_{a} \\right\\rbrace \\cap \\left[ c_{b}^{-}, c_{b}^{+} \\right] \\, \\text{.}", "$ Suppose that $a = 0$ .", "Then we have $ c_{b}^{+} = -b^{2} +\\vert b \\vert \\le -2 = -R_{a} \\, \\text{,} $ with equality if and only if $\\vert b \\vert = 2$ .", "Therefore, $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} \\subset \\lbrace -2 \\rbrace $ if $\\vert b \\vert = 2$ and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\varnothing $ otherwise.", "Conversely, observe that $-2 \\in \\mathcal {S}_{a}^{2, 1} \\cap \\mathcal {S}_{b}^{1, 1}$ when $\\vert b \\vert = 2$ .", "Now, suppose that $\\vert a \\vert \\ge 1$ .", "Then we have $ c_{b}^{+} -2 < -R_{a} = -a^{2} -\\sqrt{a^{2} +1} -1 < -a^{2} -\\vert a \\vert = c_{b}^{+} \\quad \\text{if} \\quad \\vert b \\vert = \\vert a \\vert +1 $ and $ c_{b}^{+} = -b^{2} +\\vert b \\vert < -a^{2} -\\sqrt{a^{2} +1} -1 = -R_{a} \\quad \\text{if} \\quad \\vert b \\vert \\ge \\vert a \\vert +2 \\, \\text{.}", "$ Therefore, $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} \\subset \\left\\lbrace -a^{2} -\\vert a \\vert -1, -a^{2} -\\vert a \\vert \\right\\rbrace $ if $\\vert b \\vert = \\vert a \\vert +1$ by Lemma REF and $\\mathcal {S}_{a} \\cap \\mathcal {S}_{b} = \\varnothing $ otherwise.", "Conversely, observe that $-a^{2} -\\vert a \\vert -1 \\in \\mathcal {S}_{a}^{1, 2} \\cap \\mathcal {S}_{b}^{1, 2}$ and $-a^{2} -\\vert a \\vert \\in \\mathcal {S}_{a}^{1, 1} \\cap \\mathcal {S}_{b}^{1, 1}$ when $\\vert b \\vert = \\vert a \\vert +1$ .", "Thus, the theorem is proved." ] ]
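The shared parameters in the theorem can be checked numerically: an integer orbit under $f_{c}(z) = z^{2} + c$ is bounded exactly when it eventually revisits a value, so membership in $\mathcal{S}_{a}$ reduces to cycle detection. A minimal sketch (the function name and the escape bound are our own choices, not the paper's):

```python
def orbit_is_eventually_periodic(c, a, max_steps=100):
    """Iterate f_c(z) = z^2 + c from z = a over the integers.

    An integer orbit is bounded iff it eventually repeats, so we
    detect a revisited value (or bail out once the orbit escapes).
    """
    seen = set()
    z = a
    for _ in range(max_steps):
        if z in seen:
            return True
        seen.add(z)
        z = z * z + c
        if abs(z) > 10**6:  # escaped: the orbit is unbounded
            return False
    return False

# For |b| = |a| + 1 >= 2, the theorem says the only common parameters
# are c = -a^2 - |a| and c = -a^2 - |a| - 1.
a, b = 3, 4
for c in (-a * a - abs(a), -a * a - abs(a) - 1):
    assert orbit_is_eventually_periodic(c, a)
    assert orbit_is_eventually_periodic(c, b)
assert not orbit_is_eventually_periodic(-5, 3)  # a generic c fails
```

For $a=3$, $c=-12$ gives the orbit $3 \to -3 \to -3 \to \cdots$ and $c=-13$ gives $3 \to -4 \to 3 \to \cdots$, matching the preperiodic behaviour the theorem asserts.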
1906.04514
[ [ "Intrinsic persistent spin helix in two-dimensional group-IV monochalcogenide MX (M: Sn, Ge; X: S, Se, Te) monolayer" ], [ "Abstract Energy-saving spintronics are believed to be implementable in systems hosting a persistent spin helix (PSH), since such systems support an extraordinarily long spin lifetime of carriers.", "However, achieving the PSH requires a unidirectional spin configuration in the momentum space, which is practically non-trivial due to the stringent conditions for fine-tuning the Rashba and Dresselhaus spin-orbit couplings.", "Here, we predict that the PSH can be intrinsically achieved in a two-dimensional (2D) group-IV monochalcogenide MX monolayer, a new class of noncentrosymmetric 2D materials having in-plane ferroelectricity.", "Due to the C2v point group symmetry of the MX monolayer, a unidirectional spin configuration is preserved in the out-of-plane direction, thus maintaining a PSH similar to that of the [110] Dresselhaus model in the [110]-oriented quantum well.", "Our first-principles calculations on various MX (M: Sn, Ge; X: S, Se, Te) monolayers confirm that such a spin configuration is observed, in particular, near the valence band maximum, where a sizable spin splitting and a substantially small wavelength of the spin polarization are achieved.", "Importantly, we observe a reversible out-of-plane spin orientation under opposite in-plane ferroelectric polarizations, indicating that an electrically controllable PSH for spintronic applications is plausible." 
], [ "INTRODUCTION", "Recent development of spintronics relies on a new pathway for exploiting the electron's spin in semiconductors by utilizing the effect of spin-orbit coupling (SOC) [1], [2].", "In a system lacking inversion symmetry, the SOC induces an effective magnetic field, known as a spin-orbit field (SOF), acting on the spin, so that the effective SOC Hamiltonian can be expressed as $H_{\\text{SOC}}=\\vec{\\Omega }(\\vec{k})\\cdot \\vec{\\sigma }=\\alpha (\\hat{E}\\times \\vec{k})\\cdot \\vec{\\sigma },$ where $\\vec{\\Omega }$ is the SOF vector, $\\vec{k}$ is the wave vector representing the momentum of electrons, $\\vec{\\sigma }=(\\sigma _{x}, \\sigma _{y}, \\sigma _{z})$ is the vector of Pauli matrices, and $\\alpha $ is the strength of the SOC, which is proportional to the magnitude of the local electric field $\\vec{E}$ induced by the crystal inversion asymmetry.", "Since the SOF is odd in the electron's wave vector $\\vec{k}$ , as first demonstrated by Dresselhaus [3] and Rashba [4], the SOC lifts Kramers' spin degeneracy and leads to a complex $\\vec{k}$ -dependent spin configuration of the electronic bands.", "Particular interest is driven by the possibility of manipulating this spin configuration using an external electric field to create a non-equilibrium spin polarization [5], leading to various important phenomena such as the spin Hall effect [6], the spin galvanic effect [7], and ballistic spin transport [8], thus offering a route to the realization of spintronic devices such as the spin field-effect transistor (SFET) [9].", "From a practical perspective, materials having strong Rashba SOC have generated significant interest since they allow for electrostatic manipulation of the spin states [1], [10], paving the way towards non-charge-based computing and information processing [2].", "However, a strong SOC is also known to induce the undesired effect of spin decoherence [11], which plays an adverse role in the spin lifetime.", "In a diffusive transport regime, 
impurities and defects act as scatterers that change the momentum of the electron and simultaneously randomize the spin through the momentum-dependent SOF, leading to fast spin decoherence via the Dyakonov-Perel (DP) mechanism of spin relaxation [11].", "This process induces spin dephasing and a loss of the spin signal, such that the spin lifetime is significantly reduced, thus limiting the performance of potential spintronic devices.", "A possible way to overcome this obstacle is to eliminate the problem of spin dephasing by suppressing the DP spin relaxation.", "This can be achieved, in particular, by designing a structure where the SOF orientation is enforced to be unidirectional, preserving a unidirectional spin configuration in the momentum space.", "In such a situation, electron motion together with spin precession around the unidirectional SOF leads to a spatially periodic mode of the spin polarization known as the persistent spin helix (PSH) [12], [13].", "The corresponding spin wave mode protects the electron spins from dephasing due to the $SU(2)$ spin rotation symmetry, which is robust against spin-independent scattering and renders an extremely long spin lifetime [12], [14].", "Previously, the PSH has been demonstrated in various [001]-oriented semiconductor quantum wells (QWs) [15], [16], [17], [18], [19], [20] having equal strengths of the Rashba and Dresselhaus SOC, and in [110]-oriented semiconductor QWs [21], in which the SOC is described by the [110] Dresselhaus model.", "Here, for the former, the spin configuration is enforced to be unidirectional along the in-plane [110] direction, whereas for the latter, it is oriented along the out-of-plane [001] direction.", "Similar to the [110]-oriented QW, the PSH state has recently been reported for the LaAlO$_{3}$ /SrTiO$_{3}$ interface [22], the ZnO [10-10] surface [23], the halogen-doped SnSe monolayer [24], and the WO$_{2}$ Cl$_{2}$ monolayer [25].", "Although the PSH has been widely studied in various QW systems [15], [16], 
[17], [18], [19], [20], it is practically non-trivial due to the stringent conditions for fine-tuning the Rashba and Dresselhaus SOCs.", "Therefore, it would be desirable to find a new class of materials which intrinsically supports the PSH.", "In this paper, we show that the PSH can be intrinsically achieved in a two-dimensional (2D) group-IV monochalcogenide $MX$ monolayer, a new class of noncentrosymmetric 2D materials having in-plane ferroelectricity [26], [27], [28], [29], [30].", "On the basis of density-functional theory (DFT) calculations on various $MX$ ($M$ : Sn, Ge; $X$ : S, Se, Te) monolayers, supplemented with symmetry analysis, we find that a unidirectional spin orientation is preserved in the out-of-plane direction, yielding a PSH similar to that of the [110] Dresselhaus model in the [110]-oriented QW.", "Such a spin configuration is observed, in particular, near the valence band maximum, with a sizable spin splitting and a small wavelength of the spin polarization.", "More interestingly, we observe a reversible out-of-plane spin orientation under opposite in-plane ferroelectric polarizations, suggesting that an electrically controllable PSH is achievable, which is useful for spintronic applications." 
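For the [110] Dresselhaus-type model mentioned above, in which the SOF points out of plane, both the unidirectional spin configuration and the shifted-band structure can be checked numerically. A minimal sketch in natural units (all parameter values here are illustrative, not material-specific):

```python
import numpy as np

# Pauli sigma_z; hbar and the band parameters are in natural units and
# purely illustrative.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
hbar, m_eff, alpha = 1.0, 1.0, 0.5

def H(kx, ky):
    """Linear PSH model H = E0(k) + alpha * kx * sigma_z."""
    e0 = hbar**2 * (kx**2 + ky**2) / (2.0 * m_eff)
    return e0 * np.eye(2) + alpha * kx * sz

kx, ky = 0.3, -0.2
evals, evecs = np.linalg.eigh(H(kx, ky))

# The splitting is 2*alpha*|kx|, independent of ky.
assert np.isclose(evals[1] - evals[0], 2.0 * alpha * abs(kx))

# Every eigenstate carries a purely out-of-plane spin, <S_z> = +/- hbar/2.
for i in (0, 1):
    v = evecs[:, i]
    assert np.isclose(abs((hbar / 2.0) * np.real(v.conj() @ sz @ v)),
                      hbar / 2.0)

# Shifting property E_+(k) = E_-(k + D) with D = 2 m* alpha / hbar^2.
D = 2.0 * m_eff * alpha / hbar**2
assert np.isclose(evals[1], np.linalg.eigvalsh(H(kx + D, ky))[0])
```

Because the Hamiltonian commutes with $S_z$, the eigenstates stay pinned to the out-of-plane axis for every $\vec{k}$ with $k_x \neq 0$, which is the numerical fingerprint of the unidirectional SOF.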
], [ "Computational details", "We performed first-principles calculations by using DFT within the generalized gradient approximation (GGA) [31] implemented in the OpenMX code [32].", "Here, we adopted norm-conserving pseudopotentials [33] with an energy cutoff of 350 Ry for the charge density.", "A $12\\times 12\\times 1$ $k$ -point mesh was used.", "The wave functions were expanded by a linear combination of multiple pseudoatomic orbitals generated using a confinement scheme [34], [35], where two $s$ -, two $p$ -, and two $d$ -character numerical pseudo-atomic orbitals were used.", "The SOC was included in the DFT calculations by using $j$ -dependent pseudopotentials [36].", "The spin textures in the momentum space were calculated using the spin density matrix of the spinor wave functions obtained from the DFT calculations, as we have applied recently to various 2D materials [23], [24], [37], [38], [39], [40].", "Table: Structure-related parameters and the band gap of the $MX$ monolayer.", "$a$ and $b$ (in Å) represent the lattice parameters in the $x$ - and $y$ -directions, respectively.", "$d_{1}$ and $d_{2}$ (in Å) indicate the bond lengths between the $M$ ($M$ : Sn, Ge) and $X$ ($X$ : S, Se, Te) atoms in the in-plane and out-of-plane directions, respectively.", "$E_{g}$ (in eV) represents the energy gap, where the star (*) indicates a direct band gap.", "Figure: (a) Atomic structure of the $MX$ monolayer with its symmetry operations.", "Black and green balls represent the $M$ ($M$ : Sn, Ge) and $X$ ($X$ : S, Se, Te) atoms, respectively.", "The unit cell of the crystal is indicated by red lines and characterized by the $a$ and $b$ lattice parameters in the $x$ and $y$ directions.", "$d_{1}$ and $d_{2}$ represent the bond lengths between the $M$ ($M$ : Sn, Ge) and $X$ ($X$ : S, Se, Te) atoms in the in-plane and out-of-plane directions, respectively.", "(b) First Brillouin zone of the $MX$ monolayer with the high-symmetry $\\vec{k}$ points ($\\Gamma $ , Y, M, X) shown.", "(c) 
Spin-split bands induced by the SOC and the $C_{2v}$ point group symmetry and (d) the corresponding Fermi contours in the momentum space are schematically shown.", "Here, the Fermi contours are characterized by two Fermi loops shifted by the wave vector $\\vec{D}$ , exhibiting a unidirectional spin configuration in the out-of-plane direction.", "The red and blue lines (or arrows) represent positive and negative spins, respectively, in the out-of-plane direction.", "In our DFT calculations, we considered the ferroelectric phase of the $MX$ monolayer having a black phosphorene-type structure [41], [42].", "The minimum energy pathways of the ferroelectric transitions were calculated using the nudged elastic band (NEB) method [43] based on the interatomic forces and total energies obtained from the DFT calculations.", "The ferroelectric polarization was calculated using the Berry phase approach [44], where both electronic and ionic contributions were considered.", "We used a periodic slab to model the $MX$ monolayer, where a sufficiently large vacuum layer (20 Å) is applied in order to avoid interactions between adjacent layers.", "We used an axes system where the layers sit in the $x$ -$y$ plane, while the $x$ axis is taken to be parallel to the puckering direction [Fig.", "1(a)].", "The geometries were fully relaxed until the force acting on each atom was less than 1 meV/Å.", "The optimized structure-related parameters are summarized in Table 1 and are overall in good agreement with previously reported data [45], [41], [29]." 
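The spin textures mentioned above are obtained from the spin density matrix of the spinor wave functions. That post-processing step can be sketched as follows; the helper function is our own illustration (the actual spinors come from the OpenMX calculation, which is not reproduced here):

```python
import numpy as np

hbar = 1.0  # natural units; illustrative
pauli = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def spin_expectation(spinor):
    """<S_i> = (hbar/2) Tr(rho sigma_i), rho = |psi><psi| / <psi|psi>."""
    psi = np.asarray(spinor, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    return {ax: (hbar / 2.0) * np.real(np.trace(rho @ s))
            for ax, s in pauli.items()}

# A pure spin-up spinor along z: purely out-of-plane polarization.
s_up = spin_expectation([1.0, 0.0])
assert np.isclose(s_up["z"], hbar / 2.0)
assert np.isclose(s_up["x"], 0.0) and np.isclose(s_up["y"], 0.0)

# An equal superposition instead points fully in-plane along x.
s_x = spin_expectation([1.0, 1.0])
assert np.isclose(s_x["x"], hbar / 2.0) and np.isclose(s_x["z"], 0.0)
```

Evaluating this expectation value on a $\vec{k}$-grid of band spinors yields the momentum-space spin texture plotted later in the paper.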
], [ "Symmetry-protected PSH state in $MX$ monolayer", "To predict the PSH state in the $MX$ monolayer, we first derive an effective low-energy Hamiltonian by using symmetry analysis.", "As shown in Fig.", "1(a), the crystal structure of the $MX$ monolayer is of the black phosphorene type, with a symmetry group isomorphic to the $C_{2v}^{7}$ or $Pmn2_{1}$ space group [42], [41].", "There are four symmetry operations in the crystal lattice of the $MX$ monolayer [Fig.", "1(a)]: (i) the identity operation $E$ ; (ii) the twofold screw rotation $\\bar{C}_{2y}$ (a twofold rotation around the $y$ axis, $C_{2y}$ , followed by a translation $\\tau =(a/2, b/2)$ ), where $a$ and $b$ are the lattice parameters along the $\\vec{a}$ and $\\vec{b}$ directions, respectively; (iii) the glide reflection $\\bar{M}_{xy}$ (a reflection with respect to the $xy$ plane followed by the translation $\\tau $ ); and (iv) the reflection $M_{yz}$ with respect to the $yz$ plane.", "The effective $\\vec{k}\\cdot \\vec{p}$ Hamiltonian can be constructed by taking into account all symmetry operations in the little group of the wave vector in the reciprocal space.", "Table: Transformation rules for the in-plane wave vector components ($k_{x}$ , $k_{y}$ ) and the spin Pauli matrices ($\\sigma _{x}, \\sigma _{y}, \\sigma _{z}$ ) under the considered point-group symmetry operations.", "Time-reversal symmetry, implying a reversal of both spin and momentum, is defined as $T=i\\sigma _{y}K$ , where $K$ is the complex conjugation, while the point-group operations are defined as $\\hat{C}_{2y}=i\\sigma _{y}$ , $\\hat{M}_{yz}=i\\sigma _{x}$ , and $\\hat{M}_{xy}=i\\sigma _{z}$ .", "Let $Q$ be a high-symmetry point in the first Brillouin zone (FBZ) where a pair of spin-degenerate eigenstates exists in the valence band maximum (VBM) or conduction band minimum (CBM).", "This degeneracy appears due to time-reversal symmetry $T$ , for which the condition $\\vec{Q}=-\\vec{Q}+\\vec{G}$ is satisfied, 
where $\\vec{G}$ is a 2D reciprocal-lattice vector.", "Such points are located at the center of the FBZ (the $\\Gamma $ point) or at points on the boundary of the FBZ, such as the $X$ , $Y$ , and $M$ points for a primitive rectangular lattice [Fig.", "1(b)].", "The band dispersion around the $Q$ point can be deduced by identifying all symmetry-allowed terms so that $O^{\\dagger }H(k)O=H(k)$ is obtained, where $O$ denotes all symmetry operations belonging to the little group of the $Q$ point, supplemented by time-reversal symmetry $T$ .", "For simplicity, let us assume that the little group of the wave vector $\\vec{k}$ at the $Q$ point belongs to the $C_{2v}$ point group, similar to that of the crystal in the real space.", "Therefore, the wave vector $\\vec{k}$ and spin vector $\\vec{\\sigma }$ can be transformed according to the symmetry operations $O$ in the $C_{2v}$ point group and the time-reversal symmetry $T$ .", "The corresponding transformations for $\\vec{k}$ and $\\vec{\\sigma }$ are listed in Table II.", "Collecting all terms which are invariant with respect to the symmetry operations, we obtain the following effective Hamiltonian up to third order in $k$ [13]: $\\begin{split}H & = E_{0}(k)+\\alpha k_{x}\\sigma _{z}+ (\\alpha ^{^{\\prime }}k_{y}^{2} k_{x}+\\alpha ^{\"}k_{x}^{3})\\sigma _{z}\\\\& = E_{0}(k)+\\alpha ^{(1)}k\\cos \\theta \\sigma _{z}+\\alpha ^{(3)}k\\cos (3\\theta ) \\sigma _{z},\\end{split}$ where $E_{0}(k)=\\hbar ^{2}(k_{x}^{2}+k_{y}^{2})/2m^{*}$ is the nearly-free electron/hole energy, $\\alpha ^{(1)}$ , defined as $\\alpha ^{(1)}=\\alpha +(k^{2}/4)(\\alpha ^{^{\\prime }}+3\\alpha ^{\"})$ , originates from the contribution of the $k$ -linear parameter $\\alpha $ and the correction provided by the third-order parameters ($\\alpha ^{^{\\prime }}$ and $\\alpha ^{\"}$ ), $\\alpha ^{(3)}$ corresponds to the third-order parameters through the relation $\\alpha ^{(3)}=(1/4)[\\alpha ^{^{\\prime }}-\\alpha ^{\"}]k^{2}$ , and $\\theta 
$ is the angle of the momentum $\\vec{k}$ with respect to the $x$ -axis, defined as $\\theta =\\cos ^{-1}(k_{x}/k)$ .", "Solving the eigenvalue problem involving the Hamiltonian of Eq.", "(REF ) yields the spin-split energy dispersions: $E_{\\pm }=E_{0}(k) \\pm [\\alpha ^{(1)}\\cos \\theta +\\alpha ^{(3)}\\cos (3\\theta )]k.$ These dispersions are schematically illustrated in Fig.", "1(c), showing a highly anisotropic spin splitting.", "Since the Hamiltonian of Eq.", "(REF ) is only coupled with $\\sigma _{z}$ , neglecting all the cubic terms leads to the $SU(2)$ symmetry of the Hamiltonian [13], [12], $H=E_{0}(k)+\\alpha k_{x}\\sigma _{z},$ with the energy dispersions, $E_{\\pm }=E_{0}(k) \\pm \\alpha k_{x}.$ Importantly, these dispersions have the shifting property: $E_{+}(\\vec{k})=E_{-}(\\vec{k}+\\vec{D})$ , where $\\vec{D}=2m^{*}\\alpha (1,0,0)/\\hbar ^{2}$ is the shifting wave vector.", "As a result, a constant-energy cut shows two Fermi loops whose centers are displaced from the origin by $\\mp \\vec{D}/2$ , as schematically shown in Fig.", "1(d).", "Since the $z$ component of the spin operator $S_{z}$ commutes with the Hamiltonian of Eq.", "(REF ), $[S_{z},H]=0$ , the spin operator $S_{z}$ is a conserved quantity.", "Here, the expectation value of the spin $\\left\\langle S\\right\\rangle $ only has the out-of-plane component: $(\\left\\langle S_{x}\\right\\rangle ,\\left\\langle S_{y}\\right\\rangle ,\\left\\langle S_{z}\\right\\rangle )_{\\pm } =\\pm (\\hbar /2)(0,0,1)$ at any wave vector $\\vec{k}$ except for $k_{x}=0$ , resulting in a unidirectional out-of-plane spin configuration in the momentum space [Fig.", "1(d)].", "In such a situation, a unidirectional out-of-plane SOF is achieved, implying that the electron motion accompanied by spin precession around the SOF forms a spatially periodic mode of the spin polarization, yielding a PSH similar to that of the [110] Dresselhaus model [12], as recently demonstrated for the [110]-oriented semiconductor 
QW [21].", "In the next section, we discuss our results from first-principles DFT calculations on various $MX$ ($M$ : Sn, Ge; $X$ : S, Se, Te) monolayers to confirm the PSH predicted above." ], [ "DFT analysis of $MX$ monolayer", "Figure 2 shows the electronic band structures of various $MX$ ($M$ : Sn, Ge; $X$ : S, Se, Te) monolayers calculated along the selected $\\vec{k}$ paths in the FBZ, together with the corresponding density of states (DOS) projected onto the atomic orbitals.", "Without including the SOC, it is evident that there are two equivalent extremal valleys characterizing the VBM and CBM, located at points that are not time-reversal invariant.", "Consistent with previous calculations [29], [41], [45], the $MX$ monolayers show an indirect band gap (except for the $M$ Se monolayers), where the VBM and CBM are located along the $\\Gamma $ -$Y$ and $\\Gamma $ -$X$ lines, respectively.", "Overall, the calculated band gaps [see Table I] are in good agreement with previous results at the GGA-PBE level [41], [45].", "Our calculated results for the DOS projected onto the atomic orbitals confirm that the $M$ -$s$ and $X$ -$p$ orbitals contribute dominantly to the VBM, while the CBM originates mainly from the contribution of the $M$ -$p$ and $X$ -$s$ orbitals.", "Figure: Electronic band structures of the $MX$ monolayers and the corresponding density of states projected onto the atomic orbitals for: (a) SnS, (b) SnSe, (c) SnTe, (d) GeS, (e) GeSe, and (f) GeTe.", "The black and red lines show the calculated band structures without and with the SOC, respectively.", "Turning on the SOC strongly modifies the electronic band structures of the $MX$ monolayers [Fig.", "2].", "Importantly, a sizable splitting of the bands produced by the SOC is observed at some high-symmetry $\\vec{k}$ points and along certain $\\vec{k}$ paths in the FBZ.", "This splitting is especially pronounced around the X and Y points near both the VBM and CBM.", "However, there are special high-symmetry lines and points in 
the FBZ where the splitting is zero.", "This is, in particular, the case for the $\\Gamma $ -$Y$ line, where the wave vector $\\vec{k}=(0,k_{y},0)$ is parallel to the ferroelectric polarization along the $y$ direction.", "To analyze the properties of the spin splitting, we consider the SnTe monolayer as a representative example of the $MX$ monolayer.", "Here, we focus our attention on the bands near the VBM (including spin) around the Y point due to the large spin splitting, as highlighted by the blue lines in Fig.", "3(a).", "Without the SOC, it is clearly seen from the band dispersion that a fourfold-degenerate state is visible at the $Y$ point [Fig.", "3(b)].", "Taking into account the SOC, this degeneracy splits into a pair of doublets with a splitting energy of $\\Delta E_{Y}=9.2$ meV [Fig.", "3(c)].", "Although these doublets remain for $\\vec{k}$ along the $\\Gamma $ -$Y$ line, they split into singlets when moving away along the $Y$ -$M$ line, yielding a highly anisotropic spin splitting.", "Figure: (a) Energy band dispersion of the SnTe monolayer along the $M$ -$Y$ -$\\Gamma $ lines calculated without (black lines) and with (red lines) the SOC.", "(b) Zoom-in of the energy dispersion near the VBM close to the $Y$ point along the $M$ -$Y$ and $Y$ -$\\Gamma $ lines, as highlighted by the blue lines in Fig.", "3(a).", "(c) Spin splitting properties of the bands around the $Y$ point along the $M$ -$Y$ -$M$ lines characterized by: (i) the splitting energy ($\\Delta E$ ), i.e., the energy difference between the VBM along the $Y$ -$M$ line and the energy band at the $Y$ point, and (ii) the momentum offset ($k_{0}$ ).", "To clarify the origin of the anisotropic splitting around the $Y$ point near the VBM, we discuss our system based on symmetry arguments.", "At the $Y$ point, the little group of the wave vector $\\vec{k}$ belongs to the $C_{2v}$ point group [42].", "As mentioned previously, the $C_{2v}$ point group contains the $C_{2y}$ rotation symmetry around the $y$ -axis.", "Applying the $C_{2y}$ rotation 
twice to the Bloch wave function, we have $C_{2y}^{2}\\psi _{k}=e^{ik_{y}b} \\psi _{k}$ , and thus we obtain $C_{2y}^{2}=e^{ik_{y}b}$ .", "We further define an antiunitary symmetry operator, $\\Theta =C_{2y}T$ , so that $\\Theta ^{2}=C_{2y}^{2}T^{2}=-e^{ik_{y}b}$ for a spin-half system.", "Therefore, at the $Y$ point ($k_{y}=\\pi /b$ ), we find that $\\Theta ^{2}=-1$ , and thus the Bloch states $(\\psi _{k},\\Theta \\psi _{k})$ are doubly degenerate.", "Figure: Schematic view of the energy levels around the $Y$ point near the VBM.", "The SOC splits the states into two doublets with eigenvalues $M_{yz}=\\pm 1$ , which are further split into singlets with sign-reversed expectation values of spin.", "In addition, there is also the $M_{yz}$ mirror symmetry in the $C_{2v}$ point group, which commutes with the Hamiltonian of the crystal, $[M_{yz},H]=0$ .", "By applying the $M_{yz}$ symmetry to the Bloch states, we find that $M_{yz}^{2}=-e^{-ik_{y}b}$ .", "Accordingly, the Bloch states can be labelled using the $M_{yz}$ eigenvalues, i.e., $M_{yz}{\\psi _{k}^{\\pm }}=\\pm ie^{ik_{y}b/2}{\\psi _{k}^{\\pm }}$ .", "Here, for the $Y$ point ($k_{y}=\\pi /b$ ), we find that $M_{yz}^{2}=1$ , and thus we obtain $M_{yz}\\psi _{Y}^{\\pm }=\\pm \\psi _{Y}^{\\pm }$ and $M_{yz}\\Theta \\psi _{Y}^{\\pm }=\\pm \\Theta \\psi _{Y}^{\\pm }$ .", "Therefore, there are two conjugated doublets at the $Y$ point, $(\\psi _{Y}^{+},\\Theta \\psi _{Y}^{+})$ or $(\\psi _{Y}^{-},\\Theta \\psi _{Y}^{-})$ , which are distinguished by the $M_{yz}$ eigenvalues, as schematically shown in Fig.", "4.", "These conjugated doublets are preserved along the $\\Gamma $ -$Y$ line but split into singlets when moving onto the $Y$ -$M$ line, which is protected by the $M_{yz}$ and $C_{2y}$ symmetry operations.", "As a result, a strong anisotropic splitting is achieved, which is consistent with our DFT results shown in Fig.", "3(c).", "Figure: Energy profiles of the spin textures calculated around the $Y$ point near 
the VBM for: (a) the upper and (b) the lower bands.", "The colour scales in Figs.", "5(a)-(b) indicate the energy of the bands near the VBM.", "Constant-energy contours corresponding to a cut at 1 meV below the VBM, characterized by the (c) $S_{x}$ , (d) $S_{y}$ , and (e) $S_{z}$ components of the spin distribution, are shown.", "The colour scales in Figs.", "5(c)-(e) show the modulus of the spin polarization.", "To further demonstrate the nature of the observed anisotropic splitting around the $Y$ point near the VBM, we show in Figs.", "5(a) and 5(b) the energy profiles of the spin textures for the upper and lower bands, respectively.", "It is found that a complex pattern of the spin polarization is observed around the $Y$ point, which is remarkably different from either Rashba- or Dresselhaus-like spin textures.", "This is in contrast to widely studied 2D materials such as PtSe$_{2}$ [37], [46], BiSb [47], LaOBiS$_{2}$ [48], and polar transition metal dichalcogenides [39], [40], where Rashba-like spin textures are identified.", "In particular, we observe a uniform spin polarization close to the VBM, which persists in a region extending to about 0.1 Å$^{-1}$ from the $Y$ point along the $Y$ -$M$ and $Y$ -$\\Gamma $ lines [see the region with red colour in Figs.", "5(a)-(b)].", "By carefully analyzing the spin textures measured at the constant-energy cut of 1 meV below the VBM, we confirmed that this peculiar spin polarization is mostly dominated by the out-of-plane component $S_{z}$ [Fig.", "5(e)] rather than the in-plane ones ($S_{x}$ , $S_{y}$ ) [Figs.", "5(c)-(d)], leading to unidirectional out-of-plane spin textures.", "On the other hand, the constant-energy cut also yields Fermi lines characterized by two circular loops shifted along the $Y$ -$M$ ($k_{x}$ ) direction and a degenerate nodal point along the $Y$ -$\\Gamma $ ($k_{y}$ ) direction.", "Both the spin textures and the Fermi lines agree well with our $\\vec{k}\\cdot \\vec{p}$ Hamiltonian model derived from the 
symmetry analysis.", "Since the spin textures are uniformly oriented in the out-of-plane direction, a unidirectional out-of-plane SOF is achieved, maintaining a PSH similar to that of the [110] Dresselhaus model [12].", "Therefore, it is expected that the DP mechanism of spin relaxation is suppressed, potentially ensuring an extremely long spin lifetime.", "Table: Spin splitting parameter $\\alpha $ (in eVÅ) and the wavelength of the spin polarization $\\lambda $ (in nm) for selected PSH materials.", "For a quantitative analysis of the above-mentioned spin splitting, we calculate the strength of the spin splitting by evaluating the band dispersions along the $Y$ -$M$ and the $Y$ -$\\Gamma $ directions near the VBM in terms of the effective $\\vec{k}\\cdot \\vec{p}$ Hamiltonian model given in Eq.", "(REF ).", "Here, according to Eq.", "(REF ), the spin-splitting energy ($E_{\\text{Split}}=E_{+}-E_{-}$ ) can be formulated as $E_{\\text{Split}}=2k[(\\alpha +(k^{2}/4)(\\alpha ^{^{\\prime }}+3\\alpha ^{\"}))\\cos \\theta +(k^{2}/4)(\\alpha ^{^{\\prime }}-\\alpha ^{\"})\\cos (3\\theta )].$ The parameters $\\alpha $ , $\\alpha ^{^{\\prime }}$ , and $\\alpha ^{\"}$ can be calculated by numerically fitting Eq.", "(REF ) to the spin-splitting energy along the $Y$ -$M$ ($k_{x}$ ) and the $Y$ -$\\Gamma $ ($k_{y}$ ) directions obtained from our DFT results; we find that $\\alpha =1.23$ eVÅ, $\\alpha ^{^{\\prime }}=0.0014$ eVÅ$^{3}$ , and $\\alpha ^{\"}=0.0027$ eVÅ$^{3}$ .", "It is clearly seen that the obtained values of the cubic-term parameters ($\\alpha ^{^{\\prime }}$ , $\\alpha ^{\"}$ ) are very small compared with that of the linear-term parameter $\\alpha $ , indicating that the contribution of the higher-order corrections is not essential.", "On the other hand, by using the energy dispersion of Eq.", "(REF ), we also obtain the linear-term parameter $\\alpha $ through the relation $\\alpha =2E_{R}/k_{0}$ , where $E_{R}$ and $k_{0}$ are the shifting 
energy and the wave vector, as illustrated in Fig.", "3(c).", "This yields a calculated value of $\\alpha $ of 1.20 eVÅ, which agrees fairly well with that obtained from the higher-order correction model.", "Since the spin splitting is dominated by the linear term, ignoring the higher-order corrections preserves the $SU(2)$ symmetry of the Hamiltonian, thus maintaining the PSH, as we expected.", "It is important to note here that the PSH predicted in the present system should ensure that a spatially periodic mode of spin polarization is achieved.", "The corresponding spin wave mode is characterized by the wavelength of the spin polarization, defined as [12] $\\lambda =(\\pi \\hbar ^{2})/(m^{*}\\alpha )$ , where $m^{*}$ is the hole effective mass.", "Here, the effective mass $m^{*}$ can be evaluated by fitting the sum of the band dispersions along the $Y$ -$M$ direction in the VBM.", "We find that $m^{*}=0.056m_{0}$ , where $m_{0}$ is the free electron mass, which is in good agreement with the previous result reported by Xu et al. [45].", "The resulting wavelength $\\lambda $ is 7.13 nm, which is typically on the scale of the lithographic dimensions used in the recent semiconductor industry [50].", "We summarize the calculated results for $\\alpha $ and $\\lambda $ in Table III and compare them with a few selected PSH materials from previously reported data.", "It is found that the calculated value of $\\alpha $ in the various $MX$ monolayers is much larger than those observed in various QWs such as GaAs/AlGaAs [16], [17] and InAlAs/InGaAs [20], [18], the ZnO (10-10) surface [23], and the strained LaAlO$_{3}$ /SrTiO$_{3}$ (001) interface [22].", "However, this value is comparable with those observed in bulk BiInO$_{3}$ [49], the halogen-doped SnSe monolayer [24], and the WO$_{2}$ Cl$_{2}$ monolayer [25].", "The associated spin-splitting parameters are sufficient to support room-temperature spintronics functionality.", "On the other hand, we observe a small wavelength $\\lambda $ 
(on the nm scale) of the spin polarization, which is in fact two orders of magnitude smaller than that observed in the GaAs/AlGaAs QW [16], [17], rendering the present system promising for nanoscale spintronic devices.", "Figure: (a) Nudged elastic band calculation for the polarization switching process through the centrosymmetric (paraelectric) structure in the SnTe monolayer.", "Two ferroelectric structures (FE) in the ground state with opposite directions of the electric polarization and a paraelectric structure with zero electric polarization (NP) are shown.", "$E_{b}$ is the barrier energy, defined as the energy difference between the total energies of the ferroelectric and paraelectric structures.", "Reversible out-of-plane spin orientation in the SnTe monolayer calculated at 1 meV below the VBM for the ferroelectric structures with opposite polarizations: (b) -P and (c) P.", "Now, we discuss our prediction of the PSH in relation to the ferroelectricity in the $MX$ monolayer.", "As mentioned previously, the $MX$ monolayer possesses in-plane ferroelectricity [26], [27], [28], [29], [30], which is induced by the in-plane atomic distortion in the real space of the crystal [see Fig.", "1(a)].", "Therefore, a substantial electric polarization in the in-plane direction is established.", "For instance, our Berry phase calculation [44] on the SnTe monolayer revealed that the magnitude of the in-plane electric polarization is 13.8 $\\mu \\text{C}/\\text{cm}^{2}$ when an effective thickness of 1 nm for the monolayer is used, which is in good agreement with the previous result [29].", "Importantly, we predict the feasibility of polarization switching in the SnTe monolayer by analyzing the minimum energy pathway of the ferroelectric transition calculated using the NEB method [43].", "As shown in Fig.", "6(a), we find that the calculated barrier energy for the polarization switching process is 2.26 meV/cell in the SnTe monolayer.", "This value is comparable to those of the 2D ferroelectrics reported in previous works [26], [25], but 
is much smaller than that of the conventional ferroelectric BaTiO$_{3}$ [51], suggesting that a switchable in-plane ferroelectric polarization is plausible.", "Indeed, polarization switching in various $MX$ monolayers by using an external electric field or strain effects has recently been reported [30].", "By switching the in-plane ferroelectric polarization $\\vec{P}$ in the $MX$ monolayer, e.g., by applying an external electric field, a full reversal of the out-of-plane spin orientation can be expected.", "This is due to the fact that switching the in-plane ferroelectric polarization from $\\vec{P}$ to $-\\vec{P}$ is equivalent to the space inversion operation, which changes the wave vector from $\\vec{k}$ to $-\\vec{k}$ but preserves the spin vector $\\vec{\\sigma }$ [52], [53].", "Suppose that $\\left|\\psi _{\\vec{P}}(\\vec{k})\\right\\rangle $ is the Bloch state of the crystal with ferroelectric polarization $\\vec{P}$ .", "Under the space inversion operation $I$ , both the polarization $\\vec{P}$ and the wave vector $\\vec{k}$ are reversed, so that $I\\left|\\psi _{\\vec{P}}(\\vec{k})\\right\\rangle =\\left|\\psi _{-\\vec{P}}(-\\vec{k})\\right\\rangle $ .", "However, application of the time reversal symmetry $T$ reverses only $\\vec{k}$ , while $\\vec{P}$ remains unchanged, so that $TI\\left|\\psi _{\\vec{P}}(\\vec{k})\\right\\rangle =\\left|\\psi _{-\\vec{P}}(\\vec{k})\\right\\rangle $ .", "The expectation value of the spin operator $\\left\\langle S\\right\\rangle $ can now be calculated as $\\begin{split}\\left\\langle S\\right\\rangle _{-\\vec{P},\\vec{k}} & =\\left\\langle \\psi _{-\\vec{P}}(\\vec{k})\\right|S\\left|\\psi _{-\\vec{P}}(\\vec{k})\\right\\rangle \\\\& = \\left\\langle \\psi _{\\vec{P}}(\\vec{k})\\right|I^{-1}T^{-1}STI\\left|\\psi _{\\vec{P}}(\\vec{k})\\right\\rangle \\\\& = \\left\\langle \\psi _{\\vec{P}}(\\vec{k})\\right|(-S)\\left|\\psi _{\\vec{P}}(\\vec{k})\\right\\rangle \\\\& = -\\left\\langle S\\right\\rangle _{\\vec{P},\\vec{k}},\\end{split}$ which indicates that the spin orientation can be reversed by switching the ferroelectric polarization.", "This analysis is in fact confirmed by our calculated spin textures shown in Fig. 6(b)-(c), where the full reversal of the out-of-plane 
spin orientation is achieved under opposite in-plane ferroelectric polarizations.", "This interesting property indicates that an electrically controllable PSH can be realized in the $MX$ monolayer, which is very useful for spintronic device operation.", "Thus far, we have predicted that a PSH with large spin splitting is achieved in the $MX$ monolayer.", "In particular, the GeTe monolayer is promising for spintronics since it has the largest spin-splitting strength ($\\alpha =1.67$ eVÅ) among the $MX$ monolayers.", "Because the PSH is achieved on the spin-split bands near the VBM [Fig. 3(a)], $p$ -type doping for spintronics is expected to be realizable.", "Moreover, by injecting holes into the valence band of the $MX$ monolayer, it is possible to map the formation and evolution of the PSH state using near-field scanning Kerr microscopy [54], which allows features to be resolved down to the tens-of-nm scale with sub-ns time resolution.", "Finally, the hole-doped $MX$ monolayer can also be applied to explore current-induced spin polarization, known as the Edelstein effect [55], and the associated spin-orbit torque [56], indicating that the present system is promising for spintronic devices." 
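The symmetry argument above can be cross-checked with a toy numerical model: a minimal single-band PSH Hamiltonian $H(k)=\alpha k\sigma_z$ (out-of-plane spin locked to the wave vector), in which switching the ferroelectric polarization from P to -P is represented, at fixed k, by $\alpha \to -\alpha$. This is an illustrative sketch only; the one-band Hamiltonian and the values of $\alpha$ and $k$ are simplified stand-ins, not the paper's full DFT model.

```python
import numpy as np

# Pauli matrix for the out-of-plane spin component
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_z_lower_band(alpha, k):
    """<S_z> (units of hbar) of the lower eigenstate of H(k) = alpha*k*sigma_z."""
    H = alpha * k * sigma_z
    vals, vecs = np.linalg.eigh(H)      # eigenvalues in ascending order
    lower = vecs[:, 0]                  # eigenvector of the lower band
    return np.real(lower.conj() @ sigma_z @ lower) / 2

alpha = 1.20  # eV*Angstrom, illustrative magnitude quoted in the text
k = 0.05      # 1/Angstrom, an arbitrary wave vector

s_plus = spin_z_lower_band(+alpha, k)    # polarization +P
s_minus = spin_z_lower_band(-alpha, k)   # polarization -P (alpha -> -alpha)

print(s_plus, s_minus)  # -0.5 0.5: the spin orientation reverses with P
```

The sign flip of $\langle S_z\rangle$ under $\alpha \to -\alpha$ mirrors the $\left\langle S\right\rangle _{-\vec{P},\vec{k}} = -\left\langle S\right\rangle _{\vec{P},\vec{k}}$ relation derived above.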
], [ "CONCLUSION", "In summary, using first-principles DFT calculations supplemented with symmetry analyses, we investigated the effect of the SOC on the electronic structure of the $MX$ monolayer.", "We found that, due to the $C_{2v}$ point group symmetry of the $MX$ monolayer, unidirectional out-of-plane spin configurations are preserved, inducing a PSH state similar to the [110] Dresselhaus model [12] observed in [110]-oriented semiconductor QWs.", "Our first-principles calculations on various $MX$ ($M$ : Sn, Ge; $X$ : S, Se, Te) monolayers confirmed that this PSH is observed near the VBM, supporting a large spin splitting and a small wavelength of the spin polarization.", "More importantly, we observed a reversible out-of-plane spin orientation under opposite in-plane ferroelectric polarizations, indicating that an electrically controllable PSH can be realized in the $MX$ monolayer, which is promising for spintronic devices.", "Recently, a number of other 2D materials have been predicted to maintain in-plane ferroelectricity and the $C_{2v}$ symmetry of the crystal.", "This opens a possibility to further explore the achievable PSH states in these materials.", "Among them are the 2D elemental group‐V (As, Sb, and Bi) monolayers with the puckered lattice structure [57], [58].", "We therefore expect that our predictions will stimulate further theoretical and experimental efforts in the exploration of the PSH state in 2D-based ferroelectric materials, broadening the range of 2D materials for future spintronic applications.", "The first author (M.A.U. Absor) would like to thank the Nanomaterial Research Institute, Kanazawa University, Japan, for providing financial support during his research visit.", "This work was partly supported by Grants-in-Aid for Scientific Research (Grant No. 16K04875) from the Japan Society for the Promotion of Science (JSPS) and a JSPS Grant-in-Aid for Scientific Research on 
Innovative Areas “Discrete Geometric Analysis for Materials Design” (Grant No. 18H04481).", "Part of this research was supported by a PDUPT Research Grant (2019) and a BOPTN Research Grant (2019), Universitas Gadjah Mada, Indonesia." ] ]
1906.04337
[ [ "Adaptive optics benefit for quantum key distribution uplink from ground\n to a satellite" ], [ "Abstract For quantum communications, the use of Earth-orbiting satellites to extend distances has gained significant attention in recent years, exemplified in particular by the launch of the Micius satellite in 2016.", "The performance of applied protocols such as quantum key distribution (QKD) depends significantly upon the transmission efficiency through the turbulent atmosphere, which is especially challenging for ground-to-satellite uplink scenarios.", "Adaptive optics (AO) techniques have been used in astronomical, communication, and other applications to reduce the detrimental effects of turbulence for many years, but their applicability to quantum protocols, and their requirements specifically in the uplink scenario, are not well established.", "Here, we model the effect of the atmosphere on link efficiency between an Earth station and a satellite using an optical uplink, and how AO can help recover from loss due to turbulence.", "Examining both low-Earth-orbit and geostationary uplink scenarios, we find that a modest link transmissivity improvement of about 3dB can be obtained in the case of a co-aligned downward beacon, while the link can be dramatically improved, up to 7dB, using an offset beacon, such as a laser guide star.", "AO coupled with a laser guide star would thus deliver a significant increase in the secret key generation rate of the QKD ground-to-space uplink system, especially as reductions of channel loss have favourably nonlinear key-rate response within this high-loss regime." 
], [ "Introduction", "For two parties to securely communicate over public channels, they must utilize encryption, consuming shared secret keys in the process.", "Whereas classical key generation schemes rely on assumptions of computational complexity, quantum key distribution (QKD) relies on foundational principles of quantum mechanics [1], [2].", "Practical QKD implementations depend on the transmission of quantum optical signals either through optical fiber or free space [3], [4].", "In both cases, however, the fragility of a single photon channel fundamentally limits the distances that can be achieved: losses in fiber restrict the maximum distance to a few hundred kilometers [5], [6], while terrestrial free-space implementations are limited by line-of-sight and thus the curvature of Earth (distances up to 144 have been demonstrated [7], [8]).", "QKD can be scaled up to global distances by using orbiting satellites acting as intermediate nodes between two ground stations [9], [10], [11], [12].", "With a “trusted” node [13], the satellite combines keys generated by separate QKD links to each ground station—effectively encrypting one key with the other—and transmits the result to ground.", "One ground station then uses its own key to extract the other key (from the combined result), which can be used to encrypt messages.", "The satellite is trusted in the sense that it has access to each ground station's key in this process.", "Alternatively, the satellite could be an “untrusted” node [14] by providing entangled photons to both ground stations simultaneously, who can together verify their integrity.", "Both of these approaches were demonstrated recently with the Micius satellite [15], [16].", "With the trusted node approach, either an optical uplink (ground to satellite) or downlink (satellite to ground) is possible, both of which are under active investigation [17], [18], [19], [20], [21].", "With all other things being equal, a downlink would be capable of 
generating more key bits over time [20], but an uplink has the advantage of simpler design of the satellite payload (implying reduced risk and cost), and the ability to utilize different source types by exchanging them on the ground.", "In either case, with QKD states encoded in photon polarization, the key generation rate is fundamentally limited by the total number of photons detected at the receiver.", "A major contributing factor to losses experienced over such an optical link is atmospheric turbulence, wherein air pockets of different temperatures lead to varying refractive indices in the transverse and longitudinal modes of the beam path [22].", "This creates atmospheric wavefront errors, manifesting in transverse and temporal intensity fluctuations (scintillation), beam wander, and beam broadening in the far field.", "This is particularly impactful for the uplink configuration, where the atmospheric wavefront error is induced primarily near the start of the beam propagation, within the first 20 of atmosphere, and exacerbated by the remaining distance to the satellite receiver.", "Adaptive optics (AO) utilizes sensors and actuating elements to correct phase errors introduced by atmospheric turbulence [23].", "Various levels of AO correction can be applied to the optical beam—the simplest is correcting for beam wander as it leaves the transmitter, which corresponds to a tip/tilt correction of the beam, such as would be performed by a conventional closed-loop fine-pointing system.", "Higher-order corrections can be made by manipulating the phase of the wavefront prior to propagation through the atmosphere.", "Such approaches are used extensively in astronomical observation [24], optometry [25], and have also been studied for optical communications [26], [27], [28].", "In the context of imaging, AO is used to enhance resolution by compensating for medium-induced aberrations.", "By contrast, QKD uses polarization analyzers typically coupled to single-pixel 
(bucket) detectors, so performance is not limited by imaging resolution.", "In the context of classical communications, where high data bandwidth is desired, AO is employed to minimize scintillation and drop-outs which necessitate overhead owing to error-correction and re-transmission events.", "There, link stability is the primary concern, with received power being secondary.", "For QKD (and some other quantum protocols, e.g., Bell tests [29]), the total number of measured photons is of greater importance than fluctuations over short time scales, due to the way each photon independently contributes to the protocol [20].", "As a consequence, we wish to utilize AO to focus the beam more tightly at the receiver in order to increase the total number of photons received within a satellite pass, equivalent to the long-term-averaged power, even though doing so could increase the short-term power variance in the form of bursts and dropouts.", "We study the effect of the atmosphere on the long-term-averaged received power to determine how large this effect in itself may be, and model an AO system to determine whether (and by how much) AO may improve optical signal collection at a satellite-based QKD receiver.", "We do not consider the spectrum of signal power fluctuations (which is greater in uplink due to the motion of the satellite [30] with respect to the atmosphere), as we assume short-term fluctuations at the receiver remain below the point of detector saturation in the apparatus.", "As long as the detectors are not operating at saturation, the intensity fluctuations will not negatively impact the key-generation rate (and, when coupled with signal-to-noise filtering, could even provide further performance advantages [31]).", "Satellite-to-ground quantum links have been studied previously [32], [33], [22], but we focus specifically on the uplink scenario.", "We model four representative cases of atmospheric conditions relating to ground station locations, with optical 
links to a satellite receiver of various sizes.", "Our results show that the potential gains of using adaptive optics with a low-Earth orbit (LEO) and a geostationary (GEO) satellite are modest because of anisoplanatism due to, respectively, fast apparent motion and point-ahead.", "Correction with a perfectly offset laser guide star could help increase the benefit of AO in these scenarios.", "In this context, selection of ground station location is the most significant factor determining the optical power that can be captured for use in QKD.", "While drafting this manuscript we became aware of Ref.", "[34], which also examines the quantum LEO uplink case for AO and makes similar conclusions, there based on a numerical wave optics simulation.", "Assessment of AO system imperfections in Ref.", "[34] is done by adjusting the Strehl ratio, without analysis supporting the achievable Strehl ratio.", "The analysis we present employs an analytical Kolmogorov turbulence model (see Ref.", "[35] for additional details) to derive the impact of atmospheric turbulence on the uplink.", "We present a comprehensive description of our model and of the wavefront error terms considered, and our performance estimates are based on realistic assumptions of the AO system limitations, while we also consider the impact of ground site selection on AO performance, and examine both the LEO and GEO cases." 
], [ "Optical Model", "The key criterion for QKD is successful transfer of photonic optical states with high probability, as measured by time-averaged collected power.", "This directly corresponds to the link efficiency, $\\epsilon $ , defined as the ratio of the received power, $P_r$ , over the transmitted power, $P_t$ .", "Expressed in dB, with no AO correction, the link efficiency can be computed from the long-term (time-averaged) beam width (spot radius) at the satellite, $w_\\text{LT}$ , as [36] $\\begin{split}\\epsilon &=10\\log _{10}\\left(\\frac{P_r}{P_t}\\right) \\\\&=10\\log _{10}\\left(\\eta _r\\eta _t\\eta ^{\\sec \\psi }_{0}\\frac{D^{2}_{r}}{2 w^{2}_\\text{LT}}I_\\text{LT}\\right).\\end{split}$ Here, $\\eta _r$ is the receiver optical transmittance, $\\eta _t$ is the transmitter optical transmittance, $\\eta _0$ is the atmosphere optical transmittance at zenith, $\\psi $ is the angle of observation from zenith, $D_r$ is the receiver aperture diameter, and $I_\\text{LT}\\le 1$ (with equality in the ideal case) quantifies the effect of residual beam wander.", "The width of an optical beam launched from the ground telescope is affected by diffraction induced by the launch telescope aperture, and by phase error induced by the atmospheric turbulence, which evolves into phase and amplitude errors in the far field.", "The atmospheric turbulence strength can be quantified by the Fried parameter, or atmospheric turbulence coherence length, $r_0$ , which depends on the atmospheric structure constant, $C_n^2(h)$ (for a given altitude $h$ ), and the air mass that the observer is looking through (which depends on zenith angle, $\\psi $ ).", "For a spherical wave [37], $r_0=\\left[0.423 k^2 \\sec \\psi \\int _{0}^{H} C^{2}_{n}(h) \\left(1 - \\frac{h}{H}\\right)^{5/3}\\mathrm {d}h\\right]^{-3/5},$ where $H$ is the satellite orbit altitude and $k$ is the wavenumber of the optical beam.", "We consider the generalized Hufnagel-Valley (HV) atmospheric structure model [38], [39], 
$\\begin{split}C^{2}_{n}(h) = &~A\\exp \\left(-\\frac{h}{H_A}\\right)+B\\exp \\left(-\\frac{h}{H_B}\\right)\\\\&+~Ch^{10}\\exp \\left(-\\frac{h}{H_C}\\right),\\end{split}$ where $A$ is the coefficient for the surface or boundary layer turbulence strength, $H_A$ is the height for its $1/e$ decay, $B$ and $H_B$ are the equivalent for the turbulence in the troposphere (up to 10), and $C$ and $H_C$ are for the turbulence peak at the tropopause (at about 10).", "Further parameters can be included for isolated turbulence layers, but we omit these.", "This model is used to generate turbulence profiles representing a sea-level site (HV 5-7), an average site (HV 10-10), an excellent site (HV 15-12), and Tenerife [40].", "The values for $H_A$ , $H_B$ , and $H_C$ are 100, 1500, and 1000 respectively for all four considered models, with the remaining parameters shown in Table REF .", "Table: Turbulence parameters for the generalized HV models of each of the four representative conditions studied.", "$A$ is the coefficient for the surface or boundary layer turbulence strength, $B$ is the equivalent for the turbulence in the troposphere (up to 10), and $C$ is for the turbulence peak at the tropopause (at about 10).", "Lower values of these parameters would signify lower turbulence strength.", "The Tenerife model has a turbulence profile between the HV 5-7 (sea-level site) and HV 10-10 (average astronomical site)."
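As a rough cross-check of the two equations above, the spherical-wave Fried parameter can be evaluated by numerically integrating the HV profile. This is a sketch under stated assumptions: the HV 5-7 coefficients below are the standard literature values (the paper's parameter table is not reproduced here), and the 785 nm wavelength and 600 km orbit altitude are illustrative choices, not values taken from the text.

```python
import numpy as np

# Generalized Hufnagel-Valley C_n^2 profile (h in metres, C_n^2 in m^(-2/3)).
# Coefficients are standard HV 5-7 (sea-level) literature values -- an assumption,
# since the paper's table of fitted parameters is not reproduced here.
A, H_A = 1.7e-14, 100.0     # boundary-layer strength and its 1/e decay height
B, H_B = 2.7e-16, 1500.0    # tropospheric term
C, H_C = 3.59e-53, 1000.0   # tropopause peak term (multiplies h^10)

def cn2(h):
    return (A * np.exp(-h / H_A)
            + B * np.exp(-h / H_B)
            + C * h**10 * np.exp(-h / H_C))

def fried_r0(wavelength, zenith_angle, H_sat):
    """Spherical-wave Fried parameter r0 (metres) for an uplink."""
    k = 2.0 * np.pi / wavelength
    h = np.linspace(0.0, 30e3, 300001)  # turbulence is negligible above ~30 km
    f = cn2(h) * (1.0 - h / H_sat) ** (5.0 / 3.0)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(h))  # trapezoid rule
    return (0.423 * k**2 * integral / np.cos(zenith_angle)) ** (-3.0 / 5.0)

# Illustrative numbers: 785 nm beam (assumed wavelength) to a 600 km LEO, at zenith.
r0 = fried_r0(785e-9, 0.0, 600e3)
print(f"r0 = {100 * r0:.1f} cm")  # of order 8-9 cm for HV 5-7 at this wavelength
```

Since the satellite altitude far exceeds the turbulent layer, the $(1-h/H)^{5/3}$ weight is close to unity and $r_0$ is dominated by the integrated $C_n^2$, which is why site selection (the choice of profile) moves the result far more than the orbit does.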
], [ "Closed-loop correction of beam wander", "For an initially Gaussian beam, the long-term $1/e^2$ Gaussian beam width (spot radius) $w_\\text{LT}$ , when it reaches the satellite at a distance $L$ from the transmitter, is computed by convolving the diffraction-limited width $w_\\text{diff}(z) = w_0\\sqrt{1 + (z/z_0)^2}$ with the phase-error beam widening from the atmospheric turbulence.", "Here, $w_0$ is the beam waist and $z_0$ is the Rayleigh distance.", "This gives [41] $w_\\text{LT}(z=L) = \\sqrt{w^{2}_{0}\\left(1+\\frac{L^2}{z_0^2}\\right)+2\\left(\\frac{4.2L}{kr_0}\\right)^2}.$ We neglect the effect of the launch telescope aperture clipping the edge of the Gaussian beam—in typical scenarios, this effect is small compared to other contributions.", "Suppose that the transmitter is equipped with a fine tracking system which corrects the beam launch direction based on closed-loop measurement of a beacon laser reference transmitted from the satellite.", "In this case, atmospheric tilt effects within the bandwidth of this system will be compensated, and the long-term beam width, $w_\\text{LT}$ , can be modeled as a short-term beam width, $w_\\text{ST}$ , that is broadened by the residual beam wander.", "This short-term beam width, which we will utilize later, is given by [42] $\\begin{split}&w_\\text{ST}(z=L) =\\\\ &\\Biggl [w^{2}_{0}\\left(1+\\frac{L^2}{z_0^2}\\right)+ 2\\left(\\frac{4.2L}{kr_0}\\left[1-0.26\\left(\\frac{r_0}{w_0}\\right)^{1/3}\\right]\\right)^2\\Biggr ]^{1/2}.\\end{split}$ $I_\\text{LT}$ is computed by assuming that the two-dimensional residual beam wander has Gaussian statistics with standard deviations that are added (in quadrature) to the long-term beam width [36].", "We take the mean, calculated generally as $\\langle I \\rangle = \\beta / (\\beta + 1)$ where $\\beta = (\\Theta / \\sigma )^2 / 8$ .", "Here, $\\Theta \\approx W/L$ is the full angle beam divergence for a beam width, $W$ (diameter), at the satellite (for $I_\\text{LT}$ , $W = 
\\sqrt{2}w_\\text{LT}$ ) and $\\sigma $ is the one-dimensional residual beam wander standard deviation.", "The residual beam wander, $\\sigma $ , mainly depends on four error sources: the limited signal-to-noise ratio of the beacon sensor ($\\sigma _\\text{SNR}$ ), the closed-loop feedback delay of the fine-pointing system ($\\sigma _\\text{TFD}$ ), centroid anisoplanatism ($\\sigma _\\text{CA}$ ), and tilt anisoplanatism ($\\sigma _\\text{TA}$ ).", "Note that our model assumes the telescopes are physically pointing exactly at each other (with appropriate point-ahead).", "$\\sigma _\\text{SNR}$ : Typically achievable signal-to-noise ratios of commercially available position sensitive devices (PSDs) lead to $\\sigma _\\text{SNR} < 0.15$ with bright sources.", "$\\sigma _\\text{SNR} = 0.15$ is assumed in our model—see, e.g., Ref. [43].", "This error source typically has a small contribution to the overall error.", "$\\sigma _\\text{TFD}$ : Tilt feedback delay error is caused by the atmosphere-induced tilt evolving from the time it is read by the sensor to the moment the correction is applied.", "For a fine-pointing system with a closed-loop correction bandwidth $f_\\text{c}$ , it is calculated as [44] $\\sigma _\\text{TFD}=\\frac{f_\\text{T}}{f_\\text{c}} \\frac{\\lambda }{D_t},$ where $\\lambda $ is the optical wavelength, $D_t$ is the ground transmitter telescope diameter, and $f_\\text{T}$ is the tracking frequency, defined as the frequency at which the one-sigma $\\sigma _{\\text{TFD}}$ is equal to the diffraction angle $\\lambda /D_t$ .", "Given $C^{2}_{n}(h)$ and wind speed profile $v_w(h)$ , the tracking frequency is $f_\\text{T}=0.331D^{-1/6}_{t}\\lambda ^{-1}\\left[\\sec \\psi \\int ^{H}_{0}C^{2}_{n}(h)v^{2}_{w}(h) \\mathrm {d}h\\right]^{1/2}.$ The wind speed profile is based on a Bufton wind model [45], [39], [46], [30], $v_w(h) = v_g + v_t \\exp \\left[-\\left(\\frac{h - h_\\text{peak}}{h_\\text{scale}}\\right)^2\\right] \\mathbin {+} h 
\\dot{\\psi },$ where $v_g$ is the ground wind speed (5/), $v_t$ is the high-altitude wind speed (20/), $h_\\text{peak}$ is the altitude of the peak (9.4), $h_\\text{scale}$ is the scale height (4.8), and $\\dot{\\psi }$ is the angular velocity of the satellite apparent to the ground station.", "To calculate $\\dot{\\psi }$ , we consider a simplified model of an object in circular orbit around a spherical Earth, resulting in a constant Earth-centred angular velocity.", "From this we derive $\\dot{\\psi }$ for a ground station at Earth's surface targeting a satellite at 600 altitude.", "$\\sigma _\\text{CA}$ : Higher order wavefront errors induced by turbulence eddies smaller than the aperture of the transmitter telescope change the point spread function (PSF) shape incident on the PSD, which leads to centroid estimation errors, or centroid anisoplanatism.", "The one-dimensional standard deviation for this is given by [44] $\\sigma _\\text{CA} = 5.51\\times 10^{-2}\\left(\\frac{\\lambda }{D_t}\\right)\\left(\\frac{D_t}{r_0}\\right)^{5/6}.$ This term is mostly dependent on the turbulence strength as determined by the Fried parameter, and is 0.4 for an HV 5-7 model for transmissions at zenith.", "$\\sigma _\\text{TA}$ : The finite speed of light coupled with the distance and motion of the satellite requires the ground station to transmit optical beams ahead of the satellite's apparent position at any given time to ensure they are caught by the satellite receiver.", "This implies that, at the time of measurement and correction, the satellite's downlink beacon will have taken a different path through the atmosphere than the transmitted beam will take back to the satellite, thereby leading to tilt anisoplanatic error.", "Originally derived in Ref.", "[47] and following Ref.", "[48], this error is $\\sigma _\\text{TA} = 6.14 D_t^{-1/6} \\left[\\sec \\psi \\int _0^H C^2_n(h) f_\\Delta (h) \\mathrm {d}h \\right]^{1/2},$ where $f_{\\Delta }$ is a weighting function for a 
circular aperture given by $\\begin{split}f_\\Delta (h) &= \\int _0^{2\\pi } \\int _0^1 \\Bigl [ \\frac{1}{2} (u^2 + 2us\\cos w + s^2)^{5/6} \\\\&+ \\frac{1}{2}(u^2 - 2us\\cos w + s^2)^{5/6} - u^{5/3} - s^{5/3} \\Bigr ] \\\\&\\qquad \\qquad \\times u [ \\cos ^{-1} u - (3u - 2u^3)\\sqrt{1 - u^2} ] \\mathrm {d}u \\mathrm {d}w,\\end{split}$ and $s = \\frac{\\delta _\\text{tilt} h \\sec \\psi }{D_t}.$ $\\delta _\\text{tilt}$ quantifies the change in tilt angle between the incoming and outgoing beams—the point-ahead angle—and depends on $\\dot{\\psi }$ .", "From our orbit model, $\\delta _\\text{tilt}$ is then determined for a surface ground station, with the object's orbit passing zenith, using the round-trip time necessary for light to propagate the length of the incoming and outgoing beam paths.", "For a satellite orbiting at 600, $\\delta _\\text{tilt}$ is 50 at zenith." ], [ "Adaptive optics correction of wavefront error", "We now introduce to our model adaptive optics to correct higher-order wavefront aberrations.", "The received beam at the satellite produced by such a system is modelled by a diffraction-limited core surrounded by a seeing-limited halo [23].", "The link efficiency equation can be recast as $\\epsilon = 10\\log _{10}\\left(\\eta _r\\eta _t\\eta ^{\\sec \\psi }_{0}\\frac{D^{2}_{r}}{2}\\left[\\frac{I_\\text{diff}S}{w^{2}_\\text{diff}}+\\frac{I_\\text{ST}(1-S)}{w^{2}_\\text{ST}}\\right]\\right),$ where $w_\\text{diff}$ is the diffraction-limited beam width, and $I_\\text{diff}$ (equalling $\\langle I \\rangle $ with $w=w_\\text{diff}$ ) and $I_\\text{ST}$ (equalling $\\langle I \\rangle $ with $w = w_\\text{ST}$ ) quantify the energy loss due to residual beam wander for the diffraction-limited core and the short-term seeing-limited halo, respectively.", "Note that as $\\sigma \\rightarrow 0$ , $I_\\text{LT}$ , $I_\\text{diff}$ , and $I_\\text{ST}$ all approach 1—for convenience, we notate this limiting case as $I=1$ .", "The Strehl ratio $S$ is 
defined as the fraction of optical power that is in the diffraction-limited core compared to a perfectly corrected system.", "Better AO wavefront correction leads to a higher Strehl ratio.", "For a given root-mean-squared (RMS) wavefront error in radians (for which $2\\pi $ radians equates to an error of $\\lambda $ ), it is evaluated from the Mahajan equation [49], $S \\approx \\exp ({-\\zeta ^2})$ .", "Note that we label the wavefront error $\\zeta $ , in contrast to common treatments, in order to avoid confusion with the residual beam wander $\\sigma $ and its contributing terms.", "We consider three sources of error in our model of an AO system, the standard deviations of which are added in quadrature to determine $S$ : the AO feedback delay error ($\\zeta _\\text{AFD}$ ), the spatial fitting error ($\\zeta _\\text{fit}$ ), and the phase anisoplanatic error ($\\zeta _\\text{PA}$ ), $\\zeta ^2=\\zeta _\\text{AFD}^2+\\zeta _\\text{fit}^2+\\zeta _\\text{PA}^2.$ $\\zeta _\\text{AFD}$ : The AO feedback delay error is similar in origin to the tilt feedback delay error ($\\sigma _\\text{TFD}$ ), being caused by the atmospheric turbulence evolving between the time the wavefront error is measured and the time it is corrected.", "In the case of higher-order aberrations, the tracking frequency is replaced by the Greenwood frequency [23], [50], $f_\\text{G}=2.31\\lambda ^{-6/5}\\left[\\sec \\psi \\int ^{H}_{0}C^{2}_{n}(h) v^{5/3}_{w}(h) \\mathrm {d} h\\right]^{3/5}.$ The associated wavefront error term is [23] $\\zeta _\\text{AFD}=\\left(\\frac{f_\\text{G}}{f_\\text{c}}\\right)^{5/6}.$ This is mainly dependent on the wavelength and the turbulence strength.", "$\\zeta _\\text{fit}$ : The spatial fitting error is caused by the limited degrees of freedom of the wavefront corrector.", "Assuming perfect control based on the Zernike polynomials [23], [51], equations defining the residual wavefront error after correction of $J$ Zernike polynomials under Kolmogorov turbulence can 
be found in Table IV of Ref.", "[52], for $J$ up to 21.", "For $J>10$ , these can be approximated with [52] $\\zeta _\\text{fit}^2 = 0.2944 J^{-\\sqrt{3}/2}\\left(\\frac{D_t}{r_0}\\right)^{5/3}.$ $\\zeta _\\text{PA}$ : The phase anisoplanatic error is given by [23] $\\zeta _\\text{PA}=\\left(\\frac{\\theta }{\\theta _0}\\right)^{5/6}.$ Here, $\\theta $ is the angle between the reference beam and the corrected beam (usually, $\\theta = \\delta _\\text{tilt}$ ), and the isoplanatic angle $\\theta _0$ is the angle between the object and the reference beam at which the wavefront variance is 1, which evaluates to $\\theta _0 = \\left[2.91k^2(\\sec \\psi )^{8/3}\\int ^{H}_{0}C^{2}_{n}(h)h^{5/3} \\mathrm {d}h\\right]^{-3/5}.$" ], [ "Application to a particular Earth-station-to-satellite scheme", "The model described in the previous sections is used to evaluate the impact of atmospheric turbulence on a ground-to-satellite link and the potential improvement achievable with an AO system.", "The base parameters of the physical system considered are given in Table REF .", "These are the parameters we use in our model to produce the results given below, unless indicated otherwise.", "Table: Summary of the ground-to-satellite link baseline parameters for the simulations.", "These parameters are used in the simulation unless otherwise stated.For the 0.5 transmitter diameter, we chose to correct Zernike polynomials up to the 8th order in our simulation, as we observed minimal increase in the Strehl ratio for higher order compensation.", "This implies that the first 45 Zernike terms are corrected.", "Such can be achieved, for example, by using a Shack-Hartmann or a pyramidal wavefront sensor having 8 sub-apertures on the pupil diameter, associated with a deformable mirror having 9 linear actuators on the pupil diameter.", "The additional actuator is required because the wavefront sensor measures a wavefront slope which is compensated using one actuator at each edge of the 
sub-aperture in one dimension, and one actuator at each corner of the sub-aperture in two dimensions.", "(This arrangement of actuators relative to sub-apertures is typically referred to as the Fried geometry.)", "For low correction bandwidths and small transmitter diameters, the dominating tilt error term is the tilt feedback delay ($\\sigma _{\\text{TFD}}$ ).", "Once the correction bandwidth and transmitter diameter are sufficiently increased, the tilt anisoplanatic error ($\\sigma _{\\text{TA}}$ ) dominates.", "This can be seen in Figure REF .", "For a 0.5 transmitter, the correction frequency beyond which $\\sigma _\\text{TA}$ dominates is 70.", "At 200, $\\sigma _\\text{TFD}$ is 35 of $\\sigma _\\text{TA}$ .", "(Bandwidths beyond 200 are a considerable technical challenge to implement, as the sampling rate must be greater by a factor of 10 to 20.)", "Figure: (Left) $\\sigma _\\text{TA}$ (Eq. )", "and $\\sigma _\\text{TFD}$ (Eq. )", "contributions to residual beam wander as functions of beam wander correction bandwidth for various transmitter diameters.", "As the diameter is decreased, the system requires a higher beam wander correction bandwidth for $\\sigma _\\text{TA}$ to remain the dominant error term.", "(Right) Link efficiency (Eq. 
)", "as a function of transmitter diameter for various beam wander correction bandwidths.", "The maximum potential impact of correcting the residual beam wander can be quickly evaluated by assuming a perfect tilt correction.", "Substituting the long-term beam width in Eq. REF with the short-term value and correcting perfectly for the beam wander by setting $I = 1$ yields a mere 13 improvement in the link efficiency at zenith for each of the modeled atmospheres and various transmitter aperture diameters (0.20 to 1.0).", "Evidently, beam wander correction is not a path towards any significant gain in performance.", "We now consider how the inclusion of AO correction of the wavefront error affects the link performance (Eq. REF ) for various launch telescope diameters.", "The results are shown in Figure REF .", "Four scenarios are contained in each plot, representing the diffraction-limited beam link efficiency, the baseline case where a beam propagates through a turbulent medium with no corrections applied, the case where beam wander and wavefront phase errors are corrected, and the same case while assuming no wavefront phase anisoplanatism term ($\\zeta _\\text{PA}=0$ ).", "Figure: Comparison of the predicted link efficiency (Eq. 
)", "as a function of elevation angle of the satellite, from the horizon as apparent at the ground station, with AO for transmitter diameters of 0.25 m (left), 0.50 m (middle), and 1.0 m (right).", "The solid black curve shows the diffraction-limited efficiency.", "The dotted green curve, the baseline case, shows a turbulence-limited beam with no corrections.", "The solid brown curve shows the efficiency for correcting both beam wander and wavefront phase errors.", "The dash-dotted orange curve represents correcting for beam wander and wavefront phase errors, but assuming that the wavefront phase anisoplanatism error term, $\\zeta _\\text{PA}$ , is zero.", "The phase anisoplanatism error term dominates the error sources, and if not corrected renders an AO solution only marginally useful.", "A larger transmitter aperture produces a smaller diffraction-limited spot at the receiver location.", "The potential for adaptive optics to improve the link efficiency above the turbulence-limited baseline is then also greater—see the dash-dotted orange lines in Fig.", "REF .", "At zenith, the additional gain is as much as 3.2 dB if a 0.5 m transmitter is used, and 4.2 dB for a 1.0 m transmitter, compared to the 0.25 m transmitter efficiency.", "However, when we include the anisoplanatic error that arises because the downlink and uplink beams do not follow the same paths through the atmosphere, these gains are lost.", "Figure REF illustrates the effect of different turbulence strengths, using the parameters given in Table REF .", "This shows that the turbulence strength can have a significant impact on the link efficiency and, therefore, site selection is a critical factor determining throughput.", "A gain of approximately 9 dB is found for a good astronomical site (HV 15-12) compared to a site at sea level (HV 5-7).", "What small improvement adaptive optics can achieve, however, is outweighed by selection of better sites.", "As with the residual beam wander correction, the AO performance is strongly limited 
by anisoplanatism, to the extent that there is little gain in using an AO system to correct the wavefront phase errors if the responsible phase anisoplanatic error term is not mitigated.", "This strong contribution of the anisoplanatic error is symptomatic of a weak correlation between the turbulence in the downlink and uplink paths.", "Figure: Prediction of the efficiency (Eq. )", "as a function of elevation angle with AO for HV 5-7, a typical sea-level site (top-left), HV 10-10, a typical good site (top-right), HV 15-12, an excellent site (bottom-left), and Tenerife, from measured median turbulence strength at Tenerife in the Canary Islands (bottom-right).", "Curves follow definitions in Figure .", "Adaptive optics demonstrates slightly better performance improvement with stronger turbulence profiles.", "For an AO system to be useful, it is critical to reduce the anisoplanatic phase error term.", "A common and mature approach for doing so in astronomical applications is to use a reference laser guide star (LGS) to sample the turbulence in the proper atmospheric path [53].", "This LGS can be generated by exciting atoms in the sodium layer at 90 km altitude with a laser, or by using a time-gating camera to observe the Rayleigh backscatter of a pulsed laser at an altitude of typically 18 km.", "While this LGS mitigates the phase anisoplanatic error ($\\theta = 0$ , and thus $\\zeta _\\text{PA} = 0$ ), a new error term needs to be considered due to the difference in altitude between the satellite and the LGS, which results in a different path taken through the atmosphere by the light emitted by the LGS (and subsequently captured by the telescope) and the quantum signal travelling to the satellite.", "This is referred to as focal anisoplanatism or cone effect, and the corresponding error term can be expressed as [23] $\\zeta _{\\text{cone}}=\\left(\\frac{D_t}{d_0}\\right)^{5/6},$ where $\\small d_0 = \\lambda ^{6/5} \\left(19.77\\sec \\psi \\int 
_{0}^{H_\\text{LGS}}C_n^2(h)\\left(\\frac{h}{H_\\text{LGS}}\\right)^{5/3}\\text{d}h\\right)^{-3/5},$ and $H_\\text{LGS}$ is the altitude of the LGS.", "The result is illustrated in Figure REF , showing an improvement from using a LGS at 18 km (5.3 dB at zenith, compared to no LGS) to an overall 6.9 dB efficiency increase at zenith when compared to the baseline.", "Table REF shows example Strehl ratios under these conditions—by using a LGS to reduce the anisoplanatism, the Strehl ratio can be dramatically increased, approaching the values asserted in Ref. [34].", "Note the LGS cannot also be used as a reference for beam wander correction (tip-tilt) because the exact LGS location is not well defined due to the laser beam being affected by some beam wander in both its upward propagation, on its way to generate the LGS, and its downward propagation.", "Therefore, we use the LGS to correct the higher-order wavefront errors ($\\zeta $ terms) and use the beacon from the satellite to correct the beam wander ($\\sigma $ terms).", "This is equivalent to using Eq.", "REF as before, however, replacing $\\zeta _\\text{PA}$ with $\\zeta _\\text{cone}$ .", "Figure: Predicted link efficiency as a function of elevation angle using a laser guide star at 18 km.", "Curves follow definitions in Figure , with the addition of the solid red curve showing the efficiency using a laser guide star to correct for the wavefront phase anisoplanatism error.", "In comparison to AO without LGS, a significant improvement of approximately 6.4 dB in the HV 5-7 atmosphere model is seen at zenith.", "Table: Strehl ratios when using a LGS at 18 km.", "Note that the decrease of Strehl ratio for increasing transmitter diameter is due to the number of corrected Zernike terms being constant, implying increased spacing between the deformable mirror actuators and increased sub-aperture size in relation to $r_0$ .", "Geostationary (GEO) satellites orbit the Earth at an altitude of approximately 35,000 km and have the same orbital period as the 
Earth's rotational period.", "Consequently, the satellite appears stationary in the sky relative to a ground station located anywhere on Earth.", "For this reason, intuitively, one could expect that anisoplanatism would be nearly minimal in this case.", "Also, a geostationary orbit is interesting for communications (including quantum communications) as it provides coverage over up to half of Earth at any given time, unlike LEO which only provides coverage to a given ground station during limited time windows.", "We model the case of an uplink to a GEO satellite, with the results shown in Figure REF .", "Due to the larger distances, the overall loss is much higher than for a LEO satellite.", "It can be seen that the anisoplanatism phase error is still the dominating term—this is because point-ahead, still necessary due to the rotating frame, is large enough that the downlink beacon passes through a different portion of atmosphere than the uplink signal.", "As with LEO, this error is problematic for AO correction (without LGS) at lower elevations; however, the effect becomes more muted at elevations beyond 50°.", "The improvement from incorporating a LGS (also shown in Fig.", "REF ) is a little less than in the LEO case—about 5.1 dB at zenith compared to without LGS—but the improvement at zenith compared to the baseline without AO is more pronounced at 9.5 dB.", "Interestingly, at 45° elevation, the improvement compared to without LGS is greater—about 6.3 dB—while the overall improvement compared to without AO is similar at 9.3 dB.", "Of course, this analysis does not touch on the additional technical challenges facing operation in a geostationary orbit, which include greater radiation exposure, the need for increased light shielding, and higher launch costs.", "A GEO satellite would, however, require only static pointing at the transmitter, and thus error from pointing and tracking would be less than for a LEO satellite.", "Figure: Predicted link efficiency as a function of elevation angle 
for a geostationary orbit and using a laser guide star.", "Curves follow definitions in Figure .", "In this HV 5-7 atmosphere model, AO even without LGS becomes more significantly impactful at higher elevations (above 50°), in contrast to the LEO case.", "With LGS, an overall improvement of 9.5 dB is seen at zenith, compared to the turbulence-limited baseline." ], [ "Conclusion", "We have constructed a theoretical model to simulate the effects of atmospheric turbulence on the performance of an uplink from an Earth ground station to a satellite, for purposes where the time-averaged total received optical power is the key parameter, such as QKD.", "The model incorporates the effects of tracking bandwidth, anisoplanatism, atmospheric turbulence strength, transmitter and receiver size and efficiencies, and technological limitations.", "As our results show, for the case of a LEO satellite, the most important impact on the link efficiency is the site selection—a 9 dB variation was found between a typical site at sea level and an excellent astronomical site.", "In the case where the transmitter location is limited to a particular site, or where further improvement is desired, an AO system can be used to increase the efficiency by up to 6.9 dB, under our assumptions of other system parameters.", "To achieve this, however, it is necessary to correct for anisoplanatism for the AO system to be useful—this can be accomplished by employing a laser guide star.", "Similar anisoplanatism was found when modelling a geostationary satellite—the point ahead necessary for the geostationary link remains large enough that the atmosphere sampled by the downward-facing beacon does not correspond well with the signal transmitted upward for elevations below 45°.", "For higher elevation angles, the anisoplanatism error is still the dominating term but its impact on AO correction is reduced.", "Use of a laser guide star helps significantly in both cases.", "The throughput of QKD protocols depends predominantly on the 
total number of photons measured.", "The 9 dB improvement from site selection (if available), as well as the 6.9 dB from AO coupled with a laser guide star, would thus deliver a significant increase in the secret key generation rate of the protocol—especially given the characteristics of secret-key-rate formulae for QKD, which imply a favorably nonlinear improvement due to the super-exponential cliff at high losses [54]." ], [ "Funding", "Canadian Space Agency, Canadian Institute for Advanced Research, Industry Canada, and the Natural Sciences and Engineering Research Council (NSERC)." ], [ "Acknowledgements", "C.J.P. thanks NSERC and the province of Ontario for funding.", "B.L.H. acknowledges support from NSERC Banting Postdoctoral Fellowships." ] ]
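The cone-effect error term defined above, $\zeta_{\text{cone}} = (D_t/d_0)^{5/6}$, can be evaluated numerically. The sketch below is illustrative only: it assumes the standard Hufnagel-Valley $C_n^2$ profile with HV 5-7 parameters, a 785 nm wavelength, zenith pointing, and a 0.5 m transmitter, which are not necessarily the exact configuration used in the paper.

```python
import math

def cn2_hv(h, w=21.0, A=1.7e-14):
    """Hufnagel-Valley C_n^2 profile [m^(-2/3)]; HV 5-7 uses w = 21 m/s, A = 1.7e-14."""
    return (0.00594 * (w / 27.0) ** 2 * (1e-5 * h) ** 10 * math.exp(-h / 1000.0)
            + 2.7e-16 * math.exp(-h / 1500.0)
            + A * math.exp(-h / 100.0))

def d0(lam, H_lgs, sec_psi=1.0, n=20000):
    """d_0 from the text: trapezoidal integration of C_n^2(h) (h/H_LGS)^(5/3)."""
    hs = [i * H_lgs / n for i in range(n + 1)]
    f = [cn2_hv(h) * (h / H_lgs) ** (5.0 / 3.0) for h in hs]
    integral = sum((f[i] + f[i + 1]) / 2 * (H_lgs / n) for i in range(n))
    return lam ** 1.2 * (19.77 * sec_psi * integral) ** -0.6

lam = 785e-9   # assumed signal wavelength [m]
H = 18e3       # Rayleigh LGS altitude [m], as in the text
Dt = 0.5       # assumed transmitter diameter [m]
d = d0(lam, H)
zeta_cone = (Dt / d) ** (5.0 / 6.0)
print(f"d0 = {d:.3f} m, zeta_cone = {zeta_cone:.3f}")
```

Since $d_0$ for a sea-level profile is of order tens of centimeters, a 0.5 m transmitter incurs a non-negligible cone-effect term, consistent with the Strehl-ratio caveats discussed above.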
1906.04193
[ [ "A Search for a Contribution from Axion-Like Particles to the X-Ray
 Diffuse Background Utilizing the Earth's Magnetic Field" ], [ "Abstract The Axion-Like Particle (ALP) is a hypothetical pseudo-scalar particle beyond the Standard Model, with a compelling possible connection to dark matter and early universe physics.", "ALPs can be converted into photons via interactions with magnetic fields in the universe, i.e., the so-called inverse Primakoff effect.", "In this paper, we propose a novel method to explore ALP-induced photons from X-ray data obtained from the {\\it Suzaku} satellite, arising from a possible interaction of ALPs with the direction-dependent Earth's magnetic field viewed from the satellite.", "{\\it Suzaku} data is suitable for this purpose because its low-altitude Earth orbit results in intrinsically low cosmic-ray background radiation.", "We study whether the X-ray diffuse background (XDB) spectra estimated from the four deep fields collected over eight years vary with the integrated Earth's magnetic strength in the direction of each target field at each observation epoch, which amounts to $10^2$ T m, a value greater than that achieved by terrestrial experiments due to the large coherent length.", "From the detailed analysis, we did not find evidence of the XDB spectra having a dependence on the Earth's magnetic strength.", "We obtained a 99 % confidence level upper limit on a possible residual contribution to the cosmic X-ray background (CXB) surface brightness of $1.6\\times 10^{-9}~{\\rm ergs~s}^{-1}{\\rm cm}^{-2}{\\rm sr}^{-1}$ normalized at $10^4$ T${}^2$ m${}^2$ in the 2-6 keV range, which corresponds to 6-15 % of the observed CXB brightness, depending on which model of unresolved point sources is used in the interpretation.", "It is consistent with 80-90 % of the CXB now being resolved into point sources." 
], [ "Introduction", "Various cosmological observations have provided strong evidence for the existence of dark matter.", "However, if dark matter is to be an elementary particle, it is a yet-unknown particle beyond the Standard Model.", "The Axion-Like Particle (ALP) is a hypothetical pseudo-scalar particle beyond the Standard Model, and is a consequence of a quantum field introduced to conserve CP symmetry in the strong interaction , .", "ALPs are attractive because they act like cold dark matter (CDM) in the formation of cosmic structure.", "It is possible that ALPs are created by a decay of other CDM-candidate particles.", "If the ALP mass is too low, a direct experiment with present techniques is unlikely to find it.", "A possible channel is to observe photons, which are created from ALPs via the inverse Primakoff process in an electromagnetic field.", "As we show in detail in Section 2, the ALP-photon conversion probability $P_{a \\rightarrow \\gamma }$ is approximately proportional to the squared product of the magnetic field strength orthogonal to the ALP momentum direction, $B_\\perp $ , and the length, $L$ , i.e., $P_{a \\rightarrow \\gamma }$ $\\propto $ $\\left( B_\\perp L \\right)^2$ .", "There have been many attempts to detect ALP signals in terrestrial experiments and astronomical observations.", "One candidate signal, in the direction of galaxies or galaxy clusters, was proposed to be due to ALP interactions with inter-stellar or galactic magnetic fields , , , although the results are still under discussion.", "If ALPs are CDM itself or are produced from a decay of CDM at cosmological distances, and if those can be observed, the distribution of ALPs or of ALP-induced photons from their interaction with magnetic fields in cosmic structures should appear isotropic in the sky to zeroth order, unless we have high-sensitivity and high-angular resolution data to resolve the distribution tracing inhomogeneous cosmic structures in the universe.", 
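The scaling $P_{a \rightarrow \gamma} \propto (B_\perp L)^2$ can be checked with a quick natural-units computation. The sketch below (using standard $\hbar = c = 1$ conversion factors) reproduces the numerical prefactor quoted in Section 2 for $g_{a\gamma\gamma} = 10^{-10}~{\rm GeV^{-1}}$ and $B_\perp L = 1$ T m:

```python
# Natural-unit bookkeeping (hbar = c = 1):
TESLA_EV2 = 195.35       # 1 T expressed in eV^2
METER_INV_EV = 5.0677e6  # 1 m expressed in eV^-1

def p_conversion(g_gev_inv, BL_tesla_m):
    """P_{a->gamma} = (g * B_perp * L / 2)^2 in the light-mass (qL << 1) limit."""
    BL_gev = BL_tesla_m * TESLA_EV2 * METER_INV_EV * 1e-9  # T*m -> GeV
    return (g_gev_inv * BL_gev / 2.0) ** 2

p = p_conversion(1e-10, 1.0)
print(f"P = {p:.3g}")  # ~2.45e-21, matching the prefactor quoted in Section 2
```

Note that 1 T m corresponds to roughly 0.99 GeV in these units, which is why the prefactor is numerically close to $(g_{a\gamma\gamma}/2)^2$ in GeV units.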
"In this paper, we propose a novel method to search for ALP-induced photons from the satellite X-ray data, arising from the Primakoff interaction of the ALPs with the Earth's magnetic field.", "The Earth's magnetic field is known to have a dipole structure around the Earth in a north-south direction, and we have a good knowledge of its strength and field configuration from various observations.", "Therefore, we can expect the ALP-induced photons in X-ray wavelengths, if produced, to vary with the Earth's magnetic field strength integrated along the line-of-sight direction for each observation, even if ALPs arriving on Earth have an isotropic distribution in the sky.", "To search for such ALP-origin X-ray radiation, we focus on the Suzaku X-ray data in the four deep fields, collected over eight years.", "These fields had been observed frequently by Suzaku, but the magnetic field strengths varied with each observation depending on its location in the orbit.", "Under the null hypothesis of no ALP-induced photons, the diffuse X-ray background brightness estimated from the same field should not show any dependence on the integrated magnetic field.", "This is the signal we will search for in this paper.", "Suzaku data is suitable for our purpose, because the satellite, compared to XMM-Newton http://sci.esa.int/xmm-newton/ or Chandra https://chandra.harvard.edu/, has lower background due to its low-altitude orbit that prevents cosmic rays from penetrating the X-ray detectors .", "Our study is somewhat similar to Fraser et al. (2014) , which claimed a detection of seasonal variations in the XMM-Newton X-ray data.", "The work claimed that the X-ray flux modulation at a level of 4.6 $\\times $ 10$^{-12}$ ergs s$^{-1}$ cm$^{-2}$ deg$^{-2}$ in 2–6 keV might be due to a conversion of solar axions by their interaction with the Earth's magnetic field , .", "However, Roncadelli and Tavecchio (2015) claimed that the XMM-Newton satellite, which never points toward the Sun, cannot observe 
such ALP-induced photons originating from solar axions due to momentum conservation.", "The structure of this paper is as follows.", "In Section , we briefly review basics of the inverse Primakoff effect and how photons can be induced by the effect from ALPs that are created by dark matter in the expanding universe.", "In Section , we show the main results of this paper using the Suzaku data, when combined with data of the Earth's magnetic field at each orbit of the Suzaku satellite at each observation.", "Section  contains the discussion and conclusion." ], [ "Process of photon emission from ALPs", "In this section, we describe a mechanism of photon emission from ALPs via the interaction with magnetic fields.", "To do this, we consider a model in which dark matter, which fills up the space of the universe, preferentially decays into ALPs.", "This model is an example of a moduli dark matter model in a string-theory-inspired scenario.", "When a dark matter particle decays into two ALPs, i.e., DM $\\rightarrow $ 2ALPs, each ALP has a monochromatic energy $E_a = m_\\phi /2$ , where $m_\\phi $ is the mass of the dark matter particle.", "The emissivity of the DM $\\rightarrow $ 2ALPs decay process is given in terms of the energy density of dark matter, $\\rho _\\phi \\left( r\\right)$ , the decay rate, $\\Gamma _{\\phi \\rightarrow 2a}$ , and $m_\\phi $ as $\\epsilon _a = \\frac{2 \\rho _\\phi \\left( r\\right)\\Gamma _{\\phi \\rightarrow 2a} }{m_\\phi }.$ Considering the spatial distribution of dark matter along the line-of-sight direction, the ALP intensity, $I_{a,{\\rm line}}$ [counts s$^{-1}$ cm$^{-2}$ sr$^{-1}$ ], is given as $I_{a,{\\rm line}} = \\int _{\\rm l.o.s.} \\frac{2 \\Gamma _{\\phi \\rightarrow 2a}}{4 \\pi m_\\phi } \\rho _\\phi \\left( r\\right)~dr= \\frac{S_\\phi \\Gamma _{\\phi \\rightarrow 2a}}{2\\pi m_\\phi },$ at $E_a=m_\\phi /2$ , and $S_\\phi $ is the column density of dark matter in the line-of-sight direction , defined as $S_\\phi = \\int _{\\rm 
l.o.s.} \\rho _\\phi (r)~dr.$", "In this case, the converted photon spectrum is a line emission.", "If dark matter is uniformly distributed in the universe, we would observe a continuum spectrum of the ALP intensity because free-streaming ALPs undergo a cosmological redshift in the expanding universe.", "Assuming light-mass, i.e., relativistic, ALPs produced by dark matter decay, a superposition of line spectra over different redshifts leads us to observe a continuum spectrum of ALPs , : $\\frac{dN}{dE_a} &=& \\int _{\\rm l.o.s.} \\mathrm {d}r~\\frac{\\Gamma _{\\phi \\rightarrow 2a}}{4 \\pi m_\\phi }\\rho _\\phi \\left( r\\right) \\times 2\\delta _D\\!\\left( E_a \\left( 1+z \\right) - m_\\phi /2 \\right) \\\\&=&\\frac{\\sqrt{2} c \\Gamma _{\\phi \\rightarrow 2a} \\rho _{\\phi _0}}{\\pi H_0}~m_\\phi ^{-\\frac{5}{2}} ~E_a^{\\frac{1}{2}}~f\\left( \\frac{m_\\phi }{2E_a} \\right)$ where $\\delta _D(x)$ is the Dirac delta function, and the function $f(x)$ is defined as $f(x) \\equiv \\left\\lbrace \\Omega _{m0} +\\left( 1-\\Omega _{m0} -\\Omega _{\\Lambda 0}\\right)/x - \\Omega _{\\Lambda 0}/x^3 \\right\\rbrace ^{-\\frac{1}{2}}.$ In the above equation, $z$ is the redshift at decay, $\\rho _{\\phi _0}$ is the present energy density, $H_0$ is the present Hubble constant, and $\\Omega _{m0}$ and $\\Omega _{\\Lambda 0}$ are the density parameters of non-relativistic matter and the cosmological constant, respectively.", "The spectral shape of ALPs is thus a simple power-law function whose number index is a positive value of $+1/2$ .", "In this case, the converted photon spectrum is also expected to be a power-law function with a photon index of $+1/2$ .", "The ALP-photon conversion probability in a vacuum with a magnetic field via the inverse Primakoff effect is given in Ref. as $P_{a \\rightarrow \\gamma } \\left(x\\right)= \\left| \\frac{g_{a \\gamma \\gamma }}{2} \\int _0^{x} B_\\perp \\left( x^{\\prime }\\right)\\exp \\left( -i \\frac{m_a^2}{2E_a} 
x^{\\prime } \\right) dx^{\\prime } \\right|^2,$ with $B_\\perp \\left(x^{\\prime }\\right) \\equiv \\left|\\vec{B} \\left(x^{\\prime }\\right) \\times \\vec{e}_a\\right|.$ Here, $g_{a \\gamma \\gamma }$ is the ALP-photon coupling constant, $m_{a}$ and $E_{a}$ are the mass and energy of the ALP, and $B_\\perp (x)$ is the component of the magnetic field perpendicular to the ALP momentum direction, denoted as $\\vec{e}_a$ .", "The ALP-photon momentum transfer $q$ is defined as $q = \\frac{m_a^2}{2E_a}.$", "Assuming that $B_\\perp (x^{\\prime })$ is uniform in the range $0<x^{\\prime }<L$ , we can write Equation (REF ) as: $P_{a \\rightarrow \\gamma } = \\left( \\frac{g_{a\\gamma \\gamma } B_\\perp }{2}\\right)^2~2L^2~\\frac{1- \\cos \\left( qL \\right) }{(qL)^2}.$ In the limit of light ALP masses compared to the photon energy scale, satisfying $q L\\ll 1$ , $1- \\cos \\left( qL \\right) \\simeq \\left( qL \\right)^2 /2$ , and the conversion rate is simply given by $P_{a \\rightarrow \\gamma } = \\left( \\frac{g_{a \\gamma \\gamma } B_\\perp L}{2} \\right)^2$ under the coherence condition $qL < \\pi ~ \\rightarrow ~ m_a < \\sqrt{\\frac{2\\pi E_a}{L}}.$ The following analysis uses Equation (REF ) to constrain the ALP-photon coupling constant.", "As shown above, the probability of ALP particles converting to photons is proportional to $(B_\\perp L)^2$ in the light-mass limit.", "Plugging in typical values of the strength and coherent length scale of the Earth's magnetic field, Equation (REF ) gives $P_{a \\rightarrow \\gamma } \\simeq 2.45 \\times 10^{-21}~\\left( \\frac{g_{a \\gamma \\gamma } }{10^{-10}~{\\rm GeV^{-1}}} \\right)^2\\left( \\frac{B_\\perp L}{{\\rm T~m}} \\right)^2.$" ], [ "Selection of blank sky observations from Suzaku", "To locate ALP-induced photons, we use the Suzaku X-ray data, and search for photons in the detector's field of view (FoV) depending on the integrated magnetic strength along the line-of-sight direction, $\\left(B_\\perp L\\right)^2$ .", "Because most 
X-ray data contains X-ray emission photons from targeted or unresolved sources, we need to study the X-ray diffuse background (XDB) in blank fields, and search for a residual signal in the background that is correlated with the magnetic strengths following the scaling of $(B_\\perp L)^2$ .", "The X-ray satellite Suzaku is suitable for this study because of its low instrumental background noise and the low background radiation from cosmic rays (compared to other X-ray satellites) due to its low-altitude Earth orbit: an altitude of $\\sim $ 570 km and an inclination of $31^\\circ $ from the Earth's equatorial plane, where the Earth's magnetic field prevents cosmic rays from penetrating the satellite's detectors .", "Figure REF is a schematic illustration of the Suzaku orbit and the Earth's magnetic field configuration.", "Even if the satellite observes the same field or the same angular direction–as denoted by the black dotted line–the integrated strength of perpendicular magnetic components along the line-of-sight direction varies with the satellite position.", "The Suzaku satellite orbits the Earth with a period of approximately 96 minutes, which causes a modulation of the integrated magnetic strength $(B_\\perp L)^2$ with the orbit, i.e., with when the target field is observed.", "Thus, we expect variations in the ALP-induced photons, if they exist, depending on the strength $(B_\\perp L)^2$ .", "We calculated the Earth's magnetic field every 60 seconds for each line-of-sight direction of a given target field using the International Geomagnetic Reference Field, 12th generation (IGRF-12 ), software for up to 6 times the Earth's radius ($R_E$ ), where typically $B$ $\\sim $ $10^{-7} {~\\rm T}$ .", "The right panel in Figure REF shows a typical case of $(B_\\perp L)^2$ as a function of the satellite position or equivalently the observation time.", "It can be found that a typical value of $\\left( B_\\perp L\\right)^2$ is of the order of $10^{4}$ –$10^{5}$ T$^{2}$ m$^{2}$ , 
which is greater than that of terrestrial experiments such as the CAST experiment http://cast.web.cern.ch/CAST/CAST.php.", "If we apply the non-oscillation condition of $qL \\ll 1$ (Equation (REF )), the corresponding ALP mass is limited to be $m_a$ $\\le $ $\\mu $ eV if we assume that the converted photons are in X-ray wavelengths.", "Note that we considered an oscillation regime of $q L\\sim 1$ to obtain constraints on the ALP-photon coupling constant.", "Figure: Left: Schematic view of the position and observation direction of the Suzaku satellite relative to the Earth's magnetosphere.", "Right: Time dependence of $\\left( B_\\perp L\\right)^2$ in a Lockman hole observation.", "Gray hatched regions show periods of Earth occultation, i.e., when the Earth lies between a target and Suzaku.", "To estimate the XDB spectrum, we consider blank sky data from four deep fields selected from the Suzaku archives as tabulated in Table REF .", "The selection criteria are as follows.", "No bright sources in the FoV of the Suzaku X-ray Imaging Spectrometer (XIS) , and compact sources in the FoV are already identified and can be masked in our analysis.", "Galactic latitudes of $|b| > 20^\\circ $ to avoid X-ray emission originating from sources in the Galactic disk .", "Sufficiently distant from regions of high X-ray diffuse emissions such as the North Polar Spur.", "Exposure time obtained by standard processing should be more than 200 ksec.", "The above criteria are met by the following four fields, also shown in Table REF .", "First, we use the multiple observation data in the Lockman hole field, a famous region with minimum neutral hydrogen column density that was observed annually with Suzaku for calibration.", "We also use the data in the South Ecliptic Pole (SEP) and North Ecliptic Pole (NEP) fields.", "Finally, we use the data in the field of a high-latitude neutral hydrogen cloud, the so-called MBM16 field.", "Table: ALP parameter constraints in this paper in the Universal 
continuous ALP (cyan) and Galactic monochromatic ALP (yellow) cases; see details in text.", "Limits of other experiments are taken from ." ] ]
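The $(B_\perp L)^2$ values quoted in the text come from IGRF-12 evaluated along each line of sight. A much cruder centered-dipole sketch already reproduces the quoted order of magnitude ($B_\perp L \sim 10^2$ T m). The dipole moment, the equatorial satellite position, and the radial viewing direction below are illustrative assumptions, not the paper's actual observation geometry.

```python
import math

RE = 6.371e6   # Earth radius [m]
M = 8.0e22     # approximate Earth dipole moment [A m^2]
MU0_4PI = 1e-7

def b_dipole(r):
    """Centered-dipole magnetic field [T] at position r (3-vector, meters), moment along -z."""
    m = (0.0, 0.0, -M)
    rn = math.sqrt(sum(c * c for c in r))
    rhat = tuple(c / rn for c in r)
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    return tuple(MU0_4PI * (3.0 * mdotr * ri - mi) / rn ** 3 for ri, mi in zip(rhat, m))

def b_perp_L(pos, e_los, L=6 * 6.371e6, n=2000):
    """Midpoint-rule integral of |B x e_los| along the line of sight, in T*m."""
    total, dl = 0.0, L / n
    for i in range(n):
        p = tuple(pos[j] + (i + 0.5) * dl * e_los[j] for j in range(3))
        bx, by, bz = b_dipole(p)
        ex, ey, ez = e_los
        cross = (by * ez - bz * ey, bz * ex - bx * ez, bx * ey - by * ex)
        total += math.sqrt(sum(c * c for c in cross)) * dl
    return total

sat = (RE + 570e3, 0.0, 0.0)  # satellite at 570 km altitude over the magnetic equator
los = (1.0, 0.0, 0.0)         # looking radially outward, out to 6 R_E
val = b_perp_L(sat, los) ** 2
print(f"(B_perp L)^2 = {val:.3g} T^2 m^2")
```

For this particular (arbitrary) geometry the result lands near the lower end of the $10^4$–$10^5$ T$^2$ m$^2$ range quoted in the text; the actual value modulates with the satellite position along the orbit.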
1906.04429
[ [ "Using synthetic networks for parameter tuning in community detection" ], [ "Abstract Community detection is one of the most important and challenging problems in network analysis.", "However, real-world networks may have very different structural properties and communities of various nature.", "As a result, it is hard (or even impossible) to develop one algorithm suitable for all datasets.", "A standard machine learning tool is to consider a parametric algorithm and choose its parameters based on the dataset at hand.", "However, this approach is not applicable to community detection since usually no labeled data is available for such parameter tuning.", "In this paper, we propose a simple and effective procedure allowing one to tune hyperparameters of any given community detection algorithm without requiring any labeled data.", "The core idea is to generate a synthetic network with properties similar to a given real-world one, but with known communities.", "It turns out that tuning parameters on such a synthetic graph also improves the quality for a given real-world network.", "To illustrate the effectiveness of the proposed algorithm, we show significant improvements obtained for several well-known parametric community detection algorithms on a variety of synthetic and real-world datasets." 
], [ "Introduction", "Community structure, which is one of the most important properties of complex networks, is characterized by the presence of groups of vertices (called communities or clusters) that are better connected to each other than to the rest of the network.", "In social networks, communities are formed based on common interests or on geographical location; on the Web, pages are clustered based on their topics; in protein-protein interaction networks, clusters are formed by proteins having the same specific function within the cell, and so on.", "Being able to identify communities is important for many applications: recommendations in social networks, graph compression, graph visualization, etc.", "The problem of community detection has several peculiarities making it hard to formalize and, consequently, hard to develop a good solution for.", "First, as pointed out in several papers, there is no universal definition of communities [9].", "As a result, there are no standard procedures for comparing the performance of different algorithms.", "Second, real-world networks may have very different structural properties and communities of various nature.", "Hence, it is impossible to develop one algorithm suitable for all datasets, as discussed in, e.g., [23].", "A standard machine learning tool applied in such cases is to consider a parametric algorithm and tune its parameters based on the given dataset.", "Parameters which have to be chosen by the user based on the observed data are usually called hyperparameters and are often tuned via cross-validation, but this procedure requires a training part of the datasets with available ground truth labels.", "However, the problem of community detection is unsupervised, i.e., no ground truth community assignments are given, so standard tuning approaches are not applicable and community detection algorithms are often non-parametric.", "We present a surprisingly simple and effective method for tuning hyperparameters of 
any community detection algorithm which requires no labeled data and chooses suitable parameters based only on the structural properties of a given graph.", "The core idea is to generate a synthetic network with properties similar to a given real-world one, but with known community assignments, hence we can optimize the hyperparameters on this synthetic graph and then apply the obtained algorithm to the original real-world network.", "It turns out that such a trick significantly improves the performance of the initial algorithm.", "To demonstrate the effectiveness and the generalization ability of the proposed approach, we applied it to three different algorithms on various synthetic and real-world networks.", "In all cases, we obtained substantial improvements compared to the algorithms with default parameters.", "However, since communities in real-world networks cannot be formally defined, it is impossible to provide any theoretical guarantees for those parameter tuning strategies which do not use labeled data.", "As a result, the quality of any parameter tuning algorithm can be demonstrated only empirically.", "Based on the excellent empirical results obtained, we believe that the proposed approach captures some intrinsic properties of real-world communities and would generalize to other datasets and algorithms." ], [ "Background and related work", "During the past few years, many community detection algorithms have been proposed, see [6], [7], [9], [17] for an overview.", "In this section, we take a closer look at the algorithms and concepts used in the current research." 
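The tuning procedure described in the introduction can be sketched end-to-end in a few dozen lines. In the sketch below, a one-level greedy local-move modularity optimizer and the pairwise Rand index stand in for a full community detection algorithm and similarity measure; both are simplifications for illustration, not the implementations evaluated in the paper.

```python
import random, itertools

def planted_partition(sizes, p_in, p_out, seed=0):
    """Random graph with known communities: edge prob p_in inside, p_out across."""
    rng = random.Random(seed)
    labels = [c for c, s in enumerate(sizes) for _ in range(s)]
    adj = {i: set() for i in range(len(labels))}
    for i, j in itertools.combinations(range(len(labels)), 2):
        if rng.random() < (p_in if labels[i] == labels[j] else p_out):
            adj[i].add(j); adj[j].add(i)
    return adj, labels

def local_move_partition(adj, gamma, sweeps=10):
    """One-level greedy modularity maximization with resolution gamma."""
    m = sum(len(v) for v in adj.values()) / 2
    deg = {i: len(adj[i]) for i in adj}
    com = {i: i for i in adj}              # start from singletons
    Dc = {i: deg[i] for i in adj}          # total degree per community
    for _ in range(sweeps):
        moved = False
        for i in adj:
            old = com[i]
            Dc[old] -= deg[i]
            links = {}                     # edges from i to each neighboring community
            for j in adj[i]:
                links[com[j]] = links.get(com[j], 0) + 1
            best, best_gain = old, links.get(old, 0) - gamma * deg[i] * Dc[old] / (2 * m)
            for c, k in links.items():
                gain = k - gamma * deg[i] * Dc[c] / (2 * m)
                if gain > best_gain:
                    best, best_gain = c, gain
            com[i] = best
            Dc[best] += deg[i]
            moved |= best != old
        if not moved:
            break
    return [com[i] for i in range(len(adj))]

def rand_index(a, b):
    """Fraction of vertex pairs on which two partitions agree."""
    pairs = list(itertools.combinations(range(len(a)), 2))
    return sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs) / len(pairs)

# Tune gamma on a synthetic graph with known planted labels ...
adj, truth = planted_partition([20, 20, 20], p_in=0.4, p_out=0.05, seed=1)
best_gamma = max([0.5, 1.0, 2.0, 4.0],
                 key=lambda g: rand_index(local_move_partition(adj, g), truth))
# ... then run the detector with best_gamma on the real network of interest.
print("selected gamma:", best_gamma)
```

In the actual procedure, the synthetic graph is generated to match the structural properties of the real network at hand, rather than with the fixed toy parameters used here.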
], [ "Modularity", "Let us start with some notation.", "We are given a graph $G = (V,E)$ , $V$ is a set of $n$ vertices, $E$ is a set of $m$ undirected edges.", "Denote by $\\mathcal {C}$ a partition of $V$ into several disjoint communities: $\\mathcal {C} = \\lbrace C_1, \\ldots , C_k\\rbrace $ .", "Also, let $m_{in}$ and $m_{out}$ be the number of intra- and inter-cluster edges in a graph $G$ partitioned according to $\\mathcal {C}$ .", "Finally, $d(i)$ denotes the degree of a vertex $i$ and $D(C) = \\sum _{i \\in C} d(i)$ is the overall degree of a community $C \\in \\mathcal {C}$ .", "Modularity is a widely used measure optimized by many community detection algorithms.", "It was first proposed in [21] and is defined as follows: $Q(G,\\gamma ) = \\frac{m_{in}}{m} - \\frac{\\gamma }{4 m^2} \\sum _{C \\in \\mathcal {C}} D(C)^2,$ where $\\gamma $ is a resolution parameter \\cite {lancichinetti2011limits}.", "The intuition behind modularity is the following: the first term in (\\ref {eq:modularity}) is the fraction of intra-cluster edges, which is expected to be relatively high for good partitions, while the second term penalizes this value for having too large communities.", "Namely, the value $\\frac{\\sum _{C \\in \\mathcal {C}} D(C)^2}{4 m^2}$ is the expected fraction of intra-cluster edges if we preserve the degree sequence but connect all vertices randomly, i.e., if we assume that our graph is constructed according to the configuration model \\cite {molloy1995critical}.", "Modularity was originally introduced with $\\gamma = 1$ and many community detection algorithms maximizing this measure were proposed.", "However, it was shown in \\cite {fortunato2007resolution} that modularity has a resolution limit, i.e., algorithms based on modularity maximization are unable to detect communities smaller than some size.", "Adding a resolution parameter allows one to overcome this problem: larger values of $\\gamma $ in general lead to smaller communities.", "However, tuning $\\gamma $ is a challenging task.", "In this paper, we propose a solution to this problem." ], [ "Modularity optimization and 
Louvain algorithm", "Many community detection algorithms are based on modularity optimization.", "In this paper, as one of our base algorithms, we choose arguably the most well-known and widely used greedy algorithm called Louvain [4].", "It starts with each vertex forming its own community and works in several phases.", "To create the first level of a partition, we iterate through all vertices and for each vertex $v$ we compute the gain in modularity coming from removing $v$ from its community and putting it to each of its neighboring communities; then we move $v$ to the community with the largest gain, if it is positive.", "When we cannot improve modularity by such local moves, the first level is formed.", "After that, we replace the obtained communities by supervertices connected by weighted edges; the weight between two supervertices is equal to the number of edges between the vertices of the corresponding communities.", "Then we repeat the process described above with the supervertices and form the second level of a partition.", "After that, we merge the supervertices again, and so on, as long as modularity increases.", "The Louvain algorithm is quite popular since it is fast and was shown to provide partitions of good quality.", "However, by default, it optimizes modularity with $\\gamma = 1$ , therefore, it suffers from a resolution limit." 
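The modularity value being optimized above can be computed directly from an edge list. The following is a minimal sketch of the formula $Q(G,\gamma) = m_{in}/m - \gamma \sum_C D(C)^2 / (4m^2)$; the function name and edge-list representation are our own illustration, not the paper's implementation:

```python
from collections import defaultdict

def modularity(edges, part, gamma=1.0):
    """Q(G, gamma) = m_in/m - gamma/(4 m^2) * sum over communities of D(C)^2.

    edges: list of undirected edges (u, v); part: dict vertex -> community id.
    """
    m = len(edges)
    m_in = sum(1 for u, v in edges if part[u] == part[v])
    # Vertex degrees from the edge list.
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Overall degree D(C) of each community.
    D = defaultdict(int)
    for v, d in deg.items():
        D[part[v]] += d
    penalty = sum(d * d for d in D.values()) / (4.0 * m * m)
    return m_in / m - gamma * penalty
```

For two triangles joined by a single bridge edge and partitioned into the two triangles, $m = 7$, $m_{in} = 6$, and each community has overall degree 7, so $Q = 6/7 - \gamma/2$; larger $\gamma$ penalizes the same partition more strongly, matching the resolution-limit discussion above.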
], [ "Likelihood optimization methods", "Likelihood optimization algorithms are also widely used in community detection.", "Such methods are mathematically sound and have theoretical guarantees under some model assumptions [3].", "The main idea is to assume some underlying random graph model parameterized by community assignments and find a partition $\\mathcal{C}$ that maximizes the likelihood $P(G|\\mathcal{C})$, which is the probability that a graph generated according to the model with communities $\\mathcal{C}$ exactly equals $G$ .", "The standard random graph model assumed by likelihood maximization methods is the stochastic block model (SBM) or its simplified version, the planted partition model (PPM).", "In these models, the probability that two vertices are connected by an edge depends only on their community assignments.", "Recently, the degree-corrected stochastic block model (DCSBM) together with the degree-corrected planted partition model (DCPPM) were proposed [12].", "These models take into account the observed degree sequence of a graph, and, as a result, they are more realistic.", "It was also noticed that if we fix the parameters of DCPPM, then likelihood maximization based on this model is equivalent to modularity optimization with some $\\gamma $  [22].", "Finally, in a recent paper [24] the independent LFR model (ILFR) was proposed and analyzed.", "It was shown that ILFR gives a better fit for a variety of real-world networks [24].", "In this paper, to illustrate the generalization ability of the proposed hyperparameter tuning strategy, in addition to the Louvain algorithm, we also use parametric likelihood maximization methods based on PPM and ILFR."
], [ "LFR model", "Our parameter tuning strategy is based on constructing a synthetic graph structurally similar to the observed network.", "To do this, we use the LFR model [14] which is the well-known synthetic benchmark usually used for comparison of community detection algorithms.", "LFR generates a graph with power-law distributions of both degrees and community sizes in the following way.", "First, we generate the degrees of vertices by sampling them independently from the power-law distribution with exponent $\\gamma _d$ , mean $\\bar{d}$ and with maximum degree $d_{max}$ .", "Then, using a mixing parameter $\\hat{\\mu }$ , $0 < \\hat{\\mu }< 1$ , we obtain internal and external degrees of vertices: we expect each vertex to share a fraction $1-\\hat{\\mu }$ of its edges with the vertices of its community and a fraction $\\hat{\\mu }$ with the other vertices of the network.", "After that, the sizes of the communities are sampled from a power-law distribution with exponent $\\gamma _C$ and minimum and maximum community sizes $C_{min}$ and $C_{max}$ , respectively.", "Then, vertices are assigned to communities such that the internal degree of any vertex is less than the size of its community.", "Finally, the configuration model [19] with rewiring steps is used to construct a graph with a given degree sequence and with the required fraction of internal edges.", "The detailed description of this procedure can be found in [14]." 
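Two building blocks of this pipeline, drawing values from a truncated power law and recovering the exponent by a least-squares fit in log-log scale, can be sketched as follows. We use continuous samples and fit the complementary cumulative distribution for simplicity; all names are ours, and this is only an illustration of the idea, not the LFR reference implementation:

```python
import math
import random

def sample_powerlaw(n, gamma, x_min, x_max, rng=random):
    """Inverse-transform sampling from p(x) ~ x^(-gamma) truncated to [x_min, x_max].

    Assumes gamma != 1 so the CDF integrates to a power of x.
    """
    a = 1.0 - gamma
    lo, hi = x_min ** a, x_max ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

def fit_exponent(samples, min_tail=10):
    """Estimate gamma from the slope of the complementary CDF in log-log scale.

    For p(x) ~ x^(-gamma) the CCDF scales as x^(1-gamma), so gamma = 1 - slope.
    Points with fewer than min_tail samples in the tail are dropped as too noisy.
    """
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(x), math.log((n - i) / n))
           for i, x in enumerate(xs) if n - i >= min_tail]
    mx = sum(px for px, _ in pts) / len(pts)
    my = sum(py for _, py in pts) / len(pts)
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, py in pts))
    return 1.0 - slope
```

With the parameters used in the paper for degrees ($\gamma_d = 2.5$), samples drawn by `sample_powerlaw` are recovered by `fit_exponent` to within a few tenths, which is the level of accuracy the surrogate construction needs.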
], [ "Tuning parameters", "Assume that we are given a graph $G$ and our aim is to find a partition $\\mathcal{C}$ of its vertex set into disjoint communities.", "To do this, we have a community detection algorithm $\\mathcal {A}_{\\theta }$, where $\\theta $ is a set of hyperparameters.", "Let $\\theta _0$ be the default hyperparameters.", "Assume that we are also given a quality function $Q(\\mathcal{C}_{\\mathcal {A}},\\mathcal{C}_{GT})$ allowing one to measure the goodness of a partition $\\mathcal{C}_{\\mathcal {A}}$ obtained by $\\mathcal {A}_{\\theta }$ compared to the ground truth partition $\\mathcal{C}_{GT}$.", "Ideally, we would like to find $\\theta _{opt} = \\operatornamewithlimits{arg\\,max}_\\theta Q(\\mathcal{C}_{\\mathcal {A}_{\\theta }},\\mathcal{C}_{GT})$.", "However, we cannot do this since $\\mathcal{C}_{GT}$ is not available.", "Therefore, we propose to construct a synthetic graph $G^{\\prime }$ which has structural properties similar to $G$ and also has known community assignments.", "For this purpose, we use the LFR model described in Section~\\ref {sec:lfr}.", "To apply this model, we have to define its parameters, which can be divided into \\textit {graph-based} ($n$, $\\gamma _d$, $\\bar{d}$, $d_{max}$) and \\textit {community-based} ($\\gamma _C$, $C_{min}$, $C_{max}$, $\\hat{\\mu }$).", "Graph-based parameters are easy to estimate: $n = |V(G)|$ is the number of vertices in the observed network; $\\bar{d} = \\frac{2|E(G)|}{n}$ is the average degree; $d_{max}$ is the maximum degree in $G$ ; $\\gamma _d$ is the exponent of the power-law degree distribution; we estimate this parameter by fitting the power-law distribution to the cumulative degree distribution (we minimize the sum of the squared residuals in log-log scale).", "Community-based parameters contain some information about the community structure, which is not known for the graph $G$ .", "However, we can try to approximate these parameters by applying the algorithm $\\mathcal {A}_{\\theta _0}$ with default parameters to $G$ .", "This would give us some partition $\\mathcal{C}_0$ which can be used to estimate the remaining parameters: $\\hat{\\mu }= \\frac{m_{out}}{m}$ is the mixing parameter, i.e., the fraction of inter-community edges in $G$ partitioned according to $\\mathcal{C}_0$; $\\gamma _C$ is the exponent of the power-law
community size distribution; we estimate this parameter by fitting the power-law distribution to the cumulative community size distribution obtained from $\\mathcal{C}_0$ (we minimize the sum of the squared residuals in log-log scale); $C_{min}$ and $C_{max}$ are the minimum and maximum community sizes in $\\mathcal{C}_0$.", "We generate a graph $G^{\\prime }$ according to the LFR model with parameters specified above.", "Using $G^{\\prime }$ we can tune the parameters to get a better value of $\\theta $ : $\\theta _{opt} = \\operatornamewithlimits{arg\\,max}_\\theta Q(\\mathcal{C}_{\\mathcal {A}_{\\theta }},\\mathcal{C}_{GT}^{\\prime })\\,,$ where $\\mathcal{C}_{GT}^{\\prime }$ is the known ground truth partition for $G^{\\prime }$ and $\\mathcal{C}_{\\mathcal {A}_{\\theta }}$ is a partition of $G^{\\prime }$ obtained by $\\mathcal {A}_{\\theta }$ .", "It turns out that this simple idea leads to a universal method for tuning $\\theta $ , which successfully improves the results of several algorithms $\\mathcal {A}_{\\theta }$ on a variety of synthetic and real-world datasets, as we show in Section .", "Algorithm (Hyperparameter tuning): (1) estimate graph-based parameters: $n, \\bar{d}, d_{max}, \\gamma _{d} \\leftarrow EstimateGraphParams(G)$ ; (2) compute the default partition: $\\mathcal{C}_0 \\leftarrow \\mathcal {A}_{\\theta _0}(G)$ ; (3) estimate community-based parameters: $\\hat{\\mu },\\gamma _C, C_{min}, C_{max} \\leftarrow EstimateCommunityParams(G,\\mathcal{C}_0)$ ; (4) set $ParamsList \\leftarrow \\emptyset $ ; (5) for each of $n_{graphs}$ iterations: generate $G^{\\prime }, \\mathcal{C}_{GT}^{\\prime } \\leftarrow GenerateLFR(n, \\bar{d}, d_{max}, \\gamma _{d}, \\hat{\\mu },\\gamma _C, C_{min}, C_{max})$ ; set $QualityList \\leftarrow \\emptyset $ ; for each $\\theta \\in \\lbrace \\theta _i\\rbrace _{i=1}^l$ : run $\\mathcal {A}_{\\theta }(G^{\\prime })$ $n_{runs}$ times, compute the mean of the qualities $Q(\\mathcal{C}_{\\theta },\\mathcal{C}_{GT}^{\\prime })$ , and add this mean to $QualityList$ ; then add $\\theta _{index}$ with $index \\leftarrow \\operatornamewithlimits{arg\\,max}(QualityList)$ to $ParamsList$ ; (6) return $\\theta = median(ParamsList)$ .", "The
detailed description of the proposed procedure is given in Algorithm .", "Note that in addition to the general idea described above we also propose two modifications improving the robustness of the algorithm.", "The first one reduces the effect of randomness in the LFR benchmark: if the number of vertices in $G$ is small, then a network generated by the LFR model can be noisy and the optimal parameters $\\theta _{opt}$ computed according to Equation (REF ) may vary from sample to sample.", "Hence, we propose to generate $n_{graphs}$ synthetic networks and take the median of the obtained parameters.", "The value $n_{graphs}$ depends on computational resources: larger values, obviously, lead to more stable results.", "Fortunately, as we discuss in Section REF , this effect of randomness is critical only for small graphs, so we do not have to increase computational complexity much for large datasets.", "The second improvement accounts for a possible randomness of the algorithm $\\mathcal {A}_{\\theta }$ .", "If $\\mathcal {A}_{\\theta }$ includes some random steps, then we can increase the robustness of our procedure by running $\\mathcal {A}_{\\theta }$ several times and averaging the obtained qualities.", "The corresponding parameter is called $n_{runs}$ in Algorithm .", "Formally, in this case Equation (REF ) should be replaced by $\\theta _{opt} = \\operatornamewithlimits{arg\\,max}_\\theta \\frac{1}{n_{runs}} \\sum _{i=1}^{n_{runs}} Q(\\mathcal{C}_{\\mathcal {A}_{\\theta },i},\\mathcal{C}_{GT}^{\\prime })\\,,$ where $\\mathcal{C}_{\\mathcal {A}_{\\theta },i}$ is a (random) partition obtained by $\\mathcal {A}_{\\theta }$ on $G^{\\prime }$ .", "If $\\mathcal {A}_{\\theta }$ is deterministic, then it is sufficient to take $n_{runs} = 1$ .", "Note that for the sake of simplicity in Algorithm  we use grid search to approximately find $\\theta _{opt}$ defined in (REF ).", "However, any other method of black-box optimization can be used instead, e.g., random search [2], Bayesian optimization [25],
Gaussian processes [10], sequential model-based optimization [11], and so on.", "More advanced black-box optimization methods can significantly speed up the algorithm.", "Let us discuss the time complexity of the proposed algorithm.", "If the complexity of $\\mathcal {A}_{\\theta }$ is $f(G)$ , then the complexity of Algorithm  is $O\\left(f(G)\\cdot l\\cdot n_{runs}\\cdot n_{graphs}\\right)$ , where $l$ is the number of steps made by the black-box optimization (the complexity of generating $G^{\\prime }$ is usually negligible compared with community detection).", "In other words, the complexity is $n_{runs}\\cdot n_{graphs}$ times larger than the complexity of any black-box parameter optimization algorithm.", "However, as we discuss in Section REF , $n_{runs}$ and $n_{graphs}$ can be equal to one for large datasets.", "Finally, note that it can be tempting to make several iterations of Algorithm  to further improve $\\theta _{opt}$ .", "Namely, in Algorithm  we estimate community-based parameters of LFR using the partition $\\mathcal{C}_0$ obtained with $\\mathcal {A}_{\\theta _0}$ .", "Then, we obtain better parameters $\\theta _{opt}$ .", "These parameters can be further used to get a better partition using $\\mathcal {A}_{\\theta _{opt}}$ and this partition is expected to give even better community-based parameters.", "However, in our preliminary experiments, we did not notice significant improvements from using several iterations; therefore, we propose to use Algorithm  as it is without increasing its computational complexity."
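Stripped of details, the whole tuning loop (surrogate generation, grid search with averaging over $n_{runs}$, median over $n_{graphs}$) can be sketched as below. Here `algo`, `quality`, `estimate_params`, and `generate_lfr` are placeholder callables standing in for the components described above, not the authors' code:

```python
import statistics

def tune_parameter(G, algo, quality, estimate_params, generate_lfr,
                   grid, theta0, n_graphs=3, n_runs=1):
    """Sketch of the tuning algorithm: for each LFR surrogate, pick the grid
    value maximizing the mean quality over n_runs runs, then return the
    median of the per-surrogate winners."""
    p0 = algo(G, theta0)                      # partition with default parameters
    params = estimate_params(G, p0)           # graph- and community-based parameters
    chosen = []
    for _ in range(n_graphs):
        G2, gt = generate_lfr(params)         # surrogate graph + known ground truth
        best = max(grid, key=lambda th: statistics.mean(
            quality(algo(G2, th), gt) for _ in range(n_runs)))
        chosen.append(best)
    return statistics.median(chosen)
```

A deterministic toy instantiation (the "algorithm" simply returns its parameter and quality peaks at 1.0) confirms that the skeleton selects the grid value closest to the optimum and that the median over surrogates is returned.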
], [ "Parametric algorithms", "We use the following algorithms to illustrate the effectiveness of the proposed hyperparameter tuning strategy." ], [ "Louvain", "This algorithm is described in Section REF ; it has the resolution parameter $\\gamma $ with default value $\\gamma _0 = 1$ .", "We take the publicly available implementation from [24] (https://github.com/altsoph/community_loglike), where the algorithm is called DCPPM since modularity maximization is equivalent to the likelihood optimization for the DCPPM random graph model." ], [ "PPM", "This algorithm is based on likelihood optimization for PPM (see Section REF ).", "We use the publicly available implementation proposed in [24], where the Louvain algorithm is used as a basis to optimize the likelihood for several models.", "Since likelihood optimization for PPM is equivalent to maximizing a simplified version of modularity based on the Erdős–Rényi model instead of the configuration model [22], the PPM algorithm also has a resolution parameter $\\gamma $ with the default value $\\gamma _0 = 1$ ."
], [ "ILFR", "This is a likelihood optimization algorithm based on the ILFR model (see Section REF ).", "Again, we use the publicly available implementation from [24].", "The ILFR algorithm has one parameter $\\mu $ , called the mixing parameter, and no default value for this parameter has been proposed in the literature.", "In this paper, we take $\\mu _0 = 0.3$ , which is close to the average mixing parameter in the real-world datasets under consideration (see Section REF ).", "Our experiments confirm that $\\mu _0 = 0.3$ is a reasonable default value for this algorithm.", "Let us stress that in this paper we are not aiming to develop the best community detection algorithm or to analyze all existing methods.", "Our main goal is to show that hyperparameter tuning is possible in the field of community detection.", "We use the several base algorithms described above to illustrate the generalization ability of the proposed approach.", "For each algorithm, our aim is to improve its default parameter by our parameter tuning strategy."
], [ "Synthetic networks", "We generated several synthetic graphs according to the LFR benchmark described in Section REF with $n = 10^4$ , $\\gamma _d = 2.5$ , $\\bar{d} = 20$ , $d_{max} = 200$ , $\\gamma _C = 1.5$ , $C_{min} = 50$ , $C_{max} = 500$ , $\\hat{\\mu }\\in \\lbrace 0.4,0.5,0.6,0.7\\rbrace $ .", "Note that $\\hat{\\mu }>0.5$ does not mean the absence of community structure since usually a community is much smaller than the rest of the network and even if more than a half of the edges for each vertex go outside the community, the density of edges inside the community is still large.", "On the one hand, one would expect results obtained on such synthetic datasets to be optimistic, since the same LFR model is used both to tune the parameters and to validate the performance of the algorithms.", "On the other hand, recall that the most important ingredient of the model, i.e., the distribution of community sizes, is not known and has to be estimated using the initial community detection algorithm, and incorrect estimates may negatively affect the final performance."
], [ "Real-world networks", "We follow the work [24], where the authors collected and shared 8 real-world datasets publicly available in different sources (https://github.com/altsoph/community_loglike/tree/master/datasets).", "For all these datasets, the ground truth community assignments are available and the communities are non-overlapping.", "These networks are of various sizes and structural properties, see the description in Table REF .", "Table: Real-world datasets" ], [ "Evaluation metrics", "In the literature, there is no universally accepted metric for evaluating the performance of community detection algorithms.", "Therefore, we analyze several standard ones [7].", "Namely, we use two widely used similarity measures based on counting correctly and incorrectly classified pairs of vertices: the Rand and Jaccard indices.", "We also consider the Normalized Mutual Information (NMI) of two partitions: if NMI is close to 1, one needs a small amount of information to infer the ground truth partition from the obtained one, i.e., the two partitions are similar."
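Both pair-counting indices and NMI are straightforward to compute. Below is a self-contained sketch; the normalization of NMI by the average of the two entropies is one common convention and may differ from the implementation used in the paper:

```python
import math
from collections import Counter
from itertools import combinations

def pair_counts(a, b):
    """Classify vertex pairs: together in both labelings, in one, or in neither."""
    s11 = s10 = s01 = s00 = 0
    for i, j in combinations(range(len(a)), 2):
        same_a, same_b = a[i] == a[j], b[i] == b[j]
        if same_a and same_b:
            s11 += 1
        elif same_a:
            s10 += 1
        elif same_b:
            s01 += 1
        else:
            s00 += 1
    return s11, s10, s01, s00

def rand_index(a, b):
    s11, s10, s01, s00 = pair_counts(a, b)
    return (s11 + s00) / (s11 + s10 + s01 + s00)

def jaccard_index(a, b):
    s11, s10, s01, s00 = pair_counts(a, b)
    return s11 / (s11 + s10 + s01)

def nmi(a, b):
    """Mutual information of the two labelings over the average of their entropies."""
    n = len(a)

    def entropy(labels):
        return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

    joint = Counter(zip(a, b))
    ca, cb = Counter(a), Counter(b)
    mi = sum(c / n * math.log(n * c / (ca[x] * cb[y])) for (x, y), c in joint.items())
    denom = (entropy(a) + entropy(b)) / 2
    return mi / denom if denom > 0 else 1.0
```

Identical labelings score 1 under all three measures, while a labeling independent of the ground truth drives Jaccard and NMI toward 0, which is the behavior the evaluation relies on.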
], [ "Experimental setup", "We apply the proposed strategy to the algorithms described in Section REF .", "We use grid search to find the parameter $\\theta _{opt}$ (we do this to make our results easier to reproduce and we also need this for the analysis of stability in Section REF ).", "For ILFR we try $\\mu $ in the range $[0,1]$ with step size 0.05 and for Louvain and PPM on real-world datasets we take $\\gamma $ in the range $[0,2]$ with step size $0.1$ .", "Although we noticed that in some cases the optimal $\\gamma $ for PPM and Louvain can be larger than 2, such cases rarely occur on real-world datasets.", "On synthetic graphs, we take $\\gamma $ in the range $[0,4]$ (with step size 0.2) to demonstrate the behavior of $\\gamma _{opt}$ depending on $\\hat{\\mu }$ .", "To guarantee stability and reproducibility of the obtained results, we choose a sufficiently large parameter $n_{runs}$ , although we noticed similar improvements with much smaller values.", "Namely, for Karate, Dolphins, Football, and Political books we take $n_{runs} = 10^3$ , for Political blogs and Eu-core $n_{runs} = 100$ , for Cora, AS, and synthetic networks $n_{runs} = 2$ .", "We take $n_{graphs} = 10^3$ for the four smallest datasets and $n_{graphs} = 100$ for the other ones (we choose such large values to plot the histograms on Figure REF ).", "Finally, note that it is impossible to measure the statistical significance of the obtained improvements on real-world datasets since we have only one copy for each graph.", "However, we can account for the randomness included in the algorithms.", "Namely, Louvain, PPM, and ILFR are randomized, since at each iteration they order the vertices randomly.", "Therefore, to measure whether $\\theta _{opt}$ is significantly better or worse than $\\theta _0$ , we can run each algorithm several times and then apply the unpaired t-test (we use 100 runs in all cases)."
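The unpaired t-test on two sets of algorithm runs amounts to the Welch statistic on the per-run qualities. This is our own minimal illustration of the test applied here, not the authors' testing code (which presumably relies on a standard statistical package):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Unpaired (Welch) t-statistic for two independent samples of run qualities.

    Large |t| indicates that the mean qualities of the two parameter settings
    differ beyond run-to-run noise.
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # unbiased sample variances
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

In the setup above one would feed in, e.g., 100 qualities obtained with $\theta_0$ and 100 obtained with $\theta_{opt}$, and compare $|t|$ against the appropriate t-distribution quantile.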
], [ "Results", "In this section, we first discuss the improvements obtained for each algorithm and then analyze the stability of the parameter tuning strategy and the effect of the parameter $n_{graphs}$ ." ], [ "Louvain algorithm", "In Table REF , for each similarity measure we present the value for the baseline algorithm (with $\\gamma = 1$ ), the value for the tuned algorithm, and the obtained parameter $\\gamma _{opt}$ .", "Since Louvain is randomized, we provide the mean value together with an estimate of the standard deviation, which is given in brackets.", "The number of runs used to compute these values depends on the size of the dataset and on the available computational resources: $10^4$ for Karate, Dolphins, Football and Political books, $10^3$ for Political blogs and Eu-core, 100 for Cora, AS and synthetic datasets.", "Table: Louvain algorithm, default value is $\\gamma _0 = 1$, standard deviation is given in the brackets", "One can see that our tuning strategy improves (or does not change) the results in all cases and the obtained improvements can be huge.", "For example, on Karate we obtain remarkable improvements from $0.761$ to $0.945$ (relative change is $24\\%$ ) according to Rand and from $0.52$ to $0.892$ ($72\\%$ ) according to Jaccard; on Dolphins we get $35\\%$ improvement for Rand and $63\\%$ for Jaccard; on Football we obtain plus $25\\%$ for Jaccard; and so on.", "As discussed in Section REF , we measured the statistical significance of the obtained improvements.", "The results which are significantly better are marked in bold in Table REF .", "On real-world datasets, all improvements except the one for NMI on AS are statistically significant (p-value $\\ll 0.01$ ).", "The results in Tables REF -REF are rounded to three decimals, so there may be a statistically significant improvement even when the numbers in the table are equal.", "Also, standard deviation less than 0.0005 is rounded to zero.", "Let us note that in many cases the results of
the tuned algorithm are much better than the best results reported in [24], where the authors used other strategies for choosing the hyperparameter values.", "For small datasets, our results for the default Louvain algorithm may differ from the ones reported in [24].", "The reason is the high values of standard deviation.", "The authors of [24] averaged the results over 5 runs of the algorithm, while we use more runs, i.e., our average values are more stable.", "For synthetic datasets, we also observe huge improvements and all of them are statistically significant.", "While for $\\hat{\\mu }\\in \\lbrace 0.4, 0.5\\rbrace $ the default algorithm can be considered as good enough, for large values of $\\hat{\\mu }$ , $\\hat{\\mu }\\in \\lbrace 0.6, 0.7\\rbrace $ , the tuned one is much better.", "For example, for LFR-0.7 the tuned parameter gives a Jaccard index almost 4 times larger than the default one.", "We noticed that for most of the datasets the values of $\\gamma _{opt}$ computed using different similarity measures are the same or close to each other.", "However, there are some exceptions.", "The first one is Dolphins, where for Jaccard $\\gamma _{opt} = 0.1$ , for Rand $\\gamma _{opt} = 0.5$ , for NMI $\\gamma _{opt} = 1.0$ .", "We checked that if we take the median value $\\gamma _{opt} = 0.5$ , then for all measures we obtain statistically significant improvements, which seems to be another way to increase the stability of our strategy.", "The most notable case, where $\\gamma _{opt}$ significantly differs for different similarity measures, is the AS dataset, where $\\gamma _{opt} = 1.8 > \\gamma _0$ for Rand, $\\gamma _{opt} = 0.6 < \\gamma _0$ for Jaccard, and $\\gamma _{opt} = 0.8 < \\gamma _0$ for NMI.", "We will further make similar observations for other algorithms on this dataset.", "Such instability may mean that this dataset does not have a clear community structure (which can sometimes be the case for real-world networks [18]).", "Table: PPM algorithm,
default value $\\gamma _0 = 1$, standard deviation is given in the brackets" ], [ "PPM algorithm", "For PPM (Table REF ), our strategy improves the original algorithm for all real-world datasets but Eu-core (for all similarity measures), Karate (only for Jaccard), and Dolphins (only for NMI).", "Note that Karate and Dolphins are the only datasets (except for AS, which will be discussed further in this section), where the obtained values for $\\gamma _{opt}$ are quite different for different similarity measures.", "We checked that if for these two datasets we take the median value of $\\gamma _{opt}$ , (0.8 for Karate and 0.7 for Dolphins), then we obtain improvements in all six cases, five of them, except NMI on Karate, are statistically significant (p-value $\\ll 0.01$ ).", "On Eu-core the quality of PPM with $\\gamma _0 = 1$ is worse than the quality of Louvain with $\\gamma = 1$ .", "This seems to be the reason why PPM chooses a suboptimal parameter $\\gamma _{opt}$ : a partition obtained by PPM does not allow for a good estimate of the community-based parameters.", "As for Louvain, in many cases the obtained improvements are huge: e.g., the relative improvement for the Jaccard index is 147% on Dolphins, 26% on Football, 35% on Political books, 50% on Political blogs, and so on.", "All improvements are statistically significant.", "We also improve the default algorithm on all synthetic datasets and for all similarity measures.", "As for the Louvain algorithm, the improvements are especially huge for large $\\hat{\\mu }$ , $\\hat{\\mu }\\in \\lbrace 0.6,0.7\\rbrace $ .", "All improvements are statistically significant."
], [ "ILFR algorithm", "For real-world datasets, in almost all cases, we obtain significant improvements (see Table REF ).", "One exception is Dolphins for NMI.", "This, again, can be fixed by taking a median of the values $\\mu _{opt}$ obtained for all similarity measures on this dataset: $\\mu _{opt} = 0.15$ improves the results compared to $\\mu _0 = 0.3$ for all three measures.", "Other bad examples are Cora and AS, where Rand and NMI decrease, while Jaccard increases.", "For all other datasets, we obtain improvements.", "In many cases, the difference is huge and statistically significant.", "On synthetic datasets, the default ILFR algorithm is the best among the considered ones.", "In some cases, however, the default algorithm is further improved by our hyperparameter tuning strategy, while in others the difference is not statistically significant.", "Surprisingly, for large values of $\\hat{\\mu }$ the tuned value $\\mu _{opt}$ is much smaller than $\\hat{\\mu }$ .", "For example, for $\\hat{\\mu }= 0.6$ we get $\\mu _{opt} = 0.25$ , although we checked that the estimated parameter used for generating synthetic graphs is very close to $0.6$ .", "For real-world and synthetic networks, the obtained value $\\mu _{opt}$ can be both larger and smaller than $\\mu _0 = 0.3$ .", "Also, for synthetic networks, $\\mu _0$ is close to the obtained $\\mu _{opt}$ .", "We conclude that the chosen default value is reasonable.", "Table: ILFR algorithm, default value $\\mu _0 = 0.3$, standard deviation is given in the brackets", "In rare cases, $\\mu _{opt}$ for a dataset can be quite different for different similarity measures.", "On AS, $\\mu _{opt} = 0$ for Jaccard and $\\mu _{opt} = 1$ for Rand and NMI.", "Note that if $\\mu = 0$ , then the obtained algorithm tends to group all vertices in one cluster, while for $\\mu = 1$ all vertices form their own clusters.", "Interestingly, for the Jaccard index, such a trivial partition outperforms the default algorithm.",
"Moreover, the algorithm putting each vertex in its own cluster has performance close to the best among all algorithms discussed in this section (both default and tuned), according to the Rand index.", "We conclude that AS does not have a clear community structure.", "Figure: The distribution of $\\gamma _{opt}$ for the Louvain algorithm, NMI similarity measure" ], [ "Stability of generated graphs", "As discussed in Section , there are two sources of possible noise in the proposed parameter tuning procedure: 1) for small graphs the generated LFR network can be noisy, which may lead to unstable predictions of $\\theta _{opt}$ , 2) the randomness of $\\mathcal {A}$ may also affect the estimate of $\\theta _{opt}$ in Equation (REF ).", "The effect of the second problem can be understood using Tables REF -REF , where the standard deviations for $\\theta _0$ and $\\theta _{opt}$ are presented.", "To analyze the effect of noise caused by the randomness in LFR graphs and to show that it is more pronounced for small datasets, we looked at the distribution of the parameters $\\theta _{opt}$ obtained for different samples of generated graphs.", "We demonstrate this effect using the Louvain algorithm and NMI similarity measure (see Figure REF ); we take $n_{graphs} = 10^3$ for the four smallest datasets and $n_{graphs} = 100$ for the other ones.", "Except for the AS dataset, which is noisy according to all our experiments, one can clearly see that the variance of $\\gamma _{opt}$ decreases when $n$ increases.", "As a result, we see that for large datasets even $n_{graphs} = 1$ already provides a good estimate for $\\gamma _{opt}$ ."
], [ "Conclusion", "We proposed and analyzed a surprisingly simple yet effective algorithm for hyperparameter tuning in community detection.", "The core idea is to generate a synthetic graph structurally similar to the observed network but with known community assignments.", "Using this graph, we can apply any standard black-box optimization strategy to approximately find the optimal hyperparameters and use them to cluster the original network.", "We empirically demonstrated that such a trick applied to several algorithms leads to significant improvements on both synthetic and real-world datasets.", "Now, being able to tune parameters of any community detection algorithm, one can develop and successfully apply parametric community detection algorithms, which was not previously possible." ], [ "Acknowledgements", "This study was funded by the Russian Foundation for Basic Research according to the research project 18-31-00207 and Russian President grant supporting leading scientific schools of the Russian Federation NSh-6760.2018.1." ] ]
1906.04555
[ [ "Majorana Zero Modes and Bulk-Boundary Correspondence at Quantum\n Criticality" ], [ "Abstract Majorana zero modes are well studied in the gapped phases of topological systems.", "We investigate Majorana zero modes at the topological quantum criticality in a one dimensional topological superconducting model with longer range interaction.", "We identify stable localized Majorana zero modes appearing at criticality under certain conditions.", "The topological invariant number for these non-trivial criticalities is obtained from the zeros of a complex function associated with the Hamiltonian.", "The behavior of the parametric curve at criticalities validates the invariant obtained and accounts for the appearance of Majorana zero modes at criticality.", "The trivial and non-trivial topological nature of criticality due to the presence of a multicritical point causes an unusual topological transition along the critical line.", "We observe and investigate this unique transition in terms of the eigenvalue spectrum.", "The appearance of MZMs at criticality demands an integer value of the topological invariant number in order to validate the concept of bulk-boundary correspondence.", "Hence we propose a scheme to separate the invariant number into fractional and integer contributions to establish bulk-boundary correspondence at criticality."
], [ "Anomalous Bulk-Boundary Correspondence at Topological Quantum Criticality Rahul S Y R Kartik Ranjith Kumar R Poornaprajna Institute of Scientific Research, 4, Sadashivanagar, Bangalore-560 080, India.", "Graduate Studies, Manipal Academy of Higher Education, Madhava Nagar, Manipal-576104, India.", "Sujit Sarkar Poornaprajna Institute of Scientific Research, 4, Sadashivanagar, Bangalore-560 080, India.", "Bulk-boundary correspondence is an important feature of the gapped topological states of quantum matter.", "We study the behavior of the Majorana zero modes (MZMs) at quantum criticality and observe that not all quantum critical lines can host MZMs.", "We characterize different quantum critical lines based on the presence of MZMs on them and also raise the question of the validity of the conventional bulk-boundary correspondence (BBC).", "We find the appearance of anomalous bulk-boundary correspondence (ABBC) at criticality and also find a topological quantum phase transition along a quantum critical line.", "We observe a minimum principle of the topological invariant number at the quantum critical lines separated by two different regions with different topological invariant numbers.", "We argue that the presence of MZMs at the quantum critical lines is a very good platform for topological quantum computation.", "Finally we generalize the appearance of ABBC for further longer range coupling.", "This work provides a new perspective on the topological state of quantum matter.", "Keywords: Topological phase transition, Gapless phase transition, Gapless Majorana zero mode", "Introduction After the discovery of the quantum Hall effect [1], [2], different states of quantum matter have been classified based on their topological properties.", "Among the vast variety of topological phases one can identify an important class called the symmetry protected topological (SPT) phase [3], [4], [5], [6], [7], where two quantum states have distinct topological
properties protected by symmetries of the respective class [8].", "Under this symmetry constraint, one can define topologically equivalent and distinct classes [4].", "Majorana zero modes (MZMs) with well defined bulk-boundary correspondence (BBC) [9], [10], [11], [12] appear at the edge of a gapped 1D topological system, where the topological invariant of the bulk of the respective system is equivalent to the number of edge modes present in the system [9], [13], [14], [15].", "The topological invariant is well defined in the gapped topological phases, but at the critical point/line the system undergoes a topological phase transition at which the topological invariant number is ill-defined.", "This limitation has resulted in the non-characterization of the topological invariant number at criticality in the literature [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], leaving unaddressed several issues raised in the present work.", "Therefore our motivation is at the fundamental level: we are interested in solving the MZM physics and bulk-boundary correspondence (BBC) at topological quantum criticality.", "At the same time we raise the question: can BBC and anomalous bulk-boundary correspondence (ABBC) exist in the same system in different regimes of parameter space?", "We also raise the question of how the presence of MZMs at the quantum critical lines favours topological quantum computation.", "This work clarifies the meaning and significance of the MZM physics for the gapless phase of quantum critical lines.", "Model Hamiltonian The model Hamiltonian of our study is [26], $H = -\\mu \\sum _{i=1}^{N} (1 - 2 c_{i}^{\\dagger }c_{i}) -\\lambda _1 \\sum _{i=1}^{N-1} (c_{i}^{\\dagger }c_{i+1}+ c_{i}^{\\dagger }c_{i+1}^{\\dagger } + h.c.) -\\lambda _2 \\sum _{i=1}^{N-1} ( c_{i-1}^{\\dagger }c_{i+1} +c_{i+1}^{\\dagger } c_{i-1}^{\\dagger } + h.c.),$ where $c_{i}^{\\dagger }$ and $c_{i}$ are
fermionic creation and annihilation operators, respectively.", "$\lambda _1$ and $\lambda _2$ are the nearest- and next-nearest-neighbor couplings and $\mu $ is the on-site chemical potential.", "The nearest-neighbor hopping amplitude $\lambda _1$ is also the amplitude of the nearest-neighbor superconducting gap, and the next-nearest-neighbor hopping amplitude $\lambda _2$ is equal to the next-nearest-neighbor superconducting gap.", "The Hamiltonian $H$ in momentum space reduces to [27], [28], $H_k = \chi _{z} (k) \sigma _z - \chi _{y} (k) \sigma _y,$ where $ \chi _{z} (k) = -2 \lambda _1 \cos k - 2 \lambda _2 \cos 2k + 2\mu ,$ and $ \chi _{y} (k) = 2 \lambda _1 \sin k + 2 \lambda _2 \sin 2k.$ Results of MZM physics: Fig.REF consists of three panels.", "The upper, middle and lower panels show the behavior of MZMs at the critical lines $\lambda _2=\mu -\lambda _1$ , $\lambda _2=\mu +\lambda _1$ and $\lambda _2 = -\mu $ , respectively (the detailed derivation of the critical lines and the real space representation of the Hamiltonian (eq.REF ) is relegated to the supplementary material).", "The upper and lower panels of fig.REF indicate the absence of MZMs at criticality for the $\lambda _2=\mu -\lambda _1$ and $\lambda _2=-\mu $ lines, whereas the middle panel indicates the presence of MZMs at the criticality $\lambda _2=\mu +\lambda _1$ .", "These gapless MZMs appear at criticality during the transition between the gapped phases $w=2$ and $w=1$ .", "During this transition, one of the edge modes delocalizes and merges with the bulk while the other remains localized at criticality.", "At criticality, we derive the condition for the existence of zero modes in the system.", "The analytical expression for the zero modes is given by, $e^{q} = \frac{-\lambda _1 \pm \sqrt{\lambda _1^2 + 4 \mu \lambda _2}}{2 \lambda _2} ,$ (the detailed derivation is relegated to the supplementary material).", "The conditions for the presence and absence of the MZMs are the following, 
$e_{1}^{q} <1 $ , $e_{2}^{q} <1$ and $e_{1}^{q} >1 $ , $e_{2}^{q} <1$ .", "In the present problem, the critical lines $\lambda _2=\mu -\lambda _1$ and $\lambda _2=-\mu $ do not host MZMs since they do not satisfy the condition $e_{1}^{q} >1 $ , $e_{2}^{q} <1$ .", "The critical line $\lambda _2=\mu +\lambda _1$ hosts Majorana zero edge modes since it satisfies the condition $e_{1}^{q} <1 $ , $e_{2}^{q} <1$ .", "Figure: Upper, middle and lower panels correspond to edge mode figures for the $\lambda _2=\mu +\lambda _1$ , $\lambda _2=\mu -\lambda _1$ and $\lambda _2=-\mu $ critical phases. Left, middle and right figures of the respective panels correspond to the probability density of the wavefunction of the edge modes, the eigenvalues plotted with respect to the number of eigenvalues, and the eigenvalue spectrum plotted with respect to the coupling parameter $\lambda _1$ . The dashed lines in the right figures of the upper and lower panels are guides for the eye.", "These MZMs on the critical line exist independently of the bulk gap, which marks a significant difference from the MZMs present in the gapped phases.", "The MZMs present in the gapped phases are characterized by the conventional definition of the winding number and obey the BBC.", "However, MZMs present on a topological quantum critical line demand an alternative definition of the winding number for their characterization.", "The one-to-one correspondence between this winding number and the MZMs at criticality is discussed in the next section, and the appearance of ABBC is verified.", "Topological characterization and ABBC at topological quantum critical lines: The definition of the topological invariant number calculated from the Anderson pseudo-spin Hamiltonian (eq.REF ) is, $w = \left(\frac{1}{\pi }\right)\oint \frac{d\theta _k}{dk} dk,$ where $\theta _k = \arctan \left(\frac{\chi _y}{\chi _z}\right)$ is the topological angle [29], [28], which is ill-defined at criticality; for gapped phases, the winding 
number defined through the topological angle is well defined [28], [29].", "We solve the problem of the winding number and edge mode physics for gapless quantum critical lines from the perspective of complex analysis.", "One can understand the existence of the topological state very briefly in the following way: the Hamiltonian (eq.REF ) is written in the Majorana basis as, $H=-i\left[ -\mu \sum _{j=1}^{N} b_ja_j+ \lambda _1 \sum _{j=1}^{N-1} b_ja_{j+1} +\lambda _2 \sum _{j=1}^{N-2} b_j a_{j+2}\right] ,$ where $a_j$ and $b_j$ are Majorana operators (the detailed derivation is relegated to the supplementary material).", "The general form of the Hamiltonian eq.REF can be written as, $H = \sum _{\alpha =0,1,2} \gamma _{\alpha } b_i a_{i+\alpha },$ which can be written in complex form using the Fourier transformation, $f(k)= \sum _{\alpha =0,1,2} \gamma _{\alpha } e^{ik\alpha }$ , with $z = e^{ik}$ [30].", "The complex function associated with the BDI Hamiltonian can be written as, $ f(z) = \sum _{\alpha =0,1,2} \gamma _{\alpha } z^{\alpha },$ where $\gamma _{\alpha }$ for $\alpha =0,1,2$ is associated with $-\mu $ , $\lambda _1$ and $\lambda _2$ , respectively (the detailed analysis is relegated to the supplementary material).", "The complex function for the Hamiltonian (eq.REF ) is, $f(z) = -\mu + \lambda _1 z +\lambda _2 z^2.$ Here $f(z)$ has two solutions $z_1$ and $z_2$ , $z_{1,2}= \frac{-\lambda _1 \pm \sqrt{\lambda _1^2+4\mu \lambda _2}}{2\lambda _2},$ which are the two zeros of the function $f(z)$ (eq.REF ).", "The topological invariant ($w$ ) can be calculated from the number of zeros of $f(z)$ inside the unit circle (since this model Hamiltonian does not have any poles).", "Zeros inside, on and outside the unit circle correspond to the topological phase, the transition phase and the non-topological phase, respectively.", "In the upper panel of Fig.REF , the zero modes at criticality are presented from the analysis of eq.REF and eq.REF .", "Fig.REF consists of three figures.", "The left, middle and right figures present zeros 
lying inside and outside the unit circle, which correspond to the $w=1$ and $w=0$ critical phases, respectively.", "In the upper panel of Fig.REF , the left and right plots show that no zeros lie inside the unit circle, which corresponds to the absence of MZMs, resulting in a $w=0$ characterization at the criticalities $\lambda _2 = \mu - \lambda _1$ and $\lambda _2=-\mu $ .", "In the middle plot we find an interesting result for the critical line $\lambda _2 = \mu +\lambda _1$ , where one of the zeros lies inside the unit circle while the other lies on the unit circle, which corresponds to the MZM present at criticality.", "Figure: Upper and lower panels are respectively for the topological characterization of the edge mode physics based on eq. 6 and eq. 2. The left, middle and right figures of each panel are respectively for the critical lines $\lambda _2=\mu -\lambda _1$ , $\lambda _2=\mu +\lambda _1$ and $\lambda _2=-\mu $ .", "In the lower panel we present the momentum space representation of the topological characterization based on the Anderson pseudo-spin Hamiltonian (eq. 2).", "We observe that the curve in the parameter space just touches the origin for the leftmost figure; therefore there is no topological phase ($w =0 $ ) for this quantum critical line.", "For the middle figure of this panel one closed curve encloses the origin of the parameter space and the other just touches the origin, i.e., the system is in the topological state with $w=1$ .", "In the right figure two closed curves just touch the origin; therefore there is no topological phase ($ w=0 $ ).", "The MZM present at criticality exists independently of the bulk gap.", "An alternative definition of the winding number results in the characterization of criticality with $w=1$ , which reflects the anomalous correspondence between the gapless bulk and the number of zero modes at the edge.", "Thus the successful characterization of the critical phases proves the appearance of ABBC.", "Here 
we discuss an interesting behavior of the critical line $\lambda _2=\mu -\lambda _1$ , which shows different characterizations in different regions of parameter space.", "We now present the different characterizations at different regions of parameter space for the same critical line $\lambda _2=\mu -\lambda _1$ .", "The upper and lower panels of Fig.REF respectively present the zero mode behavior in the regimes $0>\lambda _2>-1$ and $\lambda _2<-1$ of the critical line $\lambda _2=\mu -\lambda _1$ .", "The upper panel indicates the absence of MZMs in the parameter regime $0>\lambda _2>-1$ and the lower panel indicates the presence of MZMs in the parameter regime $\lambda _2<-1$ of this quantum critical line.", "Figure: Upper and lower figures correspond to the edge mode behavior in the different regimes $0>\lambda _2>-1$ and $\lambda _2<-1$ of the critical line $\lambda _2=\mu -\lambda _1$ . Each panel consists of left and right figures, which correspond to the probability density of the wavefunction of the edge modes and the eigenvalues plotted with respect to the number of eigenvalues.", "In the upper panel of fig.REF , the left, middle and right figures correspond to the zeros plotted on the unit circle in the different regimes $0>\lambda _2>-1$ , $\lambda _2=-1$ and $\lambda _2<-1$ .", "Only the right plot of Fig.REF has a zero inside the unit circle, indicating the presence of an MZM at criticality; this phase is characterized by $w=1$ .", "In the lower panel, the closed curves in the parameter space for the left and middle figures only touch the origin, i.e., for this regime of parameter space there is no topological phase ($ w=0 $ ), without any edge mode.", "The right figure shows two closed curves: one touches the origin of the parameter space and the other encloses it, which presents the topological state ($w =1$ ).", "Figure: Upper and lower panels are respectively for the topological characterization of the 
edge mode physics based on eq. 6 and eq. 2. The left, middle and right figures of each panel are respectively for the different regimes $0>\lambda _2>-1$ , $\lambda _2=-1$ and $\lambda _2<-1$ of the topological quantum critical line $\lambda _2=\mu -\lambda _1$ .", "Since the critical line $\lambda _2=\mu -\lambda _1$ shows different characterizations in different regimes of parameter space, one might expect a topological phase transition along this topological quantum critical line, independent of the bulk gap.", "Figure: Eigenvalue spectrum of the Hamiltonian plotted against the coupling parameter $\lambda _1$ on the critical line $\lambda _2 = \mu -\lambda _1$ for $\lambda _2<0$ . The dashed line represents the zero energy axis.", "In fig.REF , we present the eigenvalue spectrum as a function of the coupling parameter $\lambda _1$ for $\lambda _2 = \mu - \lambda _1$ .", "In the region $1<\lambda _1<2$ , gapless MZMs do not appear, and for $\lambda _1>2$ zero modes appear.", "Hence these phases are characterized as the $w=0$ and $w=1$ phases, respectively.", "Considering both phases as part of the single critical line $\lambda _2 = \mu -\lambda _1$ , we observe a gapless phase transition, with $\lambda _1=2$ being a topological phase transition point between the $w=0$ and $w=1$ phases.", "This topological quantum phase transition along a quantum critical line is, to our knowledge, predicted here for the first time for the topological state of matter, and constitutes an entirely new result of quantum criticality.", "Existence of conventional and anomalous bulk-boundary correspondence: In Fig.REF , we explicitly show the presence of both ABBC and BBC for the model Hamiltonian.", "In the parameter regimes $\lambda _2>\mu +\lambda _1$ (the $w=2$ phase), $\lambda _2<\mu +\lambda _1$ and $\lambda _2>\mu -\lambda _1$ (the $w=1$ phase), and $\lambda _2<\mu -\lambda _1$ with $\lambda _2 < -1$ (the $w=0 $ phase), the system exhibits BBC.", "Figure: Topological 
quantum phase diagram of the model Hamiltonian with three critical lines $\lambda _2=\mu +\lambda _1$ (AB line), $\lambda _2=\mu -\lambda _1$ (AC line) and $\lambda _2=-\mu $ (ED line).", "For the critical lines $\lambda _2=\mu +\lambda _1$ (AB), $\lambda _2=-\mu $ (ED) and $\lambda _2=\mu -\lambda _1$ (AC), the system exhibits ABBC.", "At the criticality $\lambda _2=\mu -\lambda _1$ for $\lambda _2>0$ (AF), the MZMs are absent and hence this is the $w=0$ phase.", "In the $\lambda _2 < 0$ regime of the critical line $\lambda _2=\mu -\lambda _1$ , for $1<\lambda _1<2$ (FD) the gapless phase is characterized as $w=0$ since there are no MZMs at criticality, whereas for $\lambda _1>2$ (DC) the gapless phase is characterized as $w=1$ since MZMs exist at criticality.", "The study of the topological characterization and edge mode behavior of quantum critical lines gives rise to an anomalous correspondence between the zero modes and the topological invariant number.", "In this topological quantum phase diagram we observe a minimum principle of the winding number for all quantum critical lines, i.e., the winding number of a quantum critical line takes the minimum of the winding numbers of the two gapped phases it separates.", "We characterize the topological quantum critical lines and the topological quantum phase transition with $w$ ; one can also characterize the same topological quantum critical lines in terms of $2 w$ , i.e., the number of edge modes corresponding to the topological quantum critical lines.", "This topological characterization is sufficient and elegant to describe the entire physics of BBC and ABBC.", "Thus this study makes clear the meaning and significance of MZM physics in gapless quantum critical lines.", "Effect of further longer-range coupling: We generalize the results to further longer-range coupling.", "In addition to the $w = 0,1,2$ regions we also get winding number $w = 3$ in appropriate regimes 
of the parameter space when the next-next-nearest-neighbor (NNNN) coupling is included in the model Hamiltonian.", "Considering the phase transitions from the gapped phase $w=3$ to $w=2$ or from $w=3$ to $w=1$ , one and two gapless MZMs appear at criticality, respectively.", "In general, when there is a phase transition from $w=n$ to $w=n-1$ or from $w=n$ to $w=1$ , the numbers of gapless MZMs that appear at criticality are 1 and $n-1$ , respectively.", "Clearly, phases with $n \in \mathbb {Z}$ MZMs are allowed by further longer-range Hamiltonians, and all systems that support phases with $n\ge 2$ exhibit both BBC and ABBC in the gapped and gapless phases, respectively.", "Majorana zero modes and topological quantum computation: The presence of a large number of localized MZMs leads to an extensive ground state degeneracy, which can be used to encode information using the braiding technique; this is promising because MZMs are robust to perturbations at the end of the chain, which again is a key feature for achieving topological quantum computation [31], [32], [33].", "The application of MZMs in topological quantum computation begins with exchanging MZMs without disturbing the ground state degeneracy of the system, and a series of such exchanges of MZMs performs an algorithmic operation.", "In the present study we have found the existence of MZMs for topological quantum critical lines where the bulk is gapless but the edge modes are protected.", "We have shown explicitly that the system with longer-range coupling has a large number of quantum critical lines which host zero modes, along with the zero modes in the gapped phases of the system.", "With this, not only the MZMs in the gapped phases but even the MZMs present at criticality can be used for braiding, since MZMs at criticality are also topologically protected.", "Therefore this type of system will have an extensive ground state degeneracy and serves as a very good platform for 
topological quantum computing research.", "Conclusion: In this study we have presented the edge mode physics of topological quantum critical lines.", "To the best of our knowledge this is the first study of the existence of both BBC and ABBC in the same model Hamiltonian system.", "We have found ABBC at the topological quantum critical lines along with BBC in the gapped phase regime.", "We have presented a new kind of topological phase transition along a quantum critical line.", "We have also shown the minimum principle of the winding number at the quantum critical lines.", "We have generalized the results to further longer-range coupling and have shown explicitly how ABBC and BBC occur in topological condensed matter systems.", "For the first time, we have presented the meaning and significance of the Majorana zero mode physics for gapless quantum critical lines.", "Our work provides a new perspective on the topological state of matter.", "We have also highlighted the importance of the appearance of MZMs at criticality for topological quantum computing.", "Acknowledgments S.S. would like to acknowledge DST (EMR/2017/000898) for the support.", "R.S., Y.R.K.", "and R.K.R would like to acknowledge PPISR, RRI library for the books and journals.", "The authors would like to acknowledge Prof. Shivram and Prof. R. Srikanth for insightful discussions and for critically reviewing the manuscript.", "Finally the authors would like to acknowledge ICTS lectures/seminars/workshops/conferences/discussion meetings on different aspects of physics." ] ]
1906.04462
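The zero-counting characterization of the critical lines in the record above can be checked numerically. The sketch below is an illustrative check only, not code from the paper: it assumes $\mu = 1$ and hypothetical parameter choices on each critical line, and it takes the winding number $w$ on a critical line to be, as the text states, the number of zeros of $f(z) = -\mu + \lambda_1 z + \lambda_2 z^2$ lying strictly inside the unit circle (the model has no poles). `numpy` is assumed available.

```python
import numpy as np

def zeros_inside(mu, l1, l2, tol=1e-9):
    """Count zeros of f(z) = -mu + l1*z + l2*z**2 strictly inside the unit circle.

    Per the text, this count gives the winding number w on a critical line.
    np.roots takes coefficients ordered from the highest degree down.
    """
    roots = np.roots([l2, l1, -mu])
    return int(np.sum(np.abs(roots) < 1.0 - tol))

mu = 1.0  # illustrative choice

# Critical line l2 = mu + l1: one zero inside, one on the unit circle -> w = 1.
assert zeros_inside(mu, 1.0, mu + 1.0) == 1

# Critical line l2 = mu - l1 with l2 > 0 (segment AF): no zeros inside -> w = 0.
assert zeros_inside(mu, 0.5, mu - 0.5) == 0

# Same line, 1 < l1 < 2 (segment FD): still w = 0 ...
assert zeros_inside(mu, 1.5, mu - 1.5) == 0
# ... but for l1 > 2 (segment DC) one zero moves inside -> w = 1.
assert zeros_inside(mu, 3.0, mu - 3.0) == 1

# Critical line l2 = -mu: both zeros sit on the unit circle -> w = 0.
assert zeros_inside(mu, 0.5, -mu) == 0
print("zero counts on the critical lines match the reported characterization")
```

On the line $\lambda_2 = \mu - \lambda_1$ the count switches from 0 to 1 as $\lambda_1$ crosses 2, reproducing the gapless topological transition point reported in the text.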
[ [ "Screening by changes in stereotypical behavior during cell motility" ], [ "Abstract Stereotyped behaviors are series of postures that show very little variability between repeats.", "They have been used to classify the dynamics of individuals, groups and species without reference to the lower-level mechanisms that drive them.", "Stereotypes are easily identified in animals due to strong constraints on the number, shape, and relative positions of anatomical features, such as limbs, that may be used as landmarks for posture identification.", "In contrast, the identification of stereotypes in single cells poses a significant challenge as the cell lacks these landmark features, and finding constraints on cell shape is a non-trivial task.", "Here, we use the maximum caliber variational method to build a minimal model of cell behavior during migration.", "Without reference to biochemical details, we are able to make behavioral predictions over timescales of minutes using only changes in cell shape over timescales of seconds.", "We use drug treatment and genetics to demonstrate that maximum caliber descriptors can discriminate between healthy and aberrant migration, thereby showing potential applications for maximum caliber methods in automated disease screening, for example in the identification of behaviors associated with cancer metastasis." ], [ "Introduction", "Moving in the right way at the right time can be a matter of life and death.", "Whether avoiding a predator or searching for food, choosing the correct movements in response to specific stimuli is a crucial part of how an organism interacts with its environment.", "The repetitive, highly coordinated movements that make up behavioral stereotypes have been shown to be entwined with survival strategies in a number of species, for example the incredible correlation in posture between prey capture events in raptors [1] and the escape response of C. 
elegans when exposed to extreme heat [2].", "Understanding these stereotypes is vital to creating a full picture of a species' interactions with its environment.", "If stereotypes represent evolved, selection-driven behavior in animals, might the same not be true for single-celled organisms?", "This point of view may be particularly useful in understanding chemotaxis, the guided movement of a cell in response to a chemical gradient.", "During chemotaxis, eukaryotic cells change their shape through the repeated splitting and extension of actin-rich structures called pseudopods [3], [4], [5].", "Though this behavior is well known, the study of chemotaxis has traditionally focused on the signaling events that regulate cytoskeletal remodeling.", "Even where pseudopods are acknowledged to be relevant, the focus is on the biochemical mechanisms that generate and regulate them [6], [7], [8].", "These mechanisms are, however, staggeringly complex [9] and the way chemotaxis emerges from these lower-level processes remains largely unknown.", "Rather than delving deeper into the network of biochemical interactions, we can instead learn from the shape changes and movements that this intricate machine has evolved to produce.", "Such an approach, also known as morphological profiling, shows great promise in biomedicine [10].", "Here, we explore this question using Dictyostelium discoideum, a model for chemotaxis noted for its reproducible directional migration towards cyclic adenosine monophosphate (cAMP) [11], [12], which it senses using typical G-protein coupled receptors.", "To capture cell shape (or posture) at any given point in time, we employ Fourier shape descriptors, a rotationally invariant method of quantifying shapes used previously to show that cell shape and environment are intrinsically linked [13] (Fig.", "1A).", "These shape data are naturally of high dimensionality, making further analysis difficult.", "We reduce their dimensionality using principal component 
analysis (PCA), a method used previously to obtain the key directions of variability from the shapes of cells [13], [14], [15] and animals (Fig.", "1B) [2], [16], [17].", "Our final challenge (and the focus of this paper) is to quantify behavior, which we define as the movement between shapes.", "There are many potential ways to do so [18], [19], however we have adapted the variational maximum caliber (MaxCal) approach [20], [21] to this end.", "These methods have several advantages over conventional alternatives: Firstly, Fourier descriptors capture all available information on shape, and the subsequent PCA provides a natural quantitative means of discarding its less informative elements.", "Easier methods, such as measuring aspect ratio or eccentricity, require us to assume that they provide a useful description a priori, and cannot inform us how much (or what) we have discarded for the sake of simplicity.", "Secondly, our chosen methods are blind to the researcher's preconceptions, as well as to previous descriptions of shape and behavior.", "Any behavioral modes identified have emerged from the data without our deciding on (and imposing) them, as we might if using supervised machine learning or fitting parameters to a preconceived biochemical model.", "Finally, the minimal model we construct using maximum caliber makes no reference to any underlying biochemistry and therefore cannot make potentially incorrect assumptions about it.", "We demonstrate the usefulness of these methods by showing that they successfully discriminate between the behavior of drug-treated or genetically altered cells and their parental strains.", "Cells continuously change shape as they migrate, creating trajectories in the space of shapes that are specific to their situation.", "For example, we have previously shown that cells follow different shape trajectories in environments with low and high chemoattractant signal-to-noise ratios [13], here defined as the local gradient squared over 
the background concentration (Fig.", "1C).", "In this example, it is important to note that the distributions of cell shape for each condition overlap significantly.", "This means that it is not always possible to accurately determine the cell's condition from a static snapshot of its shape.", "In contrast, the dynamics of shape change in each condition are clearly distinct.", "Our aim here is to quantify the details of these shape changes, making a small set of values that can act as a signature for a given mode of behavior.", "We can then use such signatures to quantitatively compare, or to discriminate between, various conditions or genotypes.", "To this end, we employ the MaxCal method (Fig.", "2A).", "MaxCal was originally proposed by E. T. Jaynes [22] to treat dynamical problems in physics, and is much like his better-known maximum entropy (MaxEnt) method used in equilibrium physics.", "The motivation is the same for both; we wish to find the probability of a given structure for a system in a way that neither assumes something we do not know, nor contradicts something we do know, i.e.", "an observation of the system's behavior.", "In the case of MaxEnt, this is achieved by finding the maximum Shannon entropy with respect to the probabilities of the set of possible configurations the system can take [23].", "MaxCal uses the probabilities of the possible trajectories a dynamical system can follow instead.", "In this case, the entropy is replaced by the caliber $C$ [20], so called because the flow rate in a pipe relates to its caliber, or internal diameter.", "In essence, the method extracts the degree to which different rates or events within the system are coupled, or co-occur beyond random chance.", "This method has previously been used to successfully predict the dynamics of neuronal spiking, the flocking behavior of birds and gene regulatory networks [24], [25], [26].", "Generally, the caliber takes the form $C(\lbrace p_{j} \rbrace ) = -\sum _{j} 
p_{j}\\ln \\left(p_{j}\\right) + \\mu \\sum _{j} p_{j} + \\sum _{n} \\lambda _{n} \\sum _{j} S_{n,j} p_{j} ,$ where $p_{j}$ is the (potentially time-dependent) probability of the $j$ th trajectory.", "The first term on the right-hand side of Eq.", "(REF ) represents a Shannon-entropy like quantity and the second ensures that the $p_{j}$ are normalized.", "The third constrains the average values of some properties $ S_{n, j} $ of the trajectories $ j $ to the values of some macroscopically observed quantities $ \\langle S_{n} \\rangle $ , making sure we do not contradict any known information.", "By maximizing the caliber, the probabilities of the trajectories $p_{j} = Q^{-1} \\exp \\left(\\sum _{n} \\lambda _{n} S_{n,j} \\right)$ are found, where $ Q = \\sum _{j} \\exp (\\sum _{n} \\lambda _{n} S_{n,j}) $ is the dynamical partition function and $ \\lbrace \\lambda _{n}\\rbrace $ are a set of Lagrange multipliers.", "This Boltzmann-type distribution fulfils detailed balance, even for non-equilibrium problems.", "Practically, the problem is to find these Lagrange multipliers (and hence, the partition function).", "To this end, we exploit their relations to the externally observed average values of some quantities $\\langle S_{n} \\rangle = \\frac{\\partial \\ln Q}{\\partial \\lambda _{n}},$ where the values of the $ \\langle S_{n} \\rangle $ are determined from experiment.", "This training process is equivalent to maximum-likelihood fitting to observed data.", "As our interest is in cell shape and motility, we derive our values for the $ \\langle S_{n} \\rangle $ from the shape dynamics of migrating Dictyostelium cells.", "In order to effectively parameterise our model, we must constrain the continuum of possible shape changes to a much smaller set of discrete unit changes in our principal components (PCs).", "We therefore build our model from discretized values of the shape measures PC 1 and 2, assigning them to the variables $N_{1}$ and $N_{2}$ , respectively.", 
"Their values are analogous to particle numbers in a chemical reaction.", "The switching between continuous and discrete variables is possible as $\frac{\sigma _{x}}{\langle N_{x}\rangle }\approx 0.035$ is small, where $x=1,2$ indexes $PC_{x}$ and $\sigma _{x}$ is its standard deviation.", "We reduce the size of the time-step $\delta t$ until we no longer observe changes greater than 1 in a single $\delta t$ (similar to derivations of master equations).", "As PC 2 accounts for less overall variation than PC 1 (see Fig.", "1B), we naturally reach this minimal change for a much larger value of $\delta t$ , which is undesirable because by the time $\delta t$ is small enough for PC 1, changes in PC 2 are almost never observed, making correlations between the two PCs difficult to detect.", "We therefore scale all changes in PC 2 by a factor of $\sigma _1/\sigma _2$ in order that unit changes are observed in both PCs for a similar value of $\delta t$ .", "Practically, our training data yielded a $\delta t$ of 0.1875s (as each 3s frame in the video data was divided into 16 sections, in which the intermediate PC values were linearly interpolated).", "A small $\delta t$ limits the possible macroscopic shape changes in each time-step to the following: an increase, a decrease, or no change in each PC.", "As changes in each PC can be considered independently, this gives us a total of $3\times 3 = 9$ cases (that is, no change from the current position, or a move to any of the 8 neighbouring spaces, see Fig.", "2A inset).", "These macroscopic cases are taken to be the observable effects of an underlying microscopic structure.", "Continuing our analogy of a chemical reaction, we treat increases as particle creation and denote the microscopic variable for an increase in trajectory $j$ as $ S^{x}_{+,j} $ , where $x\in \lbrace 1,2\rbrace $ corresponds to PC 1 and 2, respectively.", "For small $\delta t$ this variable is binary, taking the value 1 when $N_{x}$ increases over a single 
time-step and taking the value 0 otherwise.", "Decreases are treated as particle decay, with $N_{x}$ separate variables $ \lbrace S^{x,i}_{-,j}\rbrace $ used to denote the decay of the $i$ th particle, with $1 \le i \le N_{x}$ .", "These $ \lbrace S^{x,i}_{-,j}\rbrace $ are equal to 1 if the $i$ th particle decays in $\delta t$ and equal to 0 otherwise.", "Hence, in each $\delta t$ there are $N_{x}+2$ possible microtrajectories for each component; an increase, no change, or the removal of any one of the $N_{x}$ particles (Fig.", "2B).", "We choose such a first-order decay over a zeroth-order decay in order to introduce a virtual force, bringing the system back toward the mean (see Fig.", "S2).", "As the two components may change independently, there are $(N_{1}+2)(N_{2}+2)$ possible microtrajectories in a single $\delta t$ over PC 1 and 2.", "Applying Eq.", "3, we constrain the probabilities of these microtrajectories such that they agree with the macroscopically observed rates $\langle S^{x}_{\alpha } \rangle $ , with $\alpha \in \lbrace +,-\rbrace $ an increase or decrease in component $x$ , respectively.", "We then expand the model to include a following time-step, allowing us to capture short-term correlations between events.", "This increases the number of possible trajectories substantially.", "The number of microtrajectories in a given time-step depends on $N_{x}$ at time $t+\delta t$ , and this quantity differs depending on the pathway taken in the first time-step, so we must include this history dependence.", "For example, a reduction in component $x$ can happen in $N_{x}$ ways, and will cause $N_{x}$ to go down by one.", "This change is followed by $(N_{x}-1)+2$ possible microtrajectories in the following time-step.", "Multiplying the quantities for each time-step gives us $N_{x}\big (N_{x}+1\big )$ microtrajectories in which there is a decrease in the first time-step.", "Accounting for the effect of the changing values of $N_{1}$ and 
$N_{2}$ over the interval $[t,t+\delta t]$ in each microtrajectory on the interval starting at time $t+\delta t$ , the number of microtrajectories over $2\delta t$ is $(N_{1}^{2}+3N_{1}+5)(N_{2}^{2}+3N_{2}+5)$ .", "This leads to an additional 16 observables $\langle S_{xy}^{\alpha \beta } \rangle $ , where $x,y\in \lbrace 1,2\rbrace $ are shape PCs and $\alpha , \beta \in \lbrace -,+\rbrace $ denote the type of change in the corresponding component.", "Each observable has a corresponding value in trajectory $j$ of $S^{xy}_{\alpha \beta , j}$ , which is 1 if the correlation is observed and 0 otherwise.", "We can reduce this to 10 time-correlated observables by assuming symmetry under order-reversal, i.e.", "$S^{xy}_{\alpha \beta , j} \equiv S^{yx}_{\beta \alpha , j}$ (Fig.", "2C).", "This assumption is justified: if we consider a negatively correlated movement between PC 1 and PC 2, we may see transitions in the order $1+, 2-, 1+$ .", "Here the two couplets $1+,2-$ and $2-,1+$ both represent the same phenomenon (see Fig.", "S3).", "We constrain our analysis to the first two shape components only, as further components account for relatively little residual variance in shape, whilst increasing computational complexity geometrically.", "As an example, we show the partition function in a single shape component, in which there are 5 observables, $\lbrace \langle S^{+} \rangle ,\langle S^{++} \rangle , \langle S^{+-} \rangle , \langle S^{-} \rangle , \langle S^{--} \rangle \rbrace $ : $Q_{N} &= \gamma ^{+}\big [\gamma ^{+}\gamma ^{++}+1+\big (N+1\big )\gamma ^{-}\gamma ^{+-}\big ] \nonumber \\&+N\gamma ^{-}\big [\gamma ^{+}\gamma ^{+-}+1+\big (N-1\big )\gamma ^{-}\gamma ^{--}\big ] \nonumber \\&+ \gamma ^{+}+1+N\gamma ^{-},$ where $\gamma ^{\alpha }=e^{\lambda ^{\alpha }}$ corresponds to a rate (when divided by $\delta t$ ), with $\lambda ^{\alpha }$ the Lagrange multiplier associated with observable $ \langle S^{\alpha 
$N_{2}$ over interval ${t,t+\\delta t}$ in each microtrajectory on the interval starting at time $t+\\delta t$ , the number of microtrajectories over $2\\delta t$ is $(N_{1}^{2}+3N_{1}+5)(N_{2}^{2}+3N_{2}+5)$ .", "Each observable has a corresponding value in trajectory $j$ of $S^{xy}_{\\alpha \\beta , j}$ , which is 1 if the correlation is observed and 0 otherwise.", "We can reduce this to 10 time-correlated observables by assuming symmetry under order-reversal, i.e.", "$S^{xy}_{\\alpha \\beta , j} \\equiv S^{yx}_{\\beta \\alpha , j}$ (Fig.", "2C).", "This assumption is justified: if we consider a negatively correlated movement between PC1 and PC2, we may see transitions in the order $1+, 2-, 1+$ .", "Here the two couplets $1+,2-$ and $2-,1+$ both represent the same phenomenon (see Fig S3).", "This leads to an additional 16 observables $\\langle S_{xy}^{\\alpha \\beta } \\rangle $ , where $x,y\\in \\lbrace 1,2\\rbrace $ are shape PCs and $\\alpha , \\beta \\in \\lbrace -,+\\rbrace $ denote a change in the component displayed above.", "We constrain our analysis to the first two shape components only, as further components account for relatively little residual variance in shape, whilst increasing computational complexity geometrically.", "As an example, we show the partition function in a single shape component, in which there are 5 observables, $\\lbrace \\langle S^{+} \\rangle ,\\langle S^{++} \\rangle , \\langle S^{+-} \\rangle , \\langle S^{-} \\rangle , \\langle S^{--} \\rangle \\rbrace $ : $Q_{N} &= \\gamma ^{+}\\big [\\gamma ^{+}\\gamma ^{++}+1+\\big (N+1\\big )\\gamma ^{-}\\gamma ^{+-}\\big ] \\nonumber \\\\&+N\\gamma ^{-}\\big [\\gamma ^{+}\\gamma ^{+-}+1+\\big (N-1\\big )\\gamma ^{-}\\gamma ^{--}\\big ] \\nonumber \\\\&+ \\gamma ^{+}+1+N\\gamma ^{-},$ where $\\gamma ^{\\alpha }=e^{\\lambda ^{\\alpha }}$ corresponds to a rate (when divided by $\\delta t$ ), with $\\lambda ^{\\alpha }$ the Lagrange multiplier associated with observable $ \\langle S^{\\alpha 
} \\rangle $ .", "The first line in Eq.", "(REF ) shows all possible transitions that begin with an increase over the first time-step, and so the whole line shares the factor $\\gamma ^{+}$ , the rate of increase.", "A subsequent increase contributes a further $\\gamma ^{+}$ , as well as a coupling term $\\gamma ^{++}$ which allows us to capture the likelihood of adjacent transitions beyond the naive probability $\\gamma ^{+} \\gamma ^{+}$ .", "A subsequent decrease can happen in $N+1$ ways, each linked to the rate of decrease $\\gamma ^{-}$ .", "The term $\\gamma ^{+-}$ is a coupling constant, controlling the likelihood of an adjacent increase and decrease beyond the naive probability $\\gamma ^{+} \\gamma ^{-}$ .", "Finally, the +1 allows for the possibility of no transition occurring in the subsequent time-step.", "The second and third lines correspond to a decrease in the first time-step, and no transition occurring in the first time-step, respectively.", "The Lagrange multipliers corresponding to observables are found using Eq.", "(REF ), which yields a set of equations to be solved simultaneously (see supplementary material for details).", "In the case of a single component, these equations are ${\\begin{@align}{1}{-1}\\langle S^{+} \\rangle &= \\gamma ^{+}\\bigg [2\\gamma ^{+}\\gamma ^{++} + 2 + (2N+1)\\gamma ^{-}\\gamma ^{+-}\\bigg ]\\\\ \\langle S^{-} \\rangle &= \\gamma ^{-}\\bigg [(2N+1)\\gamma ^{+}\\gamma ^{+-}+2N \\\\ &\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; + 2N(N-1)\\gamma ^{-}\\gamma ^{--}\\bigg ]\\end{@align}}\\\\ {\\begin{@align}{1}{-1}\\langle S^{++} \\rangle &= \\gamma ^{+}\\gamma ^{+}\\gamma ^{++}\\\\ \\langle S^{+-} \\rangle &= 2N\\gamma ^{+}\\gamma ^{-}\\gamma ^{+-}\\\\ \\langle S^{--} \\rangle &= N(N-1)\\gamma ^{-}\\gamma ^{-}\\gamma ^{--}.\\end{@align}}\\qquad \\mathrm {6a,6b,6c,6d,6e}$ The equations for the two-component partition function and Lagrange multipliers can be found in the SI.", "This method effectively allows us to build a map of the 
commonality of complex, correlated behaviors relative to basic rates of shape change (as quantified using principal components).", "For a given Lagrange multiplier governing a particular correlation, a value less than zero indicates a behavior that is less common than expected, and a value greater than zero represents a behavior that is more common.", "After training our model on Dictyostelium shape trajectories, we confirmed that the method had adequately captured the observed correlations by using them to simulate the shape changes of untreated cells responding to cAMP.", "In order to illustrate the importance of the correlations, we also ran control simulations trained only on the basic rates of increase and decrease in each PC without these correlations.", "We compared the activities of the uncorrelated and correlated simulations against the observed data.", "The uncorrelated model acts entirely in proportion to the observed rates (though, interestingly, it did not match them; Fig.", "2D).", "In contrast, individual cells from the experimental data show very strong anticorrelation, with increases in one component coupled with decreases in the other.", "This behavior is clearly replicated by the correlated simulations, in both cases appearing in the plot as a red diagonal from the bottom left to the top right.", "Furthermore, we see suppression of turning behavior in both PCs, with the most poorly represented activity (relative to chance) being a switch in direction in either PC (for example 1+ followed by 1-).", "This too is reflected in the correlated simulations.", "The predictive power of MaxCal simulations goes beyond those correlations on which they were directly trained: We tested the simulations' ability to predict repetition of any given transition.", "These patterns took the form of $N$ transitions in $T$ time steps, e.g.", "five 1+ transitions in ten time-steps.", "The MaxCal model predicted frequencies of appearance for these patterns that closely 
resembled the real data (Fig.", "3A, model in red, real data in black).", "In contrast, the uncorrelated model predicted patterns at a much lower rate; for example, there are runs of 5 consecutive increases in PC 1 in the real data at a rate of around one in 1.35 minutes.", "The correlated model predicts this pattern rate to be one in every three minutes.", "The uncorrelated model predicts the same pattern at a rate of one in 6.67 hours.", "This result indicates that no higher-order correlations are required to recapitulate the data, allowing us to avoid the huge increase in model complexity that their inclusion would entail.", "The greater predictive power of the MaxCal model is reflected by its lower Jensen-Shannon divergence from the observed data for these kinds of pattern (Fig.", "3B).", "The MaxCal model also more closely matches the observed probabilities of generating a given number of transitions in a row, with predictions almost perfect up to 4 transitions in a row (twice the length-scale of the measured correlations), and far stronger predictive power than the uncorrelated model over even longer timescales (Fig.", "3C).", "We wondered whether the MaxCal methods would accurately discriminate between biologically relevant conditions.", "To investigate this, we used two comparisons.", "First, we compared shape data from control AX2 cells against the same cells treated with two drugs targeting cytoskeletal function: the phospholipase A2 inhibitor p-bromophenacyl bromide (BPB) and the phosphoinositide 3-kinase inhibitor LY294002 (LY) (for details, see [27]).", "Second, we compared a stable myosin heavy-chain knockout against its parent strain (again AX2) (Fig.", "4A).", "We first looked at the effects of these conditions on the distribution of cell shapes, to see whether their effect could be identified from shape, rather than behavior.", "The drug treatment caused a substantial change in the distribution within the population (Fig.", "4B), but still left a 
substantial overlap.", "In contrast, the $ \\emph {mhcA}^{-} $ cells showed no substantial difference to their parent in shape distribution (Fig.", "4C).", "In both cases, the identification of a condition from a small sample of shape data would not be feasible.", "We then compared the behavioral Lagrange multipliers of each condition, found by MaxCal, producing distributions for the estimated values of these by bootstrapping our data (sampling with replacement).", "The values of $\\gamma _{\\:1\\: 1}^{+-}$ and $\\gamma _{\\:2\\: 2}^{+-}$ are lower in the untreated condition than those in the drug-treated condition, indicating the persistence of shape change in WT cells (Fig.", "4D).", "The anticorrelation between PCs 1 and 2 through pseudopod splitting is reflected in $\\gamma _{\\:1\\: 2}^{+-}$ and $\\gamma _{\\:1\\: 2}^{-+}$ , both of which have values greater than 1 in WT cells.", "In comparison, the drug-treated cells have only a moderate anticorrelation.", "In the $ \\emph {mhcA}^{-} $ strain, the differences in the values of $\\gamma _{\\:1\\:1}^{+-}$ , $\\gamma _{\\:1\\:2}^{+-}$ and $\\gamma _{\\:1\\:2}^{-+}$ when compared with their parent show similar changes to those observed in drug treatment (Fig.", "4E).", "In both cases, the differences highlighted by these dynamical measurements are striking.", "We then applied the MaxCal model to the task of classification.", "We settled upon classification using k-nearest-neighbors (kNN).", "In order to see how the strength of our prediction improved with more data, we classified based on the preferred class of N repeats, all known to come from the same data set.", "We estimated the classification power of our methods by cross validation, dividing the drug-treated data and its control into three sets containing different cells, and dividing the $ \\emph {mhcA}^{-} $ and its parent by the day of the experiment.", "We first performed the classification by shape alone, taking small subsamples of frames from each cell and 
projecting them into their shape PCs, with our distance measure for the kNN being the Euclidean distance in these PCs.", "With one, two or three PCs, we were able to achieve reasonable classification of the drug-treated cells against their parent as data set size increased, with the accuracy of classification leveling off at around 0.85 (with 1 being perfect and 0.5 being no better than random chance, Fig.", "5A-C, blue).", "In contrast, classification of $ \\emph {mhcA}^{-} $ cells was little better than random chance, even with relatively many data (Fig.", "5A-C, green).", "This is unsurprising given the similarity of the distributions of these two conditions.", "We then calculated our MaxCal multipliers for subsamples of each of these groups, bootstrapping 100 estimates from $20\\%$ of each set.", "We then repeated our kNN classification, instead using for a distance measure the two MaxCal values that best separate our training classes.", "As the test data come from entirely separated sources (in the case of drug-treated cells coming from different cells, and in the case of the $ \\emph {mhcA}^{-} $ being taken on different days), we can be confident that we do not conflate our training and test data.", "In both the drug-treated case and the $ \\emph {mhcA}^{-} $ mutant, the dynamics differ very cleanly between our test and control data.", "As such, our classification is close to perfect even for only a few samples (Fig.", "5D).", "As the two Lagrange multipliers that best classified the data both encoded correlations between adjacent time-steps, we guessed that this short-term memory might be key to recapitulating the dynamic properties of cell shape change.", "A key aspect of the shape dynamics of AX2 cells is the anticorrelation between the first two PCs at the single-cell level (which is definitionally absent at the population level, as PCA produces uncorrelated axes).", "To see if memory is vital to recapitulating this dynamical aspect of cell shape change, we 
constructed two versions of the master equation for our MaxCal system (see SI for details).", "The first is Markovian (that is, at a given time the probabilities of each possible next event only depend on the current state of the system).", "We ran Gillespie simulations corresponding to this master equation, and compared the correlations of trajectories from these simulations with those from real data.", "The expected anticorrelation is clearly observed in the data (Fig.", "5E, black line), but the trajectories of our Markovian Gillespie simulations fail to recapitulate it (Fig.", "5E, blue line).", "We then introduced a memory property to the simulations, allowing the probabilities of each possible event to depend on the nature of the previous event (with the very first event taken using the uncorrelated probabilities).", "The model has nine possible states (with each state containing its own set of event probabilities), corresponding to the nine possible events that might have preceded it.", "These are an increase, a decrease, or no change in each PC independently (3x3 events).", "These non-Markovian simulations recovered the distribution of correlations observed in the data (Fig.", "5E, red line).", "This indicates that such features of cell shape change can only be addressed by methods that acknowledge a dependency on past events.", "Eukaryotic chemotaxis emerges from a vast network of interacting molecular species.", "Here, instead of examining the molecular details of chemotaxis in Dictyostelium discoideum, we have inferred properties that capture cell behavior from observations of shape alone.", "For this purpose, we quantified shape using Fourier shape descriptors, reduced these shape data to a small, tractable dimensionality by principal component analysis, and built a minimal model of behavior using the maximum caliber variational method.", "Unlike conventional modeling approaches, such as master equations and their simplifications, our method is intrinsically 
non-Markovian, capturing memory effects and short-term history in the values of the behavioral signature it yields (see SI for further discussion of memory, and a comparison to the master equation).", "Our approach has the advantage of ease, requiring only the observation of what a cell naturally does [14], without tagging or genetic manipulation, as well as of generality, being independent of the specific and poorly understood biochemistry of any one cell type.", "This is important to understanding chemotaxis, as the biochemistry governing this process can vary greatly: for example, the spermatozoa of C. elegans chemotax and migrate with no actin at all [28], but strategies for accurate chemotaxis might be shared among biological systems and cell types.", "A number of recent studies have demonstrated the importance of pairwise or short-scale correlations in determining complex behaviors both in space and time.", "The behavior of large flocks of starlings can be predicted from the interactions of individual birds with their nearest neighbors [29], [30], and the pairwise correlations between neurons can be used to replicate activity at much higher coupling orders, correctly reproducing the coordinated firing patterns of large groups of cells [23].", "Furthermore, cells in many circumstances use short-range spatial interactions to organise macroscopically [31].", "Interestingly, these systems appear to exhibit self-organized criticality [32], [33], in which the nature of their short-range interactions leads to periods of quiescence punctuated by sudden changes.", "This could indicate that the coupling strengths inherent to a system (such as the temporal correlations in our shape modes in Dictyostelium cells) are crucial for complex behavior.", "Absence of this behavior could be an indicator of disease as illustrated by both of our aberrant cell types.", "Here, we employ a very simple classifier to demonstrate the usefulness of our MaxCal multipliers as a measurement by 
which we can classify cell behaviours.", "We choose MaxCal because it is a minimal, statistical approach to modelling a complex phenomenon, allowing high descriptive power with no assumptions made about the underlying mechanism.", "As our understanding of the molecular biology controlling cell shape improves, an interesting alternative would be to use our data in training recurrent neural network (RNN) auto-encoders, a self-supervised method in which the neural network trains a model to accurately represent the input data.", "In particular, long short-term memory RNNs have recently been used to accurately identify mitotic events [34] in cell video data and classes of random migration based on cell tracks [35].", "The two approaches are not mutually exclusive; MaxCal can provide a neat, compressed basis in which to identify behavioural states of cells, whilst RNNs could be used to learn time-series rules for transitions between behavioural states.", "It is increasingly clear that cell shape is a powerful discriminatory tool [10].", "For example, diffeomorphic methods of shape analysis have the power to discriminate between healthy and diseased nuclei [36].", "Shape characteristics can also be used as an indicator of gene expression [15]: an automated, shape-based classification of Drosophila haemocytes recently showed that shape characteristics correlate with gene expression levels, specifically that depletion of the tumor suppressor PTEN leads to populations with high numbers of rounded and elongated cells.", "Of particular note is the observation from this study that genes regulate shape transitions as opposed to the shapes themselves, illustrating the importance of tools to quantify behavior as well as shape.", "This may be an appropriate approach to take if, for example, creating automated assistance for pathologists when classifying melanocytic lesions (a task which has already proved tractable to automated image analyses [37]), as classes are few in number, 
predefined and extensive training data are available.", "A drawback of the method used by [15] is that their classes are decided in advance, and the divisions between them are arbitrary.", "This means that the method cannot find novel important features of shape by definition, as it can only pick between classes decided upon by a person in advance.", "A stronger alternative would be to take some more general description of shape and behavior (such as the one we detail here), which could be used to give biopsied cells a quantitative signature.", "Training would then map these data not onto discrete classes, but onto measured outcomes based on the long-term survival of patients.", "It will be important for any such method to account for the heterogeneity of primary tissue samples as small sub-populations, lost in gross measurements, may be key determinants of patient outcomes.", "Such an approach would allow a classifier to identify signs of disease and metastatic potential not previously observed or conceived of by the researchers themselves.", "As machine learning advances, it will be vital to specify the problem without the insertion of our own biases.", "Then, behavioral quantification will become a powerful tool for medicine." 
], [ "Methods", "Cell culture.", "The cells used in our experiments are either of the Dictyostelium discoideum AX2 strain, or a stable myosin heavy-chain knockout ($ \\emph {mhcA}^{-} $ ) in an AX2 background.", "Cells are grown in a medium containing $10 \\mu g/mL$ Geneticin 418 disulfate salt (G418) (Sigma-Aldrich) and $10 \\mu g/mL$ Blasticidine S hydrochloride (Sigma-Aldrich).", "Cells are concentrated to $ c = 5 \\times 10^{6}$ cells$/mL$ in shaking culture (150 rpm).", "Five hours prior to the experiment, cells are washed with $17 mM $ K-Na PBS pH 6.0 (Sigma-Aldrich).", "Four hours prior to the experiment, cells are pulsed every 6 minutes with 200nM cAMP, and are introduced into the microfluidic chamber at $c = 2.5 \\times 10^{5} $ cells$/mL$ .", "Measurements are performed with cells starved for 5-7 h. Drug-treated cells were exposed to $200pM$ p-bromophenacyl bromide and $50nM$ LY294002.", "No.", "cells sampled in AX2 control, drug-treated, AX2 parent, $ \\emph {mhcA}^{-} $ are, respectively, 313, 23, 858, 198.", "Microfluidics and imaging.", "The microfluidic device is made of a $\\mu $ -slide 3-in-1 microfluidic chamber (Ibidi) as described in (12), with three $0.4 \\times 1.0 mm^{2}$ inflows that converge under an angle of $\\alpha = 32^{\\circ }$ to the main channel of dimension $0.4 \\times 3.0 \\times 23.7 mm^{3}$ .", "Both side flows are connected to reservoirs, built from two $50 ml$ syringes (Braun Melsungen AG), separately connected to a customized suction control pressure pump (Nanion).", "Two micrometer valves (Upchurch Scientific) reduce the flow velocities at the side flows.", "The central flow is connected to an infusion syringe pump (TSE Systems), which generates a stable flow of $1 ml/h$ .", "Measurements were performed with an Axiovert 135 TV microscope (Zeiss), with LD Plan-Neofluar objectives $20x/0.50 N.A.$ and $40x/0.75 N.A.$ (Zeiss) in combination with a DV2 DualView system (Photometrics).", "A solution of $1 \\mu M$ Alexa Fluor 568 
hydrazide (Invitrogen) was used to characterize the concentration profile of cAMP (Sigma-Aldrich) because of their comparable molecular weight.", "Image preprocessing.", "We extracted a binary mask of each cell from the video data using Canny edge detection, thresholding, and binary closing and filling.", "The centroid of each mask was extracted to track cell movement.", "Overlapping masks from multiple cells were discarded in order to avoid unwanted contact effects, such as distortions through contact pressure and cell-cell adhesion.", "For each binary mask, the coordinates (with respect to the centroid) of 64 points around the perimeter were each encoded as a complex number, with each shape therefore recorded as a 64-dimensional vector of the form ${\\bf S} = {\\bf x} + i{\\bf y}$ .", "These vectors were passed through a fast Fourier transform in order to create Fourier shape descriptors.", "Principal component analysis was performed on the power spectra (with the power spectrum $P(f) = |s(f)|^{2}$ for the frequency-domain signal $s(f)$ ) to find the dominant modes of variation.", "This approach is superior to simple descriptors such as circularity and elongation, as key directions of variability within the high-dimensional shape data cannot be known a priori.", "As we have previously reported [13], 90% of Dictyostelium shape variation can be accounted for using the first three principal components (PCs), corresponding to the degree of cell elongation (PC 1), pseudopod splitting (PC 2) and polarization in the shape (PC 3) (Fig.", "1B), with around 85% of variability accounted for in just two, and 80% in one." 
], [ "Acknowledgements", "We are grateful to Börn Meier for sharing his data, and to André Brown, Linus Schumacher and Peter Thomason for a critical reading of the manuscript.", "This work was supported by Cancer Research UK core funding (L.T.", "), the Deutsche Forschungsgemeinschaft (DFG fund HE5958/2-1), the Volkswagen Foundation grant I/85100 (D.H.), and the BBSRC grant BB/N00065X/1 (R.G.E.)", "as well as the ERC Starting Grant 280492-PPHPI (R.G.E.", ")." ], [ "Author contributions statement", "LT and RGE designed the study.", "LT and PW performed the experiments, and LT conducted data analysis and modelling.", "All authors (LT, PW, DH, RHI, RGE) analyzed results and data, and wrote the paper.", "Competing interests All authors declare that there is no conflict of interest, neither financial nor non-financial.", "Figure: Fourier shape descriptors reveal low-dimensional shape space.", "(A) (left) The outline of a live cell is converted to a set of coordinates, which are a function of the distance traveled around the perimeter, ss.", "(right) The original outline can be reconstructed with increasing accuracy by increasing the number nn of included Fourier components.", "We record 64 components for each shape and use all of them in our analyses.", "(B) Principal component analysis (PCA) performed on Fourier spectra of cell shapes from 900 cells in a wide range of chemical gradients reveals that 90(83)% of shape variability in D. 
discoideum can be accounted for in the first three (two) principal components, corresponding to elongation, splitting and polarization in the spatial domain.", "The inset picture above each PC shows its reconstruction in the spatial domain (i.e.", "after reverse Fourier transformation).", "Each is added to (solid line) and subtracted from (dashed line) the mean cell shape descriptor.", "In order to guarantee their invariance when rotated or flipped, we use only the power spectrum of the Fourier component, which renders their reconstructions symmetric.", "We show shapes here that are two standard deviations above (solid lines) and below (dashed lines) the mean shape in each PC.", "For Fourier contributions to each PC, see Fig.", "S1.", "For details on data collection and analysis, see Materials and Methods and .", "(C) Example trajectories in the PC 1 and PC 2 shape space for one low- and one high-signal-to-noise ratio cell.", "Example cell outlines from the two trajectories are superimposed in their correct positions.Figure: MaxCal trained simulations reproduce local correlations.", "(A) The panel shows the trajectory of a cell in shape space over time as it shortens, splits pseudopods, commits to one pseudopod and lengthens again.", "Our aim is to distil this complex behavioral information into a small, quantifiable signature for this behavior in a manner that will yield similar signatures for similar behaviors.", "We subdivide the shape space, and register specific small events when cells cross boundaries.", "The elements of our signature are a series of multipliers that are determined by the rates at which particular events are observed.", "We introduce multipliers for two types of events: simple events, which encode the average rate of increase or decrease in each principal component of shape, and correlated events, which encode how often we see certain combinations of simple events in quick succession.", "For every pair of simple events there is a specific 
multiplier (see C).", "As multipliers for simple events encode the average rate at which transitions between squares occur in any particular direction, the input data for these events are any two adjacent time points, for example the first two red spots.", "Here we see that PC 1 has decreased, increasing the average value of the multiplier λ 1 - \\lambda _{1}^{-}, and PC 2 has increased, thereby increasing the value of λ 2 + \\lambda _{2}^{+}.", "In contrast, the first two yellow points occupy the same square, and as we are encoding the likelihood of transitions, the values of all simple event multipliers are lowered to represent this inactivity.", "Multipliers for correlated events involve three adjacent time points, as they encode the likelihood of any one simple event following another particular simple event.", "For example, the three red dots increase the value of three correlated multipliers:", "λ 11 -- \\lambda _{\\:1\\:1}^{--} due to a decrease in PC 1 being adjacent to another decrease in PC 1, λ 22 ++ \\lambda _{\\:2\\:2}^{++} due to an increase in PC 2 being adjacent to another increase in PC 2, and λ 12 -+ \\lambda _{\\:1\\:2}^{-+} due to a decrease in PC 1 being adjacent to an increase in PC 2.", "The set of yellow dots does not increase the likelihood of any such correlated events as there are no transitions in the first time step.", "The contributions of the whole trajectory can eventually comprise a quantitative signature in which correlated events appear more or less frequently than would be expected by chance (shown below the main diagram, with blue indicating a higher (and red a lower) likelihood).", "The calculation of the magnitudes of these contributions is complicated and makes up much of this paper.", "Note that, though the example trajectory shown here is real, the sub-divisions and time points are instructional only.", "(Inset) In a single timestep (1δt1\\delta t), a cell may be observed to move to any of the eight neighbouring squares, or 
may move to none of them, resulting in nine possible observed trajectories.", "(B) Diagram of all possible trajectories for a single shape component over time interval 2δt2\\delta t. The redundancy of each path is indicated by the number above.", "(C) Correlation parameters inferred from data.", "Lagrange multipliers are found corresponding to increases (λ 1 + \\lambda _{1}^{+}) and decreases (λ 1 - \\lambda _{1}^{-}) in PC 1 and increases (λ 2 + \\lambda _{2}^{+}) and decreases (λ 2 - \\lambda _{2}^{-}) in PC 2.", "Additional Lagrange multipliers controlling the rates at which these events occur in neighboring time-steps are shown in the table, and rates are assumed to be symmetric (i.e.", "event A followed by event B is equivalent to event B followed by event A).", "(D) Co-occurrences of transitions in neighboring time-steps are shown for simulations based only on naive probabilities (without correlations, left), for data (centre) and for simulations that include short-term temporal correlations (right).", "All are scaled relative to the naive rates of transition observed in the data.Figure: MaxCal model reproduces long-term cell behavior (A) (top) Example track of naive transitions.", "Possible patterns are highlighted: three 2+ in a row and four 1- in a row are given as examples.", "This panel is illustrative; though the track shown does correspond to the shape transition in the cartoon, there are spaces in between all transitions here, and actual 3-in-a-row transitions would be even denser.", "(Bottom) Observed versus predicted probabilities of various patterns of transitions for simulations containing (red) or ignoring (blue) short-term temporal correlations.", "The black line corresponds to perfect matching of predicted and observed probabilities.", "Patterns are always for a single transition type (e.g.", "1+ only), and are of the form “N transitions in T time-steps” for varying N and T, e.g.", "three 1+ transitions in four time-steps.", "T runs from 10 to 
20 in unit steps, N runs from T-8 to T in unit steps.", "(B) Jensen-Shannon divergence between patterns of transitions observed in the data and both correlated (red) and uncorrelated (blue) simulations.", "(C) Probability of observing repeated transitions of a single type under three models: the correlated model (red), the uncorrelated model (blue), and the observed data (black).", "Results shown for 1+ (left) and 1- (right).", "The x-axis gives the number of repeats in a row of this transition.Figure: Maximum caliber coupling strengths reflect behavioral differences.", "(A) Representative phase contrast images of AX2 (L) cells and mhcA - \\emph {mhcA}^{-} (R) cells.", "Scale bar is 10μm10\\mu m. (B) Outlines of cells as they move for all conditions in the study.", "We imaged drug-treated cells paired with an AX2 parental control in differential interference contrast (DIC) and the mhcA - mhcA^{-} control with an AX2 parent in phase contrast microscopy, so we present these controls separately.", "Here, for clarity, we show outlines at intervals of one minute (or thirty frames).", "(C) Distributions in shape space of the BPB/LY treated cells (red) and the mhcA - \\emph {mhcA}^{-} strain (green) vs their respective controls (blue).", "Darker red and green areas show where the compared conditions overlap.", "(D) Lagrange multipliers λ x =log 10 γ x \\lambda _x = \\text{log}_{10}\\: \\gamma _x from MaxCal model trained on untreated cells (blue) and p-bromophenacyl bromide (BPB) and LY294002 treated cells (red).", "Untreated cells have coupling values for λ 11 +- \\lambda _{\\:1\\:1}^{+-} and λ 22 +- \\lambda _{\\:2\\:2}^{+-} much lower than 0, indicating the persistence in the direction of cell shape change (or more accurately, the rarity of reversals).", "In contrast, drug treatment causes cells to have coupling values for λ 11 +- \\lambda _{\\:1\\:1}^{+-} and λ 22 +- \\lambda _{\\:2\\:2}^{+-} that are not significantly different to zero, indicating a lack of such 
persistence.", "Untreated cells also have much higher values than drug-treated cells for λ 12 +- \\lambda _{\\:1\\:2}^{+-} and λ 12 -+ \\lambda _{\\:1\\:2}^{-+}, indicating a stronger anticorrelation between the two components.", "Remaining distributions of rates can be found in Fig.", "S4.", "The distributions of all 14 rates differ significantly upon drug treatment at α=0.001\\alpha =0.001 (Kolmogorov-Smirnov test).", "(E) The distributions for the same parameters for mhcA - \\emph {mhcA}^{-} cells (green) against their parental control (blue).", "The changes in λ 11 +- \\lambda _{\\:1\\:1}^{+-}, λ 12 +- \\lambda _{\\:1\\:2}^{+-} and λ 12 -+ \\lambda _{\\:1\\:2}^{-+} show similar changes to those observed in drug treatment.", "These experiments used phase contrast microscopy rather than the DIC used in the drug-treated case, leading to the differences in the extremity of these coupling coefficients.Figure: Discriminators trained on MaxCal behavioral descriptors perform better than shape alone.", "(A-C) Success rates for k-nearest neighbors based on the shapes of cells alone.", "Some number of samples are taken from the same condition and individually classified using the first 1 (A), 2 (B) or 3 (C) shape PCs, with the most commonly chosen classification used to identify the group.", "Classification is performed on both untreated vs BPB/LY treated (blue) and parental vs mhcA - \\emph {mhcA}^{-} (green) data.", "(D) A similar k-nearest neighbor classification to (A-C), but with distances between neighbors based on the values of the two Lagrange multipliers that best separate the training data.", "(E) Degree of correlation between PC 1 and PC 2 across a number of simulated cell trajectories.", "Three lines are shown, corresponding to the data (black), a non-Markovian (red) master equation, in which transition probabilities depend on the last transition made, and a Markovian (blue) master equation, in which they simply depend on the current state.", "The Markovian model 
(without memory) lacks the skew toward negative correlations seen in the data.", "The non-Markovian model (with memory) recovers these." ] ]
1906.04553
[ [ "Can we believe the strong-line abundances in giant HII regions and\n emission-line galaxies?" ], [ "Abstract This review is not a compendium of strong-line methods to derive abundances in giant HII regions.", "It is mostly intended for readers who wish to use such methods but do not have a solid background on the physics of HII regions.", "It is also meant to encourage those using abundance results published in the literature to think more thoroughly about the validity of these results." ], [ "Introduction", "The knowledge of the chemical composition of the interstellar medium of galaxies is fundamental for our understanding of their chemical evolution.", "The royal way to derive the chemical abundances of the elements is considered to be the use of the so-called direct method, which is based on the use of collisionally excited emission lines to compute the gaseous elemental abundances with respect to hydrogen.", "It involves the measurement of the physical conditions (temperature and electron density distribution) in the emitting plasma.", "This method requires accurate measurements of weak temperature-sensitive lines, reliable atomic data involved in the procedure, reliable ionisation correction factors for unobserved ions and an estimate of the element depletion in dust grains.", "The major problem lies in the fact that abundances derived from optical recombination lines in H ii regions are larger than those derived from collisionally excited lines, typically by a factor of two.", "This decades-old problem has so far no generally accepted solution.", "Until it is solved, element abundances in H ii regions cannot be regarded as completely secure, even if important progress has been achieved at all steps of the procedure to derive them.", "The so-called strong line methods to determine abundances in giant H ii regions appeared in 1979.", "[28] and [1] were the first to propose quantitative estimates of the oxygen abundance in the absence of detailed plasma 
diagnostics, by extrapolating observed trends between strong line ratios and oxygen abundance using the few relevant photoionisation models available at that time.", "Since then, with the help of extensive photoionisation model grids, numerous high quality observations confirmed that giant H ii regions form basically a one-parameter sequence driven by their `metallicity' (in the following, the term `metallicity' will be used to designate the oxygen abundance, as is ubiquitously the case in the literature on the subject). One could have thought that, with the continuously improving quality of H ii region spectra, strong-line methods would become obsolete.", "The reverse is happening, due to the deluge of spectroscopic data for H ii regions in close-by galaxies as well as in high-redshift galaxies.", "Most of the time, these data either are of insufficient spectral resolution and signal-to-noise to provide a reliable determination of electron temperatures (especially at high metallicities) or they cover a very limited wavelength range allowing the measurement of only a few selected strong lines.", "The purpose of this presentation is not to review the many strong-line methods that are nowadays available, as these have been amply presented in the literature [41], [4], [22], but rather to draw attention to problems worth remembering.", "Due to space limitations, references in this text are kept to a minimum, and I apologise to all those who contributed to the subject and do not have their work quoted here.", "More complete sets of references are given in the papers mentioned above." 
], [ "Strong line methods: the idea", "Since, in giant H ii regions, strong-line intensity ratios have been shown to present some trends with abundances derived from temperature-based methods, the idea came up that strong lines alone would be sufficient to derive the metallicities (we neglect the issue of reddening, which can easily be accounted for if both H$\alpha $ and H$\beta $ are measured).", "This is true – only statistically, of course – provided some caution is exercised.", "The first difficulty is that any ratio of the kind $I_C/I_R$, where $I_C$ is the intensity of a collisionally excited line and $I_R$ that of a hydrogen recombination line, is double-valued with respect to metallicity.", "One thus has to choose which of the two regimes – high or low metallicity – is appropriate for the object under study.", "The choice is either based on external arguments (e.g.", "H ii regions in the central parts of galaxies are supposed to be metal-rich, low-mass star-forming galaxies are supposed to be metal-poor, etc.)", "or on an argument based more or less directly on the N/O ratio (expected to be large at high metallicities).", "The second difficulty is that H ii regions do not strictly form a one-parameter sequence.", "While it has been shown that there is a relation between the metallicity, the hardness of the ionising radiation field and the mean ionisation parameter of the H ii regions – which are the main parameters determining their spectrum – the relation likely suffers some dispersion.", "For example, giant H ii regions of the same metallicity can be ionised by star clusters of different ages.", "In addition, other parameters (nebular geometry, dust content, ionising photon leakage, etc.)", "also play a role, even if not dominant.", "Their role should be evaluated, especially when one is looking for metallicity differences between two classes of similar objects, for example H ii regions in spiral arms versus H ii regions between spiral arms." 
], [ "The importance of calibration", "Any strong-line method needs to be calibrated, either using data sets with abundances from direct methods, or using photoionisation models, or a combination of both.", "If based on observations, the calibrating sample must represent well the entire family of H ii regions to be studied.", "For example, calibrations based on local, high-density H ii regions should not be used for giant H ii regions.", "Calibrations based on giant H ii regions should not be used to derive the global metallicity of large portions of galaxies, such as observed in the Sloan Digital Sky Survey.", "One of the drawbacks of observational calibrations is that the weak emission lines needed to determine the electron temperature cannot be observed at very high metallicities because of their strong dependence on the excitation energy (even when [O iii]$\lambda 4363$ can be measured but is very weak, it may lead to a completely wrong temperature diagnostic for the entire O$^{++}$ zone, as shown by [39]).", "In addition, it is likely that the calibration samples are biased towards the highest temperatures at a given metallicity.", "The recent use of stacked spectra of emission-line galaxies allows one to obtain high signal-to-noise ratios even for very weak lines [21], [2], [10], [3] and opens the way to observational calibrations for high metallicities, but the results may depend on the stacking parameters, which have to be carefully chosen, as shown by [6].", "Using model grids, one is free from the selection biases mentioned above since photoionisation models can be produced at any metallicity.", "But photoionisation models are a simplified representation of reality and require ingredients that are not fully known.", "For example, the spectral energy distribution of the ionising radiation relies on stellar atmosphere models which are difficult to validate observationally, as well as on a correct description of the stellar initial mass function and stellar 
evolution, for which rotation and the effect of binarity have been introduced only recently.", "The geometrical distribution of the nebular gas does play a role which is generally ignored.", "The ratios of elemental abundances with respect to oxygen are not constant: some dispersion is expected, especially for nitrogen and carbon, since their production sites are not identical.", "The presence of dust grains affects both the chemical composition of the gas phase and the line emission – either directly (because of depletion) or indirectly (because of the effect of grains on the ionization structure and on the nebular temperature).", "Calibrations based on different grids of models may lead to different results.", "Finally, the role of atomic data cannot be ignored.", "They play a crucial role in direct abundance determinations as well as in photoionisation models.", "Even recently, some important atomic data have suffered changes (e.g.", "the collision strengths for [O iii] by [29], later revised by [42] and [43])." ], [ "Indices", "The easiest way to obtain metallicities from strong lines is to use analytic expressions based on some indices, such as the famous ([O iii] + [O ii])/H$\beta $ [28] or [O iii]/[N ii] [1] (which have been recalibrated many times).", "This method is still being used because of its simplicity and minimal observational requirements.", "Other indices have also been proposed since then.", "However, it must be kept in mind that methods based on indices are especially prone to biases which may lead to erroneous astronomical inferences.", "This method has been extended to the use of two indices to account for a second parameter (namely the ionisation level) that influences the H ii region spectra [24], [31].", "It is customary to compute the error bars taking into account only the uncertainties in the line intensities and not the intrinsic statistical uncertainty of the method." 
], [ "Interpolation", "A more elaborate way to obtain abundances from strong lines is to use an iterative approach to interpolate in a grid of models.", "In this very paper, the uncertainties are evaluated by comparison of results using several methods.", "An even more sophisticated approach is to interpolate within a grid by $\chi ^2$ minimization or a related technique using selected line ratios [7].", "Instead of a grid of models, one can also use real objects for which direct abundances are available, based on the assumption that `H ii regions with similar intensities of strong emission lines have similar physical properties and abundances' [32] – which is not entirely true." ], [ "Bayesian methods", "A common belief is that, when many lines are used (i.e.", "not only the few classical strong lines such as [O iii], [O ii], [N ii] with respect to H$\alpha $ or H$\beta $ ) the resulting abundances will be more accurate.", "This is not necessarily correct.", "Considering additional lines from other elements may give more information on the physical conditions of the nebulae, but at the same time introduces a dependence on abundance ratios that are not varied in the reference grid of models.", "In addition, lines such as [S ii], [Ne iii], [Ar iii], [S iii], being often relatively weak, may add noise or biases to the procedure.", "Nevertheless, a treatment of the available information in a Bayesian framework may lead to reliable inferences [9].", "Bayesian methods are becoming more and more popular to determine chemical abundances [5], [49], [44].", "However it is important to realise that each author uses his own philosophy and the results – including the characterisation of the uncertainties – depend on the priors." 
], [ "Machine learning", "The next step, which has already been taken, is to use machine learning techniques to derive abundances – together with other properties [46], [48], [47], [14].", "Such techniques try to make the best use of the information encoded in the observational data by considering not only some pre-selected emission lines, but the entire observation set – without necessarily understanding the underlying physics.", "Machine learning techniques are widely used in many areas of science – including human sciences – when analysing huge amounts of data which depend on a large number of parameters whose role is not fully understood and which are sometimes not even identified.", "It is not clear what their advantage is in the case of abundance determinations except, perhaps, their ability to treat a colossal amount of data in a short time.", "Regarding nebulae and their chemical composition, we know, in principle, what governs the production of the emission lines and even what steps the photons generated in the nebulae undergo before being recorded by our observational devices.", "In fact, machine learning approaches which ignore the physics of the studied phenomena may lead to very strange inferences.", "For example, in the on-line accompanying data of the study by [48], it is stated that `the 6 most informative features are lines connected to H and He transitions (i.e.", "H$\beta $ , H$\delta $ , He i$\lambda 5876$ ) and 3 metals lines (i.e.", "[O ii]$\lambda 3726 + \lambda 3729$ , [Ne iii]$\lambda 3869$ , and [Ar iii]$\lambda 7135$ ).", "[...] the two most informative lines, H$\delta $ and [Ar iii]$\lambda 7135$ , sum up to over 40% of the total feature importance'.", "Such a statement will of course puzzle anyone familiar with the physics of nebulae!" 
], [ "Biases, misinterpretations and mistakes", "The fact that strong-line methods have become more and more sophisticated over the years should not prevent us from keeping a critical eye on their results and on their astrophysical inferences: a blind confidence is not recommended." ], [ "Very indirect abundance estimates", "The real relation between N/O and O/H in the interstellar medium of galaxies is in itself important to know for understanding the production conditions of nitrogen and the chemical history of galaxies [25].", "But it is also critical for a proper determination of the oxygen abundance in most strong-line methods.", "For example, the [N ii]/[O ii] index promoted by [16] to estimate O/H is actually directly related to N/O.", "The same is true for the [O iii]/[N ii] index, although the latter is also linked to the excitation state of the nebula which, empirically, is linked to metallicity.", "Even in the case when the metallicity derivation is based on a library of models, the assumed N/O versus O/H relation plays an important role.", "This has already been mentioned (Section REF ) for the oxygen abundances based on the grid of [8] but is also true for many other works [12].", "So far, N and O are considered independently only in the grids of [30] and [49]." 
], [ "Models are not perfect", "Photoionisation models composing the grids are based on very simple geometries for the gas and the dust.", "Their predicted [O iii]$\lambda 4363$ /[O iii]$\lambda 5007$ line ratios are often smaller than observed (this explains why strong-line methods often give larger abundances than temperature-based methods).", "This may be due to a density distribution that is too schematic, but also to an assumed input spectral energy distribution (SED) of the ionising photons that is too soft, or to the presence of temperature inhomogeneities of unknown nature.", "Note that the long-lasting problem of the presence of He ii lines in metal-poor H ii regions seems to be solved by the presence of high-mass X-ray binaries (not yet routinely considered in most models for giant H ii regions) although contributions from shocks are not excluded [38].", "Another problem is that, so far, strong-line methods based on model grids consider only ionisation-bounded cases while photon leakage from star-forming complexes is now well-documented.", "This issue should be considered for giant H ii regions although, in the case of integrated observations of emission-line galaxies, one also needs to consider the diffuse ionised medium (see Sect.", ")." 
], [ "Gigantic grids are not sufficient", "In methods based on large model grids, one may think that the grids are so extensive that they encompass all the possible situations.", "This is far from true.", "If the grid does not include a reasonable representation of all that is seen in nature, the abundance results may be biased.", "For example, the oxygen abundances derived by [45] based on a library of $2 \times 10^5$ models by [8] were shown by [21] to be strongly overestimated at the high metallicity end.", "Their library assumed a unique value of N/O for all metallicities and the fitting was made with a $\chi ^2$ technique summing over all the available strong lines, so that a mismatch of the [N ii]/H$\alpha $ ratio between observations and models required a modification of the O/H ratio.", "Abundances derived from model grids can be incorrect just because the method is inadequate.", "For example, Fig.", "C4 in [49] demonstrates that the hii-chi-mistry code by [30] may give erroneous oxygen abundances when excluding [O iii]$\lambda 4363$ from the fitting procedure." 
], [ "Fake relations", "Empirical expressions to derive abundances from line ratios can artificially reduce an intrinsic scatter.", "This is exemplified in Fig.", "C5 of [49] in the case of the ON calibration from [32].", "When applied to a sample of models where N/O and O/H are loosely correlated, the ON calibration tightens the resulting relation significantly.", "Using the hii-chi-mistry code for spaxel-analysis of integral-field spectroscopic observations of the blue compact dwarf galaxy NGC 4670, [18] find that N/O anti-correlates with O/H.", "This result is in clear contrast with previous studies on the behaviour of N/O versus O/H, and likely stems from the fact that the observational uncertainties are very large, leading to an apparent anti-correlation between N/O and O/H.", "As a matter of fact, some of the methods to obtain the metallicity are only indirectly linked to this parameter.", "They work just because, in astrophysical situations, other parameters are related to metallicity.", "This is the case of the [N ii]/[O ii] ratio mentioned above, or of [N ii]/H$\alpha $ [11].", "The latter is very convenient because it requires only a very narrow spectral range and does not depend on reddening.", "But it is essentially driven by N/O and the nebular excitation – which both happen to be astrophysically related to the metallicity." 
], [ "Degeneracies", "Because the ratio of collisional and recombination lines is degenerate with respect to metallicity, a way must be found to distinguish high metallicity from low metallicity objects.", "As mentioned in Section , this is generally done using arguments of astrophysical nature.", "The fact that N and O follow a different chemical evolution in galaxies is obviously of great help.", "However, there may be outliers, and those can be of particular astrophysical interest.", "In their Bayesian approach BOND, [49] devised a way to find whether an object is oxygen-rich or oxygen-poor, independently of N/O.", "It relies on the semi-strong lines [Ne iii]$\lambda 3869$ and [Ar iii]$\lambda 7135$ whose ratio depends strongly on the electron temperature and weakly on ionisation conditions and abundance ratio.", "Indeed, the [Ar iii]$\lambda 7135$ and [Ne iii]$\lambda 3869$ lines have different excitation thresholds (1.7 and 3.2 eV, respectively), therefore different dependencies on the electron temperature; argon and neon are two primary elements and both are rare gases not suspected of dust depletion, so their abundance ratio in the gas phase is expected to be constant.", "The [Ar iii]$\lambda 7135$ and [Ne iii]$\lambda 3869$ lines do not arise from exactly the same zone, the ionisation potentials of the parent ions being different, but the [O iii]/[O ii] ratio helps figure out what the ionisation state is.", "BOND allows for a possible small deviation of Ne/Ar from the standard cosmic value by applying appropriate weights to their lines.", "It must be noted, however, that BOND will probably not work correctly in the case of large observational errors in the strong line fluxes (say 20 percent)." 
], [ "The problem when comparing two samples and the role of the hardness of the SED", "Some claims about the metallicity and its connexion with other galaxy properties could be wrong, actually arising from the fact that different physical conditions in the galaxies – not accounted for in the method used to obtain the metallicity – mimic a change in abundance.", "Such could be the case for the surprising finding by [36] that oxygen abundances are larger in galaxies with older starbursts.", "As shown by [40], this could be due to a wrong interpretation of the [N ii]/H$\alpha $ index in terms of just oxygen abundance, while it is primarily dependent on the hardness of the ionising radiation field, which softens with age due to the gradual disappearance of the most massive stars.", "Similarly, the claim that luminous infrared galaxies (LIRGs) have smaller metallicities than local star-forming galaxies [34] should be reexamined bearing in mind that the hardness of the ionising radiation is likely different in the two categories of galaxies.", "The same can be said about the claim that metallicity gradients flatten in colliding systems [17].", "So far, only the BOND method of [49] determines the oxygen abundance while trying to account for the characteristics of the ionising radiation field.", "The semi-strong line He i$\lambda 5876$ plays a key role in the process." 
], [ "Do we get what we want?", "In a recent study using neural networks to obtain galaxy metallicity from three-colour images, [50] suggest that their approach has `learned a representation of the gas-phase metallicity, from the optical imaging, beyond what is accessible with oxygen spectral lines'.", "This a priori surprising statement is based on the fact that with their method they find a mass-metallicity relation for galaxies that is tighter than the one derived by [45].", "They further comment that possibly their method `is measuring some version of metallicity that is more fundamentally linked to the stellar mass, rather than [the metallicity] derived from oxygen spectral lines'.", "Indeed, galaxy colours are not only determined by the metallicities of their stellar populations (which are different from the metallicities of their interstellar medium) but also by their ages, and both are known to be correlated with stellar mass.", "This interesting case should incite us to remain cautious about the interpretation of the oxygen abundance determined by any method: is it really the present-day oxygen abundance that is being measured or something else related to it?" 
], [ "Metallicities of entire galaxies and the importance of the DIG", "While the methods to derive the chemical composition of the ionised interstellar medium were developed for giant H ii regions, they are often applied to entire galaxies or, at least, to a large fraction of their volume [45], [23].", "This is not only because observations are now available for galaxies at high redshifts.", "The `characteristic' metallicity of a galaxy is an important constraint for chemical evolution models.", "Past studies have tried to determine it in nearby galaxies from the study of their H ii regions [51], [26].", "Integrated spectra of galaxies combine the spectra of hundreds of H ii regions of various sizes, ages and metallicities.", "They also contain emission from the diffuse ionized gas (DIG), whose characteristics are different from those of bona fide H ii regions [20].", "Integral field spectroscopy of nearby galaxies (e.g. CALIFA, MaNGA, SAMI, MUSE) now allows one to investigate these issues and try to find ways to account for them for a proper characterisation of the metallicity of nearby and distant galaxies.", "The existence of a diffuse component in the ionised medium of galaxies was acknowledged several decades ago [33], [15].", "On the basis of a sample of about 100 H i selected galaxies, it has been claimed that diffuse H$\alpha $ emission constitutes roughly 60 per cent of the total H$\alpha $ emission, irrespective of the Hubble type [27].", "In a more detailed study of 391 galaxies from the CALIFA survey [35], [20] have shown that the DIG is quite complex.", "Regions with the lowest equivalent widths of H$\alpha $ (called hDIG) are likely ionised by hot low-mass evolved stars (HOLMES) while diffuse zones with H$\alpha $ equivalent widths between 3 and 14 Å (dubbed mDIG) are rather ionised by radiation leaking out from star-forming complexes (SFc).", "The hDIG, mDIG, and SFc contributions to the total H$\alpha $ luminosity vary systematically along 
the Hubble sequence, ranging from about (100, 0, 0) per cent in ellipticals and S0's to (9, 60, 31) per cent in Sa-Sb's and (0, 13, 87) per cent in later types.", "Based on resolved spectroscopic observations of nearby galaxies, a few studies are starting to investigate the effect of the DIG on the abundance determination from emission lines in galaxies [37], [19], [13]." ], [ "Conclusion", "While strong-line methods are routinely used to estimate the metallicities of giant H ii regions as well as the characteristic metallicities of local and high-redshift emission-line galaxies, one can easily be fooled by their apparent simplicity.", "Even the latest, more sophisticated approaches involve a certain degree of approximation.", "The purpose of this text was to draw attention to some pitfalls and to encourage astronomers to consider their results with appropriate caution.", "I thank the organisers for their kind invitation and deeply acknowledge financial support from FAPESP (processo 11/51680-6) which allowed me to participate in the workshop." ] ]
1906.04520
[ [ "Uniqueness of determining the variable fractional order in\n variable-order time-fractional diffusion equations" ], [ "Abstract We study an initial-boundary value problem of variable-order time-fractional diffusion equations in one space dimension.", "Based on the wellposedness of the proposed model and the smoothing properties of its solutions, which are shown to be determined by the behavior of the variable order at the initial time, a uniqueness result for an important inverse problem of determination of the variable order in the time-fractional derivative contained in the proposed model from observations of its solutions is obtained." ], [ "Introduction", "Integer-order diffusion equations were derived under the assumptions that the underlying independent and identically distributed particle movements have (i) a mean free path and (ii) a mean waiting time [22], which hold for the diffusive transport of solutes in homogeneous aquifers when the solute plumes were observed to decay exponentially [2], [3].", "However, field tests showed that the diffusive transport of solutes in heterogeneous aquifers often exhibit highly skewed power-law decaying tails [4], [21], [22].", "Traditional practice to address the impact of the heterogeneity of the media is to tweak the variable parameters that multiply the pre-set integer-order diffusion equations to fit the training data, which tends to recover a rapidly varying, scale-dependent diffusivity and may overfit the training data but yield less accurate prediction on testing data [25].", "The reason is that the solutions of integer-order diffusion equations are Gaussian type and so catch exactly the exponential decaying behavior of the solute transport in homogeneous media but not the highly skewed power-law decaying behavior of the solute transport in heterogeneous media.", "Fractional diffusion equations (FDEs) were derived so that their solutions can have highly skewed power-law decaying tails and so catch the same 
behavior of the solute transport in heterogeneous media, which explains why FDEs model anomalously diffusive transport in heterogeneous aquifers more accurately than integer-order diffusion equations do [21], [22].", "However, FDEs introduce new mathematical issues that are not common in the context of integer-order diffusion equations.", "Stynes et al.", "[26] proved that the first-order time derivatives of the solutions to the time-fractional diffusion equations (tFDEs) of order $0 < \\alpha < 1$ have a singularity of order $O(t^{\\alpha -1})$ near time $t=0$ , which makes the error estimates in the literature that were proved under full regularity assumptions of the true solutions inappropriate.", "Nevertheless, the singularity of the solutions to the tFDEs at $t=0$ does not seem to be physically relevant to the subdiffusive transport they model.", "The current consensus why this phenomenon occurs lies in the incompatibility between the nonlocality of the power law decaying tails and the locality of the classical initial condition at the time $t=0$ .", "However, the key issue of developing a physically relevant tFDE model that can correct the nonphysical behavior of the existing tFDEs remains unresolved.", "It was speculated in [32] that to eliminate their nonphysical singularity at the time $t=0$ , as $t \\rightarrow 0^+$ the power law decaying tail of the solutions to tFDEs should switch smoothly to an exponentially decaying tail to account for the impact of locality of the initial condition at the time $t=0$ .", "That is, a physically relevant tFDE model should bear a variable-order near the time $t=0$ , since the power of the power law decaying tail is determined by the order of the tFDEs.", "Moreover, variable-order tFDEs themselves occur in many applications [15], [18], [28], [29], [35], [37], as the order of tFDEs is closely related to the fractal dimension of the porous media determined by the Hurst index [21] that changes as the geometrical structure or 
property of the media changes, e.g., in such applications as shale gas production due to hydraulic fracturing and shape memory polymer due to the change of its microstructure [15], [28], [29].", "Although variable-order tFDEs have appeared in increasingly more applications, their rigorous mathematical analysis is meager.", "In [31], a piecewise-constant order fractional ordinary differential equation was solved analytically on each time interval of a constant order via the Mittag-Leffler expression of the solution to the constant-order fractional differential equations [6], with the solutions on the previous pieces and the solution value at the left end of the current piece as the source term and the initial data, respectively.", "Kian et al.", "[11] studied the wellposedness of a variable-order tFDE, in which the variable order is a function of the space variable, so the Laplace transform technique, which is a widely used analysis technique in the study of tFDEs, can be fully utilized to derive the desired results.", "In the variable-order tFDEs that occur in applications the variable order tends to be time dependent, hence the previous analytical techniques do not apply in the current context.", "New techniques have to be developed to analyze the wellposedness of these problems, and, probably more importantly, the regularity of the solutions to these problems.", "In [32] we proved the wellposedness of the classical initial-boundary value problems of variable-order linear tFDEs in multiple space dimensions.", "We further proved that the regularity of their solutions depends on the behavior of the variable order (and its derivatives) at time $t=0$ , in addition to the usual smoothness assumptions.", "More precisely, their solutions have the full regularity as their integer-order analogues do if the variable order has an integer limit at $t=0$ or have a certain singularity at $t=0$ as the constant-order tFDEs do if the variable order has a non-integer value at 
time $t=0$ .", "In most applications, the parameters in the governing diffusion equations, such as the diffusivity coefficients, source term, the boundary and initial data, are not given a priori.", "Rather, they have to be inferred from the measurements as an inverse problem by solving the forward problems repeatedly [12].", "In recent years the inverse problems of determining the parameters in (constant-order) tFDEs, in particular the order(s) of tFDEs, which were not encountered in the inverse problems of integer-order diffusion equations, have attracted increasingly more research activities [9], [10], [16], [17], [34], [36].", "In [5] the uniqueness of the inverse problem of simultaneously determining the fractional order and the diffusivity coefficients of the one-dimensional tFDEs with the homogeneous right-hand side and variable diffusivity coefficients, from the observations at the left end point of the spatial interval, was proved.", "The inverse problem of determining the fractional order and a time varying kernel in tFDEs was studied in [8].", "A simultaneous inversion of the space-dependent diffusivity coefficient and the fractional order in one-dimensional tFDEs based on observations from the end points of the spatial interval was developed in [14].", "A numerical inversion of the fractional orders of the multi-term tFDEs in multiple space dimensions was studied in [27].", "To our best knowledge, up to now there is no mathematically proved result on the determination of the variable fractional order in variable-order tFDEs.", "The only available result is a numerical study of simultaneously determining the fractional order and the diffusion coefficient in a variable-order tFDE in which the variable order depends on both space and time variables [33].", "In this paper we prove the uniqueness of determining the variable fractional order of the initial-boundary value problems of variable-order tFDEs in one space dimension with some available observed 
values of the unknown solutions inside the spatial domain.", "The rest of the paper is organized as follows: In Section 2 we discuss the modeling issues and present a physically relevant initial-boundary value problem of variable-order tFDEs.", "In Section 3 we address the wellposedness of the proposed model and the smoothing properties of its solutions, based on which we prove a uniqueness result for the inverse problem of determining the variable order in the proposed model from some available observed values of the unknown solutions inside the spatial domain in Section 4." ], [ "Modeling issues by tFDEs", "In this section we address the modeling issues of tFDEs.", "We begin with the widely used conventional tFDE model of order $0 < \alpha < 1$ [21], [22] ${}_0^C D_t^{\alpha }u - K\,u_{xx} = 0,$ where the Caputo fractional differential operator ${}_0^CD_t^{\alpha }$ is defined by [24] $\begin{array}{rl}\displaystyle {}_0I_t^{\alpha }g(t) & \displaystyle := \frac{1}{\Gamma (\alpha )} \int _0^t \frac{g(s)}{(t-s)^{1-\alpha }}ds, \\[0.125in]\displaystyle {}_0^C D_t^\alpha g(t) & \displaystyle := {}_0I_t^{1-\alpha }g^{\prime }(t) = \frac{1}{\Gamma (1-\alpha )} \int _0^t \frac{g^{\prime }(s)}{(t-s)^{\alpha }}ds.\end{array}$ We note that the tFDE (REF ) was derived via a stochastic framework of the continuous time random walk formulation as the number of particle jumps tends to infinity (while the mean jump length shrinks) [21], [22].", "In other words, the tFDE (REF ), as the diffusion limit of the continuous time random walk in the phase space, holds for relatively large time $t > 0$ , rather than all the way up to the time $t=0$ as often assumed in the literature.", "This gap partially explains why the solutions to the conventional initial-boundary value problem of the tFDE (REF ) exhibit a nonphysical singularity near the initial time $t = 0$ .", "A modified tFDE model was derived in [25] to model the anomalously diffusive transport of
solute in heterogeneous porous media $\begin{array}{c}u_t + k\;{}_0^C D_t^{\alpha } u - K\, u _{xx} = 0,~~(x,t) \in [0,L]\times (0,T]; \\[0.05in]\displaystyle u(x,0)=u_0(x),~x\in [0,L],\quad u(0,t)=u(L,t) = 0,~t \in [0,T].\end{array}$ Here $u_t$ is the first-order derivative of $u$ and $K>0$ is the diffusivity coefficient.", "With the prescribed initial distribution $u_0({x})$ of the solute that is often present in the water phase initially, the governing tFDE (REF ) models the anomalously diffusive transport of the solute in the heterogeneous porous media.", "During the transport process, a large amount of solute may get absorbed to the solid matrix due to adsorption, which deviates from the transport of the solute in the bulk fluid phase.", "The absorbed solute may be slowly released later to the bulk fluid phase, which leads to a subdiffusive transport process and makes the remediation of the contaminant solute in groundwater much less effective than is observed in laboratory experiments.", "In fact, some studies suggested that the remediation of contaminated aquifers may take decades or centuries longer than integer-order diffusion equation models had predicted [21].", "In the traditional integer-order diffusion equation models, the amount of the adsorbed solute was expressed as a (linear or nonlinear) function of the amount of the solute in the bulk phase to account for the impact of adsorption and desorption.", "This would yield a retardation coefficient in front of the first-order time derivative term in the integer-order diffusion equation [3].", "As it relates only the amount of adsorbed solute in the solid matrix to the amount of solute in the bulk phase at the current time instant, the conventional integer-order diffusion equation model does not account for the amount accumulated in the solid matrix.", "In the tFDE model (REF ), the $u_t$ term models the Fickian diffusive transport of the solute in the bulk fluid
phase, which consists of a $1/(1+k)$ portion of the total solute mass, and the $k\;{}_0^C D_t^{\alpha } u$ term models the subdiffusive transport of the solute absorbed to the solid matrix, which is a $k/(1+k)$ portion of the total solute mass.", "Note that the governing tFDE (REF ) holds on the entire time interval including the initial time $t=0$ .", "A possible remedy to eliminate the nonphysical singularity of the solutions to the initial-boundary value problem of the tFDE (REF ) (and (REF )) as proved in [26] is to vary their power law decaying tails smoothly to exponentially decaying tails as $t \rightarrow 0^+$ to account for the impact of locality of the initial condition at the time $t=0$ .", "This leads to variable-order tFDEs.", "In shale gas production, shale formation often has insufficient permeability due to the existence of fine-scale pores, which results in a large amount of shale gas molecules being absorbed to the surface of the pore-throat structures and a significant decrease of the flow rate to the wellbore.", "Hydraulic fracturing is often used to increase the pore sizes, and so the permeability of the shale formation, and to increase shale gas production, which changes the fractal dimension of the shale formation and so the order of the tFDEs [13], [21].", "This again leads to variable-order tFDEs.", "Motivated by these observations, in this paper we consider the initial-boundary value problem of the following variable-order linear tFDE $\begin{array}{c}u_t + k(t)\; {}_0^C D_t^{\alpha (t)} u -K \,u_{xx}= 0,~~(x,t) \in [0,L]\times (0,T]; \\[0.1in]u(x,0)=u_0(x),~x\in [0,L], \quad u(0,t)=u(L,t) = 0,~t \in [0,T].\end{array}$ Here ${}_0I_t^{\alpha (t)}$ and ${}_0^C D_t^{\alpha (t)}$ denote the variable-order fractional integral and Caputo fractional differential operators, respectively [18], [28], [29] $\begin{array}{rl}\displaystyle {}_0 I_t^{\alpha (t)}g(t) &\displaystyle := \frac{1}{\Gamma (\alpha (t))}\int
_0^t\\frac{g(s)}{(t-s)^{1-\\alpha (t)}}ds,\\\\[0.125in]\\displaystyle {}_0^C D_t^{\\alpha (t)}g(t) & \\displaystyle :={}_0 I_t^{1-\\alpha (t)}g^{\\prime }(t) = \\frac{1}{\\Gamma (1-\\alpha (t))}\\int _0^t\\frac{g^{\\prime }(s)}{(t-s)^{\\alpha (t)}}ds,\\end{array}$ which are a variable-order analogue of the constant-order fractional integral and differential operators defined in (REF )." ], [ "Wellposedness and smoothing properties of the variable-order linear tFDE (", "Let $m \\in \\mathbb {N}$ , the set of nonnegative integers and let ${\\cal I} \\subset \\mathbb {R}$ be a bounded (open or closed or half open and half closed) interval.", "Let $C^m(\\cal I)$ be the spaces of continuous functions with continuous derivatives up to order $m$ on $\\cal I$ and $C(\\mathcal {I}) := C^0(\\mathcal {I})$ , equipped with the standard norms.", "Let $H^{m}(0,L)$ be the Sobolev spaces of Lebesgue square integrable functions with their weak derivatives of order $m$ being in $L^2(0,L)$ .", "Let $H^m_0(0,L)$ be the completion of $C^\\infty _0(0,L)$ , the space of infinitely differentiable functions with compact support in $[0,L]$ , in $H^m(0,L)$ .", "For non-integer $s\\ge 0$ , the fractional Sobolev spaces $H^s(0,L)$ are defined by interpolation.", "All the spaces are equipped with the standard norms [1].", "For a Banach space ${\\cal X}$ the norm $\\Vert \\cdot \\Vert _{\\cal X}$ , let $C^m(\\mathcal {I};\\mathcal {X})$ be the Banach spaces equipped with the norm unctions on the interval $\\cal I$ [1], [7] $\\begin{array}{c}\\displaystyle C^m(\\mathcal {I};\\mathcal {X}) := \\bigg \\lbrace w(x,t) : \\Bigl \\Vert \\frac{\\partial ^l w}{\\partial t^l} \\Bigr \\Vert _{\\mathcal {X}} \\in C^l({\\cal I}), \\quad l = 0,1,\\ldots ,m \\bigg \\rbrace ,\\\\[0.15in]\\displaystyle \\Vert w \\Vert _{C^m({\\mathcal {I}};\\mathcal {X})} := \\max _{0 \\le l \\le m} \\sup _{t \\in {\\mathcal {I}}} ~ \\Bigl \\Vert \\frac{\\partial ^l w}{\\partial t^l} \\Bigr \\Vert _{\\mathcal {X}}.\\end{array}$ 
To better characterize the temporal singularity of the solution at the initial time, we define the weighted Banach spaces involving time $C^m_\mu ((0,T];\cal X)$ with $m\ge 2$ , $0\le \mu <1$ , modified from those in [20] $\begin{array}{c}\displaystyle C^m_\mu ((0,T];\mathcal {X}):= \big \lbrace w \in C^1([0,T];\mathcal {X}) : \Vert w \Vert _{C_\mu ^m((0,T];\mathcal {X})}<\infty \big \rbrace ,\\[0.075in]\displaystyle \Vert w \Vert _{C_\mu ^m((0,T];\mathcal {X})} := \Vert w \Vert _{C^{1}([0,T];\mathcal {X})} + \sum _{l=2}^{m} \sup _{t \in (0,T]} t^{l-1-\mu }\Bigl \Vert \frac{\partial ^l w}{\partial t^l} \Bigr \Vert _{\mathcal {X}}.\end{array}$ Let $\lbrace \lambda _i,\phi _i \rbrace _{i=1}^\infty $ be the eigenvalues and eigenfunctions of the Sturm-Liouville problem $- K\, D_{xx}\phi _{i}({x}) = \lambda _i \phi _i({x}), ~{x}\in [0,L]; \quad \phi _i(0)=\phi _i(L) = 0.$ It is known that [7] $\displaystyle \lambda _i=\frac{K\,i^2\pi ^2}{L^2},~~\phi _i(x)=\sin \big (\frac{i\pi x}{L}\big ),~~i\in \mathbb {N}^+.$ Furthermore, for any $\gamma \ge 0$ the Sobolev space defined by $\begin{array}{c}\displaystyle \hspace{-7.22743pt} \check{H}^{\gamma }(0,L) := \Big \lbrace v \in L^2(0,L): | v |_{\check{H}^\gamma }^2 := \big ((-D_{xx})^\gamma v,v\big ) = \sum _{i=1}^{\infty } \lambda _i^{\gamma } (v,\phi _i)^2 < \infty \Big \rbrace \end{array}$ is a subspace of the fractional Sobolev space $H^\gamma (0,L)$ characterized by [1], [23], [30] $\check{H}^\gamma (0,L) = \big \lbrace v \in H^\gamma (0,L): ~(-D_{xx})^l v(0)= (-D_{xx})^l v(L)= 0,~~ l < \gamma /2 ,~~l \in \mathbb {N} \big \rbrace $ and the seminorms $| v |_{\check{H}^\gamma }$ and $| v |_{{H}^\gamma }$ are equivalent in $\check{H}^\gamma $ .", "We make the following assumptions on the data:" ], [ "Suppose that $\alpha , k \in C[0,T]$ and $0\le \alpha (t)\le \alpha _*<1,~t\in [0,T],~~\lim _{t \rightarrow
0^+} \big ( \alpha (t) - \alpha (0) \big ) \ln t ~\mbox{exists}.$ Then the following theorems hold [32].", "Theorem 3.1 Suppose that Assumption A holds, and $u_0 \in \check{H}^{\gamma + 2}$ for some $1/2 < \gamma \in \mathbb {R}^+$ .", "Then problem (REF ) has a unique solution $u \in C^1\big ([0,T];H^\gamma \big )$ and the following stability estimates hold $\begin{array}{c}\Vert u\Vert _{C([0,T];H^\gamma (0,L))} \le Q\Vert u_0\Vert _{\check{H}^\gamma (0,L)}, \quad \Vert u\Vert _{C^1([0,T];H^\gamma (0,L))} \le Q \Vert u_0\Vert _{\check{H}^{2+\gamma }(0,L)}.\end{array}$ Theorem 3.2 For $2 \le n \in \mathbb {N}$ , suppose that $u_0\in \check{H}^{\gamma + 2n}$ , $\alpha , k \in C^{n-1}[0,T]$ , and (REF ) holds.", "If $\alpha ^{(l)}(0)=0$ for $0 \le l \le n-2$ and $\lim _{t \rightarrow 0}\alpha ^{(n-2)}(t)\ln t$ exists, then $u \in C^n([0,T];\check{H}^\gamma (0,L))$ such that $\Vert u\Vert _{C^n([0,T];\check{H}^\gamma (0,L))} \le Q \Vert u_0\Vert _{\check{H}^{\gamma +2n}(0,L)}.$ If $\alpha (0) > 0$ , then $u\in C^n((0,T];\check{H}^\gamma (0,L)) \cap C^{n}_{1-\alpha (0)}((0,T];\check{H}^\gamma (0,L))$ and $\Vert u\Vert _{C^{n}_{1-\alpha (0)}((0,T];\check{H}^\gamma (0,L))} \le Q \Vert u_0\Vert _{\check{H}^{\gamma +2n}(0,L)}.$" ], [ "The inverse problem of determining the variable order in variable-order tFDEs", "In this section we prove the main result of this paper, the uniqueness of the inverse problem of determining the variable order in the variable-order tFDE model in (REF ) based on some observations of the solution $u(x,t)$ to the initial-boundary value problem (REF ).", "We first prove a lemma for future use.", "Lemma 4.1 Assume that $\displaystyle \sum _{i=1}^\infty g_i(t)\phi _i(x)=0,~~(x,t)\in (a,b)\times [0,T]$ for some $(a,b)\subset [0,L]$ .", "Then $g_i(t)\equiv 0$ on $t\in [0,T]$ for $i\in \mathbb {N}^+$ .", "Proof.", "Let $\mathbf {G}:=(g_i(t)\phi
_i(x))_{i=1}^\infty $ .", "Then the relation (REF ) can be rewritten in vector form as $\mathbf {\Lambda }_0^\top \mathbf {G}=0$ where $\mathbf {\Lambda }_0=(\cdots ,1,\cdots )$ .", "We then apply $K\,D_{xx}$ on both sides of (REF ) and use (REF ) to obtain $\displaystyle \sum _{i=1}^\infty g_i(t)\lambda _i\phi _i(x)=0,~~(x,t)\in (a,b)\times [0,T],$ which can be rewritten as $\mathbf {\Lambda }_1^\top \mathbf {G}=0$ with $\mathbf {\Lambda }_1=(\lambda _i)_{i=1}^\infty $ .", "After repeating this procedure $n$ times we have $\mathbf {\Lambda }_n^\top \mathbf {G}=0$ with $\mathbf {\Lambda }_n=(\lambda _i^n)_{i=1}^\infty $ .", "We collect the relations $\mathbf {\Lambda }_n^\top \mathbf {G}=0$ for $n\in \mathbb {N}$ to get an infinite-dimensional linear system with a Vandermonde coefficient matrix $\mathbf {\Lambda }\,\mathbf {G}=\mathbf {0},~~\mathbf {\Lambda }:=(\mathbf {\Lambda }_n^\top )_{n=0}^\infty .$ As $\lambda _i\ne \lambda _j$ for $i\ne j$ , (REF ) yields $\mathbf {G}\equiv \mathbf {0}$ , i.e., $g_i(t)\phi _i(x)=0$ on $(x,t)\in (a,b)\times [0,T]$ for $i\in \mathbb {N}^+$ .", "For any $i\in \mathbb {N}^+$ , there exists an $x_i\in (a,b)$ such that $\phi _i(x_i)\ne 0$ , which immediately leads to $g_i(t)\equiv 0$ on $t\in [0,T]$ .", "This finishes the proof.", "$\blacksquare $ Let the admissible set $\mathcal {A}$ be defined by $\mathcal {A}:=\big \lbrace \alpha (t):\alpha (t)~\mbox{is analytic on}~[0,T]~\mbox{and satisfies} ~(\ref {alpha}) \big \rbrace .$ Many types of functions can be taken to fulfill the constraints of the admissible set $\mathcal {A}$ , e.g., the polynomials that satisfy (REF ).", "We prove the main result of this work in the following theorem.", "Theorem 4.1 Suppose that Assumption A holds, $k(0)\ne 0$ , and $0 \not\equiv u_0 \in H^{\gamma +2}(0,L)$ for some $\gamma > 1/2$ .", "Then the variable order $\alpha \in \mathcal {A}$ in the
initial-boundary value problem (REF ) is determined uniquely.", "Namely, let $\\hat{\\alpha }\\in \\mathcal {A}$ and $\\hat{u}(x,t)$ be the solution to the problem $\\begin{array}{c}\\hat{u}_t + k(t)\\; {}_0^C D_t^{\\hat{\\alpha }(t)} \\hat{u} - K\\,\\hat{u}_{xx}= 0,~~(x,t) \\in [0,L]\\times (0,T]; \\\\[0.1in]\\hat{u}(x,0)=u_0(x),~x\\in [0,L]; \\quad \\hat{u}(0,t)=\\hat{u}(L,t) = 0,~t\\in [0,T].\\end{array}$ If $u(x,t) = \\hat{u}(x,t), \\quad (x,t) \\in (a,b)\\times [0,T]$ for some $(a,b)\\subset [0,L]$ , then we have $\\alpha (t) = \\hat{\\alpha }(t), \\quad t \\in [0,T].$ Proof.", "We express $\\hat{u}$ and the solution $u$ to problem (REF ) in terms of $\\lbrace \\phi _i\\rbrace _{i=1}^\\infty $ as follows [19], [23], [26] $\\begin{array}{l}\\displaystyle u(x,t)=\\sum _{i=1}^\\infty u_i(t)\\phi _i(x), ~~ u_i(t) := \\big (u(\\cdot ,t),\\phi _i \\big ),~~t \\in [0,T];\\\\[0.2in]\\displaystyle \\hat{u}(x,t)=\\sum _{i=1}^\\infty \\hat{u}_i(t)\\phi _i(x),~~\\hat{u}_i(t) := \\big (\\hat{u}(\\cdot ,t),\\phi _i \\big ),~~t \\in [0,T].\\end{array}$ We plug these expansions into (REF ) and use (REF ) to obtain $\\begin{array}{l}\\displaystyle \\sum _{i=1}^{\\infty } u_i^{\\prime }(t) \\phi _i(x) + k(t) \\sum _{i=1}^{\\infty } {}_0D_t^{\\alpha (t)} u_i(t)\\phi _i(x) = -\\sum _{i=1}^{\\infty }\\lambda _i u_i(t) \\phi _i(x), \\\\[0.175in]\\displaystyle \\quad \\quad \\quad \\forall x \\in [0,L], ~t \\in (0,T].\\end{array}$ Hence, $u$ is the solution to problem (REF ) if and only if $\\lbrace u_i\\rbrace _{i=1}^\\infty $ satisfy $\\begin{array}{c}\\displaystyle u_i^{\\prime }(t)+k(t) {}_0D_t^{\\alpha (t)} u_i(t) = -\\lambda _i u_i(t), ~~t \\in (0,T],\\\\[0.1in]\\displaystyle u_i(0) = u_{0,i} := (u_0,\\phi _i), ~~i =1,2,\\cdots .\\end{array}$ Similarly, $\\hat{u}$ is the solution to problem (REF ) if and only if $\\hat{u}_i(t)$ satisfy $\\begin{array}{c}\\displaystyle \\hat{u}_i^{\\prime }(t)+k(t) {}_0D_t^{\\hat{\\alpha }(t)} \\hat{u}_i(t) = -\\lambda _i\\hat{u}_i(t), ~~t \\in 
(0,T],\\\\[0.1in]\\displaystyle \\hat{u}_i(0) = u_{0,i}, ~~i=1,2,\\cdots .\\end{array}$ By (REF ) and (REF ) we have $\\begin{array}{c}\\displaystyle 0=u(x,t)-\\hat{u}(x,t)=\\sum _{i=1}^\\infty (u_{i}(t)-\\hat{u}_{i}(t))\\phi _{i}(x), \\quad (x,t) \\in (a,b)\\times [0,T],\\end{array}$ which, by Lemma REF , implies that $u_{i}(t)=\\hat{u}_{i}(t)$ on $t\\in [0,T]$ for any $i\\in \\mathbb {N}^+$ .", "As $u_0(x)\\lnot \\equiv 0$ , there exists at least one $i_*\\in \\mathbb {N}^+$ such that $u_{0,i_*}\\ne 0$ .", "Then we replace $\\hat{u}_i(t)$ by $u_i(t)$ for $i=i_*$ in the first equation of (REF ) and then subtract it from the first equation in (REF ) to obtain $k(t)\\big ({}^C_0D^{\\alpha (t)}_t-{}_0^CD^{\\hat{\\alpha }(t)}_t\\big )u_{i_*}(t)=0,~~t\\in (0,T],~~u_{i_*}(0)=u_{0,i_*}.$ By the assumptions $k(0)\\ne 0$ and $k(t)\\in C[0,T]$ , there exists an $0<\\varepsilon _1<1$ such that $\\big ({}_0^CD^{\\alpha (t)}_t-{}_0^CD^{\\hat{\\alpha }(t)}_t\\big )u_{i_*}(t)=0,~~t\\in (0,\\varepsilon _1],~~u_{i_*}(0)=u_{0,i_*}.$ For any fixed $t\\in (0,\\varepsilon _1]$ , we consider the variable-order fractional derivative ${}_0^CD_t^{\\alpha (t)}u_{i_*}(t)$ as a function of $\\alpha (t)$ and then apply the following relation $\\begin{array}{l}\\displaystyle \\frac{d}{d\\alpha (t)}\\big ({}_0^CD_t^{\\alpha (t)}u_{i_*}(t)\\big )=\\frac{d}{d\\alpha (t)}\\Big (\\frac{1}{\\Gamma (1-\\alpha (t))}\\int _0^t\\frac{u_{i_*}^{\\prime }(s)ds}{(t-s)^{\\alpha (t)}}\\Big )\\\\[0.15in]\\qquad \\displaystyle =\\int _0^t\\Big (\\frac{\\Gamma ^{\\prime }(1-\\alpha (t))}{\\Gamma ^2(1-\\alpha (t))}\\frac{1}{(t-s)^{\\alpha (t)}}-\\frac{1}{\\Gamma (1-\\alpha (t))}\\frac{\\ln (t-s)}{(t-s)^{\\alpha (t)}}\\Big )u^{\\prime }_{i_*}(s)ds\\\\[0.15in]\\displaystyle \\qquad =\\frac{1}{\\Gamma (1-\\alpha (t))}\\int _0^t\\Big (\\frac{\\Gamma ^{\\prime }(1-\\alpha (t))}{\\Gamma (1-\\alpha (t))}-\\ln (t-s)\\Big )\\frac{u^{\\prime }_{i_*}(s)}{(t-s)^{\\alpha (t)}}ds\\end{array}$ and the mid-point formula on (REF 
) to obtain $\begin{array}{l}\displaystyle \Big [\frac{1}{\Gamma (1-\bar{\alpha }(t))}\int _0^t\Big (\frac{\Gamma ^{\prime }(1-\bar{\alpha }(t))}{\Gamma (1-\bar{\alpha }(t))}-\ln (t-s)\Big )\\[0.1in]\displaystyle \hspace{122.85876pt}\times \frac{u^{\prime }_{i_*}(s)}{(t-s)^{\bar{\alpha }(t)}}ds\Big ]\,(\alpha (t)-\hat{\alpha }(t))=0,\end{array}$ on $t\in (0,\varepsilon _1]$ where $\bar{\alpha }(t)$ lies between $\alpha (t)$ and $\hat{\alpha }(t)$ for any $t\in (0,\varepsilon _1]$ .", "As $u_{0,i_*}\ne 0$ , we assume $u_{0,i_*}>0$ without loss of generality.", "By Theorem REF , $u(x,t)$ and $u_t(x,t)$ are continuous in time, which implies that $u_{i_*}(t)=(u(\cdot ,t),\phi _{i_*})$ and $u_{i_*}^{\prime }(t)=(u_t(\cdot ,t),\phi _{i_*})$ are also continuous in time on $t\in [0,T]$ .", "Then there exists a positive $\varepsilon _2\le \varepsilon _1$ such that $u_{i_*}(t)\ge \sigma $ on $[0,\varepsilon _2]$ for some $\sigma >0$ .", "Furthermore, for the first equation of (REF ) with $i=i_*$ , as $t$ tends to 0, the second term on its left-hand side will tend to 0 by the continuity of $u^{\prime }_{i_*}(t)$ on $t\in [0,T]$ and the integrability of the kernel of the integral.", "Then, based on this equation with $i=i_*$ , there exists a positive $\varepsilon _3\le \varepsilon _2$ such that $u^{\prime }_{i_*}(t)\le -\lambda _{i_*}\sigma /2<0,~~t\in (0,\varepsilon _3].$ As $\alpha (t)$ and $\hat{\alpha }(t)$ are bounded away from 1, $\bar{\alpha }(t)$ also has a positive upper bound less than 1, which leads to the following estimate $\displaystyle \Big |\frac{\Gamma ^{\prime }(1-\bar{\alpha }(t))}{\Gamma (1-\bar{\alpha }(t))}\Big |\le Q_0,~~t\in (0,\varepsilon _1].$ For this constant $Q_0$ , there exists a positive $\varepsilon _4\le \varepsilon _3$ such that $\ln t<-Q_0$ on $t\in (0,\varepsilon _4]$ .", "This implies that $\frac{\Gamma ^{\prime }(1-\bar{\alpha
}(t))}{\\Gamma (1-\\alpha (t))}-\\ln (t-s)>0,~~0<s<t,~~t\\in (0,\\varepsilon _4].$ We incorporate this with (REF ) to conclude that the integral on the left-hand side of (REF ) is negative on $t\\in (0,\\varepsilon _4]$ .", "Hence the only possible way that the equation (REF ) holds true is $\\alpha (t)-\\hat{\\alpha }(t)=0$ on $t\\in (0,\\varepsilon _4]$ .", "As $\\alpha (t),\\hat{\\alpha }(t)\\in \\mathcal {A}$ , the set of analytic functions satisfying (REF ), we obtain $\\alpha (t)=\\hat{\\alpha }(t)$ on $t\\in [0,T]$ , which finishes the proof.", "$\\blacksquare $" ], [ "Acknowledgements", "This work was funded by the OSD/ARO MURI Grant W911NF-15-1-0562 and by the National Science Foundation under Grant DMS-1620194." ] ]
1906.04371
[ [ "WikiDataSets: Standardized sub-graphs from Wikidata" ], [ "Abstract Developing new ideas and algorithms in the fields of graph processing and relational learning requires public datasets.", "While Wikidata is the largest open source knowledge graph, involving more than fifty million entities, it is larger than needed in many cases and even too large to be processed easily.", "Still, it is a goldmine of relevant facts and relations.", "Using this knowledge graph is time-consuming and prone to task-specific tuning, which can affect the reproducibility of results.", "Providing a unified framework to extract topic-specific subgraphs solves this problem and allows researchers to evaluate algorithms on common datasets.", "This paper presents various topic-specific subgraphs of Wikidata along with the generic Python code used to extract them.", "These datasets can help develop new methods of knowledge graph processing and relational learning." ], [ "Motivation", "Relational learning has been a hot topic for a couple of years.", "The most widespread datasets used for benchmarking new methods ([1], [2], [3], [4], [5], [6]) are a subset of WordNet (WN18) and two subsets of Freebase (FB15k, FB1M).", "Recently, Wikidata has been used in the literature, as a whole in [7] or as subsets (about people and films) in [8], [9].", "It is a diverse, high-quality dataset that could be useful for many researchers.", "In order to make it easier to use, we decided to build thematic subsets, which are publicly available (https://graphs.telecom-paristech.fr/).", "They are presented in this paper."
], [ "Overview of the datasets", "A knowledge graph is given by a set of nodes (entities) and a set of facts linking those nodes.", "A fact is a triplet of the form $(head, relation, tail)$ linking two nodes by a typed edge (relation).", "Wikidata is a large knowledge graph, archives of which can be downloaded.", "We propose five topic-related datasets which are subgraphs of the Wikidata knowledge graph: animal species, companies, countries, films, and humans.", "Each graph includes only nodes that are instances of its topic.", "For example, the dataset humans contains the node George Washington because the fact (George Washington, isInstanceOf, human) is true in Wikidata.", "More exactly, to be included, nodes should be instances of the topic or of any sub-class of this topic.", "For example, countries contains USSR even though USSR is not an instance of country.", "It is, however, an instance of historical country, which is a subclass of country.", "Topics and examples of the corresponding sub-classes are reported in Table REF .", "Each graph includes as edges only the relations that stand true in Wikidata.", "For example, the dataset humans contains the nodes George Washington and Martha Washington and there is an edge between the two as the fact (George Washington, spouse, Martha Washington) is true.", "Finally, Wikidata facts linking the selected nodes to Wikidata entities that were not selected (because not instances of the topic) are also kept as attributes of the nodes.", "Note that as all the edges are labeled, each graph is a knowledge graph itself.", "For each dataset, some metadata is provided in Table REF along with a couple of examples of nodes, edges and attributes in Table REF , the distributions of the edge types in Tables REF , REF , REF , REF , REF and details of the files in Table REF ."
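The subclass rule described above (keep a node if it instantiates the topic or any transitive sub-class of it) amounts to a closure computation over subclass edges followed by a membership filter. The sketch below is an illustrative re-implementation with toy data, not the package's actual code:

```python
def subclass_closure(root, subclass_edges):
    """Collect root plus every class reachable from it through
    (child, parent) subclass edges, i.e. the transitive sub-classes."""
    children = {}
    for child, parent in subclass_edges:
        children.setdefault(parent, set()).add(child)
    closure, stack = {root}, [root]
    while stack:
        node = stack.pop()
        for child in children.get(node, ()):
            if child not in closure:
                closure.add(child)
                stack.append(child)
    return closure

# Toy data mirroring the USSR example: USSR instantiates
# "historical country", itself a subclass of "country".
edges = [("historical country", "country"), ("sovereign state", "country")]
instance_of = [("USSR", "historical country"),
               ("France", "sovereign state"),
               ("Paris", "city")]
keep = subclass_closure("country", edges)
nodes = {entity for entity, cls in instance_of if cls in keep}
```

Here Paris is (correctly) excluded because city is not in the subclass closure of country, while USSR is kept through the historical country subclass.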
], [ "Presentation of the code", "The proposed datasets were built using the WikiDataSets package, which was made openly available on PyPI (https://pypi.org/project/wikidatasets/).", "It is also documented online.", "The following three steps are necessary.", "First, call get_subclasses on the Wikidata ID of the topic entity (e.g., Q5 for humans) to fetch the various Wikidata entities which are sub-classes of the topic entity (e.g., sub-classes of humans).", "Then, call query_wikidata_dump to read each line of the Wikidata archive dump latest-all.json.bz2 (https://dumps.wikimedia.org/wikidatawiki/entities/) and keep only the lines corresponding to selected nodes.", "This returns a list of facts stored as pickle files.", "Labels of the entities and relations are also collected on the way to be able to provide the labels of the attributes.", "Those labels are collected in English when available.", "Finally, build_dataset turns this list of facts into the five files presented in Table REF (facts between nodes, attributes, an entity dictionary giving for each entity its label and its Wikidata ID, and a relation dictionary giving for each relation its label and its Wikidata ID)."
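To make the dump-querying step concrete, the sketch below extracts (head, relation, tail) facts from a single parsed entity of the JSON dump. The entity structure shown is a simplified subset of the real dump format (actual entities carry many more fields), and the function is our own illustration rather than the package's implementation:

```python
def triples_from_entity(entity):
    """Return (head, relation, tail) facts for all item-valued claims
    of a parsed Wikidata dump entity (simplified structure assumed)."""
    head = entity["id"]
    facts = []
    for prop, claims in entity.get("claims", {}).items():
        for claim in claims:
            snak = claim.get("mainsnak", {})
            if snak.get("snaktype") != "value":
                continue  # skip "somevalue"/"novalue" snaks
            datavalue = snak.get("datavalue", {})
            if datavalue.get("type") == "wikibase-entityid":
                facts.append((head, prop, datavalue["value"]["id"]))
    return facts

# Minimal toy entity for George Washington (Q23): instance of (P31)
# human (Q5), and spouse (P26) pointing to Martha Washington
# (Q191789 assumed here for illustration).
entity = {
    "id": "Q23",
    "claims": {
        "P31": [{"mainsnak": {"snaktype": "value",
                              "datavalue": {"type": "wikibase-entityid",
                                            "value": {"id": "Q5"}}}}],
        "P26": [{"mainsnak": {"snaktype": "value",
                              "datavalue": {"type": "wikibase-entityid",
                                            "value": {"id": "Q191789"}}}}],
    },
}
facts = triples_from_entity(entity)
```

Filtering on the wikibase-entityid value type is what separates node-to-node facts (kept as edges) from literal-valued claims such as dates or quantities.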
], [ "Community Detection", "Using the Scikit-Network framework (https://pypi.org/project/scikit-network/), we extracted communities from the humans dataset with the Louvain algorithm [10].", "In order to visualize the communities, the 50 nodes of highest degree were then extracted along with their neighbors and were represented using the 3d-force-graph library (https://github.com/vasturiano/3d-force-graph).", "A snapshot of the visualization is presented in Figure REF .", "Navigating through the graph, we find communities that seem to make sense (e.g.", "American artists, Vietnamese political leaders).", "We also find (in pink in Figure REF ) the Chinese Tang dynasty, each small ball corresponding to an emperor and his wife and children." ], [ "Knowledge Graph Embedding", "As noted before, humans is a knowledge graph and it can be embedded using off-the-shelf methods.", "TransH and ANALOGY were tested using the same hyper-parameters as the ones recommended for the FB15k dataset in the original papers [2], [11].", "Attributes are not included in the process and nodes with fewer than 5 neighbors are filtered out.", "The facts are then randomly split into training (0.8) and testing (0.2) sets.", "Training was done using the TorchKGE library (https://pypi.org/project/torchkge/) on an Nvidia Titan V GPU during 1,000 epochs.", "The embedding quality was evaluated on a link prediction task as in [1] and the results are presented in Table REF .", "Let us note that these scores are quite high but this is mainly because entities involved in fewer than five facts were filtered out.", "Table: Topics and examples of their sub-classes for each dataset.", "Table: Metadata of each dataset.", "Table: Examples of nodes, facts and attributes for each dataset.", "Table: Details of the files for each dataset.", "Table: Distribution of the top 20 edge types in the animals dataset.", "Table: Distribution of the top 20 edge types in the
companies dataset.", "Table: Distribution of the top 20 edge types in the countries dataset.", "Table: Distribution of the top 20 edge types in the films dataset.", "Table: Distribution of the top 20 edge types in the humans dataset.", "Figure: Results of the Louvain algorithm on the humans dataset.", "Each node has been assigned a community and each color corresponds to a community.", "Table: Performances of TransH and ANALOGY models on the humans dataset, filtered to keep only entities involved in more than 5 facts, resulting in 238,376 entities and 722,993 facts." ] ]
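The link prediction evaluation used in the Knowledge Graph Embedding section (rank the true tail of each test fact against all candidate entities, as in [1]) can be sketched generically as follows. The toy translational scoring function stands in for a trained TransH or ANALOGY model; this is not the TorchKGE evaluation code:

```python
def tail_rank(score, head, rel, true_tail, entities):
    """1-based rank of the true tail among all candidate entities,
    where higher score means a more plausible fact."""
    s_true = score(head, rel, true_tail)
    return 1 + sum(score(head, rel, e) > s_true for e in entities if e != true_tail)

def link_prediction(score, test_facts, entities, k=10):
    """Mean rank and Hits@k over a list of (head, relation, tail) facts."""
    ranks = [tail_rank(score, h, r, t, entities) for h, r, t in test_facts]
    mean_rank = sum(ranks) / len(ranks)
    hits_at_k = sum(rank <= k for rank in ranks) / len(ranks)
    return mean_rank, hits_at_k

# Toy 1-d translational embeddings (TransE-style): score = -|h + r - t|.
emb = {"a": 0.0, "b": 1.0, "c": 2.0}
rel = {"r": 1.0}
score = lambda h, r, t: -abs(emb[h] + rel[r] - emb[t])
facts = [("a", "r", "b"), ("b", "r", "a")]
mean_rank, hits1 = link_prediction(score, facts, emb.keys(), k=1)
```

Filtering out low-degree entities, as done above, tends to raise these scores because frequent entities are easier to rank correctly.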
1906.04536
[ [ "Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval" ], [ "Abstract Visual-semantic embedding aims to find a shared latent space where related visual and textual instances are close to each other.", "Most current methods learn injective embedding functions that map an instance to a single point in the shared space.", "Unfortunately, injective embedding cannot effectively handle polysemous instances with multiple possible meanings; at best, it would find an average representation of different meanings.", "This hinders its use in real-world scenarios where individual instances and their cross-modal associations are often ambiguous.", "In this work, we introduce Polysemous Instance Embedding Networks (PIE-Nets) that compute multiple and diverse representations of an instance by combining global context with locally-guided features via multi-head self-attention and residual learning.", "To learn visual-semantic embedding, we tie up two PIE-Nets and optimize them jointly in the multiple instance learning framework.", "Most existing work on cross-modal retrieval focuses on image-text data.", "Here, we also tackle a more challenging case of video-text retrieval.", "To facilitate further research in video-text retrieval, we release a new dataset of 50K video-sentence pairs collected from social media, dubbed MRW (my reaction when).", "We demonstrate our approach on both image-text and video-text retrieval scenarios using MS-COCO, TGIF, and our new MRW dataset."
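The multi-head self-attention mentioned in the abstract operates over a set of local features (image regions, video frames, or words). A minimal, generic scaled dot-product sketch is given below; it uses random toy projections, and the shapes and names are our own assumptions rather than the authors' implementation:

```python
import numpy as np

def self_attention_head(X, Wq, Wk, Wv):
    """One scaled dot-product self-attention head over the n local
    features in X (shape n x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # each row sums to 1
    return A @ V

def multi_head_self_attention(X, head_params):
    """Concatenate the outputs of several independent heads, so each
    head can attend to a different part of the input."""
    return np.concatenate([self_attention_head(X, *p) for p in head_params],
                          axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                       # 5 local features of dim 8
heads = [tuple(rng.normal(size=(8, 4)) for _ in range(3)) for _ in range(2)]
out = multi_head_self_attention(X, heads)         # 2 heads of dim 4 -> (5, 8)
```

Because each head has its own projections, the heads can attend to different subsets of local features, which is the property PIE-Nets exploit to produce multiple diverse embeddings per instance.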
], [ "Introduction", "Visual-semantic embedding [12], [26] aims to find a joint mapping of instances from visual and textual domains to a shared embedding space so that related instances from source domains are mapped to nearby places in the target space.", "This has a variety of downstream applications in computer vision including tagging [12], retrieval [14], captioning [26], and visual question answering [24].", "Formally, the goal of visual-semantic embedding is to learn two mapping functions $f: \mathcal {X} \rightarrow \mathcal {Z}$ and $g: \mathcal {Y} \rightarrow \mathcal {Z}$ jointly, where $\mathcal {X}$ and $\mathcal {Y}$ are visual and textual domains, respectively, and $\mathcal {Z}$ is a shared embedding space.", "The functions are often designed to be injective so that there is a one-to-one mapping from an instance $x$ (or $y$ ) to a single point $z \in \mathbb {R}^d$ in the embedding space.", "They are often optimized to satisfy the following constraint: $d( f(x_i), g(y_i) ) < d( f(x_i), g(y_j) ), \:\:\:\: \forall i \ne j$ where $d(\cdot ,\cdot )$ is a certain distance measure, such as the Euclidean or cosine distance.", "This simple and intuitive setup, which we refer to as injective instance embedding, is currently the most popular approach in the literature [53].", "Figure: Cross-modal retrieval in the real world could be challenging with ambiguous instances (each instance can have multiple meanings/concepts) and their partial associations (not all individual meanings/concepts may match).", "Addressing these two challenges is the focus of this work.", "Unfortunately, injective embedding can suffer when there is ambiguity in individual instances.", "Consider an ambiguous instance with multiple meanings/senses, e.g., polysemous words and images containing multiple objects.", "Even though each of the meanings/senses can map to different points in the embedding space, injective embedding is always forced to find a single point, which could be
an (inaccurate) weighted geometric mean of all the desirable points.", "The issue is intensified for videos and sentences because the ambiguity in individual images and words can aggregate and get compounded, severely limiting its use in real-world applications such as text-to-video retrieval.", "Another case where injective embedding could be problematic is partial cross-domain association, a characteristic commonly observed in real-world datasets.", "For instance, a text sentence may describe only certain regions of an image while ignoring other parts [56], and a video may contain extra frames not described by its associated sentence [31].", "These associations are implicit/hidden, making it unclear which part(s) of the image/video the text description refers to.", "This is especially problematic for injective embedding because information about any ignored parts will be lost in the mapped point and, once mapped, there is no way to recover from the information loss.", "In this work, we address the above issues by (1) formulating instance embedding as a one-to-many mapping task and (2) optimizing the mapping functions to be robust to ambiguous instances and partial cross-modal associations.", "To address the issues with ambiguous instances, we propose a novel one-to-many instance embedding model, Polysemous Instance Embedding Network (PIE-Net), which extracts $K$ embeddings of each instance by combining global and local information of its input.", "Specifically, we obtain $K$ locally-guided representations by attending to different parts of an input instance (e.g., regions, frames, words) using a multi-head self-attention module [34], [50].", "We then combine each such local representation with the global representation via residual learning [20] to avoid learning redundant information.", "Furthermore, to prevent the $K$ embeddings from collapsing into the mode (or the mean) of all the desirable embeddings, we regularize the $K$ locally-guided representations
to be diverse.", "To our knowledge, we are the first to apply multi-head self-attention with residual learning to instance embedding.", "To address the partial association issue, we tie up two PIE-Nets and train our model in the multiple-instance learning (MIL) framework [7].", "We call this approach Polysemous Visual-Semantic Embedding (PVSE).", "Our intuition is: when two instances are only partially associated, the learning constraint of Equation (REF ) will unnecessarily penalize embedding mismatches because it expects two instances to be perfectly associated.", "Capitalizing on our one-to-many instance embedding, our MIL objective relaxes the constraint of Equation (REF ) so that only one of the $K \times K$ embedding pairs needs to be well-aligned, making our model more robust to partial cross-domain association.", "We illustrate this intuition in Figure REF .", "This relaxation, however, could cause a discrepancy between the two embedding distributions because $(K \times K - 1)$ embedding pairs are left unconstrained.", "We thus regularize the learned embedding space by minimizing the discrepancy using the Maximum Mean Discrepancy (MMD) [16], a popular technique for determining whether two sets of data are from the same probability distribution.", "We demonstrate our approach on two cross-modal retrieval scenarios: image-text and video-text.", "For image-text retrieval, we evaluate on the MS-COCO dataset [32]; for video-text retrieval, we evaluate on the TGIF dataset [31] as well as our new MRW (my reaction when) dataset, which we collected to promote further research in cross-modal video-text retrieval under ambiguity and partial association.", "The dataset contains 50K video-sentence pairs collected from social media, where the videos depict physical or emotional reactions to certain situations described in text.", "We compare our method with well-established baselines and carefully conduct an ablation study to justify various design choices.", "We
report strong performance on all three datasets, and achieve the state-of-the-art result on the image-to-text retrieval task on the MS-COCO dataset.", "Figure: We represent each instance with $K$ embeddings, each representing different parts of the instance, e.g., regions of an image, frames of a video, or words of a sentence.", "Conventional approaches measure the visual-semantic distance by considering all $K$ embeddings, and thus would suffer when not all concepts are related.", "We instead assume there is a partial match and measure the distance between only the most related combination (squares)." ], [ "Related Work", "Here we briefly review some of the most relevant work on instance embedding for cross-modal retrieval.", "Correlation maximization: Most existing methods are based on one-to-one mapping of instances into a shared embedding space.", "One popular approach is maximizing correlation between related instances in the embedding space.", "Rasiwasia et al. [41] use canonical correlation analysis (CCA) to maximize correlation between images and text, while Gong et al. [14] extend CCA to a triplet scenario, e.g., images, tags, and their semantic concepts.", "Most recent methods incorporate deep neural networks to learn their embedding models in an end-to-end fashion.", "Andrew et al. [2] propose deep CCA (DCCA), and Yan et al. [57] apply it to image-to-sentence and sentence-to-image retrieval.", "Triplet ranking: Another popular approach is based on triplet ranking [12], [28], [54], [58], which encourages the distance between positive pairs (e.g., ground-truth pairs) to be smaller than that between negative pairs (e.g., randomly selected pairs).", "Frome et al. [12] propose a deep visual-semantic embedding (DeViSE) model, using a hinge loss to implement triplet ranking.", "Faghri et al. [10] extend this with the idea of hard negative mining, which focuses on maximally violating negative pairs, and report improved convergence rates.", "Figure: The architecture of Polysemous Visual-Semantic Embedding (PVSE) for
video-sentence data.", "Learning with auxiliary tasks: Several methods learn the embeddings jointly with auxiliary tasks, e.g., signal reconstruction [11], [8], [49], semantic concept categorization [41], [23], and minimizing the divergence between embedding distributions induced by different modalities [49], [59].", "Adversarial training [15] is also used by many: Wang et al. [52] encourage the embeddings from different modalities to be indistinguishable using a domain discriminator, while Gu et al. [17] learn the embeddings with image-to-text and text-to-image synthesis tasks in the adversarial learning framework.", "Attention-based embedding: All the above approaches are based on one-to-one mapping and thus could suffer from polysemous instances.", "To alleviate this, recent methods incorporate cross-attention mechanisms to selectively attend to local parts of an instance given the context of a conditioning instance from another modality [22], [30], e.g., attend to different image regions given different text queries.", "Intuitively, this can resolve the issues with ambiguous instances and their partial associations because the same instance can be mapped to different points depending on the presence of the conditioning instance.", "However, such an approach comes with computational overhead at inference time because each query instance needs to be encoded as many times as the number of reference instances in the database; this severely limits its use in real-world applications.", "Different from previous approaches, our method is based on multi-head self-attention [34], [50], which does not require a conditioning instance when encoding; therefore each instance is encoded only once, significantly reducing computational overhead at inference time.", "Beyond injective embedding: Similar to our motivation, some attempts have been made to go beyond the injective mapping.", "One approach is to design the embedding function to be stochastic and map an instance to a
certain probability distribution (e.g., Gaussian) instead of a single point [43], [38], [39].", "However, learning distributions is typically difficult/expensive and often leads to approximate solutions such as Monte Carlo sampling.", "The work most similar to ours is by Ren et al. [44], where they compute multiple representations of an image by extracting local features using the region proposal method [13]; text instances are still represented by a single embedding vector.", "Different from theirs, our method computes multiple and diverse representations from both modalities, where each representation is a combination of global context and locally-guided features, instead of just a local feature.", "Song et al. [48], a prequel to this work, also compute multiple representations of each instance using multi-head self-attention.", "We extend their approach by combining global and locally-guided features via residual learning.", "We also extend the preliminary version of the MRW dataset with an increased number of sample pairs.", "Lastly, we report more comprehensive experimental results, adding results on the MS-COCO [32] dataset for image-text cross-retrieval."
], [ "Approach", "Our Polysemous Visual-Semantic Embedding (PVSE) model, shown in Figure REF , is composed of modality-specific feature extractors followed by two sub-networks with an identical architecture; we call the sub-network Polysemous Instance Embedding Network (PIE-Net).", "The two PIE-Nets are independent of each other and do not share the weights.", "The PIE-Net takes as input a global context vector and multiple local feature vectors (Section REF ), computes locally-guided features using the local feature transformer (Section REF ), and outputs $K$ embeddings by combining the global context vector with locally-guided features (Section REF ).", "We train the PVSE model in the Multiple Instance Learning (MIL) [7] framework.", "We explain how we make our model robust to ambiguous instances and partial cross-modal associations via our loss functions (Section REF ) and finish with implementation details (Section REF )." ], [ "Modality-Specific Feature Encoder", "Image encoder: We use the ResNet-152 [20] pretrained on ImageNet [46] to encode an image $x$ .", "We take the feature map before the final average pooling layer as local features $\\Psi (x) \\in \\mathbb {R}^{7 \\times 7 \\times 2048}$ .", "We then apply average pooling to $\\Psi (x)$ and feed the output to one fully-connected layer to obtain global features $\\phi (x) \\in \\mathbb {R}^{H}$ .", "Video encoder: We use the ResNet-152 to encode each of $T$ frames from a video $x$ , taking the 2048-dim output from the final average pooling layer, and use them as local features $\\Psi (x) \\in \\mathbb {R}^{T \\times 2048}$ .", "We then feed $\\Psi (x)$ into a bidirectional GRU (bi-GRU) [6] with $H$ hidden units, and take the final hidden states as global features $\\phi (x) \\in \\mathbb {R}^{H}$ .", "Sentence encoder: We encode each of $L$ words from a sentence $x$ using the GloVe [40] pretrained on the CommonCrawl dataset, producing $L$ 300-dim vectors, and use them as local features $\\Psi (x) \\in 
\\mathbb {R}^{L \\times 300}$ .", "We then feed them into a bi-GRU with $H$ hidden units, and take the final hidden states as global features $\\phi (x) \\in \\mathbb {R}^{H}$ ." ], [ "Local Feature Transformer", "The local feature transformer takes local features $\\Psi (x)$ and transforms them into $K$ locally-guided representations $\\Upsilon (x)$ .", "Our intuition is that different combinations of local information could yield diverse and refined representations of an instance.", "We implement this intuition by employing a multi-head self-attention module to obtain $K$ attention maps, prepare $K$ combinations of local features by attending to different parts of an instance, and apply non-linear transformations to obtain $K$ locally-guided representations.", "We use a two-layer perceptron to implement the multi-head self-attention module.We have experimented with a more sophisticated version of the multi-head self-attention [50], but it did not improve performance further.", "Given local features $\\Psi (x) \\in \\mathbb {R}^{B \\times D}$$B$ is 49 ($=7 \\times 7$ ) for images, $T$ for videos, and $L$ for sentences; $D$ is 2048 for images and videos, and 300 for sentences, it computes $K$ attention maps $\\alpha \\in \\mathbb {R}^{K \\times B}$ : $\\alpha = \\mbox{softmax}\\left( w_2 \\: \\mbox{tanh} \\left( w_1 \\Psi (x)^{\\intercal } \\right)\\right)$ where $w_2 \\in \\mathbb {R}^{K \\times A}$ , $w_1 \\in \\mathbb {R}^{A \\times D}$ ; we set $A=D/2$ per empirical evidence.", "The softmax is applied row-wise so that each of the $K$ attention coefficients sum up to one.", "Finally, we multiply the attention map with local features and further apply a non-linear transformation to obtain $K$ locally-guided representations $\\Upsilon (x) \\in \\mathbb {R}^{K \\times H}$ : $\\Upsilon (x) = \\sigma \\left( (\\alpha \\Psi (x)) w_3 + b_3 \\right)$ where $w_3 \\in \\mathbb {R}^{D \\times H}$ and $b_3 \\in \\mathbb {R}^{H}$ .", "We use the sigmoid as our activation 
function $\\sigma (\\cdot )$ ." ], [ "Feature Fusion With Residual Learning", "The fusion block combines global features $\\phi (x)$ and locally-guided features $\\Upsilon (x)$ to obtain the final $K$ embedding output.", "We note that there is an inherent information overlap between the two features (both are derived from the same instance).", "To prevent $\\Upsilon (x)$ from becoming redundant with $\\phi (x)$ and encourage it to learn only locally-specific information, we cast the feature fusion as a residual learning task.", "Specifically, we consider $\\phi (x)$ as input to the residual block and $\\Upsilon (x)$ as residuals with its own parameters to optimize ($w_1, w_2, w_3, b_3$ ).", "As shown in [20], this residual mapping makes it easier to optimize the parameters associated with $\\Upsilon (x)$ , helping us find meaningful locally-specific information; in the extreme case, if global features $\\phi (x)$ were the optimal, the residuals will be pushed to zero and the approach will fall back to the standard injective embedding.", "We compute $K$ embedding vectors $z \\in \\mathbb {R}^{K \\times H}$ as: $z = \\mbox{LayerNorm} \\left( \\Phi (x) + \\Upsilon (x) \\right)$ where $\\Phi (x) \\in \\mathbb {R}^{K \\times H}$ is $K$ repetitions of $\\phi (x)$ .", "Following [50], we apply the layer normalization [3] to the output." 
], [ "Optimization and Inference", "Given a dataset $\mathcal {D}=\lbrace (x_i, y_i)\rbrace _{i=1}^{N}$ with $N$ instance pairs ($x$ are either images or videos, $y$ are sentences), we optimize our PVSE model to minimize a learning objective: $\mathcal {L} = \mathcal {L}_{mil} + \lambda _1 \mathcal {L}_{div} + \lambda _2 \mathcal {L}_{mmd}$ where $\lambda _1$ and $\lambda _2$ are scalar weights that balance the influence of the loss terms.", "We describe each loss term below.", "MIL Loss: We train our model in the Multiple Instance Learning (MIL) framework [7], designing a learning constraint for the cross-modal retrieval scenario: $\min _{p,q} d( z^x_{i,p}, z^y_{i,q} ) < d( z^x_{i,p}, z^y_{j,q} ), \:\:\:\: \forall i \ne j, \:\: \forall p, q$ where $z^x$ and $z^y$ are the PIE-Net embeddings of $x$ and $y$ , respectively, and $p,q = 1, \cdots , K$ .", "We use the cosine distance as our distance metric, $d(a,b) = (a \cdot b) / (\Vert a\Vert \Vert b\Vert )$ .", "Making an analogy to the MIL for binary classification [1], the left side of the constraint is the “positive” bag where at least one of $K \times K$ embedding pairs is assumed to be positive (match), while the right side is the “negative” bag containing only negative (mismatch) pairs.", "Optimizing under this constraint allows our model to be robust to partial cross-modal association because it can ignore mismatching embedding pairs of partially associated instances.", "We implement the above constraint by designing our MIL loss function $\mathcal {L}_{mil}$ to be: $\frac{1}{N^2} \sum _{i,j}^{N}\max \left(0,\rho - \min _{p,q} d(z^x_{i,p}, z^y_{j,q}) + \min _{p,q} d(z^x_{i,p}, z^y_{i,q})\right) \nonumber $ where $\rho $ is a margin parameter.", "Notice that we have the min operator for $d(z^x_{i,p}, z^y_{j,q})$ , similar to [44]; this can be seen as a form of hard negative mining, which we found to be effective and to accelerate convergence.", "Figure: Our dataset
contains videos depicting reactions to the situations described in the corresponding sentences.", "Here we show the four most common reaction types: (a) physical, (b) emotional, (c) animal, (d) lexical.", "Diversity Loss: To ensure that our PIE-Net produces diverse representations of an instance, we design a diversity loss $\mathcal {L}_{div}$ that penalizes the redundancy among $K$ locally-guided features.", "To measure the redundancy, we compute a Gram matrix of $\Upsilon (x)$ (and of $\Upsilon (y)$ ) that encodes the correlations between all combinations of locally-guided features, i.e., $G_{i,j} = \sum _{h} \Upsilon (x)_{ih} \Upsilon (x)_{jh}$ .", "We normalize each $\Upsilon (x)_{i}$ prior to the computation so that they lie on the unit $l_2$ ball.", "The diagonal entries in $G$ are always one (they are on a unit ball); the off-diagonals are zero iff two locally-guided features are orthogonal to each other.", "Therefore, the sum of off-diagonal entries in $G$ indicates the redundancy among $K$ locally-guided features.", "Based on this, we define our diversity loss as: $\mathcal {L}_{div} =\frac{1}{K^2} \left( \Vert G^x - I\Vert _2 + \Vert G^y - I\Vert _2 \right)$ where $G^x$ and $G^y$ are the Gram matrices of $\Upsilon (x)$ and $\Upsilon (y)$ , respectively, and $I \in \mathbb {R}^{K \times K}$ is an identity matrix.", "Note that we do not compute the diversity loss on the final embedding representations $z^x$ and $z^y$ because they already have global information baked in, making the orthogonality constraint invalid.", "This also ensures that the loss gets back-propagated through appropriate parts in the computational graph, and does not affect the global feature encoders, i.e., the FC layer for the image encoder, and the bi-GRUs for the video and sentence encoders.", "Domain Discrepancy Loss: Optimizing our model under the MIL loss has one drawback: the two distributions induced by $z^x$ and $z^y$ , which we denote by $Z^x$ and $Z^y$ , respectively,
may diverge quickly because we only consider the minimum distance pair, $\min _{p,q} d(z^x_{p}, z^y_{q})$ , in loss computation and leave the other $(K \times K - 1)$ pairs unconstrained.", "It is therefore necessary to regularize the discrepancy between the two distributions.", "One popular way to measure the discrepancy between two probability distributions is the Maximum Mean Discrepancy (MMD) [16].", "The MMD between two distributions $P$ and $Q$ over a function space $\mathcal {F}$ is $\mbox{MMD}(P,Q) = \sup _{f \in \mathcal {F}} \left(\mathbb {E}_{X \sim P} \left[ f(X) \right]- \mathbb {E}_{Y \sim Q} \left[ f(Y) \right]\right)$ When $\mathcal {F}$ is a reproducing kernel Hilbert space (RKHS) with a kernel $\kappa : \mathcal {X} \times \mathcal {X} \rightarrow \mathbb {R}$ that measures the similarity between two samples, Gretton et al. [16] showed that the supremum is achieved at $f(x) = \mathbb {E}_{X^{\prime } \sim P}[\kappa (x, X^{\prime })] - \mathbb {E}_{X^{\prime } \sim Q}[\kappa (x, X^{\prime })]$ .", "Substituting this into Equation (REF ), squaring the result, and approximating the expectation over our empirical distributions $Z^x$ and $Z^y$ , we have our domain discrepancy loss $\mathcal {L}_{mmd}$ defined as $\frac{\sum \kappa (z^x_{i,p}, z^x_{j,q})- 2\sum \kappa (z^x_{i,p}, z^y_{j,q})+ \sum \kappa (z^y_{i,p}, z^y_{j,q})}{K^2N^2} \nonumber $ where the summation in each term is taken over all $K^2N^2$ pairs of embeddings, i.e., $i,j \in [1, N]$ and $p,q \in [1, K]$ .", "We use a radial basis function (RBF) kernel as our kernel function.", "Inference: At test time, we assume a database of $M$ instances (e.g., videos) and their $KM$ embedding vectors.", "Given a query instance (e.g., a sentence), we compute $K$ embedding vectors and find the best matching instance in the database by comparing the cosine distances between all $K^2M$ combinations of embeddings."
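The three loss terms can be prototyped compactly. The NumPy sketch below is an illustration under two stated assumptions not pinned down by the text: the cosine distance is taken as one minus cosine similarity, and a single fixed RBF bandwidth `gamma` is used for the MMD kernel:

```python
import numpy as np

def min_cos_dist(za, zb):
    # smallest cosine distance over all K x K embedding pairs
    # (assumption: distance = 1 - cosine similarity)
    a = za / np.linalg.norm(za, axis=1, keepdims=True)
    b = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    return (1.0 - a @ b.T).min()

def mil_loss(zx, zy, rho=0.2):
    """Triplet loss over the best-matching of the K x K pairs; taking the min
    over the negative term acts as a form of hard negative mining.
    zx, zy: (N, K, H) embeddings of N paired instances (the i-th pair matches)."""
    N = zx.shape[0]
    d = np.array([[min_cos_dist(zx[i], zy[j]) for j in range(N)] for i in range(N)])
    loss = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:  # hinge on margin rho between matched and mismatched pairs
                loss += max(0.0, rho - d[i, j] + d[i, i])
    return loss / N**2

def diversity_loss(ups):
    # ups: (K, H) locally-guided features; penalize off-diagonal Gram entries
    u = ups / np.linalg.norm(ups, axis=1, keepdims=True)  # rows on the unit l2 ball
    K = ups.shape[0]
    return np.linalg.norm(u @ u.T - np.eye(K)) / K**2

def mmd_rbf(x, y, gamma=0.5):
    # (biased) squared MMD between samples x: (n, H) and y: (m, H), RBF kernel
    k = lambda a, b: np.exp(-gamma * ((a[:, None] - b[None]) ** 2).sum(-1))
    return k(x, x).mean() - 2.0 * k(x, y).mean() + k(y, y).mean()

rng = np.random.default_rng(1)
zx, zy = rng.standard_normal((4, 3, 8)), rng.standard_normal((4, 3, 8))
print(mil_loss(zx, zy), diversity_loss(zx[0]),
      mmd_rbf(zx.reshape(-1, 8), zy.reshape(-1, 8)))
```

Note that the MMD term is exactly zero when the two embedding sets coincide, and the diversity term is zero for mutually orthogonal locally-guided features, matching the behavior described above.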
], [ "Implementation Details", "We subsample frames at 8 FPS and store them in a binary storage format (https://github.com/TwentyBN/GulpIO).", "We set the maximum length of video to be 8 frames; for videos longer than 8 frames we select random subsequences during training, while during inference we sample 8 frames evenly spread across each video.", "We do not limit the sentence length as it has a minimal effect on the GPU memory footprint.", "We cross-validate the optimal hyper-parameter settings, varying $K \in [1:8]$ , $H \in [512, 1024, 2048], \rho \in [0.1:1.0], \lambda _1, \lambda _2 \in [0.1, 0.01, 0.001]$ .", "We use the AMSGRAD optimizer [42] with an initial learning rate of 2e-4 and reduce it by half when the loss stagnates.", "We train our model end-to-end, except for the pretrained CNN weights, for 50 epochs with a batch of 128 samples.", "We then finetune the whole model (including the CNN weights) for another 50 epochs." ], [ "MRW Dataset", "To promote future research in video-text cross-modal retrieval, especially with ambiguous instances and their partial cross-domain association, we release a new dataset of 50K video-sentence pairs collected from social media; we call our dataset MRW (my reaction when).", "Table REF provides descriptive statistics of several video-sentence datasets.", "Most existing datasets are designed for video captioning [45], [55], [31], with sentences providing textual descriptions of visual content in videos (video $\rightarrow $ text relationship).", "Our dataset is unique in that it provides videos that display physical or emotional reactions to the given sentences (text $\rightarrow $ video relationship); these are called reaction GIFs.", "According to the subreddit r/reactiongif (https://www.reddit.com/r/reactiongifs): A reaction GIF is a physical or emotional response that is captured in an animated GIF which you can link in response to someone or something on the Internet.", "The reaction must not be in response to
something that happens within the GIF, or it is considered a “scene”.", "This definition clearly differentiates ours from existing datasets: There is an inherently weaker association of concepts between video and text; see Figure REF .", "This introduces several additional challenges to cross-modal retrieval, part of which is the focus of this work, i.e., dealing with ambiguous instances and partial cross-domain association.", "We provide detailed data analyses and compare it with existing video captioning datasets in the supplementary material.", "Table: Descriptive statistics of our dataset compared to existing video-sentence datasets.", "Table: MS-COCO results.", "Besides our results, we also provide previously reported results to facilitate comprehensive comparisons." ], [ "Experiments", "We evaluate our approach on image-text and video-text cross-modal retrieval scenarios.", "For image-text cross-retrieval, we evaluate on the MS-COCO dataset [32]; for video-text we use the TGIF [31] and our MRW datasets.", "For MS-COCO we use the data split of [28], which provides 113,287 training, 5K validation and 5K test samples; each image comes with 5 captions.", "We report results on both 1K unique test images (averaged over 5 folds) and the full 5K test images.", "For TGIF we use the original data split [31] with 80K training, 10,708 validation and 34,101 test samples; since most test videos come with 3 captions, we report results on 11,360 unique test videos.", "For MRW, we use a data split of 44,107 training, 1K validation and 5K test samples; all the videos come with one caption.", "Following the convention in cross-modal retrieval, we report results using Recall@$k$ (R@$k$ ) at $k=1, 5, 10$ , which measures the fraction of queries for which the correct item is retrieved among the top $k$ results.", "We also report the median rank (Med R) of the closest ground truth result in the list, as well as the normalized median rank (nMR) that divides the median rank by the
number of total items.", "For cross-validation, we select the best model that achieves the highest $rsum = R@1 + R@5 + R@10$ in both directions (visual-to-text and text-to-visual) on a validation set.", "While we report quantitative results in the main paper, our supplementary material contains qualitative results with visualizations of multi-head self-attention maps." ], [ "Image-Text Retrieval Results", "Table REF shows the results on MS-COCO.", "To facilitate comprehensive comparisons, we provide previously reported results on this dataset.", "We omit results from cross-attention models [22], [30] that require a pair of instances (e.g., image and text) when encoding each instance.", "Our approach outperforms most of the baselines, and achieves the new state of the art on the image-to-text task on the 5K test set.", "We note that both GXN [17] and SCO [23] are trained with multiple objectives; in addition to solving the ranking task, GXN performs image-text cross-modal synthesis as part of training, while SCO performs classification of semantic concepts and their orders as part of training.", "Compared to the two methods, our model is trained with a single objective (ranking) and thus could be considered a simpler model.", "The most direct comparison to ours would be with VSE++ [10].", "Both our model and VSE++ share the same image and sentence encoders.", "When we let our PIE-Net produce single embeddings for input instances (K=1), the only remaining difference is that VSE++ directly uses our global features as their embedding representations, while we use the output from our PIE-Nets.", "The performance gap between ours (K=1) and VSE++ shows the effectiveness of our PIE-Net, which combines global context with locally-guided features produced by our local feature transformer."
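For concreteness, the evaluation protocol described earlier (R@$k$, Med R, nMR) can be computed from a query-by-database distance matrix as below. This is a generic sketch, not the authors' evaluation code, and it assumes a single ground-truth item per query placed on the diagonal:

```python
import numpy as np

def retrieval_metrics(dist, ks=(1, 5, 10)):
    """dist: (Q, M) query-by-database distance matrix; the ground-truth
    item of query q is assumed to sit at index q (the diagonal).
    Returns Recall@k, the median rank (Med R), and the normalized
    median rank (nMR = Med R / M)."""
    order = np.argsort(dist, axis=1)                   # candidates by ascending distance
    ranks = np.array([int(np.where(order[q] == q)[0][0]) + 1
                      for q in range(dist.shape[0])])  # 1-based rank of ground truth
    recall = {k: float(np.mean(ranks <= k)) for k in ks}
    med_r = float(np.median(ranks))
    return recall, med_r, med_r / dist.shape[1]

# toy check: the diagonal holds the strictly smallest entry of every row,
# so each query retrieves its ground truth at rank 1
dist = 1.0 - np.eye(4) + 0.01 * np.arange(16).reshape(4, 4)
recall, med_r, nmr = retrieval_metrics(dist)
print(recall[1], med_r, nmr)  # 1.0 1.0 0.25
```

With the PVSE model, `dist[q, m]` would be the minimum cosine distance over the $K \times K$ embedding pairs of query $q$ and database item $m$, matching the inference procedure described above.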
], [ "Video-Text Retrieval Results", "Table REF and Table REF show the results on the TGIF and MRW datasets.", "Because there are no previously reported results on these datasets for the cross-modal retrieval scenario, we run the baseline models and report their results.", "We can see that our method shows strong performance compared to all the baselines.", "We provide implementation details of the baseline models in the supplementary material.", "We notice that the overall performance is much lower than the results from MS-COCO.", "This shows how challenging video-text retrieval is (and video understanding in a broader context), and calls for further research in this task.", "We can also see that there is a large performance gap between the two datasets.", "This suggests the two datasets have significantly different characteristics: the TGIF contains sentences describing visual content in videos, while our MRW dataset contains videos showing one of many possible reactions to certain situations described in sentences.", "This makes the association between video and text modalities much weaker for the MRW than for the TGIF.", "Table: Experimental results on the TGIF dataset.", "Table: Experimental results on the MRW dataset.", "Figure: Performance (rsum) with different numbers of embeddings, $K=[0:8]$ .", "The results at $K=0$ are obtained when we take out the PIE-Net and use the global feature as the embedding output.", "Figure: Performance (rsum) on MS-COCO and MRW with different ablative settings.", "The error bars are obtained from multiple runs over $K=[1:8]$ .", "Figure: Performance (rsum) on MS-COCO with different loss weights for $\mathcal {L}_{div}$ and $\mathcal {L}_{mmd}$ .", "The error bars are obtained from multiple runs of $K=[2:4]$ and $\lambda _{(\cdot )}=[0.0, 0.01, 0.1, 1.0]$ ."
], [ "Ablation Results", "The number of embeddings $K$ : Tables REF , REF , REF show that computing multiple embeddings per instance improves performance compared to just a single embedding (see the last two rows in each table).", "To better understand the effect of $K$ , we vary it from 1 to 8, and also compare with $K=0$ , a baseline where we bypass our Local Feature Transformer and simply use the global feature as the final embedding representation.", "Figure REF shows the performance on all three datasets based on the rsum metric (R@1 + R@5 + R@10 for image/video-to-text and back).", "The results are from the models before fine-tuning the ResNet-152 weights.", "We can see that there is a significant improvement from $K=0$ to $K=1$ ; this shows the effectiveness of our Local Feature Transformer.", "We can make an interesting observation by comparing the optimal $K$ settings across different datasets: $K=3$ for COCO and TGIF, and $K=5$ for MRW.", "While this cannot be used as strong evidence, we believe this shows the level of ambiguity is higher on MRW than the other two datasets.", "Global vs. 
locally-guided features: We analyze the importance of global and locally-guided features, as well as different strategies to combine them.", "Figure REF shows results on several ablative settings: No Global is when we use locally-guided features alone (discard global features); No Residual is when we simply concatenate global and locally-guided features, instead of combining them via residual learning.", "We report results on both MS-COCO and MRW because the two datasets exhibit the biggest difference in the level of ambiguity.", "We notice that the performance drops significantly on both datasets when we discard global features.", "Together with $K=0$ results in Figure REF (discard locally-guided features), this shows the importance of balancing global and local information in the final embedding.", "We also see that simply concatenating the two features (no residual learning) hurts the performance, and the drop is more significant on the MRW dataset.", "This suggests our residual learning setup is especially crucial for highly ambiguous data.", "MIL objective: Figure REF also shows the result of No MIL, which is when we concatenate the $K$ embeddings and optimize the standard triplet ranking objective [12], [28], [10], i.e., the “Conventional” setup in Figure REF .", "While the differences are relatively smaller than with the other ablative settings, there are statistically significant differences between the two results on both datasets ($p=0.046$ on MS-COCO and $p=0.015$ on MRW).", "We also see that the difference between No MIL and Ours on MRW is more pronounced than on MS-COCO.", "This suggests the MIL objective is especially effective for highly ambiguous data.", "Sensitivity analysis on different loss weights: Figure REF shows the sensitivity of our approach when we vary the relative loss weights, i.e., $\lambda _1$ and $\lambda _2$ in Equation (REF ).", "Note that the weights are relative, not absolute, e.g., instead of directly multiplying $\lambda _1 = 1.0$ to $\mathcal {L}_{div}$ , we first scale it to $\lambda _1 \times (\mathcal {L}_{mil} / \mathcal {L}_{div})$ and then multiply it to $\mathcal {L}_{div}$ .", "The results show that both loss terms are important in our model.", "We can see, in particular, that $\mathcal {L}_{mmd}$ plays an important role in our model.", "Without it, the two embedding spaces induced by different modalities may diverge quickly due to the MIL objective, which may result in a poor convergence rate.", "Overall, our results suggest that the model is not very sensitive to the two relative weight terms." ], [ "Conclusion", "Ambiguous instances and their partial associations pose significant challenges to cross-modal retrieval.", "Unlike the traditional approaches that use injective embedding to compute a single representation per instance, we propose a Polysemous Instance Embedding Network (PIE-Net) that computes multiple and diverse representations per instance.", "To obtain visual-semantic embedding that is robust to partial cross-modal association, we tie up two PIE-Nets, one per modality, and jointly train them using the Multiple Instance Learning objective.", "We demonstrate our approach on the image-text and video-text cross-modal retrieval scenarios and report strong results compared to several baselines.", "Part of our contribution is also in the newly collected MRW dataset.", "Unlike existing video-sentence datasets that contain sentences describing visual content in videos, ours contains videos illustrating one of many possible reactions to certain situations described in sentences, which makes video-sentence association somewhat ambiguous.", "This poses new challenges to cross-modal retrieval; we hope there will be further progress on this challenging new dataset."
], [ "MRW Dataset", "Our dataset consists of 50,107 video-sentence pairs collected from popular social media websites including reddit, Imgur, and Tumblr.", "We crawled the data using the GIPHY API (https://developers.giphy.com) with query terms mrw, mfw, hifw, reaction, and reactiongif; we crawled the data from August 2016 to March 2019.", "Table REF shows the descriptive statistics of our dataset.", "We are continuously crawling the data, and plan to release updated versions in the future.", "Table: Descriptive statistics of the MRW dataset." ], [ "Previous Work on Animated GIF", "Note that most of the videos in our dataset have the animated GIF format.", "Technically speaking, animated GIFs and videos have different formats; the former is lossless, palette-based, and has no audio.", "In this paper, however, we use the two terms interchangeably because the distinction is unnecessary in our method.", "Below, to provide the context for our work, we briefly review previous work that focused on animated GIF.", "There is increasing interest in conducting research around animated GIFs.", "Bakhshi et al. [4] studied what makes animated GIFs engaging on social networks and identified a number of factors that contribute to it: the animation, lack of sound, immediacy of consumption, low bandwidth and minimal time demands, the storytelling capabilities and utility for expressing emotions.", "Previous work in the computer vision and multimedia communities used animated GIFs for various tasks in video understanding.", "Jou et al. [25] propose a method to predict viewer-perceived emotions for animated GIFs.", "Gygli et al. [19] propose the Video2GIF dataset for video highlighting, and further extend it to emotion recognition [18].", "Chen et al. [5] propose the GIFGIF+ dataset for emotion recognition.", "Zhou et al. [60] propose the Image2GIF dataset for video prediction, along with a method to generate cinemagraphs from a single image by predicting future frames.", "Recent work uses animated GIFs to tackle
the vision & language problems.", "Li et al. [31] propose the TGIF dataset for video captioning; Jang et al. [24] propose the TGIF-QA dataset for video visual question answering.", "Similar to the TGIF dataset [31], our dataset includes video-sentence pairs.", "However, our sentences are created by real users from Internet communities rather than study participants, thus posing real-world challenges.", "More importantly, our dataset has implicit concept association between videos and sentences (videos contain physical or emotional reactions to sentences), while the TGIF dataset has explicit concept association (sentences describe visual content in videos)." ], [ "Analysis of Facial Expressions", "Facial expression plays an important role in our dataset: 6,380 samples contain the hashtag MFW (my face when), indicating that those GIFs contain emotional reactions manifested by facial expressions.", "To better understand the landscape of our dataset, we analyze the types of facial expressions contained in our dataset by leveraging automatic tools.", "First, we count the number of faces appearing in the animated GIFs.", "To do this, we applied the dlib CNN face detector [27] on five frames sampled from each animated GIF at equal intervals.", "The results show that there are, on average, $0.73$ faces in a given frame of an animated GIF.", "Also, 34,052 animated GIFs contain at least one face.", "This means that 72% of our videos contain faces, which is quite significant.", "This suggests that employing techniques tailored specifically for face understanding could potentially improve performance on our dataset.", "Next, we use the Affectiva Affdex [37] to analyze facial expressions depicted in the animated GIFs, detecting the intensity of expressions from two frames per second in each animated GIF.", "We looked at six expressions of basic emotions [9], namely, joy, fear, sadness, disgust, surprise and anger.", "We analyzed only the frames that contain a face with its bounding box 
region larger than 15% of the image.", "Figure REF shows the results.", "Overall, joy, with an average intensity of 9.1%, and disgust (7.4%) are the most common facial expressions in our dataset." ], [ "Comparison to the TGIF Dataset", "Image and video captioning often involves describing objects and actions depicted explicitly in visual content [32], [31].", "For reaction GIFs, however, visual-textual association is not always explicit.", "For example, as is the case in our dataset, objects and actions depicted in visual content might be a physical or emotional reaction to the scenario posed in the sentence.", "In this section, we qualitatively compare our dataset with the TGIF dataset [31], which contains 120K video-sentence pairs for video captioning.", "We chose this dataset because both datasets contain animated GIFs collected from social media, and thus contain similar visual content.", "Figure: Distributions of nouns and verbs in our MRW and the TGIF datasets.", "Compared to the TGIF dataset, words in our dataset depict more abstract concepts (e.g., post, time, day, start, realize, think, try), suggesting the ambiguous nature of our dataset.", "We first compare words appearing in both datasets.", "Figure REF shows word clouds of nouns and verbs extracted from our MRW dataset and the TGIF dataset [31].", "Sentences in the TGIF dataset are constructed by crowdworkers to describe the visual content explicitly displayed in animated GIFs.", "Therefore, its nouns and verbs mainly describe physical objects, people and actions that can be visualized, e.g., cat, shirt, stand, dance.", "In contrast, MRW sentences are constructed by Internet users, typically from subcommunities in social networks that focus on reaction GIFs.", "As can be seen from Figure REF , verbs and nouns in our MRW dataset additionally include abstract terms that cannot necessarily be visualized, e.g., time, day, realize, think.", "This shows that our dataset contains ambiguous terms and their 
associations, which pose significant challenges to cross-modal retrieval.", "Next, we compare whether video-sentence associations are explicit/implicit in both datasets.", "To this end, we conducted a user study in which we asked six participants to verify the association between sentences and animated GIFs.", "We randomly sampled 100 animated GIFs from the test sets of both our dataset and the TGIF dataset [31].", "We paired each animated GIF with both its associated sentence and a randomly selected sentence from the corresponding dataset, resulting in 200 GIF-sentence pairs per dataset.", "The results show that, in the case of our dataset (MRW), 80.4% of the associated pairs are positively marked as being relevant, suggesting that humans are able to distinguish true from fake pairs despite the implicit concept association.", "On the other hand, 50.7% of the randomly assigned sentences are also marked as matching sentences.", "The high false positive rate shows the ambiguous nature of GIF-sentence association in our dataset.", "In contrast, for the TGIF dataset with clear explicit association, 95.2% of the positive pairs are correctly marked as relevant and only 2.6% of the irrelevant pairs are marked as being relevant.", "This human baseline demonstrates the challenging nature of GIF-sentence association in our dataset, due to their implicit rather than explicit association." 
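The noun/verb word-cloud comparison above boils down to relative word-frequency statistics over the two sentence corpora. A toy sketch of such a comparison follows; the two tiny corpora below are invented placeholders, not samples from the actual datasets:

```python
from collections import Counter
import re

def word_freq(sentences):
    """Relative word frequencies over a list of sentences."""
    counts = Counter()
    for s in sentences:
        counts.update(re.findall(r"[a-z]+", s.lower()))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Invented two-sentence stand-ins for the two corpora.
mrw = ["mrw i realize i have to start my day", "mfw i think about the time"]
tgif = ["a cat stands on a shirt", "a man dances in a shirt"]

f_mrw, f_tgif = word_freq(mrw), word_freq(tgif)
# Words over-represented in MRW relative to TGIF (smoothed frequency ratio).
eps = 1e-6
skew = {w: f / (f_tgif.get(w, 0.0) + eps) for w, f in f_mrw.items()}
top = sorted(skew, key=skew.get, reverse=True)[:5]
print(top)
```

Run over the full corpora (with part-of-speech filtering for nouns and verbs), this kind of ratio surfaces exactly the abstract terms discussed above, e.g. time, day, realize, think.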
], [ "Application: Animated GIF Search", "Animated GIFs are becoming increasingly popular [4]; more people use them to tell stories, summarize events, express emotion, and enhance (or even replace) text-based communication.", "To reflect this trend, several social networks and messaging apps have recently incorporated GIF-related features into their systems, e.g., Facebook users can create posts and leave comments using GIFs, Instagram and Snapchat users can put “GIF stickers” into their personal videos, and Slack users can send messages using GIFs.", "This rapid increase in popularity and real-world demand necessitates more advanced and specialized systems for animated GIF search.", "Current solutions to animated GIF search rely entirely on concept tags associated with animated GIFs and matching them with user queries.", "The tags are typically provided by users or produced by editors at companies like GIPHY.", "In the former case, noise becomes an issue; in the latter, it is expensive and would not scale well.", "One of the motivations behind collecting our MRW dataset is to build a text-based animated GIF search engine, targeted for real-world scenarios mentioned above.", "Existing video captioning datasets, such as TGIF [31], are inappropriate for our purpose because of the explicit nature of visual-textual association, i.e., sentences simply describe what is being shown in videos.", "Rather, we need a dataset that captures various types of nuances used in social media, e.g., humor, irony, satire, sarcasm, incongruity, etc.", "Because our dataset provides video-text pairs with implicit visual-textual association, we believe that it has the potential to provide training data for building text-based animated GIF search engines targeted for social media.", "To demonstrate the potential, we provide qualitative results on text-to-video retrieval using our dataset, shown in Figure REF .", "Each set of results show a query text and the top five retrieved videos, along 
with their ranks and cosine similarity scores.", "We would like the readers to take a close look at each set of results and decide which of the five retrieved videos depicts the most likely visual response to the query sentence.", "The answers are provided below.", "For a better viewing experience, we provide an HTML page with animated GIFs instead of static images.", "We strongly encourage the readers to check the HTML page to better appreciate the results.", "(Answers: 3, 5, 2, 4, 1, 5, 4)" ], [ "Baseline Implementation Details", "In the experiment section, we provided baseline results for the MS-COCO, TGIF, and MRW datasets.", "For MS-COCO, we provided previously reported results.", "For TGIF and MRW, on the other hand, we reported our own results because there have been no previous results on these datasets.", "Due to the space limit, we omitted implementation details of the baseline approaches; here we provide implementation details of the four baseline approaches: DeViSE [12], VSE++ [10], Order Embedding [51], and Corr-AE [11].", "For fair comparison, all four baselines share the same video and sentence encoders as described in Section 3.1 of the main paper.", "The only difference is in the loss function we train the models with.", "Following the notation used in the main paper, we denote the output of the video and sentence encoders by $\\phi (x)$ and $\\phi (y)$ , respectively.", "We employ the following loss functions for the baselines: DeViSE [12]: We implement the conventional hinge loss in the triplet ranking setup; see Equation (REF ).", "It penalizes cases where positive pairs (i.e., the ground truth) are further apart than negative pairs (e.g., randomly sampled) by more than a margin parameter $\\rho $ (we measure the cosine distance).", "VSE++ [10]: We implement the hard negative mining version of the conventional triplet ranking hinge loss; see Equation (REF ).", "We have experimented with the original version and found 
that it fails to find a suitable solution to the objective, producing retrieval results that are almost identical to random guessing.", "We suspect that the high noise present in both the TGIF and MRW datasets makes the max function too strict as a constraint.", "We therefore replace the $\\max _q$ function with a “filter” function that includes only highly-violating cases while ignoring others.", "Intuitively, we implement the filter function to be an outlier detection function based on z-scores, where any z-score greater than 3 or less than -3 is considered to be an outlier.", "Specifically, we compute the z-scores for all possible $(i,j,k)$ combinations inside Equation (REF ) and discard instances if their absolute z-score is below 3.0.", "This way, we are considering multiple hard negatives instead of just one.", "We have empirically found this modification to be crucial to achieving reasonable performance on the TGIF and MRW datasets.", "Order Embedding [51]: We used the original implementation provided by the authors of [51].", "Corr-AE [11]: We implement the correspondence cross-modal autoencoder proposed by Feng et al. [11] (see Figure 4 in [11]).", "Given the encoder outputs $\\phi (x)$ and $\\phi (y)$ , we build two autoencoders, one per modality, so that each autoencoder can reconstruct both $\\phi (x)$ and $\\phi (y)$ .", "The autoencoders have four fully-connected layers with [512, 256, 256, 512] hidden units, respectively.", "Each of the fully connected layers is followed by a ReLU activation and a layer normalization [3].", "Formally, a video autoencoder takes as input $\\phi (x)$ and outputs $[\\tilde{\\phi }(x|x); \\tilde{\\phi }(y|x)]$ , and a sentence autoencoder takes as input $\\phi (y)$ and outputs $[\\tilde{\\phi }(x|y); \\tilde{\\phi }(y|y)]$ .", "We then train the model by optimizing the loss form shown in Equation (REF ).", "We note that this loss is different from the original formulation of Corr-AE [11], where the first term in Equation (REF ) is 
replaced by a Euclidean loss, i.e., $\\mathcal {L}_2 = \\frac{1}{N}\\sum _{i=1}^N \\left( \\Vert \\phi (x_i) - \\phi (y_i) \\Vert _2^2\\right)$ .", "We found that using $\\mathcal {L}_2$ instead of $\\mathcal {L}_{DeViSE}$ makes the learning much harder, producing results that are almost identical to random guessing." ], [ "Image-to-Text Retrieval Results on MS-COCO", "Figure REF shows examples of visual-textual attention maps on the MS-COCO dataset; the task is image-to-text retrieval.", "The first column shows query images with ground-truth sentences.", "Each of the other three columns shows visual (spatial) attention maps and their top-ranked text retrieval results, as well as their ranks and cosine similarity scores (green: correct, red: incorrect).", "We color-code words in the retrieved sentences according to their textual attention intensity values, normalized between [0, 1].", "A glimpse at the results in each row shows that the three attention maps attend to different regions of the query image.", "Looking closely, we notice that salient regions are typically attended by multiple attention maps.", "For example, all three attention maps in Figure REF highlight: (a) the photographer, (b) the bench, (c) the fruit stand, (e) the pink flowers, (f) the stop sign, (h) the woman, (j) the fire hydrant.", "However, this is not always the case: In Figure REF (i), none of the attention maps highlights the most salient object, the black dog, and each attention map highlights different regions in the image.", "Even though none of the three attention maps “attends to” the dog, their top-ranked text retrieval results are still highly relevant to the query image; all three retrieved sentences have the word dog in them.", "This is possible because our PIE-Net computes embedding vectors by combining global context with locally-guided features.", "In this example, the global context provides information about the black dog, while each of the three locally-guided features contains 
region-specific information, specifically, (first map): the book shelf, (second map): the floor, (third map): the brown cushion.", "The most interesting observation is that there are subtle variations in the retrieved sentences depending on where the visual attention is focused.", "For example, in Figure REF (a), the first result focuses on the photographer as a whole, the second focuses on the tiny camera (the visual attention is more narrowly focused on the photographer), and the third focuses on the pizza on the table (notice the visual attention on the table).", "In Figure REF (d), the first result focuses on the ship, the second focuses on the building, and the third on an (imaginary) bird that could have been flying over the buildings.", "In Figure REF (g), the first result focuses on the boat and the muddy water (notice visual attention on the muddy water region at the lower left corner), while the second focuses on the table of people (notice visual attention on the table region).", "In Figure REF (j), the first result focuses on the fire hydrant and the yellow wall that is right behind the hydrant, while the second focuses on the hydrant as well as the building with two windows (notice now the visual attention is more widely spread out than the first result).", "We encourage the readers to look closely at Figure REF to appreciate the subtle variations in the retrieved sentences depending on their corresponding visual attention." 
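The combination of global context with attention-weighted, locally-guided features discussed above can be sketched roughly as follows. The additive fusion, dimensions, and random features here are illustrative assumptions, not the exact PIE-Net architecture:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def attended_embedding(local_feats, global_feat, query):
    """Fuse a global feature with an attention-weighted sum of local features.

    local_feats: (R, D) region features; global_feat, query: (D,) vectors.
    """
    attn = softmax(local_feats @ query)   # one attention weight per region
    local = attn @ local_feats            # locally-guided feature
    emb = global_feat + local             # illustrative additive fusion
    return emb / np.linalg.norm(emb), attn

rng = np.random.default_rng(0)
regions = rng.normal(size=(49, 32))       # e.g. a 7x7 grid of region features
global_feat = regions.mean(axis=0)        # placeholder global context
query = rng.normal(size=32)

emb, attn = attended_embedding(regions, global_feat, query)
print(int(attn.argmax()), float(attn.sum()))
```

Different queries yield different attention weights and thus subtly different embeddings, which is consistent with the variation in retrieved sentences observed above.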
], [ "Video-to-Text Retrieval Results on TGIF", "Figure REF shows examples of visual-textual attention maps on the TGIF dataset; the task is video-to-text retrieval.", "In each set of results, we show: (top) a query video and its ground-truth sentence, (bottom three rows): three visual (temporal) attention maps and their top-ranked text retrieval results, as well as their ranks and cosine similarity scores (green: correct, red: incorrect).", "We color-code words in the retrieval results according to their textual attention intensity values, normalized between [0, 1].", "Similar to the results on MS-COCO, here we see that visual and textual attention maps tend to highlight salient video frames and words, respectively.", "Looking closely, we notice that the retrieved results tend to capture the concepts highlighted by their corresponding visual attention.", "For example, in Figure REF (a), the top ranked result contain “lady dressed in black” and ”drinking a glass of wine”, and the visual attention highlights both the early part of the video, where a woman is drinking from a bottle of whisky, and the latter part, where her black dress is shown.", "For the second ranked result, the visual attention no longer highlights the latter part, and the retrieved text focuses solely on drinking action (no mention of her black dress).", "In Figure REF (b), the top ranked result focuses on scoring a goal, while the second rank result also focus on the guy being hit in the face with the ball.", "Notice the difference of visual attention maps between the first and the second case." 
], [ "Text-to-Video Retrieval Results on MRW", "Figure REF shows examples of text-to-video retrieval results on the MRW dataset.", "In each row, we show a query sentence and top five retrieved videos along with their ranks and cosine similarity scores.", "Unlike the previous two figures, here we do not directly show the ground-truth matches (but rather ask the readers to find them; we provide the answers above).", "The purpose of this is to emphasize the ambiguous and implicit nature of visual-textual association present in our dataset.", "Most of the top five retrieved videos seem to be a good match to the query sentence.", "For example, Figure REF (a) shows five videos that all contain a human face, each expressing subtly different emotions.", "Figure REF (b) shows five videos that all contain an animal (squirrel, cat, etc), and most videos contain food.", "All five retrieved videos in Figure REF show some form of awkward (dancing) moves.", "We believe that the relatively poor retrieval performance reported in our main paper is partly explained by our qualitative results: visual-textual associations are highly ambiguous and there could be multiple correct matches.", "This calls for a different metric that measures the perceptual similarity between queries and retrieved results, rather than exact match.", "There has been some progress on perceptual metrics in the image synthesis literature (e.g., Inception Score [47]).", "We are not aware of a suitable perceptual metric for cross-modal retrieval, and this could be a promising direction for future research." ] ]
1906.04402
[ [ "Fast Rates for a kNN Classifier Robust to Unknown Asymmetric Label Noise" ], [ "Abstract We consider classification in the presence of class-dependent asymmetric label noise with unknown noise probabilities.", "In this setting, identifiability conditions are known, but additional assumptions were shown to be required for finite sample rates, and so far only the parametric rate has been obtained.", "Assuming these identifiability conditions, together with a measure-smoothness condition on the regression function and Tsybakov's margin condition, we show that the Robust kNN classifier of Gao et al.", "attains, the minimax optimal rates of the noise-free setting, up to a log factor, even when trained on data with unknown asymmetric label noise.", "Hence, our results provide a solid theoretical backing for this empirically successful algorithm.", "By contrast the standard kNN is not even consistent in the setting of asymmetric label noise.", "A key idea in our analysis is a simple kNN based method for estimating the maximum of a function that requires far less assumptions than existing mode estimators do, and which may be of independent interest for noise proportion estimation and randomised optimisation problems." 
], [ "Introduction", "Label noise is a pervasive issue in real-world classification tasks, as perfectly accurate labels are often very costly, and sometimes impossible, to produce [23], [6], [4], [11].", "We consider asymmetric label noise with unknown class-conditional noise probabilities – that is, the labels we observe have randomly flipped in some proportion that depends on the class.", "This type of noise is both realistic and amenable to analysis [4], [23].", "In this setting the classical kNN algorithm is no longer consistent (see Section ).", "Most existing theoretical work in this direction assumes that the noise probabilities are known in advance by the learner [22], at least approximately [23].", "However, in many situations such knowledge is not available, for instance in positive unlabelled (PU) learning [10] one may regard unlabelled data as a class of negative examples contaminated with positives in an unknown proportion.", "Other examples include the problem of nucleur particle classification discussed by [4].", "That work also established identifiability conditions sufficient for recovering unknown noise probabilities from corrupted data.", "[3] proved that the identifiability conditions are insufficient to obtain finite sample convergence rates.", "Consequently, [26] introduced additional conditions external to the classification task with which it is possible to obtain the parametric rate (of order $n^{-1/2}$ where $n$ is the sample size) [4].", "To the best of our knowledge it is unknown if faster rates are possible with unknown asymmetric label noise.", "Here we answer this question in the affirmative by analysing an existing Robust kNN classifier [12].", "Previously, [12] conducted a comprehensive empirical study which demonstrates that the Robust kNN, introduced therein, typically outperforms a range of competitors for classification problems with asymmetric label noise.", "[12] also proved the consistency of their method, but only under the 
restrictive assumption of prior knowledge of the label noise probabilities.", "We prove that the Robust kNN classifier attains fast rates for classification problems in a flexible non-parametric setting with unknown asymmetric label noise.", "More precisely, we work under a measure-smoothness condition on the regression function, introduced in recent analyses of kNN in the noise-free setting [7], termed the `modified Lipschitz' condition in [9], as well as Tsybakov's margin condition.", "We assume in addition conditions equivalent to those of label noise identifiability [4], [19].", "We show that the Robust kNN introduced by [12] attains, up to a log factor, the known minimax optimal fast rate of the label-noise-free setting – despite the presence of unknown asymmetric label noise." ], [ "Problem Setup", "Suppose we have a feature space $\\mathcal {X}$ with a metric $\\rho $ and a set of labels $\\mathcal {Y}= \\lbrace 0,1\\rbrace $ .", "Let $\\mathbb {P}$ be a fixed but unknown distribution on $\\mathcal {X}\\times \\mathcal {Y}$ .", "Our goal is to learn a classifier $\\phi :\\mathcal {X}\\rightarrow \\mathcal {Y}$ which minimises the risk $\\mathcal {R}\\left(\\phi \\right):= \\mathbb {P}\\left[ \\phi (X) \\ne Y \\right].$ Our data will be generated by a corrupted distribution ${\\mathbb {P}}_{\\text{corr}}$ on $\\mathcal {X}\\times \\mathcal {Y}$ with asymmetric label noise, so there exist probabilities $p_{0}, p_{1}\\in (0,1)$ with $p_{0}+p_{1}<1$ such that random pairs $(X,\\tilde{Y})\\sim {\\mathbb {P}}_{\\text{corr}}$ are generated by $(X,Y) \\sim \\mathbb {P}$ and $\\tilde{Y}\\ne Y$ with probability $p_{Y}$ and $\\tilde{Y}=Y$ otherwise, i.e.", "$p_{0}= {\\mathbb {P}}_{\\text{corr}}[\\tilde{Y}=1|Y=0]$ and $p_{1}= {\\mathbb {P}}_{\\text{corr}}[\\tilde{Y}=0|Y=1]$ .", "We have access to a data set ${\\mathcal {D}}_{\\text{corr}}= \\lbrace (X_i,\\tilde{Y}_i)\\rbrace _{i \\in [n]}$ only, consisting of i.i.d.", "pairs generated from the corrupted distribution 
$(X_i,\\tilde{Y}_i) \\sim {\\mathbb {P}}_{\\text{corr}}$ .", "We let $\\mu $ denote the marginal distribution over the features i.e.", "$\\mu (A) = \\mathbb {P}\\left[ X \\in A \\right]$ for Borel sets $A \\subseteq \\mathcal {X}$ , and let $\\eta :\\mathcal {X}\\rightarrow [0,1]$ denote the regression function i.e.", "$\\eta (x) = \\mathbb {P}[ Y=1 | X=x]$ .", "Further, let $\\mathcal {X}_{\\mu }\\subseteq \\mathcal {X}$ denote the support of the measure $\\mu $ .", "It follows from the assumption of feature independent label noise that the corrupted distribution ${\\mathbb {P}}_{\\text{corr}}$ has the same marginal distribution as $\\mathbb {P}$ i.e.", "${\\mathbb {P}}_{\\text{corr}}\\left[ X \\in A \\right]= \\mu \\left(A\\right)$ for $A \\subseteq \\mathcal {X}$ .", "Denote by ${\\eta }_{\\text{corr}}:\\mathcal {X}\\rightarrow [0,1]$ the corrupted regression function ${\\eta }_{\\text{corr}}(x) = {\\mathbb {P}}_{\\text{corr}}[\\tilde{Y}=1|X=x]$ .", "As observed in [19], ${\\eta }_{\\text{corr}}$ and $\\eta $ are related by ${\\eta }_{\\text{corr}}(x) &= \\left(1-p_{1}\\right) \\cdot \\mathbb {P}\\left[Y=1|X=x\\right] + p_{0}\\cdot \\mathbb {P}\\left[Y=0|X=x\\right]\\nonumber \\\\&=\\left(1-p_{0}-p_{1}\\right) \\cdot \\eta (x)+p_{0}.$ We shall use this connection to provide a label noise robust plug-in classifier." 
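The identity relating the clean and corrupted regression functions, ${\eta }_{\text{corr}}(x) = (1-p_{0}-p_{1})\cdot \eta (x) + p_{0}$, can be checked numerically. In the small simulation below, the flip probabilities and the value of $\eta $ at the fixed point $x$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p0, p1 = 0.2, 0.3        # class-conditional flip probabilities (p0 + p1 < 1)
eta = 0.7                # P[Y = 1 | X = x] at some fixed point x
n = 200_000

y = rng.random(n) < eta                      # clean labels at x
flip = rng.random(n) < np.where(y, p1, p0)   # flip label Y with probability p_Y
y_tilde = y ^ flip                           # corrupted labels

empirical = y_tilde.mean()
predicted = (1 - p0 - p1) * eta + p0         # eta_corr(x) from the identity
print(empirical, predicted)
```

With these values the identity predicts a corrupted positive rate of 0.55, and the empirical frequency of corrupted positive labels matches it up to sampling error.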
], [ "Approach – roadmap", "The `plug-in' classification method is inspired by the fact that the mapping $\\phi _*:\\mathcal {X}\\rightarrow \\mathcal {Y}$ for $x \\in \\mathcal {X}$ defined by $\\phi _*(x) = {1}\\left\\lbrace \\eta (x)\\ge 1/2\\right\\rbrace $ is a Bayes classifier and minimises the risk $\\mathcal {R}(\\phi )$ over all measurable classifiers $\\phi :\\mathcal {X}\\rightarrow \\mathcal {Y}$ .", "The approach is to first produce an estimate $\\hat{\\eta }:\\mathcal {X}\\rightarrow [0,1]$ of the regression function $\\eta $ and then take $\\hat{\\phi }(x):={1}\\left\\lbrace \\hat{\\eta }(x)\\ge 1/2\\right\\rbrace $ .", "To apply this method in the label-noise setting we must first give a method for constructing an estimate $\\hat{\\eta }$ based upon the corrupted sample ${\\mathcal {D}}_{\\text{corr}}$ .", "By eq.", "(REF ) for each $x \\in \\mathcal {X}$ we have $\\eta (x)=\\left(1-p_{0}-p_{1}\\right)^{-1}\\cdot \\left({\\eta }_{\\text{corr}}(x)-p_{0}\\right).$ However, all quantities on the RHS are unknown.", "Our strategy is to decompose the problem under mild conditions, so that we can plug in estimates.", "The following simple lemma makes this precise.", "Lemma 3.1 Let ${\\hat{\\eta }}_{\\text{corr}}:\\mathcal {X}\\rightarrow [0,1]$ be an estimate of ${\\eta }_{\\text{corr}}$ and define $\\hat{\\eta } :\\mathcal {X}\\rightarrow [0,1]$ by $\\hat{\\eta }(x):= \\left({\\hat{\\eta }}_{\\text{corr}}(x)-\\hat{p}_0\\right)/\\left(1-\\hat{p}_0-\\hat{p}_1\\right)$ .", "Suppose that $p_{0}+p_{1}<1$ , and $\\hat{p}_0, \\hat{p}_1\\in \\left[0,1\\right)$ with $\\hat{p}_0+\\hat{p}_1<1$ .", "Suppose further that $\\max \\left\\lbrace \\left| \\hat{p}_0-p_{0}\\right|, \\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace \\le \\left(1-p_{0}-p_{1}\\right)/4$ .", "Then for all $x \\in \\mathcal {X}$ we have $&\\left| \\hat{\\eta }(x)-\\eta (x)\\right| \\le \\dots \\\\&8\\cdot \\frac{\\max \\left\\lbrace \\left| \\hat{\\eta }{}_{\\text{corr}}(x) - {\\eta 
}_{\\text{corr}}(x)\\right|, \\left| \\hat{p}_0-p_{0}\\right|, \\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace }{1-p_{0}-p_{1}}.$ The lemma follows from eq.", "(REF ) by a straightforward manipulation.", "See Appendix for details.", "We note that the conditions in Lemma REF that involve estimates may be ensured by access to a sufficiently large sample, and are therefore not restrictive.", "Consequently, we shall obtain $\\hat{\\eta }(x)$ in two steps, summarised in the plug-in template Algorithm .", "First we construct an estimator ${\\hat{\\eta }}_{\\text{corr}}$ for the corrupted regression function ${\\eta }_{\\text{corr}}$ based upon the corrupted sample ${\\mathcal {D}}_{\\text{corr}}$ using supervised regression methods.", "The key remaining challenge is then to obtain estimates $\\hat{p}_0$ and $\\hat{p}_1$ for $p_{0}$ and $p_{1}$ , respectively.", "The latter is known to be impossible without further assumptions (see Section 4 in [26]).", "Algorithm (Plug-in classification with label noise): 1.", "Compute an estimate ${\\hat{\\eta }}_{\\text{corr}}$ of the corrupted regression function ${\\eta }_{\\text{corr}}$ based on ${\\mathcal {D}}_{\\text{corr}}$ ; 2.", "Compute $\\hat{p}_0$ and $\\hat{p}_1$ by estimating the extrema of ${\\hat{\\eta }}_{\\text{corr}}$ ; 3.", "Let $\\hat{\\phi }(x):= {1}\\left\\lbrace {\\hat{\\eta }}_{\\text{corr}}(x) \\ge 1/2 \\cdot \\left(1+\\hat{p}_0-\\hat{p}_1\\right) \\right\\rbrace $ .", "Next, we discuss the assumptions that we employ for the remainder of this work." 
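The plug-in template above can be sketched with a plain kNN regressor standing in for the corrupted-regression estimate and the raw extrema of the fitted values standing in for the noise-rate estimates. This is a simplification: the estimator analysed in this paper uses a dedicated kNN-based maximum estimator rather than a naive min/max over training points, and the 1-d synthetic data below are our own:

```python
import numpy as np

def knn_regress(x_train, z_train, x_query, k):
    """kNN regression estimate: average the labels of the k nearest points (1-d)."""
    d = np.abs(x_train[None, :] - np.asarray(x_query)[:, None])
    nn = np.argsort(d, axis=1)[:, :k]
    return z_train[nn].mean(axis=1)

rng = np.random.default_rng(0)
p0, p1 = 0.2, 0.3                 # true flip rates (unknown to the learner)
n, k = 2000, 100

x = rng.random(n)
y = rng.random(n) < x             # clean regression function eta(x) = x
y_tilde = (y ^ (rng.random(n) < np.where(y, p1, p0))).astype(float)

# Step 1: estimate the corrupted regression function at the training points.
eta_corr_hat = knn_regress(x, y_tilde, x, k)
# Step 2: estimate the noise rates from the extrema of eta_corr_hat.
p0_hat, p1_hat = eta_corr_hat.min(), 1.0 - eta_corr_hat.max()
# Step 3: classify with the shifted threshold (1 + p0_hat - p1_hat) / 2.
x_test = np.array([0.05, 0.25, 0.75, 0.95])
pred = knn_regress(x, y_tilde, x_test, k) >= (1 + p0_hat - p1_hat) / 2
print(p0_hat, p1_hat, pred)
```

Even this crude extrema step recovers the noise rates roughly, but the min/max over noisy fitted values is biased outwards, which is precisely why a more careful kNN-based maximum estimator is needed for the rates proved here.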
], [ "Main Assumptions and Relation to Previous Work\n", "We employ two kinds of assumptions: (i) Assumptions REF and REF represent identifiability conditions for asymmetric label noise; (ii) Assumptions REF and REF are conditions under which minimax optimal fast rates are known in the noise-free setting.", "We now briefly explain each of these, which also serves to place our forthcoming analysis into the context of previous work.", "We already made use of the following in Lemma REF : Assumption 1 (Most labels are correct) $p_{0}+p_{1}<1$ .", "Assumption 2 (Range assumption) We have $\\inf _{x \\in \\mathcal {X}_{\\mu }}\\left\\lbrace \\eta (x) \\right\\rbrace =0$ and $\\sup _{x \\in \\mathcal {X}_{\\mu }}\\left\\lbrace \\eta (x) \\right\\rbrace =1$ .", "Assumption REF was introduced by [19] who showed it to be equivalent to the `mutual irreducibility' condition given in [26], [4].", "The above form will be more directly useful, since from Assumption REF and eq.", "(REF ) it follows that $\\inf _{x \\in \\mathcal {X}_{\\mu }}\\left\\lbrace {\\eta }_{\\text{corr}}(x) \\right\\rbrace =p_{0}$ and $\\sup _{x \\in \\mathcal {X}_{\\mu }}\\left\\lbrace {\\eta }_{\\text{corr}}(x) \\right\\rbrace =1-p_{1}$ .", "Hence, we may obtain estimates $\\hat{p}_0$ and $\\hat{p}_1$ by estimating the extrema of the corrupted regression function ${\\eta }_{\\text{corr}}$ .", "Recall that the above assumptions alone do not permit finite sample convergence rates; therefore [25] assumed in addition that there are positive measure balls $B_0,B_1$ in the input space such that $\\forall x \\in B_i$ $\\eta (x)= i$ , and obtained the parametric rate, i.e.", "of order $n^{-1/2}$ .", "We will not assume this, instead we now consider assumptions that have already succeeded in establishing fast rates in the noise-free setting.", "The following smoothness condition with respect to the marginal distribution was first proposed by [7], and further employed by [9] who also termed it the `modified 
Lipschitz' condition.", "It generalises the combination of Hölder-continuity and strong density assumptions (i.e.", "marginal density bounded from below) that have been prevalent in previous theoretical analyses of plug-in classifiers after [1].", "Definition 3.1 (Measure-smoothness) A function $f:\\mathcal {X}\\rightarrow [0,1]$ is measure-smooth with exponent $\\lambda >0$ and constant $\\omega >0$ if for all $x_0, x_1 \\in \\mathcal {X}$ we have $\\left|f(x_0)-f(x_1)\\right|\\le \\omega \\cdot \\mu \\left(B_{\\rho (x_0,x_1)}(x_0)\\right)^{\\lambda }$ .", "Assumption 3 (Measure-smooth regression function) The regression function $\\eta $ is measure-smooth with exponent $\\lambda $ and constant $\\omega $ .", "A sufficient condition for Assumption REF to hold with $\\lambda = \\beta /d$ is that $\\eta $ is $\\beta $ -Hölder and $\\mu $ is absolutely continuous with respect to the Lebesgue measure on $[0,1]^d$ with a uniform lower bound on the density.", "However, Assumption REF does not require the existence of a density for $\\mu $ and also applies naturally to classification in general metric spaces, including discrete distributions [9].", "The final assumption is Tsybakov's margin condition, which has become a widely used device for explaining fast rates, since it was first proposed by [18].", "Assumption 4 (Tsybakov margin condition) There exist $\\alpha \\ge 0$ and $C_{\\alpha }\\ge 1$ such that for all $\\xi >0$ we have $\\mu \\left( \\left\\lbrace x \\in \\mathcal {X}: 0< \\left|\\eta (x)-\\frac{1}{2}\\right| <\\xi \\right\\rbrace \\right) \\le C_{\\alpha }\\cdot \\xi ^{\\alpha }.$ Note that Assumption REF always holds with $\\alpha =0$ and $C_{\\alpha }= 1$ ; hence it is not restrictive.", "Under the assumptions of Tsybakov margin and measure-smoothness of the regression function, [7] obtained, in the noise-free setting (i.e.", "without label noise), for the k-nearest neighbour (kNN) classifier, the convergence rate of order 
$n^{-\\frac{\\lambda (\\alpha +1)}{2\\lambda +1}}$ – which corresponds (after having made explicit the dimensional dependence) to the minimax optimal rate computed by [1] (that is, the lower bound is over all classifiers).", "With these two distributional assumptions, this rate can therefore be regarded as quantifying the statistical hardness of classification in the minimax sense (see e.g.", "[28]) when a perfect labelling is available.", "It is not at all obvious whether, and how, the same rate can be achieved in the presence of unknown asymmetric label noise.", "The aim in the remainder of this paper is to answer this question." ], [ "Notation and tools", "Whilst we are motivated by the estimation of ${\\eta }_{\\text{corr}}$ , we shall frame our results in a more general fashion for clarity.", "Suppose we have a distribution $\\mathbb {P}$ on $\\mathcal {X}\\times [0,1]$ and let $f:\\mathcal {X}\\rightarrow [0,1]$ be the function $f(x):= \\mathbb {E}\\left[Z|X=x\\right]$ .", "Our goals are to estimate $f$ and its extrema based on a sample $\\mathcal {D}_f = \\left\\lbrace \\left(X_i,Z_i\\right)\\right\\rbrace _{i \\in [n]}$ with $(X_i,Z_i) \\sim \\mathbb {P}$ generated i.i.d.", "We let $\\mathbf {X}: = \\left\\lbrace X_i\\right\\rbrace _{i \\in [n]}$ and $\\mathbf {Z}:=\\left\\lbrace Z_i\\right\\rbrace _{i \\in [n]}$ .", "Given $x \\in \\mathcal {X}$ we define $\\left\\lbrace \\tau _{n,q}(x)\\right\\rbrace _{q \\in [n]}$ to be an enumeration of $[n]$ such that for each $q \\in [n-1]$ , $\\rho \\left(x,X_{\\tau _{n,q}(x)}\\right) \\le \\rho \\left(x,X_{\\tau _{n,q+1}(x)}\\right)$ .", "We define the $k$ -nearest neighbour estimate $\\hat{f}_{n,k}: \\mathcal {X}\\rightarrow [0,1]$ of $f$ by $\\hat{f}_{n,k}(x):=\\frac{1}{k} \\cdot \\sum _{q \\in [k]}Z_{\\tau _{n,q}(x)}.$ Given a point $x \\in \\mathcal {X}$ and $r>0$ we let $B_r(x)$ denote the open metric ball of radius $r$ , centered at $x$ .", "It will be useful to give a high probability bound on the 
measure of an open metric ball centered at a given point with random radius equal to the distance of its $k$ -th nearest neighbour.", "Lemma 3.1 (Measure of the kNN ball) Take $x \\in \\mathcal {X}, k \\in [n]$ and $\\zeta \\ge 0$ .", "Then, $\\mathbb {P}^n\\left[ \\mu \\left(B_{\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right)}(x)\\right) > \\frac{(1+\\zeta )k}{n}\\right] \\le e^{-k(\\zeta -\\log (1+\\zeta ))}.$ A bound of this form appears in [2] (Sec.", "1.2) for the special case where the marginal distribution has a continuous distribution function.", "Their proof relies on the fact that, in this special case, $\\mu (B_{\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right)}(x))$ follows the distribution of the $k$ -th uniform order statistic, whose properties are well studied.", "However, since we consider a general metric space setting and do not assume a continuous distribution function, below we show from first principles that this bound is still valid, by exploiting the continuity properties of measures.", "For any $x\\in \\mathcal {X}$ and $p\\in [0,1]$ we define (following [7]) the smallest radius for which the open ball centered at $x$ has probability at least $p$ : $r_p(x) = \\inf \\left\\lbrace r>0: \\mu \\left(B_r(x)\\right)\\ge p\\right\\rbrace .$ Take $r>r_p(x)$ , so $\\mu \\left(B_r(x)\\right) \\ge p$ .", "Note that $\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right) \\ge r$ if and only if $\\sum _{i\\in [n]}{1}_{\\lbrace X_i \\in B_r(x)\\rbrace }<k$ .", "Moreover, taking $\\tilde{p}= \\frac{1}{n}\\sum _{i\\in [n]}\\mathbb {E}\\left[{1}_{\\lbrace X_i \\in B_r(x)\\rbrace }\\right]=\\mu \\left(B_r(x)\\right) \\ge p,$ implies $k \\le (1-\\epsilon ) n \\tilde{p}\\le n \\tilde{p}$ where $\\epsilon = 1-{k}/{(np)}$ .", "Thus, by the multiplicative Chernoff bound – Theorem 4.5 in [21] – we have, $\\mathbb {P}^n\\left[\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right) \\ge r\\right] &=\\mathbb {P}^n\\left[\\sum _{i\\in [n]}{1}_{\\lbrace X_i \\in B_r(x)\\rbrace }<k\\right]\\\\&\\le \\mathbb 
{P}^n\\left[\\sum _{i\\in [n]}{1}_{\\lbrace X_i \\in B_r(x)\\rbrace }< (1-\\epsilon )n\\tilde{p}\\right]\\\\&\\le \\exp (-n\\tilde{p} [\\epsilon + (1-\\epsilon )\\log (1-\\epsilon )])\\\\& \\le \\exp (-np [\\epsilon + (1-\\epsilon )\\log (1-\\epsilon )]).$ Since the above inequality holds for all $r>r_p(x)$ , it follows by continuity of $\\mu $ from above that we have $&\\mathbb {P}^n\\left[ \\rho \\left(x,X_{\\tau _{n,k}(x)}\\right) > r_p(x)\\right] \\le e^{-np [\\epsilon + (1-\\epsilon )\\log (1-\\epsilon )]}.$ This implies that with probability at least $1-\\exp (-np [\\epsilon + (1-\\epsilon )\\log (1-\\epsilon )])$ we have $\\mu \\left(B_{\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right)}(x)\\right) &\\le \\mu (B_{r_p(x)})\\le p,$ where the last inequality follows by continuity of measure from below.", "Recall that $\\epsilon =1-k/(np)$ .", "To obtain the conclusion of the lemma we first note that the bound holds trivially whenever $\\zeta \\ge n/k - 1$ since this implies $\\mathbb {P}^n&\\left[ \\mu \\left(B_{\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right)}(x)\\right) > \\frac{(1+\\zeta )k}{n}\\right] \\\\ &\\le \\mathbb {P}^n\\left[ \\mu \\left(B_{\\rho \\left(x,X_{\\tau _{n,k}(x)}\\right)}(x)\\right) > 1\\right]=0.$ For $\\zeta \\in \\left[0, n/k-1\\right]$ , we choose $p=(1+\\zeta )k/n \\in [0,1]$ .", "Plugging into $\\epsilon $ yields $\\epsilon =\\zeta /(1+\\zeta )$ , and rearranging the RHS of the probability bound completes the proof.", "When the centre of the metric ball is one of the data points, we have the following.", "Corollary 3.1 Take $k,j \\in [n]$ and $\\zeta >0$ .", "Then, $\\mathbb {P}^n\\left[ \\mu (B_{\\rho \\left(X_j,X_{\\tau _{n,k}(X_j)}\\right)}(X_j)) > \\frac{(1+\\zeta )k}{n}\\right] \\le e^{-(k-1)(\\zeta -\\log (1+\\zeta ))}.$ Since this is a bound also for the ball with a non-random centre point, we may use it in both cases.", "Fix $X_j$ and apply Lemma REF to the $(k-1)$ -th nearest neighbour in the remaining sample of size $n-1$ .", "Taking expectation 
w.r.t.", "$X_j$ on both sides leaves the RHS unchanged.", "Then use that $\\frac{k-1}{n-1} < \\frac{k}{n}$ ." ], [ "Results", "We are now in a position to follow through the plan of our analysis.", "The following three subsections correspond directly to the three steps of the algorithm template (see Algorithm ) through a kNN-based classification rule." ], [ "Pointwise function estimation", "As a first step we deal with pointwise estimation of the corrupted regression function, which we approach as a kNN regression task [17], [15].", "However, for our purposes we require a bound that holds both for non-random points $x\\in \\mathcal {X}_{\\mu }$ and for feature vectors $X_j$ occurring in the data, for reasons that will become clear in the subsequent Section REF where we will need to estimate the maximum of the function.", "Theorem 4.1 (Pointwise estimation bound) Suppose that $f$ satisfies measure-smoothness with exponent $\\lambda >0$ and constant $\\omega $ .", "Take $n\\in \\mathbb {N}$ , $\\delta \\in \\left(0,1\\right)$ , $k \\in \\mathbb {N}\\cap [4 \\log (3/\\delta )+1, n/2]$ and suppose that $\\mathcal {D}_f$ is generated i.i.d.", "from $\\mathbb {P}$ .", "Suppose that $X$ is either a fixed point $x \\in \\mathcal {X}_{\\mu }$ or $X_j$ for some fixed $j \\in [n]$ .", "The following bound holds with probability at least $1-\\delta $ over $\\mathcal {D}_f$ $\\left|\\hat{f}_{n,k}(X)-f(X)\\right| \\le \\sqrt{\\frac{\\log (3/\\delta )}{2k}}+\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.$ We leave $k$ unspecified for now, but we see that a choice of $k\\in {\\Theta }(\\omega ^{-\\frac{2}{2\\lambda +1}}\\cdot n^{\\frac{2\\lambda }{2\\lambda +1}})$ gives the rate $\\mathcal {O}(\\omega ^{\\frac{1}{2\\lambda +1}}\\cdot n^{-\\frac{\\lambda }{2\\lambda +1}})$ .", "The following lemma will be handy.", "Lemma 4.1 Let $f, X, n, k$ be as in Theorem REF , and $q\\in [k]$ .", "Then for any $\\delta >0$ , and $n/2\\ge k\\ge 4\\log (1/\\delta )+1$ , we have w.p. 
"$1-\\delta $ that $\\left|f\\left(X_{\\tau _{n,q}(X)}\\right)-f(X)\\right|&\\le \\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.$ By the measure-smoothness property, $\\left|f\\left(X_{\\tau _{n,q}(X)}\\right)-f(X)\\right| &\\le \\omega \\cdot \\mu \\left(B_{\\rho (X,X_{\\tau _{n,q}(X)})}(X)\\right)^{\\lambda } \\\\& \\le \\omega \\cdot \\mu \\left(B_{\\rho (X,X_{\\tau _{n,k}(X)})}(X)\\right)^{\\lambda } \\\\&\\le \\omega \\cdot \\left(\\frac{(1+\\zeta )k}{n}\\right)^{\\lambda },$ with probability $1-\\exp (-(k-1)(\\zeta -\\log (1+\\zeta )))$ , where $\\zeta \\ge 0$ , and the last inequality follows by Corollary REF .", "For simplicity we choose $\\zeta =1$ ; since $1/(1-\\log (2)) < 4$ , any $k\\ge 4\\log (1/\\delta )+1$ makes the failure probability $\\exp (-(k-1)(1-\\log (2)))$ at most $\\delta $ , so the statement of the Lemma holds with probability at least $1-\\delta $ .", "Finally, note that with $\\zeta =1$ the resulting bound is $2k/n$ , and since the measure of a ball cannot be larger than 1 we require $k\\le n/2$ .", "We shall use the notation $\\tilde{f}_{n,k}(x)=\\mathbb {E}_{\\mathbf {Z}|\\mathbf {X}}\\left[\\hat{f}_{n,k}(x)\\right] =\\frac{1}{k} \\cdot \\sum _{q \\in [k]}f\\left(X_{\\tau _{n,q}(x)}\\right)$ .", "By the triangle inequality we have $\\left|\\hat{f}_{n,k}(X)-f(X)\\right| \\le \\left|\\hat{f}_{n,k}(X)-\\tilde{f}_{n,k}(X)\\right|+\\left|\\tilde{f}_{n,k}(X)-f(X)\\right|.$ To bound the first term, note that $X$ is a deterministic function of $\\mathbf {X}=\\left\\lbrace X_i \\right\\rbrace _{i \\in [n]}$ .", "By construction we have $\\hat{f}_{n,k}(X) = \\frac{1}{k} \\cdot \\sum _{q \\in [k]}Z_{\\tau _{n,q}(X)}$ .", "Moreover, for each $q \\in [k]$ , $Z_{\\tau _{n,q}(X)}$ is a random variable in $[0,1]$ with conditional expectation (given $\\mathbf {X}$ ) equal to $\\mathbb {E}_{\\mathbf {Z}|\\mathbf {X}}\\left[Z_{\\tau _{n,q}(X)}\\right] = f\\left(X_{\\tau _{n,q}(X)}\\right)$ , and these are independent.", "Hence, it follows from Chernoff bounds [5] that the first term is bounded with probability at least $1-2\\delta /3$ over $\\mathcal {D}_f$ , as follows: 
$\\left|\\hat{f}_{n,k}(X)-\\tilde{f}_{n,k}(X)\\right|\\le \\sqrt{\\frac{\\log (3/\\delta )}{2k}}.$ To bound the second term in eq.", "(REF ) we use Lemma REF with $1-\\delta /3$ , so for the allowed range of values of $k$ we have $\\left|\\tilde{f}_{n,k}(X)-f(X)\\right|&\\le \\frac{1}{k} \\cdot \\sum _{q \\in [k]}\\left|f\\left(X_{\\tau _{n,q}(X)}\\right)-f(X)\\right| \\\\& \\le \\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.$ Taking the union bound, eqs.", "(REF ) and (REF ) hold simultaneously with probability at least $1-\\delta $ .", "Plugging inequalities (REF ) and (REF ) back into (REF ) completes the proof." ], [ "Maximum estimation with kNN", "In Section we discussed how the noise probabilities $p_{0}$ and $p_{1}$ are determined by the extrema of the corrupted regression function ${\\eta }_{\\text{corr}}$ .", "This motivates the question of determining the maximum of a function $f$ , which is the focus of this section, although we believe the results of this section may also be of independent interest.", "As in Section REF , we shall assume we have access to a sample $\\mathcal {D}_f = \\left\\lbrace \\left(X_i,Z_i\\right)\\right\\rbrace _{i \\in [n]}$ with $(X_i,Z_i) \\sim \\mathbb {P}$ generated i.i.d.", "where $\\mathbb {P}$ is an unknown distribution on $\\mathcal {X}\\times [0,1]$ with $f(x)= \\mathbb {E}\\left[Z|X=x\\right]$ .", "Our aim is to estimate $M(f):=\\sup _{x \\in \\mathcal {X}_{\\mu }}\\left\\lbrace f(x) \\right\\rbrace $ based on $\\mathcal {D}_f$ .", "We give a bound for a simple estimator under the assumption of measure-smoothness of the regression function.", "Before proceeding we should point out that mode-estimation via kNN was previously proposed by [8], [16], [15], but both solve a related but different problem to ours.", "The former papers deal with the unsupervised problem of finding the point where the density is highest.", "The latter work deals with finding the point where a function is maximal.", "The key difference 
is that in those works performance is judged in terms of distance in the input space, whereas we care about distance in function output.", "As a consequence, previous works require strong curvature assumptions: that the Hessian exists and is negative definite for all modal points.", "By contrast, we are able to work on metric spaces where the notion of a Hessian does not even make sense, and we only require the measure-smoothness condition which holds, for instance, whenever the regression function is Hölder-continuous and the marginal density is bounded from below.", "We also do not require a bounded input domain.", "Take the following estimator for $M(f)$ , defined as the empirical maximum of the values of the regression estimator: ${\\widehat{M}_{n,k}\\left(f\\right)} = \\max _{i \\in [n]}\\left\\lbrace \\widehat{f}_{n,k}\\left(X_i\\right)\\right\\rbrace .$ Theorem 4.2 (Maximum estimation bound with measure-smoothness) Suppose that $f$ is measure-smooth with exponent $\\lambda >0$ and constant $\\omega >0$ .", "Take $n\\in \\mathbb {N}$ , $\\delta \\in \\left(0,1\\right)$ , $k \\in \\mathbb {N}\\cap [4 \\log (2/\\delta )+1, n/2]$ .", "Suppose further that $\\mathcal {D}_f$ is generated i.i.d.", "from $\\mathbb {P}$ .", "Then the following holds with probability at least $1-\\delta $ over $\\mathcal {D}_f$ , $\\left| {\\widehat{M}_{n,k}\\left(f\\right)}-M(f)\\right| \\le \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.$ By Theorem REF combined with the union bound, the following holds simultaneously for all $i \\in [n]$ , with probability at least $1-\\delta /2$ over $\\mathcal {D}_f$ , $\\left| \\widehat{f}_{n,k}(X_i)-f(X_i) \\right| &\\le \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}+\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.", "$ Given (REF ) we can upper bound ${\\widehat{M}_{n,k}\\left(f\\right)}$ by ${\\widehat{M}_{n,k}\\left(f\\right)} &\\le \\max _{i \\in [n]}\\left\\lbrace f(X_i) 
\\right\\rbrace + \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}+\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\\\& \\le M(f)+ \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.$ We now lower bound ${\\widehat{M}_{n,k}\\left(f\\right)}$ as follows.", "Take $\\epsilon >0$ and choose $x_0 \\in \\mathcal {X}_{\\mu }$ with $f(x_0)\\ge M(f) -\\epsilon $ (a point that nearly achieves the supremum of $f$ ).", "By Lemma REF with probability at least $1-\\delta /2$ over $\\mathcal {D}_f$ we have $|f\\left(X_{\\tau _{n,1}(x_0)}\\right)- f(x_0)|\\le \\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.", "$ In conjunction, the bounds (REF ) and (REF ) imply ${\\widehat{M}_{n,k}\\left(f\\right)} & \\ge \\hat{f}_{n,k}\\left(X_{\\tau _{n,1}(x_0)}\\right)\\\\& \\ge f\\left(X_{\\tau _{n,1}(x_0)}\\right)- \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}-\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\\\& \\ge f(x_0)- \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}-2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\\\& \\ge M(f)- \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}-2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }-\\epsilon .$ Combining this with the upper bound we see that given (REF ) and (REF ) we have $\\left| {\\widehat{M}_{n,k}\\left(f\\right)}-M(f)\\right| &\\le \\sqrt{\\frac{\\log (6n/\\delta )}{2k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda } +\\epsilon .$ By the union bound applied to (REF ) and (REF ), (REF ) holds with probability at least $1-\\delta $ .", "Letting $\\epsilon \\rightarrow 0$ and applying continuity of measure completes the proof of the theorem."
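The estimator ${\\widehat{M}_{n,k}\\left(f\\right)}$ is simple to realise in code: run kNN regression at each sample point and take the empirical maximum. The following NumPy sketch illustrates this; the Euclidean metric, the toy distribution and the values of $n$ and $k$ are illustrative assumptions, not part of the paper.

```python
import numpy as np

def knn_regress(X, Z, x, k):
    """kNN regression estimate f_hat_{n,k}(x): average the Z-values
    of the k sample points closest to x."""
    d = np.linalg.norm(X - x, axis=1)        # distances rho(x, X_i)
    return Z[np.argsort(d)[:k]].mean()

def knn_max(X, Z, k):
    """Empirical maximum M_hat_{n,k}(f) = max_i f_hat_{n,k}(X_i)."""
    return max(knn_regress(X, Z, x, k) for x in X)

# Toy check (illustrative): f(x) = 1 - |x - 0.5| on [0,1], so M(f) = 1.
rng = np.random.default_rng(0)
n, k = 2000, 40
X = rng.uniform(size=(n, 1))
f_vals = 1.0 - np.abs(X[:, 0] - 0.5)         # measure-smooth, sup f = 1
Z = rng.binomial(1, f_vals).astype(float)    # noisy [0,1]-valued responses
M_hat = knn_max(X, Z, k)                     # should concentrate near M(f) = 1
```

Note that, in line with Theorem REF above, the estimate trades off the averaging noise (decreasing in $k$) against the smoothness bias (increasing in $k$).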
], [ "Main result: Fast rates in the presence of unknown asymmetric label noise", "We put everything together in this section, and complete the analysis of the label noise robust kNN classifier given in Algorithm REF .", "This classifier is simply the kNN instantiation of Algorithm .", "Algorithm REF was previously proposed by [12] with an empirical demonstration of its success in practice, but without an analysis of its finite sample behaviour when the noise probabilities are unknown.", "We shall now prove that it attains the known minimax optimal rates of the noiseless setting, up to logarithmic factors, despite the presence of unknown asymmetric label noise, provided the assumptions discussed in Section REF hold.", "Algorithm: A $k$ nearest neighbour method for label noise. Define ${\\hat{\\eta }}_{\\text{corr}}$ by ${\\hat{\\eta }}_{\\text{corr}}(x) :=\\frac{1}{k} \\cdot \\sum _{q \\in [k]}\\tilde{Y}_{\\tau _{n,q}(x)}$ ; Compute $\\hat{p}_0:= \\min _{i \\in [n]}\\left\\lbrace {\\hat{\\eta }}_{\\text{corr}}(X_i)\\right\\rbrace $ and $\\hat{p}_1: = 1-\\max _{i \\in [n]}\\left\\lbrace {\\hat{\\eta }}_{\\text{corr}}(X_i)\\right\\rbrace $ ; Let $\\hat{\\phi }_{n,k}(x):= {1}\\left\\lbrace {\\hat{\\eta }}_{\\text{corr}}(x) \\ge 1/2 \\cdot \\left(1+\\hat{p}_0-\\hat{p}_1\\right) \\right\\rbrace $ .", "Theorem 4.3 Suppose that Assumptions REF and REF hold, Assumption REF holds with exponent $\\lambda >0$ and constant $\\omega >0$ , and Assumption REF holds with constant $\\alpha \\ge 0$ and $C_{\\alpha }\\ge 1$ .", "Take any $n \\in \\mathbb {N}$ , $\\delta \\in \\left(0,1\\right)$ , and suppose we have ${\\mathcal {D}}_{\\text{corr}}= \\lbrace (X_i,\\tilde{Y}_i)\\rbrace _{i \\in [n]}$ with $(X_i,\\tilde{Y}_i) \\sim {\\mathbb {P}}_{\\text{corr}}$ .", "Let $\\hat{\\phi }_{n,k}$ be the label-noise robust $k$ NN classifier with an arbitrary choice of $k \\in \\mathbb {N}\\cap [4 \\log (3/\\delta )+1, n/2]$ (Algorithm REF ).", "With probability at least $1-\\delta $ over 
${\\mathcal {D}}_{\\text{corr}}$ , we have $\\mathcal {R}\\left(\\hat{\\phi }_{n,k}\\right) &\\le \\mathcal {R}\\left(\\phi _*\\right)+C_{\\alpha }\\cdot \\left(\\frac{8}{ 1-p_{0}-p_{1}} \\right)^{\\alpha +1}\\nonumber \\\\&\\cdot \\left[\\sqrt{\\frac{\\log (18n/\\delta )}{k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\right]^{\\alpha +1}+\\delta ,$ where $\\phi _*(x)\\equiv {1}\\left\\lbrace \\eta (x)\\ge 1/2\\right\\rbrace $ is the Bayes classifier.", "In particular, if we take $k_n=\\left\\lceil \\left(\\frac{\\log (18n/\\delta )}{2\\omega ^2} \\right)^{\\frac{1}{2\\lambda +1}} \\cdot n^{\\frac{2\\lambda }{2\\lambda +1}} \\right\\rceil $ then for $n\\ge 5 \\cdot (10 \\cdot \\omega ^2)^{\\frac{1}{2\\lambda }} \\cdot \\log (18n/\\delta )$ , w.p.", "at least $1-\\delta $ , $\\mathcal {R}\\left(\\hat{\\phi }_{n,k^*_n}\\right) \\le \\mathcal {R}\\left(\\phi _*\\right) +C_{\\alpha }&\\cdot \\left(\\frac{2^{2\\lambda +5}\\cdot \\omega ^{\\frac{1}{2\\lambda +1}} }{ 1-p_{0}-p_{1}} \\right)^{\\alpha +1}\\\\&\\cdot \\left( \\frac{\\log (18n/\\delta )}{n}\\right)^{\\frac{\\lambda (\\alpha +1)}{2\\lambda +1}} +\\delta .$ Observe first that the measure-smoothness property of $\\eta $ implies that ${\\eta }_{\\text{corr}}$ is measure-smooth with the same exponent, and constant $(1-p_{0}-p_{1})\\cdot \\omega \\le \\omega $ , so we can work with the latter to avoid clutter.", "We define the subset of the input domain where the corrupted regression function has low estimation error: $\\mathcal {G}_{\\delta }:= \\left\\lbrace x \\in \\mathcal {X}_{\\mu }: \\left|{\\hat{\\eta }}_{\\text{corr}}(x)-{\\eta }_{\\text{corr}}(x)\\right|\\le \\xi (n,k,\\delta )\\right\\rbrace .$ where $\\xi (n,k,\\delta )$ is a small error that will be made precise shortly.", "We want to ensure that a randomly drawn test point is in this set with probability at least $1-\\delta /3$ .", "By Theorem REF , for each $x \\in \\mathcal {X}_{\\mu }$ , the following holds with probability at least 
$1-\\delta ^2/3$ , $\\left|{\\hat{\\eta }}_{\\text{corr}}(x)-{\\eta }_{\\text{corr}}(x)\\right|&\\le \\sqrt{\\frac{\\log (3/(\\delta ^2/3))}{2k}}+\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\\\&= \\sqrt{\\frac{2\\log (3/\\delta )}{2k}}+\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }\\\\&=:\\xi _1(n,k,\\delta ).", "$ That is, for each fixed $x \\in \\mathcal {X}_{\\mu }$ , we have $x \\in \\mathcal {G}_{\\delta }$ with probability at least $1-\\delta ^2/3$ i.e.", "$\\mathbb {E}_{\\mathcal {D}_f}\\left[{1}\\left\\lbrace x \\notin \\mathcal {G}_{\\delta }\\right\\rbrace \\right] \\le \\delta ^2/3$ .", "We now integrate over $\\mu $ and use Fubini's theorem as follows: $\\mathbb {E}_{\\mathcal {D}_f}\\left[\\mu \\left(\\mathcal {X}_{\\mu }\\backslash \\mathcal {G}_{\\delta }\\right)\\right] &= \\mathbb {E}_{\\mathcal {D}_f}\\left[\\int {1}\\left\\lbrace x \\notin \\mathcal {G}_{\\delta }\\right\\rbrace d\\mu (x) \\right] \\nonumber \\\\&=\\int \\mathbb {E}_{\\mathcal {D}_f}\\left[{1}\\left\\lbrace x \\notin \\mathcal {G}_{\\delta }\\right\\rbrace \\right]d\\mu (x)\\le \\delta ^2/3.$ Thus, by Markov's inequality we have $\\mu \\left(\\mathcal {X}_{\\mu }\\backslash \\mathcal {G}_{\\delta }\\right)\\le \\delta $ with probability at least $1-\\delta /3$ .", "Furthermore, by Theorem REF with probability at least $1-\\delta /3$ , $\\left|\\hat{p}_0-p_{0}\\right| \\le \\sqrt{\\frac{\\log (6n/(\\delta /3))}{2k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }&=: \\xi _2(n,k,\\delta ).", "$ Similarly, with probability at least $1-\\delta /3$ we have $\\left|\\hat{p}_1-p_{1}\\right| \\le \\xi _2(n,k,\\delta )$ , and we let $\\xi (n,k,\\delta )&:=\\max \\lbrace \\xi _1(n,k,\\delta ),\\xi _2(n,k,\\delta )\\rbrace \\nonumber \\\\&\\le \\sqrt{\\frac{\\log (18n/\\delta )}{k}}+2\\omega \\cdot \\left(\\frac{2k}{n}\\right)^{\\lambda }.", "$ By the union bound, with probability at least $1-\\delta $ , we have $\\mu \\left(\\mathcal {X}_{\\mu }\\backslash 
\\mathcal {G}_{\\delta }\\right)\\le \\delta $ and $\\max \\left\\lbrace \\left|\\hat{p}_0-p_{0}\\right| ,\\left|\\hat{p}_1-p_{1}\\right|\\right\\rbrace \\le \\xi (n,k,\\delta )$ .", "Hence, it suffices to assume that $\\mu \\left(\\mathcal {X}_{\\mu }\\backslash \\mathcal {G}_{\\delta }\\right)\\le \\delta $ and $\\max \\left\\lbrace \\left|\\hat{p}_0-p_{0}\\right| ,\\left|\\hat{p}_1-p_{1}\\right|\\right\\rbrace \\le \\xi (n,k,\\delta )$ hold, and show that eq. (REF ) holds.", "Observe that we can rewrite $\\hat{\\phi }_{n}:\\mathcal {X}\\rightarrow \\mathcal {Y}$ as $\\hat{\\phi }_{n}(x)={1}\\left\\lbrace \\hat{\\eta }(x)\\ge 1/2\\right\\rbrace $ , where $\\hat{\\eta }(x):= \\left({\\hat{\\eta }}_{\\text{corr}}(x)-\\hat{p}_0\\right)/\\left(1-\\hat{p}_0-\\hat{p}_1\\right)$ .", "By Lemma REF for all $x \\in \\mathcal {G}_{\\delta }$ we have deterministically that: $&\\left| \\hat{\\eta }(x)-\\eta (x)\\right| \\\\&\\le 8 \\cdot \\frac{\\max \\left\\lbrace \\left| \\hat{\\eta }{}_{\\text{corr}}(x) - {\\eta }_{\\text{corr}}(x)\\right|, \\left| \\hat{p}_0-p_{0}\\right|, \\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace }{\\left(1-p_{0}-p_{1}\\right)}\\\\&\\le 8 \\cdot \\left(1-p_{0}-p_{1}\\right)^{-1} \\cdot \\xi (n,k,\\delta ).$ Observe that for any $x \\in \\mathcal {G}_{\\delta }$ with $\\hat{\\phi }_{n}(x)\\ne \\phi _*(x)\\equiv {1}\\left\\lbrace \\eta (x)\\ge 1/2\\right\\rbrace $ we must have $\\left|\\eta (x)-1/2\\right|\\le 8 \\cdot \\left(1-p_{0}-p_{1}\\right)^{-1} \\cdot \\xi (n,k,\\delta )$ .", "Hence, by Assumption REF , with probability at least $1-\\delta $ , we have $&\\mathcal {R}\\left(\\hat{\\phi }_{n,k}\\right) -\\mathcal {R}\\left(\\phi _*\\right)\\\\ & = \\int _{\\mathcal {X}} \\left|\\eta (x)-\\frac{1}{2}\\right|\\cdot {1}\\left\\lbrace \\hat{\\phi }_{n,\\delta }(x)\\ne \\phi _*(x)\\right\\rbrace d\\mu (x)\\\\&\\le \\int _{\\mathcal {G}_{\\delta }} \\left|\\eta (x)-\\frac{1}{2}\\right|\\cdot {1}\\left\\lbrace \\hat{\\phi }_{n,\\delta }(x)\\ne \\phi 
_*(x)\\right\\rbrace d\\mu (x)+\\mu \\left(\\mathcal {X}\\backslash \\mathcal {G}_{\\delta }\\right)\\\\&\\le \\int _{\\mathcal {X}} \\left|\\eta (x)-\\frac{1}{2}\\right|\\cdot {1}\\left\\lbrace \\left|\\eta (x)-\\frac{1}{2}\\right| \\le \\frac{ 8 \\cdot \\xi (n,k,\\delta )}{ 1-p_{0}-p_{1}} \\right\\rbrace d\\mu (x)+\\delta \\\\&\\le C_{\\alpha }\\cdot \\left(\\frac{8 \\cdot \\xi (n,k,\\delta )}{ 1-p_{0}-p_{1}} \\right)^{\\alpha +1} +\\delta .$ Plugging in eq.", "(REF ) completes the proof of the first part.", "The second part follows by choosing $k$ that approximately equates the two terms on the right hand side of eq.", "(REF ).", "That is, with the choice $k_n=\\left\\lceil \\left(\\frac{\\log (18n/\\delta )}{2\\omega ^2} \\right)^{\\frac{1}{2\\lambda +1}} \\cdot n^{\\frac{2\\lambda }{2\\lambda +1}} \\right\\rceil ,$ given $n\\ge 5 \\cdot (10 \\cdot \\omega ^2)^{\\frac{1}{2\\lambda }} \\cdot \\log (18n/\\delta )$ we have $k_n \\ge 4\\log (3/\\delta )+1$ and $\\xi (n,k_n,\\delta )$ takes the form $\\xi (n,k_n,\\delta )= 4^{\\lambda +1}\\cdot \\omega ^{\\frac{1}{2\\lambda +1}} \\cdot \\left( \\frac{\\log (18n/\\delta )}{n}\\right)^{\\frac{\\lambda }{2\\lambda +1}}.$ Plugging this into the excess risk completes the proof of the second part of the theorem." ], [ "On setting the value of $k$", "We used the theoretically optimal value of $k$ in our analysis, which is not available in practice.", "Methods exist to set $k$ in a data-driven manner.", "Cross-validation is amongst the most popular practical approaches [14], [12], and there is also an ample literature on adaptive methods (e.g.", "[13]) that allow us to retain nearly optimal rates without access to the unknown parameters of the analysis." 
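The three steps of Algorithm REF are straightforward to implement: a kNN estimate of ${\\eta }_{\\text{corr}}$ , plug-in noise estimates from the empirical extrema, and a shifted decision threshold. The NumPy sketch below illustrates this; the Euclidean metric, the toy distribution and the hard-coded $n$ and $k$ are illustrative assumptions (in practice $k$ would be set by cross-validation, as discussed above).

```python
import numpy as np

def eta_corr_hat(X, Y_tilde, x, k):
    """kNN estimate of the corrupted regression function at x."""
    d = np.linalg.norm(X - x, axis=1)         # distances rho(x, X_i)
    return Y_tilde[np.argsort(d)[:k]].mean()  # average noisy labels of k nearest

def robust_knn(X, Y_tilde, k):
    """Sketch of the label-noise robust kNN classifier:
    (1) kNN estimate of eta_corr; (2) plug-in noise estimates from the
    empirical extrema; (3) classify with threshold (1 + p0_hat - p1_hat)/2."""
    vals = np.array([eta_corr_hat(X, Y_tilde, x, k) for x in X])
    p0_hat = vals.min()         # inf eta_corr = p0 under the range assumption
    p1_hat = 1.0 - vals.max()   # sup eta_corr = 1 - p1
    thresh = 0.5 * (1.0 + p0_hat - p1_hat)
    return lambda x: int(eta_corr_hat(X, Y_tilde, x, k) >= thresh)

# Toy run (illustrative): eta(x) = x on [0,1] with noise p0 = 0.1, p1 = 0.3,
# so eta_corr(x) = 0.1 + 0.6*x and the shifted threshold is near 0.4.
rng = np.random.default_rng(1)
n, k, p0, p1 = 4000, 100, 0.1, 0.3
X = rng.uniform(size=(n, 1))
eta_corr = p0 + (1.0 - p0 - p1) * X[:, 0]
Y_tilde = rng.binomial(1, eta_corr).astype(float)
phi = robust_knn(X, Y_tilde, k)
```

Far from the Bayes boundary at $x=1/2$ the classifier should recover the clean labels, e.g. `phi(np.array([0.95]))` should return 1 and `phi(np.array([0.05]))` should return 0, even though a classical kNN threshold of $1/2$ on ${\\hat{\\eta }}_{\\text{corr}}$ would misplace the boundary near $x=2/3$ .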
], [ "Discussion: Inconsistency of kNN in the presence of asymmetric label noise", "Our main result implies that under the measure-smoothness condition, and provided the label-noise identifiability conditions hold, the statistical difficulty of classification with or without asymmetric label noise is the same in the minimax sense, up to constants and log factors.", "However, the algorithm that we used to achieve this rate was not the classical kNN.", "This invites the question of whether this must be so, or whether it is merely an artefact of the proof.", "We find this question interesting in the light of observations and claims in the literature about the label-noise robustness of kNN and other high capacity models [27], [12] (see also the introductory section of [20]).", "To shed light on this, we show in this section that the classical kNN cannot achieve these rates, and even fails to be consistent in the presence of asymmetric label noise.", "In fact, in Theorem REF below we shall see that any algorithm that is Bayes-consistent in the classical sense may become inconsistent under this type of noise.", "Indeed, on closer inspection, the robustness claims in the literature about kNN and other high capacity models [27], [12], [20] – explicitly or tacitly – refer either to class-unconditional (i.e.", "symmetric) label noise, or assume that the regression function is bounded away from $1/2$ .", "A recent result of [6] even gives convergence rates for $k$ -NN under instance-dependent label noise, but requires that the label noise probabilities become symmetric as the regression function approaches $1/2$ .", "In order to talk about consistency in the presence of label noise we need to make explicit the distinct roles of the train and test distributions in our notation.", "For any distribution $\\mathbb {P}$ on $\\mathcal {X}\\times \\lbrace 0,1\\rbrace $ and classifier $\\phi : \\mathcal {X}\\rightarrow \\lbrace 0,1\\rbrace $ the risk is defined as $\\mathcal {R}\\left(\\phi 
\\right)=\\mathcal {R}\\left(\\phi ;\\mathbb {P}\\right):= \\mathbb {E}\\left[{1}\\left\\lbrace \\phi (X)\\ne Y\\right\\rbrace \\right]$ , where $\\mathbb {E}$ denotes the expectation with respect to $\\mathbb {P}$ .", "In addition we let $\\mathcal {E}\\left(\\phi ;\\mathbb {P}\\right)$ denote the excess risk defined by $\\mathcal {E}\\left(\\phi ,\\mathbb {P}\\right):=\\mathcal {R}(\\phi ;\\mathbb {P}) -\\inf _{\\tilde{\\phi }} \\lbrace \\mathcal {R}(\\tilde{\\phi };\\mathbb {P})\\rbrace $ , where the infimum is over all measurable functions $\\tilde{\\phi }:\\mathcal {X}\\rightarrow \\lbrace 0,1\\rbrace $ .", "Definition 5.1 (Consistency) Let $\\mathbb {P}_{\\text{train}}$ , $\\mathbb {P}_{\\text{test}}$ be probability distributions on $\\mathcal {X}\\times \\lbrace 0,1\\rbrace $ .", "Take $\\tilde{\\mathcal {D}} = \\lbrace (X_i,\\tilde{Y}_i)\\rbrace _{i\\in \\mathbb {N}}$ with $(X_i,\\tilde{Y}_i)$ sampled independently from $\\mathbb {P}_{\\text{train}}$ .", "For each $n \\in \\mathbb {N}$ we let $\\hat{\\phi }_n$ denote the classifier obtained by applying the learning algorithm $\\hat{\\phi }$ to the first $n$ elements of the data set $\\tilde{\\mathcal {D}}$ .", "We shall say that a classification algorithm $\\hat{\\phi }$ is consistent with training distribution $\\mathbb {P}_{\\text{train}}$ and test distribution $\\mathbb {P}_{\\text{test}}$ if we have $\\lim _{n \\rightarrow \\infty } \\mathcal {E}(\\hat{\\phi }_n;\\mathbb {P}_{\\text{test}}) = 0$ almost surely over the data $\\tilde{\\mathcal {D}}$ .", "Let $\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}$ denote the probability distribution on $\\mathcal {X}\\times \\lbrace 0,1\\rbrace $ with marginal $\\mu $ and a regression function $\\eta $ .", "Learning with label noise means that the training distribution $\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ and test distribution $\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}$ are different.", "We define the input set of disagreement: $\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}}):= \\left\\lbrace x \\in 
\\mathcal {X}: \\left(\\eta (x)-\\frac{1}{2}\\right)\\left({\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right)<0\\right\\rbrace .$ Note that $\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}})$ consists of the points at which $\\eta $ and ${\\eta }_{\\text{corr}}$ fall on opposite sides of $1/2$ , i.e. points sufficiently close to the decision boundary that the asymmetric label noise flips the optimal label.", "Theorem 5.1 Suppose that a classification algorithm $\\hat{\\phi }$ is consistent with $\\mathbb {P}_{train}=\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ , and suppose that $\\eta \\ne {\\eta }_{\\text{corr}}$ .", "If $\\mu \\left(\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}}) \\right)>0$ then $\\hat{\\phi }$ is inconsistent with $\\mathbb {P}_{train}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ and $\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}$ .", "The proof is given in the Appendix.", "The essence of the argument is the simple observation that if the regression functions of the training and testing distributions disagree, then the trained classifier cannot agree with both.", "Below we give a family of examples on $\\mathcal {X}=[0,1]$ with class-conditional label noise, where the standard $k_n$ -NN classifier is inconsistent, yet the $k_n$ -NN method for asymmetric label noise (Algorithm REF ) is consistent.", "Example: Take any $p_{0}, p_{1}\\in \\left(0,1/2\\right)$ with $p_0\\ne p_1$ and let $m:=\\left({2-3p_{0}-p_{1}}\\right)/\\left({4\\left(1-p_{0}-p_{1}\\right)}\\right)$ .", "It follows that $m \\in \\left(0,1\\right)$ .", "Let $\\mathcal {X}=[0,1]$ and let $\\mu $ be the Lebesgue measure on $\\mathcal {X}$ .", "Define $\\eta :\\mathcal {X}\\rightarrow [0,1]$ by $\\eta (x):= {\\left\\lbrace \\begin{array}{ll} \\frac{3x}{2} &\\text{ if }x \\in \\left[0,\\frac{2m}{3}\\right]\\\\m &\\text{ if }x \\in \\left[\\frac{2m}{3},\\frac{2m+1}{3}\\right]\\\\\\frac{3x-1}{2} &\\text{ if }x \\in \\left[\\frac{2m+1}{3},1\\right].\\end{array}\\right.", "}$ A special case of this example, with $p_0=0.1$ and $p_1=0.3$ , is depicted in 
Figure REF .", "Figure: A pair of train and test regression functions $(\\eta ,{\\eta }_{\\text{corr}})$ exemplifying a setting where classical kNN is inconsistent yet the kNN method for asymmetric label noise is consistent.", "Indeed, for this family of examples, it follows from Theorem 1 in [7] that the standard $k_n$ -NN classifier is strongly Bayes-consistent with $\\mathbb {P}_{train}=\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ whenever $k_n/n \\rightarrow 0$ and $k_n/(\\log (n)) \\rightarrow \\infty $ as $n \\rightarrow \\infty $ .", "Moreover, it follows from the definition of $m$ , $p_0\\ne p_1$ , and eq.", "(REF ) that for $x \\in \\left[\\frac{2m}{3},\\frac{2m+1}{3}\\right]$ we have $\\left(\\eta (x)-\\frac{1}{2}\\right)\\left({\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right)<0.$ Hence, $\\mu \\left(\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}})\\right) \\ge 1/3>0$ .", "Thus, by Theorem REF , the $k_n$ -NN classifier is inconsistent with $\\mathbb {P}_{train}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ and $\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,{\\eta }\\right)}$ .", "On the other hand, one can readily check that Assumptions REF and REF hold.", "Moreover, Assumption REF holds with exponent $\\lambda =1$ and constant $\\omega = 3$ , and Assumption REF holds with exponent $\\alpha =0$ and constant $C_{\\alpha }=1$ .", "Thus, by Theorem REF combined with the Borel-Cantelli lemma, the $k_n$ -NN method for asymmetric label noise (Algorithm REF ) is consistent with $\\mathbb {P}_{train}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ and $\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,{\\eta }\\right)}$ whenever $k_n/n \\rightarrow 0$ and $k_n/(\\log (n)) \\rightarrow \\infty $ as $n \\rightarrow \\infty $ ."
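The geometry of this example is easy to verify numerically. The plain-Python sketch below uses the special case $p_0=0.1$ , $p_1=0.3$ from the figure together with the identity ${\\eta }_{\\text{corr}}=p_{0}+(1-p_{0}-p_{1})\\cdot \\eta $ , and checks that on the flat middle piece the clean and corrupted regression functions lie on opposite sides of $1/2$ .

```python
# Special case of the example: p0 = 0.1, p1 = 0.3 (the values in the figure).
p0, p1 = 0.1, 0.3
m = (2 - 3*p0 - p1) / (4 * (1 - p0 - p1))   # = 1.4/2.4 = 7/12

def eta(x):
    """The piecewise-linear clean regression function of the example."""
    if x <= 2*m/3:
        return 1.5 * x
    if x <= (2*m + 1)/3:
        return m
    return (3*x - 1) / 2

def eta_corr(x):
    """Corrupted regression function: eta_corr = p0 + (1 - p0 - p1)*eta."""
    return p0 + (1 - p0 - p1) * eta(x)

# The range assumption holds: eta runs from 0 to 1.
assert eta(0.0) == 0.0 and eta(1.0) == 1.0

# On the flat middle piece, eta and eta_corr disagree about the side of 1/2,
# so thresholding eta_corr at 1/2 (classical kNN) predicts the wrong label.
x = (4*m + 1) / 6                           # midpoint of [2m/3, (2m+1)/3]
assert (eta(x) - 0.5) * (eta_corr(x) - 0.5) < 0
```

Here $\\eta (x)=m\\approx 0.583>1/2$ while ${\\eta }_{\\text{corr}}(x)=0.45<1/2$ on the whole middle interval, which is exactly the disagreement witnessed by $\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}})$ .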
], [ "Conclusions", "We obtained fast rates in the presence of unknown asymmetric label noise that match the minimax optimal rates of the noiseless setting, up to logarithmic factors, under measure-smoothness and Tsybakov margin assumptions.", "On the practical side, our results provide theoretical support for the Robust kNN algorithm of [12], whose analysis so far only exists under known noise probabilities.", "On the theoretical side, our results entail that under the stated conditions the statistical difficulty of classification with or without unknown asymmetric label noise is the same in the minimax sense.", "This is especially interesting given recent results which show that under more general non-parametric settings the optimal rates for unknown asymmetric label noise can be strictly slower than those for the noiseless case [24].", "We have also seen that the algorithm achieving the rate in the presence of unknown asymmetric label noise must be different from any classical Bayes-consistent classifier, as those fail to be consistent under the label noise.", "Finally, a key ingredient in our analysis is a simple method for estimating the maximum of a function that requires far fewer assumptions than existing mode estimators do and may have wider applicability." ], [ "Acknowledgement", "This work is funded by EPSRC under Fellowship grant EP/P004245/1, and a Turing Fellowship (grant EP/N510129/1).", "We would also like to thank the anonymous reviewers for their careful feedback, and Joe Mellor for improving the presentation."
], [ "Proof of Theorem ", "The proof of Theorem REF is as follows.", "Define the input set of disagreement with margin $\\theta $ : $\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}}) &:= \\left(\\left\\lbrace \\eta (x)\\le \\frac{1}{2}-\\theta \\right\\rbrace \\cap \\left\\lbrace {\\eta }_{\\text{corr}}(x)\\ge \\frac{1}{2}+\\theta \\right\\rbrace \\right)\\\\ &\\cup \\left(\\left\\lbrace \\eta (x)\\ge \\frac{1}{2}+\\theta \\right\\rbrace \\cap \\left\\lbrace {\\eta }_{\\text{corr}}(x)\\le \\frac{1}{2}-\\theta \\right\\rbrace \\right).$ We can write $\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}})$ as a union of such sets: $\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}}) = \\bigcup _{\\theta >0}\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})$ , and hence $\\lim _{\\theta \\rightarrow 0}\\left\\lbrace \\mu \\left( \\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})\\right) \\right\\rbrace = \\mu \\left(\\mathcal {A}_0(\\eta ,{\\eta }_{\\text{corr}}) \\right)>0.$ Now, take some $\\theta >0$ such that $\\mu \\left(\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})\\right)>0$ .", "Lemma REF below will show that $\\mathcal {E}\\left({\\phi };\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}\\right)+\\mathcal {E}\\left({\\phi };\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}\\right) \\ge \\theta \\cdot \\mu \\left(\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})\\right)>0.$ Since $\\hat{\\phi }$ is consistent with $\\mathbb {P}_{train}=\\mathbb {P}_{test}=\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ , we have $\\lim _{n \\rightarrow \\infty }\\mathcal {E}\\left(\\hat{\\phi }_n;\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}\\right) = 0$ .", "Hence, from eq.", "(REF ) it follows that $\\limsup _{n \\rightarrow \\infty }\\mathcal {E}\\left(\\hat{\\phi }_n;\\mathbb {P}_{\\left(\\mu ,{\\eta }\\right)}\\right) \\ge \\theta \\cdot \\mu \\left(\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})\\right)>0$ .", "That is, $\\hat{\\phi }$ is inconsistent when trained with train
distribution $\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ and tested on distribution $\\mathbb {P}_{\\left(\\mu ,{\\eta }\\right)}$ .", "It remains to prove the lemma used in the proof above.", "Lemma A.1 Let $\\mu $ be a Borel probability measure on $\\mathcal {X}$ .", "Given $\\eta :\\mathcal {X}\\rightarrow [0,1]$ and $\\theta >0$ consider the set $\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}}) \\subseteq \\mathcal {X}$ as defined in eq.", "(REF ).", "Then given any classifier $\\phi : \\mathcal {X}\\rightarrow \\lbrace 0,1\\rbrace $ we have $\\mathcal {E}\\left({\\phi };\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}\\right)+\\mathcal {E}\\left({\\phi };\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}\\right) \\ge \\theta \\cdot \\mu \\left(\\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})\\right)$ .", "Recall that, for any regression function $\\tilde{\\eta }:\\mathcal {X}\\rightarrow [0,1]$ the excess risk can be written as: $\\mathcal {E}\\left(\\phi ;\\mathbb {P}_{\\left(\\mu ,\\tilde{\\eta }\\right)}\\right) = $ $\\int \\left| \\tilde{\\eta }(x)-\\frac{1}{2}\\right| \\cdot {1}\\left\\lbrace \\left(\\tilde{\\eta }(x)-\\frac{1}{2}\\right)\\left(\\phi (x)-\\frac{1}{2}\\right)<0\\right\\rbrace d\\mu (x).$ Now if $x \\in \\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})$ then $\\left(\\eta (x)-\\frac{1}{2}\\right)\\left({\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right)<0$ so for both possible values $\\phi (x) \\in \\lbrace 0,1\\rbrace $ we have ${1}\\left\\lbrace \\left({\\eta }(x)-\\frac{1}{2}\\right)\\left(\\phi (x)-\\frac{1}{2}\\right)<0\\right\\rbrace + {1}\\left\\lbrace \\left({\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right)\\left(\\phi (x)-\\frac{1}{2}\\right)<0\\right\\rbrace = 1.$ Moreover, if $x \\in \\mathcal {A}_{\\theta }(\\eta ,{\\eta }_{\\text{corr}})$ then $\\min \\left\\lbrace \\left|{\\eta }(x)-\\frac{1}{2}\\right|,\\left|{\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right| \\right\\rbrace \\ge \\theta $ 
and so $\\left|{\\eta }(x)-\\frac{1}{2}\\right|\\cdot {1}\\left\\lbrace \\left({\\eta }(x)-\\frac{1}{2}\\right)\\left(\\phi (x)-\\frac{1}{2}\\right)<0\\right\\rbrace + \\left|{\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right|\\cdot {1}\\left\\lbrace \\left({\\eta }_{\\text{corr}}(x)-\\frac{1}{2}\\right)\\left(\\phi (x)-\\frac{1}{2}\\right)<0\\right\\rbrace \\ge \\theta .$ Integrating with respect to $\\mu $ and applying (REF ) to both $\\mathbb {P}_{\\left(\\mu ,\\eta \\right)}$ and $\\mathbb {P}_{\\left(\\mu ,{\\eta }_{\\text{corr}}\\right)}$ gives the conclusion of the lemma." ], [ "Proof of Lemma ", "Given $\\hat{a},a \\in [-1,1]$ , $b,\\hat{b} >0$ with $|\\hat{b}-b| \\le b/2$ , and $a/b \\in [0,1]$ , $\\left|\\frac{\\hat{a}}{\\hat{b}}-\\frac{a}{b}\\right|= \\frac{1}{\\hat{b}}\\cdot \\left| (\\hat{a}-a)+\\frac{a}{b} \\cdot (b-\\hat{b})\\right| \\le \\frac{2}{b}\\left( |\\hat{a}-a|+ \\left| \\frac{a}{b}\\right| \\cdot |\\hat{b}-b| \\right) \\le \\frac{4}{b}\\cdot \\max \\left\\lbrace |\\hat{a}-a|, |\\hat{b}-b| \\right\\rbrace ,$ where we have used the fact that $\\hat{b} \\ge b/2$ .", "By the definition of $\\hat{\\eta }(x)$ together with eq.", "(REF ) we have $\\hat{\\eta }(x):= \\frac{{\\hat{\\eta }}_{\\text{corr}}(x)-\\hat{p}_0}{1-\\hat{p}_0-\\hat{p}_1}\\hspace{28.45274pt}\\text{and}\\hspace{28.45274pt}\\eta (x) = \\frac{{\\eta }_{\\text{corr}}(x)-p_{0}}{1-p_{0}-p_{1}}.$ Now take $\\hat{a} = \\hat{\\eta }{}_{\\text{corr}}(x)-\\hat{p}_0$ , $a ={\\eta }_{\\text{corr}}(x)-p_{0}$ , $\\hat{b} = 1-\\hat{p}_0-\\hat{p}_1$ and $b = 1-p_{0}-p_{1}$ .", "By assumption $p_{0}+p_{1}<1$ , so $b>0$ ; moreover, since $\\max \\left\\lbrace \\left| \\hat{p}_0-p_{0}\\right|, \\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace \\le \\left(1-p_{0}-p_{1}\\right)/4$ , it follows that $|\\hat{b}-b| \\le 2\\cdot \\max \\left\\lbrace \\left| \\hat{p}_0-p_{0}\\right|, \\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace \\le \\frac{1}{2}\\cdot \\left( 1-p_{0}-p_{1}\\right) = \\frac{b}{2},$ which also
implies $\\hat{b}\\ge b/2 >0$ .", "Hence, by (REF ) we deduce $\\left| \\hat{\\eta }(x)-\\eta (x)\\right| \\le \\frac{4}{1-p_{0}-p_{1}} \\cdot \\max \\left\\lbrace \\left| \\left( \\hat{\\eta }{}_{\\text{corr}}(x)-\\hat{p}_0\\right)- \\left({\\eta }_{\\text{corr}}(x)-p_{0}\\right) \\right| ,\\left| \\left(1-\\hat{p}_0-\\hat{p}_1\\right)- \\left(1-p_{0}-p_{1}\\right) \\right| \\right\\rbrace \\\\\\le \\frac{8}{1-p_{0}-p_{1}} \\cdot \\max \\left\\lbrace \\left| \\hat{\\eta }{}_{\\text{corr}}(x)-{\\eta }_{\\text{corr}}(x) \\right| ,\\left| \\hat{p}_0- p_{0}\\right|,\\left| \\hat{p}_1-p_{1}\\right| \\right\\rbrace .$ This completes the proof of Lemma REF ." ] ]
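The elementary quotient bound at the start of the proof of the lemma, $\left|\hat{a}/\hat{b}-a/b\right| \le (4/b)\max \lbrace |\hat{a}-a|,|\hat{b}-b|\rbrace $ under the stated hypotheses, can be spot-checked numerically; a minimal Monte-Carlo sketch:

```python
import random

def quotient_bound_holds(trials=50_000, seed=0):
    """Monte-Carlo check of |a^/b^ - a/b| <= (4/b) max(|a^-a|, |b^-b|)
    under: a^, a in [-1, 1], b, b^ > 0, |b^-b| <= b/2, a/b in [0, 1]."""
    rng = random.Random(seed)
    for _ in range(trials):
        b = rng.uniform(0.05, 2.0)
        a = rng.uniform(0.0, 1.0) * b            # ensures a/b in [0, 1]
        if a > 1.0:                              # keep a inside [-1, 1]
            continue
        a_hat = rng.uniform(-1.0, 1.0)
        b_hat = b + rng.uniform(-b / 2, b / 2)   # ensures |b^-b| <= b/2
        lhs = abs(a_hat / b_hat - a / b)
        rhs = (4.0 / b) * max(abs(a_hat - a), abs(b_hat - b))
        if lhs > rhs + 1e-12:
            return False
    return True
```

No violation is found, as expected from the two-step estimate in the proof.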
1906.04542
[ [ "Vector and axial-vector meson properties in a nonlocal SU(2) PNJL model" ], [ "Abstract We study the features of a SU(2) Polyakov-Nambu-Jona-Lasinio model that includes wave function renormalization and nonlocal vector interactions.", "Within this framework we analyze, among other properties, the masses, width and decay constants of light vector and axial-vector mesons at finite temperature.", "Then we obtain the corresponding phase diagram in a finite density scenario, after characterizing the deconfinement and chiral restoration transitions." ], [ "Introduction", "The phase diagram of strongly interacting matter at finite temperature $T$ and chemical potential $\\mu $ has been extensively studied along the past decades.", "Quantum Chromodynamics (QCD) predicts that at very high temperatures ($T \\gg \\Lambda _{\\rm QCD}$ ) and low densities this matter appears in the form of a plasma of quarks and gluons [1].", "At such extreme conditions, QCD is weakly coupled and first-principle perturbative calculations based on an expansion in the coupling constant can be used to explore the phase diagram.", "However, in the low-energy regime the analysis of hadron phenomenology starting from first principles is still a challenge for theoretical physics.", "Although substantial progress has been achieved in this sense through lattice QCD (lQCD) calculations, this approach shows significant difficulties, e.g.", "when dealing with small current quark masses and/or finite chemical potential.", "Thus, most of the present knowledge about the behavior of strongly interacting matter arises from the study of effective models, which offer the possibility to get predictions of the transition features at regions that are not accessible through lattice techniques.", "Here we will concentrate on one particular class of effective theories, viz.", "the nonlocal Polyakov$-$ Nambu$-$ Jona-Lasinio (nlPNJL) models (see Refs.", "[2], [3] and references therein), in which quarks interact 
through covariant nonlocal chirally symmetric four-point couplings in a background color field, and gluon self-interactions are effectively introduced by a Polyakov loop effective potential.", "These approaches, which can be considered as an improvement over the (local) PNJL model, offer a common framework to study both the chiral restoration and deconfinement transitions.", "In fact, the nonlocal character of the interactions leads to a momentum dependence in the quark propagator that can be made consistent [4] with lattice results.", "This scheme has been used to describe the chiral restoration transition for hadronic systems at finite temperature and/or chemical potential (see e.g.", "Refs.", "[6], [9], [10], [8], [7], [5], [3]).", "In this work, following Ref.", "[2], we concentrate on the incorporation of vector and axial-vector interactions extended to finite $T$ and $\\mu $ .", "Therefore, besides the scalar and pseudoscalar quark-antiquark currents, we include couplings between vector and axial-vector nonlocal currents satisfying proper QCD symmetry requirements.", "Within this theoretical framework, we study the thermal behavior of several properties of the vector meson $\\rho $ and the axial-vector meson $\\rm a_1$ .", "We start by analyzing the temperature dependence of their masses, decay constants and decay widths; we then characterize the chiral and deconfinement transitions through the corresponding order parameters and finally obtain the QCD phase diagram at finite chemical potential.", "This article is organized as follows.", "In Sect.", "we present the general formalism to describe a system at finite temperature and density.", "The numerical and phenomenological analyses for several meson properties and the corresponding QCD phase diagram are included in Sect.", "and .", "Finally, in Sect.", "we summarize our results and present the conclusions."
], [ "Thermodynamics", "We consider a two-flavor chiral quark model that includes nonlocal vector and axial-vector quark-antiquark currents.", "The corresponding Euclidean effective action and nonlocal fermion currents can be found in Ref. [2].", "We perform a bosonization of the fermionic theory [11] in a standard way by considering the partition function $\\mathcal {Z} = \\int \\mathcal {D}\\, \\bar{\\psi }\\mathcal {D}\\psi \\,\\exp [-S_E]$ and introducing auxiliary fields.", "After integrating out the fermion fields the partition function can be written in the mean field approximation (MFA), in which the bosonic fields are expanded around their vacuum expectation values, $\\phi (x) = \\bar{\\phi }+ \\delta \\phi (x)$ .", "We extend now the analysis of this model to a system at finite temperature and chemical potential.", "Following the same prescriptions described in Refs.", "[6], [12] the thermodynamic potential in the MFA is given by [14] $\\Omega ^{\\rm MFA} \\ = \\ \\Omega ^{\\rm reg} + \\Omega ^{\\rm free} +\\mathcal {U}(\\Phi ,T) + \\Omega _0 \\ ,$ where $\\Omega ^{\\rm reg} &= \\,- \\,4 T \\sum _{c=r,g,b} \\ \\sum _{n=-\\infty }^{\\infty }\\int \\frac{d^3\\vec{p}}{(2\\pi )^3} \\ \\ln \\left[ \\frac{ (\\rho _{n,\\vec{p}}^c)^2 + m^2(\\rho _{n,\\vec{p}}^c)}{z^2(\\rho _{n, \\vec{p}}^c)\\ [(\\rho _{n,\\vec{p}}^c)^2 + m^2]}\\right]+\\frac{\\bar{\\sigma }_1^2 + \\kappa _p^2\\; \\bar{\\sigma }_2^2}{2\\,G_S} - \\frac{\\bar{\\omega }^2}{2\\,G_0} \\ , \\nonumber \\\\\\Omega ^{\\rm free} \\ &= \\ -4 T \\sum _{c=r,g,b} \\ \\sum _{s=\\pm 1} \\int \\frac{d^3 \\vec{p}}{(2\\pi )^3}\\; \\mbox{Re}\\;\\ln \\left[ 1 + \\exp \\left(-\\;\\frac{\\epsilon _p + s(\\mu + i \\phi _c)}{T}\\right)\\right]\\ .$ The constants $\\bar{\\sigma }_{1,2}$ and $\\bar{\\omega }$ are the mean field values of the scalar and the isospin zero vector fields.", "At nonzero quark densities, the flavor singlet term of the vector interaction develops a nonzero expectation value, while all other 
components of the vector and axial vector interactions have vanishing mean fields [13].", "The mean field values can be calculated by minimizing $\\Omega ^{\\rm MFA}$ , while $m(p)$ and $z(p)$ are the momentum-dependent effective mass and the wave function renormalization (WFR).", "These functions are related to the nonlocal form factors and the vacuum expectation values of the scalar fields by $m(p) &=& z(p)\\, \\left[ m\\, +\\, \\bar{\\sigma }_1\\, g(p)\\right] \\ ,\\nonumber \\\\z(p) &=& \\left[ 1\\,-\\,\\bar{\\sigma }_2 \\,f(p)\\right]^{-1}\\ ,$ where $g(p)$ and $f(p)$ are the Fourier transforms of the form factors included in the nonlocal quark antiquark currents (see Ref. [2]).", "We have also defined $\\Big ({\\rho _{n,\\vec{p}}^c} \\Big )^2 =\\Big [ (2 n +1 )\\pi T + \\phi _c - \\imath \\tilde{\\mu } \\Big ]^2 + {\\vec{p}}\\ \\!", "^2 \\ ,$ in which the sums over color indices run over $c=r,g,b$ with the color background fields components being $\\phi _r = - \\phi _g = \\phi $ , $\\phi _b = 0$ , and $\\epsilon _p = \\sqrt{\\vec{p}^{\\;2}+m^2}\\;$ .", "The vector coupling generates a shifting in the chemical potential as [14] $\\tilde{\\mu } = \\mu - g({\\rho _0}_{n,\\vec{p}}^c)\\ z({\\rho _0}_{n,\\vec{p}}^c)\\ \\bar{\\omega } \\ ,$ where ${\\rho _0}_{n,\\vec{p}}^c = \\rho _{n,\\vec{p}}^c\\ \\vert _{\\bar{\\omega }=0} \\ .$ The term $\\Omega ^{\\rm reg}$ is obtained after following the same regularization prescription as in previous works [6].", "Namely, we subtract the thermodynamic potential of a free fermion gas, and then we add it in a regularized form.", "Finally, the last term in Eq.", "(REF ) is a constant fixed by the condition that $\\Omega ^{\\rm MFA}$ vanishes at $T=\\mu =0$ .", "The chiral quark condensates are given by the vacuum expectation values $\\langle \\bar{q}q \\rangle $ .", "The corresponding expressions can be obtained by differentiating $\\Omega ^{\\rm MFA}$ with respect to the current quark masses.", "In this effective model we 
assume that fermions move on a static and constant background gauge field $\\phi $ , the Polyakov field, that couples with fermions through the covariant derivative in the fermion kinetic term.", "The traced Polyakov loop (PL), $\\Phi =\\frac{1}{3} {\\rm Tr}\\, \\exp ( i \\phi /T)$ , can be considered as the order parameter for confinement in the infinite quark mass limit  [15], [16].", "The effective gauge field self-interactions are given by the Polyakov loop potential $\\mathcal {U}(\\Phi ,T)$ , whose functional form is usually based on properties of pure gauge QCD.", "Among the most used effective potentials, the Ansatz that provides a good agreement with lQCD results [5], [3] is a polynomial function based on a Ginzburg-Landau Ansatz [17], [18]: $\\frac{{\\cal {U}}_{\\rm poly}(\\Phi ,T)}{T ^4} \\ = \\ -\\,\\frac{b_2(T)}{2}\\, \\Phi ^2-\\,\\frac{b_3}{3}\\, \\Phi ^3 +\\,\\frac{b_4}{4}\\, \\Phi ^4 \\ ,$ where $b_2(T) = a_0 +a_1 \\left(\\dfrac{T_0}{T}\\right) + a_2\\left(\\dfrac{T_0}{T}\\right)^2+ a_3\\left(\\dfrac{T_0}{T}\\right)^3\\ .$ The numerical values for the parameters can be found in Ref. [17].", "From lattice calculations one would expect to find a deconfinement temperature of $T_0 = 270$  MeV given the absence of dynamical quarks.", "However, it has been argued that in the presence of light dynamical quarks this temperature scale, which is a further parameter of the model, should be adequately reduced to about 210 and 190 MeV for the case of two and three flavors respectively, with an uncertainty of approximately 30 MeV [19]." 
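The polynomial potential ${\cal U}_{\rm poly}(\Phi ,T)$ and the coefficient $b_2(T)$ above are simple to implement. In the sketch below the coefficients $a_0,\dots ,a_3$, $b_3$, $b_4$ are the commonly quoted values of this polynomial Ansatz, taken as an assumption here since the text only points to Ref. [17] for them; $T_0=210$ MeV is the two-flavor value adopted in the text.

```python
# Polynomial Polyakov-loop potential U_poly(Phi, T), with the widely used
# coefficient set for the polynomial Ansatz (assumed values, see lead-in).
A0, A1, A2, A3 = 6.75, -1.95, 2.625, -7.44
B3, B4 = 0.75, 7.5

def b2(T, T0=0.210):
    """Temperature-dependent coefficient b2(T); T and T0 in GeV."""
    r = T0 / T
    return A0 + A1*r + A2*r**2 + A3*r**3

def U_poly(phi, T, T0=0.210):
    """U_poly(Phi, T) in GeV^4 (the dimensionless expression times T^4)."""
    return (-0.5*b2(T, T0)*phi**2 - (B3/3.0)*phi**3 + (B4/4.0)*phi**4) * T**4

def phi_min(T, T0=0.210, n=2000):
    """Pure-gauge mean-field value of Phi: minimize U_poly over [0, 1]."""
    grid = [i / n for i in range(n + 1)]
    return min(grid, key=lambda phi: U_poly(phi, T, T0))
```

In this pure-gauge sketch the minimum sits at $\Phi \approx 0$ well below $T_0$ (confined phase) and moves toward $\Phi \rightarrow 1$ at high temperature; in the full model, of course, $\Phi $ is obtained by minimizing $\Omega ^{\rm MFA}$, which also contains the quark contributions.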
], [ "Meson Masses", "In general, meson masses can be obtained from the terms in the Euclidean action that are quadratic in the bosonic fields.", "The resulting scalar and pseudoscalar meson sector of the bosonized Euclidean action can be written as [4] $S^{\\rm S,PS}_E = \\dfrac{1}{2} \\int \\frac{d^4 p}{(2\\pi )^4}\\ G_K(p^2)\\, \\delta K(p)\\, \\delta K(-p)$ with $K= \\sigma , \\sigma ^{\\prime }$ and $\\vec{\\pi }$ .", "In the vector meson sector we find instead [2] $ S_E^{\\rm V,A} &=& \\dfrac{1}{2} \\int \\frac{d^4 p}{(2\\pi )^4}\\ \\Big \\lbrace G_V^{\\mu \\nu }(p^2)\\,\\delta V_\\mu (p)\\cdot \\delta V_\\nu (-p) \\nonumber \\\\&& \\hspace{-34.14322pt} + \\,i\\,G_{\\pi a}(p^{2})\\Big [p^\\mu \\,\\delta \\vec{a}_{\\mu }(-p) \\cdot \\delta \\vec{\\pi }(p)-p^\\mu \\,\\delta \\vec{a}_{\\mu }(p)\\cdot \\delta \\vec{\\pi }(-p)\\Big ]\\Big \\rbrace , \\nonumber \\\\$ with $V_\\mu = v_\\mu ^0, a_\\mu ^0, \\vec{v}_\\mu $ , and $\\vec{a}_\\mu $ .", "For this case we obtain the tensors $G_{v^0}^{\\mu \\nu }$ , $G_{a^0}^{\\mu \\nu }$ , $G_v^{\\mu \\nu }$ and $G_a^{\\mu \\nu }$ from the expansion of the fermionic determinant.", "One has $G_{v,a}^{\\mu \\nu }(p^2) &=& G_{\\rho ,{\\rm a}_1}(p^2)\\left(\\delta ^{\\mu \\nu }-\\dfrac{p^{\\mu }p^{\\nu }}{p^2}\\right)+ L_\\pm (p^2)\\dfrac{p^{\\mu }p^{\\nu }}{p^{2}} ,\\nonumber \\\\$ where the functions $G_{\\rho ,{\\rm a}_1}(p^2)$ and $L_\\pm (p^2)$ correspond to the transverse and longitudinal projections of the vector and axial-vector fields, describing meson states with spin 1 and 0, respectively.", "Regarding the isospin zero channels, it is easy to see that the expressions for $G_{v^0}^{\\mu \\nu }$ and $G_{a^0}^{\\mu \\nu }$ can be obtained from $G_v^{\\mu \\nu }$ and $G_a^{\\mu \\nu }$ simply by replacing the vector coupling constant $G_V\\rightarrow G_0$ and $G_V\\rightarrow G_5$ in each case.", "It can also be seen in Eq.", "(REF ) that due to the addition of the vector meson sector, a mixing between the pion fields and
the longitudinal part of the axial-vector fields arises [21], [22].", "To remove this mixing and obtain the correct pion mass of the physical state $\\tilde{\\vec{\\pi }}$ we introduce a mixing function $\\lambda (p^2)$ , defined in such a way that the cross terms in the quadratic expansion vanish [2].", "Once cross terms have been eliminated, the resulting functions $G_M(p^2)$ stand for the inverses of the effective meson propagators.", "At finite temperature, the meson masses are calculated by solving $G_M (p_M^2) = 0$ with $p_M = (0,im_M)$ (see Appendix  for the analytical expressions).", "The mass values determined by these equations are the spatial “screening-masses” corresponding to the zeroth Matsubara mode, and their inverses describe the persistence lengths of these modes at equilibrium with the heat bath [23].", "Finally, the meson wave function renormalization and the meson-quark effective coupling constant can be obtained from $Z_M^{-1} \\, = \\, g_{Mqq}^{-2} \\,= \\, \\frac{dG_M(p^2)}{dp^2}\\bigg \\vert _{p^2=-m_M^2} \\ ,$ and thus the physical states of the meson fields can be calculated." 
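The pole-mass prescription $G_M(p_M^2)=0$ and the extraction of $Z_M$ from the derivative at the pole can be made concrete with a small numerical sketch. The function `G_toy` below is a purely illustrative, hypothetical inverse propagator chosen so that the answer is known analytically; the model's actual $G_M$ are the one-loop functions described in the text.

```python
# Sketch: solve G_M(-m_M^2) = 0 by bisection and get Z_M = (dG_M/dp^2)^-1
# at the pole, for a toy inverse propagator (illustrative only).
def G_toy(p2, m0=0.775, c=0.2):
    return (p2 + m0**2) / (1.0 + c*p2)    # vanishes at p2 = -m0^2

def pole_mass(G, lo=0.1, hi=2.0, iters=200):
    """Bisection on G(-m^2) = 0 for m in [lo, hi] (sign change assumed)."""
    f = lambda mm: G(-mm*mm)
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

def Z_factor(G, mM, h=1e-6):
    """Z_M = (dG/dp^2)^-1 at p^2 = -m_M^2, via central differences."""
    p2 = -mM*mM
    return 2*h / (G(p2 + h) - G(p2 - h))

mM = pole_mass(G_toy)      # recovers m0 = 0.775
ZM = Z_factor(G_toy, mM)   # analytically Z = 1 - c*m0^2 for this toy form
```

The same two steps, root finding on $G_M(-m_M^2)$ and a numerical derivative at the pole, apply to each channel once the corresponding one-loop function is available.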
], [ "Decay constants", "The pion weak decay constant $f_\\pi $ is given by the matrix elements of axial currents between the vacuum and the physical one-pion states at the pion pole, $\\langle 0 \\vert J_{A\\mu }^a (0) \\vert \\tilde{\\pi }^b(p) \\rangle =i \\, \\delta ^{ab} \\, f_\\pi (p^2) \\; p_\\mu \\ .$ On the other hand, the matrix elements of the electromagnetic current $J_{em}$ between the neutral vector meson state and the vacuum determine the vector decay constant $f_v$ $\\langle 0 \\vert J_{em\\ \\mu } (0) \\vert \\rho _\\nu ^0 (p) \\rangle =e \\, f_v(p^2) (\\delta _{\\mu \\nu } p^2 - p_\\mu p_\\nu ) \\ ,$ where $e$ is the electron charge.", "We can easily notice that, as required from the conservation of the electromagnetic current, the matrix element is transverse.", "In order to obtain these matrix elements within our model, we have to gauge the effective action through the introduction of gauge fields, and then calculate the functional derivatives of the bosonized action with respect to the currents and the renormalized meson fields.", "In addition, due to the nonlocality of the interaction, the gauging procedure requires the introduction of gauge fields not only through the usual covariant derivative in the Euclidean action but also through a transport function that comes with the fermion fields in the nonlocal currents (see e.g.", "Refs.", "[11], [24], [25]).", "After a lengthy calculation described in Ref [2], we find that the decay constants at $T=0$ are given by $f_\\pi &=& \\dfrac{m_q \\, Z_\\pi ^{1/2}}{m_\\pi ^2} \\left[ F_0 (-m_\\pi ^2)+\\dfrac{G_{\\pi a}(p^2)}{L_-(p^2)} \\, F_1 (-m_\\pi ^2)\\right] \\ ,\\nonumber \\\\f_v \\ &=& \\ \\dfrac{Z_\\rho ^{1/2}}{3\\, m_\\rho ^2}\\, \\left[ J_V^{\\rm (I)} (-m_\\rho ^2) + J_V^{\\rm (II)} (-m_\\rho ^2)\\right]\\ .$ The detailed analytical expressions for the functions $F_0$ , $F_1$ and $J_{V}^{\\rm (I, II)}$ at finite temperature can be found in Appendix .", "Moreover, another important quantity that 
can be studied within the effective model is the axial-vector decay constant $f_{\\rm a}$ , which is defined from the matrix elements of the electroweak charged currents $J_{ew}$ between the axial-vector meson state and the vacuum at $p^2 = -m_{\\rm a_1}^2$ .", "We have $\\langle 0 \\vert J_{ew\\ \\mu } (0) \\vert a_{1 \\nu }(p)\\rangle = \\Pi _{\\mu \\nu }(p^2) \\ ,$ with $\\Pi _{\\mu \\nu } = T(p^2) \\big ( \\delta _{\\mu \\nu }\\,p^2 \\, - \\, p_\\mu \\,p_\\nu \\big )\\ +\\ L(p^2) \\, p_\\mu \\, p_\\nu $ .", "Given the mixing between the pion field and the axial-vector field, the longitudinal projection of $\\Pi _{\\mu \\nu }$ contributes to the pion weak decay constant.", "The term from which $f_{\\rm a}$ can be calculated is the transverse projection, which corresponds to the physical state $\\tilde{\\vec{a}}_\\mu $ .", "Therefore, we define $f_{\\rm a} = T(p^2)\\vert _{p^2=-m_{{\\rm a}_1}^2}$ and find the following expression $f_{\\rm a} \\ &=& \\ \\dfrac{Z_{{\\rm a}_1}^{1/2}}{3\\, m_{{\\rm a}_1}^2}\\, \\left[ J_A^{\\rm (I)} (-m_{{\\rm a}_1}^2) + J_A^{\\rm (II)} (-m_{{\\rm a}_1}^2)\\right]\\ ,$ where $J_A^{\\rm (I)} (p^2) &=& -\\,4N_c\\,\\int \\dfrac{d^4 q}{(2\\pi )^4}\\, g(q)\\,\\Bigg \\lbrace \\dfrac{3}{2}\\,\\dfrac{[z(q^+)+z(q^-)]}{D(q^+)D(q^-)}\\Big [(q^+\\cdot q^-) - m(q^+)\\,m(q^-) \\Big ] \\nonumber \\\\& & + \\; \\dfrac{1}{2}\\,\\dfrac{z(q^+)}{D(q^+)} \\, +\\, \\dfrac{1}{2}\\, \\dfrac{z(q^-)}{D(q^-)}\\, - \\,\\dfrac{q^2}{(q\\cdot p)}\\left[\\dfrac{z(q^+)}{D(q^+)} - \\dfrac{z(q^-)}{D(q^-)}\\right]\\nonumber \\\\& & + \\,\\dfrac{z(q^+)z(q^-)}{D(q^+)D(q^-)}\\,\\left[(q\\cdot p) - \\dfrac{q^2\\,p^2}{(q\\cdot p)}\\right]\\,\\bigg [\\bar{\\sigma }_1\\, \\big [m(q^-) - m(q^+)\\big ]\\,\\alpha ^+_g(q,p) \\nonumber \\\\& & + \\; \\bar{\\sigma }_2\\,\\big [q^2 + \\dfrac{p^2}{4}+ m(q^+)\\,m(q^-) \\big ]\\,\\alpha ^-_f (q,p)\\, +\\frac{2}{q^2}\\left(\\bar{\\sigma }_1\\ g(q) + m_q \\right)\\big [m(q^-) - m(q^+)\\big ]\\ \\bigg ] \\Bigg \\rbrace \\ , \\nonumber 
\\\\J_A^{\\rm (II)} (p^2) &=& -\\,4N_c \\int \\dfrac{d^4 q}{(2\\pi )^4}\\,\\dfrac{z(q)}{D(q)}\\left\\lbrace \\dfrac{q^2}{(q\\cdot p)}\\Big [g(q^+)-g(q^-)\\Big ] + \\left[(q\\cdot p) -\\dfrac{q^2\\,p^2}{(q\\cdot p)}\\right]\\alpha ^-_g (q,p)\\right\\rbrace ,$ and $\\alpha _f^\\pm (q,p) = \\int \\limits ^1_0 d\\lambda \\, \\dfrac{\\lambda }{2}\\, \\left[f^{\\prime }\\left(q+\\lambda \\dfrac{p}{2}\\right)\\ \\pm \\ f^{\\prime }\\left(q-\\lambda \\dfrac{p}{2}\\right)\\right].$" ], [ "Decay widths", "Various transition amplitudes can be calculated by expanding the bosonized action to higher orders in meson fluctuations.", "In this work we are particularly focused in the study of the processes $\\rho \\rightarrow \\pi \\pi $  [2] and ${\\rm a}_1\\rightarrow \\rho \\pi $ at finite temperature.", "The decay amplitudes $\\mathcal {A}_\\rho (v^a(p)\\rightarrow \\pi ^b(q_1)\\pi ^c(q_2))$ and $\\mathcal {A}_{{\\rm a}_1}(a^a(p)\\rightarrow v^b(q_1)\\pi ^c(q_2))$ are obtained by calculating the corresponding functional derivatives of the effective action.", "Thus, we define $\\frac{\\delta ^3 S_E^{\\rm bos}}{\\delta \\tilde{v}_\\mu ^a(p) \\delta \\tilde{\\pi }^b(q_1) \\delta \\tilde{\\pi }^c(q_2)} \\bigg \\vert _{\\delta v_\\mu =\\delta \\pi =0}&=& \\nonumber \\\\& & \\hspace{-71.13188pt} \\hat{\\delta }^{(4)} (p + q_1 + q_2) \\ \\epsilon _{abc}\\; V^{\\rho \\rightarrow \\pi \\pi }_\\mu \\nonumber \\\\\\frac{\\delta ^3 S_E^{\\rm bos}}{\\delta \\tilde{a}_\\mu ^a(p) \\delta \\tilde{v}_\\nu ^b(q_1)\\delta \\tilde{\\pi }^c(q_2) } \\bigg \\vert _{\\delta a_\\mu =\\delta v_\\nu =\\delta \\pi =0}&=& \\nonumber \\\\& & \\hspace{-71.13188pt} \\hat{\\delta }^{(4)} (p + q_1 + q_2) \\ \\epsilon _{abc}\\; \\Pi ^{\\rm a_1 \\rightarrow \\rho \\pi }_{\\mu \\nu }, \\nonumber \\\\ $ where $\\hat{\\delta }^{(4)}(p + q_1 + q_2) = (2\\pi )^4\\,\\delta ^{(4)}(p + q_1 + q_2)$ , and $V^{\\rho \\rightarrow \\pi \\pi }_\\mu &=& \\tilde{F}_{\\rho \\pi \\pi }(p^2,q_1^2,q_2^2)\\dfrac{(q_{1\\mu } + q_{2\\mu 
})}{2} \\nonumber \\\\& & + \\,\\tilde{G}_{\\rho \\pi \\pi }(p^2,q_1^2,q_2^2)\\dfrac{(q_{1\\mu } - q_{2\\mu })}{2} \\ , \\nonumber \\\\\\Pi ^{\\rm a_1 \\rightarrow \\rho \\pi }_{\\mu \\nu } &=& \\tilde{F}_1(p^2,q_1^2,q_2^2)\\, \\delta _{\\mu \\nu } \\, + \\,\\tilde{F}_2(p^2,q_1^2,q_2^2)\\, q_{1\\mu } q_{1\\nu } \\nonumber \\\\& & + \\, \\tilde{F}_3(p^2,q_1^2,q_2^2)\\, q_{2\\mu } q_{2\\nu }\\, +\\, \\tilde{F}_4(p^2,q_1^2,q_2^2)\\, q_{1\\mu } q_{2\\nu } \\nonumber \\\\& & + \\, \\tilde{F}_5(p^2,q_1^2,q_2^2)\\, q_{2\\mu } q_{1\\nu } .$ The form factors $\\tilde{G}_{\\rho \\pi \\pi }(p^2,q_1^2, q_2^2)$ , $\\tilde{F}_{\\rho \\pi \\pi }(p^2,q_1^2, q_2^2)$ and $\\tilde{F}_i(p^2, q_1^2, q_2^2)$ are one-loop functions that arise from the expansion of the effective action, and the explicit forms of those used for our calculations can be found in Appendix .", "Finally, after some algebra we obtain the following expressions for the amplitudes $\\vert \\mathcal {A}_\\rho \\vert ^2 &=& \\dfrac{m_\\rho ^2}{3} \\bigg (1-\\dfrac{4m_\\pi ^2}{m_\\rho ^2}\\bigg ) g_{\\rho \\pi \\pi }^2 \\ , \\nonumber \\\\\\vert \\mathcal {A}_{{\\rm a}_1} \\vert ^2 &=& 2\\,g_{{\\rm a}\\rho \\pi }^2 + \\dfrac{1}{16\\,m_\\rho ^2m_{{\\rm a}_1}^2} \\Big \\lbrace 2\\,g_{{\\rm a}\\rho \\pi }(m_{{\\rm a}_1}^2-m_\\pi ^2+m_\\rho ^2) \\nonumber \\\\& & \\hspace{-14.22636pt} +f_{{\\rm a}\\rho \\pi }\\big [m_{{\\rm a}_1}^4-2\\,m_{{\\rm a}_1}^2(m_\\rho ^2+m_\\pi ^2)+(m_\\rho ^2-m_\\pi ^2)^2\\big ]\\Big \\rbrace ^2\\ ,\\nonumber \\\\$ with $g_{\\rho \\pi \\pi }\\equiv \\tilde{G}_{\\rho \\pi \\pi }(-m_\\rho ^2,-m_\\pi ^2,-m_\\pi ^2)$ , $g_{{\\rm a}\\rho \\pi }\\equiv \\tilde{F}_1(-m_{{\\rm a}_1}^2,-m_\\rho ^2,-m_\\pi ^2)$ and $f_{{\\rm a}\\rho \\pi }\\equiv \\tilde{F}_3(-m_{{\\rm a}_1}^2,-m_\\rho ^2,-m_\\pi ^2)-\\tilde{F}_4(-m_{{\\rm a}_1}^2,-m_\\rho ^2,-m_\\pi ^2)$ .", "It is quite straightforward to see that only the transverse piece of $ V^{\\rho \\rightarrow \\pi \\pi }_\\mu $ contributes to the $\\rho
\\rightarrow \\pi \\pi $ decay width.", "That is not, however, the case for $\\rm a_1\\rightarrow \\rho \\pi $ .", "In order to study the thermal dependence of these decay widths, it is necessary to modify the two-body phase space to include finite temperature effects.", "Following Refs.", "[27], [28], the decay of a particle at rest of mass $M$ , into particles of masses $m_1$ and $m_2$ in equilibrium with the heat bath, is given by $&&\\Gamma _{M\\rightarrow m_1 m_2}\\ \\big \\vert _{p=0} = \\frac{\\vert \\mathcal {A}_M \\vert ^2}{32 \\pi M}\\ \\times \\nonumber \\\\&& \\sqrt{\\left(1-\\frac{(m_1 + m_2)^2}{M^2}\\right)\\left(1-\\frac{(m_1 - m_2)^2}{M^2}\\right)}\\ \\times \\nonumber \\\\&& \\frac{\\exp {\\left[\\frac{1}{2T}M\\right]}}{\\cosh {\\left[\\frac{1}{2T}M\\right]}-\\cosh {\\left[\\frac{1}{2T}\\frac{(m_1-m_2)(m_1+m_2)}{M}\\right]}} \\ ,$ where $\\mathcal {A}_M$ is evaluated within the effective model.", "Hence, together with the results of Eq.", "(REF ) it leads to $\\Gamma _{\\rho \\rightarrow \\pi \\pi } &=& \\frac{\\vert \\mathcal {A}_{\\rho } \\vert ^2}{32 \\pi \\, m_\\rho }\\bigg (1-\\dfrac{4m_\\pi ^2}{m_\\rho ^2}\\bigg )^{1/2}\\frac{ \\exp {\\left[ \\frac{1}{2T}\\,m_\\rho \\right]} }{\\cosh { \\left[\\frac{1}{2T}\\, m_\\rho \\right]} \\ -\\ 1 }\\ , \\nonumber \\\\\\Gamma _{{\\rm a}_1\\rightarrow \\rho \\pi } &=& \\frac{\\vert \\mathcal {A}_{{\\rm a}_1} \\vert ^2}{32 \\pi \\, m_{{\\rm a}_1}^3 }\\Big [m_{{\\rm a}_1}^2 - (m_\\rho + m_\\pi )^2\\Big ]^{1/2} \\, \\times \\nonumber \\\\&& \\frac{\\Big [m_{{\\rm a}_1}^2 - (m_\\rho - m_\\pi )^2\\Big ]^{1/2} \\exp {\\left[\\frac{1}{2T}m_{{\\rm a}_1}\\right]}}{\\cosh {\\left[\\frac{1}{2T}m_{{\\rm a}_1}\\right]}-\\cosh {\\left[\\frac{1}{2T}\\frac{(m_\\rho -m_\\pi )(m_\\rho +m_\\pi )}{m_{{\\rm a}_1}}\\right]}} \\ .", "\\nonumber \\\\$" ], [ "Vacuum properties", "To fully define the model it is necessary to specify the form factors $f(z)$ and $g(z)$ in the nonlocal fermion currents.", "In this work we will 
consider an exponential momentum dependence, which guarantees a fast ultraviolet convergence of the loop integrals, $&& g(p) = {\\rm exp}(-p^2 / \\Lambda _0^2)\\ , \\quad f(p) = {\\rm exp}(-p^2 / \\Lambda _1^2)\\ .$ Notice that in these form factors two energy scales, $\\Lambda _0$ and $\\Lambda _1$ , are introduced, which act as effective momentum cutoffs and have to be taken as additional parameters of the model.", "Moreover, the model in the MFA includes five other free parameters, namely the current quark mass $m$ and the coupling constants $G_S$ , $G_V$ , $G_0$ and $\\kappa _p$ .", "Given the form factor functions, it is possible to set the model parameters to reproduce the observed meson phenomenology.", "First, we perform a fit to lQCD results quoted in Ref.", "[20] for the functions $m(p)$ and $z(p)$ , from which we obtain the values of $\\Lambda _0$ and $\\Lambda _1$ $&&\\Lambda _0 = 1092 \\pm 22 {\\rm \\ MeV}\\ , \\quad \\Lambda _1 = 1173 \\pm 60 {\\rm \\ MeV}\\ .$ The curves corresponding to the functions $m(p)$ and $z(p)$ , together with the lQCD data, are shown in Fig.", "REF .", "Figure: Fit to lattice data from Ref.", "for the functions m(p) and z(p), Eq.", "().", "The fit has been carried out considering results up to 3 GeV.", "By requiring that the model reproduce the empirical values of three physical quantities, chosen to be the masses of the mesons $\\pi $ and $\\rho $ and the pion weak decay constant $f_{\\pi }$ , together with the value of $z(p=0)$ , one can determine the model parameters quoted in Table REF .", "Table: Model parameter values.", "Regarding the coupling constant $G_0$ , we will follow the prescription used in Ref.", "[14], parameterizing it as $G_0 = \\eta \\ G_V$ .", "Therefore, the strength of the isoscalar vector coupling can be evaluated by considering different values of $\\eta $ , which can be used to tune the model.", "Given that the influence of this vector coupling increases with the chemical potential, at zero density $\\bar{\\omega }$
vanishes for all temperatures, and thus the vector interactions do not contribute to the mean field thermodynamic potential.", "Effective theories that do not include an explicit mechanism of confinement, such as PNJL models, usually present a threshold above which the constituent quarks can be simultaneously on shell.", "This threshold, which depends on the model parametrization, is typically of the order of 1 GeV.", "Since the main goal of this work is to describe the properties of vector mesons, particularly the $\\rm a_1(1260)$ meson, the parameter set quoted in Table REF was chosen in such a way that the threshold, of approximately $1.25$  GeV, is larger than the meson masses obtained within our model.", "Once we have fixed the model parametrization, we can calculate the values of several meson properties.", "Our numerical results are summarized in Table REF , together with the corresponding phenomenological estimates.", "In general, it is seen that the meson masses, mixing angles and decay constants predicted by the model are in reasonable agreement with phenomenological expectations.", "As in previous analyses [4], [7], [5], [3], we obtain relatively low values for $m$ , and a somewhat large value for the light quark condensate.", "On the other hand, we find that the product $-\\langle \\bar{q}q\\rangle m$ , which gives $7.4\\times 10^{-5}$  GeV$^4$ , is in agreement with the scale-independent result obtained from the Gell-Mann-Oakes-Renner relation at the leading order in the chiral expansion, namely $-\\langle \\bar{q}q\\rangle m = f_\\pi ^2 m_\\pi ^2/2 \\simeq 8.3\\times 10^{-5}$  GeV$^4$ .", "Table: Numerical results for various phenomenological quantities." 
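The Gell-Mann-Oakes-Renner comparison quoted above is easy to check numerically. The sketch below uses standard empirical inputs ($f_\pi \approx 92.4$ MeV, $m_\pi \approx 139$ MeV), which are our own assumed values rather than the model's fitted parameters:

```python
# Numerical check of the leading-order GMOR estimate quoted in the text:
#   -<qbar q> m  =  f_pi^2 m_pi^2 / 2  ~  8.3e-5 GeV^4
# Inputs are standard empirical values (an assumption), expressed in GeV.
f_pi = 0.0924   # pion weak decay constant
m_pi = 0.139    # pion mass

gmor = f_pi**2 * m_pi**2 / 2.0          # GeV^4
print(f"f_pi^2 m_pi^2 / 2 = {gmor:.2e} GeV^4")   # ~8.2e-5, matching ~8.3e-5

# The model value quoted in the text, 7.4e-5 GeV^4, deviates by roughly 10%.
print(f"relative deviation: {abs(gmor - 7.4e-5) / gmor:.0%}")
```

This confirms that the ~12% spread between the model result and the GMOR estimate is at the level expected for leading-order chiral expansion.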
], [ "Results at finite temperature", "We begin this section by analyzing the chiral restoration and deconfinement transition at zero chemical potential through the thermal dependence of the corresponding order parameters, namely the chiral quark condensate $\\langle \\overline{q}q \\rangle $ and the trace of the Polyakov loop $\\Phi $ , respectively.", "Both chiral and deconfinement critical temperatures, $T_{ch}$ and $T_\\Phi $ , are obtained from the positions of the peaks in the associated susceptibilities $\\chi _{ch}$ and $\\chi _\\Phi $ , defined as $\\chi _{ch} = \\frac{\\partial \\langle \\overline{q}q \\rangle }{\\partial T}\\quad {\\rm and}\\quad \\chi _\\Phi = \\frac{d \\Phi }{d T} \\ .", "\\nonumber $ In Fig.", "REF we plot in solid line the quark condensate normalized by its value at $T=0$ , and in dashed line the trace of the Polyakov loop, both as functions of the temperature $T$ .", "Using the potential $\\mathcal {U}_{\\rm poly}(\\Phi ,T)$ in Eq.", "(REF ) with $T_0 = 210$  MeV, we find that the chiral and deconfinement critical temperatures are approximately the same (less than $3\\%$ difference).", "This behavior is in agreement with lQCD calculations [29] and other nlPNJL models [8], [5], [3], showing that at $\\mu = 0$ chiral restoration and deconfinement take place simultaneously as crossover phase transitions.", "Figure: Normalized quark condensate and trace of the Polyakov loop in solid and dashed line, respectively, as functions of the temperature. The obtained chiral critical temperature, $T_{ch} = 202$  MeV, is somewhat larger than lQCD estimations, $T_{ch}^{lQCD} \\sim 160$  MeV [29]; however, this value is strongly dependent on the presence of the Polyakov loop.", "For instance, nonlocal NJL models (without Polyakov loop) [31], [30], [32] predict a chiral critical temperature around 120 MeV, which is well below $T_{ch}^{lQCD}$ .", "Therefore, in our model framework we can test the effect of the effective Polyakov potential on the chiral 
critical temperature by modifying the value of $T_0$ , staying within the range quoted in Ref. [19].", "In Fig.", "REF we plot the chiral critical temperature as a function of $T_0$ for two different sets of parameters.", "The solid line stands for the parametrization presented in Table REF , while the dashed one represents the parametrization used in Ref. [33].", "With this analysis we can not only estimate how much $T_{ch}$ changes with $T_0$ , but also determine its dependence on the parametrization.", "Figure: Chiral critical temperatures as functions of T_0 for two different sets of parameters. In solid line the parametrization used in this work, and in dashed line the one used in Ref.", ". We see in the figure that the splitting between the lines remains almost constant with a gap of approximately 15 MeV, and the variation of $T_{ch}$ in the range between $T_0=120-290$  MeV is almost 70 MeV.", "Hence, with the proper choice of model parameters and $T_0$ , we can always obtain a chiral critical temperature in good agreement with lQCD estimations.", "Consequently, for all the following figures we will plot our results against the reduced temperature $T/T_{ch}$ .", "Our numerical results for the evolution of the meson masses as functions of the reduced temperature are shown in Fig.", "REF .", "The upper panel shows the behavior of the scalar and pseudoscalar mesons $\\sigma $ and $\\pi $ , which are chiral partners.", "Similarly, in the lower panel we find the vector and axial vector mesons $\\rho $ and a$_1$ .", "Figure: Pseudoscalar and scalar (vector and axial-vector) meson masses in solid and dashed line, respectively, in the upper (lower) panel.", "We also quote, in dotted line, the effective constituent quark mass. It is seen that pseudoscalar and vector meson masses (solid lines) remain approximately constant up to the critical temperature $T_{ch}$ , while scalar and axial-vector meson masses (dashed lines) start to drop somewhat below $T_{ch}$ .", "Right 
above $T_{ch}$ the masses increase in such a way that the chiral partners become degenerate, as expected from chiral restoration.", "Afterwards, when the temperature is further increased, the masses rise continuously, showing that they are basically dominated by the thermal energy.", "We also plot in both cases the effective constituent quark masses, shown in dotted lines.", "It is interesting to notice that up to a certain temperature $T_m$ (denoted in the figure with a large dot) the mesons have lower masses than their constituent quarks.", "However, when $T>T_m$ , meson masses are no longer discrete solutions of Eqs.", "(REF ) and (REF ), which implies a passage from the discrete spectrum to the continuum, known as the Mott transition [34], [35].", "From the figure one can see that the Mott temperature for the scalar and vector mesons is located at $T / T_{ch} \\sim 1$ .", "Regarding the thermal behavior of the decay constants, defined in Eqs.", "(REF ) and (REF ) and plotted in the upper panel of Fig.", "REF , we see that they remain constant up to near the chiral critical temperature, and beyond this point they tend to zero.", "We can observe that the pion decay constant vanishes faster (at $T/T_{ch} \\sim 1.5$ ) than the vector and axial-vector decay constants (at $T/T_{ch} \\sim 3$ ).", "Also, $f_{\\rm a_1}$ presents a peak immediately above $T_{ch}$ due to the $\\rm a_1$ mass behavior, which, before being dominated by the thermal energy, decreases to become degenerate with its chiral partner.", "The lower panel of Fig.", "REF shows the quark-meson couplings of the vector and axial-vector mesons in solid and dashed line, respectively.", "Both quark-meson couplings present a similar thermal behavior, remaining constant at low temperatures and decreasing before reaching $T_{ch}$ .", "Just above the critical temperature, the couplings become degenerate.", "Figure: Decay constants normalized by their zero-temperature values (effective quark-meson couplings) in the 
upper (lower) panel, for the ρ and a_1 meson in solid and dashed line, respectively. In Fig.", "REF we plot the $\\rho $ and $\\rm a_1$ decay widths, $\\Gamma _\\rho $ and $\\Gamma _{\\rm a_1}$ , defined in Eq.", "(REF ).", "It can be seen that the solid line, which represents $\\Gamma _\\rho $ , and the dashed line, $\\Gamma _{\\rm a_1}$ , show a decreasing behavior as $T$ rises.", "Figure: In solid and dashed lines the ρ and a_1 decay widths, respectively, as functions of the reduced temperature.", "In dotted line, Γ_ρ for constant meson masses. It is found in these cases that the thermal dependence of the meson masses directly affects the decay widths.", "This happens because of the kinematic condition in Eq.", "(REF ), which goes to zero as $T$ increases.", "As a result the decay widths tend to vanish, even when the phase space becomes larger due to the Bose enhancement.", "In contrast, if the masses are considered to be constant for all temperatures, then $\\Gamma _\\rho $ would follow the dotted line curve in Fig.", "REF , and monotonically increase as a function of $T$ .", "This is fully consistent with the results obtained, for instance, within QCD sum rules [36], where the same condition is imposed on the masses.", "Despite having the same general tendency towards zero, $\\Gamma _\\rho $ and $\\Gamma _{\\rm a_1}$ present a transition at different temperatures.", "In the case of the $\\rho $ meson, it is only when $T>T_{ch}$ that $\\Gamma _\\rho $ starts to drop, since $m_\\pi $ grows faster than $m_\\rho $ with temperature (see Fig.", "REF ).", "For the $\\rm a_1$ decay, however, the width begins to diminish before the chiral critical temperature, vanishing close to $T_{ch}$ .", "This is caused by the mass degeneration of the chiral partners: just above $T_{ch}$ the vector and axial-vector mesons have approximately the same mass (see lower panel of Fig.", "REF ), and therefore the kinematic 
condition in Eq.", "(REF ) vanishes.", "Concerning the axial-vector decay width $\\Gamma _{\\rm a_1}$ , it should be mentioned that, while we are only considering the main channel of the partial decay $\\rm a_1 \\rightarrow \\rho \\pi $ , the physical decay process is actually $\\rm a_1 \\rightarrow \\pi \\pi \\pi $ .", "Other partial widths contribute approximately $40 \\%$ of the total width [28].", "Moreover, the decay $\\rm a_1 \\rightarrow \\sigma \\pi $ and the direct decay $\\rm a_1 \\rightarrow \\pi \\pi \\pi $ are also kinematically allowed for a higher temperature range, since $m_\\sigma $ and $m_\\pi $ are smaller than $m_\\rho $ near and above $T_{ch}$ .", "Even though these processes are not calculated here, they would contribute to the total width and could modify the decreasing behavior of $\\Gamma _{\\rm a_1}$ ." ], [ "QCD phase diagram", "Through the study of the order parameters one can find regions in which the chiral symmetry is either broken or approximately restored through first order or crossover phase transitions, and phases in which the system remains either in confined or deconfined states.", "For relatively high temperatures chiral restoration takes place as a crossover, whereas at low temperatures the order parameter has a discontinuity at a given critical chemical potential signaling a first order phase transition.", "This gap in the quark condensate also induces a jump in the trace of the PL.", "The value of $\\Phi $ at both sides of the discontinuity indicates whether the system remains confined or not.", "Values close to zero or one correspond to confinement or deconfinement, respectively.", "When the chiral restoration occurs as a first order phase transition, the PL susceptibility presents a divergent behavior at the chiral critical temperature even when the order parameter $\\Phi $ remains close to zero.", "Therefore, another definition is needed for the deconfinement critical temperatures in this region of the phase diagram.", "As 
in Ref. [8], we define the critical temperature by requiring that $\\Phi $ takes a value in the range between $0.4$ and $0.6$ , which could be taken as large enough to denote deconfinement.", "Given that deconfinement occurs at temperatures where the order parameter becomes close to one, the phase in which quarks remain confined (signaled by $\\Phi \\lesssim 0.4$ ) even though chiral symmetry has been restored is usually referred to as a quarkyonic phase [37], [38], [39].", "As explained in Ref. [14], the isoscalar vector coupling constant $G_0$ is considered to be a free parameter which may be adjusted in the MFA to reproduce the behavior of thermodynamic properties obtained in lattice QCD and other effective theories.", "Therefore, as was stated, the vector coupling strength will be evaluated by defining the ratio $\\eta = G_0/G_V$ , and to study how the vector interactions affect the transitions and the location of the CEP, we build the corresponding phase diagrams for different values of $\\eta $ .", "In Fig.", "REF the first order phase transition for the chiral symmetry restoration can be seen in solid lines, while the dashed lines show the crossover transition.", "In addition, the deconfinement transition range, defined by $0.4<\\Phi <0.6$ , is denoted with the color shaded area.", "Finally, the dot indicates the position of the critical endpoint.", "If we move in the $T-\\mu $ plane along the first order phase transition curve, the critical temperature rises from zero up to a critical endpoint (CEP) temperature $T_{CEP}$ , while the critical chemical potential decreases from $\\mu _{ch}$ to a critical endpoint chemical potential $\\mu _{CEP}$ .", "Beyond this point, the chiral restoration phase transition proceeds as a crossover.", "Figure: QCD phase diagrams for the polynomial PL potential within an nlPNJL model for η=0,0.3,0.5 in the upper, middle and lower panels, respectively. In Table REF we summarize, for the three considered values of 
$\\eta $ , the critical temperatures $T_c$ , critical chemical potentials $\\mu _c$ and the CEP coordinates ($\\mu _{\\rm CEP}$ , $T_{\\rm CEP}$ ).", "Table: CEP coordinates and critical temperatures and densities for the different cases of vector strength. In the upper panel of Fig.", "REF we plot the phase diagram for a mean field theory without vector interactions ($\\eta =0 $ ), while in the two lower panels we show the phase diagrams in the presence of the isoscalar vector coupling for two different values of $\\eta $ , $\\eta =0.3$ and $\\eta =0.5$ .", "We notice that the influence of the vector interaction increases with the chemical potential; in particular, the position of the CEP and the values of $\\mu _{ch}$ notably reflect this influence (for increasing $\\eta $ the CEPs tend to be located at lower $T$ and higher $\\mu $ ).", "In addition, from Fig.", "REF two different behaviors for the hadronic matter depending on the chemical potential are observed.", "When the chiral restoration transition is first order ($\\mu > \\mu _{CEP}$ ), as the temperature increases we find a transition from a hadronic phase with broken chiral symmetry (BP) to a quarkyonic phase (QP) where the chiral symmetry is restored but the quarks are still confined into hadrons.", "If the temperature continues rising, the deconfinement transition takes place, reaching a partonic phase in which quarks are deconfined and the chiral symmetry is restored (RP).", "On the other hand, for chemical potentials lower than $\\mu _{CEP}$ one goes from the BP to the RP through crossover phase transitions when the temperature increases.", "Hence, the chiral restoration and deconfinement take place almost simultaneously." 
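The phase bookkeeping just described (BP, QP, RP) can be summarized in a few lines of code. This is an illustrative sketch using the text's $\Phi \lesssim 0.4$ confinement criterion; the function and variable names are of our own choosing, not part of any model code:

```python
def classify_phase(chiral_restored, polyakov_loop):
    """Label a point of the T-mu plane following the conventions above.

    chiral_restored: whether the chiral condensate has (approximately) melted.
    polyakov_loop:   the traced Polyakov loop Phi; Phi < 0.4 is read as
                     confined, following the text's deconfinement criterion
                     (0.4 < Phi < 0.6 marks the transition band).
    """
    if not chiral_restored:
        return "BP"   # hadronic phase with broken chiral symmetry
    if polyakov_loop < 0.4:
        return "QP"   # quarkyonic: chirally restored but still confined
    return "RP"       # chirally restored and deconfined partonic phase

# Heating at fixed mu > mu_CEP reproduces the sequence BP -> QP -> RP:
trajectory = [(False, 0.05), (True, 0.2), (True, 0.8)]
print([classify_phase(r, phi) for r, phi in trajectory])  # ['BP', 'QP', 'RP']
```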
], [ "Summary and conclusions", "In this work we have analyzed the thermal properties of the lightest vector mesons and characterized the chiral and deconfinement transitions at finite $T$ and $\\mu $ , within the context of an $SU(2)$ nonlocal NJL model with vector interactions and wave function renormalization (WFR).", "Gauge interactions have been effectively introduced through a coupling between quarks and a constant background color gauge field, the Polyakov field, whereas gluon self-interactions have been implemented through the polynomial effective Polyakov loop potential.", "Regarding the vacuum properties, we have been able to find a parameter set that reproduces lattice QCD results for the momentum dependence of the effective quark mass and WFR, and the related threshold, where constituent quarks can be simultaneously on shell, is larger than the obtained meson masses.", "At finite temperature, we studied the masses, widths and decay constants of the lightest vector and axial-vector mesons.", "We found, as expected, that meson masses and decay constants remain approximately constant up to the chiral critical temperature.", "Beyond the chiral critical temperature, the masses increase, becoming degenerate with their chiral partners.", "For the process $\\rho \\rightarrow \\pi \\pi $ , the decay width starts to drop above the chiral critical temperature, since beyond this temperature the $\\pi $ mass grows faster than the $\\rho $ mass.", "For the ${\\rm a}_1$ decay, due to the mass degeneration of the chiral partners, the width begins to diminish before the chiral critical temperature, vanishing close to $T_{ch}$ .", "This indicates that the decay processes for the ${\\rm a}_1$ not considered here could be relevant for temperatures close to the chiral critical temperature.", "At zero $\\mu $ , the model shows a crossover phase transition, corresponding to the restoration of the $SU(2)$ chiral symmetry.", "In addition, one finds a deconfinement phase transition, 
which occurs at almost the same critical temperature.", "On the other hand, at zero temperature chiral restoration takes place via a first order transition.", "At finite density, in the first order region, the critical temperatures for the restoration of the chiral symmetry and deconfinement transition begin to separate.", "The region between them denotes the quarkyonic phase.", "In addition, when the vector interactions are added to the model ($\\eta \\ne 0$ ), we found that the position of the CEP and $\\mu _{ch}$ are influenced by the strength of the coupling constant $G_0$ , observing that when $\\eta $ increases the critical endpoint appears at a lower $T$ and higher $\\mu $ ." ], [ "Acknowledgements", "This work has been partially funded by the National University of La Plata, Project No.", "X824, by CONICET under Grant No.", "PIP 2017/1220170100700 and by ANPCyT, Grant No.", "PICT 2017-0571.", "The authors would like to thank D. Gómez Dumm and N.N.", "Scoccola for useful discussions." 
], [ "Screening masses and the decay constants", "Here we quote the analytical expressions for the effective thermal meson propagators in the imaginary time formalism, $G_{M}(\\nu _m,\\vec{p})$ , where $\\nu _m = 2 m \\pi T$ are the bosonic Matsubara frequencies.", "We also present the functions defined in Eq.", "(REF ) to calculate the decay constants $f_\\pi $ and $f_v$ (see Ref.", "[2] for details of their derivations).", "For the vector and axial vector sector, the functions $G_\\rho (\\nu _m,\\vec{p})$ and $G_{{\\rm a}_1}(\\nu _m,\\vec{p})$ can be written as $G_{\\rho \\atopwithdelims (){\\rm a}_1}(\\nu _m,\\vec{p}) &=& \\dfrac{1}{G_V}- 8\\!\\!\\sum _{c=r,g,b} T \\!\\!", "\\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3}\\, h^{2}(q_{nc})\\,\\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^{+})D(q_{nc}^{-})} \\left[\\dfrac{q_{nc}^{2}}{3}+\\dfrac{2(p_m\\cdot q_{nc})^{2}}{3p^{2}}-\\dfrac{p_m^{2}}{4}\\pm m(q_{nc}^{-})m(q_{nc}^{+})\\right]\\ , \\nonumber \\\\$ with $D(q_{nc}) = q_{nc}^2 + m^2(q_{nc})$ and $q_{nc}^\\pm = q_{nc} \\pm p_m/2$ .", "In order to calculate the physical state $\\delta \\tilde{\\vec{\\pi }}$ we define from Eq.", "(REF ) the mixing function $\\lambda (p^2)$ as $\\lambda (p^2) = \\dfrac{G_{\\pi a}(p^2)}{L_-(p^2)} \\ .$ Thus, to calculate the pion mass we find $G_{\\tilde{\\pi }}(p_m^2)= G_{\\pi }(p_m^2)- \\lambda (p_m^2)\\ G_{\\pi a}(p_m^2)\\,p_m^2\\ ,$ with $G_{\\pi }(p_m^2) & = & \\dfrac{1}{G_S} \\, - \\, 8\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ \\ g(q_{nc})^2\\,\\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^{+})D(q_{nc}^{-})}\\Big [(q_{nc}^{+}\\cdot q_{nc}^-)\\,+\\,m(q_{nc}^{+})\\,m(q_{nc}^{-})\\Big ] \\ , \\nonumber \\\\G_{\\pi a}(p_m^2) & = & \\dfrac{8}{p_m^{2}}\\,\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ \\ g(q_{nc})^2\\,\\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^{+})D(q_{nc}^{-})}\\Big [(q_{nc}^{+}\\cdot 
p_m)\\,m(q_{nc}^{-})-(q_{nc}^{-}\\cdot p_m)\\,m(q_{nc}^{+})\\Big ] \\ , \\nonumber \\\\L_{-}(p_m^{2}) & = &\\dfrac{1}{G_V}\\, - \\, 8\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ \\ g(q_{nc})^2\\,\\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^{+})D(q_{nc}^{-})}\\left[q_{nc}^{2}-\\dfrac{2(p_m\\cdot q_{nc})^{2}}{p_m^{2}}+\\dfrac{p_m^{2}}{4} -m(q_{nc}^{-})m(q_{nc}^{+})\\right] .", "\\nonumber \\\\$ In the case of the $f_\\pi $ and $f_v$ decay constants we need to evaluate the functions $F_0(p_m^2)$ , $F_1(p_m^2)$ and $J_{V}^{\\rm I,II}(p_m^2)$ .", "After a rather lengthy calculation we find for these functions the following expressions $F_0 (p_m^2) &=& 8\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ g(q_{nc})\\, \\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^+)D(q_{nc}^-)}\\ \\left[ (q_{nc}^+\\cdot q_{nc}^-) + m(q_{nc}^+)\\,m(q_{nc}^-)\\right] \\ ,\\nonumber \\\\F_1 (p_m^2) &=& 8\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ g(q_{nc})\\, \\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^+)D(q_{nc}^-)}\\ \\left[ (q_{nc}^+\\cdot p_m)\\,m(q_{nc}^-) - (q_{nc}^-\\cdot p_m)\\,m(q_{nc}^+)\\right]\\ ,$ and $J_V^{\\rm (I)} (p_m^2) &=& -\\,4\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ g(q_{nc})\\,\\Bigg \\lbrace \\dfrac{3}{2}\\,\\dfrac{[z(q_{nc}^+)+z(q_{nc}^-)]}{D(q_{nc}^+)D(q_{nc}^-)}\\Big [(q_{nc}^+\\cdot q_{nc}^-) + m(q_{nc}^+)\\,m(q_{nc}^-) \\Big ] \\nonumber \\\\& & + \\; \\dfrac{1}{2}\\,\\dfrac{z(q_{nc}^+)}{D(q_{nc}^+)} \\, +\\, \\dfrac{1}{2}\\, \\dfrac{z(q_{nc}^-)}{D(q_{nc}^-)}\\, + \\,\\dfrac{q_{nc}^2}{(q_{nc}\\cdot p_m)}\\left[\\dfrac{z(q_{nc}^+)}{D(q_{nc}^+)} - \\dfrac{z(q_{nc}^-)}{D(q_{nc}^-)}\\right]\\nonumber \\\\& & + \\,\\dfrac{z(q_{nc}^+)z(q_{nc}^-)}{D(q_{nc}^+)D(q_{nc}^-)}\\,\\left[(q_{nc}\\cdot p_m) - \\dfrac{q_{nc}^2\\,p_m^2}{(q_{nc}\\cdot p_m)}\\right]\\,\\bigg [-\\,\\bar{\\sigma }_1\\, \\big 
[m(q_{nc}^+) + m(q_{nc}^-)\\big ]\\,\\alpha ^+_g(q_{nc},p_m) \\nonumber \\\\& & + \\; \\bar{\\sigma }_2\\,\\big [q_{nc}^2 + \\dfrac{p_m^2}{4}- m(q_{nc}^+)\\,m(q_{nc}^-) \\big ]\\,\\alpha ^+_f (q_{nc},p_m)\\,\\bigg ] \\Bigg \\rbrace \\ , \\nonumber \\\\J_V^{\\rm (II)} (p_m^2) &=& -\\,4\\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2\\pi )^3} \\ \\dfrac{z(q_{nc})}{D(q_{nc})}\\left\\lbrace \\dfrac{q_{nc}^2}{(q_{nc}\\cdot p_m)}\\Big [g(q_{nc}^+)-g(q_{nc}^-)\\Big ] \\right.", "\\nonumber \\\\& & + \\left.", "\\left[(q_{nc}\\cdot p_m) -\\dfrac{q_{nc}^2\\,p_m^2}{(q_{nc}\\cdot p_m)}\\right]\\alpha ^+_g (q_{nc},p_m)\\right\\rbrace \\ ,$ where $\\alpha ^+_f (q,p) \\ = \\ \\int _{-1}^1 d\\lambda \\,\\dfrac{\\lambda }{2}\\,f^\\prime \\left( q-\\lambda \\dfrac{p}{2}\\right) \\ .$" ], [ "Analytic expressions for the decays widths", "We start this Appendix with the analytical expression for the factors $\\tilde{G}_{\\rho \\pi \\pi }(p_m^2,q_{1,m}^2,q_{2,m}^2)$ , and $\\tilde{F}_i (p_m^2,q_{1,m}^2,q_{2,m}^2)$ $\\tilde{G}_{\\rho \\pi \\pi } (p_m^2,q_{1,m}^2,q_{2,m}^2) & = & Z_\\rho ^{1/2}\\,Z_\\pi \\,\\bigg [ G_{\\rho \\pi \\pi }(p_m^2,q_{1,m}^2,q_{2,m}^2) +\\lambda (q_{2,m}^2) \\ G_{\\rho \\pi a} (p_m^2,q_{1,m}^2,q_{2,m}^2) \\nonumber \\\\&& + \\lambda (q_{2,m}^2)^2 \\ G_{\\rho a a} (p_m^2,q_{1,m}^2,q_{2,m}^2) \\bigg ] \\ , \\nonumber \\\\\\tilde{F}_i (p_m^2,q_{1,m}^2,q_{2,m}^2) & = & Z_\\rho ^{1/2}\\,Z_\\pi ^{1/2}\\,Z_{\\rm a_1}^{1/2} \\ F_i (p_m^2,q_{1,m}^2,q_{2,m}^2) \\ .$ To calculate the $\\rho \\rightarrow \\pi \\pi $ decay amplitude, we have to evaluate the functions $G_{\\rho x y}(p_m^2,q_{1,m}^2,q_{2,m}^2)$ , where subindices $x$ and $y$ stand for either $\\pi $ or $a$ , at $q_1^2 = q_2^2 = (p-q_1)^2 = -m_\\pi ^2$ , $p^2 = -m_\\rho ^2$ .", "It is convenient to introduce the momentum $v = q_1 - p/2$ , which satisfies $p\\cdot v = 0$ , $v^2 = m_\\rho ^2/4 -m_\\pi ^2$ .", "Then $G_{\\rho x y}(p_m^2,q_{1,m}^2,q_{2,m}^2) & = & 16 \\sum _{c=r,g,b} 
T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2 \\pi )^3}\\,g(q_{nc})\\,g\\left(q_{nc} + v_m/2 + p_m/4\\right)\\,g\\left(q_{nc} + v_m/2 -p_m/4\\right) \\nonumber \\\\&& \\times \\dfrac{z(q_{nc}^+)z(q_{nc}^-)z(q_{nc}+v_m)}{D(q_{nc}^+)D(q_{nc}^-)D(q_{nc}+v_m)}\\; f_{x y}(q_{nc},p_m,v_m) \\ ,$ where we have defined $q_{nc}^\\pm = q_{nc} \\pm p_m/2$ .", "We find for $f_{xy}(q,p,v)$ the expressions (to simplify the notation we will omit the subindexes in $q$ , $p$ and $v$ ) $f_{\\pi \\pi } &=& \\bigg [(q^+ \\cdot q^-) + m(q^+)\\,m(q^-)\\bigg ] \\, \\bigg [1 + \\dfrac{(q\\cdot v)}{v^2}\\bigg ]\\nonumber \\\\& & -\\, \\dfrac{(q\\cdot v)}{v^2} \\bigg \\lbrace 2\\, \\Big [\\,q\\cdot (q+v)\\Big ] \\, + \\, m(q+v)\\,\\Big [m(q^+)+\\,m(q^-)\\Big ]\\bigg \\rbrace \\ ,\\nonumber \\\\f_{\\pi a} &=& - 2\\, m(q+v) \\left[ (q^+\\cdot q^-)\\, - \\,2\\, \\dfrac{(q\\cdot v)^2}{v^2}\\, + \\, m(q^+)m(q^-)\\right] \\nonumber \\\\& & + \\, \\bigg [1 + \\dfrac{(q\\cdot v)}{v^2}\\bigg ] \\, \\bigg \\lbrace (q^+\\cdot p)\\,m(q^-)- (q^-\\cdot p)\\,m(q^+)\\,-\\,2(q \\cdot v) \\Big [m(q^+)+m(q^-)\\Big ]\\bigg \\rbrace \\ ,\\nonumber \\\\f_{aa} &=&\\bigg [1 + \\dfrac{(q\\cdot v)}{v^2} \\bigg ] \\bigg [q^{+2}\\,q^{-2}\\, - \\, (q^+\\cdot q^-) \\, (q+v)^2 \\, - \\, \\Big ( v^2 + \\dfrac{p^2}{4} \\Big ) m(q^+)m(q^-) \\bigg ]\\nonumber \\\\& & + \\; m(q+v) \\bigg \\lbrace m(q^+)\\, (q^- \\cdot p)\\, -\\, m(q^-)\\, (q^+ \\cdot p)\\,+\\, \\dfrac{(q\\cdot v)}{v^2}\\bigg (v^2 - \\dfrac{p^2}{4} \\bigg )\\,\\Big [ m(q^+)\\, + \\, m(q^-)\\Big ] \\bigg \\rbrace \\nonumber \\\\& & + \\; 2\\,\\dfrac{(q\\cdot v)}{v^2} \\,(q+v)^2 \\bigg [(q\\cdot v) - \\dfrac{p^2}{4}\\bigg ]\\ .$ In the case of the ${\\rm a}_1\\rightarrow \\rho \\pi $ decay amplitude the functions $\\tilde{F}_i (p^2,q_1^2,q_2^2)$ can be written as $F_i (p_m^2,q_{1,m}^2,q_{2,m}^2) &=&16\\, \\sum _{c=r,g,b} T \\sum _{n=-\\infty }^{\\infty } \\int \\frac{d^3\\vec{q}}{(2 \\pi )^3}\\,g(q_{nc})\\,g (q_{nc} + q_{1,m}/2 )\\,g (q_{nc} - 
q_{2,m}/2) \\nonumber \\\\&& \\times \\dfrac{z(q_{nc}^+)z(q_{nc}^-)z(\\bar{q}_{nc})}{D(q_{nc}^+)D(q_{nc}^-)D(\\bar{q}_{nc})}\\ f^i(p_m^2,q_{1,m}^2,q_{2,m}^2) \\ ,$ where we have defined $q_{nc}^\\pm = q_{nc} \\pm (q_{1,m}+q_{2,m})/2$ and $\\bar{q_{nc}} = q_{nc} + (q_{1,m}-q_{2,m})/2$ .", "We obtain $f^1 &=& m(\\bar{q})\\, q^+ \\cdot q^- \\ - \\ m(q^+)\\, \\bar{q}\\cdot q^- \\ - \\ m(q^-)\\, \\bar{q}\\cdot q^+ \\ - \\ m(\\bar{q})m(q^+)m(q^-) \\ + \\ 2\\, \\beta _a\\, \\Big (m(q^+)-m(\\bar{q})\\Big ) \\nonumber \\\\& & + \\ \\lambda (q_2^2) \\, \\Bigg [m(\\bar{q})m(q^-)\\, q^+ \\cdot p_2 \\ + \\ m(\\bar{q})m(q^+)\\, q^- \\cdot p_2 \\ - \\ m(q^+)m(q^-)\\, \\bar{q} \\cdot p_2 \\ - \\ q^{+2}\\, (q^- \\cdot p_2) \\nonumber \\\\& & + \\ ( 2\\, q \\cdot p_2 \\ + \\ p_1 \\cdot p_2 ) (q^+ \\cdot q^-) \\ + \\ 2\\,\\beta _a\\,\\Big (\\bar{q}^2 \\ - \\ q^{+2}\\Big ) \\Bigg ] \\\\f^3 &=& \\frac{m(\\bar{q})}{2} \\ +\\ \\frac{m(q^+)}{2} \\ - \\ 2\\, \\alpha _b\\, m(q^+) \\ + \\ 2\\, \\beta _c\\, \\Big (m(q^+) \\ - \\ m(\\bar{q})\\Big ) \\nonumber \\\\& & + \\ \\lambda (q_2^2)\\, \\left[m(\\bar{q})m(q^+) \\ - \\ \\dfrac{\\bar{q}^2\\ + \\ q^{+2}}{2} \\ - \\ 2\\, \\alpha _b \\, \\Big (m(\\bar{q})m(q^+) \\ - \\ q^{+2} \\Big )\\ + \\ 2\\, \\beta _c\\, \\Big (\\bar{q}^2 \\ - \\ q^{+2}\\Big )\\right] \\\\f^4 &=& \\frac{m(\\bar{q})}{2} \\ +\\ \\frac{m(q^-)}{2} \\ + \\ \\alpha _a\\, \\Big (m(q^-) \\ - \\ m(q^+)\\Big ) \\ + \\ 2\\, \\beta _d\\, \\Big (m(q^+) \\ -\\ m(\\bar{q})\\Big ) \\nonumber \\\\& & + \\ \\lambda (q_2^2)\\, \\Bigg \\lbrace \\frac{1}{2}\\Big [m(q^-)\\big (m(\\bar{q}) \\ + \\ m(q^+)\\big ) \\ +\\ m(\\bar{q})m(q^+) \\ - \\ \\bar{q}^2 \\Big ]\\ + \\ 2\\, \\beta _d\\, \\Big (\\bar{q}^2 \\ - \\ q^{+2}\\Big ) \\nonumber \\\\& & \\hspace{42.67912pt} + \\ \\alpha _a\\, \\Big [m(q^-)\\big (m(\\bar{q}) \\ + \\ m(q^+)\\big ) \\ - \\ m(\\bar{q})m(q^+) \\ + \\ q^{+2} \\Big ] \\Bigg \\rbrace \\ ,$ where the coefficients $\\alpha _i$ and $\\beta _i$ are $\\alpha _a \\ = \\ 
\\dfrac{(p_1\\cdot q)(p_1\\cdot p_2)\\ - \\ (p_2\\cdot q)p_2^2}{(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2}\\qquad \\qquad \\qquad \\qquad \\alpha _b \\ = \\ \\dfrac{(p_2\\cdot q)(p_2\\cdot p_1)\\ - \\ (p_1\\cdot q)p_1^2}{(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2}$ $\\beta _a & = & \\dfrac{q^2 (p_1\\cdot p_2)^2 \\ + \\ p_1^2 (p_2\\cdot q)^2 \\ + \\ p_2^2 (p_1\\cdot q)^2 \\ - \\ 2 \\,(p_1\\cdot q)(p_2\\cdot q)(p_1\\cdot p_2) \\ - \\ q^2 p_1^2 p_2^2 }{2\\, \\big [(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2\\big ]}\\nonumber \\\\\\beta _b & = & \\dfrac{(p_1\\cdot p_2)^2 \\big [q^2p_2^2 \\ + \\ 2 \\, (p_2\\cdot q)^2\\big ]\\ + \\ p_1^2p_2^2 (p_2\\cdot q)^2 \\ + \\ 3\\,p_2^4 (p_1\\cdot q)^2\\ - \\ 6 \\,p_2^2(p_1\\cdot q)(p_2\\cdot q)(p_1\\cdot p_2) \\ - \\ q^2 p_1^2 p_2^4}{2\\, \\big [(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2\\big ]^2}\\nonumber \\\\\\beta _c & = & \\dfrac{(p_1\\cdot p_2)^2 \\big [q^2p_1^2 \\ + \\ 2 \\,(p_1\\cdot q)^2\\big ]\\ + \\ p_1^2p_2^2 (p_1\\cdot q)^2 \\ + \\ 3\\,p_1^4 (p_2\\cdot q)^2\\ - \\ 6 \\,p_1^2 (p_1\\cdot q)(p_2\\cdot q)(p_1\\cdot p_2) \\ - \\ q^2 p_1^4 p_2^2}{2\\, \\big [(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2\\big ]^2} \\nonumber \\\\\\beta _d & = & \\Big \\lbrace 4 \\, (p_1\\cdot q)(p_2\\cdot q)(p_1\\cdot p_2)^2 \\ + \\ p_1^2p_2^2 \\big [2\\, (p_1\\cdot q)(p_2\\cdot q) \\ + \\ q^2(p_1\\cdot p_2) \\big ] \\nonumber \\\\& & \\hspace{56.9055pt} - \\ 3\\, (p_1\\cdot p_2)\\big [p_1^2 (p_2\\cdot q)^2 \\ + \\ p_2^2 (p_1\\cdot q)^2\\big ] \\ - \\ q^2 (p_1\\cdot p_2)^3 \\Big \\rbrace \\dfrac{1}{2\\, \\big [(p_1\\cdot p_2)^2 \\ - \\ p_1^2p_2^2\\big ]^2} .$" ] ]
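The kinematic shut-off of the decay widths analyzed above comes from the standard two-body decay momentum. The sketch below is ordinary relativistic kinematics (not the model's loop integrals), with illustrative mass values in GeV:

```python
import math

def decay_momentum(m, m1, m2):
    """Daughter momentum |p*| in the rest frame of a parent of mass m
    decaying into masses m1 and m2; returns zero once the channel closes
    (m < m1 + m2), which is the kinematic condition that drives the
    widths to vanish as the chiral partners become degenerate."""
    s = (m * m - (m1 + m2) ** 2) * (m * m - (m1 - m2) ** 2)
    return math.sqrt(s) / (2.0 * m) if s > 0 else 0.0

# Vacuum rho -> pi pi: channel comfortably open (|p*| ~ 0.36 GeV).
print(decay_momentum(0.775, 0.138, 0.138))

# Once thermal effects drive m_pi up faster than m_rho,
# the channel closes and the width is kinematically shut off.
print(decay_momentum(0.9, 0.45, 0.45))   # -> 0.0
```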
1906.04257
[ [ "Bipartite and Series-Parallel Graphs Without Planar Lombardi Drawings" ], [ "Abstract We find a family of planar bipartite graphs all of whose Lombardi drawings (drawings with circular arcs for edges, meeting at equal angles at the vertices) are nonplanar.", "We also find families of embedded series-parallel graphs and apex-trees (graphs formed by adding one vertex to a tree) for which there is no planar Lombardi drawing consistent with the given embedding." ], [ "Introduction", "Lombardi drawing is a style of graph drawing using curved edges.", "In this style, each edge must be drawn as a circular arc, and consecutive edges around each vertex must meet at equal angles.", "Many classes of graphs are known to have such drawings, including regular bipartite graphs and all 2-degenerate graphs (graphs that can be reduced to the empty graph by repeatedly removing vertices of degree at most two) [5].", "This drawing style can significantly reduce the area usage of tree drawings [6], and display many of the symmetries of more general graphs [5].", "When a given graph is planar, we would like to find a planar Lombardi drawing of it.", "When this is possible, the resulting drawings have simple edge shapes, no crossings, and optimal angular resolution, all of which are properties that lead to more readable drawings.", "It is known that all Halin graphs have planar Lombardi drawings [5], that all 3-regular planar graphs [7] and all 4-regular polyhedral graphs [8] have planar Lombardi drawings, and that all outerpaths have planar Lombardi drawings [4].", "For some other classes of planar graphs, even when a Lombardi drawing exists, it might not be planar.", "Classes of planar graphs that are known to not always be drawable planarly in Lombardi style include the nested triangle graphs [5], 4-regular planar graphs [7], planar 3-trees [4], and the graphs of knot and link diagrams [8].", "However, for several other important classes of planar graphs, the existence of a planar 
Lombardi drawing has remained open.", "These include the outerplanar graphs, the series-parallel graphs, and the planar bipartite graphs.", "Outerplanar and series-parallel graphs are 2-degenerate, and always have Lombardi drawings.", "Planar bipartite graphs are 3-degenerate and such graphs usually have Lombardi drawings. The only obstacle to Lombardi drawing for 3-degenerate graphs is the forced placement of two vertices on the same point, but the only examples for which this is known to happen are neither planar nor bipartite [5].", "However, the known Lombardi drawings for these graphs are not necessarily planar.", "In this paper we settle this open problem for two of these classes of graphs, the planar bipartite graphs and the (embedded) series-parallel graphs.", "We construct a family of planar bipartite graphs whose Lombardi drawings are all nonplanar.", "We also construct a family of series-parallel graphs with a given embedding such that no planar Lombardi drawing respects that embedding.", "Our construction for series-parallel graphs can be extended to maximal series-parallel graphs, to bipartite series-parallel graphs and to apex-trees, the graphs formed by adding a single vertex to a tree." 
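The degeneracy property invoked above is easy to check mechanically: a graph is $k$-degenerate exactly when greedily deleting vertices of degree at most $k$ empties it. The sketch below uses a plain adjacency-list representation of our own choosing, not code from the paper:

```python
def is_k_degenerate(adj, k):
    """Return True iff the graph is k-degenerate: repeatedly delete a
    vertex of degree <= k; the graph is k-degenerate iff this empties it
    (the greedy deletion order is then a valid elimination order)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # local mutable copy
    while adj:
        v = next((u for u, nbrs in adj.items() if len(nbrs) <= k), None)
        if v is None:
            return False          # every remaining vertex has degree > k
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return True

# A 5-cycle is 2-degenerate; K4 is 3-degenerate but not 2-degenerate.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(is_k_degenerate(cycle5, 2), is_k_degenerate(k4, 2), is_k_degenerate(k4, 3))
# -> True False True
```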
], [ "The graphs", "We begin by describing the family of planar bipartite graphs $B(k)$ that we will prove (for sufficiently large $k$ ) do not have a planar Lombardi drawing.", "Each vertex in $B(k)$ has degree either 2 or $2k$ .", "To construct $B(k)$ , begin with a complete bipartite graph $K_{2,2k}$ and its unique planar embedding; in fig:nested-K2n, the two-vertex side of the bipartition of this graph is shown by the yellow vertices and the $2k$ -vertex side is shown by the blue vertices.", "Each yellow vertex has exactly $2k$ blue neighbors.", "Next, partition the blue vertices into $k$ pairs of vertices, each sharing a face.", "For each pair of blue vertices in this partition, add another complete bipartite graph $K_{2,2k-2}$ connecting these two blue vertices to $2k-2$ additional vertices (shown as red in the figure).", "After this addition, each blue vertex has exactly $2k$ neighbors, two of them yellow and the rest red.", "Each red vertex has exactly two neighbors.", "There are two yellow vertices, $2k$ blue vertices, and $k(2k-2)$ red vertices, for a total of $2k^2+2$ vertices in the overall graph.", "Clearly, the graphs $B(k)$ are all planar, because they are formed by attaching together planar subgraphs (complete bipartite graphs where one side has two vertices) on pairs of vertices that are cofacial in both subgraphs.", "They are bipartite, with the yellow and red vertices on one side of their bipartition and the blue vertices on the other side.", "Although they are not 3-vertex-connected, all of their planar embeddings are isomorphic.", "Figure: The embedded series-parallel graph $S(3)$ formed by our construction. Analogously, we define a family of embedded series-parallel graphs $S(k)$ .", "Again, each such graph will have two yellow vertices and $2k$ blue vertices, connected in the pattern of a complete bipartite graph $K_{2,2k}$ .", "For each yellow–blue edge $e$ of this graph, we add a path of $2k-1$ red vertices.", "We connect every vertex in 
this path to the blue endpoint of $e$ , and we connect one endpoint of the path to the yellow endpoint of $e$ .", "We fix an embedding of $S(k)$ in which every yellow–blue quadrilateral contains either zero or four red paths (fig:series-parallel).", "The resulting graph has two yellow vertices, $2k$ blue vertices, and $4k(2k-1)$ red vertices, for a total of $8k^2-2k+2$ vertices.", "The yellow and blue vertices have degree $4k$ , while the red vertices have degrees two or three.", "We claim that, for sufficiently large values of $k$ , the graphs $B(k)$ and $S(k)$ do not have planar Lombardi drawings.", "Therefore, not every planar bipartite graph, and not every embedded series-parallel graph, has a planar Lombardi drawing.", "In the remainder of this paper we prove this claim." ], [ "Equiangular arc-quadrilaterals", "The key feature of both of our graph constructions $B(k)$ and $S(k)$ is the existence of many yellow–blue quadrilateral faces in which all vertices have equal and high degree (this degree is $d=2k$ in $B(k)$ and $d=4k$ in $S(k)$ ).", "If such a graph is to have a Lombardi drawing, each of these faces must necessarily be drawn as a quadrilateral with circular-arc sides and with the same interior angle $2\pi /d$ at all four of its vertices.", "Equiangular arc-quadrilaterals have been investigated before from the point of view of conformal mapping [2]; in this section we investigate some of their additional properties.", "Our main tool is the following lemma: Lemma Let $abcd$ be a non-self-crossing quadrilateral in the plane with circular-arc sides and equal interior angles.", "Then the four points $abcd$ lie on a circle and the quadrilateral $abcd$ either lies entirely inside or entirely outside the circle.", "We abbreviate the conclusion of the lemma by saying that $abcd$ is cyclic.", "The properties of being an equiangular non-self-crossing circular-arc quadrilateral and of being cyclic are both invariant under Möbius transformations, which preserve both 
circularity and the crossing angles of curves.", "Therefore, if we can find a Möbius transformation of a given equiangular circular-arc quadrilateral such that the transformed quadrilateral is cyclic, the original quadrilateral will also be cyclic, as the lemma states it to be.", "Start by finding a Möbius transformation that makes two opposite arcs $ab$ and $cd$ come from circles with the same radius as each other.", "Because of the equality of crossing angles, and by symmetry, both of the other two circular arcs $bc$ and $ad$ must come from circles whose centers lie on the perpendicular bisector of $ab$ and $cd$ .", "There remains a one-dimensional family of Möbius transformations that preserve the position of the circles containing the transformed copies of arcs $ab$ and $cd$ but that move the other two circles along the bisector of these two fixed circles.", "We can use this remaining degree of freedom to move the other two circles so that their centers are equidistant from the midpoint of the centers of the two fixed circles.", "Figure: Illustration for lem:cyclic: Four circles with centers on a rhombus, and with opposite pairs of circles having equal radii, define two rectangles of pairwise intersection points. After this transformation, it follows from the equality of crossing angles that the circles containing the transformed copies of arcs $bc$ and $ad$ have the same radii as each other, and the four circles have been transformed into a position centered at the vertices of a rhombus with opposite pairs having the same radius as each other.", "By symmetry, the transformed copies of vertices $abcd$ must lie on one of the two rectangles defined by the crossing points of these four transformed circles (fig:4circle-rect).", "If the interior angle of $abcd$ is less than $\pi $ , then the transformed copy of $abcd$ must lie within the circle that circumscribes the inner rectangle, forming the boundary of a hole in the union of the four transformed disks.", 
"If the interior angle is greater than $\pi $ , it must lie outside the outer circle, forming the outer boundary of the union of the four disks.", "In either case, the transformed copy of $abcd$ is cyclic, so $abcd$ itself must be cyclic.", "A special case of this lemma for right-angled arc-quadrilaterals was used previously by the author to prove that some 4-regular planar graphs have no planar Lombardi drawing [7].", "Another special case, for arc-quadrilaterals in which all interior angles are zero, has been used previously in mesh generation [1].", "Definition We define the tilt of an equiangular circular-arc quadrilateral to be the maximum interior angle of any of the four circular-arc bigons between the quadrilateral and its enclosing circle.", "Each of the four bigons has equal angles at its two vertices.", "At each of the four vertices, the two bigon angles and the interior angle of the quadrilateral add to $\pi $ .", "It follows that opposite bigons have the same angles as each other, and each vertex of the quadrilateral is incident to a bigon whose angle equals the tilt." 
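The rhombus configuration in the proof of the lemma can be checked numerically. The following sketch (ours, not from the paper; the centers and radii are arbitrary choices satisfying the lemma's symmetry) builds four circles with centers on a rhombus, opposite pairs sharing a radius, and verifies that taking one crossing point per adjacent pair of circles yields four concyclic points:

```python
import numpy as np

def circle_intersections(c0, r0, c1, r1):
    """Return the two intersection points of two overlapping circles."""
    c0, c1 = np.asarray(c0, float), np.asarray(c1, float)
    d = np.linalg.norm(c1 - c0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from c0 to the chord midpoint
    h = np.sqrt(r0**2 - a**2)              # half-length of the common chord
    mid = c0 + a * (c1 - c0) / d
    perp = np.array([c0[1] - c1[1], c1[0] - c0[0]]) / d
    return mid + h * perp, mid - h * perp

# Four circles centered on the vertices of a rhombus; opposite pairs
# of circles have equal radii, as in the transformed configuration.
centers = [(3.0, 0.0), (0.0, 2.0), (-3.0, 0.0), (0.0, -2.0)]
radii = [2.5, 2.0, 2.5, 2.0]

# One crossing point per adjacent pair of circles: take the inner one.
inner = []
for i in range(4):
    j = (i + 1) % 4
    p, q = circle_intersections(centers[i], radii[i], centers[j], radii[j])
    inner.append(min((p, q), key=np.linalg.norm))

# The four chosen points are concyclic: equidistant from the rhombus center.
dists = [np.linalg.norm(p) for p in inner]
assert np.allclose(dists, dists[0])
```

Taking the outer crossing point of each pair instead gives the second rectangle of the figure, again concyclic by the same symmetry.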
], [ "Bipolar coordinates", "To describe a second parameter of equiangular circular-arc quadrilaterals, it is convenient to introduce the bipolar coordinate system, defined from a pair of points $s$ and $t$ , the foci of the coordinate system.", "These coordinates are conventionally denoted $\sigma $ and $\tau $ .", "The $\sigma $ -coordinate $\sigma _p$ of a point $p$ is the (oriented) angle $spt$ , whose level sets are the blue circular arcs through the two foci in fig:apollo.", "The $\tau $ -coordinate $\tau _p$ of $p$ is the logarithm of the ratio of the two distances from $p$ to the two foci, whose level sets are the red circles separating the two foci in fig:apollo.", "This coordinate system has the convenient property that any (orientation-preserving) Möbius transformation that preserves the location of the two foci acts by translation on the coordinates. This property can be seen as a reflection of the fact that the bipolar coordinate system comes from a conformal mapping of a rectangular grid; see, e.g., [3].", "Lemma Any Möbius transformation that preserves the location of the two foci acts on the bipolar coordinates of any point by adding a fixed value to its $\sigma $ -coordinate (modulo $2\pi $ ) and adding another fixed value to its $\tau $ -coordinate, with the added values depending on the transformation but not on the point.", "All Möbius transformations preserve circles, incidences between points and curves, and angles between pairs of incident curves.", "Therefore, any focus-preserving Möbius transformation takes circles through the two foci (the level sets for $\sigma $ -coordinates) to other circles through the two foci, and it takes the perpendicular family of circles (the level sets for $\tau $ -coordinates) to other circles in the same family.", "Therefore it acts separately on the $\sigma $ - and $\tau $ -coordinates.", "The additivity of its action on the $\sigma $ -coordinates follows from the preservation of angles between 
pairs of circles through the two foci.", "To show that the transformation acts additively on the $\tau $ -coordinate (the logarithm of the ratio of distances of a point from the two foci), we can assume without loss of generality (by scaling, translating, and rotating the plane if necessary) that the two foci are at the two points $q=\pm 1$ of the complex plane.", "Consider the general form $q\mapsto (aq+b)/(cq+d)$ of a Möbius transformation as a fractional linear transformation of the complex plane.", "For a transformation to fix $q=1$ we need $a+b=c+d$ and for it to fix $q=-1$ we need $b-a=-(d-c)$ .", "Solving these two equations in four unknowns gives $a=d$ and $b=c$ .", "Therefore, the transformations fixing the foci take the special form $q\mapsto (aq+b)/(bq+a)$ .", "For a transformation of this form, and for any point $x$ on the interval $[-1,1]$ of real numbers with distance ratio $(1+x)/(1-x)$ , the image of $x$ has distance ratio $\frac{1+(ax+b)/(bx+a)}{1-(ax+b)/(bx+a)}=\frac{a+b}{a-b}\cdot \frac{1+x}{1-x},$ multiplying the original distance ratio of $x$ by a value that depends only on the transformation.", "Because the transformation acts separately on $\sigma $ - and $\tau $ -coordinates, we obtain the same multiplicative action on distance ratios for any other point on the complex plane with the same $\tau $ -coordinate as $x$ .", "This multiplicative action on distance ratios translates into an additive action on their logarithms.", "Figure: Illustration for lem:lift: If quadrilateral $sptq$ has high tilt, and $r$ lies between the quadrilateral and its enclosing circle, to the left of the red arc, then $r$ must have a higher $\tau $ -coordinate value than $p$ . Another advantage of bipolar coordinates is that they provide a way of comparing angles at the two foci, which will be convenient for relating the tilts of different quadrilaterals to each other: Observation Let $sptq$ be an equiangular arc-quadrilateral with interior angle $\theta $ and tilt 
$\varphi $ .", "Then, in the bipolar coordinate system with foci $s$ and $t$ , the angle (difference between the $\sigma $ -coordinates) of arc $tp$ in the limit as it approaches $t$ and of arc $sq$ as it approaches $s$ is exactly $2\theta +2\varphi -\pi $ .", "We can also use bipolar coordinates to show that heavily tilted quadrilaterals lead to an increase in $\tau $ -coordinate: Lemma Let $sptq$ be an equiangular arc-quadrilateral with tilt at least $3\pi /4$ , such that the large angle between vertex $s$ and the circle $C$ containing the quadrilateral is on the clockwise side of $s$ (the side closest to $p$ ).", "Let $r$ be a point in the bigon between arc $tq$ and $C$ , such that circular arc $srt$ makes an angle of at most $\pi /2$ with circular arc $spt$ .", "Then, in the bipolar coordinate system for foci $s$ and $t$ , $\tau _r > \tau _p$ .", "Because of the Möbius invariance of coordinate differences in the bipolar coordinate system, we can without loss of generality perform a Möbius transformation so that $s$ , $p$ , and $t$ are the bottom, left, and topmost points of $C$ , as shown in fig:lift.", "After this transformation, points above the horizontal line through $p$ will have higher $\tau $ -coordinate than $p$ , and points below the horizontal line through $p$ will have lower $\tau $ -coordinate.", "As the figure shows, an arc with tilt exactly $3\pi /4$ through $p$ and $s$ passes through the center of circle $C$ , causing the region in which $r$ may lie to be bounded by a vertical line segment (red) from the circle's center to $t$ .", "All points within this region have higher $\tau $ -coordinate than $p$ .", "For tilt values greater than $3\pi /4$ , the arc from $p$ to $s$ with that tilt extends even farther beyond the center of $C$ , so (although arc $tq$ may also extend farther to the left) the region in which $r$ may lie remains bounded within the upper left quarter of $C$ , within which all $\tau $ -coordinates are greater than 
that of $p$ ." ], [ "Nonplanarity", "We are now ready for our main theorems.", "Theorem 1 For $k>8$ , the bipartite graph $B(k)$ does not have a planar Lombardi drawing.", "Let $s$ and $t$ be the two yellow vertices of the graph $B(k)$ .", "We will consider a bipolar coordinate system with foci $s$ and $t$ .", "Note that graph $B(k)$ contains $k$ quadrilateral faces $sp_itq_i$ , where $p_i$ and $q_i$ are blue vertices.", "Because all four vertices of these quadrilaterals have degree $2k$ , these quadrilaterals must be drawn (if a planar Lombardi drawing is to exist) as equiangular arc-quadrilaterals with interior angle $\pi /k$ .", "Figure: Two arc-quadrilaterals with sharp angles between them at their shared vertices reach into each other's pockets to touch their circumscribing circles.", "The two smaller pockets on the outer arcs of the circles have significantly different $\tau $ -coordinates from each other in bipolar coordinates with the shared vertices as foci. Consider the two consecutive quadrilaterals $sp_itq_i$ and $sp_{i+1}tq_{i+1}$ whose enclosing circles $C_i$ and $C_{i+1}$ meet each other at the sharpest angle of any two consecutive enclosing circles.", "The sum of the angles between the $k$ consecutive circles is $2\pi $ , so this minimum angle is at most $2\pi /k$ .", "In order for point $q_i$ to lie on circle $C_i$ , some arc of circle $C_i$ must lie on the same side as $q_i$ of quadrilateral $sp_{i+1}tq_{i+1}$ .", "This arc must stay outside of quadrilateral $sp_{i+1}tq_{i+1}$ from its crossing point with the quadrilateral until terminating at either $s$ or $t$ ; by symmetry, we can assume without loss of generality that it terminates at the lower vertex $s$ , as shown in fig:reacharound.", "Then, near $s$ , quadrilateral $sp_{i+1}tq_{i+1}$ lies between circles $C_i$ and $C_{i+1}$ , so it must have tilt at least $\pi (1-\tfrac{2}{k})$ .", "By obs:tilt-angle and the equal spacing of angles around $s$ and $t$ , all quadrilaterals $sp_itq_i$ must 
have the same tilt.", "Because $k>8$ , this tilt is $\ge 3\pi /4$ , so each quadrilateral $sp_itq_i$ meets the precondition of having high tilt of lem:lift.", "For any quadrilateral $sp_itq_i$ , the point $p_{i+1}$ of the next quadrilateral is connected to $s$ by an arc of quadrilateral $sp_{i+1}tq_{i+1}$ that lies entirely within $C_{i+1}$ and makes an angle of $\pi /k$ to arc $sq_i$ , so the arc of $C_{i+1}$ containing $p_{i+1}$ makes an angle of at most $3\pi /k$ to the arc of $C_i$ containing $p_i$ .", "Thus, point $p_{i+1}$ meets the other precondition of lem:lift for the position of the point $r$ with respect to the quadrilateral.", "By this lemma, each point $p_{i+1}$ has a greater $\tau $ -coordinate than $p_i$ .", "But it is impossible for this monotonic increase in $\tau $ -coordinates to continue all the way around the circle of quadrilaterals surrounding the two foci and back to the starting point.", "This impossibility shows that the drawing cannot exist.", "Theorem 2 For $k>8$ , the series-parallel graph $S(k)$ , embedded as shown in fig:series-parallel, does not have a planar Lombardi drawing.", "As with $B(k)$ , this graph contains $k$ quadrilateral faces, sharing the same two opposite yellow vertices, in which all vertices have equal degree ($4k$ in $S(k)$ instead of $2k$ in $B(k)$ ).", "The proof of thm:bipartite-nonplanar used only this property of $B(k)$ , and not the precise value of the interior angle of these quadrilaterals, so it applies equally well to $S(k)$ .", "We remark that the construction of $S(k)$ can be adjusted in several different ways to obtain more constrained families of embedded series-parallel and related graphs that, again, have no planar Lombardi drawing: If we add an edge between the two yellow vertices, and adjust the lengths of the red chains to keep the yellow and blue degrees equal, we obtain a family of embedded maximal series-parallel graphs (that is, embedded 2-trees) with no planar Lombardi drawing.", "If we 
subdivide the yellow–red and red–red edges of $S(k)$ , we obtain a family of embedded bipartite series-parallel graphs with no planar Lombardi drawing.", "If we replace the red chains of $S(k)$ by an appropriate number of degree-one red vertices, connected to the blue vertices, we obtain a family of embedded apex-trees (graphs formed by adding a single vertex to a tree) with no planar Lombardi drawing.", "The apex vertex (the vertex whose removal produces a tree) can be chosen to be either of the two yellow vertices.", "We omit the details." ], [ "Conclusions", "We have shown that bipartite planar graphs, and series-parallel graphs with a fixed planar embedding, do not always have planar Lombardi drawings, even though their low degeneracy implies that they always have (nonplanar) Lombardi drawings.", "In the question of which subfamilies of planar graphs have planar Lombardi drawings, several important cases remain unsolved.", "These include the outerplanar graphs, both with and without assuming an outerplanar embedding, the cactus graphs, and the series-parallel graphs without a fixed choice of embedding.", "We leave these as open problems for future research." ] ]
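The translation lemma for bipolar coordinates used in the proofs above is easy to confirm numerically. The sketch below (ours, not from the paper; the coefficients $a,b$ and the sample points are arbitrary) places the foci at $\pm 1$, uses the focus-preserving form $q\mapsto(aq+b)/(bq+a)$ derived in the paper, and checks that the $\sigma$- and $\tau$-shifts are the same at every point:

```python
import math, cmath

S, T = -1.0, 1.0  # foci of the bipolar coordinate system

def sigma(p):
    """sigma-coordinate: the oriented angle s-p-t at the point p."""
    return cmath.phase((S - p) / (T - p))

def tau(p):
    """tau-coordinate: log of the ratio of distances from p to the foci."""
    return math.log(abs(p - S) / abs(p - T))

# A focus-preserving Moebius map q -> (aq + b)/(bq + a); any complex
# a, b with a != +-b gives a map fixing q = +1 and q = -1.
a, b = 2.0 + 1.0j, 0.5 - 0.3j
f = lambda q: (a * q + b) / (b * q + a)

points = [0.3 + 0.4j, -0.7 + 0.1j, 1.5 - 2.0j, 0.05j]
tau_shifts = [tau(f(p)) - tau(p) for p in points]
sigma_shifts = [(sigma(f(p)) - sigma(p)) % (2 * math.pi) for p in points]

# Additive action: the shifts depend on the map, not on the point.
assert all(abs(s - tau_shifts[0]) < 1e-9 for s in tau_shifts)
assert all(abs(s - sigma_shifts[0]) < 1e-9 for s in sigma_shifts)
```

Following the paper's derivation, the common $\tau$-shift is $\log|(a+b)/(a-b)|$ and the common $\sigma$-shift is $\arg((a+b)/(a-b))$ modulo $2\pi$.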
1906.04401
[ [ "Simultaneously Learning Architectures and Features of Deep Neural\n Networks" ], [ "Abstract This paper presents a novel method that simultaneously learns the number of filters and network features repeatedly over multiple epochs.", "We propose a novel pruning loss that explicitly enforces the optimizer to focus on promising candidate filters while suppressing contributions of less relevant ones.", "Meanwhile, we further propose to enforce the diversities between filters, and this diversity-based regularization term improves the trade-off between model sizes and accuracies.", "It turns out the interplay between architecture and feature optimizations improves the final compressed models, and the proposed method compares favorably to existing methods, in terms of both model sizes and accuracies, for a wide range of applications including image classification, image compression and audio classification." ], [ "INTRODUCTION", "Large and deep neural networks, despite their great successes in a wide variety of applications, call for compact and efficient model representations to reduce the vast amount of network parameters and computational operations, which are resource-hungry in terms of memory, energy and communication bandwidth consumption.", "This need is especially imperative for resource-constrained devices such as mobile phones, wearable and Internet of Things (IoT) devices.", "Neural network compression is a set of techniques that address these challenges raised in real-life industrial applications.", "Minimizing network sizes without compromising original network performances has been pursued by a wealth of methods, which often adopt a three-phase learning process, i.e.", "training-pruning-tuning.", "In essence, network features are first learned, followed by the pruning stage to reduce network sizes.", "The subsequent fine-tuning phase aims to restore deteriorated performances incurred by undue pruning.", "This ad hoc three-phase approach, 
although empirically justified e.g.", "in [14], [17], [12], [20], [22], was recently questioned with regard to its efficiency and effectiveness.", "Specifically, [15], [3] argued that the network architecture should be optimized first, and then features should be learned from scratch in subsequent steps.", "In contrast to the two aforementioned opposing approaches, the present paper illustrates a novel method that simultaneously learns both the number of filters and network features over multiple optimization epochs.", "This integrated optimization process brings about immediate benefits and challenges: on the one hand, separated processing steps such as training, pruning, fine-tuning, etc., are no longer needed, and the integrated optimization step guarantees consistent performances for the given neural network compression scenarios.", "On the other hand, the dynamic change of network architectures has significant influences on the optimization of features, which in turn might affect the optimal network architectures.", "It turns out the interplay between architecture and feature optimizations plays a crucial role in improving the final compressed models." 
], [ "RELATED WORK", "Network pruning was pioneered [11], [6], [4] in the early development of neural networks, and a broad range of methods has been developed since then.", "We focus on neural network compression methods that prune filters or channels.", "For a thorough review of other approaches we refer to a recent survey paper [2].", "Li et al.", "[12] proposed to prune filters with small effects on the output accuracy and managed to reduce about one third of the inference cost without compromising original accuracy on the CIFAR-10 dataset.", "Wen et al.", "[20] proposed a structured sparsity regularization framework, in which the group lasso constraint term was incorporated to penalize and remove unimportant filters and channels.", "Zhou et al.", "[22] also adopted a similar regularization framework, with tensor trace norm and group sparsity incorporated to penalize the number of neurons.", "Up to 70% of model parameters were reduced without sacrificing classification accuracy on the CIFAR-10 dataset.", "Recently, Liu et al.", "[14] proposed an interesting network slimming method, which imposes L1 regularization on channel-wise scaling factors in batch-normalization layers and demonstrated a remarkable compression ratio and speedup using a surprisingly simple implementation.", "Nevertheless, network slimming based on scaling factors is not guaranteed to achieve desired accuracies, and separate fine-tuning is needed to restore reduced accuracies.", "Qin et al.", "[17] proposed a functionality-oriented filter pruning method to remove less important filters, in terms of their contributions to classification accuracies.", "It was shown that the effort for model retraining is moderate but still necessary, as in most state-of-the-art compression methods.", "DIVNET adopted Determinantal Point Process (DPP) to enforce diversities between individual neural activations [16].", "The diversity of filter weights defined in (REF ) is related to the orthogonality of the weight matrix, which has 
been extensively studied.", "One example is [5], which proposed to learn Stiefel layers, which have orthogonal weights, and demonstrated their applicability in compressing network parameters.", "Interestingly, the notion of the diversity regularized machine (DRM) has been proposed to generate an ensemble of SVMs in the PAC learning framework [21], yet its definition of diversity is critically different from our definition in (REF ), and its applicability to deep neural networks is unclear." ], [ "SIMULTANEOUS LEARNING OF ARCHITECTURE AND FEATURE", "The proposed compression method belongs to the general category of filter-pruning approaches.", "In contrast to existing methods [14], [17], [12], [20], [22], [15], [3], we adopt the following techniques to ensure that simultaneous optimization of network architectures and features is a technically sound approach.", "First, we introduce an explicit pruning loss estimation as an additional regularization term in the optimization objective function.", "As demonstrated by experiment results in Section , the introduced pruning loss enforces the optimizer to focus on promising candidate filters while suppressing contributions of less relevant ones.", "Second, based on the importance of filters, we explicitly turn off unimportant filters below a given percentile threshold.", "We found the explicit shutting down of less relevant filters is indispensable to prevent a biased estimation of the pruning loss.", "Third, we also propose to enforce the diversities between filters, and this diversity-based regularization term improves the trade-off between model sizes and accuracies, as demonstrated in various applications.", "Our proposed method is inspired by network slimming [14] and the main differences from this prior art are two-fold: a) we introduce the pruning loss and incorporate explicit pruning into the learning process, without resorting to multi-pass pruning-retraining cycles; b) we also introduce a filter-diversity-based regularization term, which 
improves the trade-off between model sizes and accuracies." ], [ "Loss Function", "Liu et al.", "[14] proposed to push towards zero the scaling factor in the batch normalization (BN) step during learning, and subsequently, the insignificant channels with small scaling factors are pruned.", "This sparsity-induced penalty is introduced by regularizing the L1-norm of the learnable parameter $\gamma $ in the BN step i.e., $g(\gamma ) = \left| \gamma \right|; \textnormal { where } \hat{z}= \frac{z_{in} - \mu _B}{\sqrt{\sigma ^2 + \epsilon }}; z_{out} = \gamma \hat{z} + \beta ,$ in which $z_{in}$ denote filter inputs, $\mu _B, \sigma $ the filter-wise mean and variance of inputs, $\gamma , \beta $ the scaling and offset parameters of batch normalization (BN) and $\epsilon $ a small constant to prevent numerical instability for small variance.", "It is assumed that there is always a BN filter appended after each convolution and fully connected filter, so that the scaling factor $\gamma $ is directly leveraged to prune unimportant filters with small $\gamma $ values.", "Alternatively, we propose to directly introduce a scaling factor to each filter since it is more universal than reusing BN parameters, especially considering networks that have no BN layers.", "By incorporating a filter-wise sparsity term, the objective function to be minimized is given by: $L = \sum _{(\textbf {x},y)} loss( f(\textbf {x},\textbf {W}), y) + \lambda \sum _{\gamma \in \Gamma } g(\gamma ),$ where the first term is the task-based loss, $g(\gamma )=||\gamma ||_1$ and $\Gamma $ denotes the set of scaling factors for all filters.", "This pruning scheme, however, suffers from two main drawbacks: 1) since scaling factors are equally minimized for all filters, it is likely that the pruned filters have non-negligible contributions that should not be unduly removed.", "2) the pruning process, i.e., architecture selection, is performed independently w.r.t.", "the feature 
learning; the performance of the pruned network is inevitably compromised and has to be recovered by single-pass or multi-pass fine-tuning, which imposes additional computational burdens.", "Let $\textbf {W}, \check{\textbf {W}}, \hat{\textbf {W}}$ denote the sets of neural network weights for, respectively, all filters, those pruned and those remaining, i.e.", "$\textbf {W} = \lbrace \check{\textbf {W}} \bigcup \hat{\textbf {W}} \rbrace $ .", "In the same vein, $\Gamma = \lbrace P(\Gamma ) \bigcup R(\Gamma )\rbrace $ denotes the sets of scaling factors for all filters, those removed and those remaining, respectively.", "To mitigate the aforementioned drawbacks, we propose to introduce two additional regularization terms to Eq.", "REF , $L( \hat{\textbf {W}}, R(\Gamma )) = & \sum _{(\textbf {x},y)} loss( f(\textbf {x},\hat{\textbf {W}} ), y) + \lambda _1 \sum _{\gamma \in R(\Gamma ) } g(\gamma ) \nonumber \\& - \lambda _2 \frac{\sum _{\gamma \in R({\Gamma }) } \gamma }{\sum _{\gamma \in \Gamma } \gamma } - \lambda _3 \sum _{l \in L} Div(\hat{\textbf {W}}^l),$ where $loss( \cdot , \cdot )$ and $\sum _{\gamma \in R(\Gamma ) } g(\gamma )$ are defined as in Eq.", "REF , the third term is the pruning loss and the fourth is the diversity loss; both are elaborated below.", "$\lambda _1, \lambda _2, \lambda _3$ are weights of the corresponding regularization terms.", "Figure: Comparison of scaling factors for three methods, i.e., baseline with no regularization, network-slimming , and the proposed method with diversified filters, trained with CIFAR-10 and CIFAR-100.", "Note that the pruning losses defined in () are 0.2994, 0.0288, 1.3628e-6, respectively, for the three methods.", "Accuracy deteriorations are 60.76% and 0% for network slimming and the proposed method, respectively, and the baseline networks completely failed after pruning, due to insufficient preserved filters at certain layers." 
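As a concrete illustration of the two scaling-factor terms in the objective above, the following sketch (ours, not the authors' code; the $\gamma$ vector, keep-mask rule, and $\lambda$ values are arbitrary choices) computes the sparsity term over the remaining filters $R(\Gamma)$ together with the kept mass $\gamma^R$ whose negation is the pruning-loss regularizer:

```python
import numpy as np

def scaling_factor_terms(gamma, keep, lam1=1e-4, lam2=1e-2):
    """Sparsity term lam1 * sum_{R(Gamma)} |gamma| plus the negated
    pruning-loss term -lam2 * gamma^R, where gamma^R is the share of
    total scaling-factor mass held by the remaining filters."""
    sparsity = lam1 * np.abs(gamma[keep]).sum()
    gamma_R = gamma[keep].sum() / gamma.sum()
    return sparsity - lam2 * gamma_R, gamma_R

gamma = np.array([0.9, 0.05, 0.7, 0.02, 0.8])   # per-filter scaling factors
keep = gamma >= np.percentile(gamma, 40)        # keep all but the bottom 40%
reg, gamma_R = scaling_factor_terms(gamma, keep)
assert keep.sum() == 3 and gamma_R > 0.97       # nearly all mass is kept
```

Minimizing the negated $\gamma^R$ term pushes mass onto the kept filters, which is exactly the concentration effect visible in the figure's scaling-factor comparison.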
], [ "Estimation of pruning loss", "The second regularization term in (REF ) i.e.", "$\gamma ^R := \frac{\sum _{\gamma \in R({\Gamma }) } \gamma }{\sum _{\gamma \in \Gamma } \gamma }$ (and its complement $\gamma ^P :=\frac{\sum _{\gamma \in P({\Gamma }) } \gamma }{\sum _{\gamma \in \Gamma } \gamma } = 1 - \gamma ^R$ ) is closely related to performance deterioration incurred by undue pruning. (In the rest of the paper we refer to it as the estimated pruning loss.)", "The scaling factors of pruned filters $ P(\Gamma )$ , as in [14], are determined by first ranking all $\gamma $ and taking those below the given percentile threshold.", "Incorporating this pruning loss enforces the optimizer to increase scaling factors of promising filters while suppressing contributions of less relevant ones.", "The rationale of this pruning strategy can also be empirically justified in Figure REF , in which scaling factors of three different methods are illustrated.", "When the proposed regularization terms are added, clearly, we observed a tendency for scaling factors to be dominated by a small number of filters: when 70% of filters are pruned from a VGG network trained with the CIFAR-10 dataset, the estimated pruning loss $ \frac{\sum _{\gamma \in P({\Gamma }) } \gamma }{\sum _{\gamma \in \Gamma } \gamma } $ equals 0.2994, 0.0288, 1.3628e-6, respectively, for the three compared methods.", "Corresponding accuracy deteriorations are 60.76% and 0% for network slimming [14] and the proposed method.", "Therefore, retraining of the pruned network is no longer needed for the proposed method, while [14] has to restore the original accuracy through single-pass or multi-pass pruning-retraining cycles." 
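A minimal sketch of this estimate (ours; the example $\gamma$ vectors are fabricated purely for illustration) selects $P(\Gamma)$ by a percentile threshold and computes $\gamma^P$, showing that mass concentrated on few filters yields a small estimated pruning loss:

```python
import numpy as np

def estimated_pruning_loss(gamma, prune_ratio):
    """gamma^P: share of total scaling-factor mass held by pruned filters.

    Filters are ranked by gamma and those below the percentile
    threshold implied by prune_ratio form P(Gamma)."""
    thresh = np.percentile(gamma, 100 * prune_ratio)
    pruned = gamma < thresh
    return gamma[pruned].sum() / gamma.sum(), pruned

# A model whose mass concentrates on few filters vs. one that spreads it.
concentrated = np.array([1.0, 0.9, 1e-6, 2e-6, 1e-6, 3e-6])
spread = np.array([0.4, 0.35, 0.3, 0.3, 0.25, 0.3])

gp_c, _ = estimated_pruning_loss(concentrated, 0.5)
gp_s, _ = estimated_pruning_loss(spread, 0.5)
assert gp_c < gp_s   # concentration gives a smaller estimated pruning loss
```

In this toy setting `gp_c` is on the order of 1e-6, mirroring the near-zero 1.3628e-6 value reported for the proposed method in the figure.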
], [ "Turning off candidate filters", "It must be noted that the original loss $\sum _{(\textbf {x},y)} loss( f(\textbf {x},{\textbf {W}} ), y)$ is independent of the pruning operation.", "If we adopt this loss in (REF ), the estimated pruning loss might be seriously biased because of undue assignments of $\gamma $ not being penalized.", "It is likely that some candidate filters are assigned rather small scaling factors; nevertheless, they still retain decisive contributions to the final classifications.", "Pruning these filters blindly leads to serious performance deterioration, according to the empirical study, where we observe over 50$\%$ accuracy loss at high pruning ratios.", "In order to prevent such biased pruning loss estimation, we therefore explicitly shut down the outputs of selected filters by setting corresponding scaling factors to absolute zero.", "The adopted loss function becomes $\sum _{(\textbf {x},y)} loss( f(\textbf {x},\hat{\textbf {W}} ), y)$ .", "We found the turning-off of candidate filters is indispensable.", "Algorithm 1: Online Pruning $\textit {Training data}~ \leftarrow \lbrace x_i, y_i\rbrace _{i=1}^{N} $ $\textit {Target pruning ratio}~\mathbf {Pr}_N \leftarrow \mathbf {p}\% $ $\textit {Initial network weights}~W \leftarrow \textit {method by \cite {he2015delving}}$ $\Gamma \leftarrow \lbrace 0.5\rbrace $ $\hat{W} \leftarrow W$ $P(\Gamma ) \leftarrow \emptyset $ $R(\Gamma ) \leftarrow \Gamma $ for each epoch $n \in \lbrace 1,\dots ,N\rbrace $ $\textit {Current pruning ratio}~\mathbf {Pr}_n \in [0, \mathbf {Pr}_N]$ $\textit {Sort}~\Gamma $ $P(\Gamma ) \leftarrow \textit {prune filters w.r.t. 
}", "\\mathbf {Pr}_n$ $R(\\Gamma ) \\leftarrow \\Gamma \\setminus \\textit {P} (\\Gamma )$ $\\textit {Compute}~L( \\hat{\\textbf {W}}, R(\\Gamma ))~\\textit {in Eq.", "}~(\\ref {eq:regularized_sparisty_func})$ $\\hat{\\textbf {W}} \\leftarrow \\textit {SGD}$ $\\check{\\textbf {W}} \\leftarrow \\hat{\\textbf {W}} \\setminus \\check{\\textbf {W}}$" ], [ "Online pruning", "We take a global threshold for pruning, determined by a percentile over all channel scaling factors.", "The pruning process is performed throughout training, i.e., pruning and learning proceed simultaneously.", "To this end, we compute a linearly increasing pruning ratio from the first epoch (e.g., 0%) to the last epoch (e.g., 100%), at which the ultimate target pruning ratio is applied.", "Such an approach gives neurons sufficient room to evolve, driven by the diversity term and the pruning loss, and avoids prematurely mis-pruning neurons that produce crucial features.", "Consequently, our architecture learning is seamlessly integrated with feature learning.", "After each pruning operation, a narrower and more compact network is obtained, and its weights are copied from the previous network." 
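The linearly increasing schedule and the global percentile threshold described above can be sketched as follows (a hypothetical illustration; the epoch count and target ratio are placeholders, and the function names are ours):

```python
# Linearly increasing pruning ratio: 0 at the first epoch, reaching the
# target ratio at the last epoch, so pruning and learning proceed together.
def pruning_ratio(epoch, num_epochs, target_ratio):
    if num_epochs <= 1:
        return target_ratio
    return target_ratio * epoch / (num_epochs - 1)   # epoch in 0..num_epochs-1

def filters_to_prune(gammas, ratio):
    """Global percentile threshold over all channel scaling factors:
    return the indices of the smallest-gamma filters to prune."""
    k = int(len(gammas) * ratio)
    return set(sorted(range(len(gammas)), key=lambda i: gammas[i])[:k])

schedule = [pruning_ratio(e, 160, 0.7) for e in range(160)]
assert schedule[0] == 0.0
assert abs(schedule[-1] - 0.7) < 1e-12
# At a 50% ratio, the two filters with the smallest scaling factors go:
assert filters_to_prune([0.1, 0.001, 0.5, 0.002], 0.5) == {1, 3}
```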
], [ "Filter-wise diversity", "The third regularization term in (REF ) encourages high diversity between filter weights, as shown below.", "Empirically, we found that this term improves the trade-off between model sizes and accuracies (see experiment results in Section ).", "We treat each filter weight, at layer $l$ , as a weight (feature) vector $\\textbf {W}^l_i$ of length $w \\times h \\times c$ , where $w,h$ are the filter width and height and $c$ is the number of channels in the filter.", "The diversity between two weight vectors of the same length is based on the normalized cross-correlation of the two vectors: $div(\\textbf {W}_i, \\textbf {W}_j) := 1 - | \\langle \\mathbf {\\bar{W}}_i, \\mathbf {\\bar{W}}_j \\rangle | ,$ in which $ \\mathbf {\\bar{W}} = \\frac{ \\mathbf {{W}}}{ | \\mathbf {{W}} | } $ are normalized weight vectors, and $\\langle \\cdot , \\cdot \\rangle $ is the dot product of two vectors.", "Clearly, the diversity is bounded, $0 \\le div(\\textbf {W}_i, \\textbf {W}_j) \\le 1$ , with values close to 0 indicating low diversity between highly correlated vectors and values near 1 meaning high diversity between uncorrelated vectors.", "In particular, a diversity equal to 1 means that the two vectors are orthogonal to each other.", "The diversities between $N$ filters at the same layer $l$ are thus characterized by an $N$ -by-$N$ matrix whose elements $d_{ij} =div(\\textbf {W}^l_i, \\textbf {W}^l_j), i,j=\\lbrace 1,\\cdots ,N\\rbrace $ are pairwise diversities between weight vectors $\\textbf {W}^l_i, \\textbf {W}^l_j$ .", "Note that the diagonal elements $d_{ii}$ are identically 0.", "The total diversity between all filters is thus defined as the sum of all elements $Div(\\textbf {W}^l) := \\sum ^{N,N}_{i,j=1,1} d_{i,j}.$", "Table: Results on CIFAR-10 dataset", "Table: Results on CIFAR-100 dataset" ], [ "EXPERIMENT RESULTS", "In this section, we evaluate the effectiveness of our method on various applications with both visual and audio data." 
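The filter-wise diversity defined above is straightforward to compute once each filter is flattened into a weight vector; a minimal pure-Python sketch (illustrative only, names are ours):

```python
# div(Wi, Wj) = 1 - |<Wi_bar, Wj_bar>| with L2-normalized weight vectors;
# 0 for parallel (highly correlated) filters, 1 for orthogonal ones.
import math

def diversity(w_i, w_j):
    ni = math.sqrt(sum(a * a for a in w_i))
    nj = math.sqrt(sum(b * b for b in w_j))
    dot = sum(a * b for a, b in zip(w_i, w_j))
    return 1.0 - abs(dot / (ni * nj))

def total_diversity(filters):
    """Div(W^l): sum of all N x N pairwise diversities (diagonal terms are 0)."""
    return sum(diversity(wi, wj) for wi in filters for wj in filters)

# Orthogonal filters have diversity 1; parallel or anti-parallel ones 0.
assert abs(diversity([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-12
assert abs(diversity([1.0, 2.0], [-2.0, -4.0])) < 1e-12
```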
], [ "Datasets", "For visual tasks, we adopt the ImageNet and CIFAR datasets.", "The ImageNet dataset contains 1.2 million training images and 50,000 validation images of 1000 classes.", "CIFAR-10 [10] consists of 50K training and 10K testing RGB images in 10 classes.", "CIFAR-100 is similar to CIFAR-10, except that it has 100 classes.", "The input image is 32$\\times $ 32, randomly cropped from a zero-padded 40$\\times $ 40 image or its flipping.", "For the audio task, we adopt the ISMIR Genre dataset [1], which was assembled for training and development in the ISMIR 2004 Genre Classification contest.", "It contains 1458 full-length audio recordings from Magnatune.com distributed across 6 genre classes: Classical, Electronic, JazzBlues, MetalPunk, RockPop, World." ], [ "Image Classification", "We evaluate the performance of our proposed method for image classification on CIFAR-10/100 and ImageNet.", "We evaluate our method on two popular network architectures: the classical plain network VGG-Net [18] and the deep residual network ResNet [8].", "We take a variation of the original VGG-Net, i.e., the VGG-19 used in [14], for comparison purposes.", "We adopt ResNet-164, a 164-layer pre-activation ResNet with bottleneck structure.", "As baseline networks, we compare with the original networks without regularization terms and their counterparts in network-slimming [14].", "For ImageNet, we adopt VGG-16 and ResNet-50 in order to compare with the original networks.", "To make a fair comparison with [14], we adopt BN-based scaling factors for optimization and pruning.", "On CIFAR, we train all the networks from scratch using SGD with mini-batch size 64 for 160 epochs.", "The learning rate is initially set to 0.1 and is divided by 10 at 50% and 75% of the training epochs, respectively.", "Nesterov momentum [19] of 0.9 without dampening and a weight decay of $10^{-4}$ are used.", "The robust weight initialization method proposed by 
[7] is adopted.", "We use the same channel sparse regularization term and its hyperparameter $\\lambda = 10^{-4}$ as defined in [14].", "Table: Accuracies of different methods before (orig.)", "and after pruning (pruned).", "For CIFAR10 and CIFAR100, 70% and 50% of filters are pruned, respectively.", "Note that 'NA' indicates the baseline networks completely failed after pruning, due to insufficient preserved filters at certain layers." ], [ "Overall performance", "The results on CIFAR-10 and CIFAR-100 are shown in Table REF and Table REF respectively.", "On both datasets, we observe that when 50-70% of the filters of the evaluated networks are pruned, the new networks can still achieve accuracy higher than the original networks.", "For instance, with 70% of filters pruned, VGG-19 achieves an accuracy of 0.9393, compared to 0.9366 of the original model on CIFAR-10.", "We attribute this improvement to the introduced diversity between filter weights, which naturally provides discriminative feature representations in the intermediate layers of the networks.", "As a comparison, our method consistently outperforms network-slimming without resorting to fine-tuning or multi-pass pruning-retraining cycles.", "It is also worth noting that our method is capable of pruning networks at prohibitively high ratios that are not possible in network-slimming.", "Take the VGG-19 network on the CIFAR-10 dataset as an example: network-slimming prunes at most 70%, beyond which point the network cannot be reconstructed as some layers are totally destroyed.", "On the contrary, our method is able to construct a much narrower network by pruning 80% of filters while retaining a marginally degraded accuracy of 0.9302.", "We conjecture this improvement is enabled by our simultaneous feature and architecture learning, which can avoid pruning filters prematurely, as happens in network-slimming, where the pruning operation (architecture selection) is isolated from the feature learning process and the performance of the 
pruned network can only be restored via fine-tuning.", "The results on ImageNet are shown in Table REF where we also present a comparison with [9], which reported top-1 and top-5 errors on ImageNet.", "On VGG-16, our method incurs 1.2% less accuracy loss while saving an additional 20.5M parameters and 0.8B FLOPs compared with [9].", "On ResNet-50, our method saves 5M more parameters and 1.4B more FLOPs than [9] while providing 0.21% higher accuracy.", "Table: Results on ImageNet dataset" ], [ "Ablation study", "In this section we investigate the contribution of each proposed component through an ablation study." ], [ "Filter Diversity", "Fig.", "REF (a) shows the sorted scaling factors of the VGG-19 network trained with the proposed filter diversity loss at various training epochs.", "As training progresses, the scaling factors become increasingly sparse and the number of large scaling factors, i.e., the area under the curve, decreases.", "Fig.", "REF shows the sorted scaling factors of the VGG-19 network for the baseline model with no regularization, network-slimming [14], and the proposed method with diversified filters, trained on CIFAR-10 and CIFAR-100.", "We observe significantly improved sparsity after introducing filter diversity to the network compared with network-slimming, indicated by nsf.", "Recall that the scaling factors essentially determine the importance of filters; thus, maximizing nsf ensures that the deterioration due to filter pruning is minimized.", "Furthermore, the number of filters associated with large scaling factors is largely reduced, allowing more irrelevant filters to be pruned harmlessly.", "This observation is quantitatively confirmed in Table REF , which lists the accuracies of the three schemes before and after pruning for both the CIFAR-10 and CIFAR-100 datasets.", "It is observed that retraining of the pruned network is no longer needed for the proposed method, while network-slimming has to restore the original accuracy through single-pass or 
multi-pass pruning-retraining cycles.", "The accuracy deteriorations are 60.76% and 0% for network-slimming and the proposed method respectively, whilst the baseline networks completely fail after pruning, due to insufficient preserved filters at certain layers." ], [ "Online Pruning", "We first empirically investigate the effectiveness of the proposed pruning loss.", "After setting $\\lambda _3=0$ , we train the VGG-19 network on the CIFAR-10 dataset with the pruning loss switched off and on respectively (set $\\lambda _2=0$ and $\\lambda _2=10^{-4}$ ).", "By adding the proposed pruning loss, we observe an improved accuracy of 0.9325 compared to 0.3254 at a pruning ratio of 70%.", "When pruning at 80%, the network without the pruning loss cannot be constructed due to insufficient preserved filters at certain layers, whereas the network trained with the pruning loss can attain an accuracy of 0.9298.", "This experiment demonstrates that the proposed pruning loss enables online pruning, which dynamically selects the architecture while evolving filters to achieve extremely compact structures.", "Fig.", "REF (b) shows the sorted scaling factors of the VGG-19 network trained with the pruning loss subject to various target pruning ratios on CIFAR-10.", "We observe that, given a target pruning ratio, our algorithm adaptively adjusts the distribution of scaling factors to accommodate the pruning operation.", "Such a dynamic evolution warrants little accuracy loss at a considerably high pruning ratio, as opposed to static offline pruning approaches, e.g., network-slimming, where the pruning operation is isolated from the training process, causing considerable accuracy loss or even network destruction.", "Figure: Network architecture for image compression.", "Table: Results of image compression on CIFAR-100 dataset" ], [ "Image Compression", "The proposed approach is applied to an end-to-end image compression task, which follows a general autoencoder architecture as illustrated in Fig.", "REF .", "We utilize 
a general scaling layer, which is added after each convolutional layer, with each scaling factor initialized to 1.", "The evaluation is performed on the CIFAR-100 dataset.", "We train all the networks from scratch using Adam with mini-batch size 128 for 600 epochs.", "The learning rate is set to 0.001 and the MSE loss is used.", "The results are listed in Table REF , where both parameters and floating-point operations (FLOPs) are reported.", "Our method can save about 40% - 60% of parameters and 50% - 60% of the computational cost with a minor loss of performance (PSNR)." ], [ "Audio Classification", "We further apply our method to an audio classification task, namely music genre classification.", "The preprocessing of audio data is similar to [13] and produces a Mel spectrogram matrix of size 80$\\times $ 80.", "The network architecture is illustrated in Fig.", "REF , where the scaling layer is added after both the convolutional layers and the fully connected layers.", "The evaluation is performed on the ISMIR Genre dataset.", "We train all the networks from scratch using Adam with mini-batch size 64 for 50 epochs.", "The learning rate is set to 0.003.", "The results are listed in Table REF , where both parameters and FLOPs are reported.", "Our approach saves about 92% of parameters while achieving 1% higher accuracy and saving 80% of the computational cost.", "With a minor loss of about 1%, 99.5% of parameters are pruned, resulting in an extremely narrow network with a 50$\\times $ speedup.", "Figure: Network architecture for music genre classification.", "Table: Results of music genre classification on ISMIR Genre dataset" ], [ "CONCLUSIONS", "In this paper, we have proposed a novel approach to simultaneously learning architectures and features in deep neural networks.", "This is mainly underpinned by a novel pruning loss and an online pruning strategy which explicitly guide the optimization toward an optimal architecture driven by a target pruning ratio or model size.", "The proposed pruning loss enabled online 
pruning, which dynamically selected the architectures while evolving filters to achieve extremely compact structures.", "In order to improve the feature representation power of the remaining filters, we further proposed to enforce diversity between filters for more effective feature representation, which in turn improved the trade-off between model sizes and accuracies.", "We conducted comprehensive experiments to show that the interplay between architecture and feature optimization improves the final compressed models in terms of both model sizes and accuracies for various tasks on both visual and audio data." ] ]
1906.04505
[ [ "Structure of multiplicative simple Hom-Jordan algebras" ], [ "Abstract In this paper, we mainly study the structure of multiplicative simple Hom-Jordan algebras.", "We discuss equivalent conditions for multiplicative Hom-Jordan algebras to be solvable, simple and semi-simple.", "As an application, we give a theorem on the classification of multiplicative simple Hom-Jordan algebras.", "Moreover, some propositions about bimodules of multiplicative Hom-Jordan algebras are also presented." ], [ "Introduction", "Algebras where the identities defining the structure are twisted by a homomorphism are called Hom-algebras.", "Many researchers have investigated Hom-algebras in the recent literature.", "The theory of Hom-algebras started from Hom-Lie algebras, introduced and discussed in [6], [10], [11], [12].", "Hom-associative algebras were introduced in [15], while Hom-Jordan algebras were introduced in [14] as a twisted generalization of Jordan algebras.", "In recent years, vertex operator algebras have become more and more popular because of their importance.", "As a result, more and more people study vertex operator algebras.", "In [8], by using the structure of Heisenberg algebras, Lam constructed a vertex operator algebra such that the weight two space satisfies $V_{2} \\cong J$ for a given simple Jordan algebra $J$ of type A, B or C over $\\mathbb {C}$ .", "In [3], by using a vertex operator algebra associated with the simple Jordan algebra of type D, Ashihara gave a counterexample to the following assertion: if $R$ is a subalgebra of the Griess algebra, then the weight two space of the vertex operator subalgebra VOA$(R)$ generated by $R$ coincides with $R$ .", "Zhao, H. B. 
constructed simple quotients $\\bar{V}_{{J}, r}$ for $r \\in \\mathbb {Z}_{\\ne 0}$ using dual-pair type constructions, where $\\bar{V}_{{J}, r}$ satisfies $(\\bar{V}_{{J}, r})_{0} = \\mathbb {C}1$ , $(\\bar{V}_{{J}, r})_{1} = \\lbrace 0\\rbrace $ and $(\\bar{V}_{{J}, r})_{2}$ is isomorphic to the type B Jordan algebra ${J}$ .", "Moreover, he reproved in his paper [19] that $V_{{J}, r}$ is simple if $r \\notin \\mathbb {Z}$ .", "The structure of Hom-algebras seems more complex because of the variety of twisting maps.", "But the structure of the original algebras is fairly clear.", "So one way to study the structure of Hom-algebras is to look for relationships between Hom-algebras and their induced algebras.", "In [15], Makhlouf and Silvestrov introduced the structure of Hom-associative algebras and Hom-Leibniz algebras together with their induced algebras.", "In [13], Li, X. X. studied the structure of multiplicative Hom-Lie algebras and gave equivalent conditions for a multiplicative Hom-Lie algebra to be solvable, simple and semi-simple.", "By a similar analysis, in this paper we successfully generalize the above results to Hom-Jordan algebras.", "It is well known that simple algebras play an important role in structure theory.", "Similarly, it is very necessary to study simple Hom-algebras in Hom-algebra theory.", "In [5], Chen, X. and Han, W. 
gave a classification theorem for simple multiplicative Hom-Lie algebras.", "Using some theorems obtained in Section , we generalize the above theorem to Hom-Jordan algebras.", "Nowadays, one of the most active trends in mathematics has to do with representations and deformations.", "These two topics are important tools in most parts of Mathematics and Physics.", "Representations of Hom-Lie algebras were introduced and studied in [4], [17].", "But representations of Hom-Jordan algebras came much later.", "In 2018, Attan gave the definition of bimodules of Hom-Jordan algebras in his paper [1].", "In [2], Agrebaoui, Benali and Makhlouf studied representations of simple Hom-Lie algebras and gave some propositions about them.", "In this paper, we also give some propositions about bimodules of Hom-Jordan algebras using similar methods.", "The paper is organised as follows: In Section , we introduce some basic definitions and prove a few lemmas which will be used in what follows.", "In Section , we mainly present three important theorems, Theorem REF , REF and REF , which concern the solvability, simplicity and semi-simplicity of multiplicative Hom-Jordan algebras respectively.", "In Section , we first give a theorem (Theorem REF ) on the construction of $n$ -dimensional simple Hom-Jordan algebras.", "Next we give our main theorem in this section, Theorem REF , which concerns the classification of simple multiplicative Hom-Jordan algebras.", "In Section , we prove a very important theorem, Theorem REF , which concerns the relationships between bimodules of Hom-Jordan algebras and modules of their induced Jordan algebras.", "Moreover, some propositions about bimodules of simple Hom-Jordan algebras are also obtained as an application of Theorem REF ." 
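The twisting construction recalled in Lemma 2.8 of the Preliminaries (a Jordan algebra together with an algebra homomorphism $\alpha$ yields a multiplicative Hom-Jordan algebra) admits a quick numerical sanity check; a minimal sketch, using the spin-factor Jordan algebra $\mathbb{R}\oplus\mathbb{R}^2$ and a rotation as the homomorphism, both our own illustrative choices not taken from the paper:

```python
# The spin-factor Jordan algebra J = R (+) R^2 has product
#   (a, u) o (b, v) = (a*b + <u, v>, a*v + b*u).
# Rotating the vector part is an algebra homomorphism (rotations are linear
# and preserve the inner product), so Lemma 2.8 says the twisted product
# mu(x, y) = alpha(x o y) yields a multiplicative Hom-Jordan algebra.
import math

def jordan(x, y):
    (a, u), (b, v) = x, y
    return (a * b + u[0] * v[0] + u[1] * v[1],
            (a * v[0] + b * u[0], a * v[1] + b * u[1]))

theta = 0.7
c, s = math.cos(theta), math.sin(theta)

def alpha(x):                      # rotation on the vector part of J
    a, u = x
    return (a, (c * u[0] - s * u[1], s * u[0] + c * u[1]))

def mu(x, y):                      # the twisted product of Lemma 2.8
    return alpha(jordan(x, y))

def close(x, y, eps=1e-9):
    return abs(x[0] - y[0]) < eps and all(abs(p - q) < eps
                                          for p, q in zip(x[1], y[1]))

x, y = (1.3, (0.2, -0.5)), (-0.4, (1.1, 0.7))
# Hom-Jordan identity: mu(alpha^2(x), mu(y, mu(x,x))) = mu(mu(alpha(x), y), alpha(mu(x,x)))
lhs = mu(alpha(alpha(x)), mu(y, mu(x, x)))
rhs = mu(mu(alpha(x), y), alpha(mu(x, x)))
assert close(lhs, rhs)
# multiplicativity: alpha(mu(x, y)) = mu(alpha(x), alpha(y))
assert close(alpha(mu(x, y)), mu(alpha(x), alpha(y)))
```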
], [ "Preliminaries", "Definition 2.1 [16] A Jordan algebra $J$ over a field $\\rm {F}$ is an algebra satisfying for any $x, y \\in J$ , $x \\circ y = y \\circ x$ ; $(x^{2} \\circ y) \\circ x = x^{2} \\circ (y \\circ x)$ .", "Definition 2.2 [14] A Hom-Jordan algebra over a field $\\rm {F}$ is a triple $(V, \\mu , \\alpha )$ consisting of a linear space $V$ , a bilinear map $\\mu : V \\times V \\rightarrow V$ which is commutative and a linear map $\\alpha : V \\rightarrow V$ satisfying for any $x, y \\in V$ , $\\mu (\\alpha ^{2}(x), \\mu (y, \\mu (x, x))) = \\mu (\\mu (\\alpha (x), y), \\alpha (\\mu (x, x))),$ where $\\alpha ^{2} = \\alpha \\circ \\alpha $ .", "Definition 2.3 A Hom-Jordan algebra $(V, \\mu , \\alpha )$ is called multiplicative if for any $x, y \\in V$ , $\\alpha (\\mu (x, y)) = \\mu (\\alpha (x), \\alpha (y))$ .", "Definition 2.4 [7] A subspace $W \\subseteq V$ is a Hom-subalgebra of $(V, \\mu , \\alpha )$ if $\\alpha (W) \\subseteq W$ and $\\mu (x, y) \\in W,\\quad \\forall x, y \\in W.$ Definition 2.5 [7] A subspace $W \\subseteq V$ is a Hom-ideal of $(V, \\mu , \\alpha )$ if $\\alpha (W) \\subseteq W$ and $\\mu (x, y) \\in W,\\quad \\forall x \\in W, y \\in V.$ Definition 2.6 [7] Let $(V, \\mu , \\alpha )$ and $(V^{^{\\prime }}, \\mu ^{^{\\prime }}, \\beta )$ be two Hom-Jordan algebras.", "A linear map $\\phi : V \\rightarrow V^{^{\\prime }}$ is said to be a homomorphism of Hom-Jordan algebras if $\\phi (\\mu (x, y)) = \\mu ^{^{\\prime }}(\\phi (x), \\phi (y))$ ; $\\phi \\circ \\alpha = \\beta \\circ \\phi $ .", "In particular, $\\phi $ is an isomorphism if $\\phi $ is bijective.", "Definition 2.7 A Hom-Jordan algebra $(V, \\mu , \\alpha )$ is called a Jordan-type Hom-Jordan algebra if there exists a Jordan algebra $(V, \\mu ^{^{\\prime }})$ such that $\\mu (x, y) = \\alpha (\\mu ^{^{\\prime }}(x, y)) = \\mu ^{^{\\prime }}(\\alpha (x), \\alpha (y)),\\quad \\forall x, y \\in V,$ and $(V, \\mu ^{^{\\prime }})$ is called the induced Jordan 
algebra.", "Lemma 2.8 Suppose that $(V, \\mu )$ is a Jordan algebra and $\\alpha : V \\rightarrow V$ is a homomorphism.", "Then $(V, \\tilde{\\mu }, \\alpha )$ is a multiplicative Hom-Jordan algebra with $\\tilde{\\mu }(x, y) = \\alpha (\\mu (x, y)),\\;\\forall x, y \\in V$ .", "Suppose that $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra and $\\alpha $ is invertible.", "Then $(V, \\mu , \\alpha )$ is a Jordan-type Hom-Jordan algebra and its induced Jordan algebra is $(V, \\mu ^{^{\\prime }})$ with $\\mu ^{^{\\prime }}(x, y) = \\alpha ^{-1}(\\mu (x, y)),\\;\\forall x, y \\in V$ .", "(1).", "We have $\\tilde{\\mu }$ is commutative since $\\mu $ is commutative.", "For all $x, y \\in V$ , we have $&\\tilde{\\mu }(\\alpha ^{2}(x), \\tilde{\\mu }(y, \\tilde{\\mu }(x,x))) = \\alpha (\\mu (\\alpha ^{2}(x), \\alpha (\\mu (y, \\alpha (\\mu (x, x))))))\\\\&= \\mu (\\alpha ^{3}(x), \\mu (\\alpha ^{2}(y), \\mu (\\alpha ^{3}(x), \\alpha ^{3}(x)))) = \\mu (\\mu (\\alpha ^{3}(x), \\alpha ^{2}(y)), \\mu (\\alpha ^{3}(x), \\alpha ^{3}(x)))\\\\&= \\alpha (\\mu (\\alpha (\\mu (\\alpha (x), y)), \\alpha ^{2}(\\mu (x, x)))) = \\tilde{\\mu }(\\tilde{\\mu }(\\alpha (x), y), \\alpha (\\tilde{\\mu }(x, x))),$ which implies that $(V, \\tilde{\\mu }, \\alpha )$ is a Hom-Jordan algebra.", "$\\alpha (\\tilde{\\mu }(x, y)) = \\alpha ^{2}(\\mu (x, y)) = \\alpha (\\mu (\\alpha (x), \\alpha (y))) = \\tilde{\\mu }(\\alpha (x), \\alpha (y)),$ which implies that $(V, \\tilde{\\mu }, \\alpha )$ is multiplicative.", "Hence, $(V, \\tilde{\\mu }, \\alpha )$ is a multiplicative Hom-Jordan algebra.", "(2).", "We have $\\mu ^{^{\\prime }}$ is commutative since $\\mu $ is commutative.", "For any $x, y \\in V$ , we have $&\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(x, x), y), x) = \\alpha ^{-1}(\\mu (\\alpha ^{-1}(\\mu (\\alpha ^{-1}(\\mu (x, x)), y)), x))\\\\&= \\alpha ^{-3}(\\mu (\\mu (\\mu (x, x), \\alpha (y)), \\alpha ^{2}(x))) = \\alpha ^{-3}(\\mu (\\alpha (\\mu (x, x)), 
\\mu (\\alpha (y), \\alpha (x))))\\\\&= \\alpha ^{-1}(\\mu (\\alpha ^{-1}(\\mu (x, x)), \\alpha ^{-1}(\\mu (y, x)))) = \\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(x, x), \\mu ^{^{\\prime }}(y, x)),$ which implies that $(V, \\mu ^{^{\\prime }})$ is a Jordan algebra.", "It's obvious that $\\mu (x, y) = \\alpha (\\mu ^{^{\\prime }}(x, y)) = \\mu ^{^{\\prime }}(\\alpha (x), \\alpha (y))$ for any $x, y \\in V$ .", "Hence, $(V, \\mu , \\alpha )$ is a Jordan-type Hom-Jordan algebra.", "Definition 2.9 Suppose that $(V, \\mu , \\alpha )$ is a Hom-Jordan algebra.", "Define its derived sequence as follow: $V^{(1)} = \\mu (V, V),\\;V^{(2)} = \\mu (V^{(1)}, V^{(1)}),\\cdots ,\\;V^{(k)} = \\mu (V^{(k - 1)}, V^{(k - 1)}),\\cdots .$ If there exists $m \\in \\mathbb {Z}^{+}$ such that $V^{(m)} = 0$ , then $(V, \\mu , \\alpha )$ is called solvable.", "Definition 2.10 Suppose that $(V, \\mu , \\alpha )$ is a Hom-Jordan algebra and $\\alpha \\ne 0$ .", "If $(V, \\mu , \\alpha )$ has no non trivial Hom-ideals and satisfies $\\mu (V, V) = V$ , then $(V, \\mu , \\alpha )$ is called simple.", "If $V = V_{1} \\oplus V_{2} \\oplus \\cdots \\oplus V_{s},$ where $V_{i}(1 \\le i \\le s)$ are simple Hom-ideals of $(V, \\mu , \\alpha )$ , then $(V, \\mu , \\alpha )$ is called semi-simple.", "Proposition 2.11 Suppose that $(V_{1}, \\tilde{\\mu _{1}}, \\alpha )$ and $(V_{2}, \\tilde{\\mu _{2}}, \\beta )$ are two Jordan-type Hom-Jordan algebras and $\\beta $ is injective.", "Then $\\phi $ is an isomorphism from $(V_{1}, \\tilde{\\mu _{1}}, \\alpha )$ to $(V_{2}, \\tilde{\\mu _{2}}, \\beta )$ if and only if $\\phi $ is an isomorphism between their induced Jordan algebras $(V_{1}, \\mu _{1})$ and $(V_{2}, \\mu _{2})$ and $\\phi $ satisfies $\\beta \\circ \\phi = \\phi \\circ \\alpha $ .", "$(\\Rightarrow )$ For any $x, y \\in V_{1}$ , we have $\\phi (\\tilde{\\mu _{1}}(x, y)) = \\tilde{\\mu _{2}}(\\phi (x), \\phi (y)),$ i.e., $\\phi (\\alpha (\\mu _{1}(x, y))) = \\beta (\\mu _{2}(\\phi (x), \\phi (y))).$ 
Note that $\\phi \\circ \\alpha = \\beta \\circ \\phi $ , we have $\\beta (\\phi (\\mu _{1}(x, y))) = \\beta (\\mu _{2}(\\phi (x), \\phi (y))).$ Since $\\beta $ is injective, we have $\\phi (\\mu _{1}(x, y)) = \\mu _{2}(\\phi (x), \\phi (y)),$ which implies that $\\phi $ is an isomorphism from $(V_{1}, \\mu _{1})$ to $(V_{2}, \\mu _{2})$ .", "$(\\Leftarrow )$ For any $x, y \\in V_{1}$ , we have $&\\phi (\\tilde{\\mu _{1}}(x, y)) = \\phi (\\alpha (\\mu _{1}(x, y))) = \\beta (\\phi (\\mu _{1}(x, y)))\\\\&= \\beta (\\mu _{2}(\\phi (x), \\phi (y))) = \\tilde{\\mu _{2}}(\\phi (x), \\phi (y)),$ note that $\\beta \\circ \\phi = \\phi \\circ \\alpha $ , we have $\\phi $ is an isomorphism from $(V_{1}, \\tilde{\\mu _{1}}, \\alpha )$ to $(V_{2}, \\tilde{\\mu _{2}}, \\beta )$ .", "Lemma 2.12 Simple multiplicative Hom-Jordan algebras with $\\alpha \\ne 0$ are Jordan-type Hom-Jordan algebras.", "Suppose that $(V, \\mu , \\alpha )$ is a simple multiplicative Hom-Jordan algebra.", "According to Lemma REF (2), we only need to show that $\\alpha $ is invertible.", "If $\\alpha $ is not invertible, then $Ker(\\alpha ) \\ne 0$ .", "It's obvious that $\\alpha (Ker(\\alpha )) \\subseteq Ker(\\alpha )$ .", "For any $x \\in Ker(\\alpha )$ , $y \\in V$ , we have $\\alpha (\\mu (x, y)) = \\mu (\\alpha (x), \\alpha (y)) = \\mu (0, \\alpha (y)) = 0,$ which implies that $\\mu (Ker(\\alpha ), V) \\subseteq Ker(\\alpha )$ .", "Then $Ker(\\alpha )$ is a non trivial Hom-ideal of $(V, \\mu , \\alpha )$ , contradicting the simplicity of $(V, \\mu , \\alpha )$ .", "Therefore, $Ker(\\alpha ) = 0$ , i.e., $\\alpha $ is invertible.", "Hence, $(V, \\mu , \\alpha )$ is a Jordan-type Hom-Jordan algebra.", "Now we obtain a corollary of Proposition REF using Lemma REF .", "Corollary 2.13 Two simple multiplicative Hom-Jordan algebras $(V_{1}, \\tilde{\\mu _{1}}, \\alpha )$ and $(V_{2}, \\tilde{\\mu _{2}}, \\beta )$ are isomorphic if and only if there exists an isomorphism $\\phi $ between their induced Jordan 
algebras $(V_{1}, \\mu _{1})$ and $(V_{2}, \\mu _{2})$ and $\\phi $ satisfies $\\beta \\circ \\phi = \\phi \\circ \\alpha $ ." ], [ "Structure of Multiplicative Hom-Jordan algebras", "In this section, we discuss necessary and sufficient conditions for multiplicative Hom-Jordan algebras to be solvable, simple and semi-simple.", "Proposition 3.1 Suppose that $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra and $I$ is a Hom-ideal of $(V, \\mu , \\alpha )$ .", "Then $(V/I, \\bar{\\mu }, \\bar{\\alpha })$ is a multiplicative Hom-Jordan algebra where $\\bar{\\mu }(\\bar{x}, \\bar{y}) = \\overline{\\mu (x, y)},\\;\\bar{\\alpha }(\\bar{x}) = \\overline{\\alpha (x)}$ for all $\\bar{x}, \\bar{y} \\in V/I$ .", "We have $\\bar{\\mu }$ is commutative since $\\mu $ is commutative.", "For any $\\bar{x}, \\bar{y} \\in V/I$ , we have $&\\bar{\\mu }(\\bar{\\alpha }^{2}(\\bar{x}), \\bar{\\mu }(\\bar{y}, \\bar{\\mu }(\\bar{x}, \\bar{x}))) = \\overline{\\mu (\\alpha ^{2}(x), \\mu (y, \\mu (x, x)))}\\\\&= \\overline{\\mu (\\mu (\\alpha (x), y), \\alpha (\\mu (x, x)))} = \\bar{\\mu }(\\bar{\\mu }(\\bar{\\alpha }(\\bar{x}), \\bar{y}), \\bar{\\alpha }(\\bar{\\mu }(\\bar{x}, \\bar{x}))).$ Hence, $(V/I, \\bar{\\mu }, \\bar{\\alpha })$ is a Hom-Jordan algebra.", "$\\bar{\\alpha }(\\bar{\\mu }(\\bar{x}, \\bar{y})) = \\overline{\\alpha (\\mu (x, y))} = \\overline{\\mu (\\alpha (x), \\alpha (y))} = \\bar{\\mu }(\\bar{\\alpha }(\\bar{x}), \\bar{\\alpha }(\\bar{y})),$ which implies that $(V/I, \\bar{\\mu }, \\bar{\\alpha })$ is multiplicative.", "Corollary 3.2 Suppose that $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra and satisfies $\\alpha ^{2} = \\alpha $ .", "Then $(V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ is a Jordan-type Hom-Jordan algebra.", "If $\\alpha $ is invertible, $Ker(\\alpha ) = 0$ .", "According to Lemma REF (2), the conclusion is valid.", "If $\\alpha $ isn't invertible, according to the proof of Lemma REF , we have $Ker(\\alpha )$ is a Hom-ideal of $(V, 
\\mu , \\alpha )$ .", "Then we have $(V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ is a multiplicative Hom-Jordan algebra according to Proposition REF .", "Now we show that $\\bar{\\alpha }$ is invertible on $V/Ker(\\alpha )$ .", "Assume that $\\bar{x} \\in Ker(\\bar{\\alpha })$ .", "Then we have $\\overline{\\alpha (x)} = \\bar{\\alpha }(\\bar{x}) = \\bar{0}$ , i.e., $\\alpha (x) \\in Ker(\\alpha )$ .", "Note that $\\alpha ^{2} = \\alpha $ , we have $\\alpha (x) = \\alpha ^{2}(x) = \\alpha (\\alpha (x)) = 0,$ which implies that $x \\in Ker(\\alpha )$ , i.e., $\\bar{x} = \\bar{0}$ .", "Hence, $\\bar{\\alpha }$ is invertible.", "According to Lemma REF (2), $(V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ is a Jordan-type Hom-Jordan algebra.", "Theorem 3.3 Suppose that $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra and $\\alpha $ is invertible.", "Then $(V, \\mu , \\alpha )$ is solvable if and only if its induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ is solvable.", "Denote derived series of $(V, \\mu ^{^{\\prime }})$ and $(V, \\mu , \\alpha )$ by $V^{(i)}$ , $\\tilde{V}^{(i)}(i = 1, 2, \\cdots )$ respectively.", "Suppose that $(V, \\mu ^{^{\\prime }})$ is solvable.", "Then there exists $m \\in \\mathbb {Z}^{+}$ such that $V^{(m)} = 0$ .", "Note that $\\tilde{V}^{(1)} = \\mu (V, V) = \\alpha (\\mu ^{^{\\prime }}(V, V)) = \\alpha (V^{(1)}),$ $\\tilde{V}^{(2)} = \\mu (\\tilde{V}^{(1)}, \\tilde{V}^{(1)}) = \\mu (\\alpha (V^{(1)}), \\alpha (V^{(1)})) = \\alpha ^{2}(\\mu ^{^{\\prime }}(V^{(1)}, V^{(1)})) = \\alpha ^{2}(V^{(2)}),$ we have $\\tilde{V}^{(m)} = \\alpha ^{m}(V^{(m)})$ by induction.", "Hence, $\\tilde{V}^{(m)} = 0$ , i.e., $(V, \\mu , \\alpha )$ is solvable.", "On the other hand, assume that $(V, \\mu , \\alpha )$ is solvable.", "Then there exists $m \\in \\mathbb {Z}^{+}$ such that $\\tilde{V}^{(m)} = 0$ .", "We have $\\tilde{V}^{(m)} = \\alpha ^{m}(V^{(m)})$ by the above proof.", "Hence we have $V^{(m)} = 0$ since $\\alpha $ is 
invertible.", "Therefore, $(V, \\mu ^{^{\\prime }})$ is solvable.", "Lemma 3.4 Suppose that an algebra $\\mathcal {A}$ over $\\rm {F}$ can be decomposed into the unique direct sum of simple ideals $\\mathcal {A} = \\oplus ^{s}_{i = 1}\\mathcal {A}_{i}$ where $\\mathcal {A}_{i}$ aren't isomorphic to each other and $\\alpha \\in Aut(\\mathcal {A})$ .", "Then $\\alpha (\\mathcal {A}_{i}) = \\mathcal {A}_{i}(1 \\le i \\le s)$ .", "For any $1 \\le i \\le s$ , we have $\\alpha (\\mathcal {A}_{i})\\mathcal {A} = \\alpha (\\mathcal {A}_{i})\\alpha (\\mathcal {A}) = \\alpha (\\mathcal {A}_{i}\\mathcal {A}) \\subseteq \\alpha (\\mathcal {A}_{i})$ since $\\mathcal {A}_{i}$ are ideals of $\\mathcal {A}$ .", "Similarly, we have $\\mathcal {A}\\alpha (\\mathcal {A}_{i}) \\subseteq \\alpha (\\mathcal {A}_{i})$ .", "Hence, $\\alpha (\\mathcal {A}_{i})$ are also ideals of $\\mathcal {A}$ .", "Moreover, $\\alpha (\\mathcal {A}_{i})$ are simple since $\\mathcal {A}_{i}$ are simple.", "Note that $\\mathcal {A} = \\oplus ^{s}_{i = 1}\\mathcal {A}_{i}$ , we have $\\mathcal {A} = \\alpha (\\mathcal {A}) = \\alpha (\\oplus ^{s}_{i = 1}\\mathcal {A}_{i}) = \\oplus ^{s}_{i = 1}\\alpha (\\mathcal {A}_{i}).$ Note that the decomposition is unique, there exists $1 \\le j \\le s$ such that $\\alpha (\\mathcal {A}_{i}) = \\mathcal {A}_{j}$ for any $1 \\le i \\le s$ .", "If $j \\ne i$ , then we have $\\mathcal {A}_{i} \\cong \\alpha (\\mathcal {A}_{i}) = \\mathcal {A}_{j},$ contradicting with the assumption that $\\mathcal {A}_{i}$ aren't isomorphic to each other.", "Hence, we have $\\alpha (\\mathcal {A}_{i}) = \\mathcal {A}_{i}(1 \\le i \\le s)$ for any $s \\in \\mathbb {N}$ .", "Theorem 3.5 Suppose that $(V, \\mu , \\alpha )$ is a simple multiplicative Hom-Jordan algebra.", "Then its induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ is semi-simple.", "Moreover, $(V, \\mu ^{^{\\prime }})$ can be decomposed into direct sum of isomorphic simple ideals, in addition, $\\alpha $ acts simply 
transitively on simple ideals of the induced Jordan algebra.", "Suppose that $(V, \\mu ^{^{\\prime }})$ is a simple Jordan algebra and $\\alpha \\in Aut(V)$ .", "Define $\\mu : V \\times V \\rightarrow V$ by $\\mu (x, y) = \\alpha (\\mu ^{^{\\prime }}(x, y)),\\quad \\forall x, y \\in V,$ then $(V, \\mu , \\alpha )$ is a simple multiplicative Hom-Jordan algebra.", "(1) According to the proof of Lemma REF (2) and Lemma REF , $\\alpha $ is an automorphism both on $(V, \\mu , \\alpha )$ and $(V, \\mu ^{^{\\prime }})$ .", "Suppose that $V_{1}$ is the maximal solvable ideal of $(V, \\mu ^{^{\\prime }})$ .", "Then there exists $m \\in \\mathbb {Z^{+}}$ such that $V_{1}^{(m)} = 0$ .", "Since $\\mu ^{^{\\prime }}(\\alpha (V_{1}), V) = \\mu ^{^{\\prime }}(\\alpha (V_{1}), \\alpha (V)) = \\alpha (\\mu ^{^{\\prime }}(V_{1}, V)) \\subseteq \\alpha (V_{1}),$ $(\\alpha (V_{1}))^{(m)} = \\alpha (V_{1}^{(m)}) = 0,$ $\\alpha (V_{1})$ is also a solvable ideal of $(V, \\mu ^{^{\\prime }})$ .", "Then we have $\\alpha (V_{1}) \\subseteq V_{1}$ .", "Moreover, $\\mu (V_{1}, V) = \\alpha (\\mu ^{^{\\prime }}(V_{1}, V)) \\subseteq \\alpha (V_{1}) \\subseteq V_{1},$ so $V_{1}$ is a Hom-ideal of $(V, \\mu , \\alpha )$ .", "Then we have $V_{1} = 0$ or $V_{1} = V$ since $(V, \\mu , \\alpha )$ is simple.", "If $V_{1} = V$ , then according to the proof of Theorem REF , we have $\\tilde{V}^{(m)} = \\alpha ^{m}(V^{(m)}) = \\alpha ^{m}(V_{1}^{(m)}) = 0;$ on the other hand, $V = \\mu (V, V)$ since $(V, \\mu , \\alpha )$ is simple, so $\\tilde{V}^{(m)} = V$ , a contradiction.", "Hence, $V_{1} = 0$ .", "Therefore, $(V, \\mu ^{^{\\prime }})$ is semi-simple.", "Since $(V, \\mu ^{^{\\prime }})$ is semi-simple, we have $V = \\oplus ^s_{i = 1}V_{i}$ , where $V_{i}(1 \\le i \\le s)$ are simple ideals of $(V, \\mu ^{^{\\prime }})$ .", "Because there may be isomorphic Jordan algebras among $V_{1}, V_{2}, \\cdots , V_{s}$ , we rearrange the order as follows: $V = V_{11} \\oplus V_{12} \\oplus
\\cdots \\oplus V_{1m_{1}} \\oplus V_{21} \\oplus V_{22} \\oplus \\cdots \\oplus V_{2m_{2}} \\oplus \\cdots \\oplus V_{t1} \\oplus V_{t2} \\oplus \\cdots \\oplus V_{tm_{t}},$ where $(V_{ij}, \\mu ^{^{\\prime }}) \\cong (V_{ik}, \\mu ^{^{\\prime }}),\\quad 1 \\le j, k \\le m_{i}, i = 1, 2, \\cdots , t.$ According to Lemma REF , we have $\\alpha (V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}) = V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}},$ and since $&\\mu (V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}, V) = \\alpha (\\mu ^{^{\\prime }}(V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}, V))\\\\&\\subseteq \\alpha (V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}) = V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}},$ the subspaces $V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}$ are Hom-ideals of $(V, \\mu , \\alpha )$ .", "Since $(V, \\mu , \\alpha )$ is simple, we have $V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}} = 0$ or $V$ .", "So all but one $V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}$ must be 0.", "Without loss of generality, we can assume $V = V_{11} \\oplus V_{12} \\oplus \\cdots \\oplus V_{1m_{1}}.$ When $m_{1} = 1$ , $(V, \\mu ^{^{\\prime }})$ is simple.", "When $m_{1} > 1$ , if $\\alpha (V_{1p}) = V_{1p}(1 \\le p \\le m_{1}),$ then $V_{1p}$ is a nontrivial Hom-ideal of $(V, \\mu , \\alpha )$ , which contradicts the fact that $(V, \\mu , \\alpha )$ is simple.", "Hence, $\\alpha (V_{1p}) = V_{1l}(1 \\le l \\ne p \\le m_{1}).$ In addition, it is easy to show that $V_{11} \\oplus \\alpha (V_{11}) \\oplus \\cdots \\oplus \\alpha ^{m_{1} - 1}(V_{11})$ is a Hom-ideal of $(V, \\mu , \\alpha )$ .", "Therefore, $V = V_{11} \\oplus \\alpha (V_{11}) \\oplus \\cdots \\oplus \\alpha ^{m_{1} - 1}(V_{11}).$ That is, $\\alpha $ acts simply transitively on the simple ideals of the induced Jordan algebra.", "(2) Suppose that $(V, \\mu ^{^{\\prime }})$ is a simple Jordan algebra.", "According to Lemma
REF (1), $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra.", "Suppose that $V_{1}$ is a nontrivial Hom-ideal of $(V, \\mu , \\alpha )$ .", "Then we have $\\mu ^{^{\\prime }}(V_{1}, V) = \\alpha ^{-1}(\\mu (V_{1}, V)) \\subseteq \\alpha ^{-1}(V_{1}) = V_{1}.$ So $V_{1}$ is a nontrivial ideal of $(V, \\mu ^{^{\\prime }})$ , a contradiction.", "So $(V, \\mu , \\alpha )$ has no nontrivial Hom-ideal.", "If $\\mu (V, V) \\subsetneqq V$ , then $\\mu ^{^{\\prime }}(V, V) = \\alpha ^{-1}(\\mu (V, V)) \\subsetneqq \\alpha ^{-1}(V) = V,$ contradicting the simplicity of the Jordan algebra $(V, \\mu ^{^{\\prime }})$ .", "Hence, $(V, \\mu , \\alpha )$ is simple.", "Theorem 3.6 Suppose that $(V, \\mu , \\alpha )$ is a semi-simple multiplicative Hom-Jordan algebra.", "Then $(V, \\mu , \\alpha )$ is a Jordan-type Hom-Jordan algebra and its induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ is also semi-simple.", "Suppose that $(V, \\mu ^{^{\\prime }})$ is a semi-simple Jordan algebra and has the decomposition $V = \\oplus ^{s}_{i = 1}V_{i}$ where $V_{i}(1 \\le i \\le s)$ are simple ideals of $(V, \\mu ^{^{\\prime }})$ , and $\\alpha \\in Aut(V)$ satisfies $\\alpha (V_{i}) = V_{i}(1 \\le i \\le s)$ .", "Then $(V, \\mu , \\alpha )$ , with $\\mu (x, y) = \\alpha (\\mu ^{^{\\prime }}(x, y))$ , is a semi-simple multiplicative Hom-Jordan algebra and its decomposition is unique.", "(1) According to the assumption, $(V, \\mu , \\alpha )$ has the decomposition $V = \\oplus ^{s}_{i = 1}V_{i}$ where $V_{i}(1 \\le i \\le s)$ are simple Hom-ideals of $(V, \\mu , \\alpha )$ .", "Then $(V_{i}, \\mu , \\alpha |_{V_{i}})(1 \\le i \\le s)$ are simple Hom-Jordan algebras.", "According to the proof of Lemma REF , $\\alpha |_{V_{i}}(1 \\le i \\le s)$ are invertible.", "Therefore, $\\alpha $ is invertible on $V$ .", "According to Lemma REF (2), $(V, \\mu , \\alpha )$ is a Jordan-type Hom-Jordan algebra and its induced Jordan algebra is $(V, \\mu ^{^{\\prime }})$ where $\\mu ^{^{\\prime }}(x, y) = \\alpha ^{-1}(\\mu (x, y))$ for all $x, y \\in V$ .",
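As an aside, the untwisting relation $\\mu ^{^{\\prime }} = \\alpha ^{-1} \\circ \\mu $ (and the twist $\\mu = \\alpha \\circ \\mu ^{^{\\prime }}$ ) used here is easy to sanity-check numerically. The following Python sketch is purely illustrative and not part of any proof: the componentwise product on $\\mathbb {C}^{3}$ stands in for a semi-simple induced Jordan algebra with three simple ideals $V_{i} = \\mathbb {C}e_{i}$ , a cyclic permutation stands in for $\\alpha $ , and every name in the code is our own assumption.

```python
import numpy as np

# Induced (semi-simple) Jordan product: componentwise multiplication on C^3,
# i.e. three 1-dimensional simple ideals V_i = C e_i (illustrative choice).
n = 3
E = np.eye(n)

def mu_prime(x, y):
    return x * y  # e_i * e_i = e_i, mixed products vanish

# An automorphism of mu_prime that permutes the simple ideals cyclically.
alpha = np.roll(E, 1, axis=0)  # e1 -> e2 -> e3 -> e1

def mu(x, y):
    """Twisted product mu = alpha o mu' (the twisting direction of the text)."""
    return alpha @ mu_prime(x, y)

# Untwisting by alpha^{-1} recovers the induced product.
x, y = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(np.linalg.inv(alpha) @ mu(x, y), mu_prime(x, y))

# With a cyclic alpha, no single C e_i is a Hom-ideal of (V, mu, alpha):
# mu(e1, e1) = alpha(e1) = e2 already leaves C e1.
assert np.allclose(mu(E[0], E[0]), E[1])
```

The last assertion mirrors the simple-transitivity phenomenon described above: the Hom-ideal generated by $e_{1}$ is all of $V = \\mathbb {C}e_{1} \\oplus \\alpha (\\mathbb {C}e_{1}) \\oplus \\alpha ^{2}(\\mathbb {C}e_{1})$ .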
"According to the proof of Theorem REF (2), $V_{i}(i = 1, 2, \\cdots , s)$ are ideals of $(V, \\mu ^{^{\\prime }})$ .", "Moreover, $(V_{i}, \\mu ^{^{\\prime }})$ are the induced Jordan algebras of the simple Hom-Jordan algebras $(V_{i}, \\mu , \\alpha |_{V_{i}})$ respectively.", "According to Theorem REF (1), $(V_{i}, \\mu ^{^{\\prime }})$ are semi-simple Jordan algebras and can be decomposed into direct sums of isomorphic simple ideals $V_{i} = V_{i1} \\oplus V_{i2} \\oplus \\cdots \\oplus V_{im_{i}}$ .", "Therefore, $(V, \\mu ^{^{\\prime }})$ is semi-simple and has the decomposition into a direct sum of simple ideals $V = V_{11} \\oplus V_{12} \\oplus \\cdots \\oplus V_{1m_{1}} \\oplus V_{21} \\oplus V_{22} \\oplus \\cdots \\oplus V_{2m_{2}} \\oplus \\cdots \\oplus V_{s1} \\oplus V_{s2} \\oplus \\cdots \\oplus V_{sm_{s}}.$ (2) According to Lemma REF (1), $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra.", "For all $1 \\le i \\le s$ , we have $\\mu (V_{i}, V) = \\alpha (\\mu ^{^{\\prime }}(V_{i}, V)) \\subseteq \\alpha (V_{i}) = V_{i},$ and since $\\alpha (V_{i}) = V_{i}$ , the $V_{i}$ are Hom-ideals of $(V, \\mu , \\alpha )$ .", "If some $V_{i0} \\subsetneqq V_{i}$ is a nontrivial Hom-ideal of $(V_{i}, \\mu , \\alpha |_{V_{i}})$ , then we have $\\mu (V_{i0}, V) = \\mu (V_{i0}, V_{1} \\oplus V_{2} \\oplus \\cdots \\oplus V_{s}) = \\mu (V_{i0}, V_{i}) \\subseteq V_{i0},$ so $V_{i0}$ is a nontrivial Hom-ideal of $(V, \\mu , \\alpha )$ .", "According to the proof of Theorem REF (2), $V_{i0}$ is also a nontrivial ideal of $(V, \\mu ^{^{\\prime }})$ .", "Hence, $V_{i0}$ is also a nontrivial ideal of $(V_{i}, \\mu ^{^{\\prime }})$ , a contradiction.", "Hence, $V_{i}(i = 1, 2, \\cdots , s)$ are simple Hom-ideals of $(V, \\mu , \\alpha )$ .", "Therefore, $(V, \\mu , \\alpha )$ is semi-simple and its decomposition is unique.", "Proposition 3.7 Suppose that $(V, \\mu , \\alpha )$ is a multiplicative Hom-Jordan algebra satisfying $\\alpha ^{2} = \\alpha $
and $\\mu (Im(\\alpha ), V) \\subseteq Im(\\alpha )$ .", "Then $(V, \\mu , \\alpha )$ is isomorphic to the direct sum of Hom-Jordan algebras $V \\cong (V/Ker(\\alpha )) \\oplus Ker(\\alpha ).$ Set $V_{1} = (V/Ker(\\alpha )) \\oplus Ker(\\alpha )$ .", "According to Corollary REF , $(V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ is a Hom-Jordan algebra.", "It is obvious that $(Ker(\\alpha ), \\mu , \\alpha |_{Ker(\\alpha )})$ is a Hom-Jordan algebra.", "Define $\\mu _{1} : V_{1} \\times V_{1} \\rightarrow V_{1}$ and $\\alpha _{1} : V_{1} \\rightarrow V_{1}$ by $\\mu _{1}((\\bar{x}, k_{1}), (\\bar{y}, k_{2})) = (\\overline{\\mu (x, y)}, \\mu (k_{1}, k_{2})),$ $\\alpha _{1}((\\bar{x}, k_{1})) = (\\overline{\\alpha (x)}, 0).$ Then $(V_{1}, \\mu _{1}, \\alpha _{1})$ is a Hom-Jordan algebra and $V_{1} = (V/Ker(\\alpha )) \\oplus Ker(\\alpha )$ is a direct sum of ideals.", "Now we show that $(V, \\mu , \\alpha ) \\cong (V_{1}, \\mu _{1}, \\alpha _{1})$ .", "According to the assumption, $Im(\\alpha )$ is a Hom-ideal of $(V, \\mu , \\alpha )$ .", "For any $x \\in Ker(\\alpha ) \\cap Im(\\alpha )$ , there exists $y \\in V$ such that $x = \\alpha (y)$ .", "Then we have $0 = \\alpha (x) = \\alpha ^{2}(y) = \\alpha (y) = x,$ so $Ker(\\alpha ) \\cap Im(\\alpha ) = \\lbrace 0\\rbrace $ .", "Moreover, for any $x \\in V$ , we have $x = x - \\alpha (x) + \\alpha (x)$ where $x - \\alpha (x) \\in Ker(\\alpha )$ and $\\alpha (x) \\in Im(\\alpha )$ .", "Therefore, $V = Ker(\\alpha ) \\oplus Im(\\alpha )$ .", "Obviously, $(Im(\\alpha ), \\mu , \\alpha |_{Im(\\alpha )})$ is a Hom-Jordan algebra.", "Next we show that $(Im(\\alpha ), \\mu , \\alpha |_{Im(\\alpha )}) \\cong (V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ .", "Define $\\varphi : V/Ker(\\alpha ) \\rightarrow Im(\\alpha )$ by $\\varphi (\\bar{x}) = \\alpha (x)$ for all $\\bar{x} \\in V/Ker(\\alpha )$ .", "Obviously, $\\varphi $ is bijective.", "For all $\\bar{x}, \\bar{y} \\in V/Ker(\\alpha )$ , we
have $\\varphi (\\bar{\\mu }(\\bar{x}, \\bar{y})) = \\varphi (\\overline{\\mu (x, y)}) = \\alpha (\\mu (x, y)) = \\mu (\\alpha (x), \\alpha (y)) = \\mu (\\varphi (\\bar{x}), \\varphi (\\bar{y})),$ $\\varphi (\\bar{\\alpha }(\\bar{x})) = \\varphi (\\overline{\\alpha (x)}) = \\alpha ^{2}(x) = \\alpha (\\varphi (\\bar{x})),$ which implies that $\\varphi \\circ \\bar{\\alpha } = \\alpha \\circ \\varphi $ .", "Therefore, $\\varphi $ is an isomorphism, i.e., $(Im(\\alpha ), \\mu , \\alpha |_{Im(\\alpha )}) \\cong (V/Ker(\\alpha ), \\bar{\\mu }, \\bar{\\alpha })$ .", "Therefore, $V = Ker(\\alpha ) \\oplus Im(\\alpha ) \\cong (V/Ker(\\alpha )) \\oplus Ker(\\alpha )$ ."], ["Classification of simple multiplicative Hom-Jordan algebras", "In this section, we give a classification theorem for simple multiplicative Hom-Jordan algebras.", "First, we give a construction of $n$ -dimensional simple Hom-Jordan algebras.", "Theorem 4.1 There exist $n$ -dimensional simple Hom-Jordan algebras for any $n \\in \\mathbb {Z}^{+}$ .", "When $n = 1$ , let $V = \\mathbb {R}^{+}$ over $\\mathbb {R}$ , i.e., $\\mu : V \\times V \\rightarrow V,\\;\\mu (a, b) = \\frac{1}{2}(ab + ba)$ for all $a, b \\in \\mathbb {R}$ .", "It is obvious that $\\rm {dim}\\it {(V)} = 1$ .", "Take $\\alpha = k\\;id_{\\mathbb {R}}$ for $k \\in \\mathbb {R}$ .", "Then $(V, \\mu , \\alpha )$ is a 1-dimensional Hom-Jordan algebra.", "Obviously, $(V, \\mu , \\alpha )$ is simple since $(V, \\mu , \\alpha )$ has no nontrivial Hom-ideal and $\\mu (V, V) = V$ .", "When $n = 2$ , let $\\lbrace e_{0}, e_{1}\\rbrace $ be a basis of a 2-dimensional vector space $V$ over $\\mathbb {C}$ .", "Define a bilinear symmetric binary operation $\\mu : V \\times V \\rightarrow V$ : $\\mu (e_{0}, e_{0}) = e_{0},\\;\\mu (e_{1}, e_{1}) = e_{1},\\;\\mu (e_{0}, e_{1}) = \\mu (e_{1}, e_{0}) = e_{0} + e_{1}.$ Obviously, $\\mu (V, V) = V$ .", "Take $\\alpha \\in End(V)$ where $\\alpha (e_{0}) = pe_{0}, \\alpha (e_{1}) = qe_{1},\\;p, q \\in
\\mathbb {C}.$ One can verify that $(V, \\mu , \\alpha )$ is a 2-dimensional Hom-Jordan algebra.", "Next we show that $(V, \\mu , \\alpha )$ is simple.", "Suppose that $I$ is a nontrivial Hom-ideal of $(V, \\mu , \\alpha )$ .", "Then there exists $0 \\ne a = t_{1}e_{0} + t_{2}e_{1} \\in I$ , where $t_{1}, t_{2} \\in \\mathbb {C}$ .", "Then we have $(t_{1} + t_{2})e_{0} + t_{2}e_{1} = \\mu (a, e_{0}) \\in I$ , i.e., $\\frac{t_{1} + t_{2}}{t_{1}} = \\frac{t_{2}}{t_{2}}$ ; $t_{1}e_{0} + (t_{1} + t_{2})e_{1} = \\mu (a, e_{1}) \\in I$ , i.e., $\\frac{t_{2}}{t_{1} + t_{2}} = \\frac{t_{1}}{t_{1}}$ since $\\rm {dim}\\it {(I)} = 1$ .", "So $t_{1} + t_{2} = t_{1}$ , $t_{1} + t_{2} = t_{2}$ , which imply that $t_{1} = 0, t_{2} = 0$ .", "Hence, $I = 0$ , a contradiction.", "Therefore, $(V, \\mu , \\alpha )$ is a 2-dimensional simple Hom-Jordan algebra.", "When $n \\ge 3$ , let $\\lbrace a_{\\bar{i}} | i \\in \\mathbb {Z}_{n}\\rbrace $ be a basis of an $n$ -dimensional vector space $V$ over $\\mathbb {C}$ .", "Define a bilinear symmetric binary operation $\\mu : V \\times V \\rightarrow V$ : $\\mu (a_{\\bar{i}}, a_{\\overline{i + 1}}) = \\mu (a_{\\overline{i + 1}}, a_{\\bar{i}}) = a_{\\overline{i + 2}},$ all other products of basis elements being zero.", "Then for any linear map $\\alpha \\in End(V)$ , $(V, \\mu , \\alpha )$ is a Hom-Jordan algebra.", "Next, we prove that $(V, \\mu , \\alpha )$ is simple.", "Clearly, we have $\\mu (V, V) = V$ .", "Let $W$ be a nonzero Hom-ideal of $(V, \\mu , \\alpha )$ .", "Then there exists a nonzero element $x = \\sum ^{n - 1}_{i = 0}x_{i}a_{\\bar{i}} \\in W$ .", "Suppose that $x_{t} \\ne 0$ .", "Since $x_{t}a_{\\overline{t + 2}} = \\mu (a_{\\bar{t}}, \\mu (a_{\\overline{t - 1}}, x)) \\in W$ , we have $a_{\\overline{t + 2}} \\in W$ ; since $W$ is an ideal and $\\mu (a_{\\bar{j}}, a_{\\overline{j + 1}}) = a_{\\overline{j + 2}}$ , repeated multiplication by basis elements gives in particular $a_{\\overline{n - 2}}, a_{\\overline{n - 1}} \\in W$ .", "So $a_{\\bar{0}} = \\mu (a_{\\overline{n - 2}}, a_{\\overline{n - 1}}) \\in W$ , $a_{\\bar{1}} = \\mu (a_{\\overline{n - 1}}, a_{\\bar{0}}) \\in W$ .", "Hence, we have all
$a_{\\bar{i}}(i \\in \\mathbb {Z}_{n}) \\in W$ .", "Therefore, $W = V$ and $(V, \\mu , \\alpha )$ is simple.", "According to Theorem REF (1) and Corollary REF , the dimension of a simple multiplicative Hom-Jordan algebra can only be an integer multiple of the dimension of a simple Jordan algebra.", "By Theorem REF (1) and Corollary REF , in order to classify simple multiplicative Hom-Jordan algebras, it suffices to classify automorphisms of their induced Jordan algebras, in particular, automorphisms of semi-simple Jordan algebras that are direct sums of finitely many isomorphic simple ideals.", "Theorem 4.2 Let $J$ be a semi-simple Jordan algebra whose $n$ simple ideals are mutually isomorphic, and suppose that $J$ can be generated by its automorphism $\\alpha $ (or $\\beta $ ) together with any simple ideal.", "Set $\\alpha _{n} = \\alpha ^{n}$ and $\\beta _{n} = \\beta ^{n}$ ; then $\\alpha _{n}$ (or $\\beta _{n}$ ) leaves each simple ideal of $J$ invariant.", "Then there exists an automorphism $\\varphi $ on $J$ satisfying $\\varphi \\circ \\alpha = \\beta \\circ \\varphi $ if and only if there exists an automorphism $\\phi $ on the simple ideal of $J$ satisfying $\\phi \\circ \\alpha ^{n} = \\beta ^{n} \\circ \\phi $ .", "Let $J_{1}$ be a simple ideal of $J$ .", "Since $\\alpha _{n}$ (or $\\beta _{n}$ ) leaves each simple ideal of $J$ invariant, we have $\\alpha ^{n}(J_{1}) = J_{1}$ (or $\\beta ^{n}(J_{1}) = J_{1}$ ), and since $J$ can be generated by its automorphism $\\alpha $ (or $\\beta $ ) together with any simple ideal, we have $J = J_{1} \\oplus \\alpha (J_{1}) \\oplus \\cdots \\oplus \\alpha ^{n - 1}(J_{1})$ (or $J = J_{1} \\oplus \\beta (J_{1}) \\oplus \\cdots \\oplus \\beta ^{n - 1}(J_{1})$ ).", "Choose a basis $x = (x_{1}, x_{2}, \\cdots , x_{m})$ of $J_{1}$ ; then $x^{^{\\prime }} = (x, \\alpha (x), \\alpha ^{2}(x), \\cdots , \\alpha ^{n - 1}(x)),\\quad x^{^{\\prime \\prime }} = (x, \\beta (x), \\beta ^{2}(x), \\cdots , \\beta ^{n - 1}(x))$ are both bases of $J$ .", "Let $\\alpha (x^{^{\\prime }})
= x^{^{\\prime }}A$ and $\\beta (x^{^{\\prime \\prime }}) = x^{^{\\prime \\prime }}B$ ; then $A = \\begin{pmatrix}0& & & &A_{1}\\\\I&0& & & \\\\&I&0& & \\\\& &\\ddots &\\ddots & \\\\& & &I&0\\end{pmatrix},B = \\begin{pmatrix}0& & & &B_{1}\\\\I&0& & & \\\\&I&0& & \\\\& &\\ddots &\\ddots & \\\\& & &I&0\\end{pmatrix},$ where $\\alpha _{n}(x) = xA_{1}$ , $\\beta _{n}(x) = xB_{1}$ .", "Suppose first that there exists an automorphism $\\phi $ on $J_{1}$ such that $\\phi \\circ \\alpha _{n} = \\beta _{n} \\circ \\phi $ , and write $\\phi (x) = xM$ ; then $MA_{1} = B_{1}M$ .", "Define $\\varphi (x^{^{\\prime }}) = x^{^{\\prime \\prime }}diag(M, \\cdots , M)$ .", "Then we have $\\varphi (x^{^{\\prime }}) = (\\phi (x), \\beta (\\phi (x)), \\cdots , \\beta ^{n - 1}(\\phi (x)))$ .", "It is easy to verify that $\\varphi $ is an automorphism since $\\phi $ is an automorphism.", "Moreover, $\\varphi \\circ \\alpha (x^{^{\\prime }}) = x^{^{\\prime \\prime }}\\begin{pmatrix}M& & & & \\\\&M& & & \\\\& &\\ddots & & \\\\& & &\\ddots & \\\\& & & &M\\end{pmatrix}\\begin{pmatrix}0& & & &A_{1}\\\\I&0& & & \\\\&I&0& & \\\\& &\\ddots &\\ddots & \\\\& & &I&0\\end{pmatrix}$ $= x^{^{\\prime \\prime }}\\begin{pmatrix}0&\\cdots &\\cdots &0&MA_{1}\\\\M&0& & & \\\\&M&0& & \\\\& & &\\ddots & \\\\& & &M&0\\end{pmatrix},$ $\\beta \\circ \\varphi (x^{^{\\prime }}) = x^{^{\\prime \\prime }}\\begin{pmatrix}0& & & &B_{1}\\\\I&0& & & \\\\&I&0& & \\\\& &\\ddots &\\ddots & \\\\& & &I&0\\end{pmatrix}\\begin{pmatrix}M& & & & \\\\&M& & & \\\\& &\\ddots & & \\\\& & &\\ddots & \\\\& & & &M\\end{pmatrix}$ $= x^{^{\\prime \\prime }}\\begin{pmatrix}0&\\cdots &\\cdots &0&B_{1}M\\\\M&0& & & \\\\&M&0& & \\\\& & &\\ddots & \\\\& & &M&0\\end{pmatrix}.$ Since $MA_{1} = B_{1}M$ , we have $\\varphi \\circ \\alpha = \\beta \\circ \\varphi $ .", "Now suppose that there exists an automorphism $\\varphi $ on $J$ satisfying $\\varphi \\circ \\alpha = \\beta \\circ \\varphi $ .", "According to the proof of Lemma REF , there exists $0 \\le i \\le n
- 1$ such that $\\varphi (J_{1}) = \\beta ^{i}(J_{1})$ .", "Then $\\varphi \\circ \\alpha ^{j}(J_{1}) = \\beta ^{i + j}(J_{1})(0 \\le i, j \\le n - 1)$ .", "Let $\\varphi (x) = \\beta ^{i}(x)M_{1}$ ; then $\\varphi (x^{^{\\prime }}) = x^{^{\\prime \\prime }}M$ , where $M = \\begin{pmatrix}M_{1}& & & & \\\\&M_{1}& & & \\\\& &\\ddots & & \\\\& & &\\ddots & \\\\& & & &M_{1}\\end{pmatrix}(i = 0),$ and, for $1 \\le i \\le n - 1$ , $M$ is the block matrix whose blocks $B_{1}M_{1}$ occupy rows $1, \\cdots , i$ and columns $n - i + 1, \\cdots , n$ (block-diagonally), whose blocks $M_{1}$ occupy rows $i + 1, \\cdots , n$ and columns $1, \\cdots , n - i$ (block-diagonally), and whose remaining blocks are zero.", "Define $\\phi (x) = xM_{1}$ ; then we have $\\varphi (x^{^{\\prime }}) =\\left\\lbrace \\begin{aligned}(\\phi (x), \\beta (\\phi (x)), \\cdots , \\beta ^{n - 1}(\\phi (x))),(i = 0)\\\\(\\beta ^{i}(\\phi (x)), \\cdots , \\beta ^{n - 1}(\\phi (x)), \\beta ^{n}(\\phi (x)), \\cdots , \\beta ^{n + i - 1}(\\phi (x))),(1 \\le i \\le n - 1)\\end{aligned}\\right.$ Therefore, $\\phi $ is an automorphism on $J_{1}$ since $\\varphi $ is an automorphism on $J$ .", "Moreover, we have $\\phi \\circ \\alpha ^{n} = \\beta ^{n} \\circ \\phi $ since $\\varphi \\circ \\alpha = \\beta \\circ \\varphi $ .", "By Theorem REF , it is obvious that two simple multiplicative Hom-Jordan algebras $(V_{1}, \\mu _{1}, \\alpha )$ and $(V_{2}, \\mu _{2}, \\beta )$ are isomorphic if and only if the automorphisms $\\alpha ^{n}$ and $\\beta ^{n}$ of the simple ideals (as simple Jordan algebras) of the corresponding induced Jordan algebras are conjugate.", "Combining Corollary REF with Theorems REF (1) and REF , we get the following theorem.", "Theorem 4.3 All finite dimensional simple multiplicative Hom-Jordan algebras can be denoted as $(X, n, \\Gamma _{\\alpha })$ , where $X$ represents the type of the simple ideal (as the simple Jordan algebra) of the corresponding induced Jordan algebras, $n$ represents
the number of simple ideals, and $\\Gamma _{\\alpha }$ represents the conjugacy class of the automorphism $\\alpha ^{n}$ on the simple Jordan algebra $X$ , i.e., $\\Gamma _{\\alpha } = \\lbrace \\phi \\circ \\alpha ^{n} \\circ \\phi ^{-1} | \\phi \\in \\rm {Aut}\\it {(X)}\\rbrace $ .", "Example 4.4 Suppose that $V$ is a 2-dimensional vector space with basis $\\lbrace e_{1}, e_{2}\\rbrace $ .", "Define $\\mu : V \\times V \\rightarrow V$ to be a bilinear map by $\\left\\lbrace \\begin{aligned}\\mu (e_{1}, e_{1}) = e_{2},\\\\\\mu (e_{2}, e_{2}) = e_{1},\\\\\\mu (e_{1}, e_{2}) = \\mu (e_{2}, e_{1}) = 0,\\end{aligned}\\right.$ and $\\alpha : V \\rightarrow V$ a linear map by $\\left\\lbrace \\begin{aligned}\\alpha (e_{1}) = e_{2},\\\\\\alpha (e_{2}) = e_{1}.\\end{aligned}\\right.$ Then $(V, \\mu , \\alpha )$ is a simple multiplicative Hom-Jordan algebra.", "Moreover, its induced Jordan algebra is $(V, \\mu ^{^{\\prime }})$ where $\\mu ^{^{\\prime }} : V \\times V \\rightarrow V$ satisfies $\\left\\lbrace \\begin{aligned}\\mu ^{^{\\prime }}(e_{1}, e_{1}) = e_{1},\\\\\\mu ^{^{\\prime }}(e_{2}, e_{2}) = e_{2},\\\\\\mu ^{^{\\prime }}(e_{1}, e_{2}) = \\mu ^{^{\\prime }}(e_{2}, e_{1}) = 0.\\end{aligned}\\right.$ $(V, \\mu ^{^{\\prime }})$ is semi-simple and has the decomposition into simple ideals $V = V_{1} \\oplus V_{2}$ where $V_{1}, V_{2}$ are the simple ideals generated by $e_{1}, e_{2}$ respectively.", "Moreover, $V_{1}$ is isomorphic to $V_{2}$ .", "According to Theorem REF , $(V, \\mu , \\alpha )$ can be denoted as $(V_{1}, 2, \\alpha ^{2})$ or $(V_{2}, 2, \\alpha ^{2})$ ."
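The structure constants of Example 4.4 are small enough to check by machine. The following Python sketch (the matrix encoding of $\\mu $ and $\\alpha $ and all helper names are our own, purely illustrative assumptions) verifies that $\\alpha $ is multiplicative with respect to $\\mu $ and that untwisting by $\\alpha ^{-1}$ yields the semi-simple induced product with the orthogonal idempotents $e_{1}, e_{2}$ :

```python
import numpy as np

# Basis e1, e2 of V as coordinate vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Structure constants of the example: mu(e1,e1)=e2, mu(e2,e2)=e1, mixed = 0.
# mu_tensor[i, j] holds the coordinates of the product of the i-th and j-th
# basis vectors.
mu_tensor = np.zeros((2, 2, 2))
mu_tensor[0, 0] = e2
mu_tensor[1, 1] = e1

def mu(x, y):
    """Bilinear product determined by the structure constants."""
    return np.einsum('i,j,ijk->k', x, y, mu_tensor)

# alpha swaps e1 and e2.
alpha = np.array([[0.0, 1.0], [1.0, 0.0]])

# Multiplicativity: alpha(mu(x, y)) = mu(alpha(x), alpha(y)) on basis pairs.
for x in (e1, e2):
    for y in (e1, e2):
        assert np.allclose(alpha @ mu(x, y), mu(alpha @ x, alpha @ y))

# Induced Jordan product mu'(x, y) = alpha^{-1}(mu(x, y)).
mu_prime = lambda x, y: np.linalg.inv(alpha) @ mu(x, y)

# mu'(e1,e1)=e1, mu'(e2,e2)=e2, mu'(e1,e2)=0: V splits into the two simple
# ideals generated by e1 and e2, which alpha interchanges.
assert np.allclose(mu_prime(e1, e1), e1)
assert np.allclose(mu_prime(e2, e2), e2)
assert np.allclose(mu_prime(e1, e2), np.zeros(2))
```

The final assertions exhibit the decomposition $V = V_{1} \\oplus V_{2}$ of the induced Jordan algebra, with $\\alpha $ swapping the two isomorphic simple ideals, as in the example.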
], ["Bimodules of simple multiplicative Hom-Jordan algebras", "In this section, we mainly study bimodules of simple multiplicative Hom-Jordan algebras.", "We will give a theorem on the relationship between bimodules of Jordan-type Hom-Jordan algebras and modules of their induced Jordan algebras.", "Moreover, some propositions about bimodules of simple multiplicative Hom-Jordan algebras are also obtained.", "Definition 5.1 [1] Let $(V, \\mu , \\alpha )$ be a Hom-Jordan algebra.", "A $V$ -bimodule is a Hom-module $(W, \\alpha _{W})$ that comes equipped with a left structure map $\\rho _{l} : V \\otimes W \\rightarrow W(\\rho _{l}(a \\otimes w) = a \\cdot w)$ and a right structure map $\\rho _{r} : W \\otimes V \\rightarrow W(\\rho _{r}(w \\otimes a) = w \\cdot a)$ such that the following conditions hold for all $a, b, c \\in V$ and $w \\in W$ : $\\rho _{r} \\circ \\tau _{1} = \\rho _{l}$ where $\\tau _{1} : V \\otimes W \\rightarrow W \\otimes V$ , $a \\otimes w \\mapsto w \\otimes a$ ; $\\alpha _{W}(w \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(w \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(w \\cdot c) \\cdot \\alpha (\\mu (a, b)) = (\\alpha _{W}(w) \\cdot \\mu (b, c)) \\cdot \\alpha ^{2}(a) + (\\alpha _{W}(w) \\cdot \\mu (c, a)) \\cdot \\alpha ^{2}(b) + (\\alpha _{W}(w) \\cdot \\mu (a, b)) \\cdot \\alpha ^{2}(c)$ ; $\\alpha _{W}(w \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(w \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(w \\cdot c) \\cdot \\alpha (\\mu (a, b)) = ((w \\cdot a) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(c) + ((w \\cdot c) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(a) + \\mu (\\mu (a, c), \\alpha (b)) \\cdot \\alpha _{W}^{2}(w)$ .", "Example 5.2 By [1], if $(V, \\mu , \\alpha )$ is a Hom-Jordan algebra, then $(V, \\alpha )$ is a $V$ -bimodule with structure maps $\\rho _{l} = \\rho _{r} = \\mu $ .", "Next, we construct an example that is realized over a field of prime characteristic.", "Example 5.3 Let $(V,
\\mu , \\alpha )$ be the Hom-Jordan algebra in Example REF and $W$ a 1-dimensional vector space with basis $\\lbrace w_{1}\\rbrace $ .", "Define $\\alpha _{W} : W \\rightarrow W$ to be a linear map by $\\alpha _{W}(w_{1}) = w_{1}$ .", "Define $\\rho _{l} : V \\otimes W \\rightarrow W$ to be a linear map by $\\left\\lbrace \\begin{aligned}\\rho _{l}(e_{1}, w_{1}) = w_{1},\\\\\\rho _{l}(e_{2}, w_{1}) = w_{1},\\end{aligned}\\right.$ and $\\rho _{r} : W \\otimes V \\rightarrow W$ a linear map by $\\left\\lbrace \\begin{aligned}\\rho _{r}(w_{1}, e_{1}) = w_{1},\\\\\\rho _{r}(w_{1}, e_{2}) = w_{1}.\\end{aligned}\\right.$ Then $(W, \\alpha _{W})$ is a $V$ -bimodule with the structure maps defined as above when the characteristic of the ground field is two.", "Definition 5.4 [9] A Jordan module is a system consisting of a vector space $V$ , a Jordan algebra $J$ , and two compositions $x \\cdot a$ , $a \\cdot x$ for $x$ in $V$ , $a$ in $J$ which are bilinear and satisfy $a \\cdot x = x \\cdot a$ , $(x \\cdot a) \\cdot (b \\circ c) + (x \\cdot b) \\cdot (c \\circ a) + (x \\cdot c) \\cdot (a \\circ b) = (x \\cdot (b \\circ c)) \\cdot a + (x \\cdot (c \\circ a)) \\cdot b + (x \\cdot (a \\circ b)) \\cdot c$ , $(((x \\cdot a) \\cdot b) \\cdot c) + (((x \\cdot c) \\cdot b) \\cdot a) + x \\cdot (a \\circ c \\circ b) = (x \\cdot a) \\cdot (b \\circ c) + (x \\cdot b) \\cdot (c \\circ a) + (x \\cdot c) \\cdot (a \\circ b)$ , where $x \\in V, a , b , c \\in J$ , and $a_{1} \\circ a_{2} \\circ a_{3}$ stands for $((a_{1} \\circ a_{2}) \\circ a_{3})$ .", "Theorem 5.5 Let $(V, \\mu , \\alpha )$ be a Jordan-type Hom-Jordan algebra with $(V, \\mu ^{^{\\prime }})$ the induced Jordan algebra.", "Let $(W, \\alpha _{W})$ be a $V$ -bimodule of $(V, \\mu , \\alpha )$ with $\\rho _{l}$ (respectively, $\\rho _{r}$ ) the left (respectively, right) structure map.", "Suppose that $\\alpha _{W}$ is invertible and satisfies $\\alpha _{W}(w \\cdot a) = \\alpha _{W}(w) \\cdot \\alpha (a)$ for all $a \\in
V$ , $w \\in W$ .", "Then $W$ is a module of the induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ with two compositions $w \\cdot ^{^{\\prime }} a = \\alpha _{W}^{-1}(w \\cdot a)$ , $a \\cdot ^{^{\\prime }} w = \\alpha _{W}^{-1}(a \\cdot w)$ for all $a \\in V$ , $w \\in W$ .", "Let $W$ be a module of the induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ with two compositions $w \\cdot ^{^{\\prime }} a$ , $a \\cdot ^{^{\\prime }} w$ for all $a \\in V$ , $w \\in W$ .", "If there exists $\\alpha _{W} \\in End(W)$ such that $\\alpha _{W}(w \\cdot ^{^{\\prime }} a) = \\alpha _{W}(w) \\cdot ^{^{\\prime }} \\alpha (a)$ for all $a \\in V$ , $w \\in W$ .", "Then $(W, \\alpha _{W})$ is a $V$ -bimodule of $(V, \\mu , \\alpha )$ with the left structure map $\\rho _{l} : V \\otimes W \\rightarrow W(\\rho _{l}(a \\otimes w) = \\alpha _{W}(a \\cdot ^{^{\\prime }} w))$ , the right structure map $\\rho _{r} : W\\otimes V \\rightarrow W(\\rho _{r}(w \\otimes a) = \\alpha _{W}(w \\cdot ^{^{\\prime }} a))$ .", "(1) For any $x \\in W$ , $a, b, c \\in V$ , we have $a \\cdot ^{^{\\prime }} x = \\alpha _{W}^{-1}(a \\cdot x) = \\alpha _{W}^{-1}(\\rho _{l}(a \\otimes x)) = \\alpha _{W}^{-1}(\\rho _{r} \\circ \\tau _{1} (a \\otimes x)) = \\alpha _{W}^{-1}(x \\cdot a) = x \\cdot ^{^{\\prime }} a.$ $&(x \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c) + (x \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a) + (x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)\\\\&= \\alpha _{W}^{-1}(\\alpha _{W}^{-1}(x \\cdot a) \\cdot \\alpha ^{-1}(\\mu (b, c))) + \\alpha _{W}^{-1}(\\alpha _{W}^{-1}(x \\cdot b) \\cdot \\alpha ^{-1}(\\mu (c, a)))\\\\&+ \\alpha _{W}^{-1}(\\alpha _{W}^{-1}(x \\cdot c) \\cdot \\alpha ^{-1}(\\mu (a, b)))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}^{2}(\\alpha _{W}^{-1}(x \\cdot a) \\cdot \\alpha ^{-1}(\\mu (b, c))) + \\alpha _{W}^{2}(\\alpha _{W}^{-1}(x \\cdot b) \\cdot \\alpha ^{-1}(\\mu (c, a)))\\\\&+ \\alpha 
_{W}^{2}(\\alpha _{W}^{-1}(x \\cdot c) \\cdot \\alpha ^{-1}(\\mu (a, b))))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}((x \\cdot a) \\cdot \\mu (b, c)) + \\alpha _{W}((x \\cdot b) \\cdot \\mu (c, a)) + \\alpha _{W}((x \\cdot c) \\cdot \\mu (a, b)))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}(x \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(x \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(x \\cdot c) \\cdot \\alpha (\\mu (a, b)))\\\\&= \\alpha _{W}^{-3}((\\alpha _{W}(x) \\cdot \\mu (b, c)) \\cdot \\alpha ^{2}(a) + (\\alpha _{W}(x) \\cdot \\mu (c, a)) \\cdot \\alpha ^{2}(b) + (\\alpha _{W}(x) \\cdot \\mu (a, b)) \\cdot \\alpha ^{2}(c))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}(x \\cdot \\alpha ^{-1}(\\mu (b, c))) \\cdot \\alpha ^{2}(a) + \\alpha _{W}(x \\cdot \\alpha ^{-1}(\\mu (c, a))) \\cdot \\alpha ^{2}(b)\\\\&+ \\alpha _{W}(x \\cdot \\alpha ^{-1}(\\mu (a, b))) \\cdot \\alpha ^{2}(c))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}^{2}(x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot \\alpha ^{2}(a) + \\alpha _{W}^{2}(x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot \\alpha ^{2}(b) + \\alpha _{W}^{2}(x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot \\alpha ^{2}(c))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}^{2}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot a) + \\alpha _{W}^{2}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot b) + \\alpha _{W}^{2}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot c))\\\\&= \\alpha _{W}^{-1}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot a) + \\alpha _{W}^{-1}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot b) + \\alpha _{W}^{-1}((x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot c)\\\\&= (x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot ^{^{\\prime }} a + (x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot ^{^{\\prime }} b + (x \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot ^{^{\\prime }} c.$ $&(x \\cdot ^{^{\\prime }} a) \\cdot 
^{^{\\prime }} \\mu ^{^{\\prime }}(b, c) + (x \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a) + (x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}(x \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(x \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(x \\cdot c) \\cdot \\alpha (\\mu (a, b)))\\\\&= \\alpha _{W}^{-3}(((x \\cdot a) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(c) + ((x \\cdot c) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(a) + \\mu (\\mu (a, c), \\alpha (b)) \\cdot \\alpha _{W}^{2}(x))\\\\&= \\alpha _{W}^{-3}((\\alpha _{W}(x \\cdot ^{^{\\prime }} a) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(c) + (\\alpha _{W}(x \\cdot ^{^{\\prime }} c) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(a)\\\\&+ \\alpha (\\mu ^{^{\\prime }}(\\alpha (\\mu ^{^{\\prime }}(a, c)), \\alpha (b))) \\cdot \\alpha _{W}^{2}(x))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}((x \\cdot ^{^{\\prime }} a) \\cdot b) \\cdot \\alpha ^{2}(c) + \\alpha _{W}((x \\cdot ^{^{\\prime }} c) \\cdot b) \\cdot \\alpha ^{2}(a) + \\alpha ^{2}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b)) \\cdot \\alpha _{W}^{2}(x))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}^{2}((x \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot \\alpha ^{2}(c) + \\alpha _{W}^{2}((x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot \\alpha ^{2}(a) + \\alpha ^{2}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b)) \\cdot \\alpha _{W}^{2}(x))\\\\&= \\alpha _{W}^{-3}(\\alpha _{W}^{2}(((x \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot c) + \\alpha _{W}^{2}(((x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot a)+ \\alpha _{W}^{2}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b) \\cdot x))\\\\&= \\alpha _{W}^{-1}(((x \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot c) + \\alpha _{W}^{-1}(((x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot a) + \\alpha _{W}^{-1}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b) \\cdot x)\\\\&= 
((x \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} c + ((x \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} a + \\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b) \\cdot ^{^{\\prime }} x.$ Therefore, $W$ is a module of the induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ .", "(2) For any $w \\in W$ , $a, b, c \\in V$ , we have $\\rho _{r} \\circ \\tau _{1}(a \\otimes w) = \\alpha _{W}(w \\cdot ^{^{\\prime }} a) = \\alpha _{W}(a \\cdot ^{^{\\prime }} w) = \\rho _{l}(a \\otimes w),$ which implies that $\\rho _{r} \\circ \\tau _{1} = \\rho _{l}$ .", "$&\\alpha _{W}(w \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(w \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(w \\cdot c) \\cdot \\alpha (\\mu (a, b))\\\\&= \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\alpha ^{2}(\\mu ^{^{\\prime }}(b, c))) + \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\alpha ^{2}(\\mu ^{^{\\prime }}(c, a)))\\\\&+ \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\alpha ^{2}(\\mu ^{^{\\prime }}(a, b)))\\\\&= \\alpha _{W}(\\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) + \\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) + \\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)))\\\\&= \\alpha _{W}^{3}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c) + (w \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a) + (w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b))\\\\&= \\alpha _{W}^{3}((w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot ^{^{\\prime }} a + (w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot ^{^{\\prime }} b + (w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot ^{^{\\prime }} c)\\\\&= \\alpha _{W}^{2}(\\alpha _{W}(w 
\\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot ^{^{\\prime }} \\alpha (a)) + \\alpha _{W}^{2}(\\alpha _{W}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot ^{^{\\prime }} \\alpha (b))\\\\&+ \\alpha _{W}^{2}(\\alpha _{W}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot ^{^{\\prime }} \\alpha (c))\\\\&= \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot ^{^{\\prime }} \\alpha ^{2}(a)) + \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot ^{^{\\prime }} \\alpha ^{2}(b))\\\\&+ \\alpha _{W}(\\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot ^{^{\\prime }} \\alpha ^{2}(c))\\\\&= \\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c)) \\cdot \\alpha ^{2}(a) + \\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a)) \\cdot \\alpha ^{2}(b) + \\alpha _{W}^{2}(w \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b)) \\cdot \\alpha ^{2}(c)\\\\&= \\alpha _{W}(\\alpha _{W}(w) \\cdot ^{^{\\prime }} \\alpha (\\mu ^{^{\\prime }}(b, c))) \\cdot \\alpha ^{2}(a) + \\alpha _{W}(\\alpha _{W}(w) \\cdot ^{^{\\prime }} \\alpha (\\mu ^{^{\\prime }}(c, a))) \\cdot \\alpha ^{2}(b)\\\\&+ \\alpha _{W}(\\alpha _{W}(w) \\cdot ^{^{\\prime }} \\alpha (\\mu ^{^{\\prime }}(a, b))) \\cdot \\alpha ^{2}(c)\\\\&= (\\alpha _{W}(w) \\cdot \\mu (b, c)) \\cdot \\alpha ^{2}(a) + (\\alpha _{W}(w) \\cdot \\mu (c, a)) \\cdot \\alpha ^{2}(b) + (\\alpha _{W}(w) \\cdot \\mu (a, b)) \\cdot \\alpha ^{2}(c).$ $&\\alpha _{W}(w \\cdot a) \\cdot \\alpha (\\mu (b, c)) + \\alpha _{W}(w \\cdot b) \\cdot \\alpha (\\mu (c, a)) + \\alpha _{W}(w \\cdot c) \\cdot \\alpha (\\mu (a, b))\\\\&= \\alpha _{W}^{3}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(b, c) + (w \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(c, a) + (w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\mu ^{^{\\prime }}(a, b))\\\\&= \\alpha _{W}^{3}(((w \\cdot ^{^{\\prime }} a) 
\\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} c + ((w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} a + \\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b) \\cdot ^{^{\\prime }} w)\\\\&= \\alpha _{W}^{2}(\\alpha _{W}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\alpha (c)) + \\alpha _{W}^{2}(\\alpha _{W}((w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\alpha (a))\\\\&+ \\alpha _{W}^{2}(\\alpha (\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b)) \\cdot ^{^{\\prime }} \\alpha _{W}(w))\\\\&= \\alpha _{W}(\\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\alpha ^{2}(c)) + \\alpha _{W}(\\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot ^{^{\\prime }} \\alpha ^{2}(a))\\\\&+ \\alpha _{W}(\\alpha ^{2}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b)) \\cdot ^{^{\\prime }} \\alpha _{W}^{2}(w))\\\\&= \\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} b) \\cdot \\alpha ^{2}(c) + \\alpha _{W}^{2}((w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} b) \\cdot \\alpha ^{2}(a) + \\alpha ^{2}(\\mu ^{^{\\prime }}(\\mu ^{^{\\prime }}(a, c), b)) \\cdot \\alpha _{W}^{2}(w)\\\\&= \\alpha _{W}(\\alpha _{W}(w \\cdot ^{^{\\prime }} a) \\cdot ^{^{\\prime }} \\alpha (b)) \\cdot \\alpha ^{2}(c) + \\alpha _{W}(\\alpha _{W}(w \\cdot ^{^{\\prime }} c) \\cdot ^{^{\\prime }} \\alpha (b)) \\cdot \\alpha ^{2}(a)\\\\&+ \\alpha (\\mu ^{^{\\prime }}(\\alpha (\\mu ^{^{\\prime }}(a, c)), \\alpha (b))) \\cdot \\alpha _{W}^{2}(w)\\\\&= ((w \\cdot a) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(c) + ((w \\cdot c) \\cdot \\alpha (b)) \\cdot \\alpha ^{2}(a) + \\mu (\\mu (a, c), \\alpha (b)) \\cdot \\alpha _{W}^{2}(w).$ Therefore, $(W, \\alpha _{W})$ is a $V$ -bimodule of $(V, \\mu , \\alpha )$ .", "Definition 5.6 For a bimodule $(W, \\alpha _{W})$ of a Hom-Jordan algebra $(V, \\mu , \\alpha )$ , if a subspace $W_{0} \\subseteq W$ satisfies that 
$\\rho _{l}(a \\otimes w) \\in W_{0}$ for any $a \\in V$ , $w \\in W_{0}$ and $\\alpha _{W}(W_{0}) \\subseteq W_{0}$ , then $(W_{0}, \\alpha _{W}|_{W_{0}})$ is called a $V$ -submodule of $(W, \\alpha _{W})$ .", "A bimodule $(W, \\alpha _{W})$ of a Hom-Jordan algebra $(V, \\mu , \\alpha )$ is called irreducible if it has precisely two $V$ -submodules (itself and 0), and it is called completely reducible if $W = W_{1} \\oplus W_{2} \\oplus \\cdots \\oplus W_{s}$ , where $(W_{i}, \\alpha _{W}|_{W_{i}})$ are irreducible $V$ -submodules.", "Proposition 5.7 Suppose that $(W, \\alpha _{W})$ is a bimodule of a simple multiplicative Hom-Jordan algebra $(V, \\mu , \\alpha )$ with $\\alpha _{W}(a \\cdot w) = \\alpha (a) \\cdot \\alpha _{W}(w)$ for all $a \\in V$ , $w \\in W$ .", "Then $Ker(\\alpha _{W})$ and $Im(\\alpha _{W})$ are submodules of $W$ for $(V, \\mu , \\alpha )$ .", "Moreover, we have an isomorphism of $(V, \\mu , \\alpha )$ -modules $\\overline{\\alpha _{W}} : W/Ker(\\alpha _{W}) \\rightarrow Im(\\alpha _{W})$ .", "For any $w \\in Ker(\\alpha _{W})$ , we have $\\alpha _{W}(a \\cdot w) = \\alpha (a) \\cdot \\alpha _{W}(w) = 0,\\quad \\forall a \\in V,$ which implies that $a \\cdot w \\in Ker(\\alpha _{W})$ .", "Obviously, $\\alpha _{W}(Ker(\\alpha _{W})) \\subseteq Ker(\\alpha _{W})$ .", "Therefore, $Ker(\\alpha _{W})$ is a submodule of $W$ for $(V, \\mu , \\alpha )$ .", "For any $w \\in Im(\\alpha _{W})$ , $a \\in V$ , there exist $u \\in W$ and $\\tilde{a} \\in V$ such that $w = \\alpha _{W}(u)$ , $a = \\alpha (\\tilde{a})$ .", "Then $a \\cdot w = \\alpha (\\tilde{a}) \\cdot \\alpha _{W}(u) = \\alpha _{W}(\\tilde{a} \\cdot u) \\in Im(\\alpha _{W}),$ and it is clear that $\\alpha _{W}(Im(\\alpha _{W})) \\subseteq Im(\\alpha _{W})$ .", "So $Im(\\alpha _{W})$ is a submodule of $W$ for $(V, \\mu , \\alpha )$ .", "Define $\\overline{\\alpha _{W}} : W/Ker(\\alpha _{W}) \\rightarrow Im(\\alpha _{W})$ by $\\overline{\\alpha _{W}}(\\bar{w}) = \\alpha _{W}(w)$ .", "It's easy to 
verify that $\\overline{\\alpha _{W}}$ is an isomorphism.", "Corollary 5.8 Suppose that $(W, \\alpha _{W})$ is an irreducible bimodule of a simple multiplicative Hom-Jordan algebra $(V, \\mu , \\alpha )$ with $\\alpha _{W}(a \\cdot w) = \\alpha (a) \\cdot \\alpha _{W}(w)$ for all $a \\in V$ , $w \\in W$ .", "Then $\\alpha _{W}$ is invertible.", "Proposition 5.9 Suppose that $(V, \\mu , \\alpha )$ is a simple multiplicative Hom-Jordan algebra and $(W, \\alpha _{W})$ is a bimodule with $\\alpha _{W}(a \\cdot w) = \\alpha (a) \\cdot \\alpha _{W}(w)$ for all $a \\in V$ , $w \\in W$ .", "Suppose moreover that $\\alpha _{W}$ is invertible.", "If $W$ is an irreducible module of the induced Jordan algebra $(V, \\mu ^{^{\\prime }})$ with two compositions $w \\cdot ^{^{\\prime }} a = \\alpha _{W}^{-1}(w \\cdot a)$ , $a \\cdot ^{^{\\prime }} w = \\alpha _{W}^{-1}(a \\cdot w)$ for all $a \\in V$ , $w \\in W$ , then $(W, \\alpha _{W})$ is an irreducible bimodule of $(V, \\mu , \\alpha )$ .", "Assume to the contrary that $(W, \\alpha _{W})$ is reducible.", "Then there exists a subspace $W_{0}$ of $W$ with $W_{0} \\ne \\lbrace 0_{W}\\rbrace $ and $W_{0} \\ne W$ such that $(W_{0}, \\alpha _{W}|_{W_{0}})$ is a submodule of $(W, \\alpha _{W})$ .", "That is, $\\alpha _{W}(W_{0}) \\subseteq W_{0}$ and $a \\cdot w \\in W_{0}$ for any $a \\in V$ , $w \\in W_{0}$ .", "Hence, $a \\cdot ^{^{\\prime }} w = \\alpha _{W}^{-1}(a \\cdot w) \\in \\alpha _{W}^{-1}(W_{0}) = W_{0}$ .", "So $W_{0}$ is a nontrivial submodule of $W$ for $(V, \\mu ^{^{\\prime }})$ , a contradiction.", "Hence, $(W, \\alpha _{W})$ is an irreducible bimodule of $(V, \\mu , \\alpha )$ .", "Remark 5.10 In [18], the author introduced another definition of Hom-Jordan algebras; one can verify that all of the above results remain valid for that definition as well.", "ACKNOWLEDGEMENTS The authors would like to thank the referee for valuable comments and suggestions on this article." ] ]
1906.04561
[ [ "The logarithmic gauged linear sigma model" ], [ "Abstract We introduce the notion of log R-maps, and develop a proper moduli stack of stable log R-maps in the case of a hybrid gauged linear sigma model.", "Two virtual cycles (canonical and reduced) are constructed for these moduli stacks.", "The main results are two comparison theorems relating the reduced virtual cycle to the cosection localized virtual cycle, as well as the reduced virtual cycle to the canonical virtual cycle.", "This sets the foundation for a new technique for computing higher genus Gromov-Witten invariants of complete intersections." ], [ "The gauged linear sigma model", "In 1993, Witten gave a physical derivation of the Landau–Ginzburg (LG)/Calabi–Yau (CY) correspondence by constructing a family of theories, known as the gauged linear sigma model or GLSM [52].", "A mathematical realization of the LG model, called the Fan–Jarvis–Ruan–Witten (FJRW) theory, has been established in [30] via topological and analytical methods.", "On the algebraic side, an approach using cosection localization [42] to construct the GLSM virtual cycle was discovered in [16], [17] in the narrow case, and in general in [15], [43].", "Following the cosection approach, some hybrid models were studied in [28], and a general algebraic theory of GLSM for (compact-type sectors of) GIT targets was put on a firm mathematical footing by Fan, Jarvis and the third author [33].", "A further algebraic approach for broad sectors using matrix factorizations has been developed in [50], [27], while an analytic approach has been developed in [31], [51].", "As discovered in [16] and further developed in [44], [20], [26], GLSM can be viewed as a deep generalization of the hyperplane property of Gromov–Witten (GW) theory for arbitrary genus.", "However, compared to GW theory, a major difference as well as a main difficulty of GLSM is the appearance of an extra torus action on the target, called the R-charge, which makes the moduli 
stacks used to define the GLSM virtual cycles non-proper in general.", "This makes the powerful tool of virtual localization [34] difficult to apply.", "This is the second paper of our project aiming at a logarithmic GLSM theory that solves the non-properness issue and provides a localization formula by combining the cosection approach with the logarithmic maps of Abramovich–Chen–Gross–Siebert [1], [21], [35].", "This leads to a very powerful technique for computing higher genus GW/FJRW-invariants of complete intersections in GIT quotients.", "Applications include computing higher genus invariants of quintic 3-folds [38], [39] (a different approach to higher genus Gromov–Witten invariants of quintic threefolds has been developed by Chang–Guo–Li–Li–Liu [12], [13], [14], [18], [19]), and the cycle of holomorphic differentials [48] by establishing a localization formula of $r$ -spin cycles conjectured by the second author [22].", "This conjectural localization formula was the original motivation of this project.", "In our first paper [25], we developed a principalization of the boundary of the moduli of log maps, which provides a natural framework for extending cosections to the boundary of the logarithmic compactification.", "The simple but important $r$ -spin case has been studied in [25] via the log compactification for maximal explicitness.", "The goal of the current paper is to further establish a log GLSM theory in the hybrid model case, which allows possibly non-GIT quotient targets.", "As our theory naturally carries two different perfect obstruction theories, we further prove explicit relations among the resulting virtual cycles.", "This will provide a solid foundation for our forthcoming paper [24], where various virtual cycles involved in log GLSM will be further decomposed using torus localizations and the developments in [3], [2].", "In the case of GIT quotient targets, another aspect of GLSM moduli spaces is that they depend on a stability parameter 
and exhibit a rich wall-crossing phenomenon.", "To include general targets, the current paper concerns only the $\\infty $ -stability, which is closely related to stable maps.", "We leave the study of other stability conditions in the case of GIT quotient targets to future research." ], [ "$R$ -maps", "The following fundamental notion of $R$ -maps is the result of our effort to generalize pre-stable maps to the setting of GLSM with possibly non-GIT quotient targets.", "While the definition makes essential use of stacks, it is what makes various constructions in this paper transparent.", "Definition 1.1 Let $\\mathfrak {P}\\rightarrow \\mathbf {BC}^*_\\omega $ be a proper, DM-type morphism of log stacks where $\\mathbf {BC}^*_\\omega := \\mathbf {BG}_m$ is the stack parameterizing line bundles with the trivial log structure.", "A logarithmic $R$ -map (or, for short, log $R$ -map) over a log scheme $S$ with target $\\mathfrak {P}\\rightarrow \\mathbf {BC}^*_\\omega $ is a commutative diagram: ${&& \\mathfrak {P}[d] \\\\\\mathcal {C}[rru]^{f} [rr]_{\\omega ^{\\log }_{\\mathcal {C}/S}}&& \\mathbf {BC}^*_\\omega }$ where $\\mathcal {C}\\rightarrow S$ is a log curve (see §REF ) and the bottom arrow is induced by the log cotangent bundle $\\omega ^{\\log }_{\\mathcal {C}/S}$ .", "The notation $\\mathbf {BC}^*_\\omega $ is reserved for parameterizing the line bundle $\\omega ^{\\log }_{\\mathcal {C}/S}$ of the source curve.", "Pull-backs of log $R$ -maps are defined as usual via pull-backs of log curves.", "For simplicity, we will call $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ a log $R$ -map without specifying arrows to $\\mathbf {BC}^*_\\omega $ .", "Such $f$ is called an $R$ -map if it factors through the open substack $\\mathfrak {P}^{\\circ }\\subset \\mathfrak {P}$ with the trivial log structure.", "A pre-stable map $\\underline{f}\\colon \\underline{\\mathcal {C}} \\rightarrow \\underline{\\mathfrak {P}}$ over $\\underline{S}$ with compatible arrows to 
$\\mathbf {BC}^*_\\omega $ is called an underlying $R$ -map.", "Here $\\underline{\\mathfrak {P}}$ is the underlying stack obtained by removing the log structure of $\\mathfrak {P}$ .", "Remark 1.2 Our notion of $R$ -maps originates from the $R$ -charge in physics.", "In the physical formulation of GLSM [52], there is a target space $X$ which is a Kähler manifold (usually a GIT quotient of a vector space) and a superpotential $W \\colon X \\rightarrow \\mathbb {C}$ .", "To define the A-topological twist, one needs to choose a $\\mathbb {C}^*$ -action on $X$ , called the R-charge, such that $W$ has $R$ -charge (weight) 2.", "The weights of the R-charge on the coordinates of $X$ are used to twist the map or the fields of the theory [52], [33], [36].", "As pointed out by one of the referees, the setting of this article is more general than GLSM in physics in the sense that $X$ does not necessarily have an explicit coordinate description.", "For this purpose, we formulate the more abstract notion of $R$ -maps as above.", "Our notion of $R$ -maps agrees with those of [52], [33], [36] when $X$ is a GIT quotient of a vector space.", "The moduli space of $R$ -maps should give a mathematical description of the spaces on which the general A-twist localizes in physics.", "Example 1.3 (quintic threefolds) Consider the target $\\mathfrak {P}^\\circ = [\\operatorname{Vb}_{\\mathbb {P}^4}(\\mathcal {O}(-5))/\\mathbb {C}^*_{\\omega }]$ , where $\\mathbb {C}^*_{\\omega } \\cong \\mathbb {G}_{m}$ acts on the line bundle $\\operatorname{Vb}_{\\mathbb {P}^4}(\\mathcal {O}(-5))$ by scaling the fibers with weight one.", "The map $\\mathfrak {P}^{\\circ } \\rightarrow \\mathbf {BC}^*_\\omega $ is the canonical map from the quotient description of $\\mathfrak {P}^{\\circ }$ .", "In this case, an $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}^{\\circ }$ is equivalent to the data of a map $g\\colon \\mathcal {C}\\rightarrow \\mathbb {P}^4$ together with a section (or “$p$ 
-field”) $p \\in H^0(\\omega _C^{\\log } \\otimes f^* \\mathcal {O}(-5))$ .", "Therefore, if $\\mathcal {C}$ is unmarked, we recover the moduli space of stable maps with $p$ -fields [16], which is the GLSM moduli space [33] for a quintic hypersurface in $\\mathbb {P}^4$ .", "The construction of this paper will provide a compactification of $\\mathfrak {P}^{\\circ }$ relative to $\\mathbf {BC}^*_\\omega $ , and a compactification of the moduli of $p$ -fields.", "We refer the reader to Section  for more examples in a general situation.", "Just like in Gromov–Witten theory, various assumptions on $\\mathfrak {P}$ are needed to build a proper moduli space as well as a virtual cycle.", "While a theory of stable log $R$ -maps for general $\\mathfrak {P}$ seems to require much further development using the full machinery of logarithmic maps, we choose to restrict ourselves to the so-called hybrid targets, which already cover a large class of interesting examples, including both FJRW theory and complete intersections in Gromov–Witten theory.", "We leave the general case to future research." ], [ "The input", "A hybrid target is determined by the following data: A proper Deligne–Mumford stack $\\mathcal {X}$ with a projective coarse moduli scheme $X$ .", "A vector bundle $\\mathbf {E}$ over $\\mathcal {X}$ of the form $\\mathbf {E}= \\bigoplus _{i \\in \\mathbb {Z}_{> 0}} \\mathbf {E}_i$ , where $\\mathbf {E}_i$ is a vector bundle with positive grading $i$ .", "Write $d := \\gcd \\big (i \\ | \\ \\mathbf {E}_i \\ne 0 \\big )$ .", "A line bundle $\\mathbf {L}$ over $\\mathcal {X}$ .", "A positive integer $r$ .", "For later use, fix an ample line bundle $H$ over $X$ , and denote by $\\mathcal {H}$ its pull-back over $\\mathcal {X}$ ."
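For orientation, the quintic example (Example 1.3) can be matched with the input data above; the identification below is a sketch of ours, with the twisting choice $a = 1$ assumed (so that $\tilde{r} = a \cdot r = 1$, as in the discussion of Corollary 1.10):

```latex
% Hybrid input data recovering Example 1.3 (quintic threefolds); a = 1 assumed.
%   \mathcal{X} = \mathbb{P}^4, \quad \mathbf{E} = \mathbf{E}_1 = \mathcal{O}(5), \quad d = 1,
%   \mathbf{L} = \mathcal{O}_{\mathcal{X}}, \quad r = 1, \quad \tilde{r} = a \cdot r = 1.
% Since r = 1 and \mathbf{L} is trivial, the r-spin line is \mathcal{L}_{\mathfrak{X}} = \mathcal{L}_{\omega}, hence
\mathfrak{P}^{\circ}
  = \operatorname{Vb}\big(\mathcal{O}_{\mathbb{P}^4}(-5) \otimes \mathcal{L}_{\omega}\big),
\qquad
\underline{\mathfrak{P}}
  = \mathbb{P}\big(\mathcal{O}_{\mathbb{P}^4}(-5) \otimes \mathcal{L}_{\omega} \oplus \mathcal{O}\big).
```

Pulling back along an $R$-map then yields exactly the data $(g, p)$ of Example 1.3, with the $p$-field valued in $\omega^{\log} \otimes g^{*}\mathcal{O}(-5)$.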
], [ "The $r$ -spin structure", "The R-charge leads to the universal $r$ -spin structure as follows.", "Consider the cartesian diagram ${\\mathfrak {X}[rrr]^\\mathcal {L}_{\\mathfrak {X}}[d] &&& \\mathbf {BG}_m [d]^{\\nu _r} \\\\\\mathbf {BC}^*_\\omega \\times \\mathcal {X}[rrr]^{\\mathcal {L}_\\omega \\boxtimes \\mathbf {L}^{\\vee }} &&& \\mathbf {BG}_m}$ where $\\mathcal {L}_\\omega $ is the universal line bundle over $\\mathbf {BC}^*_\\omega $ , $\\nu _r$ is the $r$ -th power map, the bottom arrow is defined by $\\mathcal {L}_\\omega \\boxtimes \\mathbf {L}^{\\vee }$ , and the top arrow is defined by the universal $r$ -th root of $\\mathcal {L}_\\omega \\boxtimes \\mathbf {L}^{\\vee }$ , denoted by $\\mathcal {L}_{\\mathfrak {X}}$ ." ], [ "The targets", "Fix a twisting choice $a\\in \\frac{1}{d}\\cdot \\mathbb {Z}_{>0}$ , and set $\\tilde{r}= a\\cdot r$ .", "We form the weighted projective stack bundle over $\\mathfrak {X}$ : $\\underline{\\mathfrak {P}} := \\underline{\\mathbb {P}}^{\\mathbf {w}}\\left(\\bigoplus _{i > 0}(\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\otimes \\mathcal {L}_{\\mathfrak {X}}^{\\otimes i})\\oplus \\mathcal {O}_{\\mathfrak {X}} \\right),$ where $\\mathbf {w}$ is the collection of the weights of the $\\mathbb {G}_m$ -action such that the weight on the $i$ -th factor is the positive integer $a\\cdot i$ , while the weight of the last factor $\\mathcal {O}$ is 1.", "Here, for any vector bundle $V = \\oplus _{i=1}^{r} V_i$ with a $\\mathbb {G}_m$ -action of weight $\\mathbf {w}$ , we use the notation $\\mathbb {P}^\\mathbf {w}(V) = \\left[\\Big (\\operatorname{Vb}(V)\\setminus \\mathbf {0}_V \\Big ) \\Big / \\mathbb {G}_m \\right],$ where $\\operatorname{Vb}(V)$ is the total space of $V$ , and $\\mathbf {0}_V$ is the zero section of $V$ .", "Intuitively, $\\underline{\\mathfrak {P}}$ compactifies the GLSM given by $\\mathfrak {P}^{\\circ } := \\operatorname{Vb}\\left(\\bigoplus _{i > 0}(\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\otimes \\mathcal 
{L}_{\\mathfrak {X}}^{\\otimes i})\\right).$ The boundary $\\infty _{\\mathfrak {P}} = \\underline{\\mathfrak {P}} \\setminus \\mathfrak {P}^{\\circ }$ is the Cartier divisor defined by the vanishing of the coordinate corresponding to $\\mathcal {O}_{\\mathfrak {X}}$ .", "We make $\\underline{\\mathfrak {P}}$ into a log stack $\\mathfrak {P}$ by equipping it with the log structure corresponding to the Cartier divisor $\\infty _\\mathfrak {P}$ .", "Denote by $\\mathbf {0}_\\mathfrak {P}$ the zero section of the vector bundle $\\mathfrak {P}^{\\circ }$ .", "We arrive at the following commutative diagram ${\\mathfrak {P}[r]^{\\mathfrak {p}} [rd]_{\\mathfrak {t}}& \\mathfrak {X}[r]^{\\zeta } [d] & \\mathbf {BC}^*_\\omega \\\\&\\mathcal {X}&}$ where $\\zeta $ is the composition $\\mathfrak {X}\\rightarrow \\mathbf {BC}^*_\\omega \\times \\mathcal {X}\\rightarrow \\mathbf {BC}^*_\\omega $ , the second arrow being the projection onto the factor $\\mathbf {BC}^*_\\omega $ .", "By construction, $\\zeta \\circ \\mathfrak {p}$ is proper of DM-type.", "The general notion of log R-maps formulated using $\\mathfrak {P}$ can be described more concretely in terms of maps with log fields, see § REF ."
], [ "The stability", "A log $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ over $S$ is stable if $f$ is representable, and if for a sufficiently small $\\delta _0 \\in (0,1)$ there exists $k_0 > 1$ such that for any pair $(k,\\delta )$ satisfying $k > k_0$ and $\\delta _0 > \\delta > 0$ , the following holds: $(\\omega _{\\mathcal {C}/S}^{\\log })^{1 + \\delta } \\otimes (\\mathfrak {t}\\circ f)^* \\mathcal {H}^{\\otimes k} \\otimes f^* \\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}}) > 0.$ The notation $>$ in (REF ) means that the left hand side has strictly positive degree when restricted to each irreducible component of the source curve.", "Remark 1.4 It has been shown that the stack of pre-stable log maps is proper over the stack of usual pre-stable maps [1], [21], [35].", "Even given this, establishing a proper moduli stack remains a rather difficult and technical step in developing our theory.", "One piece of evidence is that the moduli of underlying $R$ -maps fails to be universally closed [25] even in the most basic cases.", "The log structure of $\\mathfrak {P}$ plays an important role in properness, as evidenced by the subtle stability condition (REF ), which was found after many failed attempts.", "Remark 1.5 In the case of rank one $\\mathbf {E}$ , the stability (REF ) is equivalent to a formulation similar to that of [25] using $\\mathbf {0}_{\\mathfrak {P}}$ , see Remark REF .", "However, the latter does not generalize to the higher rank case, especially when $\\mathbf {E}$ is non-splitting.", "Consequently, we have to look for a stability condition of a very different form, and a very different strategy for the proof of properness compared to the intuitive proof in [25], see Section  for details."
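The positivity in the stability condition can be unpacked component by component; restricting to an irreducible component $Z \subseteq \mathcal{C}$ and taking degrees, it is the requirement (a direct restatement of the condition above, with no new content):

```latex
(1+\delta)\,\deg_{Z} \omega^{\log}_{\mathcal{C}/S}
  \;+\; k\,\deg_{Z} (\mathfrak{t}\circ f)^{*}\mathcal{H}
  \;+\; \tilde{r}\,\deg_{Z} f^{*}\mathcal{O}(\infty_{\mathfrak{P}})
  \;>\; 0
\qquad \text{for all } k > k_{0},\ 0 < \delta < \delta_{0}.
```

The three terms weigh, respectively, the stability of the source curve, the polarization $\mathcal{H}$ pulled back from the coarse target, and the tangency of $f$ with the boundary divisor $\infty_{\mathfrak{P}}$.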
], [ "The moduli stack", "Denote by ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the category of stable log $R$ -maps fibered over the category of log schemes with fixed discrete data $(g, \\vec{\\varsigma }, \\beta )$ such that $g$ is the genus of the source curve.", "The composition of the log $R$ -map with $\\mathfrak {t}$ has curve class $\\beta \\in H_2(\\mathcal {X})$ .", "$\\vec{\\varsigma }= \\lbrace (\\gamma _i, c_i)\\rbrace _{i=1}^n$ is a collection of pairs such that $c_i$ is the contact order of the $i$ -th marking with $\\infty _{\\mathfrak {P}}$ , and $\\gamma _i$ is a component of the inertia stack fixing the local monodromy at the $i$ -th (orbifold) marking (Definition REF ).", "The first main result of the current article is the compactification: Theorem 1.6 (Theorem REF ) The category ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ is represented by a proper logarithmic Deligne–Mumford stack.", "Remark 1.7 Different choices of data in Section REF may lead to the same $\\mathfrak {P}$ , hence the same ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ .", "The ambiguity in our set-up is analogous to the non-unique choice of R-charge of general GLSM [33]." ], [ "Virtual cycles", "Another goal of this paper is to construct various virtual cycles of (log) $R$ -maps.", "For this purpose, we now impose the condition that $\\mathcal {X}$ is smooth." 
], [ "The canonical virtual cycles", "Olsson's logarithmic cotangent complex [46] provides a canonical perfect obstruction theory for ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ , see Section REF .", "If $c_i = 0$ for all $i$ , we refer to it as a holomorphic theory.", "Otherwise, we call it a meromorphic theory.", "For our purposes, we are particularly interested in the holomorphic theory and the closed substack ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta ) \\subset {R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ where log $R$ -maps factor through $\\mathbf {0}_{\\mathfrak {P}}$ along all marked points.", "We call ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the stack of log R-maps with compact-type evaluations.", "In this case, $\\vec{\\varsigma }$ is simply a collection of connected components of the inertia stack $\\overline{\\mathcal {I}}_{\\mu }\\mathbf {0}_{\\mathfrak {P}_{\\mathbf {k}}} $ of $\\mathbf {0}_{\\mathfrak {P}_{\\mathbf {k}}} := \\mathbf {0}_{\\mathfrak {P}}\\times _{\\mathbf {BC}^*_\\omega }\\operatorname{Spec}\\mathbf {k}$ , as all contact orders are zero.", "The canonical perfect obstruction theory of ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ induces a canonical perfect obstruction theory of ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ , see (REF ), hence defines the canonical virtual cycle $[{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )]^{\\mathrm {vir}}$ ."
], [ "Superpotentials and cosection localized virtual cycles", "A superpotential is a morphism of stacks $W \\colon \\mathfrak {P}^{\\circ } \\rightarrow \\mathcal {L}_\\omega $ over $\\mathbf {BC}^*_\\omega $ .", "Its critical locus $\\operatorname{Crit}(W)$ is the closed substack of $\\mathfrak {P}^{\\circ }$ where $\\operatorname{d}W: T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega } \\rightarrow W^*T_{\\mathcal {L}_\\omega /\\mathbf {BC}^*_\\omega }$ degenerates.", "We will consider the case that $\\operatorname{Crit}(W)$ is proper over $\\mathbf {BC}^*_\\omega $ .", "This $W$ induces a canonical Kiem–Li cosection of the canonical obstruction of the open substack ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta ) \\subset {R}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ .", "This leads to a cosection localized virtual cycle $[{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )]_{\\sigma }$ , which represents $[{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )]^{\\mathrm {vir}}$ , and is supported on the proper substack $\\mathring{{R}}_W \\subset {R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ parameterizing $R$ -maps to the critical locus of $W$ , see Section REF .", "The virtual cycle $[{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )]_{\\sigma }$ is the GLSM virtual cycle that we next recover as a virtual cycle over a proper moduli stack."
], [ "The reduced virtual cycles", "In general, the canonical cosection over ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ does not have a nice extension to ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ .", "The key to resolving this is a proper morphism constructed in [25], called a modular principalization: $F\\colon {U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta ) \\rightarrow {R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ where ${U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ is the moduli of stable log $R$ -maps with uniform maximal degeneracy.", "Note that $F$ restricts to the identity on the common open substack ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ of both its source and target.", "The canonical perfect obstruction theory of ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ pulls back to a canonical perfect obstruction theory of ${U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ , hence the canonical virtual cycle $[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )]^{\\mathrm {vir}}$ .", "Although $F$ does not change the canonical virtual cycle, in the sense that $F_*[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{},\\beta )]^{\\mathrm {vir}} = [{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{},\\beta )]^{\\mathrm {vir}}$ , the cosection over ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ extends to the boundary $\\Delta _{{U}} := {U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta ) \\setminus {R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ with explicit poles (REF ).", "A general machinery developed in Section  then produces a reduced perfect obstruction theory of ${U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ , hence the reduced 
virtual cycle $[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )]^{\\mathrm {red}}$ , see Section REF .", "Remark 1.8 The two virtual cycles $[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )]^{\\mathrm {vir}}$ and $[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P},\\beta )]^{\\mathrm {red}}$ have the same virtual dimension." ], [ "Reduced versus cosection localized cycle", "We first show that log GLSM recovers GLSM: Theorem 1.9 (First comparison theorem REF ) Let $\\iota \\colon \\mathring{{R}}_W \\rightarrow {U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ be the inclusion (REF ).", "Then we have $\\iota _*[{R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )]_{\\sigma } = [{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )]^{\\mathrm {red}}.$ In Section , we study a few examples explicitly.", "By the first comparison theorem, the reduced virtual cycle of the compact moduli space of stable log $R$ -maps recovers FJRW-theory and Clader's hybrid model when they are constructed using cosection localized virtual cycles [17], [28].", "Our machinery also applies to the Gromov–Witten theory of a complete intersection, or more generally the zero locus $\\mathcal {Z}$ of a non-degenerate section $s$ of a vector bundle $\\mathbf {E}$ ; see Section REF .", "Examples include quintic threefolds in $\\mathbb {P}^4$ and Weierstrass elliptic fibrations, which are hypersurfaces in a $\\mathbb {P}^2$ -bundle over a not necessarily toric base $B$ .", "In this case, we may choose $r=1$ and $\\underline{\\mathfrak {P}} = \\mathbb {P}(\\mathbf {E}^\\vee \\otimes \\mathcal {L}_\\omega \\oplus \\mathcal {O})$ .", "Combining with the results in [16], [44], [20], and more generally in [26], [49], we have Corollary 1.10 (Proposition REF ) With notation as above, we have $p_*[{U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )]^{\\mathrm {red}}= 
(-1)^{\\operatorname{rk}(\\mathbf {E})(1 - g) + \\int _\\beta c_1(\\mathbf {E}) - \\sum _{j = 1}^n \\operatorname{age}_j(\\mathbf {E})} \\iota _*[{M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta )]^\\mathrm {vir}$ where $p\\colon {U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta ) \\rightarrow {M}_{g,\\vec{\\varsigma }}(\\mathcal {X}, \\beta )$ sends a log $R$ -map to the underlying stable map to $\\mathcal {X}$ , ${M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta )$ is the moduli of stable maps to $\\mathcal {Z}$ , and $\\iota \\colon {M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta ) \\rightarrow {M}_{g,\\vec{\\varsigma }}(\\mathcal {X}, \\beta )$ is the inclusion.", "Therefore, Gromov–Witten invariants of $\\mathcal {Z}$ (involving only cohomology classes from the ambient $\\mathcal {X}$ ) can be computed in terms of (log) GLSM invariants defined using $(\\mathcal {X}, W)$ ." ], [ "Canonical versus reduced virtual cycles", "The canonical perfect obstruction theory and the canonical cosection of ${U}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ together define a reduced perfect obstruction theory of $\\Delta _{{U}}$ , hence the reduced virtual cycle $[\\Delta _{{U}}]^{\\mathrm {red}}$ , see Section REF .", "The following relates the reduced virtual cycle to the canonical virtual cycle via a third virtual cycle: Theorem 1.11 (Second comparison theorem REF ) $[{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}, \\beta )]^\\mathrm {vir}= [{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}, \\beta )]^\\mathrm {red}+ \\tilde{r}[\\Delta _{{U}}]^\\mathrm {red}.$ By Lemma REF , $\\tilde{r}$ is the order of poles of $W$ along $\\infty _{\\mathfrak {P}}$ .", "In particular, it is a positive integer.", "The fact that the difference between the reduced and canonical virtual cycles is again virtual allows us to further decompose $[\\Delta _{{U}}]^{\\mathrm {red}}$ in [24] in terms of canonical and reduced virtual cycles of 
punctured and meromorphic theories using [2], [3].", "This is an important ingredient in the proof of the structural properties of Gromov–Witten invariants of quintics in [39]." ], [ "Change of twists", "Let $a_1, a_2$ be two twisting choices leading to two targets $\\mathfrak {P}_1$ and $\\mathfrak {P}_2$ respectively.", "Assume that $\\frac{a_1}{a_2} \\in \\mathbb {Z}$ .", "Then there is a morphism $\\mathfrak {P}_1 \\rightarrow \\mathfrak {P}_2$ obtained by taking the $\\frac{a_1}{a_2}$ -th root stack along $\\infty _{\\mathfrak {P}_2}$ .", "Theorem 1.12 (Change of twist theorem REF ) There is a canonical morphism $\\nu _{a_1/a_2} \\colon {U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_1,\\beta ) \\rightarrow {U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta )$ induced by $\\mathfrak {P}_1 \\rightarrow \\mathfrak {P}_2$ .", "Pushing forward virtual cycles along $\\nu _{a_1/a_2}$ , we have $\\nu _{{a_1/a_2},*}[{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_1,\\beta )]^{\\mathrm {vir}} = [{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta )]^{\\mathrm {vir}}$ , $\\nu _{{a_1/a_2},*}[{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_1,\\beta )]^{\\mathrm {red}} = [{U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta )]^{\\mathrm {red}}$ , $\\nu _{{a_1/a_2},*}[\\Delta _{{U},1}]^{\\mathrm {red}} = \\frac{a_2}{a_1} \\cdot [\\Delta _{{U},2}]^{\\mathrm {red}},$ where $\\Delta _{{U},i} \\subset {U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_i,\\beta )$ is the boundary (REF ) for $i=1,2$ .", "Remark 1.13 The flexibility of twisting choices allows different targets with isomorphic infinity hyperplanes.", "The above push-forwards together with the decomposition formulas in [24] will provide relations among invariants of different targets.", "For example, they can be used to prove the Landau–Ginzburg/Calabi–Yau correspondence for quintic threefolds [37], as well as to prove a formula [48] for the 
class of the locus of holomorphic differentials with specified zeros [22]." ], [ "Plan of Paper", "The paper is organized as follows.", "In Section , we introduce stable $R$ -maps and collect the basic properties of their moduli spaces.", "The canonical and reduced virtual cycles are constructed and the comparison theorems are proven in Section .", "In Section , we work out several examples explicitly.", "Theorem REF is proven in Section .", "Section  discusses reducing virtual cycles along the boundary in more generality, and is used extensively in Section ." ], [ "Acknowledgments", "The first author would like to thank Dan Abramovich, Mark Gross and Bernd Siebert for the collaborations on foundations of stable log maps which influenced the development of this project.", "The last two authors wish to thank Shuai Guo for the collaborations which inspired the current work.", "The authors would like to thank Adrien Sauvaget, Rachel Webb and Dimitri Zvonkine for discussions related to the current work.", "The authors thank Huai-Liang Chang, Young-Hoon Kiem, Jun Li and Wei-Ping Li for their inspiring works on cosection localization needed in our construction.", "Part of this research was carried out during a visit to the Institute for Advanced Studies in Mathematics at Zhejiang University.", "Three of us would like to thank the Institute for its support.", "The first author was partially supported by NSF grants DMS-1700682 and DMS-2001089.", "The second author was partially supported by an AMS Simons Travel Grant and NSF grants DMS-1901748 and DMS-1638352.", "The last author was partially supported by the Institute for Advanced Study in Mathematics of Zhejiang University, NSF grant DMS-1405245 and NSF FRG grant DMS-1159265." 
], [ "Notations", "In this paper, we work over an algebraically closed field of characteristic zero, denoted by $\\mathbf {k}$ .", "All log structures are assumed to be fine and saturated [41] unless otherwise specified.", "A list of notations is provided below: $\\operatorname{Vb}(V)$ the total space of a vector bundle $V$ $\\mathbb {P}^\\mathbf {w}(V)$ the weighted projective bundle stack with weights $\\mathbf {w}$ $\\mathcal {X}$ a proper Deligne-Mumford stack with a projective coarse moduli $\\mathcal {X}\\rightarrow X$ the coarse moduli morphism $\\mathbf {BC}^*_\\omega $ the universal stack of $\\mathbb {C}^*_{\\omega }$ -torsors $r$ a positive integer $\\mathcal {L}_{\\mathfrak {X}}\\rightarrow \\mathfrak {X}$ universal $r$ -spin bundle $\\mathfrak {P}\\rightarrow \\mathbf {BC}^*_\\omega $ the target of log $R$ -maps $\\underline{\\mathcal {C}}\\rightarrow \\underline{S}$ a family of underlying curves over $\\underline{S}$ $\\underline{\\mathcal {C}}\\rightarrow \\underline{C}$ the coarse moduli morphism of underlying curves $\\mathcal {C}\\rightarrow S$ a family of log curves over $S$ $\\mathcal {C}\\rightarrow C$ the coarse moduli morphism of log curves $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ a log $R$ -map $\\beta $ a curve class in $\\mathcal {X}$ $n$ the number of markings $\\vec{\\varsigma }$ collection of discrete data at all markings ${R}_{g,\\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ the moduli stack of stable $R$ -maps ${R}_{g,\\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the moduli stack of stable log $R$ -maps ${U}_{g,\\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the moduli stack of stable log $R$ -maps with uniform maximal degeneracy ${R}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ the moduli stack of stable $R$ -maps with compact type evaluations ${R}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the moduli stack of stable log $R$ -maps with compact 
type evaluations ${U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P},\\beta )$ the moduli stack of stable log $R$ -maps with compact type evaluations and uniform maximal degeneracy $W\\colon \\mathfrak {P}^\\circ \\rightarrow \\mathcal {L}_\\omega $ the superpotential" ], [ "Twisted curves and pre-stable maps", "We first collect some basic notions needed in our construction." ], [ "Twisted curves", "Recall from [7] that a twisted $n$ -pointed curve over a scheme $\\underline{S}$ consists of the following data $(\\underline{\\mathcal {C}} \\rightarrow \\underline{C} \\rightarrow \\underline{S}, \\lbrace p_i\\rbrace _{i=1}^n)$ where $\\underline{\\mathcal {C}}$ is a Deligne–Mumford stack proper over $\\underline{S}$ , and étale locally is a nodal curve over $\\underline{S}$ .", "$p_i \\subset \\underline{\\mathcal {C}}$ are disjoint closed substacks in the smooth locus of $\\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ .", "$p_i \\rightarrow \\underline{S}$ are étale gerbes banded by the multiplicative group $\\mu _{r_i}$ for some positive integer $r_i$ .", "the morphism $\\underline{\\mathcal {C}} \\rightarrow \\underline{C}$ is the coarse moduli morphism.", "Each node of $\\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ is balanced.", "$\\underline{\\mathcal {C}} \\rightarrow \\underline{C}$ is an isomorphism over $\\underline{\\mathcal {C}}_{gen}$ , where $\\underline{\\mathcal {C}}_{gen}$ is the complement of the markings and the stacky critical locus of $\\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ .", "The balancing condition means that formally locally near a node, the geometric fiber is isomorphic to the stack quotient $[\\operatorname{Spec}\\big (\\mathbf {k}[x,y]/(xy)\\big ) \\big / \\mu _k]$ where $\\mu _k$ is some cyclic group with the action $\\zeta (x,y) = (\\zeta \\cdot x, \\zeta ^{-1}\\cdot y)$ .", "Given a twisted curve as above, by [7] the coarse space $\\underline{C} \\rightarrow \\underline{S}$ is a family of $n$ 
-pointed usual pre-stable curves over $\\underline{S}$ with the markings determined by the images of $\\lbrace p_i\\rbrace $ .", "The genus of the twisted curve $\\underline{\\mathcal {C}}$ is defined as the genus of the corresponding coarse pre-stable curve $\\underline{C}$ .", "When there is no danger of confusion, we will simply write $\\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ , and the terms twisted curve and pre-stable curve are interchangeable." ], [ "Logarithmic curves", "An $n$ -pointed log curve over a fine and saturated log scheme $S$ in the sense of [47] consists of $(\\pi \\colon \\mathcal {C}\\rightarrow S, \\lbrace p_i\\rbrace _{i=1}^n)$ such that: The underlying data $(\\underline{\\mathcal {C}} \\rightarrow \\underline{C} \\rightarrow \\underline{S}, \\lbrace p_i\\rbrace _{i=1}^n)$ is a twisted $n$ -pointed curve over $\\underline{S}$ .", "$\\pi $ is a proper, logarithmically smooth, and integral morphism of fine and saturated logarithmic stacks.", "If $\\underline{U} \\subset \\underline{\\mathcal {C}}$ is the non-singular locus of $\\underline{\\pi }$ , then $\\overline{\\mathcal {M}}_{\\mathcal {C}}|_{\\underline{U}} \\cong \\pi ^*\\overline{\\mathcal {M}}_{S}\\oplus \\bigoplus _{i=1}^{n}\\mathbb {N}_{p_i}$ where $\\mathbb {N}_{p_i}$ is the constant sheaf over $p_i$ with fiber $\\mathbb {N}$ .", "For simplicity, we may refer to $\\pi \\colon \\mathcal {C}\\rightarrow S$ as a log curve when there is no danger of confusion.", "The pull-back of a log curve $\\pi \\colon \\mathcal {C}\\rightarrow S$ along an arbitrary morphism of fine and saturated log schemes $T \\rightarrow S$ is the log curve $\\pi _T\\colon \\mathcal {C}_T:= \\mathcal {C}\\times _S T \\rightarrow T$ with the fiber product taken in the category of fine and saturated log stacks.", "Given a log curve $\\mathcal {C}\\rightarrow S$ , we associate the log cotangent bundle $\\omega ^{\\log }_{\\mathcal {C}/S} := \\omega _{\\underline{\\mathcal 
{C}}/\\underline{S}}(\\sum _i p_i)$ where $\\omega _{\\underline{\\mathcal {C}}/\\underline{S}}$ is the relative dualizing line bundle of the underlying family $\\underline{\\mathcal {C}}\\rightarrow \\underline{S}$ ." ], [ "Logarithmic $R$ -maps as logarithmic fields", "In this subsection, we reformulate the notion of a log $R$ -map in terms of the more concrete notion of spin-maps with fields.", "This will be useful for relating to previous constructions in GLSM (see Section ), and for some of the proofs in Section .", "Definition 2.1 Let $\\underline{g}\\colon \\underline{\\mathcal {C}} \\rightarrow \\underline{\\mathcal {X}}$ be a pre-stable map over $\\underline{S}$ .", "An $r$ -spin structure of $\\underline{g}$ is a line bundle $\\mathcal {L}$ over $\\underline{\\mathcal {C}}$ together with an isomorphism $\\mathcal {L}^{\\otimes r} \\cong \\omega ^{\\log }_{\\underline{\\mathcal {C}}/\\underline{S}}\\otimes g^*\\mathbf {L}^{\\vee }.$ The pair $(\\underline{g}, \\mathcal {L})$ is called an $r$ -spin map.", "Given a log map $g\\colon \\mathcal {C}\\rightarrow \\mathcal {X}$ over $S$ and an $r$ -spin structure $\\mathcal {L}$ of the underlying map $\\underline{g}$ , we introduce a weighted projective stack bundle over $\\underline{\\mathcal {C}}$ : $\\underline{\\mathcal {P}}_{\\mathcal {C}} := \\mathbb {P}^\\mathbf {w}\\left(\\bigoplus _{i > 0} (g^*(\\mathbf {E}_i^\\vee ) \\otimes \\mathcal {L}^{\\otimes i}) \\oplus \\mathcal {O}\\right)$ where $\\mathbf {w}$ indicates the weights of the $\\mathbb {G}_m$ -action as in (REF ).", "The Cartier divisor $\\infty _{\\mathcal {P}} \\subset \\underline{\\mathcal {P}}_{\\mathcal {C}}$ , defined by the vanishing of the last coordinate, is called the infinity hyperplane.", "Let $\\mathcal {M}_{\\infty _{\\mathcal {P}}}$ be the log structure on $\\underline{\\mathcal {P}}_{\\mathcal {C}}$ associated to the Cartier divisor $\\infty _{\\mathcal {P}}$ , see [41].", "Form the log stack $\\mathcal {P}_{\\mathcal {C}} := 
(\\underline{\\mathcal {P}}_{\\mathcal {C}}, \\mathcal {M}_{\\mathcal {P}_\\mathcal {C}} := \\mathcal {M}_{\\mathcal {C}}|_{\\underline{\\mathcal {P}}_{\\mathcal {C}}}\\oplus _{\\mathcal {O}^*}\\mathcal {M}_{\\infty _{\\mathcal {P}}}),$ with the natural projection $\\mathcal {P}_{\\mathcal {C}} \\rightarrow \\mathcal {C}$ .", "Definition 2.2 A log field over an $r$ -spin map $(g, \\mathcal {L})$ is a section $\\rho \\colon \\mathcal {C}\\rightarrow \\mathcal {P}_{\\mathcal {C}}$ of $\\mathcal {P}_{\\mathcal {C}} \\rightarrow \\mathcal {C}$ .", "The triple $(g, \\mathcal {L}, \\rho )$ over $S$ is called an $r$ -spin map with a log field.", "The pull-back of an $r$ -spin map with a log field is defined as the pull-back of log maps.", "We now show that the two notions — log $R$ -maps and pre-stable maps with log fields — are equivalent.", "Proposition 2.3 Fix a log map $g\\colon \\mathcal {C}\\rightarrow \\mathcal {X}$ over $S$ , and consider the following diagram of solid arrows ${\\mathcal {C}@{-->}[rrd] @/^2ex/@{-->}[rrrd] @/^4ex/[rrrrd]^{g} @/_6ex/[rrrdd]_{\\omega ^{\\log }_{\\mathcal {C}/S}} &&& \\\\&& \\mathfrak {P}[r] & \\mathfrak {X}[d]^{\\zeta } [r] & \\mathcal {X}\\\\&& & \\mathbf {BC}^*_\\omega &}$ We have the following equivalences: The data of an $r$ -spin map $(g, \\mathcal {L})$ is equivalent to a morphism $\\mathcal {C}\\rightarrow \\mathfrak {X}$ making the above diagram commutative.", "The data of a log field $\\rho $ over a given $r$ -spin map $(g, \\mathcal {L})$ is equivalent to giving a log $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ making the above diagram commutative.", "The first equivalence follows from Definition REF and (REF ).", "Note that $(g, \\mathcal {L})$ induces a commutative diagram of solid arrows with all squares cartesian: ${\\mathcal {P}_\\mathcal {C}[d] [r] &\\mathfrak {P}_{\\mathcal {C}} [r] [d] & \\mathfrak {P}[d] \\\\\\mathcal {C}[r] [rd]_{=} @/^1pc/@{-->}[u]^{\\rho }& \\mathfrak {X}_{\\mathcal {C}} [r] [d] 
& \\mathfrak {X}[d] \\\\& \\mathcal {C}[r]^{\\omega _{\\mathcal {C}/S}^{\\log }} & \\mathbf {BC}^*_\\omega .", "}$ Thus (2) follows from the universal property of cartesian squares.", "Definition 2.4 An $r$ -spin map with a log field is stable if the corresponding $R$ -map is stable.", "Let $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a logarithmic $R$ -map over $S$ , and $\\rho \\colon \\mathcal {C}\\rightarrow \\mathcal {P}_\\mathcal {C}$ be the corresponding logarithmic field.", "Using (REF ), we immediately obtain $f^* \\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}}) = \\rho ^* \\mathcal {O}(\\tilde{r}\\infty _{\\mathcal {P}_{\\mathcal {C}}}),$ hence the following equivalent description of the stability condition: Corollary 2.5 A pre-stable map with log field $(g, \\mathcal {L}, \\rho )$ over $S$ is stable iff the corresponding $R$ -map $f$ is representable, and if for a sufficiently small $\\delta _0 \\in (0,1)$ there exists $k_0 > 1$ such that for any pair $(k,\\delta )$ satisfying $k > k_0$ and $\\delta _0 > \\delta > 0$ , the following holds $(\\omega _{\\mathcal {C}/S}^{\\log })^{1 + \\delta } \\otimes g^* \\mathcal {H}^{\\otimes k} \\otimes \\rho ^* \\mathcal {O}(\\tilde{r}\\infty ) > 0.$ Remark 2.6 The condition (REF ) is compatible with the stability of log $r$ -spin fields in [25].", "Let $\\mathcal {X}= \\operatorname{Spec}\\mathbf {k}$ and $\\rho \\colon \\mathcal {C}\\rightarrow \\mathcal {P}_\\mathcal {C}$ be a log $r$ -spin field over $S$ as in [25].", "The stability of $\\rho $ is equivalent to $0 & < \\omega ^{\\log }_{\\mathcal {C}/S}\\otimes \\rho ^*\\mathcal {O}(k\\cdot \\mathbf {0}_{\\mathcal {P}}) \\\\&= \\omega ^{\\log }_{\\mathcal {C}/S}\\otimes \\mathcal {L}^{\\otimes k}\\otimes \\rho ^*\\mathcal {O}(k\\cdot \\infty _{\\mathcal {P}}) \\\\&= \\left((\\omega ^{\\log }_{\\mathcal {C}/S})^{1+\\frac{r}{k}}\\otimes \\rho ^*\\mathcal {O}(r\\cdot \\infty _{\\mathcal {P}})\\right)^{\\otimes \\frac{k}{r}}$ for $k \\gg 0$ .", "Now 
replacing $\\frac{r}{k}$ by $\\delta $ in $(\\omega ^{\\log }_{\\mathcal {C}/S})^{1+\\frac{r}{k}}\\otimes \\rho ^*\\mathcal {O}(r\\cdot \\infty _{\\mathcal {P}}) > 0,$ we recover (REF ) as desired." ], [ "The structure of the infinity divisor", "For later use, we would like to study the structure of $\\infty _{\\mathfrak {P}}$ .", "Let $\\mathbf {w}$ and $\\mathbf {w}^{\\prime }$ be two weights as in (REF ) such that $\\mathbf {w}^{\\prime }$ corresponds to $a= \\frac{1}{d}$ .", "Consider $\\mathbf {w}_{\\infty }$ (resp.", "$\\mathbf {w}^{\\prime }_{\\infty }$ ) obtained by removing the weight of the $\\mathcal {O}$ factor from $\\mathbf {w}$ (resp.", "$\\mathbf {w}^{\\prime }$ ).", "Since $\\gcd (\\mathbf {w}^{\\prime }_{\\infty }) = 1$ , we observe that $\\infty _{\\mathfrak {P}^{\\prime }} = \\mathbb {P}^{\\mathbf {w}^{\\prime }_{\\infty }}\\big (\\bigoplus _i\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\otimes \\mathcal {L}_{\\mathfrak {X}}^{\\otimes i}\\big ) \\cong \\mathbb {P}^{\\mathbf {w}^{\\prime }_{\\infty }}\\big (\\bigoplus _i\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\big ).$ In particular, there is a cartesian diagram ${\\infty _{\\mathfrak {P}^{\\prime }} [rr] [d] && \\infty _{\\mathcal {X}} := \\mathbb {P}^{\\mathbf {w}^{\\prime }_{\\infty }}\\big (\\bigoplus _i\\mathbf {E}^{\\vee }_{i}\\big ) [d] \\\\\\mathfrak {X}[rr] && \\mathcal {X}.", "}$ To fix the notation, denote by $\\mathcal {O}_{\\infty _{\\mathcal {X}}}(1)$ the tautological line bundle over $\\infty _{\\mathcal {X}}$ associated to the upper right corner, and by $\\mathcal {O}_{\\infty _{\\mathfrak {P}^{\\prime }}}(1)$ the pull-back of $\\mathcal {O}_{\\infty _{\\mathcal {X}}}(1)$ via the top horizontal arrow.", "Let $\\mathcal {O}_{\\mathfrak {P}^{\\prime }}(1)$ be the tautological line bundle associated to the expression of $\\mathfrak {P}^{\\prime }$ as in (REF ).", "Let $\\ell = \\gcd (\\mathbf {w}_{\\infty })$ .", "Observe that $\\underline{\\mathfrak {P}} \\rightarrow 
\\underline{\\mathfrak {P}}^{\\prime }$ is an $\\ell $ -th root stack along $\\infty _{\\mathfrak {P}^{\\prime }}$ .", "Thus, $\\infty _{\\mathfrak {P}}$ parameterizes $\\ell $ -th roots of the normal bundle $N_{\\infty _{\\mathfrak {P}^{\\prime }}/\\mathfrak {P}^{\\prime }}$ over $\\infty _{\\mathfrak {P}^{\\prime }}$ .", "In particular, the morphism $\\infty _{\\mathfrak {P}} \\rightarrow \\infty _{\\mathfrak {P}^{\\prime }}$ is a $\\mu _{\\ell }$ -gerbe.", "As shown below, the small number “$\\delta $ ” in the stability condition (REF ) plays an important role in stabilizing components in $\\infty _{\\mathfrak {P}}$ .", "Proposition 2.7 Consider an underlying $R$ -map ${&& \\infty _{\\mathfrak {P}} [d]^{\\zeta \\circ \\mathfrak {p}} \\\\\\underline{\\mathcal {C}}[rru]^{\\underline{f}} [rr]_{\\omega ^{\\log }_{\\mathcal {C}/S}}&& \\mathbf {BC}^*_\\omega }$ over a geometric point.", "Consider the following commutative diagram ${\\underline{\\mathcal {C}}[r] @/^3ex/[rr]^{\\underline{f}^{\\prime }} @/_3ex/[rrr]_{\\underline{f}_{\\mathcal {X}}} & \\infty _{\\mathfrak {P}} [r] & \\infty _{\\mathfrak {P}^{\\prime }} [r] & \\infty _{\\mathcal {X}}}$ Then we have $\\underline{f}^* \\mathcal {O}_{\\mathfrak {P}}(\\tilde{r}\\infty _{\\mathfrak {P}})= (\\omega ^{\\log }_{\\underline{\\mathcal {C}}})^{\\vee }\\otimes \\underline{f}^*\\mathbf {L}\\otimes (\\underline{f}_\\mathcal {X})^*\\mathcal {O}_{\\infty _{\\mathcal {X}}}(\\frac{r}{d}).$ Furthermore, we have $(\\omega _{\\underline{\\mathcal {C}}}^{\\log })^{1 + \\delta } \\otimes (\\mathfrak {t}\\circ f)^* \\mathcal {H}^{\\otimes k} \\otimes f^* \\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}}) > 0$ if and only if the coarse of $\\underline{f}_{\\mathcal {X}}$ is stable in the usual sense.", "Recall $\\mathbf {w}$ corresponds to the choice $a= \\frac{\\ell }{d}$ .", "We have $\\underline{f}^* \\mathcal {O}_{\\mathfrak {P}}(\\tilde{r}\\infty _{\\mathfrak {P}}) = (\\underline{f}^{\\prime })^*\\mathcal {O}_{\\mathfrak 
{P}^{\\prime }}(\\frac{r}{d}\\cdot \\infty _{\\mathfrak {P}^{\\prime }}).$ Since $\\mathcal {O}_{\\mathfrak {P}^{\\prime }}(\\infty _{\\mathfrak {P}^{\\prime }}) \\cong \\mathcal {O}_{\\mathfrak {P}^{\\prime }}(1)$ , we calculate $(\\underline{f}^{\\prime })^*\\mathcal {O}_{\\mathfrak {P}^{\\prime }}(\\infty _{\\mathfrak {P}^{\\prime }})|_{\\infty _{\\mathfrak {P}^{\\prime }}} \\cong (\\underline{f}^{\\prime })^*\\mathcal {O}_{\\infty _{\\mathfrak {P}^{\\prime }}}(1)\\otimes \\mathcal {L}^{\\otimes -d} = (\\underline{f}_{\\mathcal {X}})^*\\mathcal {O}_{\\infty _{\\mathcal {X}}}(1)\\otimes \\mathcal {L}^{\\otimes -d}.$ Equation (REF ) is proved by combining the above calculation and Definition REF .", "Now using (REF ), we obtain $(\\omega _{\\underline{\\mathcal {C}}}^{\\log })^{1 + \\delta } \\otimes (\\mathfrak {t}\\circ f)^* \\mathcal {H}^{\\otimes k} \\otimes \\underline{f}^* \\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}}) \\\\\\cong (\\omega _{\\underline{\\mathcal {C}}}^{\\log })^\\delta \\otimes (\\mathfrak {t}\\circ f)^* \\mathcal {H}^{\\otimes k} \\otimes (\\mathfrak {t}\\circ f)^*\\mathbf {L}\\otimes (f_\\mathcal {X})^*\\mathcal {O}_{\\infty _{\\mathcal {X}}}(\\frac{r}{d}).$ Let $\\underline{\\mathcal {Z}} \\subset \\underline{\\mathcal {C}}$ be an irreducible component.", "Note that (REF ) holds over $\\underline{\\mathcal {Z}}$ for $k \\gg 0$ unless $\\mathfrak {t}\\circ f$ contracts $\\mathcal {Z}$ to a point.", "Suppose we are in the latter situation, hence both $(\\mathfrak {t}\\circ f)^* \\mathcal {H}^{\\otimes k}$ and $(\\mathfrak {t}\\circ f)^*\\mathbf {L}$ have degree zero over $\\underline{\\mathcal {Z}}$ .", "Since $\\underline{f}_{\\mathcal {X}}^*\\mathcal {O}_{\\infty _\\mathcal {X}}(1)|_{\\underline{\\mathcal {Z}}}$ has non-negative degree and $1 \\gg \\delta > 0$ , (REF ) holds if and only if either $\\underline{f}_{\\mathcal {X}}(\\underline{\\mathcal {Z}})$ is not a point, or $\\omega ^{\\log }_{\\underline{\\mathcal {C}}}|_{\\mathcal 
{Z}}$ is positive.", "This proves the second statement.", "Corollary 2.8 Let $\\underline{f}\\colon \\underline{\\mathcal {C}} \\rightarrow \\underline{\\mathfrak {P}}$ be an underlying $R$ -map.", "Then a rational bridge $\\mathcal {Z}\\subset \\underline{\\mathcal {C}}$ fails to satisfy the stability condition (REF ) if and only if $\\deg \\underline{f}^*\\mathcal {O}_{\\mathfrak {P}}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} = 0$ and $\\deg (\\mathfrak {t}\\circ f)^*\\mathcal {H}|_{\\mathcal {Z}} = 0$ .", "Suppose $\\mathcal {Z}$ is unstable.", "Then (REF ) implies that $\\deg (\\mathfrak {t}\\circ f)^*\\mathcal {H}^{\\otimes k}|_{\\mathcal {Z}} = 0$ for any $k \\gg 0$ , hence $\\deg (\\mathfrak {t}\\circ f)^*\\mathcal {H}|_{\\mathcal {Z}} = 0$ .", "Since $\\omega ^{\\log }_{\\mathcal {C}/S}|_{\\mathcal {Z}} = \\mathcal {O}_{\\mathcal {Z}}$ , we have $\\deg \\underline{f}^*\\mathcal {O}_{\\mathfrak {P}}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} \\le 0$ .", "It is clear that $\\deg \\underline{f}^*\\mathcal {O}_{\\mathfrak {P}}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} \\ge 0$ if $f(\\mathcal {Z}) \\not\\subset \\infty _{\\mathfrak {P}}$ .", "If $f(\\mathcal {Z}) \\subset \\infty _{\\mathfrak {P}}$ , then (REF ) implies $\\deg \\underline{f}^*\\mathcal {O}_{\\mathfrak {P}}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} \\ge 0$ .", "Thus we have $\\deg \\underline{f}^*\\mathcal {O}_{\\mathfrak {P}}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} = 0$ .", "The other direction follows immediately from (REF )." ], [ "The combinatorial structures", "The minimality or basicness of stable log maps, which plays a crucial role in constructing the moduli of stable log maps, was introduced in [1], [21], [35].", "Based on their construction, a modification called minimality with uniform maximal degeneracy has been introduced in [25] for the purpose of constructing reduced virtual cycles.", "We recall these constructions for later reference." 
], [ "Degeneracies and contact orders", "We fix a log $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ over $S$ .", "Consider the induced morphism of characteristic sheaves: $f^{\\flat } \\colon f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}} \\rightarrow \\overline{\\mathcal {M}}_{\\mathcal {C}}.$ Note that characteristic sheaves are constructible.", "We recall the following terminologies.", "(1)Degeneracies of irreducible components.", "An irreducible component $\\mathcal {Z}\\subset \\mathcal {C}$ is called degenerate if $(f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_{\\eta _\\mathcal {Z}}\\cong \\mathbb {N}$ where $\\eta _\\mathcal {Z}\\in \\mathcal {Z}$ is the generic point, and non-degenerate otherwise.", "Equivalently, $\\mathcal {Z}$ is degenerate iff $f(\\mathcal {Z}) \\subset \\infty _{\\mathfrak {P}}$ .", "For a degenerate $\\mathcal {Z}$ , write $e_\\mathcal {Z}= \\bar{f}^{\\flat }(1)_{\\eta _\\mathcal {Z}} \\in \\overline{\\mathcal {M}}_{\\mathcal {C}, \\eta _\\mathcal {Z}} = \\overline{\\mathcal {M}}_{S}$ and call it the degeneracy of $\\mathcal {Z}$ .", "If $\\mathcal {Z}$ is non-degenerate, set $e_\\mathcal {Z}= 0$ .", "An irreducible component $\\mathcal {Z}$ is called a maximally degenerate component if $e_{\\mathcal {Z}^{\\prime }} \\preccurlyeq e_{\\mathcal {Z}}$ for any irreducible component $\\mathcal {Z}^{\\prime }$ .", "Here for $e_1, e_2 \\in \\overline{\\mathcal {M}}_{S}$ , we define $e_1 \\preccurlyeq e_2$ iff $(e_2 - e_1) \\in \\overline{\\mathcal {M}}_S$ .", "(2)The structure at markings.", "Let $p \\in \\mathcal {Z}$ be a marked point.", "Consider $(f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_{p} \\stackrel{\\bar{f}^{\\flat }}{\\longrightarrow } \\overline{\\mathcal {M}}_{\\mathcal {C},p} \\cong \\overline{\\mathcal {M}}_S\\oplus \\mathbb {N}\\longrightarrow \\mathbb {N}$ where the arrow on the right is the projection.", "If $(f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_{p} \\cong \\mathbb {N}$ or equivalently $f(p) 
\\in \\infty _{\\mathfrak {P}}$ , we denote by $c_p \\in \\mathbb {Z}_{\\ge 0}$ the image of $1 \\in \\mathbb {N}$ via the above composition, and $c_p = 0$ otherwise.", "We call $c_p$ the contact order at the marking $p$ .", "Contact orders are a generalization of tangency multiplicities in the log setting.", "(3)The structure at nodes.", "Define the natural partial order $\\preccurlyeq $ on the set of irreducible components of $\\mathcal {C}$ such that $\\mathcal {Z}_1 \\preccurlyeq \\mathcal {Z}_2$ iff $(e_{\\mathcal {Z}_2} - e_{\\mathcal {Z}_1}) \\in \\overline{\\mathcal {M}}_S$ .", "Let $q \\in \\mathcal {C}$ be a node joining two irreducible components $\\mathcal {Z}_1$ and $\\mathcal {Z}_2$ with $\\mathcal {Z}_1 \\preccurlyeq \\mathcal {Z}_2$ .", "Then étale locally at $q$ , (REF ) is of the form $(\\bar{f}^{\\flat })_q \\colon (f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_q \\rightarrow \\overline{\\mathcal {M}}_{\\mathcal {C},q} \\cong \\overline{\\mathcal {M}}_{S}\\oplus _{\\mathbb {N}}\\mathbb {N}^2,$ where the two generators $\\sigma _1$ and $\\sigma _2$ of $\\mathbb {N}^2$ correspond to the coordinates of $\\mathcal {Z}_1$ and $\\mathcal {Z}_2$ at $q$ respectively, and the arrow $\\mathbb {N}:= \\langle \\ell _q\\rangle \\rightarrow \\mathbb {N}^2$ is the diagonal $\\ell _q \\mapsto \\sigma _1 + \\sigma _2$ .", "If $(f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_q \\cong \\mathbb {N}$ or equivalently $f(q) \\in \\infty _{\\mathfrak {P}}$ , we have $(\\bar{f}^{\\flat })_q (1) = e_{\\mathcal {Z}_1} + c_q \\cdot \\sigma _1,$ where the non-negative integer $c_q$ is called the contact order at $q$ .", "In this case, we have a relation between the two degeneracies $e_{\\mathcal {Z}_1} + c_q \\cdot \\ell _q = e_{\\mathcal {Z}_2}.$ If $(f^*\\overline{\\mathcal {M}}_{\\mathfrak {P}})_q$ is trivial, then we set the contact order $c_q = 0$ .", "Note that in this case $e_{\\mathcal {Z}_1} = e_{\\mathcal {Z}_2} = 0$ , and (REF ) still holds." 
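The relations above admit a simple concrete instance. The following sketch is our own illustration (not part of the source), applying the node relation $e_{\mathcal{Z}_1} + c_q \cdot \ell_q = e_{\mathcal{Z}_2}$ on a two-component curve:

```latex
% Illustrative sketch (not part of the source): degeneracies and contact
% orders on a curve with two components meeting at a single node q.
\begin{example}
  Let $\mathcal{C} = \mathcal{Z}_1 \cup_q \mathcal{Z}_2$ be a log curve over a
  geometric log point $S$, with $\mathcal{Z}_1$ non-degenerate (so
  $e_{\mathcal{Z}_1} = 0$) and $f(\mathcal{Z}_2) \subset \infty_{\mathfrak{P}}$.
  If the node $q$ has contact order $c_q > 0$, the relation at $q$ gives
  \[
    e_{\mathcal{Z}_2} \;=\; e_{\mathcal{Z}_1} + c_q \cdot \ell_q
    \;=\; c_q \cdot \ell_q \;\in\; \overline{\mathcal{M}}_S ,
  \]
  so $\mathcal{Z}_1 \preccurlyeq \mathcal{Z}_2$, and $\mathcal{Z}_2$ is the
  (unique) maximally degenerate component.
\end{example}
```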
], [ "Minimality", "We recall the construction of minimal monoids in [21], [1], [35].", "The log combinatorial type of the $R$ -map $f$ consists of: $G = \\big (\\underline{G}, V(G) = V^{n}(G) \\cup V^{d}(G), \\preccurlyeq , (c_i)_{i\\in L(G)}, (c_l)_{l\\in E(G)} \\big )$ where $\\underline{G}$ is the dual intersection graph of the underlying curve $\\underline{\\mathcal {C}}$ .", "$V^{n}(G) \\cup V^{d}(G)$ is a partition of $V(G)$ where $V^{d}(G)$ consists of vertices with non-zero degeneracies.", "$\\preccurlyeq $ is the natural partial order on the set $V(G)$ .", "We associate to a leg $i\\in L(G)$ the contact order $c_i \\in \\mathbb {N}$ of the corresponding marking $p_i$ .", "We associate to an edge $l\\in E(G)$ the contact order $c_l \\in \\mathbb {N}$ of the corresponding node.", "We introduce a variable $\\ell _l$ for each edge $l \\in E(G)$ , and a variable $e_v$ for each vertex $v \\in V(G)$ .", "Denote by $h_l$ the relation $ e_{v^{\\prime }} = e_v + c_l\\cdot \\ell _l$ for each edge $l$ with the two ends $v \\preccurlyeq v^{\\prime }$ and contact order $c_l$ .", "Denote by $h_v$ the following relation $e_v = 0$ for each $v \\in V^{n}(G)$ .", "Consider the following abelian group $\\mathcal {G} = \\left(\\big (\\bigoplus _{v \\in V(G)} \\mathbb {Z}e_v\\big ) \\oplus \\big ( \\bigoplus _{l \\in E(G)} \\mathbb {Z}\\ell _l \\big ) \\right) \\big / \\langle h_v, h_l \\ | \\ v\\in V^{n}(G), \\ l \\in E(G) \\rangle $ Let $\\mathcal {G}^{t} \\subset \\mathcal {G}$ be the torsion subgroup.", "Consider the following composition $\\big ( \\bigoplus _{v \\in V(G)} \\mathbb {N}e_v \\big ) \\oplus \\big ( \\bigoplus _{l \\in E(G)} \\mathbb {N}\\ell _l\\big ) \\rightarrow \\mathcal {G} \\rightarrow \\mathcal {G}/\\mathcal {G}^{t}$ Let $\\overline{M}(G)$ be the smallest submonoid that is saturated in $\\mathcal {G}/\\mathcal {G}^{t}$ and contains the image of the above composition.", "We call $\\overline{M}(G)$ the minimal or basic monoid associated to $G$ .", 
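As a sanity check on the construction, the minimal monoid can be computed by hand for the smallest graph with a degenerate vertex. The following is our own illustrative computation (not in the source), using the variables $e_v$, $\ell_l$ and the relations $h_v$, $h_l$ introduced above:

```latex
% Illustrative computation (not part of the source): the minimal monoid of a
% graph G with vertices v_1 in V^n(G), v_2 in V^d(G), and one edge l with
% contact order c_l > 0.
\begin{example}
  The relations are $h_{v_1} \colon e_{v_1} = 0$ (as $v_1 \in V^n(G)$) and
  $h_l \colon e_{v_2} = e_{v_1} + c_l \cdot \ell_l$, so
  \[
    \mathcal{G}
    \;=\; \big( \mathbb{Z} e_{v_1} \oplus \mathbb{Z} e_{v_2} \oplus
          \mathbb{Z} \ell_l \big) \big/ \langle h_{v_1}, h_l \rangle
    \;\cong\; \mathbb{Z} \ell_l ,
  \]
  which is torsion free, i.e.\ $\mathcal{G}^{t} = 0$. The image of
  $\mathbb{N} e_{v_1} \oplus \mathbb{N} e_{v_2} \oplus \mathbb{N} \ell_l$ in
  $\mathcal{G}$ is $\mathbb{N}\, c_l \ell_l + \mathbb{N} \ell_l
  = \mathbb{N} \ell_l$, which is already saturated. Hence
  $\overline{M}(G) \cong \mathbb{N}$, generated by $\ell_l$, and the
  degeneracy $e_{v_2}$ maps to $c_l \cdot \ell_l$.
\end{example}
```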
"Recall from [21] or [25] that there is a canonical map of monoids $\\overline{M}(G) \\rightarrow \\overline{\\mathcal {M}}_S$ induced by sending $e_v$ to the degeneracy of the component associated to $v$ , and sending $\\ell _l$ to the element $\\ell _{q}$ as in (REF ) associated to $l$ .", "In particular, the monoid $\\overline{M}(G)$ is fine, saturated, and sharp.", "Definition 2.9 A log $R$ -map is minimal or basic if over each geometric fiber, the natural morphism (REF ) is an isomorphism." ], [ "Logarithmic $R$ -map with uniform maximal degeneracy", "Definition 2.10 A log $R$ -map is said to have uniform maximal degeneracy if there exists a maximally degenerate component over each geometric fiber, see Section REF (1).", "Let $f \\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a log $R$ -map over a geometric log point $S$ , and $G$ be its log combinatorial type.", "Assume that $f$ has uniform maximal degeneracy, and denote by $V_{\\max } \\subset V(G)$ the collection of vertices with the maximal degeneracy.", "We call $(G, V_{\\max })$ the log combinatorial type with uniform maximal degeneracy, and form the corresponding minimal monoid below.", "Consider the torsion-free abelian group $\\big ( \\overline{M}(G)^{gp}\\big / \\sim \\big )^{tf}$ where $\\sim $ is given by the relations $(e_{v_1} - e_{v_2}) = 0$ for any $v_1, v_2 \\in V_{\\max }$ .", "By abuse of notation, we may use $e_v$ for the image of the degeneracy of the vertex $v$ in $\\big ( \\overline{M}(G)^{gp}\\big / \\sim \\big )^{tf}$ .", "Thus, for any $v \\in V_{\\max }$ their degeneracies in $\\big ( \\overline{M}(G)^{gp}\\big / \\sim \\big )^{tf}$ are identical, denoted by $e_{\\max }$ .", "Let $\\overline{M}(G,V_{\\max })$ be the saturated submonoid in $\\big ( \\overline{M}(G)^{gp}\\big / \\sim \\big )^{tf}$ generated by the image of $\\overline{M}(G) \\rightarrow \\big ( \\overline{M}(G)^{gp}\\big / \\sim \\big )^{tf}$ , and the elements $(e_{\\max } - e_v)$ for any $v \\in V(G)$ .", "By 
[25], there is a natural morphism of monoids $\\overline{M}(G) \\rightarrow \\overline{M}(G,V_{\\max })$ which fits in a commutative diagram ${\\overline{M}(G) [r] [rd]_{\\phi } & \\overline{M}(G,V_{\\max }) [d]^{\\phi _{\\max }}\\\\& \\overline{\\mathcal {M}}_S}$ We call $\\overline{M}(G,V_{\\max })$ the minimal monoid with uniform maximal degeneracy associated to $(G, V_{\\max })$ , or simply the minimal monoid associated to $(G,V_{\\max })$ .", "Definition 2.11 A log $R$ -map is minimal with uniform maximal degeneracy if over each geometric fiber the morphism $\\phi _{\\max }$ is an isomorphism.", "Note that in general a log $R$ -map that is minimal with uniform maximal degeneracy need not be minimal in the sense of Definition REF ." ], [ "The universal logarithmic target", "Consider the log stack $\\mathcal {A}$ with the underlying stack $[\\mathbb {A}^1/\\mathbb {G}_m]$ and log structure induced by its toric boundary.", "It parameterizes Deligne–Faltings log structures of rank one [21].", "Thus there is a canonical strict morphism of log stacks $\\mathfrak {P}\\rightarrow \\mathcal {A}.$ Let $\\infty _{\\mathcal {A}} \\subset \\mathcal {A}$ be the strict closed substack; then $\\infty _{\\mathfrak {P}} = \\infty _{\\mathcal {A}}\\times _{\\mathcal {A}}\\mathfrak {P}$ .", "Given any log $R$ -map $f \\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ , we obtain a log map $f^{\\prime } \\colon \\mathcal {C}\\rightarrow \\mathcal {A}$ by composing with (REF ).", "Then $f^{\\prime }$ and $f$ share the same log combinatorial type (with uniform maximal degeneracy) since $(f^{\\prime })^*\\mathcal {M}_{\\mathcal {A}} \\cong f^*\\mathcal {M}_{\\mathfrak {P}} \\ \\ \\ \\mbox{and} \\ \\ \\ f^{\\flat } = (f^{\\prime })^{\\flat }.$ This point of view will be used later in our construction." 
], [ "The evaluation morphism of the underlying structure", "Denote by $\\mathfrak {P}_\\mathbf {k}:= \\mathfrak {P}\\times _{\\mathbf {BC}^*_\\omega }\\operatorname{Spec}\\mathbf {k}$ where the arrow on the right is the universal $\\mathbb {G}_m$ -torsor.", "Let $\\mathcal {I}_{\\mu }\\mathfrak {P}_\\mathbf {k}$ be the cyclotomic inertia stack of $\\mathfrak {P}_{\\mathbf {k}}$ [5].", "Then $\\mathcal {I}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}} = \\mathcal {I}_{\\mu }\\mathfrak {P}_\\mathbf {k}\\times _{\\mathcal {A}}\\infty _{\\mathcal {A}}$ is the cyclotomic inertia stack of $\\infty _{\\mathfrak {P}_\\mathbf {k}}$ equipped with the pull-back log structure from $\\mathcal {A}$ .", "Lemma 2.12 Let $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a log $R$ -map over $S$ , and $p \\subset \\mathcal {C}$ be a marking.", "Then the restriction $f|_{p}$ factors through $\\mathfrak {P}_\\mathbf {k}\\rightarrow \\mathfrak {P}$ .", "Furthermore, $f$ is representable along $p$ if the induced morphism $p \\rightarrow \\mathfrak {P}_\\mathbf {k}$ is representable.", "Since $\\omega ^{\\log }_{\\mathcal {C}/S}|_{p} \\cong \\mathcal {O}_{p}$ , the composition $\\underline{p} \\rightarrow \\underline{\\mathcal {C}} \\rightarrow \\mathbf {BC}^*_\\omega $ factors through $\\operatorname{Spec}\\mathbf {k}\\rightarrow \\mathbf {BC}^*_\\omega $ .", "This proves the statement.", "Consider the universal gerbes $\\mathcal {I}_{\\mu }\\mathfrak {P}_\\mathbf {k}\\rightarrow \\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}$ and $\\mathcal {I}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}} \\rightarrow \\overline{\\mathcal {I}}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}}$ in $\\mathfrak {P}_\\mathbf {k}$ and $\\infty _{\\mathfrak {P}_\\mathbf {k}}$ [5].", "Let $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a log $R$ -map over $S$ with constant contact order $c_i$ along its $i$ -th marking $p_i \\subset \\mathcal {C}$ .", "Write $\\overline{\\mathcal 
{I}}_{\\mu }^i = \\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}$ if $c_i = 0$ , and $\\overline{\\mathcal {I}}_{\\mu }^i = \\overline{\\mathcal {I}}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}}$ , otherwise.", "By the above lemma, the restriction $f|_{p_i}$ induces the $i$ -th evaluation morphism of the underlying structures $\\operatorname{ev}_i\\colon \\underline{S} \\rightarrow \\overline{\\mathcal {I}}_{\\mu }^{i}$ such that $p_i \\rightarrow \\underline{S}$ is given by the pull-back of the universal gerbe over $\\overline{\\mathcal {I}}_{\\mu }^{i}$ .", "Thus, connected components of $\\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}\\cup \\overline{\\mathcal {I}}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}}$ provide discrete data for log $R$ -maps.", "Note that $\\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}\\cup \\overline{\\mathcal {I}}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}}$ is smooth provided that $\\mathfrak {P}\\rightarrow \\mathbf {BC}^*_\\omega $ , and hence $\\mathfrak {P}_\\mathbf {k}$ , is smooth.", "Definition 2.13 A log sector $\\gamma $ is a connected component of either $\\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}$ or $\\overline{\\mathcal {I}}_{\\mu }\\infty _{\\mathfrak {P}_\\mathbf {k}}$ .", "It is narrow if the gerbes parameterized by $\\gamma $ all avoid $\\infty _{\\mathfrak {P}_\\mathbf {k}}$ .", "A sector of compact type is a connected component of $\\overline{\\mathcal {I}}_{\\mu }\\mathbf {0}_{\\mathfrak {P}_\\mathbf {k}}$ .", "In particular, all narrow sectors are of compact type.", "Due to the fiberwise $\\mathbb {C}^*_\\omega $ -action of $\\mathfrak {P}\\rightarrow \\mathfrak {X}$ , it is easy to see that a sector is narrow iff it parameterizes gerbes in $\\mathbf {0}_{\\mathfrak {P}_\\mathbf {k}}$ .", "Thus, the above definition is compatible with [33].", "Furthermore, since $\\mathbf {0}_{\\mathfrak {P}_\\mathbf {k}}$ and $\\infty _{\\mathfrak {P}_\\mathbf {k}}$ are 
disjoint, the compact-type condition forces the contact order to be trivial." ], [ "The stack of logarithmic $R$ -maps", "The discrete data of a log $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ consists of the genus $g$ and the curve class $\\beta \\in H_2(\\mathcal {X})$ of $\\mathfrak {t}\\circ f$ .", "Furthermore, each marking has discrete data given by its contact order $c$ and the log sector $\\gamma $ .", "Let $\\vec{\\varsigma }= \\lbrace (\\gamma _i, c_i)\\rbrace _{i=1}^n$ be the collection of discrete data at all markings, where $n$ is the number of markings.", "Denote by ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ the stack of stable $R$ -maps over the category of logarithmic schemes with discrete data $g$ , $\\beta $ , $\\vec{\\varsigma }$ .", "Let ${U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ be the category of objects with uniform maximal degeneracy.", "There is a tautological morphism [25] ${U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta ) \\rightarrow {R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ which is representable, proper, log étale, and surjective.", "Furthermore, (REF ) restricts to the identity over the open substack parameterizing log $R$ -maps with images in $\\mathfrak {P}^{\\circ }$ .", "Theorem 2.14 The categories ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ and ${U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ are represented by proper log Deligne–Mumford stacks.", "Since (REF ) is representable and proper, it suffices to verify the statement for ${R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )$ , which will be done in Section .", "A key to the representability is the fact discovered in [1], [21], [35] that the underlying stack $\\underline{{R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )}$ is the stack of minimal objects in Definition REF , and $\\underline{{U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )}$ is the stack of minimal objects in Definition REF , 
see also [25]." ], [ "Change of twists", "This section studies $R$ -maps under the change of twists, in preparation for the proof of the Change of twist theorem (Theorem REF ).", "The reader may skip this section on first reading, and return when studying the proof of Theorem REF .", "Consider two twisting choices $a_1, a_2 \\in \\frac{1}{d}\\cdot \\mathbb {Z}$ such that $\\frac{a_1}{a_2} \\in \\mathbb {Z}$ .", "Let $\\mathfrak {P}_1$ and $\\mathfrak {P}_2$ be the hybrid targets corresponding to the choices of $a_1$ and $a_2$ respectively as in (REF ).", "Then there is a cartesian diagram of log stacks ${\\mathfrak {P}_1 [rr] [d] && \\mathfrak {P}_2 [d] \\\\\\mathcal {A}_1 [rr]^{\\nu } && \\mathcal {A}_2}$ where $\\mathcal {A}_{1}$ and $\\mathcal {A}_2$ are two copies of $\\mathcal {A}$ , the vertical arrows are given by (REF ), and $\\nu $ is the morphism induced by $\\mathbb {N}\\rightarrow \\mathbb {N}, 1 \\mapsto \\frac{a_1}{a_2}$ on the level of characteristic monoids.", "Note that the top is the $\\frac{a_1}{a_2}$ -th root stack along $\\infty _{\\mathfrak {P}_2}$ in $\\mathfrak {P}_2$ , and is compatible with arrows to $\\mathbf {BC}^*_\\omega $ .", "Proposition 2.15 Let $f^{\\prime }\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_1$ be a stable log $R$ -map over $S$ .", "Then the composition $\\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_1 \\rightarrow \\mathfrak {P}_2$ factors through a stable log $R$ -map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}_2$ over $S$ such that: (1) The morphism $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ induces an isomorphism of their coarse curves, denoted by $C$ .", "(2) The underlying coarse morphisms of $\\mathcal {C}^{\\prime } \\rightarrow C\\times _{\\mathbf {BC}^*_\\omega }\\mathfrak {P}_1$ and $\\mathcal {C}\\rightarrow C\\times _{\\mathbf {BC}^*_\\omega }\\mathfrak {P}_2$ are isomorphic.", "(3) If $f^{\\prime }$ has uniform maximal degeneracy, so does $f$ .", "Furthermore, this factorization is 
unique up to a unique isomorphism.", "Consider the stable log map $\\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_{1,C} := \\mathfrak {P}_1\\times _{\\mathbf {BC}^*_\\omega }C$ induced by $f^{\\prime }$ .", "By [7], the underlying map of the composition $\\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_{1,C} \\rightarrow \\mathfrak {P}_{2,C} := \\mathfrak {P}_2\\times _{\\mathbf {BC}^*_\\omega }C$ factors through a stable map $\\underline{\\mathcal {C}} \\rightarrow \\underline{\\mathfrak {P}_{2,C}}$ which yields an induced underlying $R$ -map $\\underline{f}\\colon \\underline{\\mathcal {C}} \\rightarrow \\mathfrak {P}_{2}$ .", "We first construct the log curve $\\mathcal {C}\\rightarrow S$ .", "Let $\\mathcal {C}^{\\sharp } \\rightarrow S^{\\sharp }$ be the canonical log structure associated to the underlying curve.", "Since $\\mathfrak {P}_{1,C} \\rightarrow \\mathfrak {P}_{2,C}$ is quasi-finite, the morphism $\\underline{\\mathcal {C}^{\\prime }} \\rightarrow \\underline{\\mathcal {C}}$ induces an isomorphism of coarse curves.", "Thus we obtain a log morphism $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}^{\\sharp }$ over $S \\rightarrow S^{\\sharp }$ .", "This yields the log curve $\\mathcal {C}:= S\\times _{S^{\\sharp }}\\mathcal {C}^{\\sharp } \\rightarrow S$ .", "Next we show that $f^{\\prime }$ descends to a log map $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}_2$ .", "Since the underlying structure $\\underline{f}$ has already been constructed, by (REF ) it suffices to show that the morphism $h^{\\prime }\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathcal {A}_1$ induced by $f^{\\prime }$ descends to $h\\colon \\mathcal {C}\\rightarrow \\mathcal {A}_2$ with $\\underline{h}$ induced by $\\underline{f}$ .", "Since $\\mathcal {A}$ is an Artin cone, it suffices to check on the level of characteristic sheaves over the log étale cover $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ , i.e.", "we need to construct the dashed arrow 
making the following commutative ${\\overline{\\mathcal {M}}_{\\mathcal {A}_2}|_{\\mathcal {C}^{\\prime }} @{-->}[r]^{\\bar{h}^{\\flat }} @{^{(}->}[d] & \\overline{\\mathcal {M}}_{\\mathcal {C}}|_{\\mathcal {C}^{\\prime }} @{^{(}->}[d] \\\\\\overline{\\mathcal {M}}_{\\mathcal {A}_1} [r]^{(\\bar{h}^{\\prime })^{\\flat }} & \\overline{\\mathcal {M}}_{\\mathcal {C}^{\\prime }}.", "}$ Thus, it suffices to consider the case where $\\underline{S}$ is a geometric point.", "Note that both vertical arrows are injective, hence we may view the monoids on the top as submonoids of the bottom ones.", "Let $\\delta _1$ and $\\delta _2$ be local generators of $\\mathcal {M}_{\\mathcal {A}_1}$ and $\\mathcal {M}_{\\mathcal {A}_2}|_{\\mathcal {C}^{\\prime }}$ , respectively.", "Denote by $\\bar{\\delta }_1 \\in \\overline{\\mathcal {M}}_{\\mathcal {A}_1}$ and $\\bar{\\delta }_2 \\in \\overline{\\mathcal {M}}_{\\mathcal {A}_2}|_{\\mathcal {C}^{\\prime }}$ the corresponding elements.", "Since $\\bar{\\delta }_2 \\mapsto \\frac{a_1}{a_2}\\cdot \\bar{\\delta }_1$ , it suffices to show that $m := (\\bar{h}^{\\prime })^{\\flat }(\\frac{a_1}{a_2}\\cdot \\bar{\\delta }_1) \\in \\overline{\\mathcal {M}}_{\\mathcal {C}}|_{\\mathcal {C}^{\\prime }}$ .", "Indeed, the morphism $\\underline{\\mathcal {C}^{\\prime }} \\rightarrow \\underline{\\mathcal {A}_1}\\times _{\\underline{\\mathcal {A}_2}}\\underline{\\mathcal {C}}$ lifting the identity of $\\underline{\\mathcal {C}}$ is representable.", "Hence along any marking, the morphism $\\underline{\\mathcal {C}^{\\prime }} \\rightarrow \\underline{\\mathcal {C}}$ is a $\\rho $ -th root stack with $\\rho | \\frac{a_1}{a_2}$ .", "Similarly, along each node, the morphism $\\underline{\\mathcal {C}^{\\prime }} \\rightarrow \\underline{\\mathcal {C}}$ is a $\\rho $ -th root stack with $\\rho | \\frac{a_1}{a_2}$ on each component of the node.", "By the definition of log curves, we have $\\frac{a_1}{a_2}\\cdot \\overline{\\mathcal {M}}_{\\mathcal {C}^{\\prime 
}} \\subset \\overline{\\mathcal {M}}_{\\mathcal {C}}|_{\\mathcal {C}^{\\prime }}$ .", "This proves $m \\in \\overline{\\mathcal {M}}_{\\mathcal {C}}|_{\\mathcal {C}^{\\prime }}$ as needed for constructing $h$ , and hence $f$ .", "Finally, consider any component $Z \\subset \\mathcal {C}$ and the unique component $Z^{\\prime } \\subset \\mathcal {C}^{\\prime }$ dominating $Z$ .", "Then we have $e_{Z} = \\frac{a_1}{a_2}\\cdot e_{Z^{\\prime }}$ where $e_Z, e_{Z^{\\prime }} \\in \\overline{\\mathcal {M}}_S$ are the degeneracies of $Z$ and $Z^{\\prime }$ respectively.", "Therefore (3) holds, since if $Z^{\\prime }$ is maximally degenerate, so is $Z$ .", "Consider log $R$ -maps $f^{\\prime }$ and $f$ as in Proposition REF .", "Let $\\vec{\\varsigma }^{\\prime } = \\lbrace (\\gamma ^{\\prime }_i, c^{\\prime }_i)\\rbrace _{i=1}^n$ (resp.", "$\\vec{\\varsigma }= \\lbrace (\\gamma _i, c_i)\\rbrace _{i=1}^n$ ) be the discrete data of $f^{\\prime }$ (resp.", "$f$ ) along markings.", "Observe that $(\\gamma _i, c_i)$ is uniquely determined by $(\\gamma ^{\\prime }_i, c^{\\prime }_i)$ as follows.", "First, since $\\mathfrak {P}_{1,\\mathbf {k}} \\rightarrow \\mathfrak {P}_{2,\\mathbf {k}}$ is the $\\frac{a_1}{a_2}$ -th root stack along $\\infty _{\\mathfrak {P}_{2,\\mathbf {k}}}$ , the sector $\\gamma _i$ is uniquely determined by $\\gamma ^{\\prime }_i$ [4].", "Then the morphism $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ is a $\\varrho _i$ -th root stack along the $i$ -th marking for some $\\varrho _i | \\frac{a_1}{a_2}$ uniquely determined by the natural morphism $\\gamma ^{\\prime }_i \\rightarrow \\gamma _i$ [4].", "The contact orders $c_i$ and $c_i^{\\prime }$ are then related by $c_i = \\frac{a_1}{a_2}\\cdot \\frac{c^{\\prime }_i}{\\varrho _i}.$ Corollary 2.16 There are canonical morphisms ${R}_{g,\\vec{\\varsigma }^{\\prime }}(\\mathfrak {P}_1, \\beta ) \\rightarrow {R}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta ) \\ \\ \\ \\mbox{and} \\ \\ \\ 
{U}_{g,\\vec{\\varsigma }^{\\prime }}(\\mathfrak {P}_1, \\beta ) \\rightarrow {U}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta ).$ For convenience, we denote both morphisms by $\\nu _{a_1/a_2}$ when there is no danger of confusion." ], [ "A tale of two virtual cycles", "This section forms the heart of this paper.", "We first introduce the canonical perfect obstruction theory and virtual cycle in Section REF , and prove a change of twist theorem in this setting in Section REF .", "We then introduce the compact type locus, its canonical virtual cycle (Section REF ) and the superpotentials (Section REF ), in preparation for defining the canonical cosection in Section REF .", "This allows us to construct the reduced theory in Section REF .", "We then prove the comparison theorems in Sections REF –REF .", "The first-time reader may skip Sections REF and REF , which concern the change of twists.", "In addition, under the further simplification that the set $\\Sigma $ of markings is empty, the reader may skip Sections REF , REF and REF .", "In this situation, the super- or subscripts “$\\mathrm {cpt}$ ”, “$\\mathrm {reg}$ ” and “$-$ ” may be dropped."
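Before turning to the virtual cycles, here is a toy numerical instance of the change-of-twist arithmetic from the previous section; the numbers are ours and purely illustrative.

```latex
% Illustrative numbers for the relation c_i = (a_1/a_2) * c'_i / rho_i,
% consistent with the gcd formulas of Lemma 3.3 (not a new statement).
\begin{example}
  Take $a_1/a_2 = 6$ and a marking of $f$ with contact order $c_i = 4$.
  Then $\gcd(c_i, a_1/a_2) = 2$, so representability forces
  $\varrho_i = \frac{a_1/a_2}{\gcd(c_i, a_1/a_2)} = 3$, and the lifted
  discrete data is $c_i' = \frac{c_i}{\gcd(c_i, a_1/a_2)} = 2$ and
  $r_i' = 3\, r_i$. As a consistency check,
  $\frac{a_1}{a_2} \cdot \frac{c_i'}{\varrho_i} = 6 \cdot \tfrac{2}{3} = 4 = c_i$.
\end{example}
```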
], [ "The canonical theory", "For the purposes of perfect obstruction theory and virtual fundamental classes, we impose in this section: Assumption 3.1 $\\mathcal {X}$ is smooth.", "The assumption implies that $\\mathfrak {P}\\rightarrow \\mathbf {BC}^*_\\omega $ is log smooth with the smooth underlying morphism.", "To simplify notations, we introduce ${U}:= {U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta ), \\ \\ \\ \\ \\ \\ {R}:= {R}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta ),$ for stacks of log R-maps as in Section REF .", "We also introduce $\\mathfrak {U}:= \\mathfrak {U}_{g,\\vec{c}}(\\mathcal {A}), \\ \\ \\ \\ \\ \\ \\mathfrak {M}:= \\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A})$ where $\\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A})$ (resp.", "$\\mathfrak {U}_{g,\\vec{c}}(\\mathcal {A})$ ) is the stack parameterizing log maps (resp.", "with uniform maximal degeneracy) to $\\mathcal {A}$ of genus $g$ and with contact orders $\\vec{c}$ induced by $\\vec{\\varsigma }$ .", "These stacks fit in a cartesian diagram ${{U}[rr]^{F} [d] && {R}[d] \\\\\\mathfrak {U}[rr] && \\mathfrak {M}}$ where the vertical arrows are canonical strict morphisms by Section REF , the bottom is given by [25], and the top is (REF ).", "Let $\\bullet $ be one of the stacks ${U}, {R}, \\mathfrak {U}$ or $\\mathfrak {M}$ , and $\\pi _{\\bullet }\\colon \\mathcal {C}_{\\bullet } \\rightarrow \\bullet $ be the universal curve.", "Denote by $\\mathcal {P}_{\\bullet } := \\mathcal {C}_{\\bullet }\\times _{\\mathbf {BC}^*_\\omega }\\mathfrak {P}$ where $\\mathcal {C}_{\\bullet } \\rightarrow \\mathbf {BC}^*_\\omega $ is induced by $\\omega ^{\\log }_{\\mathcal {C}_{\\bullet }/\\bullet }$ .", "Let $f_{\\bullet }\\colon \\mathcal {C}_{\\bullet } \\rightarrow \\mathcal {P}_{\\bullet }$ be the section induced by the universal log $R$ -map for $\\bullet = {U}$ or ${R}$ .", "Consider the commutative diagram ${\\mathcal {C}_{{R}} @/^1pc/[rrd]^{=} [rd]^{f_{{R}}} @/_1pc/[ddr]_{f}&&& \\\\& \\mathcal 
{P}_{{R}} [r] [d] & \\mathcal {C}_{{R}} [r]^{\\pi _{{R}}} [d] & {R}[d] \\\\& \\mathcal {P}_{\\mathfrak {M}} [r] & \\mathcal {C}_{\\mathfrak {M}} [r]_{\\pi _{\\mathfrak {M}}} & \\mathfrak {M}}$ where the three vertical arrows are strict, and the two squares are Cartesian.", "We use $\\mathbb {L}$ to denote the log cotangent complexes in the sense of Olsson [46].", "The lower and upper triangle yield $\\mathbb {L}_{f_{{R}}} \\rightarrow f^*_{{R}}\\mathbb {L}_{\\mathcal {P}_{{R}}/\\mathcal {P}_{\\mathfrak {M}}}[1] \\cong \\pi ^*_{{R}}\\mathbb {L}_{{R}/\\mathfrak {M}}[1] \\qquad \\mbox{and} \\qquad \\mathbb {L}_{f_{{R}}} \\cong f^*_{{R}}\\mathbb {L}_{\\mathcal {P}_{{R}}/\\mathcal {C}_{{R}}}[1],$ respectively.", "Hence we obtain $f^*_{{R}}\\mathbb {L}_{\\mathcal {P}_{{R}}/\\mathcal {C}_{{R}}} \\rightarrow \\pi ^*_{{R}}\\mathbb {L}_{{R}/\\mathfrak {M}}.$ Tensoring both sides by the dualizing complex $\\omega _{\\pi _{{R}}}^\\bullet = \\omega _{\\mathcal {C}_{{R}}/{R}}[1]$ and applying $\\pi _{{R},*}$ , we obtain $\\pi _{{R},*}\\big (f^*_{{R}}\\mathbb {L}_{\\mathcal {P}_{{R}}/\\mathcal {C}_{{R}}}\\otimes \\omega _{\\pi _{{R}}}^\\bullet \\big ) \\rightarrow \\pi _{{R},*}\\pi ^{!", "}_{{R}} \\mathbb {L}_{{R}/\\mathfrak {M}} \\rightarrow \\mathbb {L}_{{R}/\\mathfrak {M}}$ where the last arrow follows from the fact that $\\pi _{{R},*}$ is left adjoint to $\\pi ^{!", "}(-) := \\omega _{\\pi _{{R}}}^\\bullet \\otimes \\pi ^*(-)$ .", "Further observe that $\\mathbb {L}_{\\mathcal {P}_{{R}}/\\mathcal {C}_{{R}}} = \\Omega _{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {P}_{R}}$ is the log cotangent bundle.", "Hence, we obtain $\\varphi ^{\\vee }_{{R}/\\mathfrak {M}}\\colon \\mathbb {E}^{\\vee }_{{R}/\\mathfrak {M}} := \\pi _{{R},*}\\big (f^*_{{R}}\\Omega _{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }\\otimes \\omega _{\\pi _{{R}}}^\\bullet \\big ) \\rightarrow \\mathbb {L}_{{R}/\\mathfrak {M}}.$ The same proof as in [25] shows that $\\varphi ^{\\vee }_{{R}/\\mathfrak {M}}$ is 
a perfect obstruction theory of ${R}\\rightarrow \\mathfrak {M}$ in the sense of [9].", "Recall that $\\mathfrak {M}$ is log smooth and equidimensional [25].", "Denote by $[{R}]^{\\mathrm {vir}}$ the virtual cycle given by the virtual pull-back of the fundamental class $[\\mathfrak {M}]$ using $\\varphi ^{\\vee }_{{R}/\\mathfrak {M}}$ .", "Pulling back $\\varphi ^{\\vee }_{{R}/\\mathfrak {M}}$ along ${U}\\rightarrow {R}$ , we obtain a perfect obstruction theory of ${U}\\rightarrow \\mathfrak {U}$ : $\\varphi ^{\\vee }_{{U}/\\mathfrak {U}}\\colon \\mathbb {E}^{\\vee }_{{U}/\\mathfrak {U}} := \\pi _{{U},*}\\big (f^*_{{U}}\\Omega _{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }\\otimes \\omega _{\\pi _{{U}}}^\\bullet \\big ) \\rightarrow \\mathbb {L}_{{U}/\\mathfrak {U}}.$", "A standard calculation shows that $\\mathbb {E}_{{U}/\\mathfrak {U}} = \\pi _{{U},*}\\big (f^*_{{U}}T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }\\big )$ .", "Let $[{U}]^{\\mathrm {vir}}$ be the corresponding virtual cycle.", "Since $\\mathfrak {U}\\rightarrow \\mathfrak {M}$ is proper and birational by [25], by the virtual push-forward of [29], [45], the two virtual cycles are related by $F_*[{U}]^{\\mathrm {vir}} = [{R}]^{\\mathrm {vir}}.$" ], [ "Independence of twists I: the case of the canonical theory", "In this section, using the results from Section REF , we study the behavior of the canonical virtual cycle under the change of twists.", "Proposition 3.2 Given the situation as in Corollary REF , we have the following push-forward properties of virtual cycles: (1) $\\nu _{a_1/a_2,*}[{R}_{g,\\vec{\\varsigma }^{\\prime }}(\\mathfrak {P}_1, \\beta )]^\\mathrm {vir}= [{R}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2, \\beta )]^\\mathrm {vir}$ ; (2) $\\nu _{a_1/a_2,*}[{U}_{g,\\vec{\\varsigma }^{\\prime }}(\\mathfrak {P}_1, \\beta )]^\\mathrm {vir}= [{U}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2, \\beta )]^\\mathrm {vir}$ .", "We will only consider (1).", "Statement (2) can be proved identically by considering only log maps 
with uniform maximal degeneracy, thanks to Proposition REF (3).", "Since $\\mathfrak {P}_1 \\rightarrow \\mathfrak {P}_2$ is a log étale birational modification, (1) follows from a proof similar to that of [8], in a simpler situation, except that we need to take orbifold structures into account.", "In what follows, we will only specify the differences, and refer to [8] for complete details.", "First, consider the stack $\\mathfrak {M}^{\\prime } := \\mathfrak {M}_{g,\\vec{c}^{\\prime }}^{\\prime }(\\mathcal {A}_1 \\rightarrow \\mathcal {A}_2)$ , the analogue of the one in [8], parameterizing commutative diagrams ${\\mathcal {C}^{\\prime } [r] [d] & \\mathcal {A}_1 [d] \\\\\\mathcal {C}[r] & \\mathcal {A}_2}$ where $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ is a morphism of log curves over $S$ inducing an isomorphism of underlying coarse curves, the top and bottom are log maps with discrete data along markings given by $\\vec{c}^{\\prime } = \\lbrace (r^{\\prime }_i, c^{\\prime }_i)\\rbrace $ (see Lemma REF ) and $\\vec{c} = \\lbrace (r_i, c_i)\\rbrace $ respectively, and the induced morphism $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\times _{\\mathcal {A}_2}\\mathcal {A}_1$ is representable, hence stable.", "We first show that $\\mathfrak {M}^{\\prime }$ is algebraic.", "Indeed, let $\\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A})$ be the stack of genus $g$ log maps to $\\mathcal {A}$ with discrete data $\\vec{c}$ along markings.", "Let $\\mathfrak {M}_1$ be the stack parameterizing sequences $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\rightarrow \\mathcal {A}_2$ where $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ is a morphism of genus $g$ , $n$ -marked log curves over $S$ with isomorphic underlying coarse curves, which is a $\\varrho _i$ -th root stack along the $i$ -th marking for each $i$ (REF ).", "$\\mathfrak {M}_1$ is algebraic, as the morphism $\\mathfrak {M}_1 \\rightarrow \\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A}_2)$ defined by 
$[\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\rightarrow \\mathcal {A}_2] \\mapsto [\\mathcal {C}\\rightarrow \\mathcal {A}_2]$ is algebraic and $\\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A}_2) = \\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A})$ is algebraic.", "Now $\\mathfrak {M}^{\\prime }$ is given by the open substack of $\\mathfrak {M}_1\\times _{\\mathfrak {M}_{g,\\vec{c}^{\\prime \\prime }}(\\mathcal {A}_2)}\\mathfrak {M}_{g,\\vec{c}^{\\prime }}(\\mathcal {A}_1)$ where the representability of $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\times _{\\mathcal {A}_2}\\mathcal {A}_1$ holds.", "Here $\\mathfrak {M}_{g,\\vec{c}^{\\prime }}(\\mathcal {A}_1) = \\mathfrak {M}_{g, \\vec{c}^{\\prime }}(\\mathcal {A}) \\rightarrow \\mathfrak {M}_{g,\\vec{c}^{\\prime \\prime }}(\\mathcal {A}_2)$ is given by composing log maps to $\\mathcal {A}_1$ with $\\mathcal {A}_1 \\rightarrow \\mathcal {A}_2$ hence $\\vec{c}^{\\prime \\prime } = \\lbrace (r^{\\prime }_i, \\frac{a_1}{a_2}\\cdot c^{\\prime }_i)\\rbrace $ , and $\\mathfrak {M}_1 \\rightarrow \\mathfrak {M}_{g,\\vec{c}^{\\prime \\prime }}(\\mathcal {A}_2)$ is given by $[\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\rightarrow \\mathcal {A}_2] \\mapsto [\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {A}_2]$ .", "Next, by Proposition REF and (REF ), we obtain the commutative diagram ${{R}_{g,\\vec{\\varsigma }^{\\prime }}(\\mathfrak {P}_1, \\beta ) [rr] [d]^{G^{\\prime }_1} @/_2pc/[dd]_{G_1}&& {R}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2, \\beta ) [d]^{G_2} \\\\\\mathfrak {M}^{\\prime } [rr]_{F_2} [d]^{F_1} && \\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A}_2) \\\\\\mathfrak {M}_{g,\\vec{c}^{\\prime }}(\\mathcal {A}_1) &&}$ where we define a morphism $F_1 \\colon (\\ref {eq:middle-stack}) \\mapsto [\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {A}_1]$ and a proper morphism $F_2 \\colon (\\ref {eq:middle-stack}) \\mapsto [\\mathcal {C}\\rightarrow \\mathcal {A}_2]$ , and the square is cartesian.", "Since the 
horizontal arrows in (REF ) are logarithmic modifications in the sense of [8], the same proof as in [8] shows that $\\mathfrak {M}^{\\prime } \\rightarrow \\mathfrak {M}_{g,\\vec{c}^{\\prime }}(\\mathcal {A}_1)$ is strict and étale.", "Using the same method as in [8], one constructs a perfect obstruction theory of $G^{\\prime }_1$ which is identical to that of $G_1$ as in (REF ).", "Furthermore, Lemma REF and the same proof as in [8] imply that $\\mathfrak {M}^{\\prime } \\rightarrow \\mathfrak {M}_{g,\\vec{c}^{\\prime }}(\\mathcal {A}_1)$ and $\\mathfrak {M}^{\\prime } \\rightarrow \\mathfrak {M}_{g,\\vec{c}}(\\mathcal {A}_2)$ are both birational.", "Finally, following the same lines of proof as in [8] and using Costello's virtual push-forward [29], we obtain (1).", "Since log maps to $\\mathcal {A}$ are unobstructed [8] (see also [25]), the discrete data along markings can be determined by studying the following non-degenerate situation.", "Lemma 3.3 Let $f \\colon \\mathcal {C}\\rightarrow \\mathcal {A}_2$ be a log map with discrete data $(r_i, c_i)$ at the $i$ -th marking.", "Assume that no component of $\\mathcal {C}$ has image entirely contained in $\\infty _{\\mathcal {A}_2}$ .", "Let $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}$ be obtained by taking the $\\varrho _i$ -th root along the $i$ -th marking for each $i$ .", "Then $f \\colon \\mathcal {C}\\rightarrow \\mathcal {A}_2$ lifts to $f^{\\prime } \\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathcal {A}_1$ if and only if $\\frac{a_1}{a_2} | c_i \\cdot \\varrho _i$ .", "In this case, the lift is unique up to a unique isomorphism.", "Furthermore, the morphism $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\times _{\\mathcal {A}_2}\\mathcal {A}_1$ induced by $f^{\\prime }$ is representable if and only if $\\varrho _i = \\frac{a_1/a_2}{\\gcd (c_i,a_1/a_2)}$ .", "In this case, let $(r^{\\prime }_i, c^{\\prime }_i)$ be the discrete data of $f^{\\prime }$ at the $i$ -th marking.", "Then for each $i$ we 
have $r^{\\prime }_i = \\varrho _i \\cdot r_i \\ \\ \\ \\mbox{and} \\ \\ \\ c^{\\prime }_i = \\frac{c_i}{\\gcd (c_i,a_1/a_2)}.$ Finding a lift $f^{\\prime }$ amounts to finding $\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {C}\\times _{\\mathcal {A}_2}\\mathcal {A}_1$ that lifts the identity $\\mathcal {C}\\rightarrow \\mathcal {C}$ .", "Thus, both (1) and (2) follow from [4] and [11]." ], [ "The compact type locus and its canonical virtual cycle", "We next introduce the closed substack over which the reduced theory will be constructed." ], [ "The logarithmic evaluation stacks", "Let $\\mathfrak {Y}= \\mathfrak {M}$ (resp.", "$\\mathfrak {U}$ ), and ${Y}= {R}$ (resp.", "${U}$ ) with the strict canonical morphism ${Y}\\rightarrow \\mathfrak {Y}$ .", "The $i$ -th evaluation stack $\\mathfrak {Y}^{\\operatorname{ev}}_i$ associates to any $\\mathfrak {Y}$ -log scheme $S$ the category of commutative diagrams: ${p_i [rr] [d] && \\mathfrak {P}_{\\mathbf {k}} [d] \\\\\\mathcal {C}[rr]^{\\omega ^{\\log }_{\\mathcal {C}/S}} &&\\mathcal {A}}$ where $\\mathcal {C}\\rightarrow \\mathcal {A}$ is the log map over $S$ given by $S \\rightarrow \\mathfrak {Y}$ , $p_i \\subset \\mathcal {C}$ is the $i$ -th marking with the pull-back log structure from $\\mathcal {C}$ , and the top horizontal arrow is representable.", "There is a canonical strict morphism $ \\mathfrak {Y}^{\\operatorname{ev}}_i \\rightarrow \\mathfrak {Y}$ forgetting the top arrow in (REF ).", "By Lemma REF , the morphism ${Y}\\rightarrow \\mathfrak {Y}$ factors through the $i$ -th evaluation morphism $\\mathfrak {ev}_i \\colon {Y}\\rightarrow \\mathfrak {Y}^{\\operatorname{ev}}_i.$ For the reduced theory, we introduce substacks $\\mathfrak {Y}^{\\mathrm {cpt}}_i \\subset \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i \\subset \\mathfrak {Y}^{\\operatorname{ev}}_i,$ where $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i$ parameterizes diagrams (REF ) whose images at the $i$ -th markings avoid $\\infty 
_{\\mathcal {A}}$ (or equivalently avoid $\\infty _{\\mathfrak {P}}$ ) and $\\mathfrak {Y}^{\\mathrm {cpt}}_i$ parameterizes diagrams (REF ) whose images at the $i$ -th markings are contained in $\\mathbf {0}_{\\mathfrak {P}}$ .", "Recall that $(\\gamma _i, c_i)$ are the sector and contact order at the $i$ -th marking, see Section REF .", "Proposition 3.4 Both $\\mathfrak {Y}^{\\mathrm {cpt}}_i$ and $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i$ are log algebraic stacks.", "Furthermore, we have: (1) If $c_i > 0$ , then $\\mathfrak {Y}^{\\mathrm {cpt}}_i = \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i = \\emptyset $ .", "(2) The strict morphisms $\\mathfrak {Y}^{\\mathrm {cpt}}_{i} \\rightarrow \\mathfrak {Y}$ and $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_{i} \\rightarrow \\mathfrak {Y}$ are smooth.", "Remark 3.5 $\\mathfrak {Y}^{\\operatorname{ev}}_i$ is also algebraic, but we do not need this fact here.", "(1) follows from the definition of $\\mathfrak {Y}^{\\mathrm {cpt}}_i$ and $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i$ .", "We now assume $c_i = 0$ .", "Let $\\mathfrak {Y}_{i,0} \\subset \\mathfrak {Y}$ be the open dense substack over which the image of $p_i$ avoids $\\infty _{\\mathcal {A}}$ .", "Let $\\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}^\\circ \\subset \\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}$ be the open substack parameterizing gerbes avoiding $\\infty _{\\mathfrak {P}_{\\mathbf {k}}}$ .", "By (REF ), it follows that $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_{i} = \\mathfrak {Y}_{i,0}\\times \\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}^\\circ $ , hence $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_{i}$ is algebraic.", "Similarly, $\\mathfrak {Y}^{\\mathrm {cpt}}_i$ is a closed substack of $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_{i}$ given by $\\mathfrak {Y}^{\\mathrm {cpt}}_{i} = \\mathfrak {Y}_{i,0}\\times \\overline{\\mathcal {I}}_{\\mu }\\mathbf {0}_{\\mathfrak 
{P}_\\mathbf {k}},$ hence is also algebraic.", "(2) follows from the smoothness of $\\overline{\\mathcal {I}}_{\\mu }\\mathbf {0}_{\\mathfrak {P}_\\mathbf {k}}$ and $\\overline{\\mathcal {I}}_{\\mu }\\mathfrak {P}_\\mathbf {k}^\\circ $ .", "Consider the following fiber products, both taken over $\\mathfrak {Y}$ : $\\mathfrak {Y}^{\\mathrm {cpt}} := \\prod _i \\mathfrak {Y}^{\\mathrm {cpt}}_{i} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}} := \\prod _i \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}_i,$ where $i$ runs through all markings with contact order zero.", "Consider the fiber products ${Y}^\\mathrm {cpt}:= {Y}\\times _{\\mathfrak {Y}^{\\operatorname{ev}}}\\mathfrak {Y}^{\\mathrm {cpt}} \\ \\ \\ \\mbox{and} \\ \\ \\ {Y}^{\\mathring{\\operatorname{ev}}}:= {Y}\\times _{\\mathfrak {Y}^{\\operatorname{ev}}}\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}.$ Then ${Y}^{\\mathring{\\operatorname{ev}}} \\subset {Y}$ (resp.", "${Y}^\\mathrm {cpt}\\subset {Y}$ ) is the open (resp.", "closed) substack parameterizing stable log $R$ -maps whose images at markings with zero contact order avoid $\\infty _{\\mathfrak {P}}$ (resp.", "lie in $\\mathbf {0}_{\\mathfrak {P}}$ )."
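As a degenerate sanity check of these definitions (our remark, using the usual convention that an empty fiber product over $\mathfrak{Y}$ is $\mathfrak{Y}$ itself):

```latex
% Edge case: no markings of contact order zero (illustrative remark only).
\begin{example}
  If every marking has positive contact order, then the index set of both
  products is empty, so
  $\mathfrak{Y}^{\mathrm{cpt}} = \mathfrak{Y}^{\mathring{\operatorname{ev}}} = \mathfrak{Y}$,
  and hence
  ${Y}^{\mathrm{cpt}} = {Y}^{\mathring{\operatorname{ev}}} = {Y}$.
  This is consistent with the convention that the decoration
  ``$\mathrm{cpt}$'' may be dropped when the set $\Sigma$ of markings with
  zero contact order is empty.
\end{example}
```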
], [ "The canonical perfect obstruction theory of ${Y}^\\mathrm {cpt}\\rightarrow \\mathfrak {Y}^{\\mathrm {cpt}}$", "Consider the universal map and projection over $\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}$ respectively: $\\mathfrak {ev}\\colon \\cup _i p_i \\rightarrow \\mathfrak {P}\\ \\ \\ \\mbox{and} \\ \\ \\ \\pi _{\\operatorname{ev}}\\colon \\cup _{i}p_i \\rightarrow \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}.$ By (REF ), (REF ) and [5], we have an isomorphism of vector bundles $\\varphi ^{\\vee }_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}\\colon \\mathbb {E}^{\\vee }_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} := (\\pi _{\\operatorname{ev},*} \\mathfrak {ev}^*T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega })^{\\vee } \\stackrel{\\cong }{\\longrightarrow } \\mathbb {L}_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}.$ The perfect obstruction theory (REF ) restricts to a relative perfect obstruction theory $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}\\colon \\mathbb {E}^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} := \\mathbb {E}^{\\vee }_{{R}/\\mathfrak {M}}|_{{Y}^{\\mathring{\\operatorname{ev}}}} \\rightarrow \\mathbb {L}_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}.$ A standard construction as in [10] or [3] yields a morphism of triangles ${\\operatorname{ev}^*\\mathbb {E}^{\\vee }_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} [r] [d]_{\\varphi ^{\\vee }_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}} & \\mathbb {E}^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} [r] [d]_{\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}} & \\mathbb {E}^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}} [d]_{\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}} [r]^-{[1]} & 
\\\\\\operatorname{ev}^*\\mathbb {L}_{\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} [r] & \\mathbb {L}_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}} [r] & \\mathbb {L}_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}} [r]^-{[1]} &}$ where $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}$ is a perfect obstruction theory of ${Y}^{\\mathring{\\operatorname{ev}}} \\rightarrow \\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}$ of the form $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}\\colon \\mathbb {E}^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}} := \\pi _{{Y}^{\\mathring{\\operatorname{ev}}},*} f_{{Y}^{\\mathring{\\operatorname{ev}}}}^*(T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(- \\Sigma ))^{\\vee } \\rightarrow \\mathbb {L}_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}.$ Here $\\Sigma $ is the divisor given by the sum (equivalently, the union) of all markings with zero contact order.", "Thus, the two perfect obstruction theories $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}}$ and $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}$ are compatible in the sense of [9].", "Pulling back $\\varphi ^{\\vee }_{{Y}^{\\mathring{\\operatorname{ev}}}/\\mathfrak {Y}^{\\mathring{\\operatorname{ev}}}}$ to ${Y}^{\\mathrm {cpt}}$ , we obtain the canonical perfect obstruction theory of ${Y}^{\\mathrm {cpt}} \\rightarrow \\mathfrak {Y}^{\\mathrm {cpt}}$ $\\varphi ^{\\vee }_{{Y}^{\\mathrm {cpt}}/\\mathfrak {Y}^{\\mathrm {cpt}}}\\colon \\mathbb {E}^{\\vee }_{{Y}^{\\mathrm {cpt}}/\\mathfrak {Y}^{\\mathrm {cpt}}} := \\pi _{{Y}^{\\mathrm {cpt}},*} f_{{Y}^{\\mathrm {cpt}}}^*(T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(- \\Sigma ))^{\\vee } \\rightarrow \\mathbb 
{L}_{{Y}^{\\mathrm {cpt}}/\\mathfrak {Y}^{\\mathrm {cpt}}}.$ Denote by $[{Y}^{\\mathrm {cpt}}]^{\\mathrm {vir}}$ the canonical virtual cycle of ${Y}^{\\mathrm {cpt}}$ defined via (REF )." ], [ "The compact type locus", "When the contact orders of all markings are equal to zero, we call ${Y}^\\mathrm {cpt}\\subset {Y}$ the compact type locus.", "For the purpose of constructing the reduced virtual cycles over the compact type locus, we will impose the following assumption for the rest of this section.", "Assumption 3.6 All contact orders are equal to zero.", "This assumption is needed in our construction to apply the cosection localization of Kiem-Li [42].", "In this case $\\vec{\\varsigma }$ is the same as a collection of log sectors, which are further restricted to be sectors of compact type (see Definition REF ).", "Note that if all sectors are narrow, then ${Y}^\\mathrm {cpt}= {Y}$ ."
], [ "The definition", "A superpotential is a morphism of stacks $W\\colon \\mathfrak {P}^{\\circ } \\rightarrow \\mathcal {L}_\\omega $ over $\\mathbf {BC}^*_\\omega $ .", "Equivalently, $W$ is a section of the line bundle $\\mathcal {L}_\\omega |_{\\mathfrak {P}^{\\circ }}$ over $\\mathfrak {P}^{\\circ }$ .", "Pulling back $W$ along the universal torsor $\\operatorname{Spec}\\mathbf {k}\\rightarrow \\mathbf {BC}^*_\\omega $ , we obtain a $\\mathbb {C}^*_\\omega $ -equivariant function $W_\\mathbf {k}\\colon \\mathfrak {P}^{\\circ }_\\mathbf {k}\\rightarrow \\mathbf {k}$ , which recovers the information of $W$ .", "Denote by $\\operatorname{Crit}(W_{\\mathbf {k}}) \\subset \\mathfrak {P}^{\\circ }_{\\mathbf {k}}$ the critical locus of the holomorphic function $W_\\mathbf {k}$ .", "It descends to a closed substack $\\operatorname{Crit}(W) \\subset \\mathfrak {P}^{\\circ }$ .", "Definition 3.7 We call $\\operatorname{Crit}(W)$ the critical locus of the superpotential $W$ .", "We say that $W$ has proper critical locus if $\\operatorname{Crit}(W_{\\mathbf {k}})$ is proper over $\\mathbf {k}$ , or equivalently if $\\operatorname{Crit}(W)$ is proper over $\\mathbf {BC}^*_\\omega $ .", "Let $\\mathfrak {X}_\\mathbf {k}:= \\mathfrak {X}\\times _{\\mathbf {BC}^*_\\omega }\\operatorname{Spec}\\mathbf {k}$ , where the left arrow is given by $\\zeta $ in (REF ) and the right one is the universal torsor.", "Since $\\mathfrak {P}^{\\circ }_\\mathbf {k}$ is a vector bundle over $\\mathfrak {X}_\\mathbf {k}$ , the critical locus of $W_\\mathbf {k}$ , if proper, is necessarily supported on the fixed locus $\\mathbf {0}_{\\mathfrak {P}_\\mathbf {k}} \\subset \\mathfrak {P}_\\mathbf {k}$ of the $\\mathbb {C}^*_\\omega $ -action."
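Since properness of the critical locus drives everything that follows, a fiberwise toy illustration may help; the two potentials below are our own standard examples (over a field of characteristic zero, with fiber coordinates $x_1, \dots, x_n$ of grading one) and are not taken from this paper:

```latex
% Toy examples, for illustration only.
% Fermat potential: proper critical locus.
W_{\mathbf{k}} = \sum_{j=1}^{n} x_j^{r}, \qquad
\operatorname{d}W_{\mathbf{k}} = \bigl(r x_1^{r-1}, \dots, r x_n^{r-1}\bigr),
\qquad \operatorname{Crit}(W_{\mathbf{k}}) = \{x_1 = \dots = x_n = 0\}.
% Loop-type potential (r >= 3): non-proper critical locus.
W_{\mathbf{k}} = x_1^{r-1} x_2, \qquad
\operatorname{Crit}(W_{\mathbf{k}}) \ \text{is supported on}\ \{x_1 = 0\}.
```

In the first case the critical locus is the zero section, hence proper and contained in the fixed locus $\mathbf {0}_{\mathfrak {P}_\mathbf {k}}$; in the second it is unbounded in the $x_2$-direction and is neither.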
], [ "The extended superpotential", "To extend $W$ to $\\mathfrak {P}$ , we first observe the following: Lemma 3.8 Suppose there exists a non-zero superpotential $W$ .", "Then the order of poles of $W_{\\mathbf {k}}$ along $\\infty _{\\mathfrak {P}_{\\mathbf {k}}}$ is the positive integer $\\tilde{r}= a \\cdot r$ .", "The existence of non-zero $W$ implies that there is a sequence of non-negative integers $k_i$ such that $r = \\sum _i k_i \\cdot i,$ where $i$ runs through the grading of non-trivial $\\mathbf {E}_i$ .", "The integrality of $\\tilde{r}$ follows from the choices of $a$ , and the order of poles of $W_{\\mathbf {k}}$ follows from the choices of weights $\\mathbf {w}$ in (REF ).", "Consider the $\\mathbb {P}^1$ -bundle over $\\mathbf {BC}^*_\\omega $ $\\mathbb {P}_\\omega = \\mathbb {P}(\\mathcal {L}_\\omega \\oplus \\mathcal {O}).$ We further equip $\\mathbb {P}_\\omega $ with the log structure given by its reduced infinity divisor $\\infty _{\\mathbb {P}_\\omega } := \\mathbb {P}_\\omega \\setminus \\operatorname{Vb}(\\mathcal {L}_\\omega )$ .", "The superpotential $W$ extends to a rational map of log stacks $\\overline{W}\\colon \\mathfrak {P}\\dashrightarrow \\mathbb {P}_\\omega $ over $\\mathbf {BC}^*_\\omega $ with the indeterminacy locus $\\overline{(W^{-1}(\\mathbf {0}_{\\mathcal {L}_\\omega }))}\\cap \\infty _{\\mathfrak {P}}$ by Lemma REF .", "Equivalently, $\\overline{W}$ can be viewed as a rational section of $\\mathcal {L}_\\omega |_{\\mathfrak {P}^{\\circ }}$ extending $W$ , and having poles along $\\infty _{\\mathfrak {P}}$ of order $\\tilde{r}$ ." 
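As a sanity check on the pole order in Lemma 3.8, here is a rank-one toy computation of our own (assuming, purely for illustration, a single grading-one piece and $a = 1$, so that $\tilde{r} = r$):

```latex
% Toy case: E = E_1 of rank one, a = 1, so fiberwise the target
% compactifies Vb(L) to P(L + O) with fiber coordinate x, and
% W_k = c x^r (so r = k_1 with k_1 = r in the lemma's notation).
% In the chart at infinity with coordinate u = x^{-1}:
W_{\mathbf{k}} = c\,x^{r} = c\,u^{-r}, \qquad
\infty_{\mathfrak{P}_{\mathbf{k}}} = \{u = 0\},
% i.e. a pole of order r = a \cdot r = \tilde{r} along infinity.
```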
], [ "The twisted superpotential", "Next, we discuss how to extend the superpotential $W$ across the boundary.", "This will be shown to be the key to extending cosections to the boundary of the log moduli stacks.", "Note that the non-empty indeterminacy locus of $\\overline{W}$ is a new phenomenon compared to the $r$ -spin case [25], and requires a somewhat different treatment as shown below.", "Consider the log étale morphism of log stacks $\\mathcal {A}^e \\rightarrow \\mathcal {A}\\times \\mathcal {A}$ given by the blow-up of the origin of $\\mathcal {A}\\times \\mathcal {A}$ .", "Denote by $\\mathfrak {P}^e$ and $\\mathbb {P}^e_\\omega $ the pull-backs of (REF ) along the following morphisms, respectively: $\\mathfrak {P}\\times \\mathcal {A}_{\\max } \\stackrel{(\\mathcal {M}_{\\mathfrak {P}}, id)}{\\longrightarrow } \\mathcal {A}\\times \\mathcal {A}_{\\max } \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathbb {P}_\\omega \\times \\mathcal {A}_{\\max } \\stackrel{(\\mathcal {M}_{\\mathbb {P}_\\omega }, \\nu _{\\tilde{r}})}{\\longrightarrow } \\mathcal {A}\\times \\mathcal {A}.$ Here $\\mathcal {A}_{\\max } = \\mathcal {A}$ , and $\\nu _{\\tilde{r}}$ is the degree $\\tilde{r}$ morphism induced by $\\mathbb {N}\\rightarrow \\mathbb {N}, \\ 1 \\mapsto \\tilde{r}$ on the level of characteristics.", "Recall from Lemma REF that $\\tilde{r}$ is a positive integer given $W \\ne 0$ .", "Denote by $\\infty _{\\mathfrak {P}^e} \\subset \\mathfrak {P}^e$ and $\\infty _{\\mathbb {P}^e} \\subset \\mathbb {P}^e_\\omega $ the proper transforms of $\\infty _{\\mathfrak {P}}\\times \\mathcal {A}_{\\max }$ and $\\infty _{\\mathbb {P}_{\\omega }}\\times \\mathcal {A}_{\\max }$ respectively.", "Consider $\\mathfrak {P}^{e,\\circ } := \\mathfrak {P}^{e}\\setminus \\infty _{\\mathfrak {P}^e} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathbb {P}^{e,\\circ }_\\omega := \\mathbb {P}^e_\\omega \\setminus \\infty _{\\mathbb {P}^e_\\omega }.$ We obtain a commutative diagram with rational horizontal maps 
${\\mathfrak {P}^{e,\\circ } [d] @{-->}[rr]^{\\overline{W}^{e,\\circ }} && \\mathbb {P}_\\omega ^{e,\\circ } [d] \\\\\\mathfrak {P}\\times \\mathcal {A}_{\\max } @{-->}[rr]^{\\overline{W}\\times id} && \\mathbb {P}_\\omega \\times \\mathcal {A}_{\\max }}$ Lemma 3.9 There is a canonical surjective log morphism $\\mathfrak {c}\\colon \\mathbb {P}^{e,\\circ }_\\omega \\rightarrow \\operatorname{Vb}\\big (\\mathcal {L}_\\omega \\boxtimes \\mathcal {O}_{\\mathcal {A}_{\\max }}(\\tilde{r}\\Delta _{\\max })\\big )$ by contracting the proper transform of $\\mathbb {P}_\\omega \\times \\Delta _{\\max }$ where $\\Delta _{\\max } \\subset \\mathcal {A}_{\\max }$ is the closed point, and the target of $\\mathfrak {c}$ is equipped with the pull-back log structure from $\\mathcal {A}_{\\max }$ .", "This follows from a local coordinate calculation.", "Proposition 3.10 The composition $\\widetilde{W}:= \\mathfrak {c}\\circ \\overline{W}^{e,\\circ }$ is a surjective morphism that contracts the proper transform of $\\mathfrak {P}\\times \\Delta _{\\max }$ .", "A local calculation shows that the proper transform of $\\mathfrak {P}\\times \\Delta _{\\max }$ dominates the proper transform of $\\mathbb {P}^{e,\\circ }_\\omega \\times \\Delta _{\\max }$ , hence is contracted by $\\mathfrak {c}$ .", "The surjectivity of $\\overline{W}^{e,\\circ }$ follows from the pole order in Lemma REF and the above construction.", "Hence the surjectivity in the statement follows from the surjectivity of $\\mathfrak {c}$ in the above lemma.", "It remains to show that $\\widetilde{W}$ is well-defined everywhere.", "Let $E^{\\circ } \\subset \\mathfrak {P}^{e,\\circ }$ be the exceptional divisor of $\\mathfrak {P}^{e,\\circ } \\rightarrow \\mathfrak {P}\\times \\mathcal {A}$ .", "Then $E^{\\circ } \\cong N_{\\infty _{\\mathfrak {P}}/\\mathfrak {P}}$ is the total space of the normal bundle.", "The indeterminacy locus of $\\overline{W}^{e,\\circ }$ is the fiber of $E^{\\circ } \\rightarrow \\infty 
_{\\mathfrak {P}}$ over $\\overline{(W^{-1}(\\mathbf {0}_{\\mathcal {L}_\\omega }))}\\cap \\infty _{\\mathfrak {P}}$ .", "One checks that $\\widetilde{W}$ contracts the indeterminacy locus of $\\overline{W}^{e,\\circ }$ to the zero section of its target.", "Definition 3.11 We call $\\widetilde{W}$ the twisted superpotential. (This is different from the “twisted superpotential” used in the physics literature [52].)", "It is said to have proper critical locus if the vanishing locus of the log differential $\\operatorname{d}\\widetilde{W}$ , defined as a closed strict substack of $\\mathfrak {P}^{e,\\circ }$ , is proper over $\\mathbf {BC}^*_\\omega \\times \\mathcal {A}_{\\max }$ .", "Proposition 3.12 $\\widetilde{W}$ has proper critical locus iff $W$ has proper critical locus.", "Since $W$ is the fiber of $\\widetilde{W}$ over the open dense point of $\\mathcal {A}_{\\max }$ , one direction is clear.", "We next assume that $W$ has proper critical locus.", "Consider the substack $\\mathfrak {P}^{e,*} \\subset \\mathfrak {P}^{e,\\circ }$ obtained by removing the zero section $\\mathbf {0}_{\\mathfrak {P}^{e}}$ and the proper transform of $\\mathfrak {P}\\times _{\\mathcal {A}_{\\max }}\\Delta _{\\max }$ .", "Since the proper transform of $\\mathfrak {P}\\times _{\\mathcal {A}_{\\max }} \\Delta _{\\max }$ is proper over $\\mathbf {BC}^*_\\omega \\times \\mathcal {A}_{\\max }$ , it suffices to show that the morphism $\\widetilde{W}|_{\\mathfrak {P}^{e,*}}\\colon \\mathfrak {P}^{e,*} \\rightarrow \\operatorname{Vb}\\big (\\mathcal {L}_\\omega \\boxtimes \\mathcal {O}_{\\mathcal {A}_{\\max }}(\\tilde{r}\\Delta _{\\max })\\big )$ has no critical points fiberwise over $\\mathbf {BC}^*_\\omega \\times \\mathcal {A}_{\\max }$ , as otherwise the critical locus would be non-proper due to the $\\mathbb {C}^*$ -scaling of $\\mathfrak {P}^{e,*}$ .", "On the other hand, $\\mathfrak {P}^{e,*}$ can be expressed differently as follows: $\\mathfrak {P}^{e,*} = \\operatorname{Vb}\\big 
(\\bigoplus _{i > 0}(\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\otimes \\mathcal {L}_{\\mathfrak {X}}^{\\otimes i}\\boxtimes \\mathcal {O}_{\\mathcal {A}_{\\max }}(ai \\Delta _{\\max }))\\big )\\setminus \\mathbf {0}$ where $\\mathbf {0}$ is the zero section of the corresponding vector bundle.", "Note that $W$ induces a morphism over $\\mathbf {BC}^*_\\omega \\times \\mathcal {A}_{\\max }$ : $\\operatorname{Vb}\\big (\\bigoplus _{i > 0}(\\mathbf {E}^{\\vee }_{i,\\mathfrak {X}}\\otimes \\mathcal {L}_{\\mathfrak {X}}^{\\otimes i}\\boxtimes \\mathcal {O}(ai \\Delta _{\\max }))\\big ) \\rightarrow \\operatorname{Vb}\\big (\\mathcal {L}_\\omega \\boxtimes \\mathcal {O}(\\tilde{r}\\Delta _{\\max }) \\big )$ whose restriction to $\\mathfrak {P}^{e,*}$ is precisely $\\widetilde{W}|_{\\mathfrak {P}^{e,*}}$ .", "Since $\\operatorname{Crit}(W) $ is contained in the zero section, $\\widetilde{W}|_{\\mathfrak {P}^{e,*}}$ has no critical points on $\\mathfrak {P}^{e,*}$ ." ], [ "The canonical cosection", "Next we construct the canonical cosection for the moduli of log R-maps.", "For this purpose, we adopt the assumptions in Section REF by assuming all contact orders are zero and working with the compact type locus for the rest of this section.", "Furthermore, in order for the canonical cosection to behave well along the boundary of the moduli, it is important to work with log R-maps with uniform maximal degeneracy, see Section REF .", "As already exhibited in the $r$ -spin case [25], this will be the key to constructing the reduced theory in the general case in later sections."
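For orientation, it may help to recall the shape of cosection localization in the classical (non-logarithmic) Landau-Ginzburg setting; the display below is our own schematic summary, not a statement from this paper, and assumes the fields are twisted so that $f^*\operatorname{d}W$ pairs into $\omega _{\mathcal {C}}$:

```latex
% Schematic classical cosection (Kiem--Li style), under the stated
% twisting assumption, for a potential W on a smooth target V and a
% stable map f from a curve C to V:
\sigma \colon \operatorname{Obs} = H^1\!\bigl(\mathcal{C}, f^{*} T_{V}\bigr)
\longrightarrow
H^1\!\bigl(\mathcal{C}, \omega_{\mathcal{C}}\bigr) \cong \mathbb{C},
% induced by f^* dW : f^* T_V -> omega_C; the degeneracy locus consists
% of the maps factoring through Crit(W), to which the virtual cycle
% then localizes.
```

The construction below follows this pattern, with $\omega _{\mathcal {C}}$ replaced by a twist $\widetilde{\omega }$ and $\operatorname{d}W$ by $\operatorname{d}\widetilde{\mathcal {W}}_-$, so that the cosection extends over the boundary of maximal degeneracy.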
], [ "Modifying the target", "We recall the short-hand notation $\\mathfrak {U}^{\\mathrm {cpt}}$ and ${U}^\\mathrm {cpt}$ as in (REF ) and (REF ).", "Consider the universal log $R$ -map and the projection over ${U}^\\mathrm {cpt}$ , respectively: $f_{{U}^\\mathrm {cpt}}\\colon \\mathcal {C}_{{U}^\\mathrm {cpt}} \\rightarrow \\mathfrak {P}\\ \\ \\ \\mbox{and} \\ \\ \\ \\pi \\colon \\mathcal {C}_{{U}^\\mathrm {cpt}} \\rightarrow {U}^\\mathrm {cpt}.$ Denote by $f_{{U}^\\mathrm {cpt}}\\colon \\mathcal {C}_{{U}^\\mathrm {cpt}} \\rightarrow \\mathcal {P}_{{U}^\\mathrm {cpt}} :=\\mathfrak {P}\\times _{\\mathbf {BC}^*_\\omega }\\mathcal {C}_{{U}^\\mathrm {cpt}}$ again for the corresponding section.", "To obtain a cosection, we modify the target $\\mathcal {P}_{{U}^\\mathrm {cpt}}$ as follows.", "Consider $\\mathfrak {X}_{{U}^\\mathrm {cpt}} := \\mathcal {C}_{{U}^\\mathrm {cpt}}\\times _{\\omega ^{\\log },\\mathbf {BC}^*_\\omega ,\\zeta }\\mathfrak {X}$ .", "Recall that $\\Sigma $ is the sum of all markings.", "We define $\\mathcal {P}_{{U}^\\mathrm {cpt},-}$ to be the log stack with the underlying stack $\\underline{\\mathcal {P}_{{U}^\\mathrm {cpt},-}} := \\underline{\\mathbb {P}}^{\\mathbf {w}}\\left(\\bigoplus _{i > 0}(\\mathbf {E}^{\\vee }_i|_{\\mathfrak {X}_{{U}^\\mathrm {cpt}}}\\otimes \\mathcal {L}_{\\mathfrak {X}}^{\\otimes i}|_{\\mathfrak {X}_{{U}^\\mathrm {cpt}}}(-\\Sigma ))\\oplus \\mathcal {O}_{\\mathfrak {X}_{{U}^\\mathrm {cpt}}} \\right).$ The log structure on $\\mathcal {P}_{{U}^\\mathrm {cpt},-}$ is defined to be the direct sum of the log structures from the curve $\\mathcal {C}_{{U}^\\mathrm {cpt}}$ and the Cartier divisor $\\infty _{\\mathcal {P}_{{U}^\\mathrm {cpt},-}}$ , similarly to $\\mathcal {P}_{{U}^\\mathrm {cpt}}$ in Section REF .", "Denote by $\\mathcal {P}^{\\circ }_{{U}^\\mathrm {cpt},-} = \\mathcal {P}_{{U}^\\mathrm {cpt},-} \\setminus \\infty _{\\mathcal {P}_{{U}^\\mathrm {cpt},-}}$ .", "We have a morphism of vector bundles over $\\mathfrak 
{X}_{{U}^\\mathrm {cpt}}$ $\\mathcal {P}^{\\circ }_{{U}^\\mathrm {cpt},-} \\rightarrow \\mathcal {P}^{\\circ }_{{U}^\\mathrm {cpt}}$ which contracts the fiber over $\\Sigma $ , and is isomorphic everywhere else.", "This extends to a birational map $\\mathcal {P}_{{U}^\\mathrm {cpt},-} \\dashrightarrow \\mathcal {P}_{{U}^\\mathrm {cpt}}$ whose indeterminacy locus is precisely $\\infty _{\\mathcal {P}_{{U}^\\mathrm {cpt},-}}|_{\\Sigma }$ .", "Denote by $\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}} = \\mathcal {P}_{{U}^\\mathrm {cpt},-} \\setminus \\infty _{\\mathcal {P}_{{U}^\\mathrm {cpt},-}}|_{\\Sigma }.$ Lemma 3.13 There is a canonical factorization ${\\mathcal {C}_{{U}^\\mathrm {cpt}} [rr]^{f_{{U}^\\mathrm {cpt}}} [rd]_{f_{{U}^\\mathrm {cpt},-}} && \\mathcal {P}_{{U}^\\mathrm {cpt}} \\\\&\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}} [ru]&}$ Note that $f_{{U}^\\mathrm {cpt},-}$ and $f_{{U}^\\mathrm {cpt}}$ coincide when restricted away from $\\Sigma $ .", "The lemma follows from the constraint $f_{{U}^\\mathrm {cpt}}(\\Sigma ) \\subset \\mathbf {0}_{\\mathcal {P}_{{U}^\\mathrm {cpt}}}$ of the compact type locus.", "The following lemma will be used to show the compatibility of perfect obstruction theories constructed in (REF ) and in [26].", "Lemma 3.14 There is a canonical exact sequence $0 \\rightarrow f^*_{{U}^\\mathrm {cpt}}T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(-\\Sigma ) \\rightarrow f^*_{{U}^\\mathrm {cpt},-}T_{\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\rightarrow T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\Sigma } \\rightarrow 0.$ Consider the following commutative diagram of solid arrows ${0 [r] & T_{\\mathfrak {P}/\\mathfrak {X}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) [r] [d]^{\\cong } & T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) [r] @{-->}[d] & T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma 
) [r] @{_{(}->}[d] & 0 \\\\0 [r] & T_{\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathfrak {X}_{{U}^\\mathrm {cpt}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] @{_{(}->}[d] & T_{\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] @{_{(}->}[d] & T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] [d]^{\\cong } [r] & 0 \\\\0 [r] & T_{\\mathfrak {P}/\\mathfrak {X}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] & T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] & T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [r] & 0}$ where the horizontal lines are exact, the top exact sequence is the twist of the bottom one, and the lower middle vertical arrow is induced by (REF ).", "Note that the sheaves in the first two columns are naturally viewed as sub-sheaves of $T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}$ .", "The injection $T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) \\hookrightarrow T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}$ on the upper right corner can be viewed as an inclusion of quotients by the same sub-bundle $T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) \\big /T_{\\mathfrak {P}/\\mathfrak {X}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) \\subset T_{\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}\\big /T_{\\mathfrak {P}/\\mathfrak {X}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ),$ which lifts to $T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) \\subset T_{\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}$ by 
Lemma REF below.", "This defines the dashed arrow.", "Finally, (REF ) follows from combining the following exact sequence with the top two rows of the above commutative diagram: $0 \\rightarrow T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}(-\\Sigma ) \\rightarrow T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\rightarrow T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\Sigma } \\rightarrow 0.$ Lemma 3.15 Suppose $R$ is a commutative ring, and $A, B, C$ are submodules of an $R$ -module $M$ satisfying $A \\subset B$ , $A \\subset C$ and $B/A \\subset C/A$ as submodules of $M/A$ .", "Then $B \\subset C$ as submodules of $M$ .", "The proof is left to the reader." ], [ "The boundary of the moduli stacks", "Recall from [25] that the maximal degeneracy induces canonical morphisms to $\\mathcal {A}_{\\max }$ ${U}^\\mathrm {cpt}\\rightarrow \\mathfrak {U}^{\\mathrm {cpt}} \\rightarrow \\mathfrak {U}\\rightarrow \\mathcal {A}_{\\max }.$ Consider the Cartier divisors $\\Delta _{\\mathfrak {U}} = \\Delta _{\\max }\\times _{\\mathcal {A}_{\\max }}\\mathfrak {U}\\subset \\mathfrak {U}\\ \\ \\ \\mbox{and} \\ \\ \\ \\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}} = \\Delta _{\\max }\\times _{\\mathcal {A}_{\\max }}\\mathfrak {U}^{\\mathrm {cpt}} \\subset \\mathfrak {U}^{\\mathrm {cpt}}$ and their pre-image $\\Delta _{{U}^\\mathrm {cpt}} \\subset {U}^\\mathrm {cpt}$ .", "Hence we have the line bundle $\\mathbf {L}_{\\max } = \\mathcal {O}_{\\mathfrak {U}^{\\mathrm {cpt}}}(-\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}) = \\mathcal {O}_{\\mathcal {A}_{\\max }}(-\\Delta _{\\max })|_{\\mathfrak {U}^{\\mathrm {cpt}}}.$ Definition 3.16 We call $\\Delta _{\\mathfrak {U}}$ (resp.", "$\\Delta _{{U}^\\mathrm {cpt}}$ and $\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}} $ ) the boundary of maximal degeneracy of $\\mathfrak {U}$ (resp.", "${U}^\\mathrm {cpt}$ and $\\mathfrak {U}^{\\mathrm {cpt}}$ ).", "We further introduce the interiors 
$\\mathring{{R}}^\\mathrm {cpt}:= {U}^\\mathrm {cpt}\\setminus (\\Delta _{{U}^\\mathrm {cpt}}) \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathring{\\mathfrak {U}}^{\\mathrm {cpt}} := \\mathfrak {U}^{\\mathrm {cpt}}\\setminus (\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}).$ By construction, $\\mathring{{R}}^\\mathrm {cpt}$ (resp.", "$\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ ) parameterizes stable log $R$ -maps (resp.", "log maps) whose image avoids $\\infty _{\\mathfrak {P}}$ (resp.", "avoids $\\infty _{\\mathcal {A}}$ ).", "In this case, $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ is the stack of pre-stable curves since all maps to $\\mathcal {A}$ factor through its unique open dense point.", "In particular, $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ is smooth and log smooth." ], [ "The twisted superpotential over the modified target", "Consider the two morphisms $\\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\rightarrow \\mathcal {A}\\times \\mathcal {A}_{\\max } \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathcal {P}_{{U}^\\mathrm {cpt}} \\rightarrow \\mathcal {A}\\times \\mathcal {A}_{\\max }$ where the morphisms to the first copy of $\\mathcal {A}$ are induced by their infinity divisors.", "Pulling back (REF ) along the above two morphisms, we obtain $\\mathcal {P}^e_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\rightarrow \\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathcal {P}^e_{{U}^\\mathrm {cpt}} \\rightarrow \\mathcal {P}_{{U}^\\mathrm {cpt}}.$ Further removing the proper transforms of their infinity divisors from both, we obtain $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}}$ and $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}}$ .", "Note that $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}} \\cong \\mathfrak {P}^{e,\\circ }\\times _{\\mathbf {BC}^*_\\omega }\\mathcal {C}_{{U}^\\mathrm {cpt}}$ .", "Consider the short-hand $\\widetilde{\\omega }:= \\omega _{\\mathcal {C}_{{U}^\\mathrm {cpt}}/{U}^\\mathrm {cpt}}\\otimes \\pi 
^*\\mathbf {L}_{\\max }^{-\\tilde{r}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\widetilde{\\omega }_{\\log } := \\omega ^{\\log }_{\\mathcal {C}_{{U}^\\mathrm {cpt}}/{U}^\\mathrm {cpt}}\\otimes \\pi ^*\\mathbf {L}_{\\max }^{-\\tilde{r}}|_{{U}^\\mathrm {cpt}}$ with the natural inclusion $\\widetilde{\\omega }\\rightarrow \\widetilde{\\omega }_{\\log }$ .", "Lemma 3.17 There is a commutative diagram ${\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} [rr]^{\\widetilde{\\mathcal {W}}_{-}} [d] && \\widetilde{\\omega }[d] \\\\\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}} [rr]^{\\widetilde{\\mathcal {W}}} && \\widetilde{\\omega }_{\\log }}$ where the two vertical arrows are the natural inclusions, $\\widetilde{\\mathcal {W}}$ is the pull-back of $\\widetilde{W}$ , and the two horizontal arrows are isomorphic away from the fibers over $\\Sigma $ .", "It suffices to construct the following commutative diagram ${\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} [rr]^{\\widetilde{\\mathcal {W}}^{\\prime }_{-}} [d] && \\widetilde{\\omega }_{\\mathfrak {X}_{{U}^\\mathrm {cpt}}} [d] \\\\\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}} [rr]^{\\widetilde{\\mathcal {W}}^{\\prime }} && \\widetilde{\\omega }_{\\log , \\mathfrak {X}_{{U}^\\mathrm {cpt}}}}$ where the right vertical arrow is the pull-back of $\\widetilde{\\omega }\\rightarrow \\widetilde{\\omega }_{\\log }$ along $\\mathfrak {X}_{{U}^\\mathrm {cpt}} \\rightarrow \\mathcal {C}_{{U}^\\mathrm {cpt}}$ .", "By Proposition REF , the composition $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\rightarrow \\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}} \\rightarrow \\widetilde{\\omega }_{\\log , \\mathfrak {X}_{{U}^\\mathrm {cpt}}}$ contracts the fiber of $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\rightarrow \\mathfrak {X}_{{U}^\\mathrm {cpt}}$ over $\\Sigma $ to the zero section of $\\widetilde{\\omega }_{\\log , \\mathfrak {X}_{{U}^\\mathrm {cpt}}}$ , hence factors through 
$\\widetilde{\\omega }_{\\mathfrak {X}_{{U}^\\mathrm {cpt}}} \\cong \\widetilde{\\omega }_{\\log ,\\mathfrak {X}_{{U}^\\mathrm {cpt}}}(-\\Sigma )$ ." ], [ "The relative cosection", "By [25], (REF ) canonically lifts to a commutative triangle ${\\mathcal {C}_{{U}^\\mathrm {cpt}} [rr]^{f_{{U}^\\mathrm {cpt}}} [rd]_{f_{{U}^\\mathrm {cpt},-}} && \\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt}} \\\\&\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} [ru]&}$ where the corresponding arrows are denoted again by $f_{{U}^\\mathrm {cpt}}$ and $f_{{U}^\\mathrm {cpt},-}$ .", "Now we have $f^*_{{U}^\\mathrm {cpt},-}\\operatorname{d}\\widetilde{\\mathcal {W}}_- \\colon f^*_{{U}^\\mathrm {cpt},-}T_{\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\longrightarrow (\\widetilde{\\mathcal {W}}_-\\circ f_{{U}^\\mathrm {cpt},-})^*T_{\\widetilde{\\omega }/\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\cong \\widetilde{\\omega }.$ By (REF ), we have a composition $f^*_{{U}^\\mathrm {cpt}}T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(-\\Sigma ) \\longrightarrow f^*_{{U}^\\mathrm {cpt},-}T_{\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\rightarrow \\widetilde{\\omega },$ again denoted by $f^*_{{U}^\\mathrm {cpt},-}\\operatorname{d}\\widetilde{\\mathcal {W}}_-$ .", "Pushing forward along $\\pi $ and using (REF ), we have $\\sigma ^{\\bullet }_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} := \\pi _*\\big (f^*_{{U}^\\mathrm {cpt},-}\\operatorname{d}\\widetilde{\\mathcal {W}}_-\\big ) \\colon \\mathbb {E}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} \\longrightarrow \\pi _*\\widetilde{\\omega }\\cong \\pi _*\\omega _{\\mathcal {C}_{{U}^\\mathrm {cpt}}/{U}^\\mathrm {cpt}}\\otimes \\mathbf {L}_{\\max }^{-\\tilde{r}}|_{{U}^\\mathrm {cpt}},$ where the isomorphism follows from the projection formula and (REF ).", "Finally, taking the first cohomology, we obtain the canonical cosection: 
$\\sigma _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} \\colon \\operatorname{Obs}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} := H^1(\\mathbb {E}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}) \\longrightarrow \\mathbf {L}_{\\max }^{-\\tilde{r}}|_{{U}^\\mathrm {cpt}}.$" ], [ "The degeneracy locus of $\\sigma _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$", "Denote by $\\mathring{{R}}_W$ the stack of $R$ -maps in ${U}^\\mathrm {cpt}$ which factor through $\\operatorname{Crit}(W)$ .", "Since $\\operatorname{Crit}(W)$ is a closed sub-stack of $\\mathfrak {P}$ , $\\mathring{{R}}_W$ is a strict closed substack of $\\mathring{{R}}^\\mathrm {cpt}$ .", "The stack ${U}^{\\mathrm {cpt}}$ plays a key role in the following crucial result.", "Proposition 3.18 Suppose $W$ has proper critical locus.", "Then the degeneracy locus of $\\sigma _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ is supported on $\\mathring{{R}}_W \\subset {U}^\\mathrm {cpt}$ .", "It suffices to check the statement at each geometric point.", "Let $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a stable log $R$ -map given by a geometric point $S \\rightarrow {U}^\\mathrm {cpt}$ .", "Following the same line of proof as in [26], consider the cosection: $\\sigma _S:= \\sigma _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}|_S \\colon H^1(f^*T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(-\\Sigma )) \\rightarrow H^1(\\widetilde{\\omega }|_{\\mathcal {C}}).$ Applying Serre duality and dualizing, we have $\\sigma ^{\\vee }_S \\colon H^0(\\omega _{\\mathcal {C}/S}\\otimes \\widetilde{\\omega }^{\\vee }|_{\\mathcal {C}}) \\rightarrow H^0(\\omega _{\\mathcal {C}/S}\\otimes f^*\\Omega _{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(\\Sigma )).$ Note that $\\omega _{\\mathcal {C}/S}\\otimes \\widetilde{\\omega }^{\\vee }|_{\\mathcal {C}} =\\mathbf {L}_{\\max }^{\\tilde{r}}|_{\\mathcal {C}} \\cong \\mathcal {O}_{\\mathcal {C}}$ .", "Thus $\\sigma _S$ degenerates iff $id\\otimes \\big 
(f^*_{-}\\operatorname{d}\\widetilde{\\mathcal {W}}_-\\big )^{\\vee } \\colon \\omega _{\\mathcal {C}/S}\\otimes \\widetilde{\\omega }^{\\vee }|_{\\mathcal {C}} \\rightarrow \\omega _{\\mathcal {C}/S}\\otimes f^*\\Omega _{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(\\Sigma )$ degenerates, which translates to the vanishing of $\\big (f^*_{-}\\operatorname{d}\\widetilde{\\mathcal {W}}_-\\big ) \\colon f^*T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(-\\Sigma ) \\rightarrow \\mathcal {O}_{\\mathcal {C}}.$ Note that away from markings, $\\widetilde{\\mathcal {W}}_-$ is the same as $\\widetilde{\\mathcal {W}}$ , which is the pull-back of $\\widetilde{W}$ .", "If $S \\notin \\Delta _{{U}^{\\mathrm {cpt}}}$ , then (REF ) degenerates iff $f$ factors through $\\operatorname{Crit}(W)$ .", "Consider a geometric point $S \\in \\Delta _{{U}^{\\mathrm {cpt}}}$ .", "By [25], $\\mathcal {C}$ has at least one component $\\mathcal {Z}$ whose image via $f_{-}$ is contained in the exceptional locus of $\\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}} \\rightarrow \\mathcal {P}_{{U}^\\mathrm {cpt},\\mathrm {reg}}$ .", "Because $\\widetilde{\\mathcal {W}}$ has proper critical locus, (REF ) is non-zero along $\\mathcal {Z}$ .", "This completes the proof." ], [ "The reduced theory", "Next we fix a $W$ , and hence $\\widetilde{W}$ , with proper critical loci, and apply the general machinery in Section  to construct the reduced theory."
], [ "The twisted Hodge bundle", "Consider $\\widetilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}} := \\omega _{\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}/\\mathfrak {U}^{\\mathrm {cpt}}} \\otimes \\pi ^*_{\\mathfrak {U}^{\\mathrm {cpt}}}\\mathbf {L}^{-\\tilde{r}}_{\\max }$ and its direct image cone $ \\mathfrak {H}:=\\mathbf {C}(\\pi _{\\mathfrak {U},*}\\widetilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}) $ as in [16].", "It is an algebraic stack over $\\mathfrak {U}^{\\mathrm {cpt}}$ parameterizing sections of $\\widetilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}$ [16].", "Indeed, $\\mathfrak {H}$ is the total space of the vector bundle $R^0\\pi _{\\mathfrak {U}^{\\mathrm {cpt}},*} \\widetilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}} \\cong R^0\\pi _{\\mathfrak {U}^{\\mathrm {cpt}},*}\\omega _{\\mathfrak {U}^{\\mathrm {cpt}}}\\otimes \\mathbf {L}^{- \\tilde{r}}_{\\max }|_{\\mathfrak {U}^{\\mathrm {cpt}}}$ over $\\mathfrak {U}^{\\mathrm {cpt}}$ by [25].", "We further equip $\\mathfrak {H}$ with the log structure pulled back from $\\mathfrak {U}^{\\mathrm {cpt}}$ .", "By [16], $\\mathfrak {H}\\rightarrow \\mathfrak {U}^{\\mathrm {cpt}}$ has a perfect obstruction theory $\\varphi _{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}} \\colon \\mathbb {T}_{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}} \\rightarrow \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}} := \\pi _{\\mathfrak {H},*}\\widetilde{\\omega }_{\\mathfrak {H}}.$ By projection formula, we have $H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}}) = R^1\\pi _{\\mathfrak {H},*} \\widetilde{\\omega }_{\\mathfrak {H}} = R^1\\pi _{\\mathfrak {H}, *}\\omega _{\\mathfrak {H}}\\otimes \\mathbf {L}^{-\\tilde{r}}_{\\max }|_{\\mathfrak {H}} \\cong \\mathbf {L}^{-\\tilde{r}}_{\\max }|_{\\mathfrak {H}}.$ Let $\\mathbf {s}_{\\mathfrak {H}}\\colon \\mathcal {C}_{\\mathfrak {H}} \\rightarrow \\operatorname{Vb}(\\widetilde{\\omega }_{\\mathfrak {H}})$ be the universal section 
over $\\mathfrak {H}$ .", "The morphism ${U}^\\mathrm {cpt}\\rightarrow \\mathfrak {U}^{\\mathrm {cpt}}$ factors through the tautological morphism ${U}^\\mathrm {cpt}\\rightarrow \\mathfrak {H}$ such that $\\mathbf {s}_{\\mathfrak {H}}|_{{U}^\\mathrm {cpt}} = \\widetilde{\\mathcal {W}}_{-} \\circ f_{{U}^\\mathrm {cpt},-}$ ." ], [ "Verifying assuptions in Section ", "First, the sequence (REF ) in consideration is ${U}^\\mathrm {cpt}\\rightarrow \\mathfrak {H}\\rightarrow \\mathfrak {U}^{\\mathrm {cpt}}$ with the perfect obstruction theories $\\varphi _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ in (REF ) and $\\varphi _{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}}$ in (REF ).", "Choose the Cartier divisor $\\Delta = \\tilde{r}\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}$ with the pre-images $\\tilde{r}\\Delta _{{U}^\\mathrm {cpt}} \\subset {U}^\\mathrm {cpt}$ and $\\tilde{r}\\Delta _{\\mathfrak {H}} \\subset \\mathfrak {H}$ .", "Thus we have the two term complex $\\mathbb {F}= [\\mathcal {O}_{\\mathfrak {U}^{\\operatorname{ev}}_{0}} \\stackrel{\\epsilon }{\\rightarrow }\\mathbf {L}^{-\\tilde{r}}_{\\max }]$ in degrees $[0,1]$ .", "The commutativity of (REF ) is verified in Lemma REF below, and the sujectivity of (REF ) along $\\Delta _{\\mathfrak {U}_0}$ follows from Proposition REF .", "Lemma 3.19 There is a canonical commutative diagram ${\\mathbb {T}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} [rr] [d]_{\\varphi _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}} && \\mathbb {T}_{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}} [d]^{\\varphi _{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}}} \\\\\\mathbb {E}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} [rr]^{\\sigma ^{\\bullet }_{\\mathfrak {U}^{\\mathrm {cpt}}}} && \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}}}$ where the two vertical arrows are the perfect obstruction theories.", "Similarly as in 
Section REF , we may construct the log weighted projective bundle $\\mathcal {P}_{\\mathfrak {U}^{\\mathrm {cpt}}} \\rightarrow \\mathfrak {X}_{\\mathfrak {U}^{\\mathrm {cpt}}} := \\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}\\times _{\\mathbf {BC}^*_\\omega } \\mathfrak {X}$ and its modification $\\mathcal {P}^{e,\\circ }_{\\mathfrak {U}^{\\mathrm {cpt}},\\mathrm {reg}}$ with the pull-backs $\\mathcal {P}_{\\mathfrak {U}^{\\mathrm {cpt}}}\\times _{\\mathfrak {X}_{{U}^\\mathrm {cpt}}}\\mathfrak {X}_{\\mathfrak {U}^{\\mathrm {cpt}}} \\cong \\mathcal {P}_{{U}^\\mathrm {cpt}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathcal {P}^{e,\\circ }_{\\mathfrak {U}^{\\mathrm {cpt}},\\mathrm {reg}}\\times _{\\mathfrak {X}_{{U}^\\mathrm {cpt}}}\\mathfrak {X}_{\\mathfrak {U}^{\\mathrm {cpt}}} \\cong \\mathcal {P}^{e,\\circ }_{{U}^\\mathrm {cpt},\\mathrm {reg}}.$ We may also define the line bundle $\\tilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}$ over $\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}$ similar to (REF ).", "The same proof as in Lemma REF yields a morphism $\\widetilde{\\mathcal {W}}_{\\mathfrak {U}^{\\mathrm {cpt}},-} \\colon \\mathcal {P}^{e,\\circ }_{\\mathfrak {U}^{\\mathrm {cpt}},\\mathrm {reg}} \\rightarrow \\tilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}$ which pulls back to $\\widetilde{\\mathcal {W}}_{-}$ over ${U}^\\mathrm {cpt}$ .", "We obtain a commutative diagram ${\\mathcal {C}_{{U}^\\mathrm {cpt}} [rr] [d]_{f_{{U}^\\mathrm {cpt},-}} && \\mathcal {C}_\\mathfrak {H}[d]^{\\mathbf {s}_\\mathfrak {H}} \\\\\\mathcal {P}^{e,\\circ }_{\\mathfrak {U}^{\\mathrm {cpt}},\\mathrm {reg}} [rr]^{\\widetilde{\\mathcal {W}}_{\\mathfrak {U}^{\\mathrm {cpt}},-}} && \\tilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}}$ where by abuse of notations the two vertical arrows are labeled by the morphisms inducing them.", "This leads to a commutative diagram of log tangent complexes ${\\pi ^* \\mathbb {T}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} \\cong \\mathbb 
{T}_{\\mathcal {C}_{{U}^\\mathrm {cpt}}/\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}} [rr] [d] && \\pi ^* \\mathbb {T}_{\\mathcal {C}_{\\mathfrak {H}}/\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} \\cong \\mathbb {T}_{\\mathfrak {H}/{\\mathfrak {U}^{\\mathrm {cpt}}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}} [d]\\\\(f_{{U}^\\mathrm {cpt},-})^* \\mathbb {T}_{\\mathcal {P}^{e,\\circ }_{\\mathfrak {U}^{\\mathrm {cpt}},\\mathrm {reg}}/\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}} [rr]^{(\\operatorname{d}\\widetilde{\\mathcal {W}}_{\\mathfrak {U}^{\\mathrm {cpt}},-})|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}} && (\\mathbf {s}_\\mathfrak {H})^* \\mathbb {T}_{\\tilde{\\omega }_{\\mathfrak {U}^{\\mathrm {cpt}}}/\\mathcal {C}_{\\mathfrak {U}^{\\mathrm {cpt}}}}|_{\\mathcal {C}_{{U}^\\mathrm {cpt}}}}$ Diagram (REF ) follows from first applying $\\pi _*$ to the above diagram and then using adjunction." ], [ "The reduced perfect obstruction theory", "Applying Theorem REF to the situation above, we obtain the reduced perfect obstruction theory $\\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} \\colon \\mathbb {T}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}} \\rightarrow \\mathbb {E}^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ and the reduced cosection $\\sigma ^{\\mathrm {red}}_{\\mathfrak {U}^{\\mathrm {cpt}}} \\colon H^1(\\mathbb {E}^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}) \\rightarrow \\mathcal {O}_{{U}^\\mathrm {cpt}}$ with the following properties The morphism $\\varphi _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ factors through $\\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ such that $\\varphi _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}\\setminus \\Delta _{{U}^\\mathrm {cpt}}} = \\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm 
{cpt}}}|_{{U}^\\mathrm {cpt}\\setminus \\Delta _{{U}^\\mathrm {cpt}}}$ $\\sigma ^{\\mathrm {red}}_{\\mathfrak {U}^{\\mathrm {cpt}}}$ is surjective along $\\Delta _{{U}^\\mathrm {cpt}}$ , and satisfies $\\sigma ^{\\mathrm {red}}_{\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}\\setminus \\Delta _{{U}^\\mathrm {cpt}}} = \\sigma _{\\mathfrak {U}^{\\mathrm {cpt}}}|_{{U}^\\mathrm {cpt}\\setminus \\Delta _{{U}^\\mathrm {cpt}}}.$ The virtual cycle $[{U}^\\mathrm {cpt}]^{\\mathrm {red}}$ associated to $\\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ is called the reduced virtual cycle of ${U}^\\mathrm {cpt}$ .", "We emphasize that the reduced theory depends on the superpotential $W$ ." ], [ "The cosection localized virtual cycle of $\\mathring{{R}}^\\mathrm {cpt}$", "Recall from Proposition REF that the degeneracy loci of $\\sigma _{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}$ are supported along the proper substack $\\mathring{{R}}_W \\subset \\mathring{{R}}^\\mathrm {cpt}$ .", "We have canonical embeddings $\\mathring{\\iota }\\colon \\mathring{{R}}_W \\hookrightarrow \\mathring{{R}}^\\mathrm {cpt}\\ \\ \\ \\mbox{and} \\ \\ \\ \\iota \\colon \\mathring{{R}}_W \\hookrightarrow {U}^\\mathrm {cpt}.$ Since $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ is smooth, we are in the situation of Section REF .", "Applying Theorem REF , we obtain the cosection localized virtual cycle $[\\mathring{{R}}^\\mathrm {cpt}]_{\\sigma } \\in A_*(\\mathring{{R}}_W)$ with the property that $\\mathring{\\iota }_*[\\mathring{{R}}^\\mathrm {cpt}]_{\\sigma } = [\\mathring{{R}}^\\mathrm {cpt}]^{\\mathrm {vir}}$ .", "Since the canonical theory, the reduced theory, and their cosections all agree over $\\mathring{{R}}^\\mathrm {cpt}$ , the existence of the cycle $[\\mathring{{R}}^\\mathrm {cpt}]_{\\sigma }$ does not require the compactification ${U}^\\mathrm {cpt}$ of $\\mathring{{R}}^\\mathrm {cpt}$ ." 
], [ "The first comparison theorem", "We now show that the reduced virtual cycle and the cosection localized virtual cycle agree.", "Theorem 3.20 $\\iota _*[\\mathring{{R}}^\\mathrm {cpt}]_{\\sigma } = [{U}^\\mathrm {cpt}]^{\\mathrm {red}}$ Since ${U}^\\mathrm {cpt}$ is of finite type, replacing $\\mathfrak {U}$ by an open set containing the image of ${U}^\\mathrm {cpt}$ , we may assume that $\\mathfrak {U}$ hence $\\mathfrak {U}^{\\mathrm {cpt}}$ is also of finite type.", "By [25], there is a birational projective resolution $\\mathfrak {r}\\colon \\widetilde{\\mathfrak {U}}\\rightarrow \\mathfrak {U}$ which restricts to the identity on $\\mathring{\\mathfrak {U}}= \\mathfrak {U}\\setminus (\\Delta _{\\max }|_{\\mathfrak {U}})$ .", "Let $\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}} =\\mathfrak {U}^{\\mathrm {cpt}}\\times _{\\mathfrak {U}}\\widetilde{\\mathfrak {U}}\\rightarrow \\mathfrak {U}^{\\mathrm {cpt}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\widetilde{{U}}^\\mathrm {cpt}= {U}\\times _{\\mathfrak {U}}\\widetilde{\\mathfrak {U}}\\rightarrow {U}.$ By abuse of notations, both morphisms are denoted by $\\mathfrak {r}$ when there is no danger of confusion.", "Then the two morphisms restrict to the identity on $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ and $\\mathring{{R}}^{\\mathrm {cpt}}$ respectively.", "Furthermore, $\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}} \\rightarrow \\mathfrak {U}^{\\mathrm {cpt}}$ is a birational projective resolution by Proposition REF .", "Let $(\\varphi ^{\\mathrm {red}}_{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}, \\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}})$ be the pull-back of $(\\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}, \\sigma _{{\\mathfrak {U}}^{\\mathrm {cpt}}})$ along $\\mathfrak {r}$ .", "Then $\\varphi ^{\\mathrm {red}}_{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}$ defines a perfect obstruction theory of 
$\\widetilde{{U}}^\\mathrm {cpt}\\rightarrow \\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}$ hence a virtual cycle $[\\widetilde{{U}}^\\mathrm {cpt}]^{\\mathrm {red}}$ .", "By the virtual push-forward of [29], [45], we have $\\mathfrak {r}_*[\\widetilde{{U}}^\\mathrm {cpt}]^{\\mathrm {red}} = [{U}^\\mathrm {cpt}]^{\\mathrm {red}}.$ On the other hand, since $(\\varphi ^{\\mathrm {red}}_{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}, \\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}})$ is the pull-back of $(\\varphi ^{\\mathrm {red}}_{{U}^\\mathrm {cpt}/\\mathfrak {U}^{\\mathrm {cpt}}}, \\sigma _{{\\mathfrak {U}}^{\\mathrm {cpt}}})$ , the same properties listed in Section REF also pull back to $(\\varphi ^{\\mathrm {red}}_{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}, \\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}})$ .", "Since $\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}$ is smooth, Theorem REF implies $\\iota _*[\\widetilde{{U}}^\\mathrm {cpt}]_{\\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}} = [{U}^\\mathrm {cpt}]^{\\mathrm {red}}.$ Since $\\mathfrak {r}$ does not modify the interior $\\mathring{{R}}^\\mathrm {cpt}$ and $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}}$ , we have $[\\widetilde{{U}}^\\mathrm {cpt}]_{\\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}} = [\\mathring{{R}}^\\mathrm {cpt}]_{\\sigma _{{\\mathfrak {U}}^{\\mathrm {cpt}}}}.$ Finally, (REF ), (REF ), and (REF ) together imply the statement." 
], [ "The second comparison theorem", "By Section REF and Theorem REF (1), we obtain a factorization of perfect obstruction theories of $\\Delta _{{U}^\\mathrm {cpt}} \\rightarrow \\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}$ ${\\mathbb {T}_{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}} [rr]^{\\varphi _{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}}} [rd]_{\\varphi ^{\\mathrm {red}}_{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}}} && \\mathbb {E}_{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}} \\\\&\\mathbb {E}^{\\mathrm {red}}_{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}} [ru]&}$ where the top is the pull-back of (REF ).", "Let $[\\Delta _{{U}^\\mathrm {cpt}}]^{\\mathrm {red}}$ be the reduced boundary virtual cycle associated to $\\varphi ^{\\mathrm {red}}_{\\Delta _{{U}^\\mathrm {cpt}}/\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}}$ .", "We then have: Theorem 3.21 $[{U}^\\mathrm {cpt}]^{\\mathrm {vir}} = [{U}^\\mathrm {cpt}]^{\\mathrm {red}} + \\tilde{r}[\\Delta _{{U}^\\mathrm {cpt}}]^{\\mathrm {red}}$ .", "The pull-back $\\varphi _{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}} := \\varphi _{{U}^\\mathrm {cpt}/{\\mathfrak {U}}^{\\mathrm {cpt}}}|_{\\widetilde{{U}}^\\mathrm {cpt}}$ defines a perfect obstruction theory of $\\widetilde{{U}}^\\mathrm {cpt}\\rightarrow \\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}$ with the corresponding virtual cycle $[\\widetilde{{U}}^\\mathrm {cpt}]^{\\mathrm {vir}}$ .", "Applying the virtual push-forward [29], [45], we have $\\mathfrak {r}_*[\\widetilde{{U}}^\\mathrm {cpt}] = [{U}^\\mathrm {cpt}].$ Consider the resolution (REF ), and write $\\Delta _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}} = \\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}}\\times _{\\mathfrak {U}^{\\mathrm {cpt}}}\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\Delta 
_{\\widetilde{{U}}^\\mathrm {cpt}} = \\Delta _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}\\times _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}\\widetilde{{U}}^\\mathrm {cpt}.$ Applying Theorem REF to the data $(\\widetilde{{U}}^\\mathrm {cpt}, \\tilde{r}\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}, \\varphi _{\\widetilde{{U}}^\\mathrm {cpt}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}, \\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}})$ , we obtain the reduced boundary cycle $[\\tilde{r}\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}} = \\tilde{r}[\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}}$ and the following relation $[\\widetilde{{U}}^\\mathrm {cpt}] = [\\widetilde{{U}}^\\mathrm {cpt}]^{\\mathrm {red}} + \\tilde{r}\\cdot [\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}}.$ Applying $\\mathfrak {r}_*$ and using (REF ) and (REF ), we have $[{U}^\\mathrm {cpt}] = [{U}^\\mathrm {cpt}]^{\\mathrm {red}} + \\tilde{r}\\cdot \\mathfrak {r}_*[\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}}.$ It remains to verify that $[\\Delta _{{U}^\\mathrm {cpt}}]^{\\mathrm {red}} = \\mathfrak {r}_*[\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}}$ .", "Recall the degeneracy loci $\\mathring{{R}}_{W} \\subset \\mathring{{R}}^\\mathrm {cpt}$ of $\\sigma _{\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}$ .", "Write $V = {U}^\\mathrm {cpt}\\setminus \\mathring{{R}}_{W}$ and $\\widetilde{V} = \\widetilde{{U}}^\\mathrm {cpt}\\setminus \\mathring{{R}}_{W}$ .", "In the same way as in (REF ) we construct the totally reduced perfect obstruction theory $\\mathbb {E}^{\\operatorname{tred}}_{V/\\mathfrak {U}^{\\mathrm {cpt}}}$ for $V \\rightarrow \\mathfrak {U}^{\\mathrm {cpt}}$ which pulls back to the totally reduced perfect obstruction theory $\\mathbb {E}^{\\operatorname{tred}}_{\\widetilde{V}/\\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}}$ for $\\widetilde{V} \\rightarrow \\widetilde{\\mathfrak {U}}^{\\mathrm {cpt}}$ .", 
"Let $[V]^{\\operatorname{tred}}$ and $[\\widetilde{V}]^{\\operatorname{tred}}$ be the corresponding virtual cycles.", "Then the virtual push-forward implies $\\mathfrak {r}_*[\\widetilde{V}]^{\\operatorname{tred}} = [V]^{\\operatorname{tred}}$ .", "We calculate $\\tilde{r}\\cdot \\mathfrak {r}_*[\\Delta _{\\widetilde{{U}}^\\mathrm {cpt}}]^{\\mathrm {red}} = \\mathfrak {r}_* i^!", "[\\widetilde{V}]^{\\operatorname{tred}} = i^!", "[V]^{\\operatorname{tred}} = \\tilde{r}\\cdot [\\Delta _{{U}^\\mathrm {cpt}}]^{\\mathrm {red}}$ where the first and the last equalities follow from (REF ), and the middle one follows from the projection formula.", "This completes the proof." ], [ "Independence of twists II: the case of the reduced theory", "In this section, we complete the proof of the change of twists theorem.", "Consider the two targets $\\mathfrak {P}_1$ and $\\mathfrak {P}_2$ as in Section REF .", "Since $\\mathfrak {P}_1 \\rightarrow \\mathfrak {P}_2$ is isomorphic along $\\mathbf {0}_{\\mathfrak {P}_1} \\cong \\mathbf {0}_{\\mathfrak {P}_2}$ and $\\vec{\\varsigma }$ is a collection compact type sectors, the morphism in Corollary REF restricts to $\\nu _{a_1/a_2} \\colon {U}^{\\mathrm {cpt}}_1 := {U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_1,\\beta ) \\rightarrow {U}^{\\mathrm {cpt}}_2 :={U}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}_2,\\beta ).$ We compare the virtual cycles: Theorem 3.22 $\\nu _{{a_1/a_2},*}[{U}^{\\mathrm {cpt}}_1]^{\\mathrm {red}} =[{U}^{\\mathrm {cpt}}_2]^{\\mathrm {red}}$ $\\nu _{{a_1/a_2},*}[{U}^{\\mathrm {cpt}}_1]^\\mathrm {vir}=[{U}^{\\mathrm {cpt}}_2]^\\mathrm {vir}$ .", "$\\nu _{{a_1/a_2},*}[\\Delta _{{U}^{\\mathrm {cpt}}_1}]^{\\mathrm {red}} = \\frac{a_2}{a_1} \\cdot [\\Delta _{{U}^{\\mathrm {cpt}}_2}]^{\\mathrm {red}}.$ By Theorem REF , both $[{U}^{\\mathrm {cpt}}_1]^{\\mathrm {red}}$ and $[{U}^{\\mathrm {cpt}}_2]^{\\mathrm {red}}$ are represented by the same cosection localized virtual cycle contained in the 
common open set $\mathring{{R}}^\mathrm {cpt}$ of both ${U}^{\mathrm {cpt}}_1$ and ${U}^{\mathrm {cpt}}_2$ , hence are independent of the choices of $a_i$ .", "This proves (1).", "We can prove (2) similarly to Proposition REF .", "The only modification needed is to work over the log evaluation stack in Section REF .", "Finally, (3) follows from (1), (2) and Theorem REF ." ], [ "Gromov–Witten theory of complete intersections", "One of the most direct applications of log GLSM is to study the Gromov–Witten theory of complete intersections, and more generally, zero loci of non-degenerate sections of vector bundles.", "Here, the most prominent examples are quintic threefolds in $\mathbb {P}^4$ .", "The input of this log GLSM is given by a proper smooth Deligne–Mumford stack $\mathcal {X}$ with a projective coarse moduli space, a vector bundle $\mathbf {E}= \mathbf {E}_1$ over $\mathcal {X}$ , and a section $s \in H^0(\mathbf {E})$ whose zero locus $\mathcal {Z}$ is smooth of codimension $\operatorname{rk}\mathbf {E}$ .", "In this case we may choose $\mathbf {L}= \mathcal {O}_{\mathcal {X}}$ , $r = 1$ , and may choose $a= 1$ for simplicity.", "Then the universal targets are $\mathfrak {P}= \mathbb {P}(\mathbf {E}^\vee \otimes \mathcal {L}_\omega \oplus \mathcal {O})$ and $\mathfrak {P}^\circ = \operatorname{Vb}(\mathbf {E}^\vee \otimes \mathcal {L}_\omega )$ .", "We may also view them as the quotients of $\mathbb {P}(\mathbf {E}^\vee \oplus \mathcal {O})$ and $\operatorname{Vb}(\mathbf {E}^\vee )$ under the $\mathbb {C}^*_\omega = \mathbb {C}^*$ -scalar multiplication on $\mathbf {E}^\vee $ .", "By Proposition REF , the data of a stable R-map $f\colon \mathcal {C}\ \rightarrow \mathfrak {P}^{\circ }$ with compact type evaluation over $S$ is equivalent to a stable map $g\colon \mathcal {C}\rightarrow \mathcal {X}$ over $S$ together with a section $\rho \in H^0(\omega _\mathcal {C}\otimes g^*(\mathbf
{E}^\\vee ))$ .", "Thus ${R}^{\\mathrm {cpt}}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ is the same as the moduli space of stable maps to $\\mathcal {X}$ with $p$ -fields studied in [16], [44], [20], [26].", "In this situation, the superpotential $W\\colon \\operatorname{Vb}(\\mathbf {E}^\\vee \\boxtimes \\mathcal {L}_\\omega ) \\rightarrow \\operatorname{Vb}(\\mathcal {L}_\\omega )$ is defined as the pairing with $s$ .", "It has proper critical locus whenever $\\mathcal {Z}$ is smooth of expected dimension [26], and then the degeneracy locus $\\mathring{{R}}_W$ is supported on ${M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta )$ embedded in the subset ${M}_{g, \\vec{\\varsigma }}(\\mathcal {X}, \\beta ) \\subset {R}^{\\mathrm {cpt}}_{g,\\vec{\\varsigma }}(\\mathfrak {P}^{\\circ },\\beta )$ , which is defined by log $R$ -maps mapping into $\\mathbf {0}_\\mathfrak {P}$ .", "Recall that $\\vec{\\varsigma }$ is a collection of connected components of the inertia stack of $\\mathcal {X}$ .", "The moduli space ${M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta )$ parameterizes stable maps $\\mathcal {C}\\rightarrow \\mathcal {Z}$ such that the composition $\\mathcal {C}\\rightarrow \\mathcal {Z}\\rightarrow \\mathcal {X}$ has curve class $\\beta $ , and sectors $\\vec{\\varsigma }$ .", "In particular, ${M}_{g, \\vec{\\varsigma }}(\\mathcal {Z}, \\beta )$ is a disjoint union parameterized by curve classes $\\beta ^{\\prime }$ on $\\mathcal {Z}$ such that $\\iota _* \\beta ^{\\prime } = \\beta $ under the inclusion $\\iota \\colon \\mathcal {Z}\\rightarrow \\mathcal {X}$ .", "Combining Theorem REF with the results in [16], [44], [20], and more generally in [26], [49], we obtain: Proposition 4.1 In the above setting, we have $[ {U}_{g, \\vec{\\varsigma }}(\\mathfrak {P}, \\beta )]^{\\mathrm {red}}= (-1)^{\\operatorname{rk}(\\mathbf {E})(1 - g) + \\int _\\beta c_1(\\mathbf {E}) - \\sum _{j = 1}^n \\operatorname{age}_j(\\mathbf {E})} [{M}_{g, 
\\vec{\\varsigma }}(\\mathcal {Z}, \\beta )]^\\mathrm {vir},$ where $\\operatorname{age}_j(\\mathbf {E})$ is the age of $\\mathbf {E}|_\\mathcal {C}$ at the $j$ th marking (see [5]).", "Therefore Gromov–Witten invariants of $\\mathcal {Z}$ (involving only cohomology classes from $\\mathcal {X}$ ) can be computed in terms of log GLSM invariants.", "We will show that the perfect obstruction theory and cosection used in this paper are compatible with those in [26].", "Recall the notations $\\mathring{{R}}^\\mathrm {cpt}= {R}^\\mathrm {cpt}_{g, \\vec{\\varsigma }}(\\mathfrak {P}^\\circ , \\beta )$ and $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}} = \\mathfrak {U}^{\\mathrm {cpt}}\\setminus (\\Delta _{\\mathfrak {U}^{\\mathrm {cpt}}})$ from (REF ).", "Note that $\\mathring{\\mathfrak {U}}^{\\mathrm {cpt}} = \\mathring{\\mathfrak {U}}\\times (\\overline{\\mathcal {I}}_{\\mu }\\mathcal {X})^n$ where $ \\overline{\\mathcal {I}}_{\\mu }\\mathcal {X}$ is the rigidified cyclotomic inertia stack of $\\mathcal {X}$ as in [5], and $\\mathring{\\mathfrak {U}}$ is simply the moduli of twisted curves.", "Note that we have a morphism of distinguished triangles over $\\mathring{{R}}^\\mathrm {cpt}$ ${\\mathbb {T}_{\\mathring{{R}}^\\mathrm {cpt}/\\mathring{\\mathfrak {U}}^{cpt}} [r] [d] & \\mathbb {T}_{\\mathring{{R}}^\\mathrm {cpt}/\\mathring{\\mathfrak {U}}} [r] [d] & T_{(\\overline{\\mathcal {I}}_{\\mu }\\mathcal {X})^n}|_{\\mathring{{R}}^\\mathrm {cpt}} [d]^{\\cong } [r] &\\\\\\pi _{\\mathring{{R}}^\\mathrm {cpt},*}f^*_{\\mathring{{R}}^\\mathrm {cpt}}T_{\\mathfrak {P}/\\mathbf {BC}^*_\\omega }(-\\Sigma ) [r] & \\pi _{\\mathring{{R}}^\\mathrm {cpt},*}f^*_{\\mathring{{R}}^\\mathrm {cpt},-}T_{\\mathcal {P}_{\\mathring{{R}}^\\mathrm {cpt},\\mathrm {reg}}/\\mathcal {C}_{\\mathring{{R}}^\\mathrm {cpt}}} [r] & \\pi _{\\mathring{{R}}^\\mathrm {cpt},*}T_{\\mathfrak {X}/\\mathbf {BC}^*_\\omega }|_{\\Sigma } [r] & \\\\}$ where the left vertical arrow is the restriction of the perfect 
obstruction theory (REF ) to $\mathring{{R}}^\mathrm {cpt}$ , the middle vertical arrow is precisely the perfect obstruction theory [26], the vertical arrow on the right follows from [5], and the bottom is obtained by applying the derived pushforward $\pi _{\mathring{{R}}^\mathrm {cpt},*}$ to (REF ).", "Thus, the perfect obstruction theory defined in this paper is compatible with that of [26], hence they define the same absolute perfect obstruction theory of $\mathring{{R}}^\mathrm {cpt}$ .", "Now applying $R^1\pi _{\mathring{{R}}^\mathrm {cpt},*}$ to the composition (REF ), we have ${R^1\pi _{\mathring{{R}}^\mathrm {cpt},*}f^*_{\mathring{{R}}^\mathrm {cpt}}T_{\mathfrak {P}/\mathbf {BC}^*_\omega }(-\Sigma ) [r]^{\cong \ \ } [rd] & R^1\pi _{\mathring{{R}}^\mathrm {cpt},*}f^*_{\mathring{{R}}^\mathrm {cpt},-}T_{\mathcal {P}_{\mathring{{R}}^\mathrm {cpt},\mathrm {reg}}/\mathcal {C}_{\mathring{{R}}^\mathrm {cpt}}} [d]\\& \mathcal {O}_{\mathring{{R}}^\mathrm {cpt}}}$ where the horizontal isomorphism follows from the compatibility of perfect obstruction theories above, the vertical arrow on the right is the relative cosection [26], and the skew arrow is the relative cosection (REF ) restricted to the open substack $\mathring{{R}}^\mathrm {cpt}$ .", "This means that the cosection in this paper, restricted to $\mathring{{R}}^\mathrm {cpt}$ , is identical to the cosection in [26].", "Therefore, the statement follows from [26]."
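As a sanity check on the sign in Proposition 4.1, one can specialize to the quintic threefold (an illustration added here, not part of the original argument): take $\mathcal {X}= \mathbb {P}^4$ , $\mathbf {E}= \mathcal {O}_{\mathbb {P}^4}(5)$ , and $\mathcal {Z}$ a smooth quintic. All markings are untwisted, so every age term vanishes, and for a curve class of degree $d$ one has $\operatorname{rk}(\mathbf {E}) = 1$ and $\int _\beta c_1(\mathbf {E}) = 5d$ .

```latex
% Quintic threefold (illustrative specialization of Proposition 4.1):
% rk(E) = 1,  \int_beta c_1(E) = 5d,  all age terms = 0, so
[{U}_{g}(\mathfrak{P}, d)]^{\mathrm{red}}
  \;=\; (-1)^{(1-g) + 5d}\,[{M}_{g}(\mathcal{Z}, d)]^{\mathrm{vir}}.
```

This recovers the familiar sign $(-1)^{5d + 1 - g}$ from the comparison of Gromov–Witten invariants of the quintic with the theory of stable maps with $p$ -fields cited above.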
], [ "FJRW theory", "We discuss in this section how our set-up includes all of FJRW theory, which is traditionally [30] stated in terms of a quasi-homogeneous polynomial $W$ defining an isolated singularity at the origin, and a diagonal symmetry group $G$ of $W$ .", "We first recall a more modern perspective on the input data for the FJRW moduli space following [33] and [50].", "Fix an integer $N$ , a finite subgroup $G \\subset \\mathbb {G}_{m}^N$ , and positive integers $c_1, \\cdots , c_N$ such that $\\gcd (c_1, \\cdots , c_N) = 1$ .", "Let $\\mathbb {C}^*_R$ be the one dimensional sub-torus $\\lbrace (\\lambda ^{c_1}, \\cdots , \\lambda ^{c_N})\\rbrace \\subset \\mathbb {G}_{m}^N$ , and assume that $G \\cap \\mathbb {C}^*_R$ is a cyclic group of order $r$ , which is usually denoted by $\\langle J\\rangle $ .", "Consider the subgroup $\\Gamma = G \\cdot \\mathbb {C}^*_R \\subset \\mathbb {G}_{m}^N$ .", "There is a homomorphism $\\zeta \\colon \\Gamma \\rightarrow \\mathbb {C}^*_\\omega \\cong \\mathbb {G}_{m}$ defined by $G \\mapsto 1$ and $(\\lambda ^{c_1}, \\cdots , \\lambda ^{c_N}) \\mapsto \\lambda ^r$ .", "Definition 4.2 A $\\Gamma $ -structure on a twisted stable curve $\\mathcal {C}$ is a commutative diagram ${& \\mathbf {B\\Gamma } [d] \\\\\\mathcal {C}[r] [ur] & \\mathbf {BC}^*_\\omega .", "}$ A $\\Gamma $ -structure with fields [17] is a commutative diagram ${& [\\mathbb {C}^N / \\Gamma ] [d] \\\\\\mathcal {C}[r] [ur] & \\mathbf {BC}^*_\\omega .", "}$ Remark 4.3 A special case of FJRW theory is the $r$ -spin theory, whose logarithmic GLSM was discussed in [25].", "In this case, $N = 1$ , $\\mathbb {C}^*_R = \\Gamma $ , and $G = \\mu _r \\subset \\mathbb {C}^*_R$ is the subgroup of $r$ th roots of unity.", "Lemma 4.4 There is hybrid target data (as in Section REF ) such that there is a commutative diagram ${[\\mathbb {C}^N/\\Gamma ] [r]^{\\sim } [dr] & \\mathfrak {P}^\\circ [d] \\\\& \\mathbf {BC}^*_\\omega .", "}$ This is a special case of the 
following Lemma REF .", "There are several constructions of the FJRW virtual cycle in full generality [15], [32], [43], [50].", "The construction closest to ours, and which we will follow here, is the approach of [17] using cosection localized virtual classes for the special case of narrow insertions at all markings.", "In the FJRW situation, by Lemma REF , the moduli space ${R}^{\mathrm {cpt}}_{g, \vec{\varsigma }}(\mathfrak {P}^\circ , \beta )$ of stable $R$ -maps is the same as the moduli of $G$ -spin curves with fields in [17].", "Indeed, $\mathcal {X}$ is a point, and all compact-type sectors are narrow.", "In this case, Proposition REF (1) implies that ${R}^{\mathrm {cpt}}_{g, \vec{\varsigma }}(\mathfrak {P}^\circ , \beta ) = {R}_{g, \vec{\varsigma }}(\mathfrak {P}^\circ , \beta )$ .", "The perfect obstruction theories in this paper are constructed slightly differently from those in [17] or [25], in that we construct them relative to a moduli space of twisted curves instead of a moduli space of $G$ -spin curves.", "These constructions are related via a base-change of obstruction theories as in [26], and in particular give rise to the same virtual class.", "Given a superpotential $W$ with proper critical locus, the cosection constructed in Section REF is easily seen to agree with the one in [17].", "Therefore, $[\mathring{{R}}^\mathrm {cpt}]^\mathrm {vir}$ is the FJRW virtual class, and log GLSM recovers FJRW theory in the narrow case."
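To make Definition 4.2 concrete in the simplest case of Remark 4.3 (an illustrative unpacking, using only the notation above): for $r$ -spin theory one has $N = 1$ , $c_1 = 1$ , $\Gamma = \mathbb {C}^*_R$ , and $\zeta (\lambda ) = \lambda ^r$ , so a $\Gamma $ -structure recovers the classical notion of an $r$ -spin structure.

```latex
% r-spin case (illustrative): a Gamma-structure on a twisted curve C,
% i.e. a lift of omega^log_{C/S} : C -> BC*_omega through
% BGamma -> BC*_omega, amounts to a line bundle L on C together with
% an isomorphism
L^{\otimes r} \;\cong\; \omega^{\log}_{\mathcal{C}},
% and a Gamma-structure with fields additionally records a section
% x \in H^0(\mathcal{C}, L).
```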
], [ "Hybrid models", "The hybrid GLSM considered in the literature [27], [28], [33] fit neatly into our set-up, and they form a generalization of the examples of the previous sections.", "In this paper though, we restrict ourselves to the case of compact type insertions, and to the $\\infty $ -stability in order to include non-GIT quotients.", "The input data of a hybrid GLSM is the following: Let $G \\subset \\mathbb {G}_{m}^{K + N}$ be a sub-torus, and $\\theta \\colon G \\rightarrow \\mathbb {C}^*$ be a character such that the stable locus $\\mathbb {C}^{K, s}$ and the semi-stable locus $\\mathbb {C}^{K, ss}$ for the $G$ -action on $\\mathbb {C}^K = \\mathbb {C}^K \\times \\lbrace 0\\rbrace $ agree, and that $\\mathbb {C}^{K+N, ss} = \\mathbb {C}^{K,ss}\\times \\mathbb {C}^{N}$ .", "Then $[\\mathbb {C}^{K, ss} \\times \\mathbb {C}^N / G]$ is the total space of a vector bundle $\\mathbf {E}^\\vee $ on a Deligne–Mumford stack $\\mathcal {X}= [\\mathbb {C}^{K, ss} / G]$ .", "Furthermore, assume that there is a one-dimensional subtorus $\\mathbb {C}^*_R = \\lbrace (1, \\cdots , 1, \\lambda ^{c_1}, \\cdots , \\lambda ^{c_N})\\rbrace \\subset \\mathbb {G}_{m}^{K + N}$ such that $c_i > 0$ for all $i$ , and $G \\cap \\mathbb {C}^*_R \\cong \\mathbb {Z}/r\\mathbb {Z}$ .", "Let $\\Gamma = G\\cdot \\mathbb {C}^*_R$ , and define $\\zeta \\colon \\Gamma \\rightarrow \\mathbb {C}^*_\\omega $ via $G \\mapsto 1$ and $(\\lambda ^{c_1}, \\cdots , \\lambda ^{c_N}) \\mapsto \\lambda ^r$ .", "Given this set-up, the moduli space of $\\infty $ -stable LG quasi-maps [27] is the same as the moduli space of $R$ -maps to the target $[\\mathbb {C}^{K, ss} \\times \\mathbb {C}^N/\\Gamma ] \\rightarrow \\mathbf {BC}^*_\\omega $ .", "Analogously to the previous section, we have the following: Lemma 4.5 There is hybrid target data (as in Section REF ) such that there is a commutative diagram ${[\\mathbb {C}^{K, ss} \\times \\mathbb {C}^N/\\Gamma ] [r]^-{\\sim } [dr] & \\mathfrak {P}^\\circ 
[d] \\\\& \\mathbf {BC}^*_\\omega .", "}$ Choose a splitting $\\mathbb {G}_{m}^{K + N} \\cong T \\times \\mathbb {C}^*_R$ into tori.", "Let $H$ be the projection of $G$ to $T$ .", "Then there is an isomorphism $\\Gamma \\cong H \\times \\mathbb {C}^*_R$ defined by the projections, and the homomorphism $\\zeta \\colon \\Gamma \\rightarrow \\mathbb {C}^*_\\omega $ becomes of the form $(\\lambda , h) \\mapsto \\lambda ^r \\chi (h)$ for the character $\\chi := \\zeta |_{H} \\colon H \\rightarrow \\mathbb {C}^*_\\omega $ .", "Set $\\mathcal {X}= [\\mathbb {C}^{K, ss}/H]$ and let $\\mathbf {L}$ be the line bundle induced by $\\chi $ .", "Then $[\\mathbb {C}^{K, ss} \\times \\mathbb {C}^N / H]$ is a rank $N$ vector bundle over $\\mathcal {X}$ with the splitting $\\mathbf {E}= \\oplus _j \\mathbf {E}^{\\vee }_{c_j}$ according to the weights $c_j$ of the $\\mathbb {C}^*_R$ -action.", "Consider $\\mathfrak {X}:= [\\mathbb {C}^{K, ss}/ \\Gamma ] \\cong \\mathbf {BC}^*_R \\times \\mathcal {X}\\rightarrow \\mathbf {BC}^*_\\omega \\times \\mathcal {X}$ induced by the line bundle $\\mathcal {L}_R^{\\otimes r} \\boxtimes \\mathbf {L}$ and the identity on the second factor.", "Here, $\\mathcal {L}_R$ is the universal line bundle on $\\mathbf {BC}^*_R$ .", "The universal spin structure $\\mathcal {L}_{\\mathfrak {X}}$ is the pull-back of $\\mathcal {L}_R$ .", "We then have $\\mathfrak {P}^{\\circ } \\cong [\\operatorname{Vb}(\\oplus _i \\mathbf {E}^{\\vee }_i)/\\mathbb {C}^*_R] \\rightarrow \\mathbf {BC}^*_\\omega $ which is the same as $[\\mathbb {C}^{K, ss} \\times \\mathbb {C}^N / \\Gamma ] \\rightarrow \\mathbf {BC}^*_\\omega $ .", "It is a straightforward verification that the hybrid GLSM virtual cycles constructed in our paper agree with those constructed in [28], [33].", "Indeed, the absolute perfect obstruction theory and cosection for $\\mathring{{R}}^{\\mathrm {cpt}}$ constructed in this paper agree with the ones in the literature (to see this, we again need the 
base-change lemma [26]).", "We leave the comparison to [27] for future work." ], [ "Properties of the stack of stable logarithmic $R$ -maps", "In this section, we establish Theorem REF ." ], [ "The representability", "For convenience, we prove the algebraicity of the stack ${R}(\mathfrak {P})$ of all log $R$ -maps with all possible discrete data, since the discrete data specify open and closed components, and the stability is an open condition.", "Consider the stack of underlying $R$ -maps $\mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega )$ which associates to any scheme $\underline{S}$ the category of commutative diagrams ${\underline{\mathcal {C}}[rd]_{\omega ^{\log }_{\underline{C}/\underline{S}}} [r]^{\underline{f}} & \underline{\mathfrak {P}} [d] \\& \mathbf {BC}^*_\omega }$ where $\underline{\mathcal {C}}\rightarrow \underline{S}$ is a family of twisted curves.", "As proved in [1], [21], [35], the tautological morphism ${R}(\mathfrak {P}, \beta ) \rightarrow \mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega )$ is represented by log algebraic spaces, see also [25].", "To show that ${R}(\mathfrak {P}, \beta )$ is algebraic, it remains to prove the algebraicity of $\mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega )$ .", "Now consider the tautological morphism $\mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega ) \rightarrow \mathfrak {M}^{\mathrm {tw}}$ where $\mathfrak {M}^{\mathrm {tw}}$ is the stack of twisted pre-stable curves.", "For any morphism $\underline{S} \rightarrow \mathfrak {M}^{\mathrm {tw}}$ , the corresponding pre-stable curve $\underline{\mathcal {C}} \rightarrow \underline{S}$ defines a fiber product $\underline{\mathfrak {P}}\times _{\mathbf {BC}^*_\omega }\underline{\mathcal {C}}$ .", "For any $\underline{T} \rightarrow \underline{S}$ , the fiber product $\mathfrak {S}_{\underline{S}}(\underline{T}) := \underline{T}\times 
_{\\mathfrak {M}(\\mathcal {X})}\\mathfrak {S}(\\underline{\\mathfrak {P}}/\\mathbf {BC}^*_\\omega )(\\underline{S})$ parameterizes sections of the projection $\\underline{\\mathfrak {P}}\\times _{\\mathbf {BC}^*_\\omega }\\underline{\\mathcal {C}}_{\\underline{T}} \\rightarrow \\underline{\\mathcal {C}}_{\\underline{T}} := \\underline{\\mathcal {C}}\\times _{\\underline{S}}\\underline{T}$ .", "Note that the composition $\\underline{\\mathfrak {P}}\\times _{\\mathbf {BC}^*_\\omega }\\underline{\\mathcal {C}} \\rightarrow \\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ is proper and of Deligne–Mumford type.", "Since being a section is an open condition, the stack $\\mathfrak {S}_{\\underline{S}}$ is an open substack of the stack parameterizing pre-stable maps to the family of targets $\\underline{\\mathfrak {P}}\\times _{\\mathbf {BC}^*_\\omega }\\underline{\\mathcal {C}} \\rightarrow \\underline{S}$ , which is algebraic by the algebraicity of Hom-stacks in [40].", "Hence, $\\mathfrak {S}_{\\underline{S}}$ is algebraic over $\\underline{S}$ .", "This proves the algebraicity of $\\mathfrak {S}(\\underline{\\mathfrak {P}}/\\mathbf {BC}^*_\\omega )$ ." 
], [ "Finiteness of automorphisms", "We now verify that ${R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )$ is of Deligne–Mumford type.", "Let $f\colon \mathcal {C}\rightarrow \mathfrak {P}$ be a pre-stable $R$ -map.", "An automorphism of $f$ over $\underline{S}$ is an automorphism of the log curve $\mathcal {C}\rightarrow S$ over $\underline{S}$ which fixes $f$ .", "Denote by $\operatorname{Aut}(f/S)$ the sheaf of automorphism groups of $f$ over $\underline{S}$ .", "Since the underlying stack $\underline{{R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )}$ parameterizes minimal objects in Definition REF , it suffices to consider the following: Proposition 5.1 Assume $f$ as above is minimal and stable, and that $\underline{S}$ is a geometric point.", "Then $\operatorname{Aut}(f/S)$ is a finite group.", "By [21] and [35], it suffices to show that the automorphism groups of the underlying objects are finite, see also [25].", "By abuse of notation, we leave out the underlines, and assume all stacks and morphisms are equipped with the trivial logarithmic structures.", "Since the dual graph of $\mathcal {C}$ has finitely many automorphisms, it suffices to consider the case that $\mathcal {C}$ is irreducible.", "After possibly normalizing and marking the preimages of the nodes, we may further assume that $\mathcal {C}$ is smooth.", "Suppose $f$ has infinitely many automorphisms.", "Then either $\mathcal {C}$ is smooth and rational with fewer than three markings, or $\mathcal {C}$ is an unmarked genus-one curve.", "In both cases, the morphism $g := \mathfrak {t}\circ f\colon \mathcal {C}\rightarrow \mathcal {X}$ contracts the curve to a point $x \in \mathcal {X}$ .", "We first consider the cases where $\mathcal {C}$ is rational with two markings, or of genus one without any markings.", "In both cases, we have $\omega ^{\log }_{\mathcal {C}/S} \cong \mathcal {O}_{\mathcal {C}}$ .", "Thus the morphism 
$\\mathcal {C}\\rightarrow \\mathbf {BC}^*_\\omega $ induced by $\\omega ^{\\log }_{\\mathcal {C}/S}$ factors through the universal quotient $\\operatorname{Spec}\\mathbf {k}\\rightarrow \\mathbf {BC}^*_\\omega $ .", "We obtain a commutative diagram ${\\mathcal {C}@/_3ex/[rd] [r]_{f_\\mathbf {k}} @/^3ex/[rr]^{f} & \\mathfrak {P}_\\mathbf {k}[r] [d] & \\mathfrak {P}[d] \\\\& \\operatorname{Spec}\\mathbf {k}[r] & \\mathbf {BC}^*_\\omega }$ where the square is cartesian.", "Since the automorphism group of $f$ is infinite, the automorphism group of $f_\\mathbf {k}$ is infinite as well.", "Thus $f_\\mathbf {k}$ contracts $\\mathcal {C}$ to a point of the Deligne–Mumford stack $\\mathfrak {P}_\\mathbf {k}$ .", "Then we have $\\deg \\big (f_\\mathbf {k}^*\\mathcal {O}(\\infty _{\\mathfrak {P}_\\mathbf {k}})\\big ) = \\deg \\big ( f^*\\mathcal {O}(\\infty _{\\mathfrak {P}})\\big ) = 0$ which contradicts the stability of $f$ as in (REF ).", "Now assume that $\\mathcal {C}$ is rational with at most one marking.", "Suppose there is no point $q \\in \\mathcal {C}$ such that $f(q) \\in \\mathbf {0}_\\mathfrak {P}$ .", "Let $f_{\\mathcal {X}}\\colon \\mathcal {C}\\rightarrow \\infty _{\\mathcal {X}}$ be the composition $\\mathcal {C}\\rightarrow \\mathfrak {P}\\setminus \\mathbf {0}_\\mathfrak {P}\\rightarrow \\infty _{\\mathfrak {P}} \\rightarrow \\infty _{\\mathcal {X}}$ where $\\mathfrak {P}\\setminus \\mathbf {0}_\\mathfrak {P}\\rightarrow \\infty _{\\mathfrak {P}}$ is the projection from $\\mathbf {0}_\\mathfrak {P}$ to $\\infty _\\mathfrak {P}$ , see Proposition REF .", "Since automorphisms of $f$ fix $f_{\\mathcal {X}}$ , the map $f_{\\mathcal {X}}$ contracts $\\mathcal {C}$ to a point of $\\infty _{\\mathcal {X}}$ , hence $\\deg \\big (f_{\\mathcal {X}}^*\\mathcal {O}_{\\infty _{\\mathfrak {P}^{\\prime }}}(\\frac{r}{d})\\big ) = 0$ .", "Proposition REF immediately leads to a contradiction to the stability condition (REF ).", "Thus there must be a point $q \\in \\mathcal 
{C}$ such that $f(q) \in \mathbf {0}_\mathfrak {P}$ .", "On the other hand, since $\deg \omega ^{\log }_{\mathcal {C}/S} < 0$ and $\deg g^*\mathcal {H}= 0$ , by the stability condition (REF ) we must have $\deg \big ( f^* \mathcal {O}(\infty _{\mathfrak {P}})\big ) > 0$ .", "Thus $\mathcal {C}$ intersects $\infty _{\mathfrak {P}}$ properly at its unique marking, denoted by $\sigma $ , as the morphism $f$ comes from a log map.", "Clearly, $q \ne \sigma $ .", "Consider the $\mathbb {G}_m$ -invariant open subset $U = \mathcal {C}\setminus \lbrace q\rbrace $ .", "Note that $\omega _U^{\log }$ is $\mathbb {G}_m$ -equivariantly trivial.", "We thus arrive at the same diagram (REF ) with $\mathcal {C}$ replaced by $U$ .", "The infinite automorphism group implies that $f_\mathbf {k}|_{U}$ is constant.", "On the other hand, the image of $U$ must intersect $\infty _{\mathfrak {P}_\mathbf {k}}$ properly.", "This is impossible." ], [ "Boundedness", "We next show that the stack ${R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )$ is of finite type.", "Consider the following composition ${R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta ) \rightarrow \mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega ) \rightarrow \mathfrak {M}_{g,n}$ where $\mathfrak {M}_{g,n}$ is the stack of genus $g$ , $n$ -marked pre-stable curves, the first arrow is obtained by removing log structures, and the second arrow is obtained by taking coarse source curves.", "We divide the proof into two steps." 
], [ "The composition (REF )", "Let $T \rightarrow \mathfrak {M}_{g,n}$ be a morphism from a finite type scheme $T$ , and $C \rightarrow T$ be the universal curve.", "Since the question is local on $\mathfrak {M}_{g,n}$ , it suffices to prove that ${R}_T := {R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )\times _{\mathfrak {M}_{g,n}} T \rightarrow \mathfrak {S}_{T} := \mathfrak {S}(\underline{\mathfrak {P}}/\mathbf {BC}^*_\omega )\times _{\mathfrak {M}_{g,n}} T \rightarrow T$ is of finite type.", "For any object $(f\colon \mathcal {C}_S \rightarrow \underline{\mathfrak {P}}) \in \mathfrak {S}_T(S)$ , let $C_T$ be the pull-back of $C \rightarrow T$ via $S \rightarrow T$ .", "Then $\mathcal {C}_S \rightarrow C_T$ is the coarse morphism.", "Note that $\omega ^{\log }_{C_T/T}$ pulls back to $\omega ^{\log }_{\mathcal {C}_S/S}$ .", "We thus obtain a commutative diagram of solid arrows with the unique square cartesian: ${&& \underline{\mathfrak {P}}_{T} [rr] [d] && \underline{\mathfrak {P}} [d] \\\mathcal {C}_S [rr] @{-->}[rru]^{\tilde{f}} @/_1pc/[rrrr]_{\omega ^{\log }_{\mathcal {C}_S/S}} && C [rr]^{\omega ^{\log }_{C/T}} && \mathbf {BC}^*_\omega }$ It follows that $f$ factors through a unique dashed arrow $\tilde{f}$ making the above diagram commutative.", "Note that $\underline{\mathfrak {P}}_T \rightarrow T$ is a family of proper Deligne–Mumford stacks with projective coarse moduli spaces over $T$ .", "Let $\tilde{\beta }$ be the curve class of the fiber of $\underline{\mathfrak {P}}_T \rightarrow T$ corresponding to objects in ${R}_T$ .", "Note that $\tilde{\beta }$ is uniquely determined by the curve class $\beta $ in $\mathcal {X}$ and the contact orders.", "Thus, the morphism ${R}_T \rightarrow \mathfrak {S}_T$ factors through the open substack $\mathfrak {S}_T(\tilde{\beta }) \subset \mathfrak {S}_T$ parameterizing the induced maps with curve class $\tilde{\beta }$ .", "First, note that the morphism 
${R}_T \\rightarrow \\mathfrak {S}_T(\\tilde{\\beta })$ is of finite type.", "Indeed, using the same proof as in [25], one shows that the morphism ${R}_T \\rightarrow \\mathfrak {S}_T(\\tilde{\\beta })$ is combinatorially finite ([25]), hence is of finite type by [25].", "Now let ${M}_{g,n}(\\underline{\\mathfrak {P}}_T/T,\\tilde{\\beta })$ be the stack of genus $g$ , $n$ -marked stable maps to the family of targets $\\underline{\\mathfrak {P}}_T/T$ with curve class $\\tilde{\\beta }$ .", "Then $\\mathfrak {S}_T(\\tilde{\\beta })$ is identified with the locally closed sub-stack of ${M}_{g,n}(\\underline{\\mathfrak {P}}_T/T,\\tilde{\\beta })$ which for any $T$ -scheme $S$ associates the category of stable maps $\\tilde{f}\\colon \\mathcal {C}_S \\rightarrow \\underline{\\mathfrak {P}}_T$ over $S$ such that the induced map $C_S \\rightarrow C$ from the coarse curve $C_S$ of $\\mathcal {C}_S$ to $C \\rightarrow T$ is stable, and is compatible with the marked points.", "Since ${M}_{g,n}(\\underline{\\mathfrak {P}}_T/T,\\tilde{\\beta })$ is of finite type over $T$ by [7], $\\mathfrak {S}_T(\\tilde{\\beta })$ is of finite type." 
], [ "The image of (REF )", "It remains to show that the image of (REF ) is contained in a finite type sub-stack of $\mathfrak {M}_{g,n}$ .", "For this purpose, it suffices to bound the number of unstable components of the source curves in ${R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )$ .", "Let $f\colon \mathcal {C}\rightarrow \mathfrak {P}$ be an $R$ -map corresponding to a geometric log point $S \rightarrow {R}_{g, \vec{\varsigma }}(\mathfrak {P}, \beta )$ .", "Observe that the number $d_{\beta }:= \deg \omega ^{\log }_{\mathcal {C}/S}\otimes {f}^*\mathcal {O}(\tilde{r}\infty _{\mathfrak {P}})$ is a constant depending only on the genus $g$ , the orbifold structure at the markings, and the contact orders.", "Let $\mathcal {Z}\subset \mathcal {C}$ be an irreducible component.", "Denote by $d_{\beta ,\mathcal {Z}} := \deg \big ( \omega ^{\log }_{\mathcal {C}/S}\otimes {f}^*\mathcal {O}(\tilde{r}\infty _{\mathfrak {P}})\big )|_{\mathcal {Z}}.$ Let $g := \mathfrak {t}\circ f$ be the pre-stable map underlying $f$ .", "An irreducible component $\mathcal {Z}\subset \mathcal {C}$ is called $g$ -stable if $(\deg g^*\mathcal {H})|_{\mathcal {Z}} > 0$ or $\mathcal {Z}$ is a stable component of the curve $\mathcal {C}$ .", "Otherwise, $\mathcal {Z}$ is called $g$ -unstable.", "Suppose $\mathcal {Z}$ is $g$ -unstable.", "Then by the stability condition (REF ) and $(\deg g^*\mathcal {H})|_{\mathcal {Z}} = 0$ , we have $d_{\beta ,\mathcal {Z}} \ge \deg \big ( (\omega ^{\log }_{\mathcal {C}/S})^{1 + \delta }\otimes {f}^*\mathcal {O}(\tilde{r}\infty _{\mathfrak {P}})\big )|_{\mathcal {Z}} > 0.$ Note that $\mathfrak {P}_\mathbf {k}$ is a proper Deligne–Mumford stack, and the stack of cyclotomic gerbes in $\mathfrak {P}_\mathbf {k}$ is of finite type, see [5].", "Thus there exists a positive integer $\lambda $ such that if $\mathfrak {P}_\mathbf {k}$ contains a cyclotomic gerbe banded by $\mu _k$ , then 
$k | \\lambda $ .", "Since $\\underline{f}$ factors through a representable morphism $\\tilde{f}$ in (REF ), we have $d_{\\beta ,\\mathcal {Z}} \\ge \\frac{1}{\\lambda }$ .", "Now we turn to considering $g$ -stable components.", "Since the genus is fixed, and the orbifold structure of $\\mathcal {Z}$ is bounded, the number of $g$ -stable components is bounded.", "Let $\\mathcal {Z}$ be an $g$ -stable component.", "We have the following two possibilities.", "Suppose $f(\\mathcal {Z}) \\lnot \\subset \\infty _{\\mathfrak {P}}$ , hence $\\deg {f}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\big )|_{\\mathcal {Z}} \\ge 0$ .", "Then we have $d_{\\beta ,\\mathcal {Z}} \\ge -1$ where the equality holds only if $\\mathcal {Z}$ is a rational tail.", "Now assume that $f(\\mathcal {Z}) \\subset \\infty _{\\mathfrak {P}}$ .", "By Proposition REF , we have $d_{\\beta ,\\mathcal {Z}} = \\deg g^*\\mathbf {L}\\otimes (\\underline{f}_{\\mathcal {X}})^*\\mathcal {O}_{\\infty _{\\mathcal {X}}}(\\frac{r}{d}).$ Since $\\deg (\\underline{f}_{\\mathcal {X}})^*\\mathcal {O}_{\\infty _{\\mathcal {X}}}(\\frac{r}{d}) \\ge 0$ and $\\deg g^*\\mathbf {L}$ is bounded below by some number only depending on $\\mathbf {L}$ and the curve class $\\beta $ , we conclude that $d_{\\beta ,\\mathcal {Z}}$ is also bounded below by some rational number independent the choices of $\\mathcal {Z}$ .", "Finally, note that $d_{\\beta } = \\sum _{\\mathcal {Z}\\colon \\text{ $g$-stable}} d_{\\beta ,\\mathcal {Z}} + \\sum _{\\mathcal {Z}\\colon \\text{ $g$-unstable}} d_{\\beta ,\\mathcal {Z}}.$ The above discussion implies that the first summation can be bounded below by a number only depending on the discrete data $\\beta $ , and each term in the second summation is a positive number larger than $\\frac{1}{\\lambda }$ .", "We thus conclude that the number of irreducible components of the source curve $\\mathcal {C}$ is bounded.", "This finishes the proof of the boundedness." 
], [ "The set-up of the weak valuative criterion", "Let $R$ be a discrete valuation ring (DVR), $K$ be its quotient field, $\mathfrak {m}\subset R$ be the maximal ideal, and $k = R/\mathfrak {m}$ the residue field.", "Denote by $\underline{S} = \operatorname{Spec}R$ , $\underline{\eta } = \operatorname{Spec}K$ and $\underline{s} = \operatorname{Spec}k$ .", "Our next goal is to prove the weak valuative criterion for stable log $R$ -maps.", "Theorem 5.2 Let $f_{\eta }\colon \mathcal {C}_{\eta } \rightarrow \mathfrak {P}$ be a minimal log $R$ -map over a log $K$ -point $\eta $ .", "Possibly replacing $R$ by a finite extension of DVRs, there is a minimal log $R$ -map $f_S \colon \mathcal {C}\rightarrow \mathfrak {P}$ over $S$ extending $f_{\eta }$ over $\underline{\eta }$ .", "Furthermore, the extension $f_S$ is unique up to a unique isomorphism.", "We will break the proof into several steps.", "Since the stability conditions in Section REF are constraints only on the level of underlying structures, by the relative properness of log maps over underlying stable maps [21], [1], [35], see also [25], it suffices to prove the existence and uniqueness of an underlying stable $R$ -map $\underline{f}\colon \underline{\mathcal {C}} \rightarrow \mathfrak {P}$ extending $f_{\eta }$ over $S$ , possibly after a finite extension of the base.", "Since the focus is now the underlying structure, we will leave out the underlines to simplify notation, and assume all stacks are equipped with the trivial logarithmic structure for the rest of this section.", "Normalizing along the nodes of $\mathcal {C}_{\eta }$ , possibly taking a further base change, and marking the preimages of the nodes, we obtain a disjoint union of smooth curves $\mathcal {C}_{\eta }^{n}$ .", "Observe that $\mathcal {C}_{\eta }^n \rightarrow \mathbf {BC}^*_\omega $ induced by $\omega ^{\log }_{\mathcal {C}_{\eta }^n}$ factors through the corresponding $\mathcal {C}_{\eta 
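As background for the finite extensions of DVRs allowed in Theorem 5.2 (a standard fact from stable reduction, recorded here for orientation and not a claim about the specific argument below), a typical such extension adjoins a root of a uniformizer:

```latex
% Standard shape of a finite extension of DVRs: adjoin an n-th root
% of a uniformizer t \in \mathfrak{m} \subset R.
R' \;=\; R[u]/(u^n - t), \qquad \mathfrak{m}' = (u), \qquad
R'/\mathfrak{m}' \;\cong\; R/\mathfrak{m} \;=\; k.
% Since u^n - t is an Eisenstein polynomial, R' is again a DVR with
% uniformizer u, and \operatorname{Spec} R' \to \operatorname{Spec} R
% is finite of degree n, totally ramified over the closed point;
% limits of families over \eta may exist only after such a base change.
```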
} \\rightarrow \\mathbf {BC}^*_\\omega $ before taking normalization.", "Thus, we may assume that $\\mathcal {C}_{\\eta }$ is smooth and irreducible.", "It is important to notice that every isolated intersection of $\\mathcal {C}_{\\eta }$ with $\\infty _{\\mathfrak {P}}$ via $f$ is marked.", "This will be crucial in the proof below." ], [ "The separatedness", "We first verify the uniqueness in Theorem REF .", "The strategy is similar to [25] but in a more complicated situation." ], [ "Reduction to the comparison of coarse curves", "Let $f_i\\colon \\mathcal {C}_i \\rightarrow \\mathfrak {P}$ be a stable underlying R-map over $S$ extending $f_{\\eta }$ for $i=1,2$ .", "Let $\\mathcal {C}_i \\rightarrow C_i$ and $\\mathcal {C}_{\\eta } \\rightarrow C_{\\eta }$ be the corresponding coarse moduli.", "By (REF ), the morphism $f_i$ factors through a twisted stable map $\\mathcal {C}_i \\rightarrow \\mathcal {P}_i := \\underline{\\mathfrak {P}}\\times _{\\mathbf {BC}^*_\\omega }C_i$ , where $\\mathfrak {P}_i$ is a proper Deligne–Mumford stack over $S$ .", "By the properness of [7], to show that $f_1$ and $f_2$ are canonically isomorphic, it suffices to show that the two coarse curves $C_1$ and $C_2$ extending $C_{\\eta }$ are canonically isomorphic." 
], [ "Merging two maps", "Let $C_3$ be a family of pre-stable curves over $\operatorname{Spec}R$ extending $C_{\eta }$ with dominant morphisms $C_3 \rightarrow C_i$ for $i=1,2$ .", "By further contracting such components, we may assume that $C_3$ has no rational components with at most two special points that are contracted in both $C_1$ and $C_2$ .", "Let $\mathcal {C}_3 \rightarrow \mathcal {C}_1\times \mathcal {C}_2\times C_3$ be the family of twisted stable maps over $\operatorname{Spec}R$ extending the obvious one $\mathcal {C}_\eta \rightarrow \mathcal {C}_1\times \mathcal {C}_2\times C_3$ .", "Observe that the composition $\mathcal {C}_3 \rightarrow \mathcal {C}_1\times \mathcal {C}_2\times C_3 \rightarrow C_3$ is the coarse moduli morphism.", "Indeed, if there is a component of $\mathcal {C}_3$ contracted in $C_3$ , then it will be contracted in both $\mathcal {C}_1$ and $\mathcal {C}_2$ as well.", "Set $U_i^{(0)} = C_3$ for $i=1,2$ .", "Let $U_i^{(k+1)}$ be obtained by removing from $U_i^{(k)}$ the rational components that have precisely one special point in $U_i^{(k)}$ and are contracted in $C_i$ .", "Note that these removed rational components need not be proper, and their closure may have more than one special point in $C_3$ .", "We observe that this process must stop after finitely many steps.", "Denote by $U_i \subset C_3$ the resulting open subset.", "Lemma 5.3 $U_1 \cup U_2 = C_3$ .", "Suppose that $C_3 \setminus (U_1\cup U_2) \ne \emptyset $ , and let $z$ be a point in it.", "Then there is a tree of rational curves in $C_3$ attached to $z$ and contracted in both $C_1$ and $C_2$ .", "This contradicts the assumption on $C_3$ .", "We then construct an underlying $R$ -map $f_3\colon \mathcal {C}_3 \rightarrow \mathfrak {P}$ by merging $f_1$ and $f_2$ as follows.", "Denote by $\mathcal {U}_i := \mathcal {C}_3\times _{C_3}U_i$ for $i=1,2$ .", "Note that $U_{i} \rightarrow C_i$ , hence $\mathcal {U}_i \rightarrow \mathcal {C}_i$ , contracts only rational 
components with precisely two special points in $U_i$ .", "In particular, we have $\omega ^{\log }_{\mathcal {C}_3/S}|_{\mathcal {U}_i} = \omega ^{\log }_{\mathcal {C}_i/S}|_{\mathcal {U}_i}$ .", "This leads to the commutative diagram ${&&&& \mathfrak {P}[d] \\\mathcal {U}_i @/^1.5pc/[urrrr]^{f_3|_{\mathcal {U}_i}} [rr] @/_1pc/[rrrr]_{\omega ^{\log }_{\mathcal {C}_3/S}|_{\mathcal {U}_i}} && \mathcal {C}_i @/^.5pc/[rru]^{f_i} [rr]^{\omega ^{\log }_{\mathcal {C}_i/S}} && \mathbf {BC}^*_\omega .", "}$ where $f_{3}|_{\mathcal {U}_i}$ is given by the obvious composition.", "We then observe that the two morphisms $f_3|_{\mathcal {U}_1}$ and $f_3|_{\mathcal {U}_2}$ coincide along $\mathcal {U}_1\cap \mathcal {U}_2$ .", "Indeed, both morphisms $f_3|_{\mathcal {U}_i}$ restrict to $f_{\eta }$ over the open dense set $\mathcal {C}_{\eta } \subset \mathcal {U}_i$ , and $\mathfrak {P}\rightarrow \mathbf {BC}^*_\omega $ is proper of Deligne–Mumford type, in particular separated, so two morphisms agreeing on a dense open subset must coincide.", "Thus, $f_3|_{\mathcal {U}_1}$ and $f_3|_{\mathcal {U}_2}$ can be glued to an underlying $R$ -map $f_3\colon \mathcal {C}_3 \rightarrow \mathfrak {P}$ over $S$ ." 
], [ "Comparing the underlying $R$ -maps", "Denote by $\\overline{\\mathcal {U}_{i,s}}$ the closure of the closed fiber $\\mathcal {U}_{i,s}$ in $\\mathcal {C}_3$ .", "Lemma 5.4 Notations as above, we have $\\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_{3}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\overline{\\mathcal {U}_{i,s}}}\\ge \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_i/S} \\otimes f_{i}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {C}_{i,s}}$ We prove (REF ) by checking the following $\\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_3^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {Z}} \\\\\\ge \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_i/S} \\otimes f_i^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {Z}}$ for each irreducible component $\\mathcal {Z}\\subset \\overline{\\mathcal {U}_{i,s}}$ .", "Since $\\mathcal {U}_{i,s} \\rightarrow \\mathcal {C}_{i,s}$ is a dominant morphism contracting rational components with precisely two special points in $\\mathcal {U}_{i,s}$ , there is an effective divisor $D^{\\prime }$ of $\\mathcal {C}_{3,s}$ supported on $\\overline{\\mathcal {U}_{i}} \\setminus \\mathcal {U}_{i}$ such that $\\omega ^{\\log }_{\\mathcal {C}_3/S}|_{\\overline{\\mathcal {U}_{i,s}}} = \\omega ^{\\log }_{\\mathcal {C}_i/S}|_{\\overline{\\mathcal {U}_{i,s}}}(D^{\\prime }).$ Restricting to $\\mathcal {Z}$ , we obtain $\\deg \\omega ^{\\log }_{\\mathcal {C}_3/S}|_{\\mathcal {Z}} = \\deg \\omega ^{\\log }_{\\mathcal {C}_i/S}|_{\\mathcal {Z}} + \\deg D^{\\prime }|_{\\mathcal {Z}}.$ Suppose $\\mathcal {Z}$ is contracted in $\\mathcal {C}_i$ .", "By (REF ), we obtain $\\deg f_3^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} = \\deg f_i^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} = 0.$ Then (REF ) immediately follows from (REF ).", "Now assume $\\mathcal {Z}$ is mapped to an 
irreducible component $\\mathcal {Z}^{\\prime } \\subset \\mathcal {C}_i$ .", "Consider the case that $f_{3}(\\mathcal {Z}) \\subset \\infty _{\\mathfrak {P}}$ , hence $f_i(\\mathcal {Z}^{\\prime }) \\subset \\infty _{\\mathfrak {P}}$ by (REF ).", "By (REF ), we obtain the equality in (REF ).", "It remains to consider the case $f_{3}(\\mathcal {Z}) \\lnot \\subset \\infty _{\\mathfrak {P}}$ .", "Let $\\mathcal {L}_3$ and $\\mathcal {L}_i$ be the corresponding spin structures over $\\mathcal {C}_3$ and $\\mathcal {C}_i$ respectively, see Proposition REF .", "Note that $\\mathcal {L}_3|_{\\mathcal {U}_i} \\cong \\mathcal {L}_i|_{\\mathcal {U}_i}$ by (REF ).", "By (REF ) and Definition REF , there is an effective divisor $D$ supported on $\\overline{\\mathcal {U}_i} \\setminus \\mathcal {U}_i$ such that $r\\cdot D = D^{\\prime }$ and $\\mathcal {L}_{3}|_{\\mathcal {Z}} \\cong \\mathcal {L}_i|_{\\mathcal {Z}}(D|_{\\mathcal {Z}})$ .", "By (REF ), we have $\\deg f_3^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} - \\deg f_i^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}^{\\prime }} \\ge \\frac{1}{a}\\deg D|_{\\mathcal {Z}}.$ Combining this with (REF ), we obtain (REF ).", "Suppose $C_1 \\ne C_2$ .", "Then we have $U_i \\ne C_i$ for some $i$ , say $i=1$ .", "By construction each connected component of $C_3 \\setminus U_1$ is a tree of proper rational curves in $U_2$ with no marked point, hence $\\mathcal {T}:= (\\mathcal {C}_3 \\setminus \\mathcal {U}_1) \\subset \\mathcal {U}_2$ .", "By construction, the composition $\\mathcal {T}\\rightarrow \\mathcal {C}_3 \\rightarrow \\mathcal {C}_2$ is a closed immersion and $f_{3}|_{\\mathcal {T}} = f_{2}|_{\\mathcal {T}}$ .", "Since $\\deg \\omega ^{\\log }_{\\mathcal {C}_3/S}|_{\\mathcal {T}} < 0$ (unless $\\mathcal {T}= \\emptyset $ ), and $\\mathcal {T}$ is contracted to $\\mathcal {C}_1$ and hence maps to a point in $\\mathcal {X}$ , the stability of $f_2$ implies $\\deg \\left(\\omega ^{\\log 
}_{\\mathcal {C}_3/S} \\otimes f_3^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {T}}= \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_2/S} \\otimes f_2^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {T}} > 0.$ Using Lemma REF , We calculate $\\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_{3}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {C}_{3,s}} \\\\= \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_3^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\overline{\\mathcal {U}_{1, s}}} + \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_3^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right) \\big |_{\\mathcal {T}} \\\\\\ge \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_1/S} \\otimes f_1^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_{\\mathcal {C}_{1,s}} + \\deg \\left(\\omega ^{\\log }_{\\mathcal {C}_3/S} \\otimes f_3^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\right)\\big |_\\mathcal {T}.$ Since $\\deg f_{3,s}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}}) = \\deg f_{1,s}^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})$ is given by the sum of contact orders, we conclude that $\\mathcal {T}= \\mathcal {C}_3 \\setminus \\mathcal {U}_1 = \\emptyset $ .", "Observe that $C_3 = U_1 \\rightarrow C_1$ contracts proper rational components with precisely two special points.", "Let $Z \\subset C_3$ be such a component, and let $\\mathcal {Z}= Z \\times _{C_3}\\mathcal {C}_3$ .", "Since $f_3|_{\\mathcal {C}_3 = \\mathcal {U}_1}$ factors through $f_1$ , we have $\\deg f_3^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}} = 0.$ On the other hand, since $Z$ has two special points in $C_3$ and is contracted in $C_1$ , it is not contracted in $C_2$ .", "Denote by $\\mathcal {Z}^{\\prime } \\subset \\mathcal {C}_2$ the component dominating $Z \\subset C_2$ .", "Then $\\mathcal 
{Z}^{\\prime }$ has precisely two special points.", "Furthermore $f_2|_{\\mathcal {Z}^{\\prime }}$ and $f_3|_{\\mathcal {Z}}$ coincide away from the two special points.", "Using (REF ), we observe that $\\deg f_2^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}^{\\prime }} = 0$ , which contradicts the stability of $f_2$ .", "Thus $C_3 \\rightarrow C_1$ is an isomorphism.", "This finishes the proof of separatedness." ], [ "Rigidifying (pre-)stable reductions", "We start constructing the stable limit as in Theorem REF .", "Recall from Section REF that it suffices to construct an extension of the underlying structures where $\\mathcal {C}_{\\eta }$ is smooth and irreducible.", "Suppose we have an underlying R-map extending $f_{\\eta }$ : $f \\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ where $\\mathcal {C}\\rightarrow S$ is a pre-stable curve over $S$ .", "We modify $f$ to obtain a representable morphism to $\\mathfrak {P}$ as follows.", "Forming the relative coarse moduli [6], we obtain a diagram ${\\mathcal {C}[d]_\\pi [r]^f & \\mathfrak {P}[d] \\\\\\mathcal {C}^r [ur]^{f^r} [r] & \\mathbf {BC}^*_\\omega ,}$ in which the upper triangle is commutative, $f^r$ is representable, and $\\pi $ is proper and quasi-finite.", "Note that since $\\omega ^{\\log }_{C/S}|_{\\mathcal {C}^r} = \\omega ^{\\log }_{\\mathcal {C}^r/S}$ , the lower triangle is also commutative.", "Proposition 5.5 Notations as above, we have: $f^r$ is a representable underlying $R$ -map over $S$ extending $f_{\\eta }$ .", "If $f$ satisfies the positivity condition (REF ), then so does $f^r$ .", "Both parts follow easily from the above observations." 
], [ "Pre-stable reduction", "Next, we construct a (not necessarily stable) family (REF ) across the central fiber.", "We will show that such a family can be constructed by taking the stable map limit twice in a suitable way.", "It is worth mentioning that the method here is very different from the one in [25], in order to handle the general situation of this paper.", "In the following, $g_{\eta }$ and $\mathcal {L}_{\eta }$ denote the pre-stable map and the spin structure on $\mathcal {C}_{\eta }$ associated to $f_\eta $ ." ], [ "The first stable map limit", "Recall that $g_\eta $ is the pre-stable map underlying $f_\eta $ .", "Let $g_0\colon \mathcal {C}_0 \rightarrow \mathcal {X}$ be any pre-stable map extending $g_{\eta }$ ; its existence follows from [7].", "Possibly after a further base change, we construct the following commutative diagram: ${\mathcal {C}_0^{\prime \prime } [d] [rr]^{f_0} && \mathfrak {P}[d] \\\mathcal {C}_0^{\prime } [d] [rr]^{\mathcal {L}_0} && \mathfrak {X}[d] \\\mathcal {C}_0 [rr]^{(\omega ^{\log }_{\mathcal {C}_0/S},g_0) \ \ \ \ } && \mathbf {BC}^*_\omega \times \mathcal {X}}$ First, there is a unique stable map limit $\mathcal {C}_0^{\prime } \rightarrow \mathcal {C}_0\times _{\mathbf {BC}^*_\omega \times \mathcal {X}}\mathfrak {X}$ extending the one given by the spin structure $\mathcal {L}_{\eta }$ .", "This yields the spin structure $\mathcal {L}_0$ on $\mathcal {C}_0^{\prime }$ .", "Furthermore, the morphism $\mathcal {C}^{\prime }_0 \rightarrow \mathcal {C}_0$ is quasi-finite.", "We then take the unique stable map limit $h\colon \mathcal {C}_0^{\prime \prime } \rightarrow \mathcal {P}_{\mathcal {C}_0^{\prime }} := \mathfrak {P}\times _{\mathfrak {X}}\mathcal {C}_0^{\prime }$ extending the one given by $f_{\eta }$ .", "To see the difference between the above stable map limits and a pre-stable reduction, we observe: Lemma 5.6 Suppose we are given a commutative diagram 
${\\mathcal {C}[d] [rr]^{f} && \\mathfrak {P}[d] \\\\\\mathcal {C}^{\\prime } [rr]^{\\omega ^{\\log }_{\\mathcal {C}^{\\prime }/S}} && \\mathbf {BC}^*_\\omega }$ where $\\mathcal {C}^{\\prime }$ and $\\mathcal {C}$ are two pre-stable curves over $S$ such that $\\mathcal {C}\\rightarrow \\mathcal {C}^{\\prime }$ contracts only rational components with two special points.", "Then $f$ is a pre-stable reduction as in (REF ).", "The lemma follows from the fact that $\\omega ^{\\log }_{\\mathcal {C}^{\\prime }/S}|_{\\mathcal {C}} = \\omega ^{\\log }_{\\mathcal {C}/S}$ .", "Observe that $\\omega ^{\\log }_{\\mathcal {C}_0/S}|_{\\mathcal {C}^{\\prime }_0} = \\omega ^{\\log }_{\\mathcal {C}^{\\prime }_0/S}$ .", "If $\\mathcal {C}^{\\prime \\prime }_0 \\rightarrow \\mathcal {C}^{\\prime }_0$ contracts no rational tails, then it can only contract rational bridges.", "Thus we obtain a pre-stable reduction in this case by applying Lemma REF .", "Otherwise, we show that a pre-stable reduction can be achieved by repeating stable map limits one more time as follows." 
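Though not spelled out in the one-line proof of Lemma 5.6, the identity $\omega^{\log}_{\mathcal{C}'/S}|_{\mathcal{C}} = \omega^{\log}_{\mathcal{C}/S}$ rests on a standard degree count on the contracted components, which we record here for convenience (the notation $n_Z$ for the number of special points of a component $Z$ is ours):

```latex
% On a component Z of a fiber of C/S, the log canonical bundle has degree
%   deg( omega^{log}_{C/S} |_Z ) = 2 g(Z) - 2 + n_Z .
% A component contracted by C -> C' is rational with exactly two special points, so
\deg\big(\omega^{\log}_{\mathcal{C}/S}\big|_{Z}\big)
  \;=\; 2g(Z) - 2 + n_Z \;=\; 0 - 2 + 2 \;=\; 0 .
% A degree-0 line bundle on Z \cong \mathbb{P}^1 is trivial, which is the key point
% behind the identity used in the proof.
```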
], [ "The second stable map limit", "Set $\\mathcal {C}_1 = \\mathcal {C}^{\\prime \\prime }_0$ .", "We will construct the following commutative diagram: ${\\mathcal {C}_2 [d] [rrrr]^{f_1} &&&& \\mathfrak {P}[d] \\\\\\tilde{\\mathcal {C}}_1^{\\prime } [rr] && \\mathcal {C}_1^{\\prime } [d] [rr]^{\\mathcal {L}_1} && \\mathfrak {X}[d] \\\\&& \\mathcal {C}_1 [rr]^{(\\omega ^{\\log }_{\\mathcal {C}_1/S},g_1) \\ \\ \\ \\ } && \\mathbf {BC}^*_\\omega \\times \\mathcal {X}}$ First, $g_1$ is the composition of $\\mathcal {C}_1 \\rightarrow \\mathcal {C}_0$ with $g_0$ , and $\\mathcal {L}_1$ is the spin structure over $\\mathcal {C}^{\\prime }_1$ obtained by taking the stable map limit as in (REF ).", "Second, we construct a quasi-finite morphism of pre-stable curves $\\tilde{\\mathcal {C}}_1^{\\prime } \\rightarrow \\mathcal {C}_1^{\\prime }$ over $S$ such that over $\\eta $ it is the identity $\\mathcal {C}_{\\eta } \\rightarrow \\mathcal {C}_{\\eta }$ , and the identity $\\mathcal {L}_{\\eta } \\rightarrow \\mathcal {L}_{\\eta }$ extends to a morphism of line bundles $\\mathcal {L}_0|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\rightarrow \\mathcal {L}_1|_{\\tilde{\\mathcal {C}}^{\\prime }_1}$ whose $r$ -th power is the natural morphism $(\\omega _{\\mathcal {C}_0/S}^{\\log } \\otimes g_0^* \\mathbf {L}^\\vee )|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\rightarrow (\\omega _{\\mathcal {C}_1/S}^{\\log } \\otimes g_1^* \\mathbf {L}^\\vee )|_{\\tilde{\\mathcal {C}}^{\\prime }_1}.$ Let $\\@root r \\of {\\mathcal {C}_1^{\\prime }}$ be the $r$ th root stack of $(\\omega _{\\mathcal {C}_0/S}^{\\log } \\otimes g_0^* \\mathbf {L}^\\vee )^{\\vee }|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\otimes (\\omega _{\\mathcal {C}_1/S}^{\\log } \\otimes g_1^* \\mathbf {L}^\\vee )|_{\\tilde{\\mathcal {C}}^{\\prime }_1},$ and $\\@root r \\of {(\\mathcal {C}_1^{\\prime },s)}$ be the $r$ -th root stack of the section $s$ of the above line bundle given by (REF ).", "We form the fiber product 
$\\hat{\\mathcal {C}}^{\\prime }_1 := \\mathcal {C}^{\\prime }_1\\times _{\\@root r \\of {\\mathcal {C}_1^{\\prime }}}\\@root r \\of {(\\mathcal {C}_1^{\\prime },s)},$ where the morphism $\\mathcal {C}^{\\prime }_1 \\rightarrow \\@root r \\of {\\mathcal {C}_1^{\\prime }}$ is defined via $\\mathcal {L}_0^\\vee |_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\otimes \\mathcal {L}_1|_{\\tilde{\\mathcal {C}}^{\\prime }_1}$ .", "The identities $\\mathcal {L}_{\\eta } = \\mathcal {L}_0|_{\\mathcal {C}_{\\eta }} = \\mathcal {L}_{1}|_{\\mathcal {C}_{\\eta }}$ induce a stable map $\\mathcal {C}_{\\eta } \\rightarrow \\hat{\\mathcal {C}}^{\\prime }_1$ which, possibly after a finite base change of $S$ , extends to a quasi-finite stable map $\\tilde{\\mathcal {C}}^{\\prime }_1 \\rightarrow \\hat{\\mathcal {C}}^{\\prime }_1$ .", "Since $\\hat{\\mathcal {C}}^{\\prime }_{1} \\rightarrow \\mathcal {C}^{\\prime }_1$ is quasi-finite, the composition $\\tilde{\\mathcal {C}}^{\\prime }_1 \\rightarrow \\hat{\\mathcal {C}}^{\\prime }_1 \\rightarrow \\mathcal {C}^{\\prime }_1$ gives the desired quasi-finite morphism.", "Thus, $\\mathcal {L}_1$ pulls back to a spin structure on $\\tilde{\\mathcal {C}}^{\\prime }_1$ .", "Furthermore, the universal $r$ -th root of $\\@root r \\of {(\\mathcal {C}_1^{\\prime },s)}$ pulls back to a section of $(\\mathcal {L}_1 \\otimes \\mathcal {L}_0^\\vee )|_{\\tilde{\\mathcal {C}}^{\\prime }_1}$ as needed.", "Finally, we construct $f_1$ in the same way as the stable map limit in (REF ) but using the spin structure $\\mathcal {L}_1|_{\\tilde{\\mathcal {C}}^{\\prime }_1}$ .", "We will show: Proposition 5.7 The morphism $\\mathcal {C}_2 \\rightarrow \\tilde{\\mathcal {C}}^{\\prime }_1$ contracts no rational tails.", "Together with Lemma REF , we obtain a pre-stable reduction (REF )." 
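For orientation, the outcome of this step can be summarized in one place; here $s_{01}$ is our ad hoc name for the morphism of line bundles constructed above, and the $r$-th power identity is the one required in (REF ):

```latex
% The curve \tilde{C}'_1 carries the spin structure L_1 restricted from C'_1,
% together with a comparison morphism of the two spins,
s_{01} \colon \mathcal{L}_0\big|_{\tilde{\mathcal{C}}'_1}
        \longrightarrow \mathcal{L}_1\big|_{\tilde{\mathcal{C}}'_1},
% whose r-th tensor power is the natural morphism
s_{01}^{\otimes r} \colon
  \big(\omega^{\log}_{\mathcal{C}_0/S} \otimes g_0^*\mathbf{L}^{\vee}\big)\big|_{\tilde{\mathcal{C}}'_1}
  \longrightarrow
  \big(\omega^{\log}_{\mathcal{C}_1/S} \otimes g_1^*\mathbf{L}^{\vee}\big)\big|_{\tilde{\mathcal{C}}'_1}.
```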
], [ "The targets of the two limits", "Consider $ \\mathcal {P}_i := \\tilde{\\mathcal {C}}^{\\prime }_1\\times _{\\mathfrak {X}} \\mathfrak {P}$ for $i=0,1$ , where the arrow $\\tilde{\\mathcal {C}}^{\\prime }_1 \\rightarrow \\mathfrak {X}$ is induced by $\\mathcal {L}_i$ .", "The morphism (REF ) induces a birational map $ c\\colon \\mathcal {P}_0 \\dashrightarrow \\mathcal {P}_1 $ whose indeterminacy locus is precisely the infinity divisor $\\infty _{\\mathcal {P}_0} \\subset \\mathcal {P}_0$ over the degeneracy locus of (REF ).", "Its inverse $c^{-1}\\colon \\mathcal {P}_1 \\dashrightarrow \\mathcal {P}_0$ is given by the composition $\\mathcal {P}_1 = \\mathbb {P}^\\mathbf {w}\\left(\\bigoplus _{j > 0} (g_1^*(\\mathbf {E}_j^\\vee ) \\otimes \\mathcal {L}_1^{\\otimes j})|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\oplus \\mathcal {O}\\right) \\\\\\dashrightarrow \\mathbb {P}^\\mathbf {w}\\left(\\bigoplus _{j > 0} (g_1^*(\\mathbf {E}_j^\\vee ) \\otimes \\mathcal {L}_1^{\\otimes j})|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\oplus (\\mathcal {L}_1 \\otimes \\mathcal {L}_0^\\vee )^{\\otimes a}|_{\\tilde{\\mathcal {C}}^{\\prime }_1}\\right) \\\\\\cong \\mathbb {P}^\\mathbf {w}\\left(\\bigoplus _{j > 0} (g_0^*(\\mathbf {E}_j^\\vee ) \\otimes \\mathcal {L}_0^{\\otimes j})|_{\\tilde{\\mathcal {C}}^{\\prime }_1} \\oplus \\mathcal {O}\\right) = \\mathcal {P}_0,$ where the first map is multiplication of the last coordinate by the $a$ th power of the section of $(\\mathcal {L}_1 \\otimes \\mathcal {L}_0^\\vee )|_{\\tilde{\\mathcal {C}}^{\\prime }_1}$ given by (REF ).", "Therefore, the indeterminacy locus of $c^{-1}$ is the zero section $\\mathbf {0}_{\\mathcal {P}_1} \\subset \\mathcal {P}_1$ over the degeneracy locus of (REF ).", "We have arrived at the following commutative diagram ${&& \\mathcal {P}_0 @/_1pc/@{-->}[dd]_{c} @/^1pc/@{<--}[dd]^{c^{-1}} [rrd] && \\\\\\mathcal {C}_2 [rru]^{f_0} [rrd]_{f_1} &&&& \\tilde{\\mathcal {C}}^{\\prime }_1 \\\\&& \\mathcal {P}_1 [rru] &&}$ 
where by abuse of notation $f_0$ and $f_1$ are given by the corresponding arrows in (REF ) and (REF ).", "Indeed, $f_0\\colon \\mathcal {C}_2 \\rightarrow \\mathcal {P}_0$ is given by the composition $ \\mathcal {C}_2 \\rightarrow \\tilde{\\mathcal {C}}^{\\prime }_1 \\rightarrow \\mathcal {C}^{\\prime }_1 \\rightarrow \\mathcal {C}_1 \\rightarrow \\mathfrak {P},$ together with the projection $\\mathcal {C}_2 \\rightarrow \\tilde{\\mathcal {C}}^{\\prime }_1$ ." ], [ "Comparing the two limits along vertical rational tails", "A rational tail of $\\mathcal {C}_2$ over the closed fiber is called vertical if it is contracted in $\\tilde{\\mathcal {C}}^{\\prime }_1$ .", "Lemma 5.8 If $\\mathcal {Z}\\subset \\mathcal {C}_2$ is a vertical rational tail, then $f_0(\\mathcal {Z}) \\subset \\infty _{\\mathcal {P}_0}$ .", "Note that $f_0$ contracts any vertical rational tail, since it factors through $\\tilde{\\mathcal {C}}^{\\prime }_1$ .", "Suppose $f_0(\\mathcal {Z}) \\not\\subset \\infty _{\\mathcal {P}_0}$ .", "Then $c\\circ f_0$ is well-defined along $\\mathcal {Z}$ , hence $f_1|_\\mathcal {Z}= c\\circ f_0|_{\\mathcal {Z}}$ .", "This contradicts the stability of $f_1$ as a stable map limit.", "For $i=0,1$ , denote by $p_i\\colon \\mathcal {P}_i \\dashrightarrow \\infty _{\\mathcal {P}_i}$ the projection from the zero section $\\mathbf {0}_{\\mathcal {P}_i}$ to $\\infty _{\\mathcal {P}_i}$ .", "Thus $p_i$ is a rational map well-defined away from $\\mathbf {0}_{\\mathcal {P}_i}$ .", "Furthermore, we observe that $\\infty _{\\mathcal {P}_0} \\cong \\infty _{\\mathcal {P}_1}$ .", "Using this isomorphism, we have $p_0 = p_1\\circ c$ and $p_1 = p_0\\circ c^{-1}$ .", "Lemma 5.9 Let $\\mathcal {Z}\\subset \\mathcal {C}_2$ be a vertical rational tail.", "Then $p_1\\circ f_1$ contracts an open dense subset of $\\mathcal {Z}$ .", "Since $f_1$ is a stable map and $\\mathcal {Z}$ is a vertical rational tail, we have $f_1(\\mathcal {Z}) \\not\\subset \\mathbf {0}_{\\mathcal {P}_1}$ .", "Thus $p_1\\circ f_1$ is well-defined on an open dense subset $U \\subset \\mathcal {Z}$ such that $f_1(U)$ avoids $\\mathbf {0}_{\\mathcal {P}_1}$ .", "Observe that 
$c^{-1}$ is well-defined on $f_1(U)$ .", "We then have $p_1 \\circ f_1 |_U = p_0\\circ c^{-1} \\circ f_1 |_U = p_0 \\circ f_0|_U.$ Here $p_0$ is well-defined on $f_0(U)$ by Lemma REF .", "The statement follows from the fact that $f_0$ contracts any vertical rational tail.", "Corollary 5.10 If $\\mathcal {Z}\\subset \\mathcal {C}_2$ is a vertical rational tail, then the image $f_1(\\mathcal {Z})$ dominates a line joining $\\mathbf {0}_{\\mathcal {P}_1}$ and a point on $\\infty _{\\mathcal {P}_1}$ .", "By Lemma REF , $f_1(\\mathcal {Z})$ has support on a fiber of $p_1$ .", "Since $\\mathcal {Z}$ intersects $\\infty _{\\mathcal {P}_1}$ at its unique node, it suffices to show that $f_1(\\mathcal {Z}) \\not\\subset \\infty _{\\mathcal {P}_1}$ , so that $f_1|_{\\mathcal {Z}}$ dominates a fiber of $p_1$ .", "Otherwise, since $p_1|_{\\infty _{\\mathcal {P}_1}}$ is the identity, $f_1$ contracts $\\mathcal {Z}$ by Lemma REF .", "This contradicts the stability of $f_1$ constructed as a stable map limit.", "We show that Corollary REF and Lemma REF contradict each other, hence rule out the existence of vertical rational tails.", "Let $\\mathcal {Z}\\subset \\mathcal {C}_2$ be a vertical rational tail.", "The pre-stable map $\\mathcal {C}_2 \\rightarrow \\mathcal {X}$ factors through $\\mathcal {C}_2 \\rightarrow \\mathcal {C}_1$ along which $\\mathcal {Z}$ is contracted to a smooth unmarked point on $\\mathcal {C}_1$ .", "By Corollary REF there is a smooth unmarked point $q \\in \\mathcal {Z}$ such that $f_1(q) \\in \\mathbf {0}_{\\mathcal {P}_1}$ .", "Thus there is a Zariski neighborhood $U \\subset \\mathcal {C}_2$ containing $q$ such that $\\mathbf {E}_j|_{U}$ splits.", "Denote by $\\lbrace H_{ijk}\\rbrace _{k=1}^{\\operatorname{rk}\\mathbf {E}_j}$ the collection of hyperplanes in $\\mathcal {P}_i|_{U}$ corresponding to each splitting factor of $\\mathbf {E}_j|_{U}$ .", "In particular, $f_1(q) \\in \\mathbf {0}_{\\mathcal {P}_1}$ implies $f_1(q) \\in H_{1jk}$ for all $j$ and $k$ .", "We will show that $f_0(q) \\in H_{0jk}$ for all $j$ and $k$ as 
well, hence $f_0(q) \\in \\mathbf {0}_{\\mathcal {P}_0}$ , which contradicts Lemma REF .", "Suppose $\\mathcal {Z}$ intersects $H_{1jk}$ properly at $q$ via $f_1$ .", "Let $D_{1jk} \\subset U$ be an irreducible component of $f_1^*(H_{1jk})$ containing $q$ .", "Then $D_{1jk}$ is a multi-section over $S$ with the general fiber $D_{1jk,\\eta } \\subset f_{1,\\eta }^*(H_{1jk,\\eta }) = f_{0,\\eta }^*(H_{0jk,\\eta })$ .", "Taking closures, we observe that $q \\in D_{1jk} \\subset f_{0}^*(H_{0jk})$ , hence $f_0(q) \\in H_{0jk}$ .", "Suppose $f_1(\\mathcal {Z}) \\subset H_{1jk}$ .", "Note that $p_1\\circ f_1 = p_0 \\circ c^{-1} \\circ f_1 = p_0 \\circ f_0$ are well-defined along an open dense subset of $\\mathcal {Z}$ .", "Then Lemma REF together with $p_1 \\circ f_1(\\mathcal {Z}) \\subset \\infty _{\\mathcal {P}_1} \\cap H_{1jk} \\cong \\infty _{\\mathcal {P}_0} \\cap H_{0jk}$ implies that $f_0$ contracts $\\mathcal {Z}$ to a point of $\\infty _{\\mathcal {P}_0} \\cap H_{0jk}$ .", "In particular, $f_0(q) \\in H_{0jk}$ in this case as well." ], [ "Stabilization", "Let $f\\colon \\mathcal {C}\\rightarrow \\mathfrak {P}$ be a pre-stable reduction extending $f_{\\eta }$ over $S$ as in (REF ).", "We next show that by repeatedly contracting unstable rational bridges and rational tails as in Sections REF and REF , we obtain a pre-stable reduction satisfying (REF ).", "Together with Proposition REF , this will complete the proof of Theorem REF ." 
], [ "Stabilizing unstable rational bridges", "Let $\\mathcal {Z}\\subset \\mathcal {C}$ be an unstable rational bridge.", "We contract $\\mathcal {Z}$ as follows.", "Consider $\\mathcal {C}\\rightarrow C \\rightarrow C^{\\prime }$ where the first arrow takes the coarse curve, and the second arrow contracts the component corresponding to $\\mathcal {Z}$ .", "Since $\\omega ^{\\log }_{C^{\\prime }/S}|_{\\mathcal {C}} = \\omega ^{\\log }_{\\mathcal {C}/S}$ , we have a commutative diagram ${\\mathcal {C}@/^1pc/[drr]^{f} @/_1pc/[rdd] @{-->}[dr]^{f_{C^{\\prime }}}&& \\\\& \\mathfrak {P}_{C^{\\prime }} [r] [d] & \\mathfrak {P}[d] \\\\& C^{\\prime } [r]^{\\omega ^{\\log }_{C^{\\prime }/S}} & \\mathbf {BC}^*_\\omega }$ where the square is cartesian and the dashed arrow $f_{C^{\\prime }}$ is induced by the fiber product.", "By Corollary REF , $\\mathcal {Z}$ is contracted along $f_{C^{\\prime }}$ .", "Note that $f_{\\eta }\\colon \\mathcal {C}_{\\eta } \\rightarrow \\mathfrak {P}$ yields a stable map $\\mathcal {C}_{\\eta } \\rightarrow \\mathfrak {P}_{C^{\\prime }}$ which, possibly after a finite base change, extends to a stable map $f^{\\prime }_{C^{\\prime }}\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_{C^{\\prime }}$ .", "Let $q \\in C^{\\prime }$ be the node to which $\\mathcal {Z}$ contracts.", "Lemma 5.11 The composition $\\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_{C^{\\prime }} \\rightarrow C^{\\prime }$ is the coarse moduli morphism.", "Furthermore, let $\\tilde{q} \\in \\mathcal {C}^{\\prime }$ be the node above $q \\in C^{\\prime }$ .", "Then we have $f|_{\\mathcal {C}\\setminus \\mathcal {Z}} = f^{\\prime }|_{\\mathcal {C}^{\\prime }\\setminus \\lbrace \\tilde{q}\\rbrace }$ .", "Let $\\bar{f}^{\\prime }\\colon \\bar{\\mathcal {C}}^{\\prime } \\rightarrow P_{C^{\\prime }}$ be the coarse stable map of $f^{\\prime }_{C^{\\prime }}$ , and $\\bar{f}_{C^{\\prime }}\\colon C \\rightarrow P_{C^{\\prime }}$ be the coarse stable map of $f_{C^{\\prime }}$ .", "Thus 
$\\bar{f}^{\\prime }$ is the stabilization of $\\bar{f}_{C^{\\prime }}$ as a stable map.", "By construction, the image of $\\mathcal {Z}$ in $C$ is the only unstable component of $\\bar{f}_{C^{\\prime }}$ , hence is the only component contracted along $C \\rightarrow \\bar{\\mathcal {C}}^{\\prime }$ .", "Therefore $\\mathcal {C}^{\\prime } \\rightarrow C^{\\prime }$ is the coarse moduli morphism.", "Since the modification is local along $\\mathcal {Z}$ , the second statement follows from the first one.", "Let $f^{\\prime }$ be the composition $\\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}_{C^{\\prime }} \\rightarrow \\mathfrak {P}$ .", "The above lemma implies that $\\omega ^{\\log }_{\\mathcal {C}^{\\prime }/S} = \\omega ^{\\log }_{C^{\\prime }/S}|_{\\mathcal {C}^{\\prime }}$ .", "Thus $f^{\\prime }\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}$ is a new pre-stable reduction extending $f_{\\eta }$ with $\\mathcal {Z}$ removed." ], [ "Stabilizing rational tails", "Let $\\mathcal {Z}\\subset \\mathcal {C}$ be an unstable rational tail, $\\mathcal {C}\\rightarrow \\mathcal {C}^{\\prime }$ be the contraction of $\\mathcal {Z}$ , and $p \\in \\mathcal {C}^{\\prime }$ be the image of $\\mathcal {Z}$ .", "Possibly after a finite extension, we take the stable map limit $f^{\\prime }\\colon \\mathcal {C}^{\\prime \\prime } \\rightarrow \\mathfrak {P}_{\\mathcal {C}^{\\prime }} := \\mathcal {C}^{\\prime }\\times _{\\mathbf {BC}^*_\\omega }\\mathfrak {P}$ extending the one induced by $f_{\\eta }$ .", "We will also use $f^{\\prime }\\colon \\mathcal {C}^{\\prime \\prime } \\rightarrow \\mathfrak {P}$ for the corresponding morphism.", "Let $\\mathcal {T}\\subset \\mathcal {C}^{\\prime \\prime }$ be the tree of rational components contracted to $p$ .", "Since $f^{\\prime }$ is a modification of $f$ around $\\mathcal {Z}$ , we observe that $f^{\\prime }|_{\\mathcal {C}^{\\prime \\prime } \\setminus \\mathcal {T}} = f|_{\\mathcal {C}\\setminus \\mathcal {Z}}$ .", 
"Proposition 5.12 The composition $\\mathcal {C}^{\\prime \\prime } \\rightarrow \\mathfrak {P}_{\\mathcal {C}^{\\prime }} \\rightarrow \\mathcal {C}^{\\prime }$ is the identity.", "Therefore, $f^{\\prime }\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathfrak {P}$ is a pre-stable reduction extending $f_{\\eta }$ with $\\mathcal {Z}$ contracted, and identical to $f$ everywhere else.", "The proof of the above proposition occupies the rest of this section.", "Since $p$ is a smooth unmarked point of $\\mathcal {C}^{\\prime }$ , it suffices to show that $\\mathcal {T}$ contains no component.", "We first consider the following case.", "Lemma 5.13 Notations and assumptions as above, suppose that $f(\\mathcal {Z}) \\subset \\mathbf {0}_{\\mathfrak {P}}$ .", "Then Proposition REF holds.", "Since $f^{\\prime }|_{\\mathcal {C}^{\\prime \\prime } \\setminus \\mathcal {T}} = f|_{\\mathcal {C}\\setminus \\mathcal {Z}}$ , the assumption implies $\\deg f^*\\mathcal {O}(\\infty _{\\mathfrak {P}}) \\le \\deg (f^{\\prime })^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\overline{\\mathcal {C}^{\\prime \\prime }_s\\setminus \\mathcal {T}}}$ .", "On the other hand, we have $\\deg (f^{\\prime })^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {T}} \\ge 0$ , and “$=$ ” iff $\\mathcal {T}$ is a single point.", "Thus, the lemma follows from $ \\deg f^*\\mathcal {O}(\\infty _{\\mathfrak {P}}) = \\deg (f^{\\prime })^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\overline{\\mathcal {C}^{\\prime \\prime }_s\\setminus \\mathcal {T}}} + \\deg (f^{\\prime })^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {T}} $ .", "We now impose the condition $f(\\mathcal {Z}) \\not\\subset \\mathbf {0}_{\\mathfrak {P}}$ .", "Observe that the pre-stable map $g\\colon \\mathcal {C}\\rightarrow \\mathcal {X}$ contracts $\\mathcal {Z}$ , hence factors through a pre-stable map $g^{\\prime }\\colon \\mathcal {C}^{\\prime } \\rightarrow \\mathcal {X}$ .", "Since $p$ is a smooth unmarked point, we may 
choose a Zariski neighborhood $U^{\\prime } \\subset \\mathcal {C}^{\\prime }$ of $p$ such that $(g^{\\prime })^*\\mathbf {E}_i|_{U^{\\prime }}$ splits for each $i$ .", "Set $U = U^{\\prime }\\times _{\\mathcal {C}^{\\prime }} \\mathcal {C}$ .", "Then $g^*\\mathbf {E}_i|_{U}$ splits as well for each $i$ .", "The $j$ -th splitting factors of $\\oplus \\mathbf {E}_i|_{U}$ and $\\oplus \\mathbf {E}_i|_{U^{\\prime }}$ define families of hyperplanes $H_j \\subset \\mathfrak {P}_U, \\ \\ \\ \\mbox{and} \\ \\ \\ H^{\\prime }_j \\subset \\mathfrak {P}_{U^{\\prime }}$ over $U$ and $U^{\\prime }$ respectively for $j = 1, 2, \\cdots , n$ .", "Lemma 5.14 Notations and assumptions as above, for each $j$ we have $\\deg (f^*H_j)|_{\\mathcal {Z}} \\le 0$ .", "In particular, $f(\\mathcal {Z}) \\not\\subset \\mathbf {0}_{\\mathfrak {P}}$ implies that $f(\\mathcal {Z}) \\cap \\mathbf {0}_{\\mathfrak {P}} = \\emptyset $ .", "Observe that $\\bigcap _j H_j$ is the zero section $\\mathbf {0}_{\\mathfrak {P}_U}$ .", "Thus, it suffices to show that $\\deg (f^*H_j)|_{\\mathcal {Z}} \\le 0$ for each $j$ .", "Since $\\mathcal {Z}$ is contracted by $f$ , $\\mathbf {E}_i$ and $\\mathbf {L}$ are both trivial along $\\mathcal {Z}$ .", "Thus, we have $\\mathfrak {P}_{\\mathcal {Z}} = \\mathbb {P}^\\mathbf {w}(\\oplus _j \\mathcal {L}_{\\mathcal {Z}}^{\\otimes i_j} \\oplus \\mathcal {O})$ where the direct sum is given by the splitting of $\\mathbf {E}_i$ for all $i$ .", "The corresponding section $\\mathcal {Z}\\rightarrow \\mathfrak {P}_{\\mathcal {Z}}$ is defined by a collection of sections $(s_1, \\cdots , s_n, s_{\\infty })$ with no base point, where $s_j \\in H^0(\\mathcal {L}^{i_j}\\otimes f^*\\mathcal {O}(w_{i_j} \\infty _{\\mathfrak {P}})|_{\\mathcal {Z}})$ and $s_{\\infty } \\in H^0(f^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}})$ .", "In particular, we have $f^*\\mathcal {O}(H_j)|_{\\mathcal {Z}} = \\mathcal {L}^{i_j}\\otimes f^*\\mathcal {O}(w_{i_j} \\infty 
_{\\mathfrak {P}})|_{\\mathcal {Z}}$ .", "Note that $w_{i_j} = a\\cdot i_j$ by the choice of weights (REF ).", "We calculate $(\\mathcal {L}^{i_j}\\otimes f^*\\mathcal {O}(w_{i_j} \\infty _{\\mathfrak {P}})|_{\\mathcal {Z}})^{r}= (\\mathcal {L}\\otimes f^*\\mathcal {O}(\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}})^{\\tilde{r}i_j}= \\big (\\omega ^{\\log }_{\\mathcal {C}/S}\\otimes f^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})|_{\\mathcal {Z}}\\big )^{i_j}.$ Since $\\mathcal {Z}$ is unstable, we have $\\deg \\big (\\omega ^{\\log }_{\\mathcal {C}/S}\\otimes f^*\\mathcal {O}(\\tilde{r}\\infty _{\\mathfrak {P}})\\big )|_{\\mathcal {Z}}\\le 0$ , which implies $\\deg (f^*H_j)|_{\\mathcal {Z}} \\le 0$ .", "To proceed further, consider the spin structure $\\mathcal {L}^{\\prime }$ over $\\mathcal {C}^{\\prime }$ and observe that $\\mathcal {L}^{\\prime }|_{\\mathcal {C}^{\\prime }\\setminus \\lbrace p\\rbrace } = \\mathcal {L}|_{\\mathcal {C}\\setminus \\mathcal {Z}}$ .", "Using the same construction as for (REF ), we obtain a quasi-finite morphism $\\widetilde{\\mathcal {C}} \\rightarrow \\mathcal {C}$ between two pre-stable curves over $S$ which is an isomorphism away from $\\mathcal {Z}$ and its pre-image in $\\widetilde{\\mathcal {C}}$ , and a canonical morphism of line bundles $\\mathcal {L}^{\\prime }|_{\\widetilde{\\mathcal {C}}} \\rightarrow \\mathcal {L}|_{\\widetilde{\\mathcal {C}}}$ extending the identity $\\mathcal {L}^{\\prime }|_{\\mathcal {C}^{\\prime }\\setminus \\lbrace p\\rbrace } = \\mathcal {L}|_{\\mathcal {C}\\setminus \\mathcal {Z}}$ , whose $r$ -th power is the canonical morphism $\\omega ^{\\log }_{\\mathcal {C}^{\\prime }/S}|_{\\widetilde{\\mathcal {C}}} \\rightarrow \\omega ^{\\log }_{\\mathcal {C}/S}|_{\\widetilde{\\mathcal {C}}}$ .", "Define: $\\mathfrak {P}_{\\widetilde{\\mathcal {C}}} := \\mathfrak {P}\\times _{\\mathbf {BC}^*_\\omega }\\widetilde{\\mathcal {C}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathfrak {P}^{\\prime }_{\\widetilde{\\mathcal {C}}} := 
\\mathfrak {P}_{\\mathcal {C}^{\\prime }}\\times _{\\mathcal {C}^{\\prime }}\\widetilde{\\mathcal {C}}$ We have arrived at the following commutative diagram ${\\widetilde{\\mathcal {C}} [rr]^{\\widetilde{f}} && \\mathfrak {P}_{\\widetilde{\\mathcal {C}}} [rrd] @/^1pc/@{-->}[dd]^{c^{-1}}&&&& \\\\&&&& \\widetilde{\\mathcal {C}} [rrd] && \\\\\\widetilde{\\mathcal {C}}^{\\prime \\prime \\prime } [rr]^{\\widetilde{f}^{\\prime }} [uu] @/_1pc/[rrrrd]^{f^{\\prime \\prime }}&& \\mathfrak {P}^{\\prime }_{\\widetilde{\\mathcal {C}}} @/^1pc/@{-->}[uu]^{c} [rru] [rrd] &&&& \\mathcal {C}^{\\prime } \\\\&& &&\\mathfrak {P}_{\\mathcal {C}^{\\prime }} [rru] &&}$ where $\\widetilde{f}$ is the section obtained by pulling back $f$ , $c$ and $c^{-1}$ are the two birational maps defined using $\\mathcal {L}^{\\prime }|_{\\widetilde{\\mathcal {C}}} \\rightarrow \\mathcal {L}|_{\\widetilde{\\mathcal {C}}}$ similarly to (REF ), and $\\widetilde{f}^{\\prime }$ is the stable map limit extending the one given by $f_{\\eta }$ .", "Denote by $\\mathcal {Z}$ the corresponding rational tail of $\\widetilde{\\mathcal {C}}$ , and by $\\widetilde{\\mathcal {Z}} \\subset \\widetilde{\\mathcal {C}}^{\\prime \\prime \\prime }$ the component dominating $\\mathcal {Z}$ .", "By Lemma REF , the image $\\widetilde{f}(\\mathcal {Z})$ avoids $\\mathbf {0}_{\\mathfrak {P}_{\\widetilde{\\mathcal {C}}}}|_{\\mathcal {Z}}$ which is the indeterminacy locus of $c^{-1}$ .", "This implies that $\\widetilde{f}^{\\prime }(\\widetilde{\\mathcal {Z}}) \\subset c^{-1}\\circ \\widetilde{f}(\\mathcal {Z}) \\subset \\infty _{\\mathfrak {P}^{\\prime }_{\\widetilde{\\mathcal {C}}}}$ .", "Thus by the commutativity of the above diagram, any rational tail of $\\widetilde{\\mathcal {C}}^{\\prime \\prime \\prime }$ contracted to a point on $\\mathcal {Z}$ , is also contracted by $\\widetilde{f}^{\\prime }$ .", "Now the stability of $\\widetilde{f}^{\\prime }$ as a stable map implies: Lemma 5.15 $\\widetilde{\\mathcal {C}}^{\\prime 
\\prime \\prime } \\rightarrow \\widetilde{\\mathcal {C}}$ contracts no component.", "Furthermore: Lemma 5.16 The rational tail $\\widetilde{\\mathcal {Z}}$ is contracted by $f^{\\prime \\prime }$ .", "Write $\\widetilde{U} = \\widetilde{\\mathcal {C}}\\times _{\\mathcal {C}^{\\prime }}U$ .", "By abuse of notation, denote by $H_{j} \\subset \\mathfrak {P}_{\\widetilde{\\mathcal {C}}}$ and $H^{\\prime }_{j} \\subset \\mathfrak {P}^{\\prime }_{\\widetilde{\\mathcal {C}}}$ the families of hyperplanes over $\\widetilde{U}$ obtained by pulling back the corresponding hyperplanes in (REF ).", "From the construction of $c^{-1}$ , we observe that $\\widetilde{f}(\\mathcal {Z}) \\subset H_j$ for some $H_j$ implies that $\\widetilde{f}^{\\prime }(\\widetilde{\\mathcal {Z}}) \\subset H^{\\prime }_j$ .", "Suppose $f^{\\prime \\prime }(\\widetilde{\\mathcal {Z}})$ is one-dimensional.", "Then $\\widetilde{\\mathcal {Z}}$ intersects some $H^{\\prime }_j$ properly and non-trivially.", "Since $H^{\\prime }_{j}$ is a family over $\\widetilde{U}$ , $(\\widetilde{f}^{\\prime })^*(H^{\\prime }_j)$ contains a non-empty irreducible multi-section over $U$ which intersects $\\widetilde{\\mathcal {Z}}$ .", "Denote this multi-section by $D$ .", "Consider the general fiber $D_{\\eta } \\subset f_{\\eta }^*(H^{\\prime }_{j,\\eta }) = f_{\\eta }^*(H_{j,\\eta })$ .", "The closure $\\overline{D_{\\eta }} \\subset \\widetilde{f}^{*}H_j$ intersects $\\mathcal {Z}$ non-trivially.", "By Lemma REF , we necessarily have $\\widetilde{f}(\\mathcal {Z}) \\subset H_j$ , hence $\\widetilde{f}^{\\prime }(\\widetilde{\\mathcal {Z}}) \\subset H^{\\prime }_{j}$ by the previous paragraph.", "This contradicts the assumption that $\\widetilde{\\mathcal {Z}}$ and $H^{\\prime }_j$ intersect properly.", "Finally, observe that the coarse pre-stable map of $f^{\\prime \\prime }$ factors through the coarse stable map of $f^{\\prime }\\colon \\mathcal {C}^{\\prime \\prime } \\rightarrow \\mathfrak {P}_{\\mathcal 
{C}^{\\prime }}$ .", "The above two lemmas show that the unstable components of $\\widetilde{\\mathcal {C}}^{\\prime \\prime \\prime }$ with respect to $f^{\\prime \\prime }$ are precisely those contracted in $\\mathcal {C}^{\\prime }$ .", "Therefore, the arrow $\\mathcal {C}^{\\prime \\prime } \\rightarrow \\mathcal {C}^{\\prime }$ contracts no component.", "This completes the proof of Proposition REF ." ], [ "Reducing perfect obstruction theories along boundary", "For various applications in this and our subsequent papers [23], [24], we further develop a general machinery, initiated in [25], on reducing a perfect obstruction theory along a Cartier divisor using cosections.", "Furthermore, we prove a formula relating the two virtual cycles defined using a perfect obstruction theory and its reduction under the general setting in Section REF .", "Since log structures are irrelevant in this section, we will assume all log structures to be trivial for simplicity." ], [ "Set-up of the reduction", "Throughout this section we will consider a sequence of morphisms of algebraic stacks ${M}\\rightarrow \\mathfrak {H}\\rightarrow \\mathfrak {M}$ where ${M}$ is a separated Deligne–Mumford stack, and the second morphism is smooth of Deligne–Mumford type.", "Let $\\Delta \\subset \\mathfrak {M}$ be an effective Cartier divisor, and let $\\Delta _{\\mathfrak {H}}$ and $\\Delta _{{M}}$ be its pull-backs in $\\mathfrak {H}$ and ${M}$ respectively.", "Let $\\mathbb {F}$ be the complex with amplitude $[0,1]$ over $\\mathfrak {M}$ $\\mathcal {O}_{\\mathfrak {M}} \\stackrel{\\epsilon }{\\longrightarrow } \\mathcal {O}_{\\mathfrak {M}}(\\Delta )$ where $\\epsilon $ is the canonical section defining $\\Delta $ .", "We further assume two relative perfect obstruction theories $\\varphi _{{M}/\\mathfrak {M}} \\colon \\mathbb {T}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{{M}/\\mathfrak {M}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\varphi _{\\mathfrak {H}/\\mathfrak {M}} \\colon \\mathbb 
{T}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}$ which fit in a commutative diagram ${\\mathbb {T}_{{M}/\\mathfrak {M}} [rr] [d]_{\\varphi _{{M}/\\mathfrak {M}}} && \\mathbb {T}_{\\mathfrak {H}/\\mathfrak {M}} [d]^{\\varphi _{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}} \\\\\\mathbb {E}_{{M}/\\mathfrak {M}} [rr]^{\\sigma ^{\\bullet }_{\\mathfrak {M}}} && \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}}$ such that $H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}) \\cong \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{\\mathfrak {H}}$ , and the following cosection $\\sigma _{\\mathfrak {M}} := H^1(\\sigma ^{\\bullet }_{\\mathfrak {M}}) \\colon H^1(\\mathbb {E}_{{M}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}) \\cong \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{{M}}$ is surjective along $\\Delta _{{M}}$ ." ], [ "The construction of the reduction", "Consider the composition $\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}})[-1] \\cong \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{\\mathfrak {H}}[-1] \\twoheadrightarrow \\operatorname{cok}(\\epsilon )[-1].$ Since $\\mathfrak {H}\\rightarrow \\mathfrak {M}$ is smooth, we have $\\operatorname{cok}(\\epsilon )[-1] \\cong \\mathbb {F}|_{\\mathfrak {H}}$ .", "Hence the above composition defines a morphism $\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {F}|_{\\mathfrak {H}}$ over $\\mathfrak {H}$ .", "We form the distinguished triangles $\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {F}|_{\\mathfrak {H}} \\stackrel{[1]}{\\rightarrow } \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {F}|_{{M}} \\stackrel{[1]}{\\rightarrow },$ where the middle arrow in the second triangle is the composition of (REF ) 
with $\\sigma ^{\\bullet }_{\\mathfrak {M}}$ .", "Theorem 6.1 Notations and assumptions as above, we have: There is a factorization of perfect obstruction theories ${\\mathbb {T}_{*/\\mathfrak {M}} [rr]^{\\varphi _{*/\\mathfrak {M}}} [rd]_{\\varphi ^{\\mathrm {red}}_{*/\\mathfrak {M}}} && \\mathbb {E}_{*/\\mathfrak {M}} \\\\&\\mathbb {E}^{\\mathrm {red}}_{*/\\mathfrak {M}} [ru]&}$ such that $\\varphi ^{\\mathrm {red}}_{*/\\mathfrak {M}}|_{*\\setminus \\Delta _*} = \\varphi _{*/\\mathfrak {M}}|_{*\\setminus \\Delta _*}$ for $* = {M}$ or $\\mathfrak {H}$ .", "There is a canonical commutative diagram ${\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}} [rr] [d]_{\\sigma ^{\\bullet ,\\mathrm {red}}_{\\mathfrak {M}}} && \\mathbb {E}_{{M}/\\mathfrak {M}} [d]^{\\sigma ^{\\bullet }_{\\mathfrak {M}}} \\\\\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}} [rr] && \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}}$ such that $H^1(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) \\cong \\mathcal {O}_{\\mathfrak {H}}$ .", "Furthermore, the reduced cosection $\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}:= H^1(\\sigma ^{\\mathrm {red},\\bullet }_{\\mathfrak {M}}) \\colon H^1(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}) \\cong \\mathcal {O}_{{M}}$ is surjective along $\\Delta _{{M}}$ , and satisfies $\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}|_{{M}\\setminus \\Delta _{{M}}} = \\sigma _{\\mathfrak {M}}|_{{M}\\setminus \\Delta _{{M}}}$ .", "This theorem will be proven below in Section REF .", "In case $\\mathfrak {M}$ admits a fundamental class $[\\mathfrak {M}]$ , denote by $[{M}]^{\\mathrm {vir}}$ and $[{M}]^{\\mathrm {red}}$ the virtual cycles given by the perfect obstruction theories $\\varphi _{{M}/\\mathfrak {M}}$ and $\\varphi ^{\\mathrm {red}}_{{M}/\\mathfrak {M}}$ respectively.", "Remark 6.2 In order to construct the cone $\\mathbb {E}^\\mathrm 
{red}_{{M}/\\mathfrak {M}}$ , instead of having the auxiliary stack $\\mathfrak {H}$ , it suffices to assume the existence of a cosection $\\sigma _\\mathfrak {M}\\colon H^1(\\mathbb {E}_{{M}/\\mathfrak {M}}) \\rightarrow \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{M}$ .", "Furthermore, the proof of Theorem REF shows that if $\\sigma _\\mathfrak {M}$ is surjective along $\\Delta _{M}$ , then $\\mathbb {E}^\\mathrm {red}_{{M}/\\mathfrak {M}}$ is perfect of amplitude $[0, 1]$ .", "On the other hand, in practice the auxiliary stack $\\mathfrak {H}$ provides a convenient criterion to ensure the factorization of Theorem REF (1)." ], [ "Descending to the absolute reduced theory", "We further assume $\\mathfrak {M}$ is smooth.", "Consider the morphism of triangles: ${\\mathbb {T}_{{*}/\\mathfrak {M}} [r] [d]_{\\varphi _{*/\\mathfrak {M}}^{\\mathrm {red}}} & \\mathbb {T}_{{*}} [r] [d]_{\\varphi ^{\\mathrm {red}}_{*}} & \\mathbb {T}_{\\mathfrak {M}}|_{{*}} [r]^{[1]} [d]^{\\cong } & \\\\\\mathbb {E}^{\\mathrm {red}}_{{*}/\\mathfrak {M}} [r] & \\mathbb {E}^{\\mathrm {red}}_{{*}} [r] & \\mathbb {T}_{\\mathfrak {M}}|_{{*}} [r]^{[1]} &}$ for $*=\\mathfrak {H}$ or ${M}$ .", "By [10], $\\varphi ^{\\mathrm {red}}_{M}$ is a perfect obstruction theory compatible with $\\varphi ^{\\mathrm {red}}_{{M}/\\mathfrak {M}}$ , hence induces the same virtual cycle $[{M}]^{\\mathrm {red}}$ .", "Lemma 6.3 The induced morphism $H^1(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}/{\\mathfrak {M}}}) \\rightarrow H^1(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}})$ is an isomorphism of $\\mathcal {O}_{{\\mathfrak {H}}}$ -modules.", "Since $\\mathfrak {M}$ is smooth, we have $H^1(\\mathbb {T}_{\\mathfrak {M}}) = 0$ .", "Consider the induced morphism between long exact sequences ${H^{0}(\\mathbb {T}_{{\\mathfrak {H}}}) [r] [d]^{\\cong } & H^{0}(\\mathbb {T}_{\\mathfrak {M}}|_{{\\mathfrak {H}}}) [r] [d]^{\\cong } & H^{1}(\\mathbb {T}_{{\\mathfrak {H}}/\\mathfrak {M}}) [r] [d] & H^{1}(\\mathbb {T}_{{\\mathfrak
{H}}}) [r] [d] & 0 \\\\H^{0}(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}}) [r] & H^{0}(\\mathbb {T}_{\\mathfrak {M}}|_{{\\mathfrak {H}}}) [r] & H^{1}(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}/\\mathfrak {M}}) [r] & H^{1}(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}}) [r] & 0}$ Since ${\\mathfrak {H}} \\rightarrow \\mathfrak {M}$ is smooth, the two horizontal arrows on the left are both surjective.", "Thus $H^1(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}})$ is an isomorphism.", "By Theorem REF (2), we have $H^1(\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}}) \\cong \\mathcal {O}_{{\\mathfrak {H}}}$ .", "By Theorem REF , we obtain a morphism of triangles ${\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}} [r] [d]_{{\\sigma ^{\\bullet ,\\mathrm {red}}_{\\mathfrak {M}}}} & \\mathbb {E}^{\\mathrm {red}}_{{M}} [r] [d]_{\\sigma ^{\\bullet ,\\mathrm {red}}} & \\mathbb {T}_{{M}} [r]^{[1]} [d] & \\\\\\mathbb {E}^{\\mathrm {red}}_{{\\mathfrak {H}}/{\\mathfrak {M}}}|_{{M}} [r] & \\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}}|_{{M}} [r] & \\mathbb {T}_{\\mathfrak {M}}|_{{M}} [r]^{[1]} &}$ Taking $H^1$ and applying Lemma REF , we have a commutative diagram ${H^1(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}) @{->>}[r] [d]_{\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}} & H^1(\\mathbb {E}^{\\mathrm {red}}_{{M}}) [d]^{\\sigma ^{\\mathrm {red}}} \\\\\\mathcal {O}_{{M}} [r]^{=} & \\mathcal {O}_{{M}}.", "}$ Denote by ${M}(\\sigma ^{\\mathrm {red}}) \\subset {M}$ the closed substack along which the cosection $\\sigma ^{\\mathrm {red}}$ degenerates, and write $\\iota \\colon {M}(\\sigma ^{\\mathrm {red}}) \\hookrightarrow {M}$ for the closed embedding.", "Let $[{M}]_{\\sigma ^{\\mathrm {red}}}$ be the cosection localized virtual cycle as in [42].", "We conclude that Theorem 6.4 With the assumptions in Section REF and further assuming that $\\mathfrak {M}$ is smooth, we have The 
cosection $\\sigma ^{\\mathrm {red}}$ is surjective along $\\Delta _{{M}}$ .", "$\\iota _*[{M}]_{\\sigma ^{\\mathrm {red}}} = [{M}]^{\\mathrm {red}}$ .", "(1) follows from the surjectivity of $\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}$ along $\\Delta _{{M}}$ and (REF ).", "(2) follows from [42]." ], [ "Proof of Theorem ", "By (REF ), we obtain a commutative diagram of solid arrows ${\\mathbb {T}_{{M}/\\mathfrak {M}} @/^1pc/[rrd] @{-->}[rd]_{\\varphi ^{\\mathrm {red}}_{{M}/\\mathfrak {M}}} [dd] &&&& \\\\& \\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}} [dd] [r] & \\mathbb {E}_{{M}/\\mathfrak {M}} [dd] [r] & \\mathbb {F}|_{{M}} [r]^{[1]} @{=}[dd] & \\\\\\mathbb {T}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}} @/^1pc/[rrd]|{\\ \\ \\ \\ \\ } @{-->}[rd]_{\\varphi ^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}} &&&& \\\\& \\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}} [r] & \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}} [r] & \\mathbb {F}|_{{M}} [r]^{[1]} &}$ where the two horizontal lines are given by (REF ), and the two solid curved arrows are given by (REF ).", "Since $\\mathfrak {H}\\rightarrow \\mathfrak {M}$ is smooth of Deligne–Mumford type, $\\mathbb {T}_{\\mathfrak {H}/\\mathfrak {M}}$ is the relative tangent bundle $T_{\\mathfrak {H}/\\mathfrak {M}}$ .", "Thus the composition $\\mathbb {T}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}} \\rightarrow \\mathbb {F}|_{\\mathfrak {H}}$ is trivial, which leads to the desired arrow $\\varphi ^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}$ .", "Similarly, the composition $\\mathbb {T}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {F}|_{{M}}$ factors through $\\mathbb {T}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}} \\rightarrow \\mathbb {F}|_{{M}}$ hence is also trivial, which leads to $\\varphi ^{\\mathrm {red}}_{{M}/\\mathfrak {M}}$ .", "This proves the factorization part in (1), and the commutative 
diagram in (2).", "For the perfect obstruction theories part, observe that $\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}$ and $\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}$ are at least perfect in $[0,2]$ as $\\mathbb {F}$ is perfect in $[0,1]$ .", "It remains to show that $H^2(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) = 0$ and $H^{2}(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}})=0$ .", "Taking the long exact sequence of the first triangle in (REF ), we have $H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {F}|_{\\mathfrak {H}}) \\rightarrow H^2(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) \\rightarrow 0.$ Since the left arrow is precisely $\\mathcal {O}_{\\mathfrak {M}}(\\Delta ) \\twoheadrightarrow \\operatorname{cok}\\epsilon $ , we obtain $H^2(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) = 0$ .", "Similarly, we have the long exact sequence $H^1(\\mathbb {E}_{{M}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {F}|_{{M}}) \\rightarrow H^2(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}) \\rightarrow 0,$ where the left arrow is given by the composition $H^1(\\mathbb {E}_{{M}/\\mathfrak {M}}) \\stackrel{\\sigma _{\\mathfrak {M}}}{\\rightarrow } H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{{M}}) \\twoheadrightarrow H^1(\\mathbb {F}|_{{M}}).$ Since $\\mathbb {F}|_{{M}\\setminus \\Delta _{{M}}} = 0$ and $\\sigma _{\\mathfrak {M}}$ is surjective along $\\Delta _{{M}}$ , the above composition is surjective, hence $H^2(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}) = 0$ .", "We next verify that $\\varphi ^{\\mathrm {red}}_{{M}/\\mathfrak {M}}$ and $\\varphi ^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}$ are obstruction theories.", "Indeed, the factorization of (1) implies a surjection $H^0(\\mathbb {T}_{{M}/\\mathfrak {M}}) \\twoheadrightarrow H^0(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}})$ and an injection $H^1(\\mathbb 
{T}_{{M}/\\mathfrak {M}}) \\hookrightarrow H^1(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}})$ .", "Since $\\mathbb {F}|_{{M}}$ is perfect in $[0,1]$ , $H^0(\\mathbb {T}_{{M}/\\mathfrak {M}}) \\twoheadrightarrow H^0(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}})$ is an injection, hence an isomorphism.", "The case that $\\varphi ^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}$ is an obstruction theory can be proved similarly.", "This completes the proof of (1).", "Observe that $H^0(\\mathbb {F}|_{\\mathfrak {H}}) = 0$ since $\\mathfrak {H}\\rightarrow \\mathfrak {M}$ is smooth.", "The first triangle in (REF ) implies an exact sequence $0 \\rightarrow H^1(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}) \\rightarrow H^1(\\mathbb {F}|_{\\mathfrak {H}}) \\rightarrow 0.$ Using (REF ) and the construction of (REF ), we obtain $H^1(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) \\cong \\mathcal {O}_{\\mathfrak {H}}$ .", "Now (REF ) induces a morphism of long exact sequences ${0 [r] & H^0(\\mathbb {F}|_{{M}}) [r] [d]^{\\cong } & H^1(\\mathbb {E}^{\\mathrm {red}}_{{M}/\\mathfrak {M}}) [r] [d]^{\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}} & H^1(\\mathbb {E}_{{M}/\\mathfrak {M}}) [r] [d]^{\\sigma _{\\mathfrak {M}}} & H^1(\\mathbb {F}|_{{M}}) [r] [d]^{\\cong }& 0 \\\\0 [r] & H^0(\\mathbb {F}|_{{M}}) [r] & H^1(\\mathbb {E}^{\\mathrm {red}}_{\\mathfrak {H}/\\mathfrak {M}}) [r] & H^1(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}) [r] & H^1(\\mathbb {F}|_{{M}}) [r] & 0}$ The surjectivity of $\\sigma ^{\\mathrm {red}}_{\\mathfrak {M}}$ along $\\Delta _{{M}}$ follows from the surjectivity of $\\sigma _{\\mathfrak {M}}$ along $\\Delta _{{M}}$ .", "This finishes the proof of (2)." 
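The obstruction-killing step used in this proof can be condensed into a single display (a recap of the argument above in the same notation; nothing beyond the proof is assumed): since $\mathbb {F}$ is supported on $\Delta $ and $\sigma _{\mathfrak {M}}$ is surjective along $\Delta _{{M}}$ , the composite below is surjective, and the long exact sequence then forces the degree-two cohomology of the reduced complex to vanish.

```latex
% Recap: the surjectivity that kills the degree-two obstruction.
H^1(\mathbb{E}_{{M}/\mathfrak{M}})
  \xrightarrow{\ \sigma_{\mathfrak{M}}\ }
H^1(\mathbb{E}_{\mathfrak{H}/\mathfrak{M}}|_{{M}})
  \twoheadrightarrow
H^1(\mathbb{F}|_{{M}})
  \quad\Longrightarrow\quad
H^2(\mathbb{E}^{\mathrm{red}}_{{M}/\mathfrak{M}}) = 0 .
```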
], [ "The reduced boundary cycle", "The pull-backs $\\mathbb {E}_{\\Delta _{{M}}/\\Delta } := \\mathbb {E}_{{M}/\\mathfrak {M}}|_{\\Delta _{{M}}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\mathbb {E}_{\\Delta _{\\mathfrak {H}}/\\Delta } := \\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{\\Delta _{\\mathfrak {H}}}$ define perfect obstruction theories of $\\Delta _{{M}} \\rightarrow \\Delta $ and $\\Delta _{\\mathfrak {H}} \\rightarrow \\Delta $ respectively.", "Consider the sequence of morphisms $\\mathbb {E}_{\\Delta _{{M}}/\\Delta } \\rightarrow \\mathbb {E}_{\\Delta _{\\mathfrak {H}}/\\Delta }|_{\\Delta _{{M}}} \\rightarrow H^{1}(\\mathbb {E}_{\\mathfrak {H}/\\mathfrak {M}}|_{\\Delta _{{M}}})[-1] \\rightarrow H^1(\\mathbb {F})|_{\\Delta _{{M}}}[-1]$ where the last arrow is given by (REF ).", "Since $H^1(\\mathbb {F}|_{\\Delta }) = \\mathcal {O}_{\\Delta }(\\Delta ),$ we obtain a triangle $\\mathbb {E}^{\\mathrm {red}}_{\\Delta _{{M}}/\\Delta } \\rightarrow \\mathbb {E}_{\\Delta _{{M}}/\\Delta } \\rightarrow \\mathcal {O}_{\\Delta }(\\Delta )|_{\\Delta _{{M}}}[-1] \\stackrel{[1]}{\\rightarrow }$ The two virtual cycles $[{M}]^{\\mathrm {vir}}$ and $[{M}]^{\\mathrm {red}}$ are related as follows.", "Theorem 6.5 Notations and assumptions as above, we have There is a canonical factorization of perfect obstruction theories ${\\mathbb {T}_{\\Delta _{{M}}/\\Delta } [rr]^{\\varphi _{\\Delta _{{M}}/\\Delta }} [rd]_{\\varphi ^{\\mathrm {red}}_{\\Delta _{{M}}/\\Delta }} && \\mathbb {E}_{\\Delta _{{M}}/\\Delta } \\\\&\\mathbb {E}^{\\mathrm {red}}_{\\Delta _{{M}}/\\Delta } [ru]&}$ Denote by $[\\Delta _{{M}}]^{\\mathrm {red}}$ the virtual cycle associated to $\\varphi ^{\\mathrm {red}}_{\\Delta _{{M}}/\\Delta }$ , called the reduced boundary cycle.", "Suppose $\\mathfrak {M}$ is smooth.", "Then we have a relation of virtual cycles $[{M}]^{\\mathrm {vir}} = [{M}]^{\\mathrm {red}} + i_*[\\Delta _{{M}}]^{\\mathrm {red}}$ , where $i \\colon \\Delta _{{M}} \\rightarrow {M}$ is the natural
embedding.", "The proof of Theorem REF (1) is similar to that of Theorem REF (1), and will be omitted.", "We next consider (2).", "Recall that ${M}(\\sigma ^{\\mathrm {red}}) \\subset {M}$ is the locus where $\\sigma ^{\\mathrm {red}}$ , hence $\\sigma _{\\mathfrak {M}}$ , degenerates.", "Replacing ${M}$ by ${M}\\setminus {M}(\\sigma ^{\\mathrm {red}})$ , we may assume that $\\sigma _{\\mathfrak {M}}$ is everywhere surjective.", "Since the cosection localized virtual cycle $[{M}]_{\\sigma ^{\\mathrm {red}}}$ is represented by a Chow cycle supported on ${M}(\\sigma ^{\\mathrm {red}})$ , which is empty by our assumption, we see that $[{M}]_{\\sigma ^{\\mathrm {red}}} = 0$ .", "By Theorem REF , it remains to show that $[{M}]^{\\mathrm {vir}} = i_*[\\Delta _{{M}}]^{\\mathrm {red}}.$ To proceed, we consider the triangle $\\mathbb {E}^{\\operatorname{tred}}_{{M}/\\mathfrak {M}} \\rightarrow \\mathbb {E}_{{M}/\\mathfrak {M}} \\rightarrow \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{{M}}[-1] \\stackrel{[1]}{\\rightarrow }$ where the middle arrow is given by (REF ) and (REF ).", "Similar to the case of (1), we obtain a factorization of perfect obstruction theories ${\\mathbb {T}_{{M}/\\mathfrak {M}} [rr]^{\\varphi _{{M}/\\mathfrak {M}}} [rd]_{\\varphi ^{\\operatorname{tred}}_{{M}/\\mathfrak {M}}} && \\mathbb {E}_{{M}/\\mathfrak {M}} \\\\&\\mathbb {E}^{\\operatorname{tred}}_{{M}/\\mathfrak {M}} [ru]&}$ Let $[{M}]^{\\operatorname{tred}}$ be the virtual cycle corresponding to the perfect obstruction theory $\\varphi ^{\\operatorname{tred}}_{{M}/\\mathfrak {M}}$ .", "We call $[{M}]^{\\operatorname{tred}}$ the totally reduced virtual cycle, to be distinguished from $[{M}]^{\\mathrm {red}}$ .", "Comparing (REF ) and (REF ), we have $\\varphi ^{\\operatorname{tred}}_{{M}/\\mathfrak {M}}|_{\\Delta _{{M}}} = \\varphi ^{\\mathrm {red}}_{\\Delta _{{M}}/\\Delta },$ hence $i^{!}[{M}]^{\\operatorname{tred}} = [\\Delta _{{M}}]^{\\mathrm {red}}.$", "Since $\\mathfrak {M}$ is smooth, as in (REF ), we may construct absolute perfect obstruction theories
associated to $\\varphi _{{M}/\\mathfrak {M}}$ and $\\varphi ^{\\operatorname{tred}}_{{M}/\\mathfrak {M}}$ respectively: $\\varphi _{{M}}\\colon \\mathbb {T}_{{M}} \\rightarrow \\mathbb {E}_{{M}} \\ \\ \\ \\mbox{and} \\ \\ \\ \\varphi ^{\\operatorname{tred}}_{{M}}\\colon \\mathbb {T}_{{M}} \\rightarrow \\mathbb {E}^{\\operatorname{tred}}_{{M}}.$ By the same construction as in Section REF , the cosection $\\sigma _{\\mathfrak {M}}$ descends to an absolute cosection $\\sigma \\colon H^1(\\mathbb {E}_{{M}}) \\rightarrow \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{{M}}$ which is everywhere surjective.", "Let $\\mathfrak {E}_{{M}}$ and $\\mathfrak {E}^{\\operatorname{tred}}_{{M}}$ be the vector bundle stacks of $\\mathbb {E}_{{M}}$ and $\\mathbb {E}^{\\operatorname{tred}}_{{M}}$ respectively.", "Then $\\mathfrak {E}^{\\operatorname{tred}}_{{M}}$ is the kernel cone stack of $\\mathfrak {E}_{{M}} \\rightarrow \\mathcal {O}_{\\mathfrak {M}}(\\Delta )|_{{M}}$ induced by $\\sigma $ .", "Let $\\mathfrak {C}_{{M}}$ be the intrinsic normal cone of ${M}$ .", "Unwinding the definition of the cosection localized virtual cycle in [42], we have $[{M}]_\\sigma = i^{!} 0^!_{\\mathfrak {E}^{\\operatorname{tred}}_{{M}}}[\\mathfrak {C}_{{M}}] = i^! [{M}]^{\\operatorname{tred}} = [\\Delta _{{M}}]^{\\mathrm {red}},$ where $[{M}]_\\sigma $ is the cosection localized virtual cycle corresponding to $\\sigma $ .", "Finally, (REF ) follows from $i_*[{M}]_\\sigma = [{M}]^{\\mathrm {vir}}$ , see [42]." ] ]
1906.04345
[ [ "A quantum emitter coated with graphene interacting in the strong\n coupling regime" ], [ "Abstract We demonstrate the strong coupling of a quantum dot and a graphene spherical shell coating it.", "Our simulations are exact solutions of the 3D Maxwell equations.", "The interaction produces sharp hybrid modes, even when the two are off-resonant, which are continuously voltage-tunable over an 80 meV interval.", "Although a bare quantum dot is also voltage-tunable, the coupling of light to these \"very sharp\" plexcitonic resonances is an order of magnitude larger than its coupling to a quantum dot alone.", "Hence, our results are very attractive for sensing applications and graphene display technologies with sharper colors.", "Moreover, using a simple theoretical model, we explain why such sharp, highly tunable resonances emerge." ], [ "Introduction", "Graphene is a material with superior optical, electronic and mechanical properties [1], [2], [3], [4], [5].", "It can be used to replace noble metals (mainly Au and Ag) in applications operating at near- to far-infrared (IR) wavelengths [6].", "Graphene possesses an advantage over noble metals because it has smaller material losses [7] and its optical properties are tunable [8], allowing the design of multipurpose applications [9], [10], [11].", "In recent years, graphene has also been recognized as a promising active material for super-capacitors.", "Studies show that having a large surface area is essential for such applications [12], [13].", "In that respect, a spherical geometry (graphene nano-ball) has been suggested for increasing the surface area.", "Indeed, it was shown that a graphene mesoporous structure with an average pore diameter of $4.27$ nm can be fabricated via the chemical vapor deposition technique [14].", "Additionally, self-crystallized graphene and graphite nano-balls have been recently demonstrated via Ni vapor-assisted growth [15].", "Utilization of such growth techniques or in-liquid synthesis methods [16] 
can be employed to construct nanoparticle-graphene composite structures which operate in the strong-coupling regime.", "In many studies of such nanoscale composites, the focus of attention has resided mainly on electrical properties.", "It is also intriguing to study the optical applications of the graphene spherical shell structures [17].", "In addition, the electromagnetic response of spherical shells has also been studied in terms of their plasmonic resonances [18], [19].", "Graphene plasmons (GPs) can be tuned continuously by applying a voltage or can be adjusted by electrostatic doping [20], besides trapping the incident light into small volumes [21], [22].", "This tuning provides incredible potential in a vast range of applications, such as sensing [23], switching [8], and meta-materials [24].", "Placing a quantum emitter (QE), such as a quantum dot (QD), in close proximity to a graphene nano-structure can yield strong interaction [21] and modulations in optical properties.", "Usually, the interaction of a QE with a nano-structured environment is described by investigating the QE's lifetime and calculating the Purcell factor [21].", "For such simulations, the QE-nano-structure interaction is described in terms of non-Hermitian quantum electrodynamics, and the QE is treated as a point dipole source.", "Moreover, the interaction between a QE and an infinite graphene layer has been investigated experimentally by measuring the relaxation rate while varying the distance between them [25] and the chemical potential of the graphene layer [26].", "The QEs used are erbium ions with a transition energy close to the telecommunication wavelength, where the graphene nano-structures can have a plasmonic response for specific chemical potential values.", "Moreover, there is a variety of molecules and quantum emitters operating at infrared wavelengths [27], [28].", "In this paper, we demonstrate the strong coupling between a 5 nm-radius quantum dot 
(QD) and a graphene spherical shell, of the same size, coating the QD as shown in Fig. REF (a).", "We show that, in this way, the strong coupling between the QD and the graphene shell can be achieved even at the single-quantum-emitter level.", "Due to the strong coupling, a splitting of about 80 meV between the two hybrid modes can be obtained, interestingly, even when the spectrum of the QD does not overlap with that of the graphene shell (i.e., when the two are actually off-resonant with each other).", "When the coupling between the two is off-resonant, one of the (tunable) hybrid modes is very narrow compared to the linewidth of the bare graphene (around three times narrower).", "The spectral positions of these hybrid modes can be controlled via tuning the chemical potential of the graphene shell, which can be done by tailoring the Fermi energy level of graphene by the applied bias voltage [30] as shown in Fig. REF (b).", "Beyond demonstrating these effects via exact solutions of the 3D Maxwell equations, i.e., taking the retardation effects into account, we also show that the same effects are already predicted by a simple analytical model.", "We explain the physics, i.e., why such a sharp hybrid mode appears, simply on the basis of the analytical model.", "Achieving such `tunable' narrow linewidth plasmonic (plexcitonic) modes is invaluable for sensing applications.", "One of the hybrid modes has a sharper linewidth, carrying potential for enhanced Figure of Merit (FOM) sensing and, for instance, graphene display technologies [29] with sharper color tuning.", "We remark that the spectral position of a QD (in general a QE) is also tunable via the applied voltage.", "The coupling of light to a QD, however, is an order of magnitude lower compared to a graphene shell - QD hybrid, which also provides a more intense hot-spot.", "Figure: (a) The hybrid structure, a QD coated with a graphene spherical shell.", "(b) In the proposed experimental setup, the hybrid structure is 
placed between the substrate and the AFM tip, with zero and finite electrical potential respectively, to tailor the Fermi energy level of graphene by the applied bias voltage.", "The paper is organized as follows.", "We first present the exact solutions of the 3D Maxwell equations, specifically, the absorption spectra of the graphene spherical shell and the semi-conducting sphere individually, and of the combination of a QE with a graphene spherical shell (the full case), in Sec. .", "Next, we describe the theoretical model and derive an effective Hamiltonian for a two-level system (QE) coupled to GPs in Sec. , where we derive the equations of motion for the suggested structure and obtain a single equation for the steady-state plasmon amplitude.", "A summary appears in Sec. ." ], [ "Electromagnetic simulations of the absorption of a graphene-coated semi-conducting sphere", "When the absorption peak of the QE matches the GP resonance, we observe a splitting in the absorption band due to the interaction of the exciton polariton mode of the semi-conducting sphere with the localized surface GP mode supported by the graphene spherical shell.", "To prove this, we perform electromagnetic simulations using the MNPBEM package [31], solving the Maxwell equations in three dimensions.", "This splitting is connected to the energy exchange between the two modes.", "Due to the large splitting, the system enters the strong coupling regime, where a splitting of $80\\,$ meV between the hybrid modes is observed [32].", "These types of collective modes have also been named plexcitons [33].", "We stress that a QE coated with a graphene spherical shell has been experimentally demonstrated [34].", "In this section, we start by presenting the mathematical framework and the expressions that give the dielectric permittivity of the graphene spherical shell and of the semi-conducting QE.", "Next we present results regarding the absorption spectrum of the graphene 
spherical shell, the QE, and the full case of the QE with a graphene spherical shell coating.", "Figure: Absorption spectrum of the graphene spherical shell, varying the excitation wavelength.", "We keep fixed the value of the chemical potential, $\\mu =1\\,$ eV, of the graphene spherical shell, for different values of its radius, $R=2.5\\,$ nm, $5\\,$ nm, $10\\,$ nm and $15\\,$ nm.", "The optical response of graphene is given by its in-plane surface conductivity, $\\sigma $ , in the random phase approximation [35], [36].", "This quantity is mainly determined by electron-hole pair excitations, which can be divided into intraband and interband transitions, $ \\sigma =\\sigma _{\\text{intra}}+\\sigma _{\\text{inter}} $ .", "It depends on the chemical potential ($\\mu $ ), the temperature ($T$ ), and the scattering energy ($E_{S}$ ) [37].", "The intraband term $\\sigma _{\\text{intra}}$ describes a Drude-type response, corrected for scattering by impurities through a term containing $\\tau $ , the relaxation time.", "The relaxation time, $\\tau $ , causes the plasmons to acquire a finite lifetime and is influenced by several factors, such as collisions with impurities, coupling to optical phonons and finite-size effects.", "In this paper, we assume that $T=300\\,\\text{K}$ and $\\tau =1\\,\\text{ps}$ .", "In addition, we vary the value of the chemical potential [9], $\\mu $ , for active tuning of GPs.", "Figure: Extinction/absorption spectrum of the graphene spherical shell, varying the excitation wavelength.", "We keep fixed the radius, $R=5\\,$ nm, of the graphene spherical shell, for different values of the chemical potential, $\\mu =0.2\\,$ eV, $0.4\\,$ eV, $0.6\\,$ eV, $0.8\\,$ eV and $1.0\\,$ eV.", "In Fig. REF and Fig. REF , we present the extinction spectrum of the graphene spherical shell under plane wave illumination.", "In both figures we observe a peak in the extinction spectrum; this peak is due to the excitation of the localized surface plasmon (LSP) mode 
supported by the graphene spherical shell.", "In particular, the LSP resonance frequency is given as a solution of the equation [18]: $\\frac{i\\epsilon \\omega _{l}}{2\\pi \\sigma \\left(\\omega _{l}\\right)}=\\left(1+\\frac{1}{2l+1}\\right)\\frac{l}{R},$ where $R$ is the radius of the graphene spherical shell, $\\epsilon $ is the dielectric permittivity of the surrounding medium and the space inside the graphene spherical shell, and $l$ is the resonance eigenvalue which is connected with the expansion order.", "Here, we focus on graphene spherical shell radii such that $R\\ll \\lambda $ , where $\\lambda $ is the excitation wavelength; thus we focus on the dipole mode $l=1$ .", "Since $R\\ll \\lambda $ , the extinction and the absorption have essentially the same value (we disregard scattering).", "Moreover, the LSP resonance depends on the intraband contributions of the surface conductivity, which, in the limit $\\mu /\\hbar \\omega \\gg 1$ and ignoring the plasmon lifetime, reduces to $\\sigma (\\omega )=4ia\\mu /\\hbar \\omega $ .", "Then, the LSP resonance wavelength ($\\lambda _{1}$ ) has the value: $\\lambda _{1}=2\\pi c\\sqrt{\\frac{\\hbar \\varepsilon }{\\pi a\\mu }\\frac{1}{12}R}.$ In boundary element simulations, using MNPBEM [31], the graphene spherical shell is modeled as a thin layer of thickness $d=0.5\\,\\text{nm}$ , with a dielectric permittivity [38], $\\epsilon (\\omega )$ : $\\epsilon (\\omega )=1+\\frac{4\\pi \\sigma (\\omega )}{\\omega d},$ where the surface conductivity is given by Eq. REF  [9].", "Figure: Absorption spectra of the QE coated with the graphene spherical shell with respect to excitation wavelength.", "Different values of the chemical potential are employed while the QE transition energy is kept constant at $\\lambda _{eg}=1550\\,$ nm.", "More details on simulation parameters are given in the inset.", "In Fig. REF , we present the absorption spectrum of the graphene spherical shell in the near-IR region, considering different values 
of its radius $R=2.5\\,$ nm, $5\\,$ nm, $10\\,$ nm and $15\\,$ nm.", "We consider a fixed value for the chemical potential, $\\mu =1\\,$ eV, and observe that by increasing the radius of the graphene spherical shell the surface plasmon resonance is red-shifted, as predicted by Eq. REF .", "The dipole surface plasmon resonance from Fig. REF for $R=10\\,$ nm is $2190\\,$ nm and from numerically solving Eq. REF it is $2120\\,$ nm, validating our approach.", "Moreover, the absorption strength increases with the graphene spherical shell radius.", "In Fig. REF , we present the extinction spectrum of the graphene spherical shell, for fixed radius $R=5\\,$ nm, for different values of the chemical potential, $\\mu =0.2\\,$ eV, $0.4\\,$ eV, $0.6\\,$ eV, $0.8\\,$ eV and $1.0\\,$ eV.", "As the value of the chemical potential increases, the GP resonance is shifted to lower wavelengths, as expected from Eq. REF .", "The physical explanation for such behavior is that the optical gap increases as the chemical potential value increases; thus the surface plasmon resonance blue-shifts.", "To explore the effect of coupling, we place a QD (QE) inside the graphene spherical shell.", "The optical properties of the QE are also described through its absorption spectrum.", "We stress that we do not take into account the emission of the QE itself.", "The response of a QD or QE can be safely modeled by a Lorentzian dielectric function [39], [40].", "$\\epsilon _{eg}(\\omega )=\\epsilon _{\\infty }-f\\frac{\\omega _{eg}^{2}}{\\omega ^{2}-\\omega _{eg}^{2}+i\\gamma _{_{eg}}\\omega },$ where $\\epsilon _{\\infty }$ is the bulk dielectric permittivity at high frequencies, $f$ is the oscillator strength [41], [42] and $\\gamma _{eg}$ is the transition line-width, which is connected to the quality of the QE.", "$\\omega _{eg}$ corresponds to the transition energy from the excited to the ground state of the QE.", "As the sphere is composed of a semi-conducting material, it supports 
localized exciton polariton modes.", "The sphere sizes considered in this paper are much smaller than the excitation wavelength, and only the dipole exciton resonance is excited.", "In the electrostatic limit, the condition for exciting the dipole localized exciton resonance is given by ${\\rm Re}\\left(\\epsilon _{eg}(\\omega )\\right)=-2\\epsilon $ , where $\\epsilon $ is the dielectric permittivity of the surrounding medium; in this paper we consider $\\epsilon =1$ .", "From this resonance condition it becomes apparent that changing the radius of the semi-conducting sphere does not influence its resonance wavelength, as long as $R\\ll \\lambda $ .", "On the other hand, as the level spacing of the QE changes, the position of the dipole localized exciton resonance shifts accordingly.", "Figure: Absorption spectra of the QE coated with the graphene spherical shell with respect to excitation wavelength.", "A fixed value for the chemical potential is taken as $\\mu =1.2\\,$ eV, while different values of the transition energy of the QE are considered.", "More details in the inset.", "In Fig. REF , we consider the full case in which the QE is coated by the graphene spherical shell.", "We simulate the absorption of the combined system in the same spectral region.", "We start in Fig. REF by considering the effect of the value of the chemical potential, $\\mu $ , in the absorption of the combined system, where the value of the transition energy of the QE is fixed at $\\lambda _{eg}=1550\\,$ nm.", "For the chemical potential $\\mu =1\\,$ eV the splitting in the absorption spectrum is $\\hbar \\Omega =84\\,$ meV, where the localized exciton mode is clearly off-resonant with the surface plasmon mode.", "This means that the interaction between GP and exciton modes is still in the strong coupling regime.", "In addition, the initial splitting blue-shifts as the value of the chemical potential $\\mu $ increases.", "In Fig. REF , we present the absorption of the 
QE coated with the graphene spherical shell, for $\\mu =1.2\\,$ eV and sphere radius $R=5\\,$ nm.", "We consider different values of the transition energy of the QE, $\\lambda _{eg}$ .", "We observe that by increasing the value of $\\lambda _{eg}$ the resonance of the exciton polariton mode red-shifts; similarly, the splitting in the extinction/absorption of the combined QE core-graphene spherical shell nanosystem also red-shifts.", "Even for $\\lambda _{eg}=1400\\,$ nm, where the exciton polariton and the GP modes are highly off-resonant, we still observe the plexcitonic modes, which indicates the existence of strong coupling between the nanostructures.", "In the following section, we explain this in more detail with a simple analytical model.", "Figure: The scaled absorption intensity of the GP ($|\\alpha _{_{GP}}|^2$ ) as a function of excitation wavelength $\\lambda $ , obtained from Eq. (-).", "(a) In the absence (black-solid) and in the presence of the QE having resonance at $\\lambda _{eg}=1535$ nm (dark gray-dotted), $\\lambda _{eg}=1500$ nm (blue-dashed) and $\\lambda _{eg}=1400$ nm (red-dashed-dotted) for a fixed coupling strength, $\\Omega _R=0.05\\,\\omega _{_{GP}}$ .", "Variation of the resonance intensity of the GP with excitation wavelength $\\lambda $ and coupling strength $\\Omega _R$ for (b) $\\lambda _{eg}=1535$ nm and (c) $\\lambda _{eg}=1400$ nm.", "Here we use $\\gamma _{_{GP}}=0.005\\,\\omega _{_{GP}}$ and $\\gamma _{eg}=10^{-5}\\,\\omega _{_{GP}}$ ."
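As a quick numerical cross-check of the scalings discussed in this section, the following minimal Python sketch (our illustration, not the authors' code) solves the $l=1$ LSP resonance condition of Eq. REF in closed form, keeping only the Drude (intraband) term of the conductivity; here we assume the Gaussian-units form $\sigma (\omega )=i\alpha c\mu /(\pi \hbar \omega )$ with $\alpha $ the fine-structure constant, and the helper name `lsp_wavelength` is ours:

```python
import math

hbar  = 1.054572e-34    # reduced Planck constant, J s
c     = 2.997925e8      # speed of light, m / s
alpha = 1 / 137.036     # fine-structure constant (dimensionless)
eV    = 1.602177e-19    # 1 eV in J

def lsp_wavelength(R, mu, eps=1.0, l=1):
    """LSP wavelength (m) of a graphene spherical shell of radius R (m)
    at chemical potential mu (J), from the resonance condition
      i*eps*w / (2*pi*sigma(w)) = (1 + 1/(2l+1)) * l / R
    with the Drude-only conductivity sigma(w) = i*alpha*c*mu / (pi*hbar*w)."""
    geom = (1.0 + 1.0 / (2 * l + 1)) * l / R
    # substituting sigma gives  eps*hbar*w^2 / (2*alpha*c*mu) = geom
    w = math.sqrt(2 * alpha * c * mu * geom / (eps * hbar))
    return 2 * math.pi * c / w

lam = lsp_wavelength(R=10e-9, mu=1.0 * eV)
print(f"lambda_1(R=10 nm, mu=1 eV) ~ {lam * 1e9:.0f} nm")
```

For $R=10\,$ nm and $\mu =1\,$ eV this yields roughly $2\,\mu $ m, consistent with the $\approx 2120\,$ nm numerical value quoted above, and it reproduces the red-shift with increasing $R$ and the blue-shift with increasing $\mu $ seen in the figures.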
], [ "The analytical model", "Here, we write the effective Hamiltonian for the GPs coupled to a QE and derive the equations of motion.", "We consider the QE as a two level system [39] with level spacing $\\omega _{eg}=2\\pi c/ \\lambda _{eg}$ .", "In the steady state, we obtain a single equation.", "We show that by using this equation one can have a better understanding on the parameters of the combined system.", "We consider the dynamics of the total system as follows.", "The incident light ($\\varepsilon _{_L}$ ) with optical frequency $ \\omega =2\\pi c/ \\lambda $ excites a GP ($ \\hat{a}_{_{GP}}$ ), which is coupled to a QE.", "The Hamiltonian of the system can be written as the sum of the energy of the QE and GP ($\\omega _{_{GP}}=2\\pi c/ \\lambda _{_{GP}}$ ) oscillations ($\\hat{H}_0 $ ) and the energy transferred by the pump source ($\\hat{H}_{L}$ ) $\\hat{H}_0&=&\\hbar \\omega _{_{GP}} \\hat{a}_{_{GP}}^\\dagger \\hat{a}_{_{GP}}+\\hbar \\omega _{eg} |e \\rangle \\langle e|\\\\\\hat{H}_{L}&=&i\\hbar (\\varepsilon _{_L} \\hat{a}_{_{GP}}^\\dagger e^{-i\\omega t} -h.c)$ and the interaction of the QE with the GP modes ($ \\hat{H}_{int}$ ) $\\hat{H}_{int}&=&\\hbar \\lbrace \\Omega _R^\\ast \\hat{a}_{_{GP}}^\\dagger |g\\rangle \\langle e|+ \\Omega _R |e\\rangle \\langle g| \\hat{a}_{_{GP}} \\rbrace ,$ where the parameter $ \\Omega _R $ , in units of frequency, is the coupling strength between GP and the QE.", "$|g\\rangle $  ($|e \\rangle $ ) is the ground (excited) state of the QE.", "In the strong coupling limit, one needs to consider counter-rotating terms in the interaction Hamiltonian [43], but there is still no analytically exact solution [44].", "Instead of pursuing a full consideration, left for future work, we demonstrate here RWA, giving consistent results for the structure considered in this work.", "Moreover, we are interested in intensities but not in the correlations, so we replace the operators $\\hat{a}_i$ and $ \\hat{\\rho }_{ij}= |i\\rangle 
\\langle j|$ with complex number ${\\alpha }_i$ and $ {\\rho }_{ij} $  [45] respectively and the equations of motion can be obtained as $\\dot{{\\alpha }}_{_{GP}}&=-(i\\omega _{_{GP}}+\\gamma _{_{GP}}) {\\alpha }_{_{GP}}-i \\Omega _R^\\ast {{\\rho }}_{ge}+\\varepsilon _{_L} e^{-i\\omega t} ,\\\\\\dot{{\\rho }}_{ge} &= -(i \\omega _{eg}+\\gamma _{eg}) {\\rho }_{ge}+i \\Omega _R {\\alpha }_{_{GP}}({\\rho }_{ee}-{{\\rho }}_{gg}) ,\\\\\\dot{{{\\rho }}}_{ee} &= -\\gamma _{ee} {{\\rho }}_{ee}+i \\bigl \\lbrace \\Omega _R^\\ast {\\alpha }^\\ast _{_{GP}} {{\\rho }}_{ge}- \\textit {c.c} \\bigr \\rbrace ,$ where $ \\gamma _{_{GP}} $ and $ \\gamma _{eg} $ are the damping rates of the GP mode and of the off-diagonal density matrix elements of the QE, respectively.", "The values of the damping rates are considered as the same with previous section.", "The conservation of probability ${\\rho }_{ee}+{\\rho }_{gg}=1$ with the diagonal decay rate of the QE $ \\gamma _{ee} = 2 \\gamma _{eg} $ accompanies Eqs.", "(REF -).", "In the steady state, one can define the amplitudes as ${\\alpha }_{_{GP}}(t)&=&\\tilde{\\alpha }_{_{GP}} e^{-i\\omega t}, \\qquad {\\rho }_{ge}(t)= \\tilde{\\rho }_{ge} e^{-i\\omega t}, $ where $\\tilde{\\alpha }_{_{GP}}$ and $ \\tilde{\\rho }_{ge} $ are constant in time.", "By inserting Eq.", "(REF ) into Eqs.", "(REF -), the steady-state solution for the GP mode can be obtained as $\\tilde{\\alpha }_{_{GP}}=\\frac{\\varepsilon _{_L} [i(\\omega _{eg}-\\omega )+\\gamma _{eg}]}{(\\omega -\\Omega _+)(\\omega -\\Omega _-)+i\\Gamma (\\omega )},$ where $ \\Omega _{\\pm }=\\delta _+ \\pm \\sqrt{\\delta _-^2-|\\Omega _R|^2y+\\gamma _{eg}\\gamma _{_{GP}}}$ defines hybrid mode resonances [46] and $\\Gamma (\\omega )=[\\gamma _{eg}(\\omega _{_{GP}}-\\omega )+\\gamma _{_{GP}}(\\omega _{eg}-\\omega )]$ with $ \\delta _\\pm =(\\omega _{_{GP}} \\pm \\omega _{eg})/2 $ and population inversion $ y=\\rho _{ee}-\\rho _{gg} $ terms.", "It is important to note that the results 
presented in Fig.", "REF and Fig.", "REF are the exact solutions of Eqs.", "(REF -).", "We study the steady-state in Eq.", "(REF ) to gain a better understanding of the parameters and avoid time-consuming electromagnetic 3D simulations of the combined system.", "Moreover, we hereafter calculate the intensity of the GP mode in Eq.", "(REF ), which is related to the absorption from the nanostructure [34], to compare the results with the electromagnetic 3D simulations.", "To find the modulation of the intensities of the hybrid modes in the presence of the QE, we use different resonance values of the QE, $ \lambda _{eg}=2\pi c/\omega _{eg} $ = 1535 nm, 1500 nm, and 1400 nm in Fig.", "REF a.", "The results agree quantitatively with the numerical simulations in Fig.", "REF , which take retardation effects into account.", "We also show the evolution of the hybrid modes by varying the interaction strength $ |\Omega _R| $ for zero detuning ($ \delta _-=0 $ ) in Fig.", "REF b, and for the highly off-resonant case in Fig.", "REF c. 
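The steady-state expression for $\tilde{\alpha }_{_{GP}}$ can indeed be evaluated numerically in a few lines, which is one way to explore the parameters without 3D simulations. Below is a minimal sketch, assuming the weak-excitation limit $y\approx -1$ and illustrative rates in units of $\omega _{_{GP}}$ ; all function and variable names are ours, not from the text:

```python
def gp_intensity(w, omega_r, w_gp=1.0, w_eg=1.0, g_gp=0.005, g_eg=1e-5, eps_l=1.0):
    """|alpha_GP|^2 from the steady state: alpha = eps_L*chi_eg/(chi_gp*chi_eg + |Omega_R|^2)."""
    chi_eg = 1j * (w_eg - w) + g_eg   # i(w_eg - w) + gamma_eg
    chi_gp = 1j * (w_gp - w) + g_gp   # i(w_GP - w) + gamma_GP
    return abs(eps_l * chi_eg / (chi_gp * chi_eg + omega_r ** 2)) ** 2

ws = [0.9 + 0.2 * k / 4000 for k in range(4001)]
bare = [gp_intensity(w, 0.0) for w in ws]       # graphene shell alone
coupled = [gp_intensity(w, 0.05) for w in ws]   # with the QE, zero detuning

assert coupled[2000] < bare[2000]               # transparency dip at w = w_GP = w_eg
peak = max(range(len(ws)), key=lambda k: coupled[k])
assert abs(abs(ws[peak] - 1.0) - 0.05) < 0.01   # hybrid peaks near Omega_plus/minus
```

At zero detuning the single bare Lorentzian is replaced by a dip and two hybrid peaks separated by roughly $2|\Omega _R|$ , consistent with the splitting discussed in the text.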
The strong coupling regime is reached if $ \Omega _R^2 \ > (\gamma _{_{GP}}^2+\gamma _{eg}^2)/2 $  [47], that is, when the coupling strength exceeds the dephasing rates.", "When the QE and GP are resonant [see Fig.", "REF b] a dip starts to appear around $ |\Omega _R| \approx \gamma _{_{GP}} $ .", "This can also be read from Eq.", "(REF ).", "That is, when $\omega _{eg}=\omega _{_{GP}}=\omega $ , Eq.", "(REF ) becomes $\tilde{\alpha }_{_{GP}}\propto \gamma _{eg}/(|\Omega _R|^2y+\gamma _{_{GP}}\gamma _{eg}) $ .", "Since $ \gamma _{eg} $ is much smaller than the other rates, with increasing $ |\Omega _R| $ , $ \tilde{\alpha }_{_{GP}} $ becomes smaller compared to the case without the QE.", "Beyond the point where the transparency window appears [39], two distinct peaks emerge, centered at the frequencies $ \Omega _\pm $ , and the separation becomes larger as $ \Omega _R $ increases.", "This argument is not valid when the GP and the level spacing of the QE are highly off-resonant.", "In this case, to make the second peak significant, the interaction strength has to be much larger than $ \gamma _{_{GP}} $  [see Fig.", "REF c].", "The dip can be seen at $ \omega _{eg} $ , which lies outside the GP resonance window and may not be useful for practical applications.", "However, the sharp peak arising from strong coupling between off-resonant particles can be very useful for sensing applications, since it has a smaller line-width and can be tuned by changing the chemical potential.", "To show this, in Fig.", "REF , we plot the evolution of the field intensity of the GP ($ |\alpha _{_{GP}}|^2 $ ) as a function of excitation wavelength $\lambda $ and GP resonance $\lambda _{_{GP}}$ , when the graphene shell is alone [Fig.", "REF a] and with the QE [Fig.", "REF b].", "It can be seen from Fig.", "REF b that it is possible to control the positions and line-widths of the hybrid resonances by adjusting $\mu $ .", "A similar behavior is also obtained 
in the MNPBEM simulation [see Fig.", "REF ].", "Figure: The scaled field intensity of the GP ($|\alpha _{_{GP}}|^2$ ) as a function of excitation wavelength $\lambda $ and GP resonance $\lambda _{_{GP}}$ , when the graphene spherical shell is alone (a) and with the QE (b).", "We scale the GP intensity with its maximum value and use the parameters $\Omega _R$ = 0.1 $\omega _{eg}$ , $\gamma _{_{GP}}=0.01$ $\omega _{eg}$ and $\gamma _{eg}=10^{-5}$ $\omega _{eg}$ ." ], [ "Summary", "In summary, we investigate the optical response of the GPs for the spherical shell geometry in the presence and absence of the QE.", "We show that the optical response of the graphene spherical shell can be tuned by changing the value of the chemical potential and its radius.", "For the combined system (the QE covered with a graphene layer) we observe a splitting in the absorption band.", "This is due to the strong coupling regime, where splittings of up to $80\,$ meV are observed in the single-QE limit.", "We also discuss the case when the QE and GP are off-resonant, and observe that the system can still sustain strong coupling.", "The results of the theoretical model presented here support the exact solutions of the 3D Maxwell equations obtained from MNPBEM simulations.", "Our results show that the chemical potential and the coupling strength can be used to tune the extinction spectrum of the nanocomposite very effectively.", "Tuning of the chemical potential can be induced by use of an electrolytic cell or, alternatively, by an electrical nanocontact through a scanning probe microscope tip as shown in Fig.", "1.", "The same tip can also be used to mechanically alter the graphene spherical shell-QE layout, modifying the coupling strength of the two counterparts through mechanical distortion of the graphene spherical shell, and hence serves as another tuning mechanism.", "We expect our results to contribute to controlling 
light-matter interactions at the nanometer scale and to find potential applications ranging from all-optical nonlinear switching devices to sensing, within current experimental fabrication capabilities.", "Extreme field confinement, device tunability and low losses make such structures even more attractive for future studies." ], [ "Acknowledgments", "$^\ddagger $ Contributed equally.", "This research was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) Grant No.", "117F118.", "MET, AB and RS acknowledge support from TUBITAK 1001-119F101." ] ]
1906.04434
[ [ "The Fifth International Students' Olympiad in Cryptography -- NSUCRYPTO:\n problems and their solutions" ], [ "Abstract Problems and their solutions of the Fifth International Students' Olympiad in cryptography NSUCRYPTO'2018 are presented.", "We consider problems related to attacks on ciphers and hash functions, Boolean functions, quantum circuits, Enigma, etc.", "We discuss several open problems on orthogonal arrays, Sylvester matrices and disjunct matrices.", "The problem of the existence of an invertible Sylvester matrix whose inverse is again a Sylvester matrix was completely solved during the Olympiad." ], [ "Introduction", "NSUCRYPTO — The International Students' Olympiad in cryptography — celebrated its 5-year anniversary in 2018.", "Interest in the Olympiad around the world is significant: there were more than 1600 participants from 52 countries in the first five Olympiads from 2014 to 2018!", "The Olympiad program committee includes specialists from Belgium, France, The Netherlands, USA, Norway, India, Belarus, and Russia.", "Let us briefly describe the format of the Olympiad.", "One of the Olympiad's main ideas is that everyone can participate!", "Each participant chooses his/her category when registering on the Olympiad website nsucrypto.nsu.ru.", "There are three categories: “school students” (for junior researchers: pupils and high school students), “university students” (for participants who are currently studying at universities) and “professionals” (for participants who have already completed education or just want to be in the restriction-free category).", "Awarding of the winners is held in each category separately.", "The Olympiad consists of two independent Internet rounds: the first one is individual (duration 4 hours 30 minutes) while the second round is team (duration 1 week).", "The first round is divided into two sections: A — for “school students”, B — for “university students” and “professionals”.", "The second round is general for all 
participants.", "Participants read the Olympiad problems and submit their solutions using the Olympiad website.", "The language of the Olympiad is English.", "The Olympiad participants are always interested in solving different problems of various complexities at the intersection of mathematics and cryptography.", "They show their knowledge, creativity and professionalism.", "That is why the Olympiad not only includes interesting tasks with known solutions but also offers unsolved problems in this area.", "This year, one such open problem, “Sylvester matrices”, was completely solved by three teams!", "All the open problems stated during the Olympiad history can be found at nsucrypto.nsu.ru/unsolved-problems.", "On the website we also mark the current status of each problem.", "For example, in addition to “Sylvester matrices” solved in 2018, the problem “Algebraic immunity” was completely solved during the Olympiad in 2016.", "And, importantly for us, some participants kept trying to find solutions after the Olympiad was over.", "For example, a partial solution for the problem “A secret sharing” (2014) was proposed in [7].", "We invite everybody who has ideas on how to solve the problems to send their solutions to us!", "The paper is organized as follows.", "We start with the problem structure of the Olympiad in section .", "Then we present formulations of all the problems stated during the Olympiad and give their detailed solutions in section .", "Finally, we publish the lists of NSUCRYPTO'2018 winners in section .", "Mathematical problems of the previous International Olympiads NSUCRYPTO'2014, NSUCRYPTO'2015, NSUCRYPTO'2016, and NSUCRYPTO'2017 can be found in [2], [1], [11], and [8] respectively." 
], [ "Problem structure of the Olympiad", "There were 16 problems stated during the Olympiad, some of which were included in both rounds (Tables REF , REF ).", "Section A of the first round consisted of six problems, whereas section B contained seven problems.", "Three problems were common for both sections.", "The second round was composed of eleven problems.", "Three problems of the second round were marked as unsolved (awarded special prizes from the Program Committee).", "Table: Problems of the first round", "Table: Problems of the second round" ], [ "Problems and their solutions", "In this section we formulate all the problems of NSUCRYPTO'2018 and present their detailed solutions, paying attention to solutions proposed by the participants." ], [ "Formulation", "Alice uses a new digital signature algorithm that turns a text message $M$ into a pair $(M,s)$ , where $s$ is an integer generated in the following way: [noitemsep] 1.", "The special function $h$ transforms $M$ into a big positive integer $r=h(M)$ .", "2.", "The number $t=r^2$ is calculated, where $t=\overline{t_1 t_2 \ldots t_n}$ .", "3.", "The signature $s$ is calculated as $s=t_1+t_2+\ldots +t_n$ .", "Bob obtained the signed message (Congratulations on the fifth year anniversary of NSUCRYPTO!, 2018) from Alice and immediately recognized that something was wrong with the signature!", "How did he discover it?", "Remarks.", "By $t = \overline{t_1t_2\ldots t_n}$ we mean that $t_1, t_2, \ldots ,t_n$ are decimal digits and all digits under the bar form the decimal number $t$ ."
], [ "Solution", "It is widely known that every integer is congruent to the sum of its digits modulo 3.", "So, we have that $t \equiv _3 2018 \equiv _3 2.$ But $t$ is equal to $r^2$ and a square cannot be congruent to 2 modulo 3.", "Thus, we have a contradiction.", "We received many correct solutions.", "The most accurate and detailed solutions were sent by Ruxandra Icleanu (Tudor Vianu National College of Computer Science, Romania), Petr Ionov (Yaroslavl State University, Russia), and the team of Henning Seidler and Katja Stumpp (TU Berlin, Germany)." ], [ "Formulation", "Little Jack is only seven years old and likes solving riddles involving the powers of two.", "Recently, his uncle Bitoshi gave him 16 BeanCoin seeds and promised that Jack can collect all BeanCoins which will grow from these seeds.", "But in order for BeanCoins to grow big and fruitful, Jack must plant the seeds in the garden in a special way.", "He has to draw eight lines on the ground and plant all 16 seeds on these lines in such a way that each of the lines contains exactly four seeds.", "Can you help Jack to achieve his goal and suggest how to plant the seeds?" ], [ "Solution", "The seeds can be placed on the corners and intersection points of an octagram, as depicted in Figure REF (a).", "As is clear from this figure, all eight lines contain exactly four seeds, and it is impossible to draw another line containing exactly four seeds.", "Many school students found interesting ways to draw these lines, for example Figure REF  (b).", "The most interesting ones were given by Gorazd Dimitrov (Yahya Kemal College, Macedonia), Artem Ismagilov (The Specialized Educational and Scientific Center UrFU, Russia), and Igor Pastushenko (The Specialized Educational Scientific Center of Novosibirsk State University, Russia)." 
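The modulo-3 argument in the digital signature problem rests on two elementary facts that are easy to confirm exhaustively. A small sanity-check sketch (the helper name is ours):

```python
def digit_sum(t: int) -> int:
    return sum(int(d) for d in str(t))

# A square is congruent to 0 or 1 modulo 3, never to 2 ...
assert {(r * r) % 3 for r in range(1000)} == {0, 1}

# ... and the digit sum of an integer has the same residue modulo 3
assert all(digit_sum(t) % 3 == t % 3 for t in range(1, 10000))

# The claimed signature is 2018, whose residue modulo 3 is 2 -- impossible
assert 2018 % 3 == 2
```

So no value of $r$ can produce the signature 2018, whatever the function $h$ is.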
], [ "Formulation", "Let $n$ be an odd positive integer.", "In some cipher, a key is a binary $n\times n$ matrix $A = \left(\begin{array}{cccc}a_{1,1} & a_{1,2} & \dots & a_{1,n} \\a_{2,1} & a_{2,2} & \dots & a_{2,n} \\\vdots & \vdots & \ddots & \vdots \\a_{n,1} & a_{n,2} & \dots & a_{n,n} \\\end{array}\right),$ where $a_{i,j}$ is either 0 or 1, such that each diagonal of any length $1, 2, \ldots , n-1, n$ contains an odd number of 1s.", "What is the minimal and the maximal number of 1s that can be placed in a key matrix $A$ ?", "Remarks.", "For example, for $n=3$ , diagonals are the following ten lines: Figure: NO_CAPTION" ], [ "Solution", "The correct solution of this problem must consist of two steps.", "The first step is to find theoretical lower and upper bounds for the number of 1s, and the second step is to prove that these bounds are tight.", "The best solution was proposed by Aleksei Udovenko (University of Luxembourg); we provide it below.", "1.", "Minimum.", "Consider the $n \times n$ matrix $A$ ($n$ is odd) with both the top row filled with 1s, the bottom row filled with 1s and the central cell equal to 1; all other elements are 0: ${\left\lbrace \begin{array}{ll}a_{1,i}=1, & 1 \leqslant i \leqslant n;\\ a_{n,i}=1, & 1 \leqslant i \leqslant n;\\ a_{(n+1)/2,(n+1)/2}=1 ;\\a_{i,j}=0, & \mbox{otherwise}.\end{array}\right. }$ Any diagonal of length less than $n$ includes exactly a single 1 (either from the top row or from the bottom row).", "The two diagonals of length $n$ include three 1s (one from the top row, one from the bottom row and one from the center).", "Therefore, this matrix satisfies the condition.", "It has $2n+1$ 1s.", "We now prove that this number of 1s is minimal.", "Note that each corner cell $a_{1,1}$ , $a_{1,n}$ , $a_{n,1}$ , $a_{n,n}$ forms a single-element diagonal.", "Therefore, these cells must contain 1s.", "There are $2(n-2)$ diagonals going in the down-right direction and not touching the 
corners (starting from the cells of the leftmost column and from the cells for the topmost row).", "Furthermore, the main diagonal without the corner cells must have odd number of 1s too.", "Therefore, $2n-3$ disjoint diagonals must contain at least one 1, in addition to 4 corner 1s.", "Therefore, there should be at least $2(n-2) + 1 + 4 = 2n + 1$ 1s in the matrix.", "2.", "Maximum.", "Consider the $n \\times n$ matrix $A$ ($n$ is odd) filled with 1s except cells in the leftmost and the rightmost columns which have an even row index: ${\\left\\lbrace \\begin{array}{ll}a_{2i,1}=0, & 1 \\leqslant i \\leqslant (n-1)/2;\\\\ a_{2i,n}=0, & 1 \\leqslant i \\leqslant (n-1)/2;\\\\a_{i,j}=1, & \\mbox{otherwise}.\\end{array}\\right.", "}$ It is easy to check that all diagonals that contain an even number of elements contain a single zero either from the leftmost or from the rightmost column.", "Therefore, these diagonals have an odd number of 1s.", "Also, all diagonals that contain an odd number of elements contain no zeroes and thus have an odd number of 1s too.", "Therefore, this matrix satisfies the condition.", "It has $n^2-2(n-1)/2 = n^2-n + 1$ 1s.", "We now prove that this number is maximal.", "Consider diagonals going in the down-right direction that have an even number of elements.", "There are $2(n-1)/2 = (n-1)$ such diagonals and they are disjoint.", "Each of them must contain at least a single zero.", "Therefore, the maximum number of 1s is $n^2-n+1$ ." 
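Both constructions from the solution (the minimum with $2n+1$ ones and the maximum with $n^2-n+1$ ones) can be verified by brute force for small odd $n$ . A sketch using 0-based indices; the helper names are ours:

```python
def diag_sums(m):
    n = len(m)
    # down-right diagonals (i - j constant) and down-left diagonals (i + j constant)
    dr = [sum(m[i][j] for i in range(n) for j in range(n) if i - j == c)
          for c in range(-(n - 1), n)]
    dl = [sum(m[i][j] for i in range(n) for j in range(n) if i + j == c)
          for c in range(2 * n - 1)]
    return dr + dl

def min_matrix(n):
    m = [[0] * n for _ in range(n)]
    m[0] = [1] * n                      # top row of 1s
    m[n - 1] = [1] * n                  # bottom row of 1s
    m[(n - 1) // 2][(n - 1) // 2] = 1   # central cell
    return m

def max_matrix(n):
    # all 1s except the even rows (1-based) of the leftmost and rightmost columns
    return [[0 if j in (0, n - 1) and i % 2 == 1 else 1 for j in range(n)]
            for i in range(n)]

for n in (3, 5, 7, 9):
    lo, hi = min_matrix(n), max_matrix(n)
    assert all(s % 2 == 1 for s in diag_sums(lo))  # every diagonal has an odd sum
    assert all(s % 2 == 1 for s in diag_sums(hi))
    assert sum(map(sum, lo)) == 2 * n + 1
    assert sum(map(sum, hi)) == n * n - n + 1
```

The check enumerates all $2(2n-1)$ diagonals of both directions, exactly the lines shown in the remark for $n=3$ .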
], [ "Formulation", "Two friends, Roman and Anton, are very interested in sequences and ciphers.", "Their new cryptosystem encrypts binary messages of length $n$ , $X = (x_1,x_2,\\ldots ,x_n)$ , where each $x_i$ is either 0 or 1.", "A key $K$ of the cipher is a set of $n$ integers $a_1,a_2,\\ldots , a_n$ .", "The ciphertext $Y$ for the message $X$ encrypted with the key $K$ is the integer $Y = x_1\\cdot a_1 + x_2\\cdot a_2 + \\ldots + x_n\\cdot a_n.$ Roman and Anton change their key regularly.", "Today, the key $K$ is defined by $a_i = 2^i + (-1)^i \\ \\text{ for all } i = 1,\\ldots ,n.$ The friends can easily decipher any message using the key defined by this sequence for any $n$ !", "Prove that the encryption is correct for this key $K$ for any $n$ : there are no two distinct input messages $X^1$ and $X^2$ such that their ciphertexts $Y^1$ and $Y^2$ are equal, i. e. $Y^1 = Y^2$ .", "Describe an algorithm which can be used to easily decipher any ciphertext $Y$ encrypted with today's key $K$ .", "Here “easily” means that the algorithm should work much faster than checking all possible variants for an input message $X$ ." ], [ "Solution", "Let us firstly show that the sequence $\\lbrace a_i\\rbrace $ is superincreasing, i. e. 
$a_{i+1}>\\sum _{k=1}^{i}a_k$ for any $i>0$ .", "Indeed, $\\sum _{k=1}^{i}a_k=\\sum _{k=1}^{i} (2^k + (-1)^k) = 2^{i+1} - 2 + \\sum _{k=1}^i(-1)^k ={\\left\\lbrace \\begin{array}{ll}2^{i+1} - 2, \\text{ if }i\\text{ is even}\\\\2^{i+1} - 3, \\text{ if }i\\text{ is odd}\\\\\\end{array}\\right.", "}<2^{i+1} + (-1)^i = a_{i+1}.$ Let us show that the encryption is correct.", "Let $X^1 = (x^1_1, \\ldots , x^1_n)$ and $X^2 = (x^2_1, \\ldots , x^2_n)$ be two distinct messages, and $i$ is the largest position such that $x^1_i\\ne x^2_i$ .", "Without loss of generality, suppose that $x^1_i = 1$ .", "Then $Y^1 - Y^2 = (x^1_1\\cdot a_1 + \\ldots + x^1_i\\cdot a_i + \\ldots + x^1_n\\cdot a_n) - (x^2_1\\cdot a_1 + \\ldots + x^2_i\\cdot a_i + \\ldots + x^2_n\\cdot a_n)$ $= (x^1_1 - x^2_1) \\cdot a_1 + \\ldots + (x^1_{i-1} - x^2_{i-1}) \\cdot a_{i-1} + a_i > 0$ since $\\lbrace a_i\\rbrace $ is a superincreasing sequence.", "The correctness of the decryption algorithm (Algorithm REF ) is also based on the superincreasing property of $\\lbrace a_i\\rbrace $ .", "The complexity of the algorithm consists of $n$ integer comparisons.", "The decryption algorithm Input: $Y$ , $n$ .", "Output: $X = (x_1,\\ldots ,x_n)$ .", "Step 0.", "$T:=Y$ , $i:=n$ .", "Step 1.", "If $T > a_i$ , then $x_i = 1$ ; else $x_i = 0$ .", "Step 2.", "$T:=T - x_i\\cdot a_i$ , $i:=i-1$ .", "If $i>0$ , go to step 1; else return $X$ .", "The problem was solved by the majority of participants including eight school students." 
], [ "Formulation", "Alice is studying special functions that are used in symmetric ciphers.", "Let $E^n$ be the set of all binary vectors $x = (x_1, x_2,\\ldots ,x_n)$ of length $n$ , where $x_i$ is either 0 or 1.", "Given two vectors $x$ and $y$ from $E^n$ consider their sum $x \\oplus y=(x_1\\oplus y_1,\\ldots , x_n\\oplus y_n)$ , where $\\oplus $ is addition modulo 2.", "Example.", "If $n=3$ , then $E^3=\\lbrace (000), (001), (010), (011), (100), (101), (110), (111)\\rbrace $ .", "Let $x=(010)$ and $y= (011)$ , then vector $x \\oplus y$ is equal to $(010)\\oplus (011)=(0 \\oplus 0, 1 \\oplus 1, 0 \\oplus 1)=(001)$ .", "We will say that a function $F$ maps $E^n$ to $E^n$ if it transforms any vector $x$ from $E^n$ into some vector $F(x)$ from $E^n$ .", "Example.", "Let $n=2$ .", "For instance, we can define $F$ that maps $E^2$ to $E^2$ as follows: $F(00)=(00)$ , $F(01)=(10)$ , $F(10)=(11)$ and $F(11)=(10)$ .", "Alice found a function $S$ that maps $E^6$ to $E^6$ in such a way that the vectors $S(x)$ and $S(y)$ are not equal for any nonequal vectors $x$ and $y$ .", "Also, $S$ has another curious property: the equation $S(x) \\oplus S(x\\oplus a) = b$ has either 0 or 2 solutions for any nonzero vector $a$ from $E^6$ and any vector $b$ from $E^6$ .", "Find the number of pairs $(a,b)$ such that this equation has exactly 2 solutions!" ], [ "Solution", "Consider a function $S$ that satisfies the conditions of the problem.", "Let us fix an arbitrary vector $a$ that is nonzero.", "Consider the set $B_a$ of all possible values of $S(x) \\oplus S(x\\oplus a)$ , i. e. 
$B_a=\\lbrace S(x) \\oplus S(x\\oplus a)~|~x \\in E^6\\rbrace $ .", "It holds that $|B_a| = 2^5$ , since $S(x) \\oplus S(x\\oplus a)=S(x\\oplus a) \\oplus S(x\\oplus a\\oplus a)$ .", "Then for every nonzero $a$ there exist $2^5$ values of $b$ , such that $S(x) \\oplus S(x\\oplus a) = b$ has 2 solutions.", "Then the number of pairs is equal to $63*32=2016$ .", "Correct answers were sent by only three school students: Alexey Lvov (Gymnasium 6 of Novosibirsk, Russia), Borislav Kirilov (The First Private Mathematical Gymnasium of Sofia, Bulgaria), and Razvan Andrei Draghici (National College Fratii Buzesti, Romania)." ], [ "Formulation", "Alice and Bob are interested in quantum circuits.", "They studied quantum operations and would like to use them for their simple cipher.", "Let an input plaintext be $P=(p_1,p_2,\\ldots ,p_{16})\\in \\mathbb {F}_2^{16}$ .", "The ciphertext $C\\in \\mathbb {F}_2^{16}$ is calculated as $C=K\\oplus \\big (F(p_1,\\ldots ,p_4),\\ F(p_5,\\ldots ,p_8),\\ F(p_9,\\ldots ,p_{12}),\\ F(p_{13},\\ldots ,p_{16})\\big ),$ where $K\\in \\mathbb {F}_2^{16}$ is a secret key and $F$ is a function from $\\mathbb {F}_2^4$ to $\\mathbb {F}_2^4$ ; $\\oplus $ is bitwise XOR.", "The friends found a representation of $F$ from wires and elementary quantum gates which form a quantum circuit.", "They use Dirac notation and denote computational basis states by $\\left|0\\right\\rangle $ and $\\left|1\\right\\rangle $ .", "Further, quantum bits (qubits) are considered only in quantum states $\\left|0\\right\\rangle $ and $\\left|1\\right\\rangle $ .", "Alice and Bob used the following quantum gates and circuit symbols which are given in Table REF .", "Table: Quantum gates and circuit symbolsA quantum circuit which describes action of $F$ on $x=\\left(x_1,x_2,x_3,x_4\\right)\\in ~\\mathbb {F}_2^4$ , where $F = (f_1,f_2,f_3,f_4)$ and $f_i,i=1,2,3,4,$ are Boolean functions in 4 variables, is the following:  @C=2em @R=1em $\\left|x_1\\right\\rangle $ 1 1 X 1 1 1 f1(x) 
$\\left|x_2\\right\\rangle $ -1 1 -1 X -1 -1 f2(x) $\\left|x_3\\right\\rangle $ -1 -1 1 -1 X 1 f3(x) $\\left|x_4\\right\\rangle $ X -1 -1 -1 X f4(x) The problem.", "The friends encrypted the plaintext $P=(0011010111110010)$ and got the ciphertext $C=(1001101010010010)$ .", "Find the secret key $K$ !" ], [ "Solution", "One can notice that the given circuit can be simplified by observing that the following evolutions  @C=2em @R=1em $\\left|x_1\\right\\rangle $ 1 1 X 1 1 1 f1(x) $\\left|x_2\\right\\rangle $ -1 1 -1 X -1 -1 f2(x) $\\left|x_3\\right\\rangle $ -1 -1 1 -1 X 1 f3(x) $\\left|x_4\\right\\rangle $ X -1 -1 -1 X f4(x)3547.7em– 19211.7em– actually swap two states $\\left|x\\right\\rangle ,\\left|y\\right\\rangle $ , $x,y\\in \\lbrace 0,1\\rbrace $ :  @C=2em @R=1em $\\left|x\\right\\rangle $ 1 1 $\\left|y\\right\\rangle $ $\\left|y\\right\\rangle $ -1 $\\left|x\\right\\rangle $                      @C=2em @R=1em $\\left|x\\right\\rangle $ 1 $\\left|y\\right\\rangle $ $\\left|y\\right\\rangle $ -1 -1 $\\left|x\\right\\rangle $ Both of the evolutions  @C=2em @R=1em $\\left|x_1\\right\\rangle $ 1 1 X 1 1 1 f1(x) $\\left|x_2\\right\\rangle $ -1 1 -1 X -1 -1 f2(x) $\\left|x_3\\right\\rangle $ -1 -1 1 -1 X 1 f3(x) $\\left|x_4\\right\\rangle $ X -1 -1 -1 X f4(x)1527.7em– 39411.7em– have form  @C=2em @R=1em $\\left|x\\right\\rangle $ X 1 $\\left|x\\oplus y\\oplus 1\\right\\rangle $ $\\left|y\\right\\rangle $ -1 X $\\left|x\\right\\rangle $ for $\\left|x\\right\\rangle ,\\left|y\\right\\rangle $ , $x,y\\in \\lbrace 0,1\\rbrace $ .", "The algebraic normal forms of coordinate Boolean functions of $F$ are $f_1(x) &= x_1\\oplus x_2x_3, \\\\f_2(x) &= x_2\\oplus x_1x_4\\oplus x_2x_3x_4\\oplus 1, \\\\f_3(x) &= x_3\\oplus x_4\\oplus x_1x_2\\oplus x_1x_3, \\\\f_4(x) &= x_4\\oplus 1,$ where $x\\in \\mathbb {F}_2^4$ .", "Then $K_{1,...,4} &= C_{1,...,4}\\oplus F\\left(p_1,\\ldots ,p_4\\right)=C_{1,...,4}\\oplus (0100)=(1101), \\\\K_{5,...,8} &= C_{5,...,8}\\oplus F\\left(p_5,\\ldots 
,p_8\\right)=C_{5,...,8}\\oplus (0010)=(1000), \\\\K_{9,...,12} &= C_{9,...,12}\\oplus F\\left(p_9,\\ldots ,p_{12}\\right)=C_{9,...,12}\\oplus (0000)=(1001), \\\\K_{13,...,16} &= C_{13,...,16}\\oplus F\\left(p_{13},\\ldots ,p_{16}\\right)=C_{13,...,16}\\oplus (0111)=(0101),$ and finally, the key is the following: $K=(1101100010010101).$ Many participants coped with this problem and correctly found the key." ], [ "Formulation", "Bob always takes into account all the recommendations of security experts.", "He switched from short passwords to long passphrases and changes them every month.", "Bob usually chooses passphrases from the books he is reading.", "Passphrases are so lengthy and are changed so often!", "In order to not forget them, Bob decided to use stickers with hints.", "He places them on his monitors (ooh, experts...).", "The only hope is that Bob's hint system is reliable because it uses encryption.", "But is that true?", "Could you recover Bob's current passphrase from the photo of his workspace (Figure REF )?", "Figure: Workspace" ], [ "Solution", "Looking at the picture we see three stickers.", "One of them is “A Discourse of Fire and Salt” that represents a title of a book written by Blaise de Vigenère.", "This is the first hint that probably the Viginère cipher was used.", "Then we have a sticker with the ciphertext AJKTUWLWLZYABQYRSLS that consists of 19 letters.", "And finally, we see the sticker with five directed polygonal paths containing a total of 19 vertices.", "These 19 vertices could correspond to the 19 ciphertext letters.", "There is a keyboard at the picture.", "So, we can guess that these arrows could be related to the letters from the keyboard.", "Let us look at the first two keyboard rows (Figure REF ).", "We can recover the secrete key ESWAQRDFTGYHIJUKOLP.", "By deciphering the ciphertext using this key and the Viginère cipher, we get WROTEFIRSTATTHEHEAD.", "Thus, Bob's current passphrase is “Wrote first at the head”.", "Figure: 
Keyboard rows.", "Surprisingly, nobody solved this problem in the first round, while five teams solved it in the second round." ], [ "Formulation", "The sponge function Bash-f [3] uses the permutation $S3$ that transforms a triple of 64-bit binary words $a,b,c$ in the following way: $S3(a,b,c)= (b \vee \lnot c\oplus a,\ a \vee c\oplus b,\ a \wedge b\oplus c).$ Here $\lnot $ , $\wedge $ , $\vee $ , $\oplus $ denote the binary bitwise operations “NOT”, “AND”, “OR”, “XOR” respectively.", "The operations are listed in descending order of priority.", "Let $w^k$ also denote the cyclic shift of a 64-bit word $w$ to the left by $k\in \lbrace 1,2,\ldots ,63\rbrace $ positions.", "Alice wants to strengthen $S3$ .", "She can do this by XORing any input $a,b,c$ or its cyclic shift to any output.", "She must use at least one cyclic shift and she cannot add two identical terms to the same output.", "Help Alice change $S3$ in such a way that a modified $S3$ will still be a permutation!", "Remarks.", "1.", "For example, in the expression $b\vee \lnot c\oplus a$ , we firstly calculate $\lnot c$ , then calculate $b \vee \lnot c$ , and after that the final result (according to the descending order of operation priority).", "2.", "The modification $(b \vee \lnot c\oplus a\oplus a^{11},\ a \vee c\oplus a^7\oplus c,\ a \wedge b\oplus b^{32})$ is allowed but it does not satisfy the permutation condition.", "3.", "$S3$ has three outputs: $b \vee \lnot c\oplus a,\ a \vee c\oplus b,\ a \wedge b\oplus c$ .", "Alice can add as many inputs and cyclic shifts of inputs as she wants to each of these outputs.", "In Remark 2 she adds $a^{11}$ to the first output, $b \oplus a^{7} \oplus c$ to the second output, and $c \oplus b^{32}$ to the third output.", "Note that the fact that $S3$ is a permutation (as a function $\lbrace 0,1\rbrace ^{64*3} \rightarrow \lbrace 0,1\rbrace ^{64*3}$ ) is not obvious.", "But the problem is only to prove that the 
modification of $S3$ is a permutation too (as a function $\lbrace 0,1\rbrace ^{64*3} \rightarrow \lbrace 0,1\rbrace ^{64*3}$ )." ], [ "Solution", "It is allowed to add to the outputs of $S3$ the outputs of the following linear transformation: $L(a,b,c)=(L_0(a,b,c),L_1(a,b,c),L_2(a,b,c))$ that is defined by bitwise XOR operations and cyclic shifts.", "The permutation property of a modified $S3$ will be broken if for some distinct $(a,b,c)$ and $(a^{\prime },b^{\prime },c^{\prime })$ $S3(a,b,c)\oplus S3(a^{\prime },b^{\prime },c^{\prime })=L(a,b,c)\oplus L(a^{\prime },b^{\prime },c^{\prime }).$ We will call the expressions on both sides of equality (REF ), as well as the sum $(a,b,c)\oplus (a^{\prime },b^{\prime },c^{\prime })$ , differences.", "Let $(w_0,w_1,w_2) &= (a,b,c)\oplus (a^{\prime },b^{\prime },c^{\prime }), \\(W_0,W_1,W_2) &= S3(a,b,c)\oplus S3(a^{\prime },b^{\prime },c^{\prime }).$ On the one hand, input and output differences of $S3$ satisfy (for instance, see [3]) the equality $w_0 \wedge W_0 \oplus w_1 \wedge W_1 \oplus w_2 \wedge W_2 = 11\ldots 1.$ On the other hand, by (REF ) the permutation property of a modified $S3$ will be broken if $(W_0,W_1,W_2) = L(a,b,c)\oplus L(a^{\prime },b^{\prime },c^{\prime })= L(w_0,w_1,w_2).$ As a result, a modified $S3$ will still be a permutation if the following equality $w_0 \wedge L_0(w_0,w_1,w_2) \oplus w_1 \wedge L_1(w_0,w_1,w_2) \oplus w_2 \wedge L_2(w_0,w_1,w_2) = 11\ldots 1$ does not hold for any nonzero $(w_0,w_1,w_2)$ .", "For example, if $L(a,b,c)=(a\oplus a^d\oplus b,a\oplus c,b),\quad d\in \lbrace 1,2,\ldots ,63\rbrace ,$ then (REF ) becomes $w_0 \wedge (w_0\oplus w_0^d \oplus w_1) \oplus w_1 \wedge (w_0\oplus w_2) \oplus w_2 \wedge (w_1) =w_0 \wedge (w_0\oplus w_0^d)\ne 11\ldots 1.$ Indeed, equality would require $w_0=11\ldots 1$ , but then $w_0^d=w_0$ and the whole expression collapses to zero.", "Thus, we found the following solution for the problem: $S3(a,b,c)\oplus L(a,b,c)=(b \vee \lnot c\oplus a^d\oplus b,a \vee c\oplus a \oplus 
b\\oplus c,a \\wedge b\\oplus b\\oplus c).$ Note that there are many other possible solutions.", "This problem was completely solved by three participants in the first round and by nine teams in the second round.", "Many of these solutions were interesting and compact." ], [ "Formulation", "Let $\\mathbb {F}_2^n$ be an $n$ -dimensional vector space over the field $\\mathbb {F}_2=\\lbrace 0,1\\rbrace $ .", "Alice and Bob exchange messages using the following cryptosystem.", "[noitemsep] 1.", "First, they use a supercomputer to calculate two special large secret sets $A,B\\subseteq \\mathbb {F}_2^n$ which have the following property: there exists a constant $\\ell $ ($\\ell \\geqslant 26$ ), such that for any $x\\in \\mathbb {F}_2^n$ it holds $d(x,A)+d(x,B)=\\ell ,$ where $d(x,A)$ denotes Hamming distance from the vector $x$ to the set $A$ .", "2.", "Alice then saves the number $\\ell $ , the set $A$ and a set of vectors $a_1,a_2,\\ldots ,a_r$ such that for any $k: 0\\leqslant k \\leqslant \\ell $ , there is a vector $a_i$ at distance $k$ from $A$ .", "Similarly, Bob saves the number $\\ell $ , the set $B$ and a set of vectors $b_1,b_2,\\ldots ,b_s$ such that for any $k: 0\\leqslant k \\leqslant \\ell $ , there is a vector $b_i$ at distance $k$ from $B$ .", "3.", "Text messages are encrypted letter by letter.", "In order to encrypt a letter Alice replaces it with its number in the alphabet, say $k$ .", "Then she chooses some vector $a_i$ at distance $k$ from the set $A$ and sends this vector over to Bob.", "Bob then calculates the distance $d(a_i,B)$ and using the property of the sets $A,B$ , calculates $k = \\ell - d(a_i,B)$ .", "So, he gets the letter Alice sent.", "If Bob wants to send an encrypted message to Alice, he does the same but using his saved vectors and the set $B$ .", "Eve was able to hack the supercomputer when it was calculating the sets $A$ and $B$ .", "She extracted the set $C$ from its memory, which consists of all vectors of $\\mathbb {F}_2^n$ 
that are at distance 1 or less from either $A$ or $B$ .", "She also learned that $\ell $ is even.", "Help Eve crack the presented cryptosystem (to decrypt any short intercepted message)!", "You know that she has (illegal) access to the supercomputer, which can calculate and output the list of distances from all vectors of $\mathbb {F}_2^n$ to any input set $D$ in reasonable (but not negligible) time.", "Remarks.", "Recall several definitions and notions.", "The Hamming distance $d(x,y)$ between vectors $x$ and $y$ is the number of coordinates in which these vectors differ.", "The distance from a vector $y\in \mathbb {F}_2^n$ to the set $X\subseteq \mathbb {F}_2^n$ is defined as $d(y,X)=\min _{x\in X} d(y,x)$ ." ], [ "Solution", "Let us denote by $A_i$ ($B_i$ respectively) the set of all vectors at distance $i$ from the set $A$ ($B$ respectively): $A_i = \lbrace x\in \mathbb {F}_2^n : d(x,A) = i\rbrace , \ \ B_i = \lbrace x\in \mathbb {F}_2^n : d(x,B) = i\rbrace .$ It is easy to see that $A=A_0=B_{\ell }$ , $B=B_0=A_{\ell }$ , $A_i=B_{\ell - i}$ for any $i\in \lbrace 0,\ldots ,\ell \rbrace $ , $C = A_1 \cup B_1 \cup A_0 \cup B_0$ .", "From the definition of the Hamming distance it is easy to prove that if a vector $x$ lies in the set $A_i$ , then it is at distance $|i-j|$ from the set $A_j$ for any $i,j$ .", "Indeed, if $i=j$ , the statement is trivial.", "Assume that $i>j$ .", "By definition, $d(x,A)=i$ , so there exists a shortest path of length $i$ from $A$ to $x$ , consisting of vectors $x_0,x_1,\ldots ,x_i=x$ , where $x_0\in A$ .", "Since consecutive vectors in the path differ in only one coordinate, and vectors from $A_s$ and $A_t$ can be neighbours only if $|s-t| \leqslant 1$ , it follows that $x_k\in A_k$ for every $k=0,\ldots ,i$ .", "So, vector $x_j$ from the path belongs to $A_j$ and is at distance $i-j$ from vector $x$ .", "Therefore, $d(x,A_j)\leqslant i-j$ .", "The distance cannot be less than $i-j$ , because 
then $d(x,A)$ would have been less than $i$ , which contradicts the conditions of the statement.", "Thus, $d(x,A_j) = i-j$ .", "If $i<j$ , then we can replace $A_i$ with $B_{\ell -i}$ , $A_j$ with $B_{\ell -j}$ and use $B$ instead of $A$ for the same argument as in the previous case.", "In particular, given $x$ is in $A_i$ , it is at distance $|i-1|$ from the set $A_1$ and at distance $|i-(\ell -1)|$ from the set $B_1$ .", "Let us “feed” the set $C$ to the supercomputer.", "We denote the maximal distance from vectors of $\mathbb {F}_2^n$ to vectors of $C$ as $r$ , and the set of all vectors achieving this distance as $\widehat{C}$ .", "Taking into account the statement proven above (and the fact that $\ell $ is even), we can see that the maximum is achieved for vectors of the set $A_{\frac{\ell }{2}}$ .", "Hence, $r = \frac{\ell }{2} - 1$ and $\widehat{C} = A_{\frac{\ell }{2}}$ .", "Thus, we can calculate $\ell $ as $2r+2$ .", "Assume now that Alice sends a message $a_{i_1},a_{i_2},\ldots ,a_{i_k}$ to Bob.", "Eve intercepts it and (using the obtained table of distances from the set $C$ ) calculates that these vectors are at distances $s_1,s_2,\ldots ,s_k$ from the set $C$ .", "Therefore, they are at distances $s_1+1,s_2+1,\ldots ,s_k+1$ from the set $A\cup B$ .", "Since $d(x, A\cup B) = \min (d(x, A),d(x, B))$ , each encrypted letter could be either $s_i+1$ or $\ell -(s_i+1)$ .", "If one of these two numbers is greater than 26, we can easily determine the encrypted letter; if not, we consider both possibilities.", "In the worst case we would need to consider $2^N$ variants, where $N$ is the length of the message, but since messages are short and are written in natural language, we do not need to check all of them and the decryption should not be hard.", "Note: Sets $A$ and $B$ satisfying the condition from Step 1 of the problem (for an arbitrary constant $\ell $ not necessarily greater than 26) are called strongly metrically regular and are 
studied in [9].", "Best solutions to the problem were submitted by Alexey Chilikov (Bauman Moscow State Technical University, Russia) and Saveliy Skresanov (Novosibirsk State University, Russia)." ], [ "Formulation", "A polynomial $f(X_1,\dots ,X_n)\in F_2[X_1,\dots ,X_n]$ is called reduced if the degree of each $X_i$ in $f$ is at most 1.", "For $0\leqslant r\leqslant n$ , the $r$ th order Reed — Muller code of length $2^n$ , denoted by $R(r,n)$ , is the $F_2$ -space of all reduced polynomials in $X_1,\dots ,X_n$ of total degree less than or equal to $r$ .", "We also define $R(-1,n)=\lbrace 0\rbrace $ .", "The general linear group $\text{GL}(n,F_2)$ acts on $R(r,n)$ naturally: given $A\in \text{GL}(n,F_2)$ and $f(X_1,\dots ,X_n)\in R(r,n)$ , $Af$ is defined to be the reduced polynomial obtained from $f((X_1,\dots ,X_n)A)$ by replacing each power $X_i^k$ ($k\geqslant 2$ ) with $X_i$ .", "Consequently, $\text{GL}(n,F_2)$ acts on the quotient space $R(r, n)/R(r-1,n)$ .", "Let $A\in \text{GL}(n,F_2)$ be such that its characteristic polynomial is a primitive irreducible polynomial over $F_2$ .", "Prove that the only element in $R(r, n)/R(r-1,n)$ , where $0 < r < n$ , fixed by the action of $A$ is 0."
], [ "Solution", "Let $\\binom{\\lbrace 1,\\dots ,n\\rbrace }{r}$ denote the set of $r$ -subsets of $\\lbrace 1,\\dots ,n\\rbrace $ .", "When $A$ acts on $R(r, n)/R(r-1,n)$ , its matrix with respect to the basis $\\prod _{i\\in I}X_i$ , $I\\in \\binom{\\lbrace 1,\\dots ,n\\rbrace }{r}$ , is the $r$ th compound matrix $C_r(A)$ of $A$ .", "The eigenvalues of $A$ are $\\gamma ^{2^i}$ , $0\\le i\\le n-1$ , where $\\gamma $ is a primitive element of $F_{2^n}$ .", "The eigenvalues of $C_r(A)$ are all possible products of $r$ eigenvalues of $A$ , i.e., $\\gamma ^{\\sum _{i\\in I}2^i},\\quad I\\in \\binom{\\lbrace 0,\\dots ,n-1\\rbrace }{r}.$ Clearly, the above expression never equals 1.", "Hence 1 is not an eigenvalue of $C_r(A)$ .", "Therefore, the action of $A$ does not fix any nonzero element in $R(r, n)/(r-1,n)$ .", "The problem was solved by four teams in the second round: Aleksei Udovenko (University of Luxembourg), the team of Dianthe Bose and Neha Rino (Chennai Mathematical Institute, India), the team of Andrey Kalachev, Danil Cherepanov and Alexey Radaev (Bauman Moscow State Technical University, Russia), the team of Sergey Titov and Kristina Geut (Ural State University of Railway Transport, Russia)." 
], [ "Formulation", "Hash function FNV-1a [13] processes a message $x$ composed of bytes $x_1,x_2,\ldots ,x_n\in \lbrace 0,1,\ldots ,255\rbrace $ in the following way: $h\leftarrow h_0$ ; for $i=1,2,\ldots ,n$ : $h\leftarrow (h\oplus x_i)g\bmod 2^{128}$ ; return $h$ .", "Here $h_0=144066263297769815596495629667062367629$ , $g=2^{88}+315$ .", "The expression $h\oplus x_i$ means that the least significant byte of $h$ is added bitwise modulo 2 with the byte $x_i$ .", "Find a collision, that is, two different messages $x$ and $x^{\prime }$ such that $\text{{\tt FNV-1a}}(x)=\text{{\tt FNV-1a}}(x^{\prime })$ .", "Collisions on short messages and collisions that are obtained without intensive calculations are welcomed.", "Supply your answer as a pair of two hexadecimal strings which encode bytes of colliding messages." ], [ "Solution", "We build on the solution of the problem “FNV2” (NSUCRYPTO'2017) [8], where it was required to find a collision for the similar hash function FNV2.", "FNV-1a differs from FNV2 in the following: where FNV-1a combines $h$ and $x_i$ with the $\oplus $ operation, FNV2 uses the standard $+$ operation.", "It is easy to see that $\text{{\tt FNV2}}(x_1 x_2\ldots x_n)=(h_0 g^n + x_1 g^n+x_2 g^{n-1}+\ldots + x_n g)\bmod 2^{128}.$ For FNV2, we found a relation $a_1 g^{n-1}+ a_2 g^{n-2} + \ldots + a_n \equiv 0\!\!\pmod {2^{128}},$ where $a_i\in \lbrace -255,\ldots ,255\rbrace $ .", "Then we represented $a_i$ as the difference $x_i-x_i^{\prime }$ and found a collision $\text{{\tt FNV2}}(x_1 x_2\ldots x_n)-\text{{\tt FNV2}}(x_1^{\prime } x_2^{\prime }\ldots x_n^{\prime })=a_1 g^n +a_2 g^{n-1}+\ldots + a_n g\equiv 0\!\!\pmod {2^{128}}.$ Let us call a representation $a_i=x_i-x_i^{\prime }$ a splitting of $a_i$ .", "There can be several splittings for a given $a_i$ .", "Each of them induces two trajectories of intermediate values of $h$ : the trajectory starting with a message $x_1x_2\ldots x_n$ and the 
trajectory starting with a message $x_1^{\prime }x_2^{\prime }\ldots x_n^{\prime }$ .", "Let $h_i$ and $h_i^{\prime }$ be the low bytes of $h$ for the first and second trajectories respectively before the additions $h+x_i$ and $h+x_i^{\prime }$ .", "Let us call a splitting suitable if $h_i+x_i< 256,\quad h_i^{\prime }+x_i^{\prime }< 256,\quad i=1,2,\ldots ,n.$ Let us evaluate the probability that a suitable splitting exists for $a_i$ .", "We will assume that $h_i$ , $h_i^{\prime }$ are realizations of independent random variables with uniform distribution over $\lbrace 0,1,\ldots ,255\rbrace $ .", "Bytes $x_i$ and $x_i^{\prime }$ can take any value from the intervals $\lbrace 0,\ldots ,255-h_i\rbrace $ and $\lbrace 0,\ldots ,255-h_i^{\prime }\rbrace $ respectively.", "At the same time, the difference $x_i-x_i^{\prime }$ takes values in the interval $\lbrace -255+h_i^{\prime },\ldots ,255-h_i\rbrace $ .", "Then $a_i$ is in the interval $\lbrace -255+h_i^{\prime },\ldots ,255-h_i\rbrace $ with the probability $\Pr {(-255+h_i^{\prime }\leqslant a_i\leqslant 255-h_i)}={\left\lbrace \begin{array}{ll}\Pr {(h_i\leqslant 255-a_i)}, & a_i\geqslant 0,\\\Pr {(h_i^{\prime }\leqslant 255-|a_i|)}, & a_i< 0,\end{array}\right.}$ that is equal to $1-|a_i|/256$ .", "Thus, the probability that a suitable splitting exists for the whole sequence $a_1a_2\ldots a_n$ is the following: $p=\prod _{i=1}^n\left(1-\frac{|a_i|}{256}\right).$ This probability can be rather high.", "For example, $p\approx 1/25$ for the following sequence for $n=18$ : $(-64,5,73,35,-53,19,-10,-78,-44,48,61,-1,-80,26,-22,72,-31,0).$ Or, $p\approx 1/13$ for the following sequence for $n=19$ : $(-37,34,-74,-4,-17,33,-18,21,54,33,-1,58,-71,-13,-10,11,-88,-19,0).$ Moreover, the probability can be increased if we change the strategy of finding suitable splittings.", "We can allow modifying splittings $a_1,\ldots ,a_{i-1}$ that have already been built if it is 
impossible to find a splitting for $a_i$ .", "After finding a suitable splitting, we determine the sequences $(h_i)$ , $(h_i^{\prime })$ .", "Then we determine the bytes $\tilde{x}_i$ , $\tilde{x}_i^{\prime }$ such that $h_i\oplus \tilde{x}_i=h_i+x_i,\quad h_i^{\prime }\oplus \tilde{x}_i^{\prime }=h_i^{\prime }+x_i^{\prime },\quad i=1,2,\ldots ,n.$ It is important that there are no carries into the high bytes in the additions $h_i+x_i$ , $h_i^{\prime }+x_i^{\prime }$ ; hence $\tilde{x}_i$ , $\tilde{x}_i^{\prime }$ can always be found.", "Then a collision for FNV-1a is a pair of messages $\tilde{x}_1\tilde{x}_2\ldots \tilde{x}_n$ and $\tilde{x}_1^{\prime }\tilde{x}_2^{\prime }\ldots \tilde{x}_n^{\prime }$ .", "It remains to say that the sequence $(a_i)$ can be found using the LLL algorithm.", "The algorithm is applied to the lattice defined by the basis vectors $\mathbf {b}_1 &= (1,0,\ldots ,0,g^{n-1}\bmod 2^{128}),\\\mathbf {b}_2 &= (0,1,\ldots ,0,g^{n-2}\bmod 2^{128}),\\&\ldots \\\mathbf {b}_n &= (0,0,\ldots ,1,g^0\bmod 2^{128}),\\\mathbf {b}_{n+1} &= (0,0,\ldots ,0, t 2^{128}),$ where $t$ is a small integer.", "LLL finds a short basis of the lattice, i. e. vectors $v=\sum _{i=1}^{n+1}a_i \mathbf {b}_i$ with small coordinate values.", "Let the last coordinate of $v$ be equal to 0.", "Then $\sum _{i=1}^{n}a_i g^{n-i}\equiv 0\!\!\pmod {2^{128}},$ i. e. $(a_1,\ldots ,a_n)$ is a required solution.", "This problem was completely solved by fourteen teams (most of them used a reduction to the problem FNV2).", "Some examples of collisions proposed by participants (in HEX format) are given in Table REF .", "Table: Collisions of FNV-1a."
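For experimentation, the hash as specified in the formulation is only a few lines of code; a direct transcription in Python (the constants are those given in the problem statement):

```python
H0 = 144066263297769815596495629667062367629  # h_0 from the formulation
G = 2**88 + 315                               # g from the formulation
MASK = 2**128 - 1                             # reduction modulo 2^128

def fnv1a(message: bytes) -> int:
    """128-bit FNV-1a as defined in the problem statement."""
    h = H0
    for x in message:  # iterating over bytes yields integers 0..255
        # XOR the byte into the least significant byte of h, then multiply by g
        h = ((h ^ x) * G) & MASK
    return h
```

Candidate collisions (pairs of hexadecimal strings) can then be checked by comparing `fnv1a(bytes.fromhex(s1))` with `fnv1a(bytes.fromhex(s2))`.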
], [ "Formulation", "Bob realized that his cipher from last year, TwinPeaks (NSUCRYPTO'2017) [8], is not secure enough and modified it.", "He considerably increased the number of rounds and made rounds more complicated.", "Bob's new cipher works as follows.", "A message $X$ is represented as a binary word of length 128.", "It is divided into four 32-bit words $a,b,c,d$ and then the following round transformation is applied 48 times: $(a,b,c,d)\leftarrow (b, c, d, a \oplus S_3(S_1(b)\oplus S_2(b\wedge \lnot c \oplus c \vee d)\oplus S_1(d))),$ where $S_1,S_2,S_3$ are secret permutations over 32-bit words; $\lnot $ , $\wedge $ , $\vee $ , $\oplus $ are the binary bitwise “NOT”, “AND”, “OR”, “XOR” respectively (the operations are listed in descending order of priority).", "The concatenation of the final $a,b,c,d$ is the resulting ciphertext $Y$ for the message $X$ .", "Agent Cooper again wants to read Bob's messages!", "He intercepted the ciphertext $Y=\texttt {DEB239852F1B47B005FB390120314478}$ and also captured Bob's smartphone with the TwinPeaks2 implementation!", "Here it is [15].", "Now Cooper (and you too) can encrypt any messages with TwinPeaks2 but still cannot decrypt any.", "Help Cooper to decrypt $Y$ .", "Remarks.", "The ciphertext is given in hexadecimal notation; the first byte is DE." ], [ "Solution", "Let $F$ be the round transformation of TwinPeaks2: $F(a,b,c,d)=(b, c, d, a \oplus f(b,c,d)).$ The encryption transformation is the composition of 48 copies of $F$ , i. e. 
it can be written as $F^{48}$ .", "Consequently, $F^{-48}$ is the decryption transformation.", "Let $\tau (a,b,c,d)=(d,c,b,a).$ Let us note that $f(b,c,d) = f(d,c,b)$ .", "Then the composition of $F$ , $\tau $ and $F$ gives us $\tau $ : $F\circ \tau \circ F(a,b,c,d)=F(a \oplus f(b,c,d),d,c,b)=F(a \oplus f(d,c,b),d,c,b)=(d,c,b,a).$ Hence, $F^{48}\tau F^{48}=\tau $ or $F^{-48}=\tau F^{48}\tau ^{-1}=\tau F^{48}\tau .$ Thus, in order to decrypt $Y$ one should write its 32-bit blocks in reverse order, encrypt the result and then reverse the order of the blocks again.", "The result will be a hexadecimal word, which gives us the desired message $\texttt {attacksgetbetter}.$ The best solution to the problem was submitted by Carl Löndahl (Sweden); it not only provides a clean theoretical solution, but also proposes a slide attack on the cipher." ], [ "Formulation", "The Enigma machine is a symmetric cipher famous for being used during the Second World War by the German military.", "Its internal structure comprises a 26-letter Latin alphabetic permutation, implemented as rotors.", "The machine used for this problem consists of 3 rotors and a reflector.", "Figure REF shows how a simplified Enigma machine works.", "The key components are the set of input switches (2) – which are reduced to 4 in the example but could have been 26 for the Latin alphabet – an input plugboard (3,7,8), three rotors (5), the reflector (6) and the output board (9).", "The components have the following functionality: $\bullet $ Rotors: a rotor (5) is a wheel with the upper-case alphabet in order on the rim and a hole for an axle.", "On both sides of a rotor are 26 electrical contacts, each under a letter.", "Each contact on one side is wired to a contact on the other side at a different position.", "The rotor implements a one-to-one and onto function between the upper-case letters, where each letter is mapped to a different one (an irreflexive permutation).", "$\bullet $ 
Reflector: the reflector (6) is positioned after the rotors and has contacts for each letter of the alphabet on one side only.", "The letters are wired up in pairs, so that an input current on a letter is reflected back to a different letter.", "The input message is permuted by the rotors, passes through the reflector and then goes back through the rotors in the reverse order (as depicted in the figure).", "Finally, the light bulb indicates the encrypted letter.", "The plugboard plays no role in permuting the letter for this challenge, although it could have.", "To prevent a simple frequency analysis attack, the Right rotor rotates with every new input.", "After the Right rotor completes a full rotation (after 26 letters were encrypted), the Middle rotor rotates once.", "Similarly, after the Middle rotor completes a full rotation (and the Right rotor completes 676 rotations), the Left rotor rotates once.", "This means that an input letter is processed, in order, by three permutations – Right, Middle and Left – reflected by the reflector, and processed once again, in order, by the inverse permutations corresponding to the Left, Middle and Right rotors before being output.", "Once the letter passes through a rotor, it is permuted by one position, the rotor's permutation is applied, and the result goes directly into the following rotor, which acts similarly.", "Challenge: you will play the role of an attacker who knows the source of the plaintext to be encrypted.", "You are given a ciphertext corresponding to a plaintext taken from this known source, which happens to be “Moby Dick” by Herman Melville, and you are asked to recover the plaintext.", "The plaintext is contiguous and consists only of capital letters, with all punctuation marks and spaces removed.", "All letters are from the Latin alphabet.", "Extra information on the settings of the rotors is provided: the configuration of the first rotor is very close to the one used in the 1930 commercial version (that was 
EKMFLGDQVZNTOWYHXUSPAIBRCJ).", "Ciphertext: RHSM ZHXX AOWW ZTWQ QQMB CRZA BARN MLAV MLSX SPBA ZTHG YLGE VGZG KULJ FLOZ RQAW YGAA DCJB YWBW IYQQ FAAO RAGK BGSW OARG EYSP IKYE LLUO YCNH HDBV AFKD HETA ONNR HXHE BBRT ROZD XJCC OMXR PNSW UAZB TNJY BANH FGCS GJWY YTBV VGLX KUZW PARO NMXP LDLZ ICBK XVSJ NXCF SOTA AQYS YZFX MZDH MSZI ABAH RFXT FTPU VWMC PEXQ NZVA LMFX BHKG QGYS BIYE MEUE PJNR AVTL JSUZ PLHQ MOUI IQFD HVXI NOOJ YJAF WAVU PVQA FMKP AHLK XJYD GITB QSPK CUZU XPRK MUJJ YRJ", "A link to the “Moby Dick” text file can be found in [14]." ], [ "Solution", "It is easy to observe that the Left and Middle rotors do not change within each block of 26 characters of the plaintext.", "From this point of view, we can regard the composition of permutations induced (in order) by the Middle and Left rotors, the reflector, as well as the inverses of the Left and Middle rotors, as one fixed permutation.", "After the next 26 letters are processed, the Middle rotor turns, and a distinct permutation is to be used for the incoming block of 26 letters.", "Since the challenge ciphertext is shorter than 676 characters, we do not bother with turning the Left rotor.", "To fix some notation, let $\pi _i, L_i, M_i$ denote permutations defined on the set $\lbrace {\tt A}, \ldots , {\tt Z}\rbrace $ .", "If $L:\lbrace {\tt A}, \ldots , {\tt Z}\rbrace \rightarrow \lbrace {\tt A}, \ldots , {\tt Z}\rbrace $ denotes the permutation defining the Left rotor, then $L_i:\lbrace {\tt A}, \ldots , {\tt Z}\rbrace \rightarrow \lbrace {\tt A}, \ldots , {\tt Z}\rbrace $ , $L_i = L \circ Rot_i$ , represents the action of applying the Left rotor over the alphabet, where $Rot_i$ denotes the alphabet's rotation by $i$ .", "We use a similar notation for $M_i$ , with $i$ denoting each block of 26 letters to be processed.", "That is, $i \in \lbrace 1, \ldots , \lceil {\frac{|C|}{26}}\rceil \rbrace $ , where $|C|$ denotes the length of the ciphertext (its number of 
characters).", "We also write $\pi _i = M_i^{-1} \circ L_0^{-1} \circ \rho \circ L_0 \circ M_i,~~i \in \Big \lbrace 1, \ldots , \Big \lceil {\frac{|C|}{26}}\Big \rceil \Big \rbrace .$ The next step is to split the challenge ciphertext into blocks of 26 characters, and use the fact that for each block $i$ , $\pi _i$ acts as an oracle that returns the same value for the same input.", "We will correlate this with the information that is a priori given on the first rotor.", "Although we do not have its exact configuration, we use the fact that the unknown rotor is close to a known one (EKMFLGDQVZNTOWYHXUSPAIBRCJ — commercial Enigma 1930).", "The configuration used for this problem permutes 4 elements amongst the ones of the 1930 configuration and then applies a circular permutation of length four.", "The permutation corresponding to the given Right rotor of the commercial Enigma (1930) is the following (inputs in the first row, their images in the second): A B C D E F G H I J K L M N O P Q R S T U V W X Y Z E K M F L G D Q V Z N T O W Y H X U S P A I B R C J", "Then we take the first block of 26 letters and obtain their inverses, minding the fact that the rotor shifts by one place to the left after each letter is read.", "Hence, for the first block we obtain: RHSM ZHXX AOWW ZTWQ QQMB CRZA BA UVAH FOFL VRDQ TDNG DQLA BOIR JJ", "Now we remark on a “distance-preserving” property: if the distance between identical characters returned by $\pi _i$ (the input to the Right rotor) is $\ell $ , then it is maintained in the original plaintext.", "As an example, the group ZHXX in the first block of ciphertext has been obtained for the group FOFL and we note a distance of 2 (F $\rightarrow $ O $\rightarrow $ F) between F and F. 
This means that an alphabetical distance of 2 exists between the corresponding letters of the plaintext.", "More precisely, if $\pi _1\big ( R(x) \big ) = \pi _1\big ( R^{\prime }(y) \big )~,$ where $R^{\prime }$ is obtained by shifting $R$ by $\ell $ elements, then the character $y$ is at a distance of $\ell $ from the character $x$ (but in the opposite sense).", "Based on this observation, the solution is to identify such pairs inside a block and record the distance between them.", "As 4 elements are permuted in the real configuration of the rotor, false positives will appear.", "After the colliding characters per block, say in positions $i$ and $j$ , have been identified and their distance $\ell $ recorded, one simply writes a script that passes through the given source text (after removing the non-alphabetic characters) and identifies the sequence (matching the length of the ciphertext) where the distance between the characters in positions $i$ and $j$ is $\ell $ .", "Finally, the plaintext that is to be recovered is: ALREADY we are boldly launched upon the deep ; but soon we shall be lost in its unshored, harbourless immensities.", "Ere that come to pass ; ere the Pequod's weedy hull rolls side by side with the barnacled hulls of the leviathan; at the outset it is but well to attend to a matter almost indispensable to a thorough appreciative understanding of the more special leviathanic revelations and allusions of all sorts which are to follow.", "Finally, eight teams completely solved the problem.", "Note that many teams used a simple method that almost completely determined the plaintext.", "It is based on the fact that no letter from the plaintext gets mapped to the same letter in the ciphertext using Enigma.", "But this approach gives two possible solutions and does not allow one to prove that one of them is not correct."
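The final matching step admits a compact sketch. The function below (its name and the sign convention for recorded distances are our assumptions, for illustration only) slides a window of ciphertext length over the source text and keeps the offsets where every recorded pair of positions is at the recorded alphabetical distance:

```python
def find_candidates(source: str, length: int, constraints):
    """Return the offsets s such that the window source[s:s+length]
    satisfies every constraint (i, j, ell): the letters in positions
    i and j of the window are at alphabetical distance ell (mod 26)."""
    hits = []
    for s in range(len(source) - length + 1):
        w = source[s:s + length]
        if all((ord(w[j]) - ord(w[i])) % 26 == ell for i, j, ell in constraints):
            hits.append(s)
    return hits

# Toy check: in "ABCDEFG", every window of length 3 has its third letter
# two places after its first, so all five offsets survive.
assert find_candidates("ABCDEFG", 3, [(0, 2, 2)]) == [0, 1, 2, 3, 4]
```

With enough recorded pairs per block, only the true position in the cleaned "Moby Dick" text survives the filtering, despite the false positives mentioned above.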
], [ "Formulation", "Orthogonal arrays are closely connected with cryptographic Boolean functions.", "Namely, supports of correlation immune functions give orthogonal arrays when their elements are written as the rows of an array.", "Given three positive integers $n$ , $t$ and $\lambda $ such that $t<n$ , we call a $\lambda 2^t\times n$ binary array (i.e., a matrix over the 2-element field) a $t-(2,n,\lambda )$ orthogonal array if in every subset of $t$ columns of the array, every (binary) $t$ -tuple appears in exactly $\lambda $ rows.", "$t$ is called the strength of this orthogonal array.", "Find a $4-(2,11,\lambda )$ orthogonal array with the minimal value of $\lambda $ ." ], [ "Solution", "The best known answer to this question is $\lambda = 8$ [10], but it is unknown whether there exists a $4-(2,11,\lambda )$ orthogonal array for $\lambda < 8$ .", "This open problem remains unsolved.", "Participants suggested several ideas.", "The most interesting one was proposed by Aleksei Udovenko (University of Luxembourg).", "His study starts with the Nordstrom — Robinson code (that is, the Kerdock code of length 16 and size 256, whose dual distance is the minimum distance of the Preparata code, that is 6, which gives a strength of the orthogonal array (OA) equal to 5).", "Only the codewords whose first coordinate equals zero are kept, and this coordinate is then deleted, which makes size 128, length 15 and strength 4.", "Then three columns are erased from the OA, which does not reduce the strength, and the resulting OA provides a solution to the problem with $\lambda =8$ .", "It is then shown (by using known results) that, for any solution to the problem, $\lambda $ is at least 6.", "This is interesting.", "The solution found is written in the form $(x,F(x))$ where $F$ is a quadratic $(7,4)$ -function.", "Its determination allows one to recover the 4-th order correlation immune function whose support is this OA.", "This is an 11-variable Boolean function of 
algebraic degree 5.", "Then the annihilators of this function are studied.", "It is shown that the function has a linear annihilator (and thus has algebraic immunity 1).", "After an observation on the impossibility of extending a solution which would have $\lambda \leqslant 7$ , the Xiao — Massey characterization of OA is proved again in different terms.", "It is also shown that any affine annihilator of a $t$ -th order correlation immune function must be $t$ -resilient, which is a nice observation.", "A computer search is made with Integer Linear Programming, showing that any 4-th order correlation immune function having an affine annihilator should have weight at least 128.", "This nice work concludes with open questions.", "Another good solution was given by the team of Evgeniya Ishchukova, Vyacheslav Salmanov and Oksana Shamilyan (Southern Federal University, Russia).", "They first studied the maximum value of $n$ , given $t$ , for small values of $t$ .", "Then an algorithm was designed which reduces the search to solutions having some symmetries observed for smaller values of $t$ and $n$ .", "Finally, a solution was given with $\lambda =8$ which is a coset of a linear code of length $n=11$ and dimension $k=7$ (and therefore 128 codewords), with dual distance 5; the corresponding function is then indeed 4th order correlation immune, giving a $4-(2,11,\lambda )$ orthogonal array.", "Unfortunately, the question whether 128 is minimal was not addressed."
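The defining property of an orthogonal array is straightforward to verify by brute force for arrays of this size; a sketch (the function name and representation are our choice):

```python
from itertools import combinations, product

def oa_lambda(rows, t):
    """Return lambda if the binary array (a list of 0/1 tuples) is a
    t-(2, n, lambda) orthogonal array, i.e. every t-tuple appears
    exactly lambda times in every t-subset of columns; else None."""
    n = len(rows[0])
    lam, rem = divmod(len(rows), 2**t)
    if rem != 0:
        return None
    for cols in combinations(range(n), t):
        counts = {tup: 0 for tup in product((0, 1), repeat=t)}
        for row in rows:
            counts[tuple(row[c] for c in cols)] += 1
        if any(c != lam for c in counts.values()):
            return None
    return lam

# The even-weight code of length 3 is a 2-(2, 3, 1) orthogonal array ...
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert oa_lambda(rows, 2) == 1
# ... but it does not have strength 3.
assert oa_lambda(rows, 3) is None
```

The same routine, applied to the 128-row constructions described above, confirms strength 4 with $\lambda = 8$.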
], [ "Formulation", "Sylvester matrices play a role in security since they are connected with topics like secret sharing and MDS codes constructed with cellular automata.", "Consider two univariate polynomials over the 2-element field, $P_1(x)$ of degree $m$ and $P_2(x)$ of degree $n$ , where $P_1(x) = a_mx^m + \\ldots + a_0$ and $P_2(x) = b_nx^n + \\ldots + b_0$ .", "The Sylvester matrix is an $(m+n)\\times (m+n)$ matrix formed by filling the matrix beginning with the upper left corner with the coefficients of $P_1(x)$ , then shifting down one row and one column to the right and filling in the coefficients starting there until they hit the right side.", "The process is then repeated for the coefficients of $P_2(x)$ .", "All the other positions are filled with zero.", "Let $n>0$ , $m>0$ .", "Prove whether there exist $(m+n)\\times (m+n)$ invertible Sylvester matrices whose inverses are Sylvester matrices as well.", "Example.", "For $m=4$ and $n=3$ , the Sylvester matrix is the following: $\\left(\\begin{array}{ccccccc}a_4 & a_3 & a_2 & a_1 & a_0 & 0 & 0 \\\\0 & a_4 & a_3 & a_2 & a_1 & a_0 & 0 \\\\0 & 0 & a_4 & a_3 & a_2 & a_1 & a_0 \\\\b_3 & b_2 & b_1 & b_0 & 0 & 0 & 0 \\\\0 & b_3 & b_2 & b_1 & b_0 & 0 & 0 \\\\0 & 0 & b_3 & b_2 & b_1 & b_0 & 0 \\\\0 & 0 & 0 & b_3 & b_2 & b_1 & b_0 \\\\\\end{array}\\right)$" ], [ "Solution", "We are pleased to say that three teams completely solved this problem!", "They are Alexey Chilikov (Bauman Moscow State Technical University, Russia), the team of Radu Caragea, Madalina Bolboceanu and Miruna Rosca (Bitdefender, Romania), the team of Samuel Tang and Harry Lee (Hong Kong).", "Here we present the main idea for the solution.", "Case 1: $m\\leqslant n$.", "Let $P_1(x)=x^m$ and $P_2(x)=x^n+1$ .", "Then their Sylvester matrix is the following: $\\left(\\begin{array}{ccccccc}\\multicolumn{2}{c}{{\\bf I}_n} & \\multicolumn{1}{|c}{{\\bf 0}_{n\\times m}} \\\\ \\cline {1-3}\\multicolumn{1}{c|}{{\\bf I}_m} & \\multicolumn{1}{c|}{{\\bf 
0}_{m\\times (n-m)}} &{\\bf I}_m\\end{array}\\right),$ where ${\\bf I}_k$ denotes the $k\\times k$ identity matrix; and ${\\bf 0}_{k\\times \\ell }$ is the $k\\times \\ell $ zero matrix.", "Taking all operations over the the 2-element field, it is clear that $\\left(\\begin{array}{ccccccc}\\multicolumn{2}{c}{{\\bf I}_n} & \\multicolumn{1}{|c}{{\\bf 0}_{n\\times m}} \\\\ \\cline {1-3}\\multicolumn{1}{c|}{{\\bf I}_m} & \\multicolumn{1}{c|}{{\\bf 0}_{m\\times (n-m)}} &{\\bf I}_m\\end{array}\\right)\\cdot \\left(\\begin{array}{ccccccc}\\multicolumn{2}{c}{{\\bf I}_n} & \\multicolumn{1}{|c}{{\\bf 0}_{n\\times m}} \\\\ \\cline {1-3}\\multicolumn{1}{c|}{{\\bf I}_m} & \\multicolumn{1}{c|}{{\\bf 0}_{m\\times (n-m)}} &{\\bf I}_m\\end{array}\\right)=\\left(\\begin{array}{ccccccc}\\multicolumn{1}{c|}{{\\bf I}_n} & {\\bf 0}_{n\\times m} \\\\ \\cline {1-2}\\multicolumn{1}{c|}{{\\bf 0}_{m\\times n}} & {\\bf I}_{m}\\end{array}\\right)={\\bf I}_{m+n}.$ Thus, the considered Sylvester matrix is an involutory matrix.", "Therefore, its inverse is the Sylvester matrix as well.", "Case 2: $m>n$.", "Assume that the inverse of the Sylvester matrix of $P_1(x)$ and $P_2(x)$ is also the Sylvester matrix for two polynomials over the 2-element field, say $Q_1(x)=c_px^p+c_{p-1}x^{p-1}+\\dots +c_0$ , $Q_2(x)=d_qx^q+d_{q-1}x^{q-1}+\\dots +d_0$ , of degrees $p>0$ and $q>0$ respectively, which satisfy $p+q=m+n$ .", "The product of Sylvester matrices which correspond to $P_1(x),P_2(x)$ and $Q_1(x),Q_2(x)$ is equal to ${\\bf I}_{m+n}$ , in particular $\\left(\\begin{array}{cccccccc}a_m & a_{m-1} & \\dots & a_0 \\\\& a_m & a_{m-1} & \\dots & a_0 \\\\& & & \\ddots & \\ddots \\\\& & & a_m & a_{m-1} & \\dots & a_0 \\\\ \\cline {1-7}b_n & b_{n-1} & \\dots & b_0 & 0 & \\dots & 0 \\\\ \\cline {1-7}& b_n & b_{n-1} & \\dots & b_0 \\\\& & & \\ddots & \\ddots \\\\& & & b_n & b_{n-1} & \\dots & b_0\\end{array}\\right)\\cdot \\left(\\begin{array}{cc}c_p \\\\0 \\\\\\vdots \\\\0 \\\\d_q \\\\0 \\\\\\vdots 
\\\\0\\end{array}\\right)=\\left(\\begin{array}{cc}1 \\\\0 \\\\\\vdots \\\\0\\end{array}\\right)\\in \\mathbb {F}_2^{m+n}.$ The condition $q>n$ implies $b_nc_p=0$ , but $b_n=c_p=1$ since the polynomials $P_2(x)$ and $Q_1(x)$ have degrees $n$ and $p$ respectively.", "Therefore, it must hold $q\\leqslant n$ .", "Since $Q_2(x)$ has degree $q$ , then $d_q=1$ and $\\left(1+b_{n-q}\\right)=b_{n-q+1}=b_{n-q+2}=\\ldots =b_{n-q+\\min \\lbrace q,m-1\\rbrace }=0.$ From $b_n=1$ it follows that $\\min \\lbrace q,m-1\\rbrace <q$ , that is $m\\leqslant q$ .", "Finally, we get $m\\leqslant q\\leqslant n<m$ that is a contradiction.", "Thus, in the case $m\\leqslant n$ there exist invertible Sylvester matrices whose inverse are Sylvester matrices as well but for $m>n$ it does not hold." ], [ "Formulation", "Disjunct Matrices are used in some key distribution protocols for traitor tracing.", "Disjunct Matrices (DM) are a particular kind of binary matrices which have been applied to solve the Non-Adaptive Group Testing (NAGT) problem, where the task is to detect any configuration of $t$ defectives out of a population of $N$ items.", "Traditionally, the methods used to construct DM leverage on error-correcting codes and other related algebraic techniques.", "Let $A = (x_1^\\top , x_2^\\top , \\ldots , x_{N}^\\top )$ be an $M\\times N$ binary matrix.", "Then, $A$ is called $t$ -disjunct if, for all subsets of $t$ columns $S = \\lbrace x_{i_1},\\ldots , x_{i_t}\\rbrace $ , and for all remaining columns $x_j \\notin S$ , it holds that $ {\\rm supp}(x_j) \\lnot \\subseteq \\bigcup _{k=1}^t {\\rm supp}(x_{i_k}),$ where ${\\rm supp}(x)$ denotes the set of coordinate positions of a binary vector $x$ with 1s.", "In other words, a matrix $A$ is $t$ -disjunct if for every subset $S$ of $t$ columns the support of any other column is not contained in the union of the supports of the columns in $S$ .", "Prove what is the minimum number of rows in a 5-disjunct matrix." 
], [ "Solution", "We must admit that the formulation of the problem did not include the condition which makes this problem non-trivial.", "The condition is that the number of columns must be greater than the number of rows.", "This formulation has practical significance and admits the following equivalent form: given $t$ , when does there exist a $t$ -disjunct matrix giving a testing scheme better than the trivial one that tests each item individually?", "Readers may find details regarding the Non-Adaptive Group Testing (NAGT) problem together with known results and the mentioned formulations in [12].", "The solution of the originally stated problem is 6: consider the $6\times 6$ identity matrix.", "This solution was discovered by several participants.", "However, some participants (Alexey Chilikov from Bauman Moscow State Technical University, Aleksei Udovenko from University of Luxembourg, the team of Henning Seidler and Katja Stumpp from Technical University of Berlin) obtained bounds for the number of rows depending on the parameter $t$ and the number of columns." ], [ "Winners of the Olympiad", "Here we list information about the winners of NSUCRYPTO'2018 in Tables REF , REF , REF , REF , REF , REF .", "Figure: Winners of NSUCRYPTO from 2014 to 2018 Table: Winners of the first round in school section A (“School Student”) Table: Winners of the first round, section B (in the category “University Student”) Table: Winners of the first round, section B (in the category “Professional”) Table: Winners of the second round (in the category “School Student”) Table: Winners of the second round (in the category “University Student”) Table: Winners of the second round (in the category “Professional”)" ] ]
1906.04480
[ [ "Cation Disorder and Lithium Insertion Mechanism of Wadsley--Roth\n Crystallographic Shear Phases" ], [ "Abstract Wadsley--Roth crystallographic shear phases form a family of compounds that have attracted attention due to their excellent performance as lithium-ion battery electrodes.", "The complex crystallographic structure of these materials poses a challenge for first-principles computational modelling and hinders the understanding of their structural, electronic and dynamic properties.", "In this article, we study three different niobium-tungsten oxide crystallographic shear phases (Nb$_{12}$WO$_{33}$, Nb$_{14}$W$_{3}$O$_{44}$, Nb$_{16}$W$_5$O$_{55}$) using an enumeration-based approach and first-principles density-functional theory calculations.", "We report common principles governing the cation disorder, lithium insertion mechanism, and electronic structure of these materials.", "Tungsten preferentially occupies tetrahedral and block-central sites within the block-type crystal structures.", "The lithium insertion proceeds via a three-step mechanism, associated with an anisotropic evolution of the host lattice.", "Our calculations reveal an important connection between long-range and local structural changes: in the second step of the mechanism, the removal of local structural distortions leads to the contraction of the lattice along specific crystallographic directions, buffering the volume expansion of the material.", "Niobium-tungsten oxide shear structures host small amounts of localised electrons during initial lithium insertion due to the confining effect of the blocks, but quickly become metallic upon further lithiation.", "We argue that the combination of local, long-range, and electronic structural evolution over the course of lithiation is beneficial to the performance of these materials as battery electrodes.", "The mechanistic principles we establish arise from the compound-independent crystallographic shear structure, and are therefore likely 
to apply to Ti/Nb oxide or pure Nb oxide shear phases." ], [ "Introduction", "There is a high demand for energy storage materials with improved performance in terms of energy and power density, cycle life, and safety.", "High-rate electrode materials specifically are needed to accelerate the adoption of electric vehicles by increasing power density and decreasing charging times.", "While strategies like nanostructuring have been used extensively to improve high-rate performance in materials like LTO[1] (Li4Ti5O12), this has many drawbacks, including high cost, poor stability, and poor volumetric energy density[2].", "However, nanostructuring is not always necessary to obtain high rates.", "Recent work has shown that very high rates can be achieved in micrometre-sized particles of complex oxides of niobium (T-Nb2O5[3]), ternary Nb/W oxides (Nb16W5O55 and Nb18W16O93 [4]), and ternary Ti/Nb oxides (TiNb24O62 [5] and TiNb2O7).", "In addition to the high-rate capability of these materials, their voltage range of +2.0 V to +1.0 V vs. 
Li$^+$ /Li minimises electrolyte degradation and SEI formation, and avoids safety issues such as lithium dendrite formation.", "Crystallographically, these complex oxides fall into two structural families: compounds with a tungsten bronze-type structure (T-Nb2O5[6], [3] and Nb18W16O93[4]), and Wadsley–Roth phases with block-type structures.", "The present work is concerned with the family of Wadsley–Roth phases, which encompasses a large number of crystallographically similar compounds in the Nb2O5–WO3 [7] and Nb2O5–TiO2 [8] phase diagrams, in addition to pure Nb2O5 [9] and Nb2O5- [10] phases.", "The crystal structures of these compounds consist of blocks of corner-sharing octahedra of size $n\\times m$ , which are connected to each other by edge-sharing (Fig.", "REF ).", "The edge-sharing connections between the octahedra are present along so-called crystallographic shear planes, which frame the blocks.", "Perpendicular to the $n\\times m$ plane the units connect infinitely (Fig.", "REF d,e), and tetrahedral sites are present between the blocks in some structures to fill voids.", "Locally, the structures show strongly distorted octahedra due to a combination of electrostatic repulsion between cations and the second-order Jahn-Teller (SOJT) effect [11], [12].", "NbO6 octahedra at the block periphery are more strongly distorted than those in the centre, resulting in zigzag-like patterns of metal cations along the crystallographic shear planes (Fig.", "REF d).", "The block size depends in part on the oxygen-to-metal ratio of the compound; a higher number of oxygens per metal allows more corner-sharing connections between octahedra, and therefore larger blocks.", "Lithium insertion into Wadsley–Roth phases was first studied systematically by Cava et al.", "in 1983 [13].", "The authors examined 12 different niobium oxide-based shear structures and showed that the crystallographic shear stabilises the structures against undesirable octahedral tilt distortions of the 
host framework, which had previously been observed in ReO3 [14].", "The frustration of distortions allows lithium diffusion pathways to be kept open.", "Since the initial report by Cava et al., there have been articles detailing the electrochemical properties of many Wadsley–Roth phases, including TiNb2O7 [15], [16], Ti2Nb10O29 [17], [18], TiNb24O62 [5], Nb12WO33 [19], [20], Nb14W3O44 [21], [22], Nb16W5O55 [4], Nb12O29 [23], [24], H-Nb2O5 [3], and PNb9O25 [25].", "These studies have shown good performance of Wadsley–Roth phases as Li-ion battery electrodes, with a remarkable high-rate capability [4], [16].", "Ultrafast lithium diffusion was recently observed in Nb16W5O55 with pulsed field gradient NMR spectroscopy and electrochemical techniques [4].", "A strong similarity in the structural and phase evolution between different Wadsley–Roth phases has been noted [13], [4].", "The phase evolution and voltage profile up to 1.5 Li/TM (Li per transition metal) can generally be divided into three regions: a first solid solution region with a sloping voltage profile is followed by a two-phase-like region where the voltage profile slope is flatter.", "Depending on the specific Wadsley–Roth phase, this second region of the voltage profile might be almost flat (as in H-Nb2O5 [3]), or have a small slope (Nb16W5O55 [4]).", "Beyond the two-phase-like region, another solid solution ensues.", "The similarity of their electrochemistry is highlighted by the fact that most articles reporting properties of a single Wadsley–Roth phase draw comparisons to other compounds of the family [5], [4], [25], [13], [19].", "Cation ordering preferences (such as in the Ti/Nb oxides [26]) and electronic structure features [10], [27] are also very similar.", "Despite the rapidly growing number of experimental studies on Wadsley–Roth phases, reports of computational modelling are almost absent.", "First-principles modelling of Wadsley–Roth phases is both difficult and computationally expensive; the 
crystal structures are complex, have large unit cells with a multitude of lithium sites, and, in Nb/Ti and Nb/W oxides, feature inherent cation disorder.", "In this work, we study the cation disorder, lithium insertion mechanism, and electronic structure of three different Wadsley–Roth phases (Nb12WO33, Nb14W3O44, and Nb16W5O55) using first-principles density-functional theory calculations.", "Their similarity in terms of both structure (cf.", "Fig.", "REF ) and composition calls for a combined study.", "Building on our previous work on the electronic structure of Nb2O5- crystallographic shear phases [27], this study is motivated by the recent report of structural mechanisms in LixNb16W5O55 [4], which we aim to understand from first principles.", "The article is structured as follows.", "We begin by studying the Nb/W cation disorder using an enumeration approach.", "We establish cation ordering preferences and the lowest-energy cation configurations, and discover a variability of the local structure caused by the cation disorder.", "Next, we present a lithium insertion mechanism for Nb12WO33 in terms of the sequence of occupied lithium sites, the voltage profile, and the local and long-range structural evolution.", "We show that the mechanistic principles established for Nb12WO33 are transferable to Nb14W3O44 and Nb16W5O55.", "In fact, Nb12WO33 and Nb14W3O44 can serve as model compounds to study the more complex Nb16W5O55.", "After investigating the electronic structure of the materials over the course of lithium insertion, we go on to discuss common mechanistic principles for this structural family, and their implications for battery performance.", "We conclude by suggesting new directions for theory and experiment on structural, dynamic, and electrochemical properties of Wadsley–Roth phases." 
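The enumeration-based approach used to study the cation disorder can be illustrated with a toy example: given the permutations of the cation sites induced by the parent structure's space group, the symmetrically distinct tungsten placements are exactly the orbit representatives under that permutation action. A schematic sketch (hypothetical helper names, not the authors' code; the site permutations are assumed to be supplied):

```python
from itertools import combinations

def distinct_configurations(n_sites, n_w, site_perms):
    """Enumerate symmetrically distinct placements of n_w tungsten atoms on
    n_sites cation sites. site_perms is the full list of site permutations
    (tuples p with p[i] the image of site i, including the identity) induced
    by the parent structure's space group. Returns orbit representatives
    together with their degeneracies (orbit sizes)."""
    seen = set()
    representatives = []
    for combo in combinations(range(n_sites), n_w):
        config = frozenset(combo)
        if config in seen:
            continue
        # Orbit of this placement under all symmetry operations.
        orbit = {frozenset(p[i] for i in config) for p in site_perms}
        seen |= orbit
        representatives.append((sorted(config), len(orbit)))
    return representatives
```

The same orbit-counting idea, applied to the cation sites of a block with the crystal's space group symmetry, is what reduces the raw combinatorial count to the 172 inequivalent configurations for Nb14W3O44 and 45 for Nb16W5O55 quoted in the Methods section.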
], [ "Methods", "Structure enumeration.", "Symmetrically distinct cation configurations of Nb/W within the (primitive, single block) unit cells of Nb14W3O44 and Nb16W5O55 were enumerated with a homemade program using established techniques [28] based on a reduction of configurational space by the space group symmetry of a parent structure.", "Overall, 172 cation configurations were enumerated for Nb14W3O44, and 45 for Nb16W5O55.", "Further details can be found in the Supporting Information and the Results section.", "The minority cation occupancy (i.e.", "tungsten occupancy) $P_{S}$ for site $S$ within the crystal structure was obtained according to $P_{S} = \\frac{1}{Z} \\sum _i \\frac{N_{S,i}}{m_S}\\ g_i e^{-\\frac{E_i}{k_BT}}\\,,$ where the symmetrically inequivalent cation configurations are labelled by $i$ , and their degeneracy and energy above the ground state (per unit cell) are $g_i$ and $E_i$ , respectively.", "$N_{S,i}$ denotes the number of positions of type $S$ that are occupied by tungsten in cation configuration $i$ , and $m_S$ is the total number of positions of type $S$ within the unit cell.", "The partition function is given by $Z = \\sum _i g_i e^{-\\frac{E_i}{k_BT}}$ .", "Equation REF can be understood as a thermodynamic average of the fraction of positions of type $S$ occupied by tungsten.", "The lowest energy cation configuration of each phase was used as a starting point to generate structural models of lithiated phases.", "Structures of lithiated phases were generated by enumerating all possible lithium-vacancy configurations over sets of lithium sites in Nb12WO33 and Nb14W3O44.", "The crystal symmetry was kept during this enumeration.", "Overall, 2048 structures were enumerated for Nb12WO33, and 256 for Nb14W3O44.", "Due to the much larger number of possible lithium sites in Nb16W5O55, a full enumeration of lithium-vacancy configurations and subsequent DFT optimisation was computationally too expensive.", "Further details regarding the 
generation of lithiated structures can be found in the Results section.", "Computational details.", "All calculations were performed using the planewave pseudopotential DFT code CASTEP [29] (version 18.1).", "The gradient-corrected Perdew-Burke-Ernzerhof exchange-correlation functional for solids [30] (PBEsol) was used in the calculations presented in this work, unless otherwise specified.", "Many of the results we report are structural, and the PBEsol functional was therefore chosen because it provides better agreement with experimental lattice parameters than PBE or LDA [30].", "However, all of the results presented in this article show the same trends if computed with PBE instead.", "Structural optimisations were always performed in two steps: an initial relaxation using efficient parameters, followed by re-optimisation using very high accuracy parameters.", "For efficient parameters, core electrons were described using Vanderbilt “ultrasoft” pseudopotentials [31], generated using the `efficient' specifications listed in Table S1.", "These require smaller planewave kinetic energy cutoffs than the `high accuracy' ones.", "The planewave basis set was truncated at an energy cutoff of 400 eV, and integration over reciprocal space was performed using a Monkhorst-Pack grid [32] with a spacing finer than $2\pi \times 0.05$  Å$^{-1}$ .", "Higher accuracy settings were used to refine low-energy lithiated structures and all cation configurations.", "Harder, more transferable ultrasoft pseudopotentials were generated using the CASTEP 18.1 “on-the-fly” pseudopotential generator with the `high accuracy' specifications listed in Table S1.", "The planewave cutoff energy was set to 800 eV, and the Monkhorst-Pack grid spacing was chosen to be $2\pi \times 0.05$  Å$^{-1}$ for calculations on pristine Nb12WO33, Nb14W3O44 and Nb16W5O55 structures.", "For the lithiated phases, the Monkhorst-Pack grid spacing was set to $2\pi \times 0.03$  Å$^{-1}$ due to their metallicity.", "Spin 
polarisation had a negligible effect on total energies, and structure optimisations using PBEsol were therefore performed without spin polarisation.", "Atomic positions and lattice parameters of all structures were optimised until the force on each atom was smaller than 0.01 eV/Å, and the maximum displacement of any atom over two consecutive optimisation steps was smaller than $10^{-3}$  Å. DFT+$U$ calculations (following the method of Ref.", "[33]) were performed to assess the impact of a change in the level of theory on thermodynamics and electronic structure.", "A value of $U = 4$ eV was chosen for the niobium and tungsten $d$ -orbitals if not specified otherwise.", "This choice is in line with previous work [27] on niobium oxides.", "We note (and later demonstrate) that the results are mostly independent of the inclusion and exact value of the $U$ parameter.", "Thermodynamics.", "The thermodynamic phase stability of lithiated niobium-tungsten oxide phases was assessed by comparing the formation energy of different phases.", "For the pseudobinary phases considered in this work, a formation energy is defined as $E_f = \\frac{E\\lbrace {Li_xY}\\rbrace - x E\\lbrace \\mathrm {Li}\\rbrace - E\\lbrace \\mathrm {Y}\\rbrace }{1+x}$ for Y = Nb12WO33, Nb14W3O44, or Nb16W5O55.", "The formation energies were plotted as a function of the Li number fraction $c_{\\mathrm {Li}} = \\frac{\\mathrm {x}}{1+\\mathrm {x}}$ .", "A pseudo-binary convex hull was constructed between the Y and Li end members at $(c_{\\mathrm {Li}}, \\mathrm {E_f})=(0,0);(1,0)$ .", "Thermodynamically stable phases at 0 K lie on the convex hull tieline.", "Voltages for transitions between phases lying on the convex hull were calculated from the DFT total energies.", "For two phases on the hull, $\\mathrm {Li}_{\\mathrm {x}_1}\\mathrm {Y}$ and $\\mathrm {Li}_{\\mathrm {x}_2}\\mathrm {Y}$ , with $\\mathrm {x}_2 > \\mathrm {x}_1$ , the voltage $V$ for a reaction ${Li_{x_1}Y} + (\\mathrm {x}_2 - \\mathrm 
{x}_1)\\ \\mathrm {Li} \\rightarrow {Li_{x_2}Y}$ is given by $\\begin{aligned}V ={} & -\\frac{\\Delta G}{x_2-x_1} \\approx -\\frac{\\Delta E}{x_2 - x_1} \\\\= & -\\frac{E(\\mathrm {Li}_{x_2}\\mathrm {Y}) - E(\\mathrm {Li}_{x_1}\\mathrm {Y})}{x_2-x_1} + E(\\mathrm {Li})\\,,\\end{aligned}$ where the Gibbs free energy is approximated by the internal energy, as the $pV$ and thermal contributions are small[34].", "Electronic structure and postprocessing.", "Bandstructure calculations were performed for high-symmetry Brillouin zone directions according to those obtained from the SeeK-path package [35], which relies on spglib [36].", "A spacing between $\\mathbf {k}$ -points of $2\\pi \\times 0.025$ $^{-1}$ was used for the bandstructures.", "Density of states calculations were performed with a grid spacing of $2\\pi \\times 0.01$ $^{-1}$ , and the results were postprocessed with the OptaDOS package [37] using the linear extrapolative scheme [38], [39].", "The c2x [40] utility and VESTA [41] were used for visualisation of crystal structures and density data.", "Data analysis and visualisation was performed with the matador [42] package.", "Figure: Symmetrically inequivalent transition metal cation sites and their occupancies in (a) Nb14W3O44, and (b) Nb16W5O55.", "The labelling follows Cheetham and von Dreele  for Nb14W3O44, and Wadsley and Roth  for Nb16W5O55.", "A temperature of 1200  ∘ ^\\circ C was used to determine the cation occupancies.", "The positions of axes of fourfold symmetry (Nb14W3O44) and twofold symmetry (Nb16W5O55) are indicated by circling arrows.", "In both structures, tungsten preferentially occupies the tetrahedral and block-central sites.Table: Tungsten occupancies on cation sites in Nb14W3O44.", "All sites except M5 have a multiplicity of four.", "Taking into account the degeneracies, the number of tungsten atoms in a single block (Fig. 
)", "is three, as required.", "The synthesis temperature is reported as 1350 $^{\circ }$ C , or 1050 $^{\circ }$ C .", "Note that the refinement of fractional occupancies reported in Ref.", "was performed in $I4/m$ , while the DFT predictions are for $I\bar{4}$ .", "The multiplicity of the tetrahedral site is different in these two spacegroups, and the experimental occupancy has been adjusted accordingly.", "The experimental data  includes estimated standard deviations." ], [ "Cation Disorder", "Neutron diffraction studies have established that the cation distribution in block-type structures is disordered but not random [26], [43], [5].", "Some amount of disorder is also suggested by single crystal X-ray diffraction studies [8], [44].", "Labelling conventions for the cation sites in the crystal structures are shown in Figure REF , and abide by literature conventions as much as possible.", "To derive fractional occupancies for the tungsten cations in Nb14W3O44 and Nb16W5O55 we apply a Boltzmann distribution (Eqn.", "REF ) using the DFT total energies of the symmetrically inequivalent cation configurations.", "The results are listed in Tables REF and S2 for temperatures of 1050–1350 $^{\circ }$ C, which corresponds to the range of synthesis and annealing temperatures [7], [43], [4].", "Cation occupancies in Nb14W3O44 and Nb16W5O55 at 1200 $^\circ $ C are presented in Fig.", "REF using a colormap.", "Plots of the tungsten occupancies for an extended temperature range are available in the Supporting Information (Figs.", "S1, S2).", "If the cation distribution in Nb14W3O44 were completely random, each site would have a tungsten occupancy of $\frac{3}{17}\approx 0.176$ .", "Instead, tungsten is predicted to favour the M5 tetrahedral position and the M1 block-center position (Table REF ).", "The preferential occupancy of tungsten on the purely corner-shared M1 position is expected; the metal-metal distances are larger in the block center, and the occupation of 
these sites by the more highly charged tungsten cations (assuming W$^{6+}$ vs. Nb$^{5+}$ ) reduces the overall electrostatic repulsion.", "Preferential occupation of tungsten on the tetrahedral site is due to the shorter M-O distances, which, together with the higher charge of the tungsten cations, lead to better covalency and stronger bonds.", "In fact, the 15 lowest energy structures generated by enumeration and DFT optimisation all have tungsten on the tetrahedral site.", "The two lowest energy cation configurations both have the tetrahedral site occupied by tungsten, in addition to two M1 sites.", "The lowest energy configuration has spacegroup $C2$ , whereas the second lowest configuration has spacegroup $P1$ (+123 meV/f.u.", "above groundstate).", "The highest energy structure lies +1.29 eV/f.u.", "above the ground state.", "The cation ordering in Nb14W3O44 has previously been investigated by Cheetham and Allen using neutron powder diffraction [43].", "DFT-derived fractional occupancies are in reasonable agreement with experiment (Table REF ).", "The overall sequence of site occupancy preferences is the same.", "The occupancy of the tetrahedral site M5 is predicted to be larger, while the occupancy of M2 is predicted to be much smaller.", "Those two site occupancies also have the largest estimated experimental uncertainty (Table REF ).", "Given the very similar local structures of M2, M3, and M4, the large occupancy of M2 as compared to M3 and M4 seems inconsistent.", "Determining occupancies in these large and complex structures is difficult, particularly when the neutron scattering lengths are not very different ($7.054$ and $4.86\\times 10^{-15}$ m for Nb and W, respectively) [45].", "We suggest that the cation distribution should be revisited, perhaps with a joint X-ray/neutron study, to help constrain the occupancies.", "X-ray diffraction studies suggest that the tungsten atom in Nb12WO33 is ordered on the tetrahedral site [44].", "An enumeration within 
the primitive unit cell of Nb12WO33 produces only 7 structures, for the 7 symmetrically inequivalent sites.", "Placing the tungsten atom on the tetrahedral site results in the lowest energy structure.", "The second lowest energy structure with tungsten in the block-center lies +364 meV/f.u.", "above the ground state, suggesting a strong preference for the tetrahedral site even compared to the block-center position.", "Experimental data regarding the cation ordering in Nb16W5O55 is not available.", "However, the structure of Nb16W5O55 is very similar to that of Nb14W3O44, with only one additional row of octahedra within each block.", "For our calculations, the tetrahedral site has been fully occupied by tungsten given the preference of tungsten for the tetrahedral site in Nb14W3O44 and Nb12WO33.", "We have also constrained ourselves to configurations in space group $C2$ .", "The more highly charged tungsten cations again prefer to occupy the purely corner-shared octahedral positions in the block middle of Nb16W5O55; occupancies of sites M5, M6, and M8 are by far the largest (Fig.", "REF , Table S2).", "The lowest energy cation configuration for Nb16W5O55 has tungsten on sites M8 and M5, while the second and third lowest energy configurations have tungsten on sites M8 and M6 (+11 meV/f.u.", "vs. groundstate) and M5 and M6 (+147 meV/f.u.", "vs. 
groundstate).", "The highest energy cation configuration lies +2.27 eV/f.u.", "above the groundstate.", "There are several effects that are not taken into account by the DFT prediction: (1) the modelling necessarily assumes that the material is in thermal equilibrium, but depending on synthesis temperature and annealing time, the kinetics of solid state diffusion might play a role in determining the site occupancies, (2) only single-block cation configurations were studied, limiting the length scale of interactions, (3) at the high synthesis temperature of the metal oxide, temperature effects such as volume expansion, harmonic or even anharmonic vibrations certainly play a role, and the DFT energy is a good, but limited, substitute for the full free energy.", "Nevertheless, the lowest energy single-block cation configurations are the best choice to use in modelling the lithiation mechanism, and are shown explicitly in Fig.", "S3.", "We include crystallographic information files (CIF) for all PBEsol-optimised symmetrically inequivalent cation configurations of Nb14W3O44 and Nb16W5O55 in the Supporting Information of this article, in addition to a table of their space groups, relative energies, and degeneracies.", "The individual cation configurations deviate from the idealised parent crystal structure by different amounts.", "For both Nb14W3O44 and Nb16W5O55, the distributions of lattice parameters and unit cell volumes of the cation configurations show a spread of 1–2% around the mean.", "In addition to slight differences in lattice parameters, the MO6 octahedra of both Nb14W3O44 and Nb16W5O55 exhibit different distortions depending on the cation configuration.", "To analyse these distortions, we introduce three distortion measures: a dimensionless bond angle variance $\Delta (\theta _{\mathrm {oct}})$ , the quadratic elongation $\lambda _{\mathrm {oct}}$ , and an off-centering distance $d_{\mathrm {oct}}$ .", "The bond angle variance and quadratic elongation 
are commonly used distortion measures [46] implemented, for example, in VESTA [41].", "The $\\Delta (\\theta _{\\mathrm {oct}})$ measure is defined as the bond angle variance divided by the square of the mean to make the quantity dimensionless: $\\Delta (\\theta _{\\mathrm {oct}}) = \\frac{1}{12} \\sum _{i=1}^{12} \\bigg [\\frac{\\theta _i - \\langle \\theta _i\\rangle }{\\langle \\theta _i\\rangle }\\bigg ]^2\\,,$ where the 12 O-M-O angles are denoted by $\\theta _i$ .", "Note that only angles which are 90$^\\circ $ in an ideal octahedron are included.", "The quadratic elongation $\\lambda _{\\mathrm {oct}}$ is defined as $\\lambda _{\\mathrm {oct}} = \\frac{1}{6} \\sum _i^6 \\Big (\\frac{l_i}{l_0}\\Big )^2\\,,$ where $l_i$ are the M-O bond lengths, and $l_0$ is the M-O bond length for an octahedron with $O_h$ symmetry whose volume is equal to that of the distorted octahedron [46].", "The off-centering distance is defined as the distance between the center of the O6 polyhedron and the metal position $d_{\\mathrm {oct}} = \\bigg \\Vert \\mathbf {r}_{\\mathrm {M}} - \\sum _{i=1}^{6} \\frac{\\mathbf {r}_{\\mathrm {O},i}}{6} \\bigg \\Vert \\,,$ where $\\mathbf {r}_M$ is the metal position and $\\mathbf {r}_{\\mathrm {O},i}$ are the oxygen positions.", "Both $\\Delta (\\theta _{\\mathrm {oct}})$ and $d_{\\mathrm {oct}}$ are zero for an ideal octahedron, and $\\lambda _{\\mathrm {oct}}$ is one.", "The three distortion measures are plotted in Fig.", "REF for the M1–M4 sites in Nb14W3O44 for all 172 cation configurations, but we note that not all configurations will contribute equally due to their different Boltzmann weight.", "Figure: Distortion measures for octahedral positions M1–M4 (cf.", "Fig. 
)", "for all 172 cation configurations of Nb14W3O44.", "The block-central M1 octahedra are more symmetric than the peripheral M2–M4 octahedra.", "All sites show a significant spread in their octahedral distortion measures. The M1 block-center octahedron is, on average, less distorted than the block-peripheral M2, M3, and M4 octahedra.", "However, all octahedral positions show a significant spread in their distortion measures, indicating a dependence of the local structure on the cation configuration.", "To put these results into context, we note that quadratic elongation measures for octahedra in inorganic compounds fall in the range of 1.00–1.07 [46].", "Nb14W3O44 exhibits this entire range of distortions if all transition metal sites and cation configurations are considered together.", "The off-center distances show a spread of approximately 0.15–0.2 Å.", "Given the convergence tolerance of $10^{-3}$ Å for the DFT geometry optimisation, this indicates a significant static disorder in the atomic positions.", "Similar results are obtained for Nb16W5O55, also showing weaker distortions for the block-central sites (M5, M6, M8) and a significant spread in the distortion measures for all transition metal octahedra in the structure (Fig.", "S4).", "Overall, these results indicate a variability of the local structure at the unit cell level in mixed-metal shear phases that is not captured by a single cation configuration.", "Each cation configuration has a different set of cation-cation neighbour patterns, which can cause different local distortion directions and strengths.", "In this study, only cation configurations within the primitive unit cell have been considered.", "Effects on a longer range can be important, and would lead to a more continuous variation of the local structure.", "For example, there are two sets within the distortion measures for tungsten on the M2 site (Fig.", "REF ), separated by a gap.", "The more distorted set corresponds to WO6 octahedra 
edge-sharing with two other WO6 octahedra along the crystallographic shear plane, while the less distorted set corresponds to WO6 edge-sharing with two NbO6 octahedra.", "Configurations within a supercell along the $c$ direction (cf.", "Fig.", "REF ) would include WO6 octahedra sharing edges with one NbO6 and one WO6 octahedron, and likely close the gap.", "Both niobium and tungsten are generally classified as intermediate SOJT distorters within the group of $d^0$ cations [47].", "In Nb14W3O44, niobium and tungsten show very similar distortion strengths on the M1 positions, while the distortion for tungsten seems to be weaker for sites M2–M4.", "Given the local structure variability in Nb/W oxide shear structures, it is very likely that the Ti/Nb structures show the same properties, since $d^0$ titanium is also classified as an intermediate distorter.", "Stronger distortions are generally exhibited by molybdenum, while zirconium shows only very weak distortions [47].", "It would be interesting to examine the effect of Mo/Zr doping on the local structure in shear phases.", "Figure: (a) Types of lithium sites present in Nb12WO33.", "Window positions are fourfold coordinated by oxygens, pocket positions fivefold.", "The circling arrow marks the twofold rotation axis of the crystal structure.", "This symmetry element is kept for the enumeration of lithiated structures.", "(b) Local structure of lithium sites and site energies in Nb12WO33.", "Only one of each pair of equivalent sites is shown.", "Insertion into fivefold coordinated sites is energetically more favourable.", "The vertical window positions next to the crystallographic shear planes (sites 2, 3, 4) are too large for fourfold coordination of lithium.", "Niobium shown in dark blue, lithium in purple, and oxygen in orange.Lithium sites in block-type structures divide into three sets; fivefold coordinated `pocket' sites at the edge of the block, fourfold coordinated horizontal `window' positions, and fourfold 
coordinated vertical `window' positions (Fig.", "REF a).", "These sites have been deduced by neutron diffraction studies for lithiated block-type structures TiNb2O7 and H-Nb2O5 [48], [49].", "We will assume and verify the presence of these sites for Nb12WO33.", "The lithium site energies and local structures in Nb12WO33 are shown in Fig.", "REF b.", "Site energies and structures were obtained by placing a single lithium atom into a ($1\\times 2\\times 1$ ) supercell of Nb12WO33 (cf.", "Fig.", "REF ) and optimising the structure.", "The site energies $E_{f,i}$ were calculated as $E_{f,i} = E_{i} - E_{\\mathrm {SC}} - E(\\mathrm {Li})\\,,$ where $E_{i}$ is the energy of the supercell with a lithium atom placed at site $i$ , $E_{SC}$ is the energy of the supercell, and $E(\\mathrm {Li})$ is the energy of bulk lithium.", "A comparison of the site energies shows that the insertion into fivefold coordinated sites is energetically more favourable.", "Horizontal window positions have a symmetric arrangement of oxygen atoms, while vertical window positions and some of the pocket sites are less symmetric.", "In the horizontal window position, the lithium ion sits slightly above the plane formed by the four oxygen atoms.", "The vertical window positions (sites 2, 3, and 4) are too large for fourfold coordination of lithium by the oxygen atoms, and insertion into these sites is energetically the least favourable.", "The resulting threefold coordinated lithium ion moves far off the plane formed by the oxygens.", "The single site energies of around $-2.1$ eV agree well with the starting point of the voltage profile at 2 V vs. 
Li$^+$ /Li [20].", "In order to simulate the lithium insertion into Nb12WO33 over the entire range of lithium content, lithiated structures LixNb12WO33 were generated by enumerating all possible lithium-vacancy configurations for the sites shown in Fig.", "REF .", "The special position in the center of the block (site 1) was fixed to be occupied.", "Using the remaining 11 independent sites for Nb12WO33, $2^{11}=2048$ lithiated structures result, for stoichiometries of LixNb12WO33 with $x$ ranging from 1 to 23 in steps of 2.", "This enumeration produces `snapshots' of the structure and energetics of LixNb12WO33 at specific stoichiometries.", "The convex hull of the lowest energy LixNb12WO33 structures (Fig.", "S5) shows stable or nearly stable phases for each of the stoichiometries examined, indicating that no extended two-phase regions occur.", "To reliably capture the lithium insertion mechanism, it is useful to include metastable structures (i.e.", "up to a certain cutoff energy above the convex hull tieline) in the analysis.", "These metastable structures could be accessed at finite temperatures.", "If only thermodynamically stable structures are considered, there is no simple sequence of occupation of lithium sites (Fig.", "REF ), although there is a slight initial preference for occupation of fivefold coordinated sites and undistorted fourfold sites, especially if metastable structures are included.", "Both site energies (Fig.", "REF ) and Li-Li interactions are important for determining the lithium insertion sequence.", "Figure: Occupation of lithium sites for each sampled stoichiometry.", "Lithium sites labelled according to Fig. 
.", "Bold dots correspond to sites occupied in the structure on the convex hull tieline, smaller dots mark sites that are occupied in structures up to 200 meV/f.u.", "above the convex hull tieline.", "There is no simple sequence of lithium site occupation.", "A comparison of the experimental [20] and DFT-predicted voltage profiles (calculated with Eqn.", "REF ) at the GGA and GGA+$U$ levels of theory is shown in Figure REF .", "The DFT-predicted voltage profiles are necessarily composed of abrupt step changes due to the discrete number of stoichiometries, and only qualitative comparisons between experimental and DFT-predicted voltage profiles should be made.", "We also note that the experimental voltage profile has not explicitly been recorded under equilibrium conditions.", "Figure: Experimental voltage profile (orange, digitised from Ref. )", "compared to DFT predictions: PBEsol (blue), PBEsol+$U$ for $U=3$ eV (green), and $U=4$ eV (red).", "The predicted voltage profiles are composed of steps due to the discrete sampling of stoichiometries, and are in qualitative agreement with the experimental profile.", "Compared to the experiment, PBEsol slightly underestimates the average insertion voltage; the average experimental voltage up to 1 Li/TM is 1.65 V, whereas PBEsol predicts 1.44 V. The average insertion voltages evaluated with PBE and LDA are 1.30 V and 1.70 V, respectively.", "We note that the inclusion of a $U$ value for the niobium $4d$ orbitals has a minor effect on the average insertion voltage; for both $U=3$ eV and $U=4$ eV, the average insertion voltage up to 1 Li/TM is 1.45 V. 
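Each step of the DFT-predicted profiles above is the average insertion voltage between adjacent sampled stoichiometries, V = -[E(x2) - E(x1) - (x2 - x1)E(Li)] / (x2 - x1) in volts per inserted electron. A minimal sketch of how such a stepped profile is assembled; the total energies and the bulk-lithium energy below are placeholders for illustration, not our computed values:

```python
# Sketch: stepped voltage profile from DFT total energies per formula unit.
# V = -[E(x2) - E(x1) - (x2 - x1) * E_Li] / (x2 - x1), in volts per electron.
# All energies below are placeholders (eV per formula unit).

E_LI = -1.90  # energy of bulk lithium per atom (placeholder value)

def step_voltage(x1, E1, x2, E2, e_li=E_LI):
    """Voltage of the plateau between the Li_x1 and Li_x2 host phases."""
    return -(E2 - E1 - (x2 - x1) * e_li) / (x2 - x1)

def voltage_profile(hull_points, e_li=E_LI):
    """hull_points: (x, E_total) pairs on the convex hull, sorted by x.
    Returns (x_start, x_end, voltage) for each step of the profile."""
    steps = []
    for (x1, E1), (x2, E2) in zip(hull_points, hull_points[1:]):
        steps.append((x1, x2, step_voltage(x1, E1, x2, E2, e_li)))
    return steps

# Placeholder hull for Li_x(host): the profile is a sequence of flat steps.
hull = [(0, 0.0), (1, -3.5), (3, -10.3), (5, -17.0)]
for x1, x2, v in voltage_profile(hull):
    print(f"x = {x1}-{x2}: V = {v:.2f} V")
```

Because only discrete stoichiometries enter, the resulting profile is necessarily a staircase, which is why only qualitative comparison with the continuous experimental curve is meaningful.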
It is well known that GGA functionals underestimate lithium insertion voltages of transition metal oxides, but this can be corrected for late first-row elements (Fe/Mn/Co/Ni) by DFT+$U$ methods [50].", "The case of niobium oxides seems to be closer to that of $d^0$ titanium oxides, in that the use of DFT+$U$ is ineffective [51] (cf.", "Supplementary Methods).", "In addition, it is unclear what the value of $U$ should be for this case; the electronic structure and chemical bonding will change as a function of lithium concentration, possibly requiring different $U$ values at different points to be described accurately.", "However, total energies (and therefore phase stability) for sets of structures with different $U$ values cannot be compared.", "Since the difference between GGA and GGA+$U$ results is small, we will continue with a GGA treatment and defer discussion of the electronic structure to a later section.", "We note that while hybrid functionals like HSE06 are able to provide better agreement with experimental voltages, their use is computationally more expensive and errors of $\\pm 0.2$  V are still common [50].", "While the average insertion voltage is underestimated, the shape of the DFT-predicted profiles does show similarity to the experimental one; there seems to be a region with rather flat slope between $x=3$ and $x=11$ , which matches the flatter second region of the experimental profile.", "Despite the shallow gradient of the electrochemical profile, this region does not correspond to a true two-phase region.", "The similarity between the experiment and DFT prediction is present for both the PBEsol and PBEsol+$U$ results, and becomes clearer if the predicted profiles are shifted upwards by the difference in the average insertion voltage, corresponding to an adjustment of the Li chemical potential (Fig.", "S6).", "Figure: Structural evolution of LixNb12WO33 as a function of lithium content $x$ .", "(a) The lattice parameters evolve anisotropically; $b$ 
expands over the entire $x$ range, while $a$ and $c$ first expand until $x=5$ , contract, and then expand again beyond $x=13$ .", "The average octahedral distortion $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ decreases, with most of the decrease between $x=5$ and $x=11$ .", "(b) The local structure in (i) Nb12WO33 and (ii) Li13Nb12WO33 along the second row of octahedra in the $3\\times 4$ block.", "Niobium in dark blue, oxygen in orange, and lithium in purple.", "The interatomic distances demonstrate 1) an expansion perpendicular to the block plane, 2) a contraction within the block plane, and 3) a decrease of Nb-Nb distances along the shear planes.", "Compared to Nb12WO33, the NbO6 octahedra in Li13Nb12WO33 are more symmetric, corresponding to a smaller distortion measure $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ .", "The evolution of the lattice parameters of Nb12WO33 as a function of lithium content is anisotropic (Fig.", "REF a).", "Lattice parameter $b$ , which is perpendicular to the plane of the block, expands, and most of the expansion takes place between $x=5$ and $x=11$ .", "Lattice parameters $a$ and $c$ first expand until $x=5$ , and then contract to a minimum at $x=11$ that lies almost $0.3$ Å below the lattice parameters of the pristine structure.", "For $x > 11$ , $a$ and $c$ expand again.", "The lattice contraction occurs in the same region as the flatter part of the voltage profile (shaded blue in Figs.", "REF , REF a).", "The same evolution of the lattice parameters is also observed when phases up to 200 meV/f.u.", "above the convex hull tieline are included in the analysis (Fig.", "S7).", "These metastable structures might be formed during cycling, or be partially accessible due to finite temperature effects.", "However, the same lattice evolution would result.", "Over the course of lithium insertion, the transition-metal oxygen octahedra become progressively more symmetric, as shown by the evolution of the 
average distortion measure $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ (Fig.", "REF a), obtained according to $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle = \\frac{1}{N_{\\mathrm {oct}}} \\sum _{j=1}^{N_{\\mathrm {oct}}} \\Delta _j(\\theta _{\\mathrm {oct}})\\,.$ Compared to the pristine Nb12WO33, the distortions in both the MO6 octahedra and the lithium sites are largely removed in Li13Nb12WO33 (Fig.", "REF b).", "The evolution in the lattice parameters and the local structure is closely linked.", "Over the course of lithium insertion, the blocks of octahedra in Nb12WO33 first expand and then contract within the $ac$ plane (Fig.", "REF ).", "Perpendicular to the $ac$ plane, they expand monotonically.", "An overall expansion is expected upon lithium insertion, as accommodating additional atoms within the structure should increase its volume.", "The decrease in the lattice parameters within the block is associated with the MO6 octahedra symmetrisation.", "As the apical oxygens of the octahedra along the shear planes are pulled towards the block center, the lattice shrinks within the block plane (Fig.", "REF b).", "The block height expands from 3.81 Å to 4.09 Å, and the Nb-Nb distance along the shear plane decreases by over 0.4 Å.", "The structural changes are closely connected to the occupation of specific lithium sites; the thermodynamically stable phases of LixNb12WO33 (Fig.", "REF ) show occupation of undistorted sites (1, 6, and 8) for $x \\le 5$ .", "For $x\\ge 7$ , vertical window positions that were previously highly distorted are occupied, and the distortions in both lithium sites and octahedra start to be removed.", "Based on the predicted voltage profile, lattice evolution, and local structure changes, the overall phase evolution of Nb12WO33 through three regions can be rationalised.", "Taken together, and compared to previous experiments, these results suggest two solid solution regions, with a two-phase-like region in between.", 
"The two-phase-like region is marked by a block-plane contraction and a removal of distortions in the transition metal-oxygen octahedra." ], [ "Nb14W3O44 and Nb16W5O55", "Following on from Nb12WO33, we now demonstrate that very similar lithium insertion mechanisms apply to Nb14W3O44 and Nb16W5O55.", "Figure: Local and long-range structural evolution of Nb14W3O44 during lithium insertion.", "The anisotropic lattice evolution and the removal of the octahedral distortions ($\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ ) closely resemble those of Nb12WO33.", "(cf.", "Fig. ).", "Compared to Nb14W3O44, the transition metal-oxygen framework (bottom) for the fully lithiated Li16Nb14W3O44 structure shows significantly weaker octahedral distortions.", "Lithium ions have been omitted in Li16Nb14W3O44 for clarity.", "The removal of the distortions leads to a contraction of the lattice parameters within the block plane (perpendicular to $c$ ).", "Figure: Structure of the transition-metal oxygen framework in pristine and fully lithiated Nb16W5O55.", "Lithium ions have been omitted in the lithiated structure for clarity.", "The transition metal-oxygen framework in Li21Nb16W5O55 shows significantly more symmetric MO6 octahedra.", "The removal of the distortions leads to a contraction of the lattice parameters within the block plane (perpendicular to $b$ ).", "The crystal structures of Nb12WO33, Nb14W3O44, and Nb16W5O55 are all based on the block principle and feature the same local structural distortions (cf.", "Fig.", "REF ).", "This similarity leads to the presence of the same types of lithium environments in all three structures.", "The classification into pocket and window sites in Nb14W3O44 and Nb16W5O55 follows the same principles as for Nb12WO33 (Fig.", "S8).", "Notably, the vertical window positions next to the crystallographic shear planes (sites 3 and 5 in Nb14W3O44, and sites G, H, K in Nb16W5O55, Fig.", "S9) are strongly distorted due to the zigzag patterns of the 
octahedra (cf.", "Fig.", "REF ).", "Lithium site energies for Nb14W3O44 are in the range of -2.0 eV to -2.2 eV, while the site energies for Nb16W5O55 are slightly lower (-2.2 eV to -2.4 eV), due to the higher concentration of tungsten (cf.", "Table S3).", "Insertion into fivefold coordinated sites is energetically favoured.", "The enumeration for LixNb14W3O44 was performed in the same way as for LixNb12WO33.", "The special position in the center of the block was fixed to be unoccupied.", "Given the remaining 8 lithium sites, $2^8=256$ structures were enumerated.", "The number of lithiated structures generated by enumeration for Nb14W3O44 is much smaller compared to Nb12WO33.", "Structures with lithium content between those covered by enumeration were `interpolated' by using the low-energy enumerated structures as a starting point.", "For example, candidate structures of Li6Nb14W3O44 were generated by filling half of the lithium sites occupied in Li4Nb14W3O44 and Li8Nb14W3O44.", "Overall, the sampling of lithiated structures for Nb14W3O44 is coarser than for Nb12WO33, due to the higher computational cost of optimising the lithium configurations in a larger unit cell.", "A convex hull of the lowest energy LixNb14W3O44 phases is available in the Supporting Information (Fig.", "S10), and shows thermodynamically stable phases at every sampled stoichiometry.", "A full enumeration of lithium-vacancy configurations in Nb16W5O55 is not possible.", "The primitive unit cell contains 22 independent lithium sites, resulting in $2^{22}=4194304$ possible lithium-vacancy configurations.", "The structural evolution of Nb14W3O44 over the course of lithium insertion (Fig.", "REF ) bears a strong resemblance to that of Nb12WO33 (cf.", "Fig.", "REF ).", "Lattice parameter $c$ , perpendicular to the block plane, expands monotonically, with most of the expansion taking place between $x=12$ and $x=16$ (Fig.", "REF ).", "The parameter $a$ first increases, then shrinks below its initial 
value with a minimum at $x=16$ .", "Another expansion for $x>18$ follows.", "Note that lattice parameter $a$ (which is equal to $b$ in the $I\\bar{4}$ spacegroup of Nb14W3O44) was extracted as $a=\\sqrt{V/c}$ (cf.", "Fig.", "REF ).", "The same trend in the evolution of the lattice parameters is also observed when phases up to 100 meV/f.u.", "above the convex hull tieline are included in the analysis (Fig.", "S11).", "The distortions of the MO6 octahedra are removed as demonstrated by the decrease in the $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ measure (Eqn.", "REF ).", "The contraction and distortion removal is associated with occupation of the distorted vertical window positions, in direct analogy to Nb12WO33.", "However, the extent of the structural regions differs between Nb12WO33 and Nb14W3O44.", "In Nb12WO33, the maximum expansion of the $a$ and $c$ parameters occurs at 0.4 Li/TM, while in Nb14W3O44 it occurs at 0.71 Li/TM.", "The contraction region is also wider in Nb12WO33; it spans from 0.38 Li/TM to 1.0 Li/TM, while in Nb14W3O44, the contraction occurs from 0.71 Li/TM to 1.06 Li/TM.", "It is difficult to decide whether this is a physically significant difference, or simply due to the smaller number of lithium configurations that were sampled for Nb14W3O44 as compared to Nb12WO33.", "Lithium insertion into Nb14W3O44 initially proceeds via occupation of sites 1, 4, and 8 (cf.", "Fig.", "S8), but overall there is no simple sequence for the filling of lithium sites.", "The lowest energy structures for each stoichiometry are available as crystallographic information files (CIF) in the Supporting Information.", "In complete analogy to Nb12WO33, the local and long-range structural changes in Nb14W3O44 are linked.", "The removal of the distortions of the MO6 octahedra along the shear planes pulls the blocks closer together (Fig.", "REF ).", "As a result, the lattice parameter in the block plane, $a$ , decreases.", "While we cannot perform a thorough 
sampling of lithium-vacancy configurations for Nb16W5O55, the strong structural similarity between these three niobium-tungsten oxides suggests that the same trend of lattice and local structural evolution will apply to Nb16W5O55.", "As a proof-of-principle, we have produced a structural model for Li21Nb16W5O55 by occupying sites E, I, J, L, N, M, and G (cf.", "Fig.", "S8), which is shown in Fig.", "REF .", "Compared to the pristine structure, the lithiated structure shows a contraction in the block plane ($a=29.54\\ \\mathrm {Å}$ vs. $a=29.34$ Å, $c=23.10\\ \\mathrm {Å}$ vs. $c=22.95$ Å, for Nb16W5O55 and Li21Nb16W5O55 respectively), and an expansion perpendicular to the block plane ($b=3.81\\ \\mathrm {Å}$ vs. $b=4.06$ Å), in good quantitative agreement with experimental findings [4].", "The octahedral distortion measure $\\langle \\Delta (\\theta _{\\mathrm {oct}})\\rangle $ decreases from $10.25\\times 10^{-3}$ for Nb16W5O55 to $0.86\\times 10^{-3}$ for Li21Nb16W5O55.", "Clearly, lithium insertion causes the same overall structural changes in all three niobium-tungsten oxides Nb12WO33, Nb14W3O44, and Nb16W5O55." 
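The configuration counts quoted in this and the preceding section ($2^{11}=2048$ for Nb12WO33, $2^8=256$ for Nb14W3O44, $2^{22}$ for Nb16W5O55) follow from a simple enumeration over the independent lithium sites. A sketch, assuming (as in the enumeration described above) that each independent site contributes two lithium per cell through the retained twofold symmetry, and that fixed sites enter as a constant offset:

```python
from itertools import product
from math import comb

# Sketch: counting lithium-vacancy configurations over independent sites.
# Each independent site is either filled or empty; a filled site contributes
# li_per_site lithium per cell (two, via the retained twofold symmetry).

def configurations(n_sites):
    """All 2^n_sites fill patterns over the independent lithium sites."""
    return list(product((0, 1), repeat=n_sites))

def count_by_stoichiometry(n_sites, x_offset=0, li_per_site=2):
    """Number of configurations for each lithium content x."""
    counts = {}
    for config in configurations(n_sites):
        x = x_offset + li_per_site * sum(config)
        counts[x] = counts.get(x, 0) + 1
    return counts

# Nb12WO33: 11 independent sites, block-center site fixed occupied (offset 1)
nb12 = count_by_stoichiometry(11, x_offset=1)
assert sum(nb12.values()) == 2 ** 11 == 2048
assert sorted(nb12) == list(range(1, 24, 2))  # x = 1, 3, ..., 23 in steps of 2
assert nb12[11] == comb(11, 5)                # binomial count per stoichiometry

# Nb14W3O44: 8 independent sites, block-center site fixed unoccupied
assert sum(count_by_stoichiometry(8).values()) == 2 ** 8 == 256

# Nb16W5O55: 22 independent sites -> full enumeration is impractical
print(f"Nb16W5O55 would require 2^22 = {2 ** 22} configurations")
```

The exponential growth with the number of independent sites is what forces the coarser sampling for Nb14W3O44 and the interpolation-based approach for Nb16W5O55.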
], [ "Electronic Structure of Lithiated Phases", "In this section, we briefly present key electronic structure features of niobium-tungsten oxide shear phases.", "The electronic structure of the shear structures determines their electronic conductivity, which is important for high-rate battery performance.", "Additionally, the results presented here serve to explain the mixed-metal redox process and to justify the level of theory used in this study.", "We will focus on Nb14W3O44, but the results are transferable to Nb12WO33 and Nb16W5O55.", "Figure: Bandstructure and electronic densities of states for Nb14W3O44.", "The oxygen $2p$ dominated valence band is coloured in orange, while the Nb $4d$ /W $5d$ conduction band is shown in blue.", "Both flat and dispersive conduction bands are present.", "The long band structure path segments involve changes in wavevector $\\mathbf {k}$ along the direction reciprocal to the lattice parameter perpendicular to the block plane ($\\mathbf {c}^*$ for Nb14W3O44).", "The Fermi level (dashed line) sits on top of the valence band.", "The pristine shear phases are wide bandgap insulators (Fig.", "REF , S12).", "The metal cations are fully oxidised and formally have a $d^0$ configuration.", "The valence and conduction bands (Fig.", "REF ) are of O 2$p$ and Nb $4d$ /W $5d$ character, respectively.", "Lithium intercalation leads to $n$ -type doping of the material, introducing electrons into the previously empty conduction band.", "To understand the electronic structure of the mixed-metal shear phases, it is useful to draw comparisons to the niobium suboxides Nb2O5-x, which also feature block-type crystal structures [10].", "These compounds are formed by $n$ -type doping of H-Nb2O5, and show interesting properties: magnetism, which is rare in niobium oxides, flat bands around the Fermi energy, and an ability to host both localised and delocalised electrons [52], [10], [53], [27].", "We have previously shown that these features are fundamentally 
associated with the block-type crystal structure [27] and therefore also occur in Nb12WO33, Nb14W3O44, and Nb16W5O55 on $n$ -doping.", "In fact, the bandstructures of the niobium-tungsten oxides show a strong similarity to those of the suboxides and H-Nb2O5 [27], with both flat and dispersive conduction bands present (Fig.", "REF , S12).", "Insertion of a single lithium into the block of Nb14W3O44 leads to the formation of a localised electronic state (Fig.", "REF ).", "This localised state is spread over multiple (predominantly block-central) sites and lies in the plane of the block.", "The localised state forms as the Fermi level is moved into the conduction band by $n$ -doping, specifically by the occupation of the flat band (corresponding to the peak in the DOS, cf.", "Fig.", "REF ).", "A small gap is opened up between the localised state and the remainder of the conduction bands (cf.", "Fig.", "REF a,b; S15).", "Remarkably, this localisation is independent of the inclusion of a $U$ value on the Nb or W $d$ -orbitals.", "The localisation occurs even at the GGA level, although the gap is then very small (35 meV); the gap widens with the introduction of a $U$ value for the metal $d$ -orbitals (270 meV for $U=4$ eV).", "In either case, however, the spin and charge density distribution is the same.", "Additionally, the spin and charge distribution is also independent of whether the lithium ion is positioned in the block center or periphery (cf.", "Fig.", "REF c,d).", "This indicates that there is no strong coupling between the lithium ion and electron.", "A similar formation of localised electrons is also observed in Nb12WO33 and Nb16W5O55 (cf.", "Fig.", "S13).", "It would be interesting to determine experimentally the position of the localised dopant state relative to the bottom of the conduction band.", "Given that the charge associated with the localised electronic state resides predominantly on block-central sites (M1 in Nb14W3O44, cf.", "Fig.", "REF ), the block interiors are 
reduced first upon lithium insertion into niobium-tungsten shear oxides.", "Since the metal positions in the block center are mostly occupied by tungsten in Nb14W3O44 and Nb16W5O55, tungsten reduction is slightly favoured initially.", "In fact, this preference has been observed in Nb16W5O55 by X-ray absorption spectroscopy [4].", "Figure: Li1Nb14W3O44 density of states of an antiferromagnetic spin arrangement between blocks computed with (a) PBEsol and (b) PBEsol+$U$ ($U=4$ eV).", "A localised state (marked by the arrow) is present in both.", "Spin density plots (isosurface value $0.012\\ e^-/\\mathrm {Å}^3$ ) for structures with lithium positioned (c) in the center of the block (site 9, cf.", "Fig.", "S8), and (d) at the edge of the block (site 1).", "The spin density distribution is due to the localised state shown in (a) and (b), and is independent of the lithium position.", "Further $n$ -doping/lithium insertion up to Li3Nb14W3O44 fully fills the flat band, but also partially fills the remaining dispersive conduction bands, resulting in metallicity (Fig.", "REF ).", "In contrast to the flat band, the dispersive conduction bands are predominantly hosted on block edge sites [27] (M2-M4 in Nb14W3O44, cf.", "Fig.", "REF ).", "Reduction of the block edge sites takes place by filling these dispersive conduction bands.", "For even larger lithium concentrations, the structures are strongly metallic (cf.", "Fig.", "S16 for Li16Nb14W3O44).", "At the GGA level, we observe no spin polarisation for either Li3Nb14W3O44 or Li16Nb14W3O44.", "We do not observe the opening of a band gap by the introduction of a $U$ value ($U=4$ eV) for either stoichiometry, and the compounds remain strongly metallic (Fig.", "S16).", "The same is true for fully lithiated Nb12WO33 and Nb16W5O55 (Fig.", "S14).", "Besides the slight initial preference for tungsten reduction, niobium and tungsten show similar redox activity in Nb16W5O55 (Nb$^{5+}$ /Nb$^{4+}$ and W$^{6+}$ /W$^{5+}$ , with 
multielectron reduction possible beyond 1.0 Li/TM) [4].", "Overall, we conclude that while lithiated shear phases can show electron localisation, it is of a different type than for typical transition metal oxides.", "The block-structure with its orthogonal crystallographic shear planes seems to have a confinement effect such that the electron localises within the block plane, but is not confined to a single $d$ -orbital on a single transition metal site.", "These electronic structure features are exactly the same as those observed in Nb2O5-x [27].", "Compared to the strong localisation of small polarons in systems like LixTiO2 [54], [55] and LixFePO4 [56], the localisation in shear oxides is weaker, and easily overcome by further doping; the materials quickly become metallic on lithium insertion.", "The strong $d$ -orbital overlap along the shear planes gives rise to large bandwidths, and in fact, the delocalised states are hosted on transition metal sites at the block periphery [27].", "The preferred electron transport direction is expected to be perpendicular to the block plane, based both on experimental results on similar compounds and the calculated band dispersions [57], [27].", "The good electronic conductivity suggested by these calculations is beneficial for high-rate battery performance.", "In addition to a good conductivity upon lithium insertion, there will be a change in the colour of the materials from white-ish to blue/black [23], [10].", "Given the facile lithiation and high-rate performance, this naturally opens up the possibility of electrochromic applications of niobium-tungsten oxides.", "Figure: Bandstructure and density of states of Li3Nb14W3O44.", "Relative to Nb14W3O44 (Fig.", "), the $n$ -doping by lithium insertion has moved the Fermi level (dashed line) into the conduction band."
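The $n$ -type doping picture has a simple bookkeeping consequence: since the pristine phases are fully oxidised $d^0$ oxides, each inserted lithium formally donates one electron to the Nb $4d$ /W $5d$ conduction band, so the average formal reduction per transition metal in LixNb14W3O44 is $x/17$ . A trivial sketch of this electron count:

```python
# Sketch: formal electron bookkeeping for Li_x Nb14 W3 O44.
# Each inserted Li donates one electron to the Nb 4d / W 5d conduction band.

N_TM = 14 + 3                    # transition metals per formula unit
CATION_CHARGE = 14 * 5 + 3 * 6   # Nb(5+) and W(6+) in the pristine d0 oxide
ANION_CHARGE = 44 * 2            # O(2-)

# The pristine phase is charge balanced, consistent with a d0 configuration.
assert CATION_CHARGE == ANION_CHARGE == 88

def d_electrons_per_tm(x):
    """Average formal d-electron count per transition metal after x Li."""
    return x / N_TM

for x in (1, 3, 16):
    print(f"Li{x}Nb14W3O44: {d_electrons_per_tm(x):.3f} d-electrons per TM")
```

This average hides the spatial structure discussed above: the first electrons occupy the flat band on block-central sites, and only further doping fills the dispersive bands on the block periphery.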
], [ "Common Mechanistic Principles", "The three niobium-tungsten oxides Nb12WO33, Nb14W3O44, and Nb16W5O55 are strikingly similar in their cation ordering preferences, lithium insertion mechanisms, and electronic structure.", "This is expected given their close chemical and structural relationship.", "Regarding the lithium insertion mechanism, a set of common mechanistic principles emerge from our DFT results:", "(1) Lithium is initially inserted into fivefold coordinated sites and undistorted fourfold coordinated sites.", "(2) Between 0–1.5 Li/TM, the lattice evolves through three regions; the lattice parameter perpendicular to the plane of the block expands monotonically, while in the block plane, the lattice parameters expand, contract, and then expand again.", "(3) Distortions of the MO6 octahedra are removed over the course of lithium insertion; this symmetrisation makes previously highly distorted sites available for lithium occupation.", "(4) A DFT-predicted voltage profile of Nb12WO33 suggests that the lattice changes are associated with different regions of the voltage profile; during the block-plane contraction the voltage is almost constant.", "(5) Local and long-range structural evolution are closely linked; removal of octahedral distortions along the shear planes allows neighbouring blocks to slide closer together, causing the lattice contraction.", "Experimentally, the three-region voltage profile and phase evolution is the most well-established feature of the lithiation mechanism [19], [20], [13], [16], [22], [21], [5], [4].", "The three-stage anisotropic host-lattice response has been observed in Nb16W5O55 by Griffith et al.", "[4] using operando synchrotron XRD, and correlates with the regions of the electrochemical profile.", "Lattice parameters of LixNb12WO33 phases have been reported by Cava et al.", "[13] and Yan et al. 
[20].", "Both studies observed an anisotropic lattice change after full lithiation (Li10.7Nb12WO33 and Li13Nb12WO33, respectively), with an $a$ -$c$ plane contraction and expansion along $b$ .", "However, the lattice changes between the two studies are not consistent, with Cava et al.", "reporting an expansion of +8.2 % along $b$ , while Yan et al.", "report +3.5 %.", "The study of Yan et al.", "was performed on nanosized material, making it not directly comparable to previous reports or DFT results.", "Lattice parameters of LixNb14W3O44 phases have been reported by Cava et al.", "[13], Fuentes et al.", "[21], and Yan et al.", "[22].", "While the results of Cava et al.", "again agree with our DFT prediction, and suggest an anisotropic evolution of the lattice parameters, the results obtained by Fuentes et al.", "(chemically lithiated material) and Yan et al.", "(nanosized material) are at variance with the DFT prediction and differ strongly from the structural evolution of the related oxides Nb12WO33 and Nb16W5O55.", "We suggest that the structural evolution of Nb12WO33 and Nb14W3O44 is closer to that of Nb16W5O55 and should be re-examined.", "There is strong reason to believe that the similar three-region voltage profiles of Nb12WO33, Nb14W3O44 and Nb16W5O55 are associated with a similar lattice evolution.", "Regarding the local structure evolution, only results on Nb16W5O55 are available, which clearly show that the MO6 octahedra become progressively more symmetric as lithium is inserted [4].", "The local structure evolution was observed through X-ray absorption spectroscopy (XAS) measurements at the Nb K-edge and W L$_{\\mathrm {I}}$ -edge, which show a decrease of pre-edge intensity over the course of lithium insertion.", "Pristine block-type crystal structures always feature strongly distorted metal-oxygen octahedra.", "The pre-edge arises from the dipole-forbidden $s\\rightarrow d$ transition, which is absent for a metal in perfectly octahedral coordination.", 
"Removal of octahedral distortions therefore results in a decrease of intensity in this transition.", "Based on the DFT results, this is expected to be a universal feature of the lithium insertion mechanism of shear structures.", "XAS experiments on shear phase TiNb2O7 also observe such a symmetrisation in the transition metal–oxygen octahedra [16], suggesting that our results are transferable to the Ti/Nb shear oxides.", "The reduction of $d^0$ cations prone to second-order Jahn-Teller (SOJT) distortions usually leads to a removal of the distortion (e.g.", "LixWO3 and NaxWO3 phases [58], [59]).", "In shear oxides, the reduction can alleviate both the SOJT distortions and the electrostatic repulsion between cations along the shear planes, inducing symmetrisation.", "Figure: Cavity types found in Wadsley–Roth phases according to Cava et al.", "The tetrahedral site is denoted by a black dot.", "Most previous attempts to explain the lithium insertion mechanism of block-type phases have referred to the types of cavities that are found in shear structures, which were first identified by Cava et al.", "[13].", "For example, the insertion mechanism for Nb12WO33 has been proposed to proceed via insertion into type II, type III, and then type IV cavities [20], [19] (Figure REF ).", "Similar mechanisms have been proposed for other block-type structures [25], [21].", "Our DFT calculations do not support this kind of mechanism; each cavity contains multiple lithium sites of different types (window, pocket).", "Instead of resorting to cavity types, it is more accurate to describe the lithium insertion mechanism by the type of site that is being filled, and what structural changes this lithium occupation causes.", "The cavity types are very useful, however, for the structural understanding of pristine shear oxide phases."
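The anisotropic lattice response summarised above can be quantified with the Nb16W5O55 lattice parameters reported earlier ($a$ : 29.54 → 29.34 Å, $b$ : 3.81 → 4.06 Å, $c$ : 23.10 → 22.95 Å upon full lithiation). A rough estimate that treats the cell as orthogonal, neglecting the monoclinic angle (which largely cancels in the ratio):

```python
# Rough estimate: anisotropic lattice changes of Nb16W5O55 on full lithiation.
# Lattice parameters in Angstrom, taken from the text; the cell is treated as
# orthogonal (monoclinic angle neglected), so the numbers are only indicative.

pristine = {"a": 29.54, "b": 3.81, "c": 23.10}   # Nb16W5O55
lithiated = {"a": 29.34, "b": 4.06, "c": 22.95}  # Li21Nb16W5O55

for p in ("a", "b", "c"):
    change = 100 * (lithiated[p] / pristine[p] - 1)
    print(f"{p}: {change:+.1f} %")

v0 = pristine["a"] * pristine["b"] * pristine["c"]
v1 = lithiated["a"] * lithiated["b"] * lithiated["c"]
print(f"volume: {100 * (v1 / v0 - 1):+.1f} %")
# The b expansion alone would give about +6.6 %; the in-plane contraction of
# a and c tempers the net volume change.
```

This tempering of the total expansion by the block-plane contraction is taken up again in the discussion of cycling stability below.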
], [ "Implications for Battery Performance", "We have shown that cation disorder has a significant effect on the local structure in niobium-tungsten oxide shear phases.", "Compared to a hypothetical ordered structure, a lithium ion within a disordered niobium tungsten oxide shear structure experiences different local environments from one unit cell to the next.", "The same type of lithium site (cf.", "Fig.", "S8) will be framed by different patterns of niobium and tungsten ions, with different octahedral distortions, and different local electronic structures.", "This randomness in the potential energy landscape of the lithium ions in a disordered structure suppresses lithium ordering and makes a larger number of sites available for occupation.", "While an examination of the strength of coupling between the configurations of cations and lithium ions is beyond the scope of this study, it is expected to have a beneficial effect on performance.", "Given that cation disorder can be a favourable attribute to enhance electrochemical performance [4], [60], it is important to be able to control the degree of disorder.", "Our results suggest that tungsten energetically strongly prefers the tetrahedral site.", "Due to the site multiplicity and composition, Nb12WO33 can fully order with tungsten on the tetrahedral site and niobium on the block sites.", "However, it could be advantageous to quench from high temperatures during synthesis to lock in some degree of disorder.", "Nb14W3O44 and Nb16W5O55 have far more tungsten atoms than tetrahedral sites, but octahedral tungsten prefers the centre of the blocks.", "It would be interesting to examine the electrochemical behaviour as a function of cation disorder, controlled by the cooling rate during the synthesis of the material.", "Another way to increase the degree of disorder would be to introduce a third cation into the material.", "Within the group of $d^0$ cations titanium would be the obvious choice, since it is present in 
Ti/Nb crystallographic shear structures (such as TiNb2O7).", "Molybdenum and zirconium would be other interesting choices.", "The correlation between local and long-range structure evolution in the crystallographic shear phases directly affects the battery performance.", "As lithium intercalates, the total volume expansion is mitigated by the contraction within the block plane.", "The presence and subsequent relaxation of the octahedral distortions provides a mechanism to realise smaller volume changes in this structural family.", "Volume changes have an impact on long-term cycling stability; large expansion and contraction are associated with microstructural fracture, loss of particle contact within the electrode, and SEI degradation/reformation as fresh surfaces are exposed.", "The tempered volume changes in shear oxides thus likely contribute to their observed stability over 1000 cycles [4], even with micrometer-dimension particles that are generally more susceptible to cracking than nanoparticles.", "Many of the performance-critical properties of the niobium-tungsten oxides are intimately related to the crystal structure; the simultaneous presence of crystallographic shear planes and the ReO3-like block interiors is key to the electrochemical performance.", "As previously described by other authors [13], [4], the shear planes frustrate octahedral unit modes that clamp up diffusion pathways.", "In addition, the shear planes serve at least two other purposes: removal of local structural distortions along the shear planes buffers volume expansion, and the smaller metal-metal distances of edge-shared octahedra provide good orbital overlap and therefore enhanced electronic conductivity.", "The ReO3-like block interiors, on the other hand, feature open tunnels allowing rapid lithium-ion diffusion.", "It seems that only when the crystal structure reaches a certain level of complexity can all of these elements be present simultaneously.", "The structural motifs 
providing each different function require structural complexity and a large unit cell size." ], [ "Conclusion", "In this work, we have used an enumeration-based approach in combination with density-functional theory calculations to reveal common principles governing the cation disorder, lithium insertion mechanism, and electronic structure of the niobium-tungsten oxides Nb12WO33, Nb14W3O44, and Nb16W5O55.", "The cross-compound transferability of our results is due to the crystallographic shear structure common to all three materials.", "Our results shed light on the experimentally observed three-stage lithium insertion mechanism, and reveal an important connection between the long-range and local structural changes: the removal of octahedral distortions provides a mechanism to contract the lattice in the block plane during the second stage of lithium insertion, thereby buffering the overall volume expansion.", "Regarding the cation disorder, we find that there is a strong preference for tungsten occupation on the tetrahedral and block-central sites of the structures.", "The cation disorder also has a strong influence on the local structure of the materials; different Nb/W cation arrangements produce different local octahedral distortions.", "Electronic structure calculations of $n$ -doped/lithiated structures suggest only weak localisation of electrons upon initial lithium insertion, and the materials quickly become metallic on further lithium intercalation.", "Overall, our calculations suggest that the changes in local, long-range, and electronic structure on lithiation are beneficial to the battery electrode performance of the niobium-tungsten shear oxides.", "Our approach of studying multiple members of one structural family has allowed us to draw compound-independent conclusions, and to use smaller model structures to represent more complex ones.", "The principles we have established for the niobium-tungsten shear oxides likely apply in a similar fashion to 
Ti/Nb oxide shear structures as well.", "Future computational work will focus on the extension of the mechanistic principles described here to the Ti/Nb oxide shear structures, and on modelling the diffusion process within niobium-tungsten oxide shear structures.", "C.P.K.", "would like to thank James Darby for useful discussions.", "We acknowledge the use of Athena at HPC Midlands+, which was funded by the EPSRC on grant EP/P020232/1, in this research via the EPSRC RAP call of spring 2018.", "C.P.K.", "thanks the Winton Programme for the Physics of Sustainability and EPSRC for financial support.", "K.J.G.", "thanks the Winston Churchill Foundation of the United States and the Herchel Smith Foundation.", "K.J.G.", "and C.P.G.", "also thank the EPSRC for funding under a programme grant (EP/M009521/1).", "The authors declare that the data supporting the findings of this study are available within the paper and its Supporting Material files.", "Further details on structure enumeration and level of theory; pseudopotential specifications; tungsten site occupancies for wider temperature range; Structural evolution Nb12WO33 and Nb14W3O44 including metastable structures; Lithium sites and energies for Nb14W3O44 and Nb16W5O55; Supplementary results on electronic structure for Nb12WO33, Nb14W3O44, and Nb16W5O55; Crystallographic information files and energetics of all cation configurations of Nb14W3O44 and Nb16W5O55; Crystallographic information files of LixNb12WO33 and LixNb14W3O44 structures." ] ]
1906.04192
[ [ "Multi-Resolution Rendering for Computationally Expensive Lighting\n Effects" ], [ "Abstract Many lighting methods used in computer graphics such as indirect illumination can have very high computational costs and need to be approximated for real-time applications.", "These costs can be reduced by means of upsampling techniques which tend to introduce artifacts and affect the visual quality of the rendered image.", "This paper suggests a versatile approach for accelerating the rendering of screen space methods while maintaining the visual quality.", "This is achieved by exploiting the low frequency nature of many of these illumination methods and the geometrical continuity of the scene.", "First the screen space is dynamically divided into separate sub-images, then the illumination is rendered for each sub-image in an adequate resolution and finally the sub-images are put together in order to compose the final image.", "Therefore we identify edges in the scene and generate masks precisely specifying which part of the image is included in which sub-image.", "The masks therefore determine which part of the image is rendered in which resolution.", "A step wise upsampling and merging process then allows optically soft transitions between the different resolution levels.", "For this paper, the introduced multi-resolution rendering method was implemented and tested on three commonly used lighting methods.", "These are screen space ambient occlusion, soft shadow mapping and screen space global illumination." 
], [ "Introduction", "As a subarea of computer science, real-time computer graphics has developed continuously since the middle of the last century and is of great importance today.", "With a variety of applications, including medicine or computer-aided design (CAD), real-time computer graphics is nowadays indispensable in many areas of life and is thus a relevant factor in research as well as in business.", "To render a realistic image many optical and physical phenomena such as camera lenses, light transport, or micro-surface structure must be taken into account.", "All of these phenomena need to be calculated at pixel level but might rely on information of the surrounding scene to create the effect.", "Therefore, the number of pixels to be rendered, especially with more complex illumination, is crucial to the necessary computing power and thus to the performance of an application.", "While the increase in computing power of modern graphics hardware allows for more complicated algorithms, the demand for photo-realistic global illumination effects and high output resolutions in real-time graphics can not be met by current hardware sufficiently.", "In order to reduce the computational effort upsampling is often used.", "This technique renders individual effects, or sometimes the full image, in a lower resolution.", "Subsequently, the generated images are scaled back up to the full resolution by interpolation.", "Ultimately, fewer pixels must be calculated and stored, which reduces the computational effort and also the required storage space.", "Upsampling is particularly common in soft, continuous post-processing effects such as bloom filters or blur, in which quality losses are virtually invisible, depending on the scaling factor.", "If, on the other hand, you render effects with more concrete structures such as shadows or reflections in a lower resolution and then scale them up, hard edges are displayed washed out and aliasing becomes visible.", "In addition, 
there is a risk of under-sampling, which can cause visual artifacts affecting the image quality, especially in animated scenes or during camera movements.", "Rendering such effects or the entire image by upsampling is therefore not always useful; however, two interesting observations can be made: Although such effects may generally have more concrete structures such as hard edges, these high-frequency details are firstly not necessarily evenly distributed in the image space, and secondly, they are often only marginally present in relation to the total area.", "For example, considering naive shadow mapping with a single light source, depending on the complexity of the scene, a rendered image may contain large areas that are either completely shaded or fully illuminated.", "Nevertheless, the necessary operations to determine the brightness of these areas are performed for each individual pixel.", "For naive shadow mapping, this is certainly not important, but if one considers computationally more complex effects such as ambient occlusion or indirect illumination, the performance could be drastically increased by an intelligent subsampling of certain image areas.", "The technique developed in this work exploits the optical continuity that is often present in a scene in order to realize computationally intensive lighting effects more efficiently.", "For this purpose, the image space is first divided into multiple disjoint partial images, so that areas which contain edges or are in their immediate vicinity are separated from areas without edges or with a greater distance to them.", "Each partial image can be rendered individually with the illumination effects to be realized in suitable resolutions.", "In principle, a higher resolution is required to correctly create the effect in areas with a higher detail density.", "However, areas that do not include edges and thus have a lower density of detail can be rendered in lower resolution.", "The partial images are then 
reassembled into the original image.", "In the best case, this image should not differ visually from a full-resolution rendered image.", "Of particular importance for visual quality and performance is the way in which the individual steps of the technique work.", "For each step, different approaches are presented and explained in this paper." ], [ "Related Work", "In this section we present and explain the techniques and approaches relevant to this work.", "They follow similar conceptual principles and can be considered as a starting point for the technique developed here.", "We also highlight the differences to these approaches." ], [ "Upsampling", "Upsampling is a technique commonly used in low-frequency visual effects in real-time computer graphics.", "Examples of effects that are often realized are Bloom or Glare filters [1] and Depth of Field [2].", "The blur for the respective effect is not rendered in the full resolution of the application, but in an often much lower resolution.", "Subsequently, the result is scaled back to the full screen size by means of bilinear interpolation.", "This can greatly increase the performance at the same optical quality." 
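The bilinear upsampling step described above can be sketched on the CPU in a few lines of NumPy. This is purely an illustration of the interpolation idea, not the paper's GPU implementation; the function name, the integer scaling factor, and the edge-clamping behaviour are our own assumptions:

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Scale a low-resolution 2-D effect buffer back up by an integer factor."""
    h, w = img.shape
    # Map each output pixel centre to a (fractional) source coordinate.
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend weights
    # Blend the four neighbouring source texels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bottom = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bottom * wy

# A low-frequency effect (e.g. a blur pass) rendered at quarter resolution,
# then upsampled to the full output size: 16x fewer shaded pixels.
low_res = np.random.rand(64, 64)
full_res = bilinear_upsample(low_res, 4)
```

Only a quarter-resolution buffer is ever shaded here; the interpolation fills in the remaining pixels, which is why the technique works well for soft, continuous effects.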
], [ "Adaptive Multi-Resolution", "There are several approaches that split the computation of illumination effects into multiple resolutions to separate the rendering of low frequency and higher frequency components of these effects.", "Examples are implementations for indirect light transport [3] and Screen Space Ambient Occlusion [4], which achieve better performance with optically good results.", "In both approaches, multiple mipmap stages of the G-buffer are used to render the lighting effect to be realized in various resolutions.", "Subsequently, an upsampling is performed by means of bilateral filters and the different levels are combined.", "The multi-resolution rendering technique developed in this work makes use of the fundamental principle of separating high and low-frequency components of the illumination, but divides the image into several partial images on the basis of these different proportions.", "An area of the image is not rendered in all resolutions, but in the best case only in one.", "This makes it possible to drastically reduce the calculations for higher-frequency components in the image areas in which ultimately no high-frequency components occur exactly.", "Nichols and Wyman [5] describe a real-time technique for rendering indirect illumination using multi-resolution splatting.", "They use min-max mipmaps to find the discontinuities in the geometry.", "Using these discontinuities, the image space is hierarchically divided into smaller squares, so that areas with higher-frequency components obtain a finer resolution.", "After the image is completely split into such `splats' of an appropriate size, the indirect illumination is rendered in all resolutions and the layers are then combined by upsampling to produce the final image.", "Our technique differs from the algorithm presented by Nichols and Wyman among other things in the method used to decide which resolution to render in.", "We can apply more flexible filters depending on the 
situation, while their approach using min-max mipmaps can only find geometric discontinuities.", "We also use a different approach to combine the final images that prevents visible artifacts.", "Finally, our technique is not only specialized for indirect illumination using Reflective Shadow Maps, but can also be applied and optimized for various lighting effects due to its high flexibility.", "Iain Cantlay [6] describes a technique for rendering lower resolution particles offscreen and combining the result with high resolution renderings of other geometry.", "In contrast to our approach, this technique can only be applied if distinct parts of the geometry (in this case particles) are to be rendered in a fixed lower resolution, while our technique is more flexible and works on individual pixels.", "Guennebaud et al.", "[7] use variable resolutions for soft shadow mapping in screen space.", "Again, our approach is more flexible and can be applied to a multitude of screen space effects." ], [ "Variable Rate Shading", "He et al.", "[8] propose an extension of the graphics pipeline to natively support adaptive sampling techniques.", "Nvidia's Maxwell and Pascal architectures have already implemented graphics hardware technologies that could speed up the rendering of an image through the use of different resolutions.", "Multi-Resolution Shading [9] and Lens Matched Shading [10] can be applied in virtual reality applications to adapt the resolution of individual image areas to the optical properties of the physical lens that is part of the display.", "For more general uses, Variable Rate Shading [11] (VRS) was introduced as part of the Nvidia Turing architecture.", "With this technique, the image can be divided into much finer regions, which can be rendered independently in appropriate resolutions.", "The regions are made up of squares with an edge length of sixteen pixels.", "Possible applications include `Content Adaptive Shading' (as for example presented by Vaidyanathan et al.", 
"[12]), `Motion Adaptive Shading' (as for example presented by Vaidyanathan et al.", "[13]), and `Foveated Rendering' (as presented by Guenter et al. [14]).", "In this case, the sampling rate of the image areas is selected adequately depending on the detail density, movement, or focus of the viewer.", "The multi-resolution rendering technique developed in this work allows for an even finer and more flexible division of the image, since image areas do not necessarily have to consist of square tiles, but can have any desired shape.", "This means that a possibly even lower part of the image must be rendered in full resolution, and the performance can be further increased.", "Apart from that, in contrast to VRS, our technique allows for any number of levels and even lower sampling rates.", "Our technique is also not dependent on current graphics hardware and can be implemented for widely available systems.", "In our implementation we focus on the density of details in a scene (Content Adaptive Shading) to decide for the resolution to render in but we can extend our technique by using different edge detection filters or even masks that describe the geometry of lenses in virtual reality." ], [ "Global Illumination Effects", "For the exemplary implementation of our technique we use three illumination effects commonly used in modern computer graphics.", "Screen Space Ambient Occlusion (SSAO) is a real-time approximation of the occlusion of ambient light by local geometry.", "The technique was first presented by Mittring [15] and further developed and improved (e.g.", "by Bavoil et al. 
[16]).", "Shadow Mapping is an algorithm presented by Williams [17] that allows for a fast calculation of shadow rays using a depth buffer.", "Artifacts introduced by the resolution of the depth buffer can be reduced by percentage closer filtering, introduced by Reeves et al.", "[18] that also softens the shadows edges.", "A plausible penumbra can also be realized as described by Fernando [19].", "The shadow map is not only sampled at a single position but at multiple neighboring locations.", "Screen Space Global Illumination as, for example, described by Ritschel et al.", "[20] generalizes SSAO to not only dim ambient illumination but also add indirect illumination from other surfaces visible on the screen.", "The light transport between chosen samples close to a pixel is calculated inducing information from the G-Buffer." ], [ "Multi-Resolution Rendering", "Our presented multi-resolution rendering technique can be subdivided into three basic steps.", "In the first step, we create a mask in screen space, based on which the image to be rendered is divided into disjoint or complementary sub-images.", "In the second step, the lighting method to be implemented is rendered for each sub-image in its adequate resolution.", "Finally the sub-images are combined to create the result image.", "The conceptual approaches of these steps will be described in more detail below.", "A visual overview of the algorithms workflow will be given in the supplementary material." 
], [ "Mask Creation", "The masks are used to divide an image into individual sub-images.", "While masks can be acquired in multiple ways and even combined using the minimum or maximum (depending on the application) an obvious choice is to use them to separate the higher-frequency image parts from the low-frequency ones.", "It is often sufficient to use the geometry edges of the scene in screen space to achieve this.", "These can be found through the information available in the G-Buffer by numerically differentiating depth values and normals for each pixel.", "For the normal, the first derivative in each of the two dimensions is sufficient, whereas for the depth values, the second derivative gives more reliable results.", "The discontinuities found reproduce the geometric edges of the scene and can be used to split the image.", "For screen space ambient occlusion and screen space global illumination, the geometric edges are already sufficient but depending on the illumination effect to be realized, additional information may be required.", "In case of soft shadow mapping for example, the shadow edges of the scene are needed above all.", "To this purpose, when creating the mask using the previously created shadow map, a fast shadow calculation (one sample per pixel) can be implemented.", "We differentiate these values to find discontinuities in the shading.", "To avoid artifacts at the geometry edges, we also take them into account for the mask when rendering the soft shadows.", "Fig.", "REF shows an edge image of a scene in which normals, depths, and shadows are differentiated.", "As an alternative to the edge images we use, min-max mipmaps can also be used to decompose the image as explained by Nichols and Wyman [5].", "After we created the final high-resolution mask we downsample it to the resolutions we want our final sub-images to be.", "We use blur filters with different variances ($\\sigma ^2$ ) on the downsampled images to determine the areas near the 
edges.", "The blurs variance gives the developer control over the size of the area around the edges and determines which areas around the edges are rendered in which resolution.", "The variances we use can be found in Tab.", "REF .", "Figure: Without accounting for overlap (left), dead pixels (black) occur at the edges of the sub-images (red and blue), which are not contained in any of the sub-images and thus are not rendered.When ensuring an overlap (right), the intersection of the sub-images (green) prevents this circumstance.Table: Variances (σ i 2 \\sigma _i^2) and weights (w i w_i) for each sub-image (ii) of all techniques we used.The variances are used to blur the mask, while the weights are used to combine the final image.For SSGI we did not use the second sub-image at all.A simple way to separate the image into sub-images is to divide them into complementary tiles.", "An advantage of this method is the disjoint decomposition, whereby no area of the image has to be rendered multiple times.", "A drawback, however, is that the granularity of the decomposition of the image is limited by the lowest resolution of a sub-image.", "When naively using the granularity that is determined directly by the resolution of each sub-image, we obtained undefined spaces in the final image between two masked areas.", "To avoid these we make sure areas of different resolutions have an overlap as shown in Fig.", "REF .", "Therefore, we do not separate the image into almost disjoint areas, but always completely include the higher resolution levels in the underlying ones.", "This means, in particular, that the lowest resolution sub-image always renders the effect to be realized for the entire image.", "Losses in performance due to the multiple rendering of some image areas are extremely small, because the additional computational effort arises mainly in the lower resolutions.", "If the blur is optimally selected for the creation of the masks, this approach lets us keep the areas of 
the higher resolution levels extremely small, resulting in an overall good performance.", "In addition, this decomposition approach later allows for a very simple re-composition of the final image, because the masks together with fixed weights can serve as an alpha channel for blending the sub-images (see Section REF ).", "Fig.", "REF shows a possible decomposition of an example scene in screen space.", "Figure: Visualization of the decomposition of an image into four sub-images by means of inclusive areas: The sub-image of the full resolution contains all the red areas, the sub-image of the half resolution all red and green areas, the sub-image of the quarter resolution all red, green and blue areas.", "The fourth sub-image renders the entire image space at an eighth of the resolution." ], [ "Rendering the Sub-images", "Throughout the rendering process we generate all sub-images independently of each other in the chosen resolution.", "Shape and resolution of the sub-image are defined by the masks determined in step one.", "Accordingly, an image area of a sub-image is rendered if and only if the corresponding mask in this image area permits it.", "Fig.", "REF shows an example of rendering four sub-images.", "Figure: Screen space ambient occlusion rendered in four sub-images; no lighting is calculated for the black areas.", "The individual sub-images render SSAO in full (top left), half (top right), quarter (bottom left) and eighth resolution (bottom right)." 
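The mask creation described above — a first derivative of the normals, a second derivative of the depth values, and a blur that widens the resulting edge image into a band around the edges — can be illustrated with a small NumPy sketch. The thresholds, function names, and the wrap-around border handling are our own simplifying assumptions; the paper's version runs in screen space on the GPU:

```python
import numpy as np

def edge_mask(depth, normals, depth_thresh=0.5, normal_thresh=0.5):
    """Binary edge image from G-Buffer data: second derivative (discrete
    Laplacian) of the depth, first derivative of the normals in both
    screen dimensions. Borders wrap around for brevity."""
    lap = np.abs(4 * depth
                 - np.roll(depth, 1, axis=0) - np.roll(depth, -1, axis=0)
                 - np.roll(depth, 1, axis=1) - np.roll(depth, -1, axis=1))
    dn = (np.abs(np.diff(normals, axis=0, prepend=normals[:1])).sum(axis=-1)
          + np.abs(np.diff(normals, axis=1, prepend=normals[:, :1])).sum(axis=-1))
    return ((lap > depth_thresh) | (dn > normal_thresh)).astype(float)

def widen(mask, sigma):
    """Separable Gaussian blur of the edge image, so that pixels *near*
    an edge are also assigned to a higher-resolution sub-image."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x * x / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    for axis in (0, 1):  # blur once per screen dimension
        mask = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode='same'), axis, mask)
    return np.clip(mask / mask.max(), 0.0, 1.0) if mask.max() > 0 else mask

# A depth step produces an edge; widening marks its neighbourhood.
depth = np.zeros((16, 16)); depth[:, 8:] = 1.0
normals = np.zeros((16, 16, 3)); normals[..., 2] = 1.0
mask = widen(edge_mask(depth, normals), sigma=2.0)
```

A larger `sigma` widens the band of pixels assigned to the higher-resolution sub-images, which corresponds to the per-level variances the paper lists in its table.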
], [ "Blending the Sub-Images", "As the final step of the technique we blend the individually rendered sub-images in order to generate the final image.", "All sub-images are upsampled to the full resolution and combined.", "Using a simple bilinear interpolation would lead to artifacts, as pixels containing visual information can be interpolated with those that contain no information.", "A simple solution for this problem would be bilateral interpolation as described by Tomasi and Manduchi [21].", "When using this, the sub-images are gradually scaled and merged without scattering missing information of a resolution level into the relevant pixels of the image.", "To this purpose, a sub-image is always combined with the sub-images already blended in one step.", "This upsampling technique is also used by Nichols and Wyman [5].", "In our case we can use the decomposition masks to calculate the final blending weights.", "Each sub-image, starting at the lowest resolution, is blended with the next higher resolution sub-image based on the alpha value of each mask.", "The softness of the transitions between the resolution levels can be determined flexibly using weights.", "These weights are multiplied with the alpha mask and define the final alpha value for blending." 
], [ "Implementation", "In our implementation we applied our multi-resolution rendering technique to three illumination effects commonly found in modern real-time computer graphics.", "These effects are SSAO, soft shadow mapping (SSM) and screen space global illumination (SSGI).", "In this section, we describe the implementation of our technique and specific adjustments for the illumination effects used.", "Our implementation relies solely on the OpenGL 3.3 core profile and can as such run on widely available hardware.", "According to our experiences during the development stage, a decomposition in four sub-images appears as the best compromise between image quality and speed.", "The width of the sub-images is successively halved, starting at full resolution width, and are set to full, half, quarter, and eighth.", "For SSGI we found that not using the halved sub-image did not result in worse image quality.", "This contributed to a further performance enhancement." ], [ "Rendering of the Sub-Images", "To render the sub-images, we use the previously generated masks to create a stencil buffer for each resolution determining the areas.", "We check if the mask is greater than zero and set the stencil value to one or zero accordingly.", "We thought about using different thresholds for creating the stencil masks but for our purposes just using zero provided the best results.", "For each resolution level used, we subsequently render each sub-image using the stencil buffer to eliminate regions that we do not want to render.", "For SSAO, depending on the number of samples used, we blur the resulting sub-images in order to reduce the occurring variance of the effect, especially in the lower resolutions.", "However, we needed to ensure not to transport missing pixel information into the defined areas of the respective sub-image.", "We achieved this, with a bilateral blur filter." 
], [ "Blending of the Sub-Images", "Subsequently, the rendered sub-images are blended to compose the final image.", "We use bilinear interpolation to scale the sub-images to full size and then combine them sequentially, starting at the lowest resolution level.", "We carry out the final blending between two sub-images by using the values of our masks ($a_i$ ) multiplied by a weight ($w_i$ ) as a linear interpolation parameter.", "The weights of our example cases can be found in Tab.", "REF .", "We calculate the following for each pixel of the final image.", "We define $c_i$ as that pixels color value in the $i$ -th sub-image, where $c_1$ is the full resolution image.", "The composed image including the $i$ -th sub-image as its highest resolution is called $c^{\\prime }_i$ .", "The fourth sub-image has the lowest resolution, covers the entire image space and is defined for each pixel.", "We use its value as the initial value $c^{\\prime }_4=c_4$ .", "All other $c^{\\prime }_i$ are calculated successively using the alpha values $a_i$ from the corresponding masks and the weights $w_i$ by: $c^{\\prime }_i=c_i\\cdot \\min (a_i w_i, 1) + c^{\\prime }_{i-1}\\cdot \\left(1-\\min (a_i w_i, 1)\\right)$ The last computed value $c_1^{\\prime }$ describes the pixel value of the final composite image." 
], [ "Evaluation", "For a basic evaluation we applied our multi-resolution rendering technique to the three illumination effects mentioned (SSAO, SSM and SSGI).", "We used three test scenes “Office” (20,189 triangles), “Hall” (183,333 triangles), and “Breakfast Room” (a slightly modified version of the one provided by Morgan McGuire [22] with 269,565 triangles) with eight camera configurations for speed and visual comparison.", "For Soft Shadow Mapping and Screen Space Global Illumination, a modified version of the second scene with 255,432 triangles was used, because it works better with the given directional light sources.", "For each perspective, the rendering speed was measured using our technique and compared to the speed measured for naive rendering in full resolution.", "In addition, comparison images of the test scenes are shown and their differences measured and visualized.", "All tests were performed on a Nvidia Geforce GTX 1080." ], [ "Rendering Speed", "For testing the speedup of our technique we used $3840\\times 2160$ as a base resolution.", "We tested each technique with a different number of samples.", "The average results for 24 different configurations (scene and camera) are listed in Fig.", "REF .", "Despite the additional rendering steps needed, our technique outperforms naive rendering in all cases.", "For a higher number of samples our technique will perform better, since more processing on the GPU can be skipped due to lower resolution rendering.", "Figure: Average speedup in percent by using our multi resolution technique in 4K (3840x2160 Pixels).We show the speedup for our three tested techniques using different numbers of samples for each of them.We also tested our technique for lower resolutions.", "The Results were not as good as the ones reported for 4K.", "Nevertheless with the exception of SSM with 196 Samples we achieved clear positive speedups for all illumination techniques even in 720p.", "Starting from 1440p, all illumination 
techniques provided positive speedups.", "Our results for SSM can be explained by the fact that the technique is relatively simple while the mask generation still produces observable overhead.", "Compared to this overhead, the reduction in GPU computations is relatively low.", "For lower resolutions the overhead of generating the mask to divide the image and the cost of the additional rendering passes for multiple resolutions dominate over the positive effect of our technique.", "Fig.", "REF shows these results.", "Figure: Average speedup in percent of our multi resolution technique at different resolutions.", "We used fixed numbers of samples for all techniques: 64 samples for SSAO, 196 samples for SSM, and 228 samples for SSGI." ], [ "Visual Comparison", "While our technique tries to prevent producing images that differ from renderings created with naive full resolution rendering, we could not prevent all visual artifacts.", "As can be seen in Fig.", "REF to REF these errors occur at the borders of our masks and are mostly due to the Gaussian blur we need to apply to the images to reduce discontinuities at these edges.", "The blur kernel is very narrow, so it is hard to detect the errors when just comparing the images directly, but they are visible in the difference images provided.", "Fig.", "REF shows the results for SSAO using 64 samples.", "We chose this number of samples as we think it is a reasonable choice for real applications and a good compromise between speed and image quality.", "As this image is very bright, the differences in the difference image are also more prominent than with the other techniques.", "Figure: The “Breakfast Room” dataset using SSM and 196 samples.", "The top image shows our multi resolution technique while in the lower corner the reference image is shown.", "In the lower right corner is an enhanced difference image between those two.", "Fig.", "REF shows the results for SSM using 196 samples.", "For this lighting effect we can use masks that do 
not depend directly on the screen space geometry for our technique.", "The occurring errors are relatively low compared to the other techniques due to the parts of the scene in shadow that are lit with a constant ambient illumination.", "Figure: The “Office” scene using SSGI and 4224 samples.", "The top image shows our multi resolution technique while in the lower corner the reference image is shown.", "In the lower right corner is an enhanced difference image between those two.", "The image only shows the SSGI effect without direct illumination to better show the differences caused by our technique.", "Results of the SSGI technique we implemented are shown in Fig.", "REF .", "For a visually plausible global illumination effect in screen space we needed a lot of samples, so we chose to present the results for 4224 samples.", "While our results are still convincing, some small artifacts can be seen in the corners of the right rack.", "While these present visible differences to the original image, the effects are very minor.", "Figure: The absolute root mean squared (RMS) errors between result images of our multi resolution technique and images naively rendered with high resolution.", "We used 64 samples for the SSAO images, 196 samples for SSM and 288 samples for the SSGI images.", "Values in the compared images ranged from 0 to 1, so the resulting errors can be considered low.", "Besides the visual results we provide an overview over all errors in the graphs in Fig.", "REF .", "These numbers include not only the presented images but images from all three scenes with eight camera configurations each.", "These numbers support our claim that the errors introduced by our technique are very low."
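The RMS figures reported above are straightforward to reproduce. A minimal NumPy sketch (function name ours) of the absolute root-mean-squared error between a composite and a reference image whose channel values lie in $[0,1]$:

```python
import numpy as np

def rms_error(composite, reference):
    """Absolute root-mean-squared error between two images.

    Both arrays must have the same shape, with channel values in [0, 1],
    so the resulting error also lies in [0, 1]."""
    diff = np.asarray(composite, dtype=np.float64) - np.asarray(reference, dtype=np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```

Identical images yield an error of 0, while completely inverted binary images yield the maximal error of 1, which is why values well below 1 across all test configurations indicate close agreement with the reference renderings.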
], [ "Discussion", "We presented the performance and visual quality of our method and have two general findings.", "As a general rule, it was observed that illumination techniques that are more computationally demanding can benefit more from our technique than less demanding ones.", "This is because of a constant overhead due to mask generation and multiple rendering passes.", "This overhead becomes dominant for techniques that are less computationally demanding.", "The second finding is that our technique excels especially at higher resolutions, for the same reason.", "A minor finding is that masks which are more complicated to generate than by simply using the G-Buffer also cause a greater overhead.", "This makes the use of these masks only feasible for the highest resolutions or techniques that are more computationally demanding than the soft shadow mapping presented here." ], [ "Conclusion & Future Work", "We presented a technique for multi resolution rendering that can be implemented on widely available graphics hardware.", "Our technique can improve the rendering speed of screen space algorithms drastically (especially for high resolutions), as we have shown for three cases.", "While the technique presented here is only used for `Content Adaptive Shading', we can trivially extend it to `Foveated Rendering' by modulating the mask we use by an importance mask provided by eye trackers.", "Including `Motion Adaptive Shading' is also possible by using information about pixel motion in the mask generation process.", "To further improve our technique, we think that the mask generation process should be modified.", "For determining the geometry edges, we use normals and depth values from the G-Buffer in screen space.", "In practice, however, non-smooth, modified normals are often used to calculate the illumination.", "For smooth shading, pixel normals are calculated by the linear interpolation of vertex normals, but in real applications bump maps or normal maps are 
used to modify the normals.", "In this case, the edge filter could potentially find many more edges, which can result in dramatically increased computational effort and significantly lower efficiency.", "Possible solutions to these problems would be the exclusive use of unmodified normals or an alternative determination of the edges using the pixel locations in world space.", "Another problem may arise with certain effects, including, for example, reflections or caustics, since their edges cannot be calculated with the information contained in the G-Buffer.", "Also in this case, image areas with higher-frequency components could be rendered in too low a resolution.", "For such lighting effects, further development of the progressive decomposition of the image would certainly be beneficial.", "To prevent sub-sampling for some effects, sub-images could also be realized by just using a lower number of samples in full resolution instead of rendering the effect in a lower resolution.", "Another interesting application for the multi resolution rendering technique would be using ray tracing for physically correct illumination.", "In particular, diffuse indirect illumination can only be achieved by relatively high computational effort and can barely be realized in real-time on current graphics hardware.", "Using the multi-resolution approach, the performance could be increased drastically." ] ]
1906.04576
[ [ "Embeddings into countably compact Hausdorff spaces" ], [ "Abstract In this paper we consider the problem of characterization of topological spaces that embed into countably compact Hausdorff spaces.", "We study the separation axioms of subspaces of countably compact Hausdorff spaces and construct an example of a regular separable scattered topological space which cannot be embedded into an Urysohn countably compact topological space but embeds into a Hausdorff countably compact space." ], [ " In this paper we consider the problem of characterization of topological spaces that embed into countably compact Hausdorff spaces.", "We study the separation axioms of subspaces of countably compact Hausdorff spaces and construct an example of a regular separable scattered topological space which cannot be embedded into an Urysohn countably compact topological space but embeds into a Hausdorff countably compact space.", "It is well-known that a topological space $X$ is homeomorphic to a subspace of a compact Hausdorff space if and only if the space $X$ is Tychonoff.", "In this paper we discuss the following problem.", "Problem 1 Which topological spaces are homeomorphic to subspaces of countably compact Hausdorff spaces?", "A topological space $X$ is compact if each open cover of $X$ has a finite subcover; $\omega $ -bounded if each countable set in $X$ has compact closure in $X$ ; countably compact if each sequence in $X$ has an accumulation point in $X$ ; totally countably compact if each infinite set in $X$ contains an infinite subset with compact closure in $X$ .", "These properties relate as follows: $\mbox{compact}\Rightarrow \mbox{$\omega $-bounded}\Rightarrow \mbox{totally countably compact}\Rightarrow \mbox{countably compact}.$ More information on various generalizations of compactness can be found in [5], [6], [7], [8], [9], [10].", "In this paper we establish some properties of subspaces of countably compact Hausdorff spaces and hence find some necessary conditions of embeddability of topological spaces into Hausdorff countably compact spaces.", "Also, we construct an example of a regular separable first-countable scattered topological space which cannot be embedded into an Urysohn countably compact topological space but embeds into a totally countably compact Hausdorff space.", "First we recall some results [1] on embeddings into $\omega $ -bounded spaces.", "We recall [4] that the Wallman compactification $W(X)$ of a topological space $X$ is the space of closed ultrafilters, i.e., families $\mathcal {U}$ of closed subsets of $X$ satisfying the following conditions: $\emptyset \notin \mathcal {U}$ ; $A\cap B\in \mathcal {U}$ for any $A,B\in \mathcal {U}$ ; a closed set $F\subset X$ belongs to $\mathcal {U}$ if $F\cap U\ne \emptyset $ for every $U\in \mathcal {U}$ .", "The Wallman compactification $W(X)$ of $X$ is endowed with the topology generated by the base consisting of the sets $\langle U\rangle =\lbrace \mathcal {F}\in W(X):\exists F\in \mathcal {F},\;F\subset U\rbrace $ where $U$ runs over open subsets of $X$ .", "By (the proof of) Theorem [4], for any topological space $X$ its Wallman compactification $W(X)$ is compact.", "Let $j_X:X\rightarrow W(X)$ be the map assigning to each point $x\in X$ the principal ultrafilter consisting of all closed sets $F\subset X$ containing the point $x$ .", "It is easy to see that the image $j_X(X)$ is 
dense in $W(X)$ .", "By [4], for a $T_1$ -space $X$ the map $j_X:X\rightarrow W(X)$ is a topological embedding.", "In the Wallman compactification $W(X)$ , consider the subspace $W_\omega X={\textstyle \bigcup }\lbrace \overline{j_X(C)}:C\subset X,\;|C|\le \omega \rbrace ,$ which is the union of closures of countable subsets of $j_X(X)$ in $W(X)$ .", "The space $W_\omega X$ will be called the Wallman $\omega $ -compactification of $X$ .", "Following [1], we define a topological space $X$ to be $\overline{\omega }$ -normal if for any closed separable subspace $C\subset X$ and any disjoint closed sets $A,B\subset C$ there are disjoint open sets $U,V\subset X$ such that $A\subset U$ and $B\subset V$ .", "The properties of the Wallman $\omega $ -compactification are described in the following theorem whose proof can be found in [1].", "Theorem 1 For any ($\overline{\omega }$ -normal) topological space $X$ , its Wallman $\omega $ -compactification $W_\omega X$ is $\omega $ -bounded (and Hausdorff).", "A topological space $X$ is called first-countable at a point $x\in X$ if it has a countable neighborhood base at $x$ ; Fréchet-Urysohn at a point $x\in X$ if for each subset $A$ of $X$ with $x\in \bar{A}$ there exists a sequence $\lbrace a_n\rbrace _{n\in \omega }\subset A$ that converges to $x$ ; regular at a point $x\in X$ if any neighborhood of $x$ contains a closed neighborhood of $x$ ; completely regular at a point $x\in X$ if for any neighborhood $U\subset X$ of $x$ there exists a continuous function $f:X\rightarrow [0,1]$ such that $f(x)=1$ and $f(X\setminus U)\subset \lbrace 0\rbrace $ .", "If for each point $x$ of a topological space $X$ there exists a countable family $\mathcal {O}$ of open neighborhoods of $x$ such that $\bigcap \mathcal {O}=\lbrace x\rbrace $ , then we shall say that the space $X$ has countable pseudocharacter.", "Theorem 2 Let $X$ be a subspace of a countably compact Hausdorff space $Y$ .", "If $X$ is first-countable at a point $x\in X$ , then $X$ is regular at the point $x$ .", "Fix a countable neighborhood base $\lbrace U_n\rbrace _{n\in \mathbb {N}}$ at $x$ and assume that $X$ is not regular at $x$ .", "Consequently, there exists an open neighborhood $U_0$ of $x$ such that $\overline{V}\not\subset U_0$ for any neighborhood $V$ of $x$ .", "Replacing each basic neighborhood $U_n$ by $\bigcap _{k\le n}U_k$ , we can assume that $U_n\subset U_{n-1}$ for every $n\in \mathbb {N}$ .", "The choice of the neighborhood $U_0$ ensures that for every $n\in \mathbb {N}$ the set $\overline{U}_n\setminus U_0$ contains some point $x_n$ .", "Since the space $Y$ is countably compact and Hausdorff, the sequence $(x_n)_{n\in \omega }$ has an accumulation point $y\in Y\setminus U_0$ .", "By the Hausdorff property of $Y$ , there exists a neighborhood $V\subset Y$ of $x$ such that $y\notin \overline{V}$ .", "Find $n\in \omega $ such that $U_n\subset V$ and observe that $O_y:=Y\setminus \overline{V}$ is a neighborhood of $y$ such that $O_y\cap \lbrace x_i:i\in \omega \rbrace \subset \lbrace x_i\rbrace _{i<n}$ , which means that $y$ is not an accumulation point of the sequence $(x_i)_{i\in \omega }$ .", "Remark 1 Example 6.1 from [1] shows that in Theorem REF the regularity of $X$ at the point $x$ cannot be improved to the complete regularity at $x$ .", "Corollary 1 Let $X$ be a subspace of a countably compact Hausdorff space $Y$ .", "If $X$ is first-countable, then $X$ is regular.", "The following example shows that Theorem REF cannot be generalized to Fréchet-Urysohn spaces with countable pseudocharacter.", "Example 1 There exists a Hausdorff space $X$ such that $X$ is locally countable and hence has countable pseudocharacter; $X$ is separable and Fréchet-Urysohn; $X$ is not regular; $X$ is a subspace of a totally countably compact Hausdorff space.", "Choose any point $\infty \notin \omega \times \omega $ and consider the space $Y=\lbrace \infty \rbrace \cup (\omega 
\\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y$ such that if $\\infty \\in U$ , then for every $n\\in \\omega $ the complement $(\\lbrace n\\rbrace \\times \\omega )\\setminus U$ is finite.", "The definition of this topology ensures that $Y$ is Fréchet-Urysohn at the unique non-isolated point $\\infty $ of $Y$ .", "Let $\\mathcal {F}$ be the family of closed infinite subsets of $Y$ that do not contain the point $\\infty $ .", "The definition of the topology on $Y$ implies that for every $F\\in \\mathcal {F}$ and $n\\in \\omega $ the intersection $(\\lbrace n\\rbrace \\times \\omega )\\cap F$ is finite.", "By the Kuratowski-Zorn Lemma, the family $\\mathcal {F}$ contains a maximal almost disjoint subfamily $\\mathcal {A}\\subset \\mathcal {F}$ .", "The maximality of $\\mathcal {A}$ guarantees that each set $F\\in \\mathcal {F}$ has infinite intersection with some set $A\\in \\mathcal {A}$ .", "Consider the space $X=Y\\cup \\mathcal {A}$ endowed with the topology consisting of the sets $U\\subset X$ such that $U\\cap Y$ is open in $Y$ and for any $A\\in \\mathcal {A}\\cap U$ the set $A\\setminus U\\subset \\omega \\times \\omega $ is finite.", "We claim that the space $X$ has the properties (1)–(4).", "The definition of the topology of $X$ implies that $X$ is separable, Hausdorff and locally countable, which implies that $X$ has countable pseudocharacter.", "Moreover, $X$ is first-countable at all points except for $\\infty $ .", "At the point $\\infty $ the space $X$ is Fréchet-Urysohn (because its open subspace $Y$ is Fréchet-Urysohn at $\\infty $ ).", "The maximality of the maximal almost disjoint family $\\mathcal {A}$ guarantees that each neighborhood $U\\subset Y\\subset X$ of $\\infty $ has an infinite intersection with some set $A\\in \\mathcal {A}$ , which implies that $A\\in \\overline{U}$ and hence $\\overline{U}\\lnot \\subset Y$ .", "This means that $X$ is not regular (at $\\infty $ ).", "In the Wallman compactification 
$W(X)$ of the space $X$ consider the subspace $Z:=X\\cup W_\\omega \\mathcal {A}=Y\\cup W_\\omega \\mathcal {A}$ .", "We claim that the space $Z$ is Hausdorff and totally countably compact.", "To prove that $Z$ is Hausdorff, take two distinct ultrafilters $a,b\\in Z$ .", "If the ultrafilters $a,b$ are principal, then by the Hausdorff property of $X$ , they have disjoint neighborhoods in $W(X)$ and hence in $Z$ .", "Now assume that one of the ultrafilters $a$ or $b$ is principal and the other is not.", "We lose no generality assuming that $a$ is principal and $b$ is not.", "If $a\\ne \\infty $ , then we can use the regularity of the space $X$ at $a$ and prove that $a$ and $b$ have disjoint neighborhoods in $W(X)\\supset Z$ .", "So, assume that $a=\\infty $ .", "It follows from $b\\in Z=X\\cup W_{\\omega } \\mathcal {A}$ that the ultrafilter $b$ contains some countable set $\\lbrace A_n\\rbrace _{n\\in \\omega }\\subset \\mathcal {A}$ .", "Consider the set $V=\\bigcup _{n\\in \\omega }\\big (\\lbrace A_n\\rbrace \\cup A_n\\setminus \\bigcup _{k\\le n}\\lbrace k\\rbrace \\times \\omega \\big )$ and observe that $V$ has finite intersection with every set $\\lbrace k\\rbrace \\times \\omega $ , which implies that $Y\\setminus V$ is a neighborhood of $\\infty $ .", "Then $\\langle Y\\setminus V\\rangle $ and $\\langle V\\rangle $ are disjoint open neighborhoods of $a=\\infty $ and $b$ in $W(X)$ .", "Finally, assume that both ultrafilters $a,b$ are not principal.", "Since $a,b\\in W_{\\omega } \\mathcal {A}$ are distinct, there are disjoint countable sets $\\lbrace A_n\\rbrace _{n\\in \\omega },\\lbrace B_n\\rbrace _{n\\in \\omega }\\subset \\mathcal {A}$ such that $\\lbrace A_n\\rbrace _{n\\in \\omega }\\in a$ and $\\lbrace B_n\\rbrace _{n\\in \\omega }\\in b$ .", "Observe that the sets $V=\\bigcup _{n\\in \\omega }(\\lbrace A_n\\rbrace \\cup A_n\\setminus \\bigcup _{k\\le n}B_k)\\mbox{ \\ and \\ }W=\\bigcup _{n\\in \\omega }(\\lbrace B_n\\rbrace \\cup B_n\\setminus 
\\bigcup _{k\\le n}A_k)$ are disjoint and open in $X$ .", "Then $\\langle V\\rangle $ and $\\langle W\\rangle $ are disjoint open neighborhoods of the ultrafilters $a,b$ in $W(X)$ , respectively.", "To see that $Z$ is totally countably compact, take any infinite set $I\\subset Z$ .", "We should find an infinite set $J\\subset I$ with compact closure $\\bar{J}$ in $Z$ .", "We lose no generality assume that $I$ is countable and $\\infty \\notin I$ .", "If $J=I\\cap W_\\omega \\mathcal {A}$ is infinite, then $\\bar{J}$ is compact by the $\\omega $ -boundedness of $W_\\omega \\mathcal {A}$ , see Theorem REF .", "If $I\\cap W_\\omega \\mathcal {A}$ is finite, then $I\\cap Z\\setminus W_\\omega \\mathcal {A}=I\\cap Y=I\\cap (\\omega \\times \\omega )$ is infinite.", "If for some $n\\in \\omega $ the set $J_n=I\\cap (\\lbrace n\\rbrace \\times \\omega )$ is infinite, then $\\bar{J}_n=J_n\\cup \\lbrace \\infty \\rbrace $ is compact by the definition of the topology of the space $Y$ .", "If for every $n\\in \\omega $ the set $I\\cap (\\lbrace n\\rbrace \\times \\omega )$ is finite, then $I\\cap (\\omega \\times \\omega )\\in \\mathcal {F}$ and by the maximality of the family $\\mathcal {A}$ , for some set $A\\in \\mathcal {A}$ the intersection $J=A\\cap I$ is infinite, and then $\\bar{J}=J\\cup \\lbrace A\\rbrace $ is compact.", "Choose any point $\\infty \\notin \\omega \\times \\omega $ and consider the space $Y=\\lbrace \\infty \\rbrace \\cup (\\omega \\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y$ such that if $\\infty \\in U$ , then for every $n\\in \\omega $ the complement $(\\lbrace n\\rbrace \\times \\omega )\\setminus U$ is finite.", "The definition of this topology ensures that $Y$ is Fréchet-Urysohn at the unique non-isolated point $\\infty $ of $Y$ .", "Let $\\mathcal {F}$ be the family of closed infinite subsets of $Y$ that do not contain the point $\\infty $ .", "The definition of the topology on $Y$ implies that for every 
$F\\in \\mathcal {F}$ and $n\\in \\omega $ the intersection $(\\lbrace n\\rbrace \\times \\omega )\\cap F$ is finite.", "By the Kuratowski-Zorn Lemma, the family $\\mathcal {F}$ contains a maximal almost disjoint subfamily $\\mathcal {A}\\subset \\mathcal {F}$ .", "The maximality of $\\mathcal {A}$ guarantees that each set $F\\in \\mathcal {F}$ has infinite intersection with some set $A\\in \\mathcal {A}$ .", "Consider the space $X=Y\\cup \\mathcal {A}$ endowed with the topology consisting of the sets $U\\subset X$ such that $U\\cap Y$ is open in $Y$ and for any $A\\in \\mathcal {A}\\cap U$ the set $A\\setminus U\\subset \\omega \\times \\omega $ is finite.", "We claim that the space $X$ has the properties (1)–(4).", "The definition of the topology of $X$ implies that $X$ is separable, Hausdorff and locally countable, which implies that $X$ has countable pseudocharacter.", "Moreover, $X$ is first-countable at all points except for $\\infty $ .", "At the point $\\infty $ the space $X$ is Fréchet-Urysohn (because its open subspace $Y$ is Fréchet-Urysohn at $\\infty $ ).", "The maximality of the maximal almost disjoint family $\\mathcal {A}$ guarantees that each neighborhood $U\\subset Y\\subset X$ of $\\infty $ has an infinite intersection with some set $A\\in \\mathcal {A}$ , which implies that $A\\in \\overline{U}$ and hence $\\overline{U}\\lnot \\subset Y$ .", "This means that $X$ is not regular (at $\\infty $ ).", "In the Wallman compactification $W(X)$ of the space $X$ consider the subspace $Z:=X\\cup W_\\omega \\mathcal {A}=Y\\cup W_\\omega \\mathcal {A}$ .", "We claim that the space $Z$ is Hausdorff and totally countably compact.", "To prove that $Z$ is Hausdorff, take two distinct ultrafilters $a,b\\in Z$ .", "If the ultrafilters $a,b$ are principal, then by the Hausdorff property of $X$ , they have disjoint neighborhoods in $W(X)$ and hence in $Z$ .", "Now assume that one of the ultrafilters $a$ or $b$ is principal and the other is not.", "We lose no 
generality assuming that $a$ is principal and $b$ is not.", "If $a\\ne \\infty $ , then we can use the regularity of the space $X$ at $a$ and prove that $a$ and $b$ have disjoint neighborhoods in $W(X)\\supset Z$ .", "So, assume that $a=\\infty $ .", "It follows from $b\\in Z=X\\cup W_{\\omega } \\mathcal {A}$ that the ultrafilter $b$ contains some countable set $\\lbrace A_n\\rbrace _{n\\in \\omega }\\subset \\mathcal {A}$ .", "Consider the set $V=\\bigcup _{n\\in \\omega }\\big (\\lbrace A_n\\rbrace \\cup A_n\\setminus \\bigcup _{k\\le n}\\lbrace k\\rbrace \\times \\omega \\big )$ and observe that $V$ has finite intersection with every set $\\lbrace k\\rbrace \\times \\omega $ , which implies that $Y\\setminus V$ is a neighborhood of $\\infty $ .", "Then $\\langle Y\\setminus V\\rangle $ and $\\langle V\\rangle $ are disjoint open neighborhoods of $a=\\infty $ and $b$ in $W(X)$ .", "Finally, assume that both ultrafilters $a,b$ are not principal.", "Since $a,b\\in W_{\\omega } \\mathcal {A}$ are distinct, there are disjoint countable sets $\\lbrace A_n\\rbrace _{n\\in \\omega },\\lbrace B_n\\rbrace _{n\\in \\omega }\\subset \\mathcal {A}$ such that $\\lbrace A_n\\rbrace _{n\\in \\omega }\\in a$ and $\\lbrace B_n\\rbrace _{n\\in \\omega }\\in b$ .", "Observe that the sets $V=\\bigcup _{n\\in \\omega }(\\lbrace A_n\\rbrace \\cup A_n\\setminus \\bigcup _{k\\le n}B_k)\\mbox{ \\ and \\ }W=\\bigcup _{n\\in \\omega }(\\lbrace B_n\\rbrace \\cup B_n\\setminus \\bigcup _{k\\le n}A_k)$ are disjoint and open in $X$ .", "Then $\\langle V\\rangle $ and $\\langle W\\rangle $ are disjoint open neighborhoods of the ultrafilters $a,b$ in $W(X)$ , respectively.", "To see that $Z$ is totally countably compact, take any infinite set $I\\subset Z$ .", "We should find an infinite set $J\\subset I$ with compact closure $\\bar{J}$ in $Z$ .", "We lose no generality assume that $I$ is countable and $\\infty \\notin I$ .", "If $J=I\\cap W_\\omega \\mathcal {A}$ is infinite, then $\\bar{J}$ 
is compact by the $\\omega $ -boundedness of $W_\\omega \\mathcal {A}$ , see Theorem REF .", "If $I\\cap W_\\omega \\mathcal {A}$ is finite, then $I\\cap Z\\setminus W_\\omega \\mathcal {A}=I\\cap Y=I\\cap (\\omega \\times \\omega )$ is infinite.", "If for some $n\\in \\omega $ the set $J_n=I\\cap (\\lbrace n\\rbrace \\times \\omega )$ is infinite, then $\\bar{J}_n=J_n\\cup \\lbrace \\infty \\rbrace $ is compact by the definition of the topology of the space $Y$ .", "If for every $n\\in \\omega $ the set $I\\cap (\\lbrace n\\rbrace \\times \\omega )$ is finite, then $I\\cap (\\omega \\times \\omega )\\in \\mathcal {F}$ and by the maximality of the family $\\mathcal {A}$ , for some set $A\\in \\mathcal {A}$ the intersection $J=A\\cap I$ is infinite, and then $\\bar{J}=J\\cup \\lbrace A\\rbrace $ is compact.", "A topological space $X$ is called weakly $\\infty $ -regular if for any infinite closed subset $F\\subset X$ and point $x\\in X\\setminus F$ there exist disjoint open sets $V,U\\subset X$ such that $x\\in V$ and $U\\cap F$ is infinite.", "Proposition 1 Each subspace $X$ of a countably compact Hausdorff space $Y$ is weakly $\\infty $ -regular.", "Given an infinite closed subset $F\\subset X$ and a point $x\\in X\\setminus F$ , consider the closure $\\bar{F}$ of $F$ in $Y$ and observe that $x\\notin \\bar{F}$ .", "By the countable compactness of $Y$ , the infinite set $F$ has an accumulation point $y\\in \\bar{F}$ .", "By the Hausdorff property of $Y$ , there are two disjoint open sets $V,U\\subset Y$ such that $x\\in V$ and $y\\in U$ .", "Since $y$ is an accumulation point of the set $F$ , the intersection $F\\cap U$ is infinite.", "Then $V\\cap X$ and $U\\cap X$ are two disjoint open sets in $X$ such that $x\\in V\\cap X$ and $F\\cap U\\cap X$ is infinite, witnessing that the space $X$ is weakly $\\infty $ -regular.", "Given an infinite closed subset $F\\subset X$ and a point $x\\in X\\setminus F$ , consider the closure $\\bar{F}$ of $F$ in $Y$ and observe 
that $x\\notin \\bar{F}$ .", "By the countable compactness of $Y$ , the infinite set $F$ has an accumulation point $y\\in \\bar{F}$ .", "By the Hausdorff property of $Y$ , there are two disjoint open sets $V,U\\subset Y$ such that $x\\in V$ and $y\\in U$ .", "Since $y$ is an accumulation point of the set $F$ , the intersection $F\\cap U$ is infinite.", "Then $V\\cap X$ and $U\\cap X$ are two disjoint open sets in $X$ such that $x\\in V\\cap X$ and $F\\cap U\\cap X$ is infinite, witnessing that the space $X$ is weakly $\\infty $ -regular.", "A subset $D$ of a topological space $X$ is called discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that $D\\cap O_x=\\lbrace x\\rbrace $ ; strictly discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint in the sense that $O_x\\cap O_y=\\emptyset $ for any distinct points $x,y\\in D$ ; strongly discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint and locally finite in $X$ .", "discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that $D\\cap O_x=\\lbrace x\\rbrace $ ; strictly discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint in the sense that $O_x\\cap O_y=\\emptyset $ for any distinct points $x,y\\in D$ ; strongly discrete if each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint and locally finite in $X$ .", "It is clear that for every subset $D\\subset X$ we have the implications $\\mbox{strongly discrete $\\Rightarrow $ strictly discrete $\\Rightarrow $ discrete}.$ Theorem 3 Let $X$ be a subspace of a countably compact Hausdorff space $Y$ .", "Then each infinite subset $I\\subset X$ contains an infinite subset $D\\subset I$ which is strictly discrete in $X$ .", "By the countable compactness of $Y$ , the set $I$ has an accumulation 
point $y\\in Y$ .", "Choose any point $x_0\\in I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ , find a disjoint open neighborhoods $V_0$ and $U_0$ of the points $x_0$ and $y$ , respectively.", "Choose any point $y_1\\in U_0\\cap I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ choose open disjoint neighborhoods $V_1\\subset U_0$ and $U_1\\subset U_0$ of the points $x_1$ and $y$ , respectively.", "Proceeding by induction, we can construct a sequence $(x_n)_{n\\in \\omega }$ of points of $X$ and sequences $(V_n)_{n\\in \\omega }$ and $(U_n)_{n\\in \\omega }$ of open sets in $Y$ such that for every $n\\in \\mathbb {N}$ the following conditions are satisfied: 1) $x_n\\in V_n\\subset U_{n-1}$ ; 2) $y\\in U_n\\subset U_{n-1}$ ; 3) $V_n\\cap U_n=\\emptyset $ .", "The inductive conditions imply that the sets $V_n$ , $n\\in \\omega $ , are pairwise disjoint, witnessing that the set $D=\\lbrace x_n\\rbrace _{n\\in \\omega }\\subset I$ is strictly discrete in $X$ .", "By the countable compactness of $Y$ , the set $I$ has an accumulation point $y\\in Y$ .", "Choose any point $x_0\\in I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ , find a disjoint open neighborhoods $V_0$ and $U_0$ of the points $x_0$ and $y$ , respectively.", "Choose any point $y_1\\in U_0\\cap I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ choose open disjoint neighborhoods $V_1\\subset U_0$ and $U_1\\subset U_0$ of the points $x_1$ and $y$ , respectively.", "Proceeding by induction, we can construct a sequence $(x_n)_{n\\in \\omega }$ of points of $X$ and sequences $(V_n)_{n\\in \\omega }$ and $(U_n)_{n\\in \\omega }$ of open sets in $Y$ such that for every $n\\in \\mathbb {N}$ the following conditions are satisfied: 1) $x_n\\in V_n\\subset U_{n-1}$ ; 2) $y\\in U_n\\subset U_{n-1}$ ; 3) $V_n\\cap U_n=\\emptyset $ .", "$x_n\\in V_n\\subset U_{n-1}$ ; $y\\in U_n\\subset U_{n-1}$ ; $V_n\\cap U_n=\\emptyset $ 
.", "The inductive conditions imply that the sets $V_n$ , $n\\in \\omega $ , are pairwise disjoint, witnessing that the set $D=\\lbrace x_n\\rbrace _{n\\in \\omega }\\subset I$ is strictly discrete in $X$ .", "For closed discrete subspaces in Lindelöf subspaces, the strict discreteness of the set $D$ in Theorem REF can be improved to the strong discreteness.", "Let us recall that a topological space $X$ is Lindelöf if each open cover of $X$ contains a countable subcover.", "Theorem 4 Let $X$ be a Lindelöf subspace of a countably compact Hausdorff space $Y$ .", "Then each infinite closed discrete subset $I\\subset X$ contains an infinite subset $D\\subset I$ which is strongly discrete in $X$ .", "By the countable compactness of $Y$ , the set $I$ has an accumulation point $y\\in Y$ .", "Since $I$ is closed and discrete in $X$ , the point $y$ does not belong to the space $X$ .", "By the Hausdorff property of $Y$ , for every $x\\in X$ there are disjoint open sets $V_x,W_x\\subset Y$ such that $x\\in V_x$ and $y\\in W_x$ .", "Since the space $X$ is Lindelöf, the open cover $\\lbrace V_x:x\\in X\\rbrace $ has a countable subcover $\\lbrace V_{x_n}\\rbrace _{n\\in \\omega }$ .", "For every $n\\in \\omega $ consider the open neighborhood $W_n=\\bigcap _{k\\le n}W_{x_k}$ of $y$ .", "Choose any point $y_0\\in I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ , find a disjoint open neighborhoods $V_0$ and $U_0\\subset W_0$ of the points $y_0$ and $y$ , respectively.", "Choose any point $y_1\\in U_0\\cap W_1\\cap I\\setminus \\lbrace y\\rbrace $ and using the Hausdorff property of $Y$ choose open disjoint neighborhoods $V_1\\subset U_0$ and $U_1\\subset U_0\\cap W_1$ of the points $y_1$ and $y$ , respectively.", "Proceeding by induction, we can construct a sequence $(y_n)_{n\\in \\omega }$ of points of $X$ and sequences $(V_n)_{n\\in \\omega }$ and $(U_n)_{n\\in \\omega }$ of open sets in $Y$ such that for every $n\\in \\mathbb {N}$ the following 
conditions are satisfied: 1) $y_n\in V_n\subset U_{n-1}\cap W_n$; 2) $y\in U_n\subset U_{n-1}\cap W_n$; 3) $V_n\cap U_n=\emptyset$. The inductive conditions imply that the family $(V_n)_{n\in\omega}$ is disjoint, witnessing that the set $D=\lbrace y_n\rbrace_{n\in\omega}\subset I$ is strictly discrete in $X$. To show that $D$ is strongly discrete, it remains to show that the family $(V_n)_{n\in\omega}$ is locally finite in $X$. Given any point $x\in X$, find $n\in\omega$ such that $x\in V_{x_n}$ and observe that for every $i>n$ we have $V_i\cap V_{x_n}\subset W_i\cap V_{x_n}\subset W_{n}\cap V_{x_n}=\emptyset$.

A topological space $X$ is called $\ddot{\omega}$-regular if for any closed discrete subset $F\subset X$ and point $x\in X\setminus F$ there exist disjoint open sets $U_F$ and $U_x$ in $X$ such that $F\subset U_F$ and $x\in U_x$.

Proposition 2 Each countable closed discrete subset $D$ of a (Lindelöf) $\ddot{\omega}$-regular $T_1$-space $X$ is strictly discrete (and strongly discrete) in $X$.

The space $X$ is Hausdorff, being an $\ddot{\omega}$-regular $T_1$-space. If the subset $D\subset X$ is finite, then $D$ is strongly discrete by the Hausdorff property of $X$. So, assume that $D$ is infinite and hence $D=\lbrace z_n\rbrace_{n\in\omega}$ for some pairwise distinct points $z_n$. By the $\ddot{\omega}$-regularity there are two disjoint open sets $V_0,W_0\subset X$ such that $z_0\in V_0$ and $\lbrace z_n\rbrace_{n\ge 1}\subset W_0$. Proceeding by induction, we can construct sequences of open sets $(V_n)_{n\in\omega}$ and $(W_n)_{n\in\omega}$ in $X$ such that for every $n\in\omega$ the following conditions are satisfied: $z_n\in V_n\subset W_{n-1}$; $\lbrace z_k\rbrace_{k>n}\subset W_n\subset W_{n-1}$; $V_n\cap
W_n=\\emptyset $ .", "These conditions imply that the family $(V_n)_{n\\in \\omega }$ is disjoint, witnessing that the set $D$ is strictly discrete in $X$ .", "Now assume that the space $X$ is Lindelöf and let $V=\\bigcup _{n\\in \\omega }V_n$ .", "By the $\\ddot{\\omega }$ -regularity of $X$ , each point $x\\in X\\setminus V$ has a neighborhood $O_x\\subset X$ whose closure $\\bar{O}_x$ does not intersect the closed discrete subset $D$ of $X$ .", "Since $X$ is Lindelöf, there exists a countable set $\\lbrace x_n\\rbrace _{n\\in \\omega }\\subset X\\setminus V$ such that $X=V\\cup \\bigcup _{n\\in \\omega }O_{x_n}$ .", "For every $n\\in \\omega $ consider the open neighborhood $U_n:=V_n\\setminus \\bigcup _{k\\le n}\\bar{O}_{x_k}$ of $z_n$ and observe that the family $(U_n)_{n\\in \\omega }$ is disjoint and locally finite in $X$ , witnessing that the set $D$ is strongly discrete in $X$ .", "The space $X$ is Hausdorff, being an $\\ddot{\\omega }$ -regular $T_1$ -space.", "If the subset $D\\subset X$ is finite, then $D$ is strongly discrete by the Hausdorff property of $X$ .", "So, assume that $D$ is infinite and hence $D=\\lbrace z_n\\rbrace _{n\\in \\omega }$ for some pairwise distinct points $z_n$ .", "By the $\\ddot{\\omega }$ -regularity there are two disjoint open sets $V_0,W_0\\subset X$ such that $z_0\\in V_0$ and $\\lbrace z_n\\rbrace _{n\\ge 1}\\subset W_0$ .", "Proceeding by induction, we can construct sequences of open sets $(V_n)_{n\\in \\omega }$ and $(W_n)_{n\\in \\omega }$ in $X$ such that for every $n\\in \\omega $ the following conditions are satisfied: $z_n\\in V_n\\subset W_{n-1}$ ; $\\lbrace z_k\\rbrace _{k>n}\\subset W_n\\subset W_{n-1}$ ; $V_n\\cap W_n=\\emptyset $ .", "$z_n\\in V_n\\subset W_{n-1}$ ; $\\lbrace z_k\\rbrace _{k>n}\\subset W_n\\subset W_{n-1}$ ; $V_n\\cap W_n=\\emptyset $ .", "These conditions imply that the family $(V_n)_{n\\in \\omega }$ is disjoint, witnessing that the set $D$ is strictly discrete in $X$ .", "Now assume that 
the space $X$ is Lindelöf and let $V=\\bigcup _{n\\in \\omega }V_n$ .", "By the $\\ddot{\\omega }$ -regularity of $X$ , each point $x\\in X\\setminus V$ has a neighborhood $O_x\\subset X$ whose closure $\\bar{O}_x$ does not intersect the closed discrete subset $D$ of $X$ .", "Since $X$ is Lindelöf, there exists a countable set $\\lbrace x_n\\rbrace _{n\\in \\omega }\\subset X\\setminus V$ such that $X=V\\cup \\bigcup _{n\\in \\omega }O_{x_n}$ .", "For every $n\\in \\omega $ consider the open neighborhood $U_n:=V_n\\setminus \\bigcup _{k\\le n}\\bar{O}_{x_k}$ of $z_n$ and observe that the family $(U_n)_{n\\in \\omega }$ is disjoint and locally finite in $X$ , witnessing that the set $D$ is strongly discrete in $X$ .", "The following proposition shows that the property described in Theorem REF holds for $\\ddot{\\omega }$ -regular spaces.", "Proposition 3 Every infinite subset $I$ of an $\\ddot{\\omega }$ -regular $T_1$ -space $X$ contains an infinite subset $D\\subset I$ , which is strictly discrete in $X$ .", "If $I$ has an accumulation point in $X$ , then a strictly discrete infinite subset can be constructed repeating the argument of the proof of Theorem REF .", "So, we assume that $I$ has no accumulation point in $X$ and hence $I$ is closed and discrete in $X$ .", "Replacing $I$ by a countable infinite subset of $I$ , we can assume that $I$ is countable.", "By Proposition REF , the set $I$ is strictly discrete in $X$ .", "If $I$ has an accumulation point in $X$ , then a strictly discrete infinite subset can be constructed repeating the argument of the proof of Theorem REF .", "So, we assume that $I$ has no accumulation point in $X$ and hence $I$ is closed and discrete in $X$ .", "Replacing $I$ by a countable infinite subset of $I$ , we can assume that $I$ is countable.", "By Proposition REF , the set $I$ is strictly discrete in $X$ .", "A topological space $X$ is called superconnected [2] if for any non-empty open sets $U_1,\\dots , U_n$ the intersection 
$\\overline{U}_1\\cap \\dots \\cap \\overline{U}_n$ is not empty.", "It is clear that a superconnected space containing more than one point is not regular.", "An example of a superconnected second-countable Hausdorff space can be found in [2].", "Proposition 4 Any first-countable superconnected Hausdorff space $X$ with $|X|>1$ contains an infinite set $I\\subset X$ such that each infinite subset $D\\subset I$ is not strictly discrete in $X$ .", "For every point $x\\in X$ fix a countable neighborhood base $\\lbrace U_{x,n}\\rbrace _{n\\in \\omega }$ at $x$ such that $U_{x,n+1}\\subset U_{x,n}$ for every $n\\in \\omega $ .", "Choose any two distinct points $x_0,x_1\\in X$ and for every $n\\ge 2$ choose a point $x_n\\in \\bigcap _{k<n}\\overline{U}_{x_k,n}$ .", "We claim that the set $I=\\lbrace x_n\\rbrace _{n\\in \\omega }$ is infinite.", "In the opposite case, we use the Hausdorff property and find a neighborhood $V$ of $x_0$ such that $\\overline{V}\\cap I=\\lbrace x_0\\rbrace $ .", "Find $m\\in \\omega $ such that $U_{x_0,m}\\subset V$ and $x_0\\notin \\overline{U}_{x_1,m}$ .", "Observe that $x_m\\in I\\cap \\overline{U}_{x_0,m}\\cap \\overline{U}_{x_1,m}=\\lbrace x_0\\rbrace \\cap \\overline{U}_{x_1,m}=\\emptyset ,$ which is a desired contradiction showing that the set $I$ is infinite.", "Next, we show that any infinite subset $D\\subset I$ is not strictly discrete in $X$ .", "To derive a contradiction, assume that $D$ is strictly discrete.", "Then each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint.", "Choose any point $x_k\\in D$ and find $m\\in \\omega $ such that $U_{x_k,m}\\subset O_{x_k}$ .", "Replacing $m$ by a larger number, we can assume that $m>k$ and $x_m\\in D$ .", "Since $x_m\\in \\overline{U}_{x_k,m}\\subset \\overline{O}_{x_k}$ , the intersection $O_{x_m}\\cap O_{x_k}$ is not empty, which contradicts the choice of the neighborhoods $O_x$ , $x\\in D$ .", "For every point $x\\in X$ fix a 
countable neighborhood base $\\lbrace U_{x,n}\\rbrace _{n\\in \\omega }$ at $x$ such that $U_{x,n+1}\\subset U_{x,n}$ for every $n\\in \\omega $ .", "Choose any two distinct points $x_0,x_1\\in X$ and for every $n\\ge 2$ choose a point $x_n\\in \\bigcap _{k<n}\\overline{U}_{x_k,n}$ .", "We claim that the set $I=\\lbrace x_n\\rbrace _{n\\in \\omega }$ is infinite.", "In the opposite case, we use the Hausdorff property and find a neighborhood $V$ of $x_0$ such that $\\overline{V}\\cap I=\\lbrace x_0\\rbrace $ .", "Find $m\\in \\omega $ such that $U_{x_0,m}\\subset V$ and $x_0\\notin \\overline{U}_{x_1,m}$ .", "Observe that $x_m\\in I\\cap \\overline{U}_{x_0,m}\\cap \\overline{U}_{x_1,m}=\\lbrace x_0\\rbrace \\cap \\overline{U}_{x_1,m}=\\emptyset ,$ which is a desired contradiction showing that the set $I$ is infinite.", "Next, we show that any infinite subset $D\\subset I$ is not strictly discrete in $X$ .", "To derive a contradiction, assume that $D$ is strictly discrete.", "Then each point $x\\in D$ has a neighborhood $O_x\\subset X$ such that the family $(O_x)_{x\\in D}$ is disjoint.", "Choose any point $x_k\\in D$ and find $m\\in \\omega $ such that $U_{x_k,m}\\subset O_{x_k}$ .", "Replacing $m$ by a larger number, we can assume that $m>k$ and $x_m\\in D$ .", "Since $x_m\\in \\overline{U}_{x_k,m}\\subset \\overline{O}_{x_k}$ , the intersection $O_{x_m}\\cap O_{x_k}$ is not empty, which contradicts the choice of the neighborhoods $O_x$ , $x\\in D$ .", "Next, we establish one property of subspaces of functionally Hausdorff countably compact spaces.", "We recall that a topological space $X$ is functionally Hausdorff if for any distinct points $x,y\\in X$ there exists a continuous function $f:X\\rightarrow [0,1]$ such that $f(x)=0$ and $f(x)=1$ .", "A subset $U$ of a topological space $X$ is called functionally open if $U=f^{-1}(V)$ for some continuous function $f:X\\rightarrow \\mathbb {R}$ and some open set $V\\subset \\mathbb {R}$ .", "A subset $K\\subset X$ of 
a topological space is called functionally compact if each cover of $K$ by functionally open subsets of $X$ has a finite subcover.

Proposition 5 If $X$ is a subspace of a functionally Hausdorff countably compact space $Y$, then no infinite closed discrete subspace $D\subset X$ is contained in a functionally compact subset of $X$.

To derive a contradiction, assume that $D$ is contained in a functionally compact subset $K$ of $X$. By the countable compactness of $Y$, the set $D$ has an accumulation point $y\in Y$. Since $D$ is closed and discrete in $X$, the point $y$ does not belong to $X$ and hence $y\notin K$. Since $Y$ is functionally Hausdorff, for every $x\in K$ there exists a continuous function $f_x:Y\rightarrow [0,1]$ such that $f_x(x)=0$ and $f_x(y)=1$. By the functional compactness of $K$, the cover $\lbrace f_x^{-1}([0,\frac{1}{2})):x\in K\rbrace$ contains a finite subcover $\lbrace f_x^{-1}([0,\frac{1}{2})):x\in E\rbrace$, where $E$ is a finite subset of $K$. Then $D\subset K\subset f^{-1}([0,\frac{1}{2}))$ for the continuous function $f=\min_{x\in E}f_x:Y\rightarrow [0,1]$, and $f^{-1}((\frac{1}{2},1])$ is a neighborhood of $y$ which is disjoint from the set $D$. But this is not possible, as $y$ is an accumulation point of $D$.

Finally, we construct an example of a regular separable first-countable scattered space that embeds into a Hausdorff countably compact space but does not embed into an Urysohn countably compact space. We recall that a topological space $X$ is Urysohn if any distinct points of $X$ have disjoint closed neighborhoods in $X$.

Example 2 There exists a topological space $X$ such that (1) $X$ is regular, separable, and first-countable; (2) $X$ can be embedded into a Hausdorff totally countably compact space; (3) $X$ cannot be embedded into an Urysohn countably compact space.

In the construction of the space $X$ we shall use almost disjoint dominating subsets of $\omega^\omega$. Let us recall [3] that a subset $D\subset\omega^\omega$ is called dominating if for any $x\in\omega^\omega$ there exists $y\in D$ such that $x\le^* y$, which means that $x(n)\le y(n)$ for all but finitely many numbers $n\in\omega$. By $\mathfrak{d}$ we denote the smallest cardinality of a dominating subset $D\subset\omega^\omega$. It is clear that $\omega_1\le\mathfrak{d}\le\mathfrak{c}$. We say that a family of functions $D\subset\omega^\omega$ is almost disjoint if for any distinct $x,y\in D$ the intersection $x\cap y$ is finite. Here we identify a function $x\in\omega^\omega$ with its graph $\lbrace (n,x(n)):n\in\omega\rbrace$ and hence identify the set of functions $\omega^\omega$ with a subset of the family $[\omega\times
\\omega ]^\\omega $ of all infinite subsets of $\\omega \\times \\omega $ .", "Claim 1 There exists an almost disjoint dominating subset $D\\subset \\omega ^\\omega $ of cardinality $|D|=\\mathfrak {d}$ .", "By the definition of $\\mathfrak {d}$ , there exists a dominating family $\\lbrace x_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}\\subset \\omega ^\\omega $ .", "It is well-known that $[\\omega ]^\\omega $ contains an almost disjoint family $\\lbrace A_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {c}}$ of cardinality continuum.", "For every $\\alpha <\\mathfrak {d}$ choose a strictly increasing function $y_\\alpha :\\omega \\rightarrow A_\\alpha $ such that $x_\\alpha \\le y_\\alpha $ .", "Then the set $D=\\lbrace y_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}$ is dominating and almost disjoint.", "By Claim REF , there exists an almost disjoint dominating subset $D\\subset \\omega ^\\omega \\subset [\\omega \\times \\omega ]^\\omega $ .", "For every $n\\in \\omega $ consider the vertical line $\\lambda _n=\\lbrace n\\rbrace \\times \\omega $ and observe that the family $L=\\lbrace \\lambda _n\\rbrace _{n\\in \\omega }$ is disjoint and the family $D\\cup L\\subset [\\omega \\times \\omega ]^\\omega $ is almost disjoint.", "Consider the space $Y=(D\\cup L)\\cup (\\omega \\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y$ such that for every $y\\in (D\\cup L)\\cap U$ the set $y\\setminus U\\subset \\omega \\times \\omega $ is finite.", "Observe that all points in the set $\\omega \\times \\omega $ are isolated in $Y$ .", "Using the almost disjointness of the family $D\\cup L$ , it can be shown that the space $Y$ is regular, separable, locally countable, scattered and locally compact.", "Choose any point $\\infty \\notin \\omega \\times Y$ and consider the space $Z=\\lbrace \\infty \\rbrace \\cup (\\omega \\times Y)$ endowed with the topology consisting of the sets $W\\subset Z$ such that for every $n\\in \\omega $ the set 
$\\lbrace y\\in Y:(n,y)\\in W\\rbrace $ is open in $Y$ , and if $\\infty \\in W$ , then there exists $n\\in \\omega $ such that $\\bigcup _{m\\ge n}\\lbrace m\\rbrace \\times Y\\subset W$ .", "It is easy to see $Z=\\lbrace \\infty \\rbrace \\cup (\\omega \\times Y)$ is first-countable, separable, scattered and regular.", "Let $\\sim $ be the smallest equivalence relation on $Z$ such that $\\mbox{$(2n,\\lambda )\\sim (2n+1,\\lambda )$ and $(2n+1,d)\\sim (2n+2,d)$}$ for any $n\\in \\omega $ , $\\lambda \\in L$ and $d\\in D$ .", "Let $X$ be the quotient space $Z/_\\sim $ of $Z$ by the equivalence relation $\\sim $ .", "It is easy to see that the equivalence relation $\\sim $ has at most two-element equivalence classes and the quotient map $q:Z\\rightarrow X$ is closed and hence perfect.", "Applying [4], we conclude that the space $X$ is regular.", "It is easy to see that $X$ is separable, scattered and first-countable.", "It remains to show that $X$ has the properties (2), (3) of Example REF .", "This is proved in the following two claims.", "Claim 2 The space $X$ does not admit an embedding into an Urysohn countably compact space.", "To derive a contradiction, assume that $X=q(Z)$ is a subspace of an Urysohn countably compact space $C$ .", "By the countable compactness of $C$ , the set $q(\\lbrace 0\\rbrace \\times L)\\subset X\\subset C$ has an accumulation point $c_0\\in C$ .", "The point $c_0$ is distinct from $q(\\infty )$ , as $q(\\infty )$ is not an accumulation point of the set $q(\\lbrace 0\\rbrace \\times L)$ in $X$ .", "Let $l\\in \\omega $ be the largest number such that $c_0$ is an accumulation point of the set $q(\\lbrace l\\rbrace \\times L)$ in $C$ .", "Let us show that the number $l$ is well-defined.", "Indeed, by the Hausdorffness of the space $C$ , there exists a neighborhood $W\\subset C$ of $q(\\infty )$ such that $c_0\\lnot \\subset \\overline{W}$ .", "By the definition of the topology of the space $Z$ , there exists $m\\in \\omega $ such that 
$\\bigcup _{k\\ge m}\\lbrace k\\rbrace \\times Y\\subset q^{-1}(W)$ .", "Then $c_0$ is not an accumulation point of the set $\\bigcup _{k\\ge m}q(\\lbrace k\\rbrace \\times L)$ and hence the number $l$ is well-defined and $l<m$ .", "The definition of the equivalence relation $\\sim $ implies that the number $l$ is odd.", "By the countable compactness of $C$ , the infinite set $q(\\lbrace l+1\\rbrace \\times L)$ has an accumulation point $c_1\\in C$ .", "The maximality of $l$ ensures that $c_1\\ne c_0$ .", "By the Urysohn property of $C$ , the points $c_0,c_1$ have open neighborhoods $U_0,U_1\\subset C$ with disjoint closures in $C$ .", "For every $i\\in \\lbrace 0,1\\rbrace $ consider the set $J_i=\\lbrace n\\in \\omega :q(l+i,\\lambda _n)\\in U_i\\rbrace $ , which is infinite, because $c_i$ is an accumulation point of the set $q(\\lbrace l+i\\rbrace \\times L)=\\lbrace q(l+i,\\lambda _n):n\\in \\omega \\rbrace $ .", "For every $n\\in J_i$ the open set $q^{-1}(U_i)\\subset Z$ contains the pair $(l+i,\\lambda _n)$ .", "By the definition of the topology at $(l+i,\\lambda _n)$ , the set $(\\lbrace l+i\\rbrace \\times \\lambda _n)\\setminus q^{-1}(U_i)\\subset \\lbrace l+i\\rbrace \\times \\lbrace n\\rbrace \\times \\omega $ is finite and hence is contained in the set $\\lbrace l+i\\rbrace \\times \\lbrace n\\rbrace \\times [0,f_i(n)]$ for some number $f_i(n)\\in \\omega $ .", "Using the dominating property of the family $D$ , choose a function $f\\in D$ such that $f(n)\\ge f_i(n)$ for any $i\\in \\lbrace 0,1\\rbrace $ and $n\\in J_i$ .", "It follows that for every $i\\in \\lbrace 1,2\\rbrace $ the set $\\lbrace l+i\\rbrace \\times f\\subset \\lbrace l+i\\rbrace \\times (\\omega \\times \\omega )$ has infinite intersections with the preimage $q^{-1}(U_i)$ and hence $\\lbrace (l+i,f)\\rbrace \\in \\overline{q^{-1}(U_i)}\\subset q^{-1}(\\overline{U}_i)$ .", "Taking into account that the number $l$ is odd, we conclude that $q(l,f)=q(l+1,f)\\in \\overline{U}_0\\cap 
\\overline{U}_1=\\emptyset .$ which is a desired contradiction completing the proof of the claim.", "Claim 3 The space $X$ admits an embedding into a Hausdorff totally countably compact space.", "Using the Kuratowski-Zorn Lemma, enlarge the almost disjoint family $D\\cup L$ to a maximal almost disjoint family $M\\subset [\\omega \\times \\omega ]^\\omega $ .", "Consider the space $Y_M=M\\cup (\\omega \\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y_M$ such that for every $y\\in M\\cap U$ the set $y\\setminus U\\subset \\omega \\times \\omega $ is finite.", "It follows that $Y_M$ is a regular locally compact first-countable space, containing $Y$ as an open dense subspace.", "The maximality of $M$ implies that each sequence in $\\omega \\times \\omega $ contains a subsequence that converges to some point of the space $Y_M$ .", "This property implies that the subspace $\\tilde{Y}:=(W_\\omega M)\\cup (\\omega \\times \\omega )$ of the Wallman extension of $W(Y_M)$ is totally countably compact.", "Repeating the argument from Example REF , one can show that the space $\\tilde{Y}$ is Hausdorff.", "Let $\\tilde{Z}=\\lbrace \\infty \\rbrace \\cup (\\omega \\times \\tilde{Y})$ where $\\infty \\notin \\omega \\times \\tilde{Y}$ .", "The space $\\tilde{Z}$ is endowed with the topology consisting of the sets $W\\subset \\tilde{Z}$ such that for every $n\\in \\omega $ the set $\\lbrace y\\in \\tilde{Y}:(n,y)\\in W\\rbrace $ is open in $\\tilde{Y}$ , and if $\\infty \\in W$ , then there exists $n\\in \\omega $ such that $\\bigcup _{m\\ge n}\\lbrace m\\rbrace \\times \\tilde{Y}\\subset W$ .", "Taking into account that the space $\\tilde{Y}$ is Hausdorff and totally countably compact, we can prove that so is the the space $\\tilde{Z}$ .", "Let $\\sim $ be the smallest equivalence relation on $\\tilde{Z}$ such that $\\mbox{$(2n,\\lambda )\\sim (2n+1,\\lambda )$ and $(2n+1,d)\\sim (2n+2,d)$}$ for any $n\\in \\omega $ , $\\lambda \\in W_\\omega L$ and 
$d\\in W_\\omega D$ .", "Let $\\tilde{X}$ be the quotient space $\\tilde{Z}/_\\sim $ of $\\tilde{Z}$ by the equivalence relation $\\sim $ .", "It is easy to see that the space $\\tilde{X}$ is Hausdorff, totally countably compact and contains the space $X$ as a dense subspace.", "In the construction of the space $X$ we shall use almost disjoint dominating subsets of $\\omega ^\\omega $ .", "Let us recall [3] that a subset $D\\subset \\omega ^\\omega $ is called dominating if for any $x\\in \\omega ^\\omega $ there exists $y\\in D$ such that $x\\le ^* y$ , which means that $x(n)\\le y(n)$ for all but finitely many numbers $n\\in \\omega $ .", "By $\\mathfrak {d}$ we denote the smallest cardinality of a dominating subset $D\\subset \\omega ^\\omega $ .", "It is clear that $\\omega _1\\le \\mathfrak {d}\\le \\mathfrak {c}$ .", "We say that a family of function $D\\subset \\omega ^\\omega $ is almost disjoint if for any distinct $x,y\\in D$ the intersection $x\\cap y$ is finite.", "Here we identify a function $x\\in \\omega ^\\omega $ with its graph $\\lbrace (n,x(n)):n\\in \\omega \\rbrace $ and hence identify the set of functions $\\omega ^\\omega $ with a subset of the family $[\\omega \\times \\omega ]^\\omega $ of all infinite subsets of $\\omega \\times \\omega $ .", "Claim 1 There exists an almost disjoint dominating subset $D\\subset \\omega ^\\omega $ of cardinality $|D|=\\mathfrak {d}$ .", "By the definition of $\\mathfrak {d}$ , there exists a dominating family $\\lbrace x_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}\\subset \\omega ^\\omega $ .", "It is well-known that $[\\omega ]^\\omega $ contains an almost disjoint family $\\lbrace A_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {c}}$ of cardinality continuum.", "For every $\\alpha <\\mathfrak {d}$ choose a strictly increasing function $y_\\alpha :\\omega \\rightarrow A_\\alpha $ such that $x_\\alpha \\le y_\\alpha $ .", "Then the set $D=\\lbrace y_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}$ is 
dominating and almost disjoint.", "By the definition of $\\mathfrak {d}$ , there exists a dominating family $\\lbrace x_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}\\subset \\omega ^\\omega $ .", "It is well-known that $[\\omega ]^\\omega $ contains an almost disjoint family $\\lbrace A_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {c}}$ of cardinality continuum.", "For every $\\alpha <\\mathfrak {d}$ choose a strictly increasing function $y_\\alpha :\\omega \\rightarrow A_\\alpha $ such that $x_\\alpha \\le y_\\alpha $ .", "Then the set $D=\\lbrace y_\\alpha \\rbrace _{\\alpha \\in \\mathfrak {d}}$ is dominating and almost disjoint.", "By Claim REF , there exists an almost disjoint dominating subset $D\\subset \\omega ^\\omega \\subset [\\omega \\times \\omega ]^\\omega $ .", "For every $n\\in \\omega $ consider the vertical line $\\lambda _n=\\lbrace n\\rbrace \\times \\omega $ and observe that the family $L=\\lbrace \\lambda _n\\rbrace _{n\\in \\omega }$ is disjoint and the family $D\\cup L\\subset [\\omega \\times \\omega ]^\\omega $ is almost disjoint.", "Consider the space $Y=(D\\cup L)\\cup (\\omega \\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y$ such that for every $y\\in (D\\cup L)\\cap U$ the set $y\\setminus U\\subset \\omega \\times \\omega $ is finite.", "Observe that all points in the set $\\omega \\times \\omega $ are isolated in $Y$ .", "Using the almost disjointness of the family $D\\cup L$ , it can be shown that the space $Y$ is regular, separable, locally countable, scattered and locally compact.", "Choose any point $\\infty \\notin \\omega \\times Y$ and consider the space $Z=\\lbrace \\infty \\rbrace \\cup (\\omega \\times Y)$ endowed with the topology consisting of the sets $W\\subset Z$ such that for every $n\\in \\omega $ the set $\\lbrace y\\in Y:(n,y)\\in W\\rbrace $ is open in $Y$ , and if $\\infty \\in W$ , then there exists $n\\in \\omega $ such that $\\bigcup _{m\\ge n}\\lbrace m\\rbrace \\times 
Y\\subset W$ .", "for every $n\\in \\omega $ the set $\\lbrace y\\in Y:(n,y)\\in W\\rbrace $ is open in $Y$ , and if $\\infty \\in W$ , then there exists $n\\in \\omega $ such that $\\bigcup _{m\\ge n}\\lbrace m\\rbrace \\times Y\\subset W$ .", "It is easy to see $Z=\\lbrace \\infty \\rbrace \\cup (\\omega \\times Y)$ is first-countable, separable, scattered and regular.", "Let $\\sim $ be the smallest equivalence relation on $Z$ such that $\\mbox{$(2n,\\lambda )\\sim (2n+1,\\lambda )$ and $(2n+1,d)\\sim (2n+2,d)$}$ for any $n\\in \\omega $ , $\\lambda \\in L$ and $d\\in D$ .", "Let $X$ be the quotient space $Z/_\\sim $ of $Z$ by the equivalence relation $\\sim $ .", "It is easy to see that the equivalence relation $\\sim $ has at most two-element equivalence classes and the quotient map $q:Z\\rightarrow X$ is closed and hence perfect.", "Applying [4], we conclude that the space $X$ is regular.", "It is easy to see that $X$ is separable, scattered and first-countable.", "It remains to show that $X$ has the properties (2), (3) of Example REF .", "This is proved in the following two claims.", "Claim 2 The space $X$ does not admit an embedding into an Urysohn countably compact space.", "To derive a contradiction, assume that $X=q(Z)$ is a subspace of an Urysohn countably compact space $C$ .", "By the countable compactness of $C$ , the set $q(\\lbrace 0\\rbrace \\times L)\\subset X\\subset C$ has an accumulation point $c_0\\in C$ .", "The point $c_0$ is distinct from $q(\\infty )$ , as $q(\\infty )$ is not an accumulation point of the set $q(\\lbrace 0\\rbrace \\times L)$ in $X$ .", "Let $l\\in \\omega $ be the largest number such that $c_0$ is an accumulation point of the set $q(\\lbrace l\\rbrace \\times L)$ in $C$ .", "Let us show that the number $l$ is well-defined.", "Indeed, by the Hausdorffness of the space $C$ , there exists a neighborhood $W\\subset C$ of $q(\\infty )$ such that $c_0\\lnot \\subset \\overline{W}$ .", "By the definition of the topology of the 
space $Z$ , there exists $m\\in \\omega $ such that $\\bigcup _{k\\ge m}\\lbrace k\\rbrace \\times Y\\subset q^{-1}(W)$ .", "Then $c_0$ is not an accumulation point of the set $\\bigcup _{k\\ge m}q(\\lbrace k\\rbrace \\times L)$ and hence the number $l$ is well-defined and $l<m$ .", "The definition of the equivalence relation $\\sim $ implies that the number $l$ is odd.", "By the countable compactness of $C$ , the infinite set $q(\\lbrace l+1\\rbrace \\times L)$ has an accumulation point $c_1\\in C$ .", "The maximality of $l$ ensures that $c_1\\ne c_0$ .", "By the Urysohn property of $C$ , the points $c_0,c_1$ have open neighborhoods $U_0,U_1\\subset C$ with disjoint closures in $C$ .", "For every $i\\in \\lbrace 0,1\\rbrace $ consider the set $J_i=\\lbrace n\\in \\omega :q(l+i,\\lambda _n)\\in U_i\\rbrace $ , which is infinite, because $c_i$ is an accumulation point of the set $q(\\lbrace l+i\\rbrace \\times L)=\\lbrace q(l+i,\\lambda _n):n\\in \\omega \\rbrace $ .", "For every $n\\in J_i$ the open set $q^{-1}(U_i)\\subset Z$ contains the pair $(l+i,\\lambda _n)$ .", "By the definition of the topology at $(l+i,\\lambda _n)$ , the set $(\\lbrace l+i\\rbrace \\times \\lambda _n)\\setminus q^{-1}(U_i)\\subset \\lbrace l+i\\rbrace \\times \\lbrace n\\rbrace \\times \\omega $ is finite and hence is contained in the set $\\lbrace l+i\\rbrace \\times \\lbrace n\\rbrace \\times [0,f_i(n)]$ for some number $f_i(n)\\in \\omega $ .", "Using the dominating property of the family $D$ , choose a function $f\\in D$ such that $f(n)\\ge f_i(n)$ for any $i\\in \\lbrace 0,1\\rbrace $ and $n\\in J_i$ .", "It follows that for every $i\\in \\lbrace 1,2\\rbrace $ the set $\\lbrace l+i\\rbrace \\times f\\subset \\lbrace l+i\\rbrace \\times (\\omega \\times \\omega )$ has infinite intersections with the preimage $q^{-1}(U_i)$ and hence $\\lbrace (l+i,f)\\rbrace \\in \\overline{q^{-1}(U_i)}\\subset q^{-1}(\\overline{U}_i)$ .", "Taking into account that the number $l$ is odd, we conclude 
that $q(l,f)=q(l+1,f)\\in \\overline{U}_0\\cap \\overline{U}_1=\\emptyset ,$ which is the desired contradiction completing the proof of the claim.", "Claim 3 The space $X$ admits an embedding into a Hausdorff totally countably compact space.", "Using the Kuratowski-Zorn Lemma, enlarge the almost disjoint family $D\\cup L$ to a maximal almost disjoint family $M\\subset [\\omega \\times \\omega ]^\\omega $ .", "Consider the space $Y_M=M\\cup (\\omega \\times \\omega )$ endowed with the topology consisting of the sets $U\\subset Y_M$ such that for every $y\\in M\\cap U$ the set $y\\setminus U\\subset \\omega \\times \\omega $ is finite.", "It follows that $Y_M$ is a regular locally compact first-countable space, containing $Y$ as an open dense subspace.", "The maximality of $M$ implies that each sequence in $\\omega \\times \\omega $ contains a subsequence that converges to some point of the space $Y_M$ .", "This property implies that the subspace $\\tilde{Y}:=(W_\\omega M)\\cup (\\omega \\times \\omega )$ of the Wallman extension of $W(Y_M)$ is 
totally countably compact.", "Repeating the argument from Example REF , one can show that the space $\\tilde{Y}$ is Hausdorff.", "Let $\\tilde{Z}=\\lbrace \\infty \\rbrace \\cup (\\omega \\times \\tilde{Y})$ where $\\infty \\notin \\omega \\times \\tilde{Y}$ .", "The space $\\tilde{Z}$ is endowed with the topology consisting of the sets $W\\subset \\tilde{Z}$ such that for every $n\\in \\omega $ the set $\\lbrace y\\in \\tilde{Y}:(n,y)\\in W\\rbrace $ is open in $\\tilde{Y}$ , and if $\\infty \\in W$ , then there exists $n\\in \\omega $ such that $\\bigcup _{m\\ge n}\\lbrace m\\rbrace \\times \\tilde{Y}\\subset W$ .", "Taking into account that the space $\\tilde{Y}$ is Hausdorff and totally countably compact, we can prove that so is the space $\\tilde{Z}$ .", "Let $\\sim $ be the smallest equivalence relation on $\\tilde{Z}$ such that $\\mbox{$(2n,\\lambda )\\sim (2n+1,\\lambda )$ and $(2n+1,d)\\sim (2n+2,d)$}$ for any $n\\in \\omega $ , $\\lambda \\in W_\\omega L$ and $d\\in W_\\omega D$ .", "Let $\\tilde{X}$ be the quotient space $\\tilde{Z}/_\\sim $ of $\\tilde{Z}$ by the equivalence relation $\\sim $ .", "It is easy to see that the space $\\tilde{X}$ is Hausdorff, totally countably compact and contains the space $X$ as a dense subspace.", "However, we do not know the answer to the following intriguing problem: Problem 2 Is it true that each (scattered) regular topological space can be embedded into a Hausdorff countably compact topological space?" ] ]
1906.04541
[ [ "Dark matter capture in celestial objects: Improved treatment of multiple\n scattering and updated constraints from white dwarfs" ], [ "Abstract We revisit dark matter (DM) capture in celestial objects, including the impact of multiple scattering, and obtain updated constraints on the DM-proton cross section using observations of white dwarfs.", "Considering a general form for the energy loss distribution in each scattering, we derive an exact formula for the capture probability through multiple scatterings.", "We estimate the maximum number of scatterings that $can$ take place, in contrast to the number $required$ to bring a dark matter particle to rest.", "We employ these results to compute a \"dark\" luminosity $L_{\\rm DM}$, arising solely from the thermalized annihilation products of the captured dark matter.", "Demanding that $L_{\\rm DM}$ not exceed the luminosity of the white dwarfs in the M4 globular cluster, we set a bound on the DM-proton cross section: $\\sigma_{p} \\lesssim 10^{-44} {\\rm cm}^2$, almost independent of the dark matter mass between 100 GeV and 1 PeV and mildly weakening beyond.", "This is a stronger constraint than those obtained by direct detection experiments in both large mass $\\left(M \\gtrsim 5 \\,\\,\\rm TeV\\right)$ and small mass $\\left(M \\lesssim 10\\,\\, \\rm GeV\\right)$ regimes.", "For dark matter lighter than 350 MeV, which is beyond the sensitivity of present direct detection experiments, this is the strongest available constraint." 
], [ "Introduction", "A weakly interacting massive particle (WIMP) is a well-motivated candidate for dark matter — a scenario that can be tested in a variety of different ways [1].", "Theories that address the relative smallness of the electroweak scale can “miraculously” predict a relic WIMP density that is consistent with the observed cosmological dark matter density [2].", "However, in addition to the so-called WIMP miracle, it is the eminently testable nature of WIMPs that has driven the experimental search for said particles.", "They are generically predicted to have non-negligible interactions with Standard Model (SM) particles: they can be produced at colliders, can directly collide with SM particles in the lab and elsewhere, and can be indirectly detected through the anomalous fluxes of SM particles from their annihilations.", "The very same vaunted testability of WIMPs has however led to some degree of disappointment at not having seen a positive signal yet.", "Searches using the Large Hadron Collider (LHC) haven't found any trace of new physics up to the TeV scale [3], [4].", "As a result, the parent theories now appear to be less well-motivated.", "The strongest challenge to WIMPs has however come from direct detection experiments that have improved the constraints by many orders of magnitude in the past decade [5], [6], [7].", "For masses around tens of GeV the constraints are now strong enough to disfavor large parts of parameter space motivated by the parent theories.", "Indirect searches for such dark matter particles have also largely yielded null results [8].", "Making further progress appears challenging.", "LHC searches will continue, but not explore significantly higher energies.", "The more sensitive direct detection experiments will soon reach a scale that will be difficult to improve upon.", "In addition, they will have to contend with the background due to neutrino-nucleon scattering [9], making dark matter searches more difficult.", "On the 
indirect detection front, it appears that uncertainties in backgrounds and systematics will continue to plague the attempts to extract a signal for dark matter annihilation.", "Nevertheless, it is now being appreciated that the WIMP paradigm is not as constrained as one might naively think.", "For one, the allowed range of masses for WIMP-like dark matter is larger than previously emphasized.", "While the hope for new physics at the TeV scale has not yet been met, as far as the WIMP miracle is concerned, the mass range for WIMPs can be quite wide — larger than $\\sim $ keV, so that the dark matter is cold, but smaller than $\\sim 100$ TeV, so that its annihilation rate does not violate unitarity.", "Throughout this mass range, WIMPs can produce the observed cosmological density with a suitable annihilation rate [10].", "Direct detection experiments are not yet sufficiently sensitive at the lower dark matter masses ($\\lesssim 1$ GeV) and the possibility of such sub-GeV dark matter remains open [11], [12], [13].", "Even the upcoming and planned new detectors will only constrain dark matter heavier than $\\sim $ 350 MeV [14].", "For even lighter dark matter masses, in the MeV range, electron recoil experiments can be more relevant but their sensitivity is also rather modest [15], [16], [17], [18], [19].", "Interestingly, for indirect detection even in the canonical tens-of-GeV range, the perceived stringent constraints are only for annihilations to specific channels and the less model-dependent constraints are not very stringent [20].", "Obviously, the annihilations to neutrinos are much harder to probe.", "At larger WIMP masses, the constraints are significantly weaker.", "Thus, it is worthwhile to re-evaluate the multipronged search strategy for WIMP-like dark matter, recognizing the wider putative range of WIMP masses and unexplored territory.", "In this paper, we revisit one prong of this strategy — the search for signatures of WIMP-like dark matter captured in 
celestial objects.", "This search can probe really weak interactions between WIMPs and SM particles, while being practically insensitive to the dark matter mass and annihilation channel.", "Thus, though the bounds require astrophysical modeling, they are quite strong at low and high masses and are insensitive to many particle physics details.", "A dark matter particle in the galactic halo, while passing through an astrophysical object, such as the Earth, the Sun, white dwarfs, and neutron stars, can lose its kinetic energy by colliding with the protons, neutrons, nuclei, and electrons in the medium.", "If, as a result, the dark matter particle is slowed to below the object's escape velocity, it gets captured (see Fig.", "REF ).", "The quantitative description of the capture of dark matter by scattering with nucleons was developed by Press and Spergel [21] and by Gould [22], [23], [24], [25].", "Figure: A dark matter particle coming in from infinity with velocity $u$ enters the celestial object, e.g., a white dwarf, with velocity $w$ .", "After this, it scatters one or more times, losing energy, and ultimately its velocity falls below the escape velocity of the white dwarf whence it enters a closed orbit.", "During its subsequent passages through the star, it will lose more and more energy before finally being captured.", "These captured dark matter particles have several interesting signatures.", "Over time, the number density of captured dark matter particles increases within the celestial object, and dark matter may begin to appreciably annihilate.", "As long as these annihilations are into particles that can thermalize with the medium, other details become unimportant and they essentially only heat up the celestial object.", "Astrophysical observations can be sensitive to such anomalous heating and offer a powerful search strategy.", "As an example, neutrino signals in terrestrial neutrino detectors from such captured dark matter within the Sun have been studied in 
the literature earlier  [26], [27], [28], [29], [30], [31], [32], [33].", "Others have calculated the effect of this accumulated dark matter on the cooling of celestial objects [34], [35], [36], [37], or have compared the dark luminosity with the observed luminosity to provide stringent constraints on dark matter interactions with SM particles [35], [38], [39].", "More recently, limits on the DM-nucleon cross section have also been obtained from non-observation of collapse of massive white dwarfs [40] or from neutron star heating [41], [42], [43], [44].", "Most of the earlier treatments assume that the dark matter particle is captured either after a single collision or not at all.", "This is a reasonable approximation if the cross section of interaction $\\sigma $ is small enough, so that the free streaming length $\\lambda $ of the dark matter particle is as large as the size of the celestial object itself.", "However, this approximation fails in two distinct ways, as recently pointed out by Bramante et al. 
[45].", "Firstly, dark matter that is much heavier than the target particles loses small amounts of energy per collision and consequently requires multiple collisions to lose enough energy to be captured.", "For massive dark matter with mass $\\gtrsim 100$ TeV, multiple scatterings can therefore play an important role.", "Secondly, the smaller the radius of the celestial object the more pronounced will be the effect of multiple scatterings in capturing dark matter.", "This is understandable because the number of scatterings inside the star is $\\sim R/\\lambda = n_t\\,\\sigma \\, R \\simeq \\sigma \\,{\\cal N}_t/ R^2$ , where $n_t$ is the number density of the target particles inside the object.", "Obviously, larger cross sections lead to higher probability for multi-scatter capture.", "However, we should keep in mind that the cross section cannot be arbitrarily large.", "The maximum allowed cross section is given by the geometrical cross section per target particle $\\sigma _{\\rm sat}=\\pi R^2/ {\\cal N}_t$ , where ${\\cal N}_t$ is the number of target particles in the object.", "In addition, there is yet another way in which the single scattering approximation fails: if the differential scattering cross section for the dark matter collisions is forward peaked.", "Here too, energy loss in a single collision is typically small and the cumulative effect of multiple collisions may dominate.", "In this work, we will not dwell too much on this third possibility but the formalism we will develop here is capable of including this possibility as well.", "In this work, we improve the treatment of the multi-scatter capture of dark matter in celestial objects and derive constraints using observed white dwarfs.", "In Sec.", ", we recapitulate the original treatments by Gould [22], [23], [24], [25] and the more recent treatment of capture via multiple scattering by Bramante et al. 
[45].", "We make conceptual and technical improvements in the underlying formalism, treating the energy loss distribution more precisely.", "We calculate the rate of capture of dark matter through multiple scatterings and its contribution to the luminosities of the stars.", "In Sec.", ", we then follow the treatment of Bertone and Fairbairn [35], and compare the dark luminosity with the luminosity of white dwarfs observed in the M4 globular cluster.", "With the inclusion of multiple scattering, we find that for very heavy dark matter with masses $\\gtrsim 5$  TeV, where multiple scattering is important, we are able to place stronger constraints than were previously obtained.", "We are also able to place completely new constraints on dark matter lighter than $\\sim $ 350 MeV, and improve the present limit on $\\sigma _{p}$ for sub-GeV dark matter from direct detection experiments by several orders of magnitude.", "We finally conclude in Sec.", "." ], [ "Review of previous treatments", "A dark matter particle in the halo can be gravitationally attracted towards an astronomical object, undergo one or more collisions inside the object, and eventually get captured.", "A schematic diagram of such a scenario is shown in Fig.", "REF .", "Far away from the object, the dark matter particle has a velocity $u$ and when it reaches the surface of the object its velocity increases to $w$ , given by $w^2 = u^2 + v_{\\rm esc}^2\\,.$ The dark matter particle may undergo one or many scatterings as it transits through the object.", "The velocity of the incoming dark matter particle decreases as a result of these collisions with the target nucleons or electrons in the medium.", "If eventually its velocity $v_f$ becomes less than the escape velocity $v_{\\rm esc}$ , it is captured.", "Here, we are assuming that the constituent particles of the astronomical object are at rest in the frame of the object.", "That is, they have no thermal motions and the dark matter particle can only lose 
energy.", "This is a good approximation when $\\frac{1}{2}M_{\\rm DM} v_{\\rm esc}^2 \\gtrsim k_B T$ , i.e., the dark matter is not too light and the star is not too hot.", "For example, in a solar mass white dwarf with temperatures of around $10^6$ K, this lower limit is approximately $M_{\\rm DM}\\gtrsim 6$  MeV.", "The rate at which a dark matter particle gets captured in the object depends not only on the size of the object and the flux of dark matter particles, but also on the probability of collisions and the probability of incurring energy loss.", "Therefore, the capture rate takes the form $C_{\\rm tot} = \\sum _{\\rm N} C_{\\rm N} &=& \\sum _{\\rm N} \\underbrace{\\pi R^2}_\\textrm {area of the object}\\times \\, \\underbrace{p_{\\rm N}(\\tau )}_\\textrm {probability for N collisions} \\nonumber \\\\&& \\times \\,\\underbrace{n_{\\rm DM} \\int \\dfrac{f(u)du}{u}\\,(u^2+v_{\\rm esc}^2)}_\\textrm {DM flux}\\,\\,\\,\\times \\underbrace{g_{\\rm N}(u)}_\\textrm {probability that $v_f \\le v_{\\rm esc}$ after N collisions}\\,.$ The capture can occur after the $N^{\\rm th}$ collision, and the total rate is simply the sum of the rates corresponding to each $N$ .", "Here, $\\pi R^2$ is the area of the astrophysical object within which the dark matter particle is captured.", "$p_N(\\tau )$ is the probability that a dark matter particle with optical depth $\\tau $ undergoes $N$ scatterings.", "If we take into account all the incidence angles encoded in the variable $y$ , we have $p_{\\rm N}(\\tau )=2\\int _{0}^{1}dy\\,\\dfrac{ye^{-y\\tau }(y\\tau )^N}{N!}\\,,$ where the optical depth $\\tau = 3\\sigma {\\cal N}_t/(2\\pi R^2)$ , ${\\cal N}_t$ being the total number of targets in the object and $\\sigma $ is the DM-target interaction cross section.", "The flux of the captured dark matter particles is given by the product of the dark matter number density in the halo, $n_{\\rm DM}=\\rho _{\\rm DM}/M_{\\rm DM}$ , and their average velocity.", "The dark matter energy density 
near the celestial object is denoted by $\\rho _{\\rm DM}$ and in the Solar vicinity it is taken to be $\\sim $ 0.3 GeV cm$^{-3}$ .", "However, in other overdense regions of the Universe it can be much higher.", "$f(u)$ is the velocity distribution function of the dark matter particle, that is usually taken to be a Maxwell Boltzmann (MB) distribution $f_{\\rm MB}(v)= \\left(\\frac{3}{2 \\pi \\bar{v}^2} \\right)^\\frac{3}{2} 4\\pi v^2 \\exp \\left[-\\frac{3v^2}{2 \\bar{v}^2} \\right]\\,,$ with $\\bar{v}\\sim 287.8$ km s$^{-1}$ being the rms velocity of the distribution.", "To account for the motion of the Sun with respect to the rest frame of the galaxy, the distribution function in the Sun's rest frame is boosted, and modeled as $f_{\\rm Sun}(v)= f_{\\rm MB}(v)\\,e^{-\\eta ^2}\\dfrac{\\sinh (2x\\eta )}{2x\\eta }\\,,$ where $x^2 = 3v^2/(2\\bar{v}^2)$ and $\\eta ^2 = 3\\tilde{v}^2/(2\\bar{v}^2)$ , $\\tilde{v}\\sim 247$ km s$^{-1}$ being the velocity of the Sun with respect to the dark matter halo.", "To derive analytic results, we use the usual Maxwell-Boltzmann distribution in the next section.", "However, all the final results have been computed (numerically) using the boosted distribution, wherever applicable.", "The capture probability $g_{\\rm N}$ , i.e., the probability that the final velocity of dark matter after $N$ scatterings becomes less than $v_{\\rm esc}$ , i.e., $v_f \\le v_{\\rm esc}$ , is given by $g_{\\rm N}(u)&=& \\int _{0}^{1} dz_1 \\int _{0}^{1} dz_2 ...\\int _{0}^{1} dz_{\\rm N}\\, s_1(z_1)\\times s_2(z_1,z_2)...s_{\\rm N}(z_1,z_2 ... 
z_N) \\nonumber \\\\&\\times & \\Theta \\bigg (v_{\\rm esc}- \\left(u^2+v_{\\rm esc}^2\\right)^{1/2} \\prod _{i=1}^{N} (1-z_i \\beta )^{1/2}\\bigg ) \\,.$ Here, $z_i$ is a random variate which takes values between 0 and 1 and encodes the energy lost by the dark matter particle in the $i^{\\rm th}$ scattering.", "The kinetic energy lost in the $i^{\\rm th}$ scattering is given by $\\Delta E = z_i\\beta E$ , where $\\beta = (4 M_{\\rm DM} M_t)/{(M_{\\rm DM}+M_t)^2}$ is the maximum fraction of energy that can be lost, with $M_t$ being the mass of the target particles.", "This variable $z_i$ is in fact closely related to the scattering angle in the center of mass frame, i.e., $z =\\sin ^2(\\theta _{\\rm CM}/2)$ , as explained in Appendix .", "Naturally, $g_{\\rm N}$ depends on the probability distributions for the scattering angle encoded in $s_i(z_1,z_2,...z_i)$ .", "Here we confine our discussion to the regime where the differential cross section is independent of the scattering angle and hence all $s_i(z_1,z_2,...z_i)=1$ .", "More general choices of $s_i$ can be considered without much more difficulty." 
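As a quick numerical cross-check of the ingredients above, the sketch below evaluates the maximum energy-loss fraction $\beta$ and the multi-scattering probability $p_{\rm N}(\tau)$ by direct quadrature, and verifies that the $p_{\rm N}$ sum to unity over $N$. This is a minimal sketch: the composite Simpson's rule, the step count, and the illustrative value $\tau = 5$ are our own choices, not from the text.

```python
import math

def beta(M_dm, M_t):
    # Maximum fraction of kinetic energy a DM particle of mass M_dm
    # can lose in one elastic collision with a target of mass M_t.
    return 4.0 * M_dm * M_t / (M_dm + M_t) ** 2

def p_N(tau, N, steps=2000):
    # p_N(tau) = 2 * int_0^1 dy y exp(-y*tau) (y*tau)^N / N!,
    # evaluated with composite Simpson's rule; lgamma avoids overflow of N!.
    def f(y):
        if y <= 0.0:
            return 0.0  # integrand vanishes at y = 0 for every N
        return 2.0 * y * math.exp(N * math.log(y * tau) - y * tau
                                  - math.lgamma(N + 1))
    h = 1.0 / steps
    total = f(0.0) + f(1.0)
    for i in range(1, steps):
        total += (4.0 if i % 2 else 2.0) * f(i * h)
    return total * h / 3.0

if __name__ == "__main__":
    # Equal masses give beta = 1: all kinetic energy can be transferred.
    print(beta(1.0, 1.0))                        # -> 1.0
    # Since a particle scatters *some* number of times (including zero),
    # the probabilities must sum to 1 for any optical depth.
    tau = 5.0
    print(sum(p_N(tau, n) for n in range(120)))  # ~ 1.0
```

The unit-sum check works because summing the Poisson-like factor over $N$ inside the integral leaves $2\int_0^1 y\,dy = 1$, independent of $\tau$.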
], [ "Exact formula for capture probability", "In order to get captured, the final velocity of dark matter particle must become less than the escape velocity.", "The probability that the dark matter particle with velocity $w$ scatters to a final velocity $v_f$ which is less than or equal to $v_{\\rm esc}$ , after $N$ number of scatterings, is given by $g_{\\rm N}(u)= \\int _{0}^{1} dz_1 \\int _{0}^{1} dz_2 ...\\int _{0}^{1} dz_{\\rm N}\\, \\Theta \\bigg (v_{\\rm esc}- \\left(u^2+v_{\\rm esc}^2\\right)^{1/2} \\prod _{i=1}^{N} (1-z_i \\beta )^{1/2}\\bigg )\\, ,$ where the $dz_i$ integrals correspond to sum over all possible scattering trajectories.", "We compute this integral analytically to find $g_{\\rm N}(u)= \\frac{1}{\\beta } \\frac{v_{\\rm esc}^2}{u^2+v_{\\rm esc}^2} \\left[\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right]^{N-1}-\\left( \\frac{1}{\\beta }-1\\right)\\,.$ We interpret $g_{\\rm N}(u)$ as the probability that a dark matter particle with speed $u$ at infinity will get captured at its $N^{\\rm th}$ collision provided that $N$ collisions occur.", "See Appendix  for a brief motivation behind this interpretation.", "To ensure that $g_{\\rm N}$ is positive, we write it as $g_{\\rm N}(u)&=& \\left[ \\frac{1}{\\beta } \\frac{v_{\\rm esc}^2}{u^2+v_{\\rm esc}^2} \\left[\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right]^{N-1}-\\left( \\frac{1}{\\beta }-1\\right) \\right] \\times \\nonumber \\\\&& \\Theta \\left( \\left[ \\frac{1}{\\beta } \\frac{v_{\\rm esc}^2}{u^2+v_{\\rm esc}^2} \\left[\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right]^{N-1}-\\left( \\frac{1}{\\beta }-1\\right) \\right] \\right)\\,.$ This differs from the analogous expression in the previous work, where $z_i$ was replaced by its average value of 1/2 [45], which instead gave $g_{\\rm N}^{\\rm approx}(u)=\\Theta \\left(v_{\\rm esc}\\prod _{i=1}^{N} \\left(1-\\frac{1}{2}\\beta \\right)^{-1/2}-(u^2+v_{\\rm esc}^2)^{1/2} \\right) \\,.$ The $\\Theta $ function in Eq.", "(REF ) sets 
an upper limit to the halo velocity $u$ given by $u^2_{\\rm max} \\le v_{\\rm esc}^2 \\left[ \\frac{1}{1-\\beta } \\left(\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right)^{N-1}-1 \\right]\\,.$ Figure: Typical number of scatterings required for a dark matter particle with velocity $u$ to be captured by a solar mass white dwarf with $v_{\\rm esc}\\sim 10^3$ km s$^{-1}$ .", "The dashed lines are the approximate results where each of the energy loss fractions, $z_i$ , was replaced by an “average” value of 1/2.", "The thick solid curves are the maximum number of collisions required for a given $u$ obtained using the exact analytical result.", "Similarly, the thin lines represent the minimum.", "The dotted curves represent the absolute minimum number of collisions required for capture, corresponding to the maximum loss in kinetic energy, i.e., $z_i=1$ .This upper limit on $u$ indicates that dark matter particles with arbitrarily large velocity cannot typically be trapped by the celestial object after $N$ scatterings.", "Furthermore, as $g_{\\rm N}(u)$ is a probability, it should also satisfy the condition $g_{\\rm N}(u) \\le 1$ .", "This imposes a lower limit on $u$ that was not apparent in the single scattering case where it is trivially satisfied.", "Here, $g_{\\rm N}(u) \\le 1$ gives rise to the condition $u^2_{\\rm min} \\ge v_{\\rm esc}^2 \\left[\\left(\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right)^{N-1}-1 \\right]\\,.$ This lower limit encodes that if the velocity of the incoming dark matter particle is below this threshold then it is more likely to be captured already before the $N^{\\rm th}$ collision.", "The expressions for minimum and maximum velocity depend on the assumed expression for $s_i(z_1,z_2,...z_i)$ ; the above expressions have been obtained with a uniform distribution, and similar ones can be obtained for more general choices." 
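The closed-form $g_{\rm N}(u)$, truncated to $[0,1]$ by the $\Theta$-function and the $g_{\rm N}\le 1$ condition above, is straightforward to code up; for $N=1$ it can also be checked against a direct Monte Carlo draw of the energy-loss variable $z$, since in that case the closed form coincides with the defining integral. This is a sketch with illustrative parameter values ($v_{\rm esc}=1$, $\beta=0.5$) of our own choosing.

```python
import math
import random

def g_N(u, v_esc, b, N):
    # Closed-form capture probability, truncated to [0, 1] by the
    # Theta-function and g_N <= 1 conditions; b is the kinematic factor beta.
    c = math.log(1.0 / (1.0 - b)) / b
    val = (v_esc**2 / (u**2 + v_esc**2)) * c**(N - 1) / b - (1.0 / b - 1.0)
    return min(max(val, 0.0), 1.0)

def g1_monte_carlo(u, v_esc, b, samples=200_000, seed=1):
    # Direct simulation of the defining integral for N = 1:
    # capture iff w^2 (1 - z*b) <= v_esc^2, with z uniform on [0, 1].
    rng = random.Random(seed)
    w2 = u**2 + v_esc**2
    hits = sum(w2 * (1.0 - rng.random() * b) <= v_esc**2
               for _ in range(samples))
    return hits / samples

if __name__ == "__main__":
    v_esc, b = 1.0, 0.5
    u = math.sqrt(v_esc**2 / 0.6 - v_esc**2)  # chosen so v_esc^2 / w^2 = 0.6
    print(g_N(u, v_esc, b, 1))          # analytic: 0.6/0.5 - 1 = 0.2
    print(g1_monte_carlo(u, v_esc, b))  # ~ 0.2
```

The clipping also reproduces the two limits discussed above: a slow enough particle ($u \to 0$) is captured with probability 1, while a very fast one has vanishing capture probability.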
], [ "Number of scatterings required for capture", "The conditions $0 \\le g_{\\rm N}(u)\\le 1$ can also be reinterpreted in a slightly different way.", "They give the typical minimum and maximum number of collisions required to capture a dark matter particle with a given velocity $u$ , $1+ \\frac{\\log \\left[(1-\\beta )\\frac{u^2+v_{\\rm esc}^2}{v_{\\rm esc}^2}\\right]}{\\log \\left[ \\frac{\\log \\frac{1}{1-\\beta }}{\\beta } \\right]}\\le N_{\\rm req} \\le 1+ \\frac{\\log \\left[\\frac{u^2+v_{\\rm esc}^2}{v_{\\rm esc}^2}\\right]}{\\log \\left[ \\frac{\\log \\frac{1}{1-\\beta }}{\\beta } \\right]}\\,.$ One should not confuse this quantity with the typical maximum number of scatterings that the dark matter can experience inside the celestial object before coming to rest.", "This latter number depends not only on the capture rate in the object but also on the lifetime of the object.", "In Fig.", "REF , we show the typical maximum required number of scatterings as a function of the dark matter velocity $u$ .", "For smaller dark matter masses and smaller halo velocities, our exact expression in Eq.", "(REF ) (solid lines) always stays larger than 1 and gives a more meaningful result compared to the approximate result (dashed lines).", "This is expected, because multi-scatter capture is less viable for light dark matter particles, and the approximation of replacing $z_i$ by its average value of 1/2 is inaccurate for small $N$ [45].", "The improvement for smaller halo velocity $u$ is also understandable on similar grounds.", "Lower values of $u$ imply a lower initial velocity $w$ , and consequently it is more probable for the dark matter particle to get captured after a few scatterings (lower $N$ ) rather than after many scatterings.", "Remarkably, $N_{\\rm req}$ is never smaller than 1 according to the result we obtain.", "The dotted lines in Fig.", "REF represent the absolute minimum number of collisions that is essential for the dark matter to be captured from 
kinematical considerations alone.", "This happens when the maximum amount of kinetic energy is lost in each collision, i.e., when $z_i=1$ in Eq.", "(REF ).", "Note how the typical minimum number of collisions required (thin lines) is always larger than this absolute minimum number." ], [ "Capture rate", "Using the analytical expression for $g_{\\rm N}(u)$ in Eq.", "(REF ), we can now evaluate the capture rate for $N$ -scattering.", "Using energy per unit mass $\\zeta = u^2/2$ along with the definition of capture rate in Eq.", "(REF ), we find $C_{\\rm N} = \\pi R^2\\, p_{\\rm N}(\\tau ) \\,n_{\\rm DM} \\int _{\\zeta _{\\rm min}}^{\\zeta _{\\rm max}} \\frac{f(\\zeta )d\\zeta }{\\zeta } (\\zeta +\\zeta _{\\rm esc}) \\,g_{\\rm N}(\\zeta ) \\,,$ where $ \\zeta _{\\rm max}$ and $\\zeta _{\\rm min}$ can be obtained from Eq.", "(REF ) and Eq.", "(REF ) respectively, and are given by $\\zeta _{\\rm max} = \\zeta _{\\rm esc} \\left[ \\frac{1}{1-\\beta } \\left(\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right)^{N-1}-1 \\right]\\,,$ and $\\zeta _{\\rm min} = \\zeta _{\\rm esc} \\left[\\left(\\frac{1}{\\beta } \\log \\frac{1}{1-\\beta } \\right)^{N-1}-1 \\right]\\,,$ with $\\zeta _{\\rm esc} = v_{\\rm esc}^2/2$ .", "Finally, using the Maxwell-Boltzmann distribution from Eq.", "(REF ), the capture rate for $N$ -scattering is $C_{\\rm N}= \\left(\\frac{8}{\\pi } \\right)^\\frac{1}{2} \\pi R^2\\,p_{\\rm N}(\\tau ) \\, \\frac{n_{\\rm DM}}{\\sqrt{\\bar{\\zeta }}} \\left[ \\frac{\\zeta _{\\rm esc}}{\\beta ^N} \\left( \\log \\frac{1}{1-\\beta } \\right)^{N-1} p-\\left( \\frac{1}{\\beta }-1 \\right) q \\right]\\,,$ where $p$ and $q$ are given as $p= \\exp \\left[\\frac{-\\zeta _{\\rm min}}{\\bar{\\zeta }} \\right] - \\exp \\left[ \\frac{-\\zeta _{\\rm max}}{\\bar{\\zeta }} \\right]\\,,$ and $q= \\left(\\bar{\\zeta }+(\\zeta _{\\rm esc}+\\zeta _{\\rm min}) \\right) \\exp \\left[ \\frac{-\\zeta _{\\rm min}}{\\bar{\\zeta }} \\right] - \\left(\\bar{\\zeta }+(\\zeta _{\\rm esc}+\\zeta _{\\rm 
max}) \\right) \\exp \\left[\\frac{-\\zeta _{\\rm max}}{\\bar{\\zeta }} \\right]\\, ,$ with $\\bar{\\zeta } = \\bar{v}^2/3$ .", "Figure: Number of particles captured after NN collisions during the life time of the different celestial objects plotted against the number of collisions NN.", "Note that, here σ\\sigma denotes the interaction cross section with the relevant target.", "For example, in Earth the target nucleus is taken to be that of iron while for neutron stars it is simply a neutron.", "The density of dark matter, for simplicity, has been taken to be the that around the solar system, i.e., 0.3 GeV cm -3 ^{-3}." ], [ "Number of scatterings allowed in the object", "It is obvious that the maximum number of scatterings that a dark matter particle can actually undergo must also depend on the time over which such captures can take place.", "Roughly, $\\tau C_{\\rm N}$ gives the total number of dark matter particles that are captured at their $N^{\\rm th}$ collision within the lifetime $\\tau $ of the celestial object under study.", "In Fig.", "REF , we show the total number of dark matter particles captured in the Sun, Earth, a typical neutron star and white dwarf, with respect to the number of scatterings it took to capture them within a time $\\tau $ taken to be the age of the Universe.", "Note that the number of captured particles after $N\\gtrsim 10$ or so is already smaller than 1, for a cross section $\\sigma $ that we will see is marginally allowed.", "In contrast, the typical maximum number of scatters needed to capture WIMPs, as shown in Fig.", "REF , are much larger.", "This means that the capture rate is dominated by the low-velocity part of the galactic dark matter halo or they are extremely rare events.", "It is easy to see that the $C_{\\rm N}$ are monotonically decreasing, so that if $\\tau \\,C_{\\rm N}<1$ , on average less than one dark matter particles is captured after more than $N$ collisions.", "Thus, high-$N$ captures are exceedingly rare 
because the $C_{\\rm N}$ are exponentially decreasing with $N$ .", "We use this physically derived criterion to truncate the series in $C_{\\rm N}$ where $\\tau \\,C_{\\rm N}=1$ ." ], [ "Luminosity via multi-scatter capture and constraints from white dwarfs", "We now consider the capture of dark matter inside white dwarfs.", "White dwarfs are dominantly made up of carbon nuclei, which we take to be the target particle.", "For the range of dark matter masses that are of interest to us in this work, the typical average momentum transferred to a carbon nuclei inside a solar mass white dwarf is $\\lesssim $ MeV.", "This turns out to be much larger than the inverse of the de Broglie wavelength of the nucleus, which is less than a fm$^{-1}$ .", "Thus, we can treat the relevant collisions to be coherent and elastic.", "More precisely, the form factor which describes the loss of coherence in case of large energy transfers turns out to be $\\sim 1$ for low momentum transfers.", "For example, using the Helm form factor [46], we find that for a 10 GeV dark matter $F^2_{\\rm Helm} \\sim 0.8$ for the maximum possible momentum transfers.", "For higher dark matter masses, it goes down to $\\sim 0.3$ and saturates to this constant value.", "To compare with the present direct detection limits, we will translate the DM-carbon cross section $\\sigma $ to DM-proton cross section $\\sigma _p$ .", "As we are in the regime of coherent scattering, for spin-independent interactions and assuming equal contributions from protons and neutrons, this translation is simply given by [47] $\\sigma = \\dfrac{\\mu _N^2}{\\mu _p^2}A^2\\sigma _{p}\\,.$ Here, $\\mu _N$ and $\\mu _p$ are the reduced masses of the dark matter-nuclei and dark matter-proton system.", "The ratio $\\mu _{\\rm N}/\\mu _{\\rm p}$ is $\\sim 1$ for light dark matter particles $M_{\\rm DM} \\lesssim M_p$ and rises to $\\sim 12$ for the heavier $M_{\\rm DM} \\gg M_{\\rm carbon}$ .", "The number of captured dark matter particles 
$N_{\\rm cap}$ evolves as $dN_{\\rm cap}/dt = C_{\\rm tot} - A N_{\\rm cap}^2/2$ , where $A$ is the annihilation rate of the self-conjugate WIMP.", "As long as the capture and annihilation processes are in equilibriumTo ensure that the equilibration time $\\le t_{age} $ , the $\\langle \\sigma _{a} v \\rangle $ must be larger than $\\sim 10^{-56}\\,\\rm cm^{3} \\,s^{-1}$ which is obviously much smaller than expected for thermal WIMPs., the dark luminosity $L_{\\rm DM}$ arising solely from annihilation of captured dark matter particles is given by the mass capture rate $M_{\\rm DM}\\,C_{\\rm tot}$ .", "This additional luminosity is expected to thermalize inside a white dwarf, as long as the annihilation products are SM particlesFor the range of energies considered here, all SM particles, including neutrinos, are expected to thermalize inside a white dwarf..", "In Fig.", "REF , we plot the dark luminosity $L_{\\rm DM}$ as a function of the dark matter mass.", "For collision with carbon nuclei inside solar mass white dwarfs, we note that multi-scatter capture becomes important for dark matter masses $\\gtrsim 10$ TeV.", "This is still an order of magnitude below the unitarity bound $\\sim 100$ TeV [48], and relevant also to canonical thermal WIMPs that are elementary particles.", "Figure: Dark luminosity from annihilation of captured dark matter particles for multiple and single scatterings with carbon nuclei.", "σ\\sigma denotes the interaction cross section of dark matter with the target.To obtain constraints on the dark matter interaction cross section, we now compare the dark luminosity $L_{\\rm DM}$ , which depends mainly on the dark matter properties and the radius of the white dwarf, to the observed luminosities of M4 white dwarfs.", "McCullough and Fairbairn [39] reported independent measurements of luminosity $L_{\\rm obs}$ and temperature $T_{\\rm obs}$ of a few dozen white dwarfs in the M4 cluster.", "These white dwarfs are unique in that they are among the 
oldest known celestial objects and are used extensively to study the age of the Universe itself [49].", "In the absence of a dominant burning mechanism inside these dead stars, they are assumed to be nearly perfect black body emitters.", "Under this assumption, if the luminosity and temperature of a white dwarf are independently measured, we can infer its radius to be $R = \left(L/(4\pi \sigma _0 T^4)\right)^{1/2}$ .", "We next calculate the mass capture rate, i.e., $L_{\rm DM}$ , using the procedure described in Sec.", ", for a fixed dark matter mass and interaction cross section, as a function of the white dwarf radius.", "Demanding that this dark luminosity should not exceed $L_{\rm obs}$ for a white dwarf of known radius, we impose an upper bound on the dark matter cross section for a given dark matter mass.", "Figure: Dark luminosity arising from annihilation of captured dark matter compared with the observed white dwarf luminosities.", "The dark matter mass was fixed at 400 MeV and five benchmark dark matter-proton cross sections are shown.", "The topmost curve corresponds to the luminosity when the DM-nucleus cross section takes its effective maximum value, i.e., $\sigma _{\rm sat}$ .", "The lower curves correspond to smaller cross sections, with the curve marked by $\sigma _p=3\times 10^{-43}$ cm$^2$ being just excluded.", "The local dark matter density in the M4 cluster is taken to be $\sim 10^3$ GeV cm$^{-3}$ and the dispersion velocity to be $\sim 20$ km s$^{-1}$ .", "In Fig.", "REF , the solid lines denote the predicted dark luminosity $L_{\rm DM}$ as a function of the white dwarf radius, for several benchmark DM-proton cross sections and a fixed dark matter mass (400 MeV).", "The position of each colored dot denotes the observed luminosity of a white dwarf and its radius inferred through an independent measurement of its temperature, as explained before.", "The observed temperature is encoded in color, as per the shown 
color-bar.", "The topmost solid line, marked by $\sigma _{\rm sat}$ , denotes the maximum attainable dark luminosity, reached when the cross section hits its saturation limit.", "As argued, $L_{\rm DM}$ must be smaller than $L_{\rm obs}$ .", "Hence, we find that a DM-proton cross section $\sigma _p \sim 10^{-42}$ cm$^2$ is in tension with the lower luminosity white dwarfs.", "Figure: Upper bound on the DM-proton cross section (solid black line) from the observed luminosity of $2.5\times 10^{31}$ GeV s$^{-1}$ and a derived radius of $\sim 9\times 10^6$ m for a white dwarf in the M4 cluster.", "Also shown are the related exclusion limits from direct detection experiments, CRESST-III and CDMSlite in the low mass regime and XENON-1T in the high mass regime, which provide the most stringent bounds.", "The dashed black line corresponds to $\sigma _{\rm sat}$ (translated to nucleonic cross sections).", "Above this, in the light gray shaded region, any cross section is essentially equivalent to $\sigma _{\rm sat}$ and ruled out alike.", "In Fig.", "REF we furnish an upper bound on $\sigma _p$ as a function of the dark matter mass.", "This is obtained by demanding that the dark luminosity contribution to the low luminosity white dwarf represented by the right-most red point in Fig.", "REF be smaller than its observed luminosity.", "The observed luminosity of this white dwarf is $\sim $  $2.5\times 10^{31}$ GeV s$^{-1}$ .", "We assume that, in the worst case scenario, all of this luminosity comes only from burning of trapped dark matter inside the star.", "The radius of this white dwarf is inferred to be $\sim 9\times 10^6$ m. 
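The black-body inference $R = \left(L/(4\pi \sigma _0 T^4)\right)^{1/2}$ used above can be sketched numerically. This is a hedged illustration, not the paper's analysis pipeline: the Stefan-Boltzmann constant is converted to GeV s$^{-1}$ m$^{-2}$ K$^{-4}$ to match the units quoted in the text, and the temperature value below is an assumed input chosen to be consistent with the quoted radius, not a measured value.

```python
import math

# Stefan-Boltzmann constant, converted from 5.670e-8 W m^-2 K^-4
# using 1 W = 6.24e9 GeV/s, so sigma_0 ~ 354 GeV s^-1 m^-2 K^-4.
SIGMA_SB = 353.9

def blackbody_radius(L_obs, T_obs):
    """Radius (m) of a perfect black body with luminosity L_obs (GeV/s)
    and surface temperature T_obs (K): R = sqrt(L / (4*pi*sigma_0*T^4))."""
    return math.sqrt(L_obs / (4.0 * math.pi * SIGMA_SB * T_obs**4))

# The coolest M4 white dwarf quoted in the text has L_obs ~ 2.5e31 GeV/s;
# an assumed temperature near 2.9e3 K reproduces a radius of order 9e6 m.
R = blackbody_radius(2.5e31, 2.9e3)
print(f"inferred radius = {R:.2e} m")
```

This makes explicit why an independent temperature measurement is needed: with $L$ alone, the radius (and hence the geometric capture area entering $L_{\rm DM}$) would be undetermined.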
The most stringent bounds obtained from different direct detection experiments in the light and heavy dark matter regimes are shown for comparison.", "Notice that the constraint is practically independent of the dark matter mass and, unlike the corresponding constraint from direct detection experiments, it remains quite strong at lower and higher dark matter masses.", "This is simply because $L_{\rm DM}=M_{\rm DM}C_{\rm tot}$ , while $C_{\rm tot}$ itself scales as $1/M_{\rm DM}$ due to its dependence on the dark matter number density.", "As a result, the dark matter mass-dependence cancels out and the constraint is practically mass-independent in this range.", "The weak mass-dependence of the constraint on the DM-proton cross section $\sigma _p$ is due to the presence of the form factor and the ratio of the reduced masses, both of which depend on the dark matter mass.", "The dark-gray shaded region in Fig.", "REF corresponds to the parameter space excluded by our results.", "Cross sections exceeding $\sigma _{\rm sat}$ (above the dashed line) are also excluded, but at the same significance as at the dashed line.", "The constraints obtained here are highly competitive.", "In the low-mass regime, i.e., below 10 GeV, this is the strongest available bound.", "For such light dark matter masses, the constraint from direct detection experiments is rather weak, and we were able to make an improvement of nearly 3–7 orders of magnitude when compared with CRESST-III [14] or CDMSlite [50].", "Crucially, because of the signature mass-independence, one finds stringent bounds for dark matter particles lighter than 350 MeV that are below the sensitivity of typical direct detection experiments.", "Likewise, in the high-mass regime above a few TeV, these constraints are the strongest.", "In this regime the improvement due to multi-scattering is important." 
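The residual mass-dependence discussed above enters through the coherent translation $\sigma = (\mu _N^2/\mu _p^2)A^2\sigma _p$ quoted earlier. A minimal sketch of that factor follows; it assumes the carbon mass is exactly $12\,m_p$ (binding energy neglected) and a unit form factor, both simplifying assumptions.

```python
# Hedged sketch of the spin-independent coherent enhancement
# sigma / sigma_p = (mu_N / mu_p)^2 * A^2 for a carbon target.

M_P = 0.938          # proton mass, GeV
A_C = 12             # mass number of carbon
M_C = A_C * M_P      # carbon nucleus mass, GeV (binding neglected)

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def sigma_carbon(sigma_p, m_dm):
    """DM-carbon cross section from a DM-proton cross section sigma_p,
    for dark matter mass m_dm (GeV)."""
    ratio = reduced_mass(m_dm, M_C) / reduced_mass(m_dm, M_P)
    return ratio**2 * A_C**2 * sigma_p

# The enhancement interpolates between A^2 (light DM, mu ratio -> 1)
# and A^4 (heavy DM, mu ratio -> A), as stated in the text.
for m_dm in (0.1, 1.0, 100.0, 1e4):
    print(m_dm, sigma_carbon(1.0, m_dm))
```

The reduced-mass ratio varies only between 1 and $A$, which is why it contributes just a weak mass-dependence to the otherwise flat bound.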
], [ "Variations on the theme", "It is possible that dark matter particles are leptophilic and thus only collide with electrons, or perhaps have interactions that are not spin-independent.", "In these scenarios, and several others, the calculation we perform can be repeated to obtain a corresponding constraint, though the resulting constraints are not as strong.", "As an illustration of how the constraint changes, we rederive our constraint for DM-electron scattering in solar mass white dwarfs.", "This is also motivated by the fact that multiple scatterings are expected to become more important for much smaller dark matter masses with electrons as targets.", "When one considers electrons in a white dwarf, it becomes important to consider the efficiency factor due to Pauli blocking.", "The electron is pushed to a higher momentum state due to its collision with the incoming dark matter particle.", "However, this higher state may or may not be available, owing to Pauli exclusion.", "Hence, while calculating the total capture rate we have to include a corresponding efficiency factor [52], [53] $\xi = {\rm Min}\left[1, \dfrac{\delta p}{p_F}\right]\,,$ where $\delta p \sim \sqrt{2}\,\mu _r \,v_{\rm esc}$ with $\mu _r$ being the corresponding reduced mass.", "The Fermi momentum is $p_F = (3\pi ^2\,n_t)^{1/3}$ .", "For a solar mass white dwarf with $R \sim \mathcal {O}(10^{-2})R_{\rm Sun}$ , and for the range of dark matter masses that we consider in this work, we find that for collisions with electrons $\xi \sim 10^{-2}$ , but with nucleons it is $\sim 1$ .", "So, we expect a suppression in the case of collisions with electrons but not with nucleons (or other heavier nuclei).", "The dark luminosity $L_{\rm DM}$ in the case of collisions with electrons inside a white dwarf is shown in Fig.", "REF .", "We see that, unlike the case with collisions against nuclei, here multi-scatter capture becomes important for much lighter dark matter masses $\sim \mathcal {O}(1)$ GeV, as 
expected.", "Figure: Dark luminosity from annihilation of captured dark matter particles for multiple and single scatterings with electrons.", "$\sigma $ denotes the total cross section of dark matter with the target electrons.", "Unfortunately, with electrons as targets, we find that even with the largest allowed cross section, i.e., $\sigma _{\rm sat}$ , the dark luminosity $L_{\rm DM}$ is always less than the observed luminosity of all the white dwarfs in the M4 globular cluster.", "Hence, we are not able to constrain any physically relevant cross sections for a large range of dark matter masses.", "The main source of this suppression in $L_{\rm DM}$ in the case of electrons is the efficiency factor due to Pauli blocking, as discussed earlier.", "If somewhat colder white dwarfs are observed in the future, they would lead to very strong bounds.", "The limits presented in this work concern only the DM-proton spin-independent cross sections.", "This is because the white dwarfs are primarily rich in spin-zero carbon nuclei, which are the principal targets for the dark matter particles.", "To derive similar bounds on DM-proton spin-dependent cross sections, one has to consider capture of dark matter through collisions with targets having a net non-zero spin [54].", "Recently several groups have explored scenarios where dark matter is captured inside neutron stars due to its collisions with electrons [42] and neutrons [41], and consequently provided stringent projected constraints on $\sigma _{e}$ and $\sigma _{n}$ respectively.", "The limits we have obtained for the DM-proton cross section are competitive." 
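The Pauli-blocking estimate $\xi = {\rm Min}[1, \delta p/p_F]$ from the section above can be reproduced with rough numbers. This is a hedged order-of-magnitude sketch: the escape velocity and target densities are illustrative estimates for a solar-mass white dwarf with $R \sim 10^{-2}R_{\rm Sun}$, not values taken from the paper.

```python
import math

HBAR_C = 1.973e-7    # eV * m
M_E = 0.511e6        # electron mass, eV
M_N = 939.6e6        # nucleon mass, eV

def fermi_momentum(n_t):
    """p_F in eV for target number density n_t (m^-3): p_F = (3 pi^2 n_t)^(1/3)."""
    return HBAR_C * (3.0 * math.pi**2 * n_t) ** (1.0 / 3.0)

def xi(mu_r, v_esc, n_t):
    """Pauli-blocking efficiency Min[1, delta_p / p_F] with
    delta_p ~ sqrt(2) * mu_r * v_esc (mu_r in eV, v_esc in units of c)."""
    delta_p = math.sqrt(2.0) * mu_r * v_esc
    return min(1.0, delta_p / fermi_momentum(n_t))

v_esc = 2.0e-2       # ~6e6 m/s for a solar mass and R ~ 7e6 m (assumed)
n_e = 4e35           # electrons per m^3, order-of-magnitude estimate

# Heavy DM, so the reduced mass is roughly the target mass.
print("electrons:", xi(M_E, v_esc, n_e))       # O(1e-2): strongly blocked
print("nucleons :", xi(M_N, v_esc, 2 * n_e))   # capped at 1: unblocked
```

Even these crude inputs recover the hierarchy quoted in the text, $\xi \sim 10^{-2}$ for electrons versus $\xi \sim 1$ for nucleons, because the electron's small mass limits the momentum kick $\delta p$ relative to the degenerate Fermi sea.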
], [ "Summary", "We have revisited the formalism for capture of dark matter in celestial objects and, upon making improvements to the same, obtained constraints on the dark matter interactions with SM particles.", "One of the key improvements we have made is a careful consideration of the energy loss in each collision, which we relate to the differential cross section.", "Further, we have generalized the formalism to include arbitrary energy loss distributions, in contrast to the uniform distribution based on the assumption of a heavy mediator.", "We then computed the capture probability after $N$ scatterings exactly, which leads to well-behaved results at low dark matter velocities.", "By studying the analytical results, we were able to interpret the calculation more physically, which provides a clearer picture of the importance of multiple scatterings.", "As a concrete improvement, we calculated the dark luminosity of white dwarfs in the M4 globular cluster arising only from the annihilation of captured dark matter.", "With electrons as targets, we found that even with the largest allowed cross section, i.e., $\sigma _{\rm sat}$ , the dark luminosity is well below the observed luminosities.", "Thus, in order to obtain a constraint, one would need to either model these stars accurately and estimate the non-dark contribution, or find much colder white dwarfs.", "The main source of this suppression in $L_{\rm DM}$ in the case of electrons is the efficiency factor due to Pauli blocking.", "More encouragingly, with carbon nuclei as targets, this suppression is absent.", "We were thus able to place a constraint on the DM-proton (or equivalently DM-nucleon) cross section $\sigma _p$ that is stronger than direct searches.", "The improvement occurs mostly in the light (up to 7 orders of magnitude) and heavy dark matter regions ($\sim 1$ order of magnitude).", "As a bonus, we found that our constraints can be extended to lower dark matter masses 
($\\lesssim 350$ MeV), where there are no existing bounds from terrestrial direct detection experiments.", "These bounds at lower masses are much stronger than the recently reported constraints on very light dark matter due to their interactions with cosmic rays [55], [56], though with very complementary systematics.", "As caveats, we must note that the constraint is strongly dependent on the capture of low-velocity dark matter particles and thus subject to the uncertainties in the velocity distribution of dark matter in the M4 cluster.", "Microstructure in the dark matter density and velocity, e.g., due to possible dark matter streams or disks, might affect these constraints strongly." ], [ "Acknowledgements", "We thank John Beacom, Sudip Bhattacharyya, Francesco Capozzi, Sudip Chakraborty, Sourav Chatterjee, Anirban Das, Subhajit Ghosh, and Georg Raffelt for many useful suggestions and discussions.", "The work of B.D.", "is partially supported by the Dept.", "of Science and Technology of the Govt.", "of India through a Ramanujan Fellowship and by the Max-Planck-Gesellschaft through a Max-Planck-Partnergroup." 
], [ "Kinematics and energy loss in one or more collisions", "The kinematics of single elastic scattering dictate that the fractional energy loss $\\Delta E/E$ is restricted in the range $0 \\le \\frac{\\Delta E}{E} \\le \\beta \\,,$ where $\\beta = \\dfrac{4 M_{DM} M_t}{(M_{DM}+M_t)^2}$ is the maximal energy loss fraction that itself is $\\le 1$ .", "On the other hand, scattering to velocity $ v_{\\rm esc} $ or less requires a minimum energy loss $\\frac{\\Delta E}{E} \\ge \\frac{w^2-v_{\\rm esc}^2}{w^2} =\\frac{u^2}{u^2+v_{\\rm esc}^2}\\,.$ Eq.", "(REF ) can be rewritten as $\\Delta E=\\beta E \\cos ^2\\theta _{\\rm recoil},$ where the recoil angle $\\theta _{\\rm recoil}$ is related to the scattering angle in CM frame $\\theta _{\\rm CM}$ by $\\theta _{\\rm recoil}=\\frac{\\pi }{2} -\\frac{\\theta _{\\rm CM}}{2}\\,.$ We define the collision parameter $z=\\cos ^2\\theta _{\\rm recoil}$ which takes values in the range $[0,1] $ .", "If we denote the velocity after collision by $v_f$ , then, from the kinematics described above, we get $v_f= (1-z \\,\\beta )^{1/2}\\,\\left(u^2+v_{\\rm esc}^2\\right)^{1/2}\\,.$ A simple extension of this result leads us to the expression of $v_N$ , the velocity after $N$ collisions.", "It is given by $v_N= \\prod _{i=1}^{N} (1-z_i \\beta )^{1/2}\\left(u^2+v_{\\rm esc}^2\\right)^{1/2}$ , with $z_i$ being the collision parameter for the $i^{\\rm th}$ scattering.", "The distribution of $z$ is determined by the distribution of $\\theta _{\\rm CM}$ , which in turn is dictated by the differential cross section ${d\\sigma }/{d\\Omega }$ of the relevant scattering process, $s(z)=\\dfrac{1}{\\sigma }\\dfrac{d\\sigma }{d\\Omega }\\,,$ where $\\Omega $ is the solid angle with $d\\Omega = \\sin \\theta \\, d\\theta \\, d\\phi $ .", "As an example, consider a fermionic dark matter with mass $M_{\\rm DM}$ whose interaction is mediated by a vector or a scalar of mass $M_{\\rm med}$ .", "In the non-relativistic perturbative limit, the Born 
differential cross section of dark matter self interaction is given by $\dfrac{d\sigma }{d\Omega _{\rm CM}} = \dfrac{\alpha _D^2 M_{\rm DM}^2}{\left(M_{\rm DM}^2 v_{\rm rel}^2 \sin ^2(\theta _{\rm CM}/2) + M_{\rm med}^2\right)^2}\,,$ where $\alpha _D$ is the interaction strength.", "When the mediator is much heavier than the dark matter, the differential cross section is approximately a constant with respect to the scattering angle.", "In such scenarios, $s(z)$ , i.e., the distribution of $z$ , is uniform.", "In the opposite limit of a very light mediator, where ${d\sigma }/{d\Omega _{\rm CM}} \sim {1}/{\sin ^4(\theta _{\rm CM}/2)}$ , the assumption of a uniform distribution is a poor approximation.", "In this case, the distribution of $\cos ^2\theta _{\rm recoil} \equiv z $ goes as $1/z^2$ .", "If the distribution of energy loss is uniform, as is the case for a massive mediator, then using Eq.", "(REF ) and Eq.", "(REF ) the probability for the dark matter particle to scatter to a velocity $v_{\rm esc}$ or less turns out to be $g_1(u)= \frac{1}{\beta } \left(\beta -\frac{u^2}{u^2+v_{\rm esc}^2} \right) \Theta \left(\beta -\frac{u^2}{u^2+v_{\rm esc}^2} \right) \,.$ The $\Theta $ function ensures the positivity of this probability and sets an upper limit on the halo velocity $u$ .", "This is understandable because a dark matter particle with an arbitrarily large halo velocity cannot lose enough energy to get captured after a single collision.", "The remainder of the expression has a simple interpretation: it is the range of energy loss that leads to a successful capture, divided by the range of possible energy loss.", "For a uniform distribution of the energy loss, this ratio is the probability of sufficient energy loss for capture.", "Eq.", "(REF ) can also be looked upon as a special case of the more general expression for $g_{\rm N}(u)$ presented in Eq.", "(REF ), i.e., $g_1(u)=\int 
_{0}^{1}\,dz\,\Theta \left(v_{\rm esc}-(1-z\beta )^{1/2}(u^2+v_{\rm esc}^2)^{1/2}\right)\, .$ This, when integrated, yields Eq.", "(REF ) as expected.", "Furthermore, if we use $N=1$ in the general expression for the capture rate $C_N$ as given in Eq.", "(REF ), and use the fact that $p_1(\tau ) \sim \, 2\tau /3$ for $y\,\tau \ll 1$ along with the definition of the optical depth $\tau $ , we find that $\pi R^2 p_1(\tau ) \rightarrow \sigma {\cal N}_t$ , where ${\cal N}_t$ is the total number of targets present within the celestial body.", "Eq.", "(REF ) thus reduces to $C_1 = \sigma \,{\cal N}_t \int \frac{f(u)du}{u}\,(u^2+v_{\rm esc}^2) \,g_1(w)\,.$ Therefore, we recover the familiar result for single-scatter capture as a limiting case of the general framework of capture through multiple scatterings, as presented here." ] ]
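The single-scatter probability derived in this appendix, $g_1(u)= \frac{1}{\beta }\left(\beta -\frac{u^2}{u^2+v_{\rm esc}^2}\right)$ for a uniform collision parameter $z\in [0,1]$, can be verified by direct Monte Carlo sampling. This is a hedged numerical check, with velocities in arbitrary common units.

```python
import math
import random

def g1_analytic(u, v_esc, beta):
    """Closed-form capture probability for one scattering (uniform z)."""
    x = beta - u**2 / (u**2 + v_esc**2)
    return max(0.0, x / beta)

def g1_mc(u, v_esc, beta, n=200_000, seed=1):
    """Monte Carlo estimate: sample z uniformly and count events with
    v_f^2 = (1 - z*beta) * (u^2 + v_esc^2) <= v_esc^2."""
    rng = random.Random(seed)
    w2 = u**2 + v_esc**2            # squared velocity at the surface
    captured = 0
    for _ in range(n):
        z = rng.random()            # uniform z <=> flat energy-loss distribution
        if (1.0 - z * beta) * w2 <= v_esc**2:
            captured += 1
    return captured / n

u, v_esc, beta = 100.0, 600.0, 0.5   # illustrative numbers
print(g1_analytic(u, v_esc, beta), g1_mc(u, v_esc, beta))
```

The Theta-function cutoff also appears automatically: for $u$ large enough that $u^2/(u^2+v_{\rm esc}^2) > \beta$, no sampled $z$ can satisfy the capture condition and both estimates vanish.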
1906.04204
[ [ "A Novel Discrete Theory of a Screw Dislocation in the BCC Crystal Lattice" ], [ "Abstract In this paper, we propose a novel method using elementary number theory to investigate the discrete nature of the screw dislocations in crystal lattices, the simple cubic (SC) lattice and the body centered cubic (BCC) lattice, by developing the algebraic description of the dislocations in the previous report (Hamada, Matsutani, Nakagawa, Saeki, Uesaka, Pacific J. Math.~for Industry {\bf{10}} (2018), 3).", "Using the method, we show that the stress energies of the screw dislocations in the BCC lattice and the SC lattice are naturally described; the energy of the BCC lattice is expressed by the truncated Epstein-Hurwitz zeta function of the Eisenstein integers, whereas that of the SC lattice is associated with the truncated Epstein-Hurwitz zeta function of the Gauss integers." ], [ "Introduction", "Since the dislocations in crystal lattices affect the properties of materials, i.e., elasticity, plasticity and fracture, the screw dislocations have been studied from several viewpoints [1], [17], [32].", "Recently the progress of technology in the material industry, especially the steel industry, requires much higher specifications of material properties than in the decades before, and requires the production processes of materials to be controlled much more tightly from various viewpoints.", "This implies that in the next couple of decades, it will be necessary to control the dislocations much more precisely than at the current quality.", "The rapid development of technology also influences experimental equipment, and thus recently we can directly observe micro-scopic and meso-scopic features of materials even at the crystal scale [19], [38], [39]; the observation scheme could meet such expectations.", "However, there do not exist proper tools to represent such discrete phenomena in complex systems; the tools must be mathematical tools, which might be quite different from the current 
approaches.", "In order to prepare for the drastic change in material science from the viewpoint of mathematical science, we have held a series of conferences over these five years in which mathematicians and material scientists, including researchers in the steel industry, have discussed how to provide novel mathematical tools for the next generation of material science (see Acknowledgments); we provided a novel tool to describe the discrete nature of dislocations in terms of algebraic language in the previous paper [16].", "In this paper, we develop the previous result to describe the symmetry in the discrete nature of the screw dislocations in terms of elementary number theory.", "Using elementary number theory, we focus on the expression of the difference between the screw dislocations in the simple cubic (SC) lattices and the body centered cubic (BCC) lattices.", "Though some of them can be represented by other methods, the number theoretic approach turns out to be a good and natural tool for the description of these discrete systems, and will be a basic tool for investigating much more complex systems.", "Though the origin of the screw dislocations is the discrete nature of crystals, the dislocations have been studied in the continuum picture because 1) there was no proper method to describe their discrete structure and 2) the continuum picture is appropriate for the macro-scale behavior of the dislocations.", "In the geometrical description of the dislocations as a continuum picture [31], [23], [25], [29], which Kondo and Amari started to investigate [28], [4], [5], the global behavior of the dislocations is expressed well.", "Even in the continuum picture of the dislocations, including phenomenological models, there are many crucial mathematical problems which are relevant to material science, e.g., [11], [13], [33], [35].", "However, as mentioned above, we cannot avoid understanding the micro- and meso-scopic features of dislocations, and in order to understand them, mathematics also 
plays important roles.", "Since the positions of atoms in the crystal at the micro-scopic scale fluctuate, the micro-scopic properties of the dislocation have been investigated by means of molecular dynamics or molecular mechanics at the classical level and at the level of first principles, e.g., [10], [15], [21].", "It is a crucial problem in mathematics how we introduce links to consider the topological properties for given positions of atoms in our euclidean space.", "We are concerned with the properties at the meso-scopic scale, which cannot be represented by the continuum picture, nor by molecular mechanics or first-principles approaches.", "One of our purposes in this paper is to investigate the dependence of the dislocations on the type of crystal mathematically.", "Recently Ponsiglione [34] and Alicandro, Cicalese and Ponsiglione [3] investigated the behavior of dislocations at the meso-scopic scale in the framework of $\Gamma $ -convergence.", "Hudson and Ortner [18] and Braun, Buze, and Ortner [8] considered the discrete picture of dislocations.", "Ariza and Ortiz [6], Ramasubramaniam, Ariza and Ortiz [2], and Ariza, Tellechea, Menguiano and Ortiz [7] studied the discrete nature of the dislocations in terms of modern mathematics, i.e., homology theory, graph theory, group theory and so on.", "Especially Ariza and Ortiz [6] and Hudson and Ortner [18] provided geometrical methods to reveal the discrete nature of dislocations and studied the core energy of the dislocations of the BCC lattice.", "We recall that the crystal lattices have high symmetries such as translational and rotational symmetries governed by the crystal groups, which are studied in the framework of crystallography.", "These symmetries are described well in terms of algebraic language and algebraic tools in a wider sense [11], [36].", "Representation of finite groups is representation of their group rings and modules in the module theory.", "The lattice ${\mathbb 
{Z}}^n$ in Euclidean space ${\mathbb {E}}^n$ has been studied in number theory, where it is known as Minkowski arithmetic geometry and is related to quadratic fields and harmonic analysis such as the Epstein zeta function [40].", "The two dimensional lattice, ${\mathbb {Z}}+{\mathbb {Z}}\tau (\subset {\mathbb {C}})$ , ($\tau \in {\mathbb {H}}:=\lbrace x+y{\sqrt{-1}}\in {\mathbb {C}}\ |\ y>0\rbrace $ ), associated with the elliptic curves has been studied extensively in the theory of modular forms [26], [20].", "The action of $\mathrm {SL}(2,{\mathbb {Z}})$ and its subgroups on the lattice shows the symmetry of the lattice ${\mathbb {Z}}+{\mathbb {Z}}\tau $ .", "When $\tau ={\sqrt{-1}}$ and $\tau =\omega _6$ (or $\omega _3$ ) for $\omega _p=\mathrm {e}^{2\pi {\sqrt{-1}}/p}$ , they are known as the Gauss integers and the Eisenstein integers respectively [20], [41].", "They have been studied well in the framework of algebra and algebraic number theory.", "It is emphasized that crystal lattices, even with defects and their interfaces, still have high symmetries.", "These should be regarded as a kind of symmetry breaking of the group [42].", "It means that they are not stable for the crystal group in general but are stable for its subgroup, at least approximately, and should be described by algebraic theory in cooperation with analytic and geometric theories.", "The interfaces of two crystal lattices are described well by the quadratic fields in elementary number theory [19], [24].", "Thus even for the dislocations, we should express their symmetry properly.", "In the previous report [16] with Hamada, Nakagawa, Saeki and Uesaka, we focused on the fiber structure of the screw dislocations as an essential feature of the screw dislocations.", "The bundle map in the Cartesian square realizes the screw dislocations in the SC and the BCC lattices induced from the continuum picture.", "The fiber structure shows the translational symmetry of the 
fiber direction, which is the surviving symmetry in these crystal lattices even if the screw dislocation exists.", "On the other hand, in the direction vertical to the fiber (the direction of the Burgers vector), there are other symmetries which are induced from the crystal group for the perfect crystals, i.e., the two-dimensional crystal lattices.", "Though we did not discuss the analytic properties in [16], when we consider the minimal point of the configuration of the atoms, their initial configuration should be indexed by natural indices reflecting the symmetry of the dislocation.", "In this paper, we extend the method in the previous report to express the difference between the screw dislocations in the SC and the BCC lattices algebraically.", "We propose a novel method to investigate the algebraic nature of the screw dislocation in crystal lattices, the SC and the BCC lattices, using elementary number theory; the Gauss integers ${\mathbb {Z}}[{\sqrt{-1}}]$ and the Eisenstein integers ${\mathbb {Z}}[\omega _3]={\mathbb {Z}}[\omega _6]$ correspond to the vertical two-dimensional lattices for the screw dislocations of the SC and the BCC lattices.", "Our method provides natural indices for the configurations of atoms, which must be useful even when we consider their analytic properties.", "For example, as in Remarks REF and REF , and Lemmas REF and REF , the ring of integers ${\mathbb {Z}}[\tau ]$ of the cyclotomic field ${\mathbb {Q}}[\tau ]$ shows the algebraic properties of these lattices.", "Especially, we investigate the symmetry of the two-dimensional crystal lattice in terms of ${\mathbb {Z}}[\tau ]$ to show the critical relations between the energy of the dislocations and the Epstein-Hurwitz zeta functions, in the SC and the BCC lattices, as we show in Theorems REF and REF .", "The number theoretic approach shows the symmetry of these systems well.", "This paper is organized as follows.", "Sections  and  review the previous report [16].", "In Section , we 
show the screw dislocation in the continuum picture.", "Section  reviews the results of the SC lattice case in [16] in terms of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]$ .", "In Section , after showing the configuration of the screw dislocation in the BCC lattice in terms of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ following [16], we provide the algebraic expression of the stress energy of the screw dislocation in the BCC lattice, which is our main result in this paper.", "In Section 5, we discuss these results, and in Section 6, we summarize our results." ], [ "Screw Dislocations in Continuum Picture", "In this section, we review the previous report [16] and show the algebraic expression of the screw dislocations in the continuum picture." ], [ "Notations and Conventions", "Since translational symmetry is crucial in physics [42], in this paper we distinguish the Euclidean space ${\\mathbb {E}}$ from the real vector space ${\\mathbb {R}}$ : we regard ${\\mathbb {R}}$ as a vector space, whereas ${\\mathbb {E}}$ is the space consisting of the position vectors with translational symmetry, though both ${\\mathbb {E}}^n$ and ${\\mathbb {R}}^n$ are topological spaces with the ordinary Euclidean topology.", "Similarly, we distinguish the set of complex position vectors, the affine space ${\\mathbb {E}}_{\\mathbb {C}}$ , from the complex vector space ${\\mathbb {C}}$ .", "We basically identify the two-dimensional Euclidean space ${\\mathbb {E}}^2$ with ${\\mathbb {E}}_{\\mathbb {C}}$ , and ${\\mathbb {R}}^2$ with ${\\mathbb {C}}$ .", "The group $U(1)$ naturally acts on the circle $S^1$ .", "${\\mathbb {Z}}$ and ${\\mathbb {Q}}$ denote the sets of the rational integers and the rational numbers respectively.", "For a fiber bundle ${\\mathcal {F}}\\rightarrow {\\mathcal {M}}$ over a base space ${\\mathcal {M}}$ , the set of continuous sections $f:{\\mathcal {M}}\\rightarrow {\\mathcal {F}}$ is denoted by $\\Gamma ({\\mathcal {M}}, {\\mathcal {F}})$ .",
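The section notation $\Gamma ({\mathcal {M}}, {\mathcal {F}})$ above can be made concrete with a small computational sketch (an illustrative model only, not part of the paper's formalism; the helper names `project` and `constant_section` are ours): a trivial $S^1$-bundle over the base is the product of $S^1$ with the base, and a section is a map whose composition with the bundle projection is the identity on the base.

```python
import cmath

# Model the trivial S^1-bundle over the base E_C ~ C as pairs (u, z)
# with |u| = 1, projected onto the base point z.  (Helper names are ours.)
def project(point):
    """Bundle projection pi: S^1 x E_C -> E_C."""
    u, z = point
    return z

def constant_section(gamma):
    """The constant section z -> (gamma, z) for a fixed gamma in U(1)."""
    return lambda z: (gamma, z)

gamma = cmath.exp(2j * cmath.pi * 0.3)   # a sample point of U(1)
f = constant_section(gamma)

for z in (0j, 1 + 2j, -3.5j):
    u, base = f(z)
    assert abs(abs(u) - 1.0) < 1e-12     # the fiber value lies on S^1
    assert project(f(z)) == z            # pi o f = id: f is a section
```

The defining property checked in the last line (projection after section is the identity) is exactly what makes the constant map $\mathfrak {u}_\delta $ introduced later a global section.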
"In this paper, for $\\delta =(\\delta _1, \\delta _2, \\delta _3)\\in {\\mathbb {E}}^3$ , we consider an embedding $\\iota _\\delta $ of the vector space ${\\mathbb {R}}^3={\\mathbb {C}}\\times {\\mathbb {R}}$ into ${\\mathbb {E}}^3={\\mathbb {E}}_{\\mathbb {C}}\\times {\\mathbb {E}}$ by $\\iota _\\delta : {\\mathbb {R}}^3 \\hookrightarrow {\\mathbb {E}}^3,\\quad (x \\mapsto x+\\delta ), \\quad \\mbox{or}\\quad $ $\\iota _\\delta : {\\mathbb {C}}\\times {\\mathbb {R}}\\hookrightarrow {\\mathbb {E}}_{\\mathbb {C}}\\times {\\mathbb {E}},\\quad ((x_1+{\\sqrt{-1}}x_2, x_3) \\mapsto (x_1+{\\sqrt{-1}}x_2+\\delta _{\\mathbb {C}}, x_3+\\delta _3 )),$ where $\\delta _{\\mathbb {C}}:=\\delta _1+{\\sqrt{-1}}\\delta _2$ .", "Further, we employ some conventions listed in the Appendix." ], [ "Exact Sequence and Sequence of Maps", "We consider the exact sequence of groups (see [9]), $@!C=50pt{0 [r] & {\\mathbb {Z}}\\ [r]^-i & {\\mathbb {R}}[r]^-{\\exp 2\\pi {\\sqrt{-1}}} & {\\mathrm {U}}(1) [r]^-{} & 1,}$ which is essential in this paper.", "${\\mathbb {Z}}$ and ${\\mathbb {R}}$ are additive groups, ${\\mathrm {U}}(1)$ is a multiplicative group, $i(n)=n\\in {\\mathbb {R}}$ for $n \\in {\\mathbb {Z}}$ , and $(\\exp 2\\pi {\\sqrt{-1}})(x) = \\exp (2\\pi {\\sqrt{-1}}x)$ for $x \\in {\\mathbb {R}}$ .", "In our description of the screw dislocations, we fix the third axis as the direction of the Burgers vector.", "For $\\delta _3$ in $\\delta =(\\delta _1, \\delta _2, \\delta _3) \\in {\\mathbb {E}}^3$ and a certain positive number $d > 0$ , which is given as $d=a$ in Section  and $d = \\sqrt{3}a/2$ in Section , we define the shifted maps, $& \\widetilde{i}_{d,\\delta }:{\\mathbb {R}}\\rightarrow {\\mathbb {E}}, &(x \\mapsto d\\cdot x +\\delta _3), \\\\& i_{d,\\delta }:{\\mathrm {U}}(1) \\rightarrow S^1, & (\\exp ({{\\sqrt{-1}}\\theta }) \\mapsto \\exp {{\\sqrt{-1}}(\\theta +2\\pi \\delta _3/d)})$ satisfying the commutative diagram, $@!C=50pt{& & {\\mathbb {E}}\\ [r]^{\\psi _d} & S^1\\\\0 
[r] & {\\mathbb {Z}}[r]^-{i} [ru]^-{\\varphi _{\\delta }}& {\\mathbb {R}}[u]_{\\widetilde{i}_{d,\\delta }}[r]^{\\exp 2\\pi {\\sqrt{-1}}} & {\\mathrm {U}}(1) [u]_{i_{d,\\delta }}[r]^-{} & 1, }$ where $\\psi _d(y) = \\exp (2\\pi {\\sqrt{-1}}y/d)$ , $y \\in {\\mathbb {E}}$ , and $\\varphi _{\\delta } = \\widetilde{i}_{d,\\delta }\\circ i$ .", "It means that we have the sequence of maps $@!C{{\\mathbb {Z}}\\ [r]^-{\\varphi _{\\delta }} & {\\mathbb {E}}[r]^-{\\psi _d} & \\ S^1,}$ where $ \\varphi _{\\delta }({\\mathbb {Z}}) =\\psi _d^{-1}(\\exp (2\\pi {\\sqrt{-1}}\\delta _3/d)).$" ], [ "Fiber Structures of Crystals in Continuum Picture", "Let us consider some trivial bundles over ${\\mathbb {E}}_{\\mathbb {C}}$ ; ${\\mathbb {Z}}$ -bundle $\\pi _{{\\mathbb {Z}}} : {\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}} \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}$ , ${\\mathbb {E}}$ -bundle $\\pi _{{\\mathbb {E}}} : {\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}} \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}$ and $S^1$ -bundle $\\pi _{S^1} : S^1_{{\\mathbb {E}}_{\\mathbb {C}}} \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}$ .", "The sequence of maps (REF ) induces the sequence of bundle maps $\\widehat{\\varphi }_{\\delta }$ and $\\widehat{\\psi }_d$ , $@!C{{\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}} [r]^-{\\widehat{\\varphi }_{\\delta }} &{\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}} [r]^-{\\widehat{\\psi }_d} & S^1_{{\\mathbb {E}}_{\\mathbb {C}}}}.$ It is obvious that ${\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}}$ is identified with our three-dimensional euclidean space ${\\mathbb {E}}^3 = {\\mathbb {E}}\\times {\\mathbb {E}}_{\\mathbb {C}}$ whereas ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}} = {\\mathbb {Z}}\\times {\\mathbb {E}}_{\\mathbb {C}}$ is a covering space of ${\\mathbb {E}}_{\\mathbb {C}}$ .", "${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}}$ expresses the geometrical objects which consist of parallel sheets over ${\\mathbb {E}}_{\\mathbb {C}}$ .", "Let $\\delta 
=(\\delta _{\\mathbb {C}}=\\delta _1+{\\sqrt{-1}}\\delta _2, \\delta _3)\\in {\\mathbb {E}}^3$ .", "We consider the embedding $\\iota _\\delta : {\\mathbb {C}}\\hookrightarrow {\\mathbb {E}}_{\\mathbb {C}}, \\quad (x+{\\sqrt{-1}}y \\mapsto x+{\\sqrt{-1}}y+\\delta _{\\mathbb {C}}).$ This $\\iota _\\delta $ plays important roles in the following sections when we restrict the domain ${\\mathbb {C}}$ to its discrete subsets ${\\mathbb {Z}}[{\\sqrt{-1}}]$ and ${\\mathbb {Z}}[\\omega _6]$ , and thus we set $z = x+{\\sqrt{-1}}y+\\delta _{\\mathbb {C}}\\in {\\mathbb {E}}_{\\mathbb {C}}$ .", "Let us consider the image of the bundle map $\\widehat{\\varphi }_{\\delta }$ .", "For $\\gamma _\\delta =\\exp (2\\pi {\\sqrt{-1}}\\delta _3/d) \\in S^1$ , we define the global constant section $\\mathfrak {u}_\\delta \\in \\Gamma ({\\mathbb {E}}_{\\mathbb {C}}, S^1_{{\\mathbb {E}}_{\\mathbb {C}}})$ of $S^1_{{\\mathbb {E}}_{\\mathbb {C}}}$ by $\\mathfrak {u}_\\delta (z) =\\gamma _\\delta \\in S^1_{{\\mathbb {E}}_{\\mathbb {C}}}|_z =S^1 \\times {\\mathbb {E}}_{\\mathbb {C}}|_z,$ for $z \\in {\\mathbb {E}}_{\\mathbb {C}}$ .", "The following lemma is naturally obtained: Lemma 2.1 For $\\gamma _\\delta = \\exp (2\\pi {\\sqrt{-1}}\\delta _3/d)$ , we have ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}, \\delta }=\\widehat{\\varphi }_{\\delta }({\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}}),$ where ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}, \\delta }:={\\widehat{\\psi }_d^{-1}}\\left(\\mathfrak {u}_{\\delta }({\\mathbb {E}}_{\\mathbb {C}})\\right) \\subset {\\mathbb {E}}^3={\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}}.$ Here we note that $\\widehat{\\varphi }_{\\delta } ({\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}}) $ is the system consisting of parallel equi-interval sheets realized in the three-dimensional Euclidean space ${\\mathbb {E}}^3={\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}}$ ."
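The content of Lemma 2.1 — that $\varphi _{\delta }({\mathbb {Z}})$ is precisely the $\psi _d$-preimage of the single fiber point $\gamma _\delta $ — can be checked numerically. The sketch below uses arbitrary sample values for $d$ and $\delta _3$ (our choices for illustration only):

```python
import cmath

d = 2.0        # sample interlayer distance (d = a for the SC lattice)
delta3 = 0.7   # sample third component of delta

def phi_delta(n):
    """phi_delta = i_{d,delta} o i : Z -> E, n -> d*n + delta_3."""
    return d * n + delta3

def psi_d(y):
    """psi_d : E -> S^1, y -> exp(2*pi*sqrt(-1)*y/d)."""
    return cmath.exp(2j * cmath.pi * y / d)

gamma_delta = cmath.exp(2j * cmath.pi * delta3 / d)

# Every integer n is mapped into the same psi_d-fiber over gamma_delta,
# i.e. phi_delta(Z) is the stack of parallel, equi-interval sheets
# x_3 = n*d + delta_3 described after Lemma 2.1.
for n in range(-5, 6):
    assert abs(psi_d(phi_delta(n)) - gamma_delta) < 1e-12
```

The loop verifies the commutativity of the diagram above: shifting by one integer moves up one sheet while leaving the $S^1$-value unchanged.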
], [ "Single Screw Dislocation in Continuum Picture", "For $z_0 \\in {\\mathbb {E}}_{\\mathbb {C}}$ , let us consider the non-trivial bundles ${\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace }$ and $S^1_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace }$ over ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ .", "In other words, we consider the section $\\mathfrak {u}_{z_0, \\delta } \\in \\Gamma ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , S^1_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace })$ defined by $\\mathfrak {u}_{z_0, \\delta }(z) = \\gamma _\\delta \\frac{z-z_0}{|z-z_0|}\\mbox{ for } z \\in {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace ,$ and a natural universal covering of ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta }:={\\widehat{\\psi }_d^{-1}}\\left(\\mathfrak {u}_{z_0, \\delta }({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace )\\right) \\subset {\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace } \\subset {\\mathbb {E}}^3$ by letting the restriction $\\pi _{z_0, \\delta }=\\pi _{{\\mathbb {E}}}|_{{\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta }}$ , i.e., $\\pi _{z_0, \\delta }: {\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta } \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ .", "In this paper, we call this covering a screw dislocation in a continuum picture which is realized as a subset of ${\\mathbb {E}}^3$ following [1], [17], [32]; In these textbooks [1], [17], [32], ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta }$ is given by geometrical consideration as a screw dislocation, which is mentioned in Remark REF , whereas it should be noted that our 
construction of ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta }$ is purely algebraic.", "As in [16], it is not difficult to extend this expression of the single screw dislocation to that of multiple screw dislocations.", "Remark 2.2 For the simply connected neighborhood $U_p \\subset {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ of a point $p$ of ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , $\\pi _{{\\mathbb {E}}}^{-1}U_p \\cong {\\mathbb {Z}}\\times U_p,$ as a covering space of $U_p$ .", "Remark 2.3 ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , \\delta }$ can be obtained by the following operation on the trivial covering ${\\mathbb {Z}}\\times ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace )$ with the embedding $\\iota _{{\\mathbb {E}}}:{\\mathbb {Z}}\\times ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace )\\hookrightarrow {\\mathbb {E}}^3$ , such that $\\pi _{{\\mathbb {E}}}: \\iota _{{\\mathbb {E}}}({\\mathbb {Z}}\\times ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace )) \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ .", "We regard it as the set of sheets indexed by the integers $n$ .", "The third position component of the $n$ -th sheet is given by $n d+\\delta _3$ .", "Let us consider a half line $L :=\\lbrace x+{\\sqrt{-1}}y_0 \\ | \\ x \\ge x_0\\rbrace $ for $z_0 = x_0 +{\\sqrt{-1}}y_0$ and ${\\mathbb {E}}_{\\mathbb {C}}\\setminus L$ as a simply connected open set of ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ .", "First we cut $\\iota _{{\\mathbb {E}}}({\\mathbb {Z}}\\times ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace ))$ at the inverse image $\\pi _{{\\mathbb {E}}}^{-1}(L) \\subset {\\mathbb {E}}^3$ .", "In other words, we consider $\\pi _{{\\mathbb {E}}}^{-1}({\\mathbb {E}}_{\\mathbb {C}}\\setminus L)$ noting Remark REF .", "We deform the $n$ 
-th sheet in ${\\mathbb {E}}^3$ such that the third component is given by $n d+\\delta _3 +\\displaystyle {\\frac{d}{2\\pi } \\mathrm {arg}\\frac{z-z_0}{|z-z_0|}}$ .", "After that, we connect the $n$ -th sheet to the $(n+1)$ -th sheet at the place $\\pi _{{\\mathbb {E}}}^{-1}(L)$ .", "Then we obtain ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace ,\\delta }$ in (REF ).", "This means that we have constructed ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace ,\\delta }$ as a discontinuous deformation of ${\\mathbb {Z}}_{{\\mathbb {E}}_{\\mathbb {C}},\\delta }$ , which is the standard geometrical description of the dislocation [1], [17], [32]." ], [ "Screw Dislocation in Simple Cubic Lattice", "In this section, we show the algebraic description of the screw dislocation in the SC lattice and its stress energy in terms of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]\\subset {\\mathbb {C}}$ (see Appendix)." ], [ "SC Lattice as Covering Space of ${\\mathbb {Z}}[{\\sqrt{-1}}]$", "For the SC lattice in the three-dimensional Euclidean space ${\\mathbb {E}}^3$ , ${\\mathbb {Z}}_{{\\mathrm {SC}},\\delta }:=\\lbrace (\\ell _1a , \\ell _2a, \\ell _3 a) +\\delta \\ | \\ \\ell _1, \\ell _2, \\ell _3 \\in {\\mathbb {Z}}\\rbrace ,$ where $\\delta = (\\delta _1, \\delta _2, \\delta _3) \\in {\\mathbb {E}}^3$ , and $a$ is the lattice length $(a>0)$ , we find its fiber structure as in the previous section.", "Let ${\\mathcal {Z}}_{\\mathrm {SC}}:=\\lbrace n_1 a + n_2 a{\\sqrt{-1}}\\ |\\ n_1, n_2 \\in {\\mathbb {Z}}\\rbrace \\subset {\\mathbb {C}},$ which can be expressed by the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}] = {\\mathbb {Z}}+{\\mathbb {Z}}{\\sqrt{-1}}$ , ${\\mathcal {Z}}_{\\mathrm {SC}}={\\mathbb {Z}}[{\\sqrt{-1}}]a\\subset {\\mathbb {C}}.$ For $\\delta = (\\delta _1, \\delta _2, \\delta _3) \\in {\\mathbb {E}}^3$ , we define the embedding, $\\iota ^{\\mathrm {SC}}_{\\delta }: {\\mathcal {Z}}_{\\mathrm 
{SC}}\\rightarrow {\\mathcal {Z}}_{\\mathrm {SC}}+ \\delta _{\\mathbb {C}}\\subset {\\mathbb {E}}_{\\mathbb {C}},$ where $\\delta _{\\mathbb {C}}= (\\delta _1 + \\delta _2 {\\sqrt{-1}}) \\in {\\mathbb {E}}_{\\mathbb {C}}$ .", "The embedding $\\iota ^{\\mathrm {SC}}_{\\delta }$ induces the bundle map $\\widehat{\\iota }^{\\mathrm {SC}}_{\\delta }$ .", "Using $\\mathfrak {u}_{\\delta }(z)$ in Lemma REF with $\\gamma _\\delta := \\exp (2\\pi {\\sqrt{-1}}\\delta _3/a)$ for the position $\\delta \\in {\\mathbb {E}}^3$ , we reconstruct the SC lattice ${\\mathbb {Z}}_{{\\mathrm {SC}},\\delta }$ by ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},\\delta }={\\widehat{\\psi }_a^{-1}}\\left(\\mathfrak {u}_{\\delta }(\\iota ^{\\mathrm {SC}}_{\\delta }({\\mathcal {Z}}_{\\mathrm {SC}}))\\right),$ which is realized in ${\\mathbb {E}}^3$ , ${\\mathbb {Z}}_{{\\mathrm {SC}},\\delta }= {\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},\\delta } \\subset {\\mathbb {E}}^3$ .", "Here we set $d=a$ in $\\psi _d$ in the previous section.", "Remark 3.1 Corresponding to Lemmas REF and REF , and Remark REF for the BCC lattice case, we have the formula in ${\\mathbb {Z}}[{\\sqrt{-1}}]$ , $\\sum _{\\ell =0}^3 ({\\sqrt{-1}})^\\ell = 0,$ which is known as the cyclotomic symmetry of ${\\mathbb {Z}}[{\\sqrt{-1}}]$ or the action of the cyclic group ${\\mathfrak {C}}_4$ of order 4 on ${\\mathbb {Z}}[{\\sqrt{-1}}]$ .", "This relation makes the formula (REF ) simple and connects it with the Epstein-Hurwitz zeta function as in Theorem REF ."
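The cyclotomic symmetry of Remark 3.1 is easy to verify computationally; the short check below (an illustrative sketch, with `gauss` as our own helper name) confirms both the relation $\sum _{\ell =0}^3 ({\sqrt{-1}})^\ell = 0$ and the fact that the ${\mathfrak {C}}_4$-action $z \mapsto {\sqrt{-1}}\, z$ maps the Gauss integers ${\mathbb {Z}}[{\sqrt{-1}}]$ to themselves.

```python
I = complex(0, 1)   # sqrt(-1)

# The cyclotomic relation: sum_{l=0}^{3} i^l = 1 + i - 1 - i = 0.
assert abs(sum(I**l for l in range(4))) < 1e-12

def gauss(n1, n2):
    """Our helper for the Gauss integer n1 + n2*sqrt(-1)."""
    return complex(n1, n2)

# The C_4 action z -> i*z permutes Z[i]: i*(n1 + n2*i) = -n2 + n1*i
# is again a Gauss integer.
for n1 in range(-3, 4):
    for n2 in range(-3, 4):
        z = I * gauss(n1, n2)
        assert z.real == round(z.real) and z.imag == round(z.imag)
```

This order-4 rotational symmetry is what later collapses the cross terms in the energy density, leaving the Epstein-Hurwitz zeta function as the principal part.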
], [ "Graph related to ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},\\delta }$", "We introduce the infinite graph $G^{\\mathrm {SC}}_\\delta $ whose nodes are given by ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},\\delta } \\cong {\\mathbb {Z}}^3$ .", "We consider the edges among the nodes in $G^{\\mathrm {SC}}_\\delta $ .", "As $G^{\\mathrm {SC}}_\\delta $ is parameterized by ${\\mathbb {Z}}^3$ , we consider the edges $\\begin{split}&[(n_1, n_2, n_3), (n_1\\pm 1, n_2, n_3)],[(n_1, n_2, n_3), (n_1, n_2\\pm 1, n_3)],[(n_1, n_2, n_3), (n_1, n_2, n_3\\pm 1)],\\\\&[(n_1, n_2, n_3), (n_1, n_2\\pm 1, n_3\\pm 1)],[(n_1, n_2, n_3), (n_1\\pm 1, n_2, n_3\\pm 1)],\\\\&[(n_1, n_2, n_3), (n_1\\pm 1, n_2\\pm 1, n_3)]\\end{split}$ for every point $(n_1, n_2, n_3)\\in {\\mathbb {Z}}^3\\cong {\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},\\delta }$ .", "The first and the second components correspond to the horizontal directions whereas the third one does to the vertical direction." ], [ "Dislocation in SC Lattice as Covering Space of ${\\mathbb {Z}}[{\\sqrt{-1}}]$", "A screw dislocation in the simple cubic lattice appears along the $(0,0,1)$ -direction [32] up to automorphisms of the SC lattice.", "The Burgers vector is parallel to the $(0, 0, 1)$ -direction.", "Using the fibering structure of ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , we can describe a single screw dislocation in the SC lattice as in [16].", "For $\\delta =(\\delta _1, \\delta _2, \\delta _3) \\in {\\mathbb {E}}^3$ , we also let $\\gamma _{\\delta }=\\exp (2\\pi {\\sqrt{-1}}\\delta _3/a)\\in S^1$ and $\\delta _{\\mathbb {C}}= (\\delta _1 + \\delta _2 {\\sqrt{-1}})$ .", "Using (REF ), let us define the section $\\mathfrak {u}_{z_0, \\delta }^{\\mathrm {SC}}\\in \\Gamma ({\\mathcal {Z}}_{\\mathrm {SC}}, S^1_{{\\mathcal {Z}}_{\\mathrm {SC}}})$ by $\\mathfrak {u}_{z_0, \\delta }^{\\mathrm {SC}}:=\\iota ^{{\\mathrm {SC}}*}_{\\delta _{\\mathbb {C}}}\\mathfrak {u}_{z_0, \\delta 
}=\\mathfrak {u}_{z_0, \\delta }\\circ \\iota ^{{\\mathrm {SC}}}_{\\delta _{\\mathbb {C}}},$ $\\mathfrak {u}_{z_0,\\delta }^{\\mathrm {SC}}(n a)= \\left(\\gamma _\\delta \\frac{na + \\delta _{\\mathbb {C}}- z_0}{|na + \\delta _{\\mathbb {C}}- z_0|} \\right),\\quad na \\in {\\mathcal {Z}}_{\\mathrm {SC}}={\\mathbb {Z}}[{\\sqrt{-1}}]a.$ Using this $\\mathfrak {u}_{z_0,\\delta }^{\\mathrm {SC}}$ , we define its screw dislocation in the SC lattice, which is realized in ${\\mathbb {E}}^3$ : Proposition 3.2 For a point $z_0\\in {\\mathbb {E}}_{\\mathbb {C}}$ and $\\delta = (\\delta _1, \\delta _2, \\delta _3)\\in {\\mathbb {E}}^3$ such that the image of the embedding $\\iota ^{\\mathrm {SC}}_{\\delta }:{\\mathcal {Z}}_{\\mathrm {SC}}\\rightarrow {\\mathcal {Z}}_{\\mathrm {SC}}+ \\delta _{\\mathbb {C}}$ is a subset of ${\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , $\\iota ^{\\mathrm {SC}}_{\\delta }({\\mathcal {Z}}_{\\mathrm {SC}}) \\subset {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , the screw dislocation around $z_0$ given by, ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}}, z_0,\\delta }^{\\mathrm {SC}}:= \\left( \\widehat{\\psi }_a^{-1}(\\mathfrak {u}_{z_0, \\delta }^{\\mathrm {SC}}({\\mathcal {Z}}_{\\mathrm {SC}}))\\right) =\\left( \\frac{a}{2\\pi {\\sqrt{-1}}} \\exp ^{-1}\\left(\\mathfrak {u}_{z_0, \\delta }^{\\mathrm {SC}}({\\mathcal {Z}}_{\\mathrm {SC}})\\right) \\right),$ is realized in ${\\mathbb {E}}^3$ , where $\\gamma _\\delta =\\exp (2\\pi {\\sqrt{-1}}\\delta _3/a)$ and $\\delta _{\\mathbb {C}}= (\\delta _1 + \\delta _2 {\\sqrt{-1}})$ .", "It is worth while noting that ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}}, z_0,\\delta }^{\\mathrm {SC}}$ can be regarded as a `covering space' of the lattice ${\\mathcal {Z}}_{\\mathrm {SC}}$ and thus there is a natural projection, $\\pi _{{\\mathcal {Z}}_{\\mathrm {SC}}}: {\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}}, z_0,\\delta }^{\\mathrm {SC}}\\rightarrow {\\mathcal 
{Z}}_{\\mathrm {SC}}.$ Here each fiber is ${\\mathbb {Z}}= \\pi _{{\\mathcal {Z}}_{\\mathrm {SC}}}^{-1}(\\ell )$ for every $\\ell \\in {\\mathcal {Z}}_{\\mathrm {SC}}$ ." ], [ "Graph of Screw Dislocation in SC Lattice ", "We basically consider the local structure of ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},z_0,\\delta }^{\\mathrm {SC}}$ , i.e., ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},z_0,\\delta }^{\\mathrm {SC}}\\bigcap \\pi _{{\\mathcal {Z}}_{\\mathrm {SC}}}^{-1}U_{\\iota _{\\delta _{\\mathbb {C}}}(\\ell a)}$ for a simply connected neighborhood $U_{\\iota _{\\delta _{\\mathbb {C}}}(\\ell a)}$ of $\\iota _{\\delta _{\\mathbb {C}}}(\\ell a) \\in {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ and $\\ell \\in {\\mathbb {Z}}[{\\sqrt{-1}}]$ .", "The $\\pi _{{\\mathcal {Z}}_{\\mathrm {SC}}}^{-1}U_{\\iota _{\\delta _{\\mathbb {C}}}(\\ell a)}$ can be regarded as a “trivial covering” in the sense of Remark REF .", "We can continue to consider the edges as in Subsection REF .", "The horizontal edges in (REF ) can be determined as a set on the same sheet as in Remark REF .", "Thus we can consider the graph $G^{\\mathrm {SC}}_{z_0, \\delta }$ for ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},z_0,\\delta }^{\\mathrm {SC}}$ as a natural extension of $G^{\\mathrm {SC}}_\\delta $ ."
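Proposition 3.2 realizes the dislocated SC lattice as the $\widehat{\psi }_a$-preimage of the section $\mathfrak {u}_{z_0, \delta }^{\mathrm {SC}}$; its characteristic feature is that lifting the $S^1$-values continuously along one counterclockwise loop of lattice points around $z_0$ raises the height by exactly one Burgers vector $a$. A minimal numerical sketch of this monodromy (with $\delta = 0$ and an arbitrarily chosen $z_0$ off the lattice points; variable names are ours):

```python
import cmath
import math

a = 1.0             # lattice length
z0 = 0.25 + 0.25j   # sample dislocation centre, off the lattice points

def u_sc(l):
    """u^SC_{z_0}(l*a) for delta = 0: the S^1-value over the Gauss integer l."""
    w = l * a - z0
    return w / abs(w)

# A counterclockwise loop of Gauss integers around z0.
loop = [1, 1 + 1j, 1j, -1 + 1j, -1, -1 - 1j, -1j, 1 - 1j, 1]

# Lift psi_a^{-1} continuously: accumulate the nearest-sheet phase steps.
height = 0.0
for prev, cur in zip(loop, loop[1:]):
    step = cmath.phase(u_sc(cur) / u_sc(prev))   # in (-pi, pi]
    height += (a / (2 * math.pi)) * step

# The monodromy of the screw dislocation is one Burgers vector a.
assert abs(height - a) < 1e-12
```

Each phase step is small (well under a half turn), so the continuous lift is unambiguous; the accumulated height of one full loop is $a$, which is the covering-space statement behind the projection $\pi _{{\mathcal {Z}}_{\mathrm {SC}}}$.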
], [ "Energy of Screw Dislocation in SC Lattice ", "Let us consider the graphs $G^{\\mathrm {SC}}_{z_0, \\delta }$ and $G^{\\mathrm {SC}}_\\delta $ as the subsets of ${\\mathbb {E}}^3$ .", "Due to the dislocation, the length of each edge in $G^{\\mathrm {SC}}_{z_0, \\delta }$ is different from that in $G^{\\mathrm {SC}}_\\delta $ .", "Since $G^{\\mathrm {SC}}_\\delta $ is stable mechanically, the energy of $G^{\\mathrm {SC}}_{z_0, \\delta }$ is higher than that of $G^{\\mathrm {SC}}_\\delta $ .", "We compute the energy difference following [16], which is called the stress energy of the screw dislocation, or simply the stress energy.", "Further, we basically consider the local structure of ${\\mathbb {Z}}_{{\\mathcal {Z}}_{\\mathrm {SC}},z_0,\\delta }^{\\mathrm {SC}}$ in this section.", "In the following, we also assume that $\\delta = (0, 0, 0)$ and $\\gamma _\\delta = 1$ , and identify ${\\mathcal {Z}}_{\\mathrm {SC}}$ with its image under $\\widehat{\\iota }^{\\mathrm {SC}}_{\\delta }$ for simplicity.", "Further, we denote $\\mathfrak {u}^{\\mathrm {SC}}_{z_0,\\delta }$ etc. by $\\mathfrak {u}^{\\mathrm {SC}}_{z_0}$ etc., suppressing $\\delta $ .", "For $\\ell \\in {\\mathbb {Z}}[{\\sqrt{-1}}]$ , we define the relative height differences $\\varepsilon _{\\ell }^{(1)}$ , $\\varepsilon _{\\ell }^{(2)}$ and $\\varepsilon _{\\ell }^{(\\pm )}$ by $\\begin{array}{rl}\\displaystyle {\\varepsilon _{\\ell }^{(1)}} & \\displaystyle {= \\frac{a}{2\\pi {\\sqrt{-1}}}\\left(\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}((\\ell +1)a))-\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}(\\ell a)) \\right),} \\\\\\displaystyle {\\varepsilon _{\\ell }^{(2)}} & \\displaystyle {= \\frac{a}{2\\pi {\\sqrt{-1}}}\\left(\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}(\\ell a+{\\sqrt{-1}}a))-\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}(\\ell a)) \\right), }\\\\\\displaystyle {\\varepsilon _{\\ell }^{(\\pm )}} & \\displaystyle {= \\frac{a}{2\\pi {\\sqrt{-1}}}\\left(\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}((\\ell +1)a 
\\pm {\\sqrt{-1}}a))-\\log (\\mathfrak {u}^{\\mathrm {SC}}_{z_0}(\\ell a)) \\right), } \\\\\\end{array}$ respectively.", "It is obvious that for this dislocation of the simple cubic lattice, $-a/2 < \\varepsilon _{\\ell }^{(i)} < a/2$ for $i = 1, 2$ and $\\pm $ .", "It is easy to obtain $\\begin{array}{rl}\\displaystyle {\\varepsilon _{\\ell }^{(1)}} & \\displaystyle {= \\frac{a}{4\\pi {\\sqrt{-1}}}\\left(\\log (1 + a/(\\ell a-z_0)) - \\log (1+\\overline{a/(\\ell a-z_0)})\\right) },\\\\\\displaystyle {\\varepsilon _{\\ell }^{(2)}} & \\displaystyle {= \\frac{a}{4\\pi {\\sqrt{-1}}}\\left(\\log (1 + a{\\sqrt{-1}}/(\\ell a-z_0)) - \\log (1+\\overline{a{\\sqrt{-1}}/(\\ell a-z_0)})\\right) },\\\\\\displaystyle {\\varepsilon _{\\ell }^{(\\pm )}} &\\displaystyle {= \\frac{a}{4\\pi {\\sqrt{-1}}}\\left(\\log (1 + a(1\\pm {\\sqrt{-1}})/(\\ell a-z_0))- \\log (1+\\overline{a(1\\pm {\\sqrt{-1}})/(\\ell a-z_0)})\\right)} .\\end{array}$ Here $\\overline{z}$ is the complex conjugate of $z$ .", "The difference $\\Delta $ of the length of each segment from its natural length in $G^{\\mathrm {SC}}_\\delta $ is obtained as follows: for $[(\\ell \\pm 1,\\ell _3)a,(\\ell ,\\ell _3)a]$ and $[(\\ell ,\\ell _3)a,(\\ell +{\\sqrt{-1}},\\ell _3)a]$ , $\\Delta _{\\ell }^{(i)} =\\sqrt{a^2 +(\\varepsilon _{\\ell }^{(i)})^2}-a, \\quad (i=1,2),$ for $[(\\ell ,\\ell _3),(\\ell +1, \\ell _3\\pm 1)]$ or $[(\\ell , \\ell _3),(\\ell \\pm {\\sqrt{-1}}, \\ell _3\\pm 1)]$ $\\Delta _{\\ell }^{d(i, \\pm )}=\\sqrt{(a \\pm \\varepsilon _{\\ell }^{(i)})^2+a^2}-\\sqrt{2}a, \\quad (i=1,2),$ for $[(\\ell ,\\ell _3),(\\ell +1\\pm {\\sqrt{-1}},\\ell _3)]$ , $\\Delta _{\\ell }^{d(\\pm )} =\\sqrt{2a^2 + (\\varepsilon _{\\ell }^{(\\pm )})^2}-\\sqrt{2}a \\quad \\mbox{and}$ for $[(\\ell ,\\ell _3),(\\ell ,\\ell _3+1)]$ , $\\Delta _{\\ell }^{(3)} =0$ .", "Remark 3.3 By letting $\\displaystyle {w:=\\frac{a}{\\ell a - z_0}}$ for $\\ell \\in {\\mathbb {Z}}[{\\sqrt{-1}}]$ , these $\\varepsilon $ 's are real valued functions of $w$ and 
$\\overline{w}$ , i.e., $\\begin{split}\\varepsilon _{\\ell }^{(1)}(w, \\overline{w})&= \\frac{a}{4\\pi {\\sqrt{-1}}}\\log \\left(\\frac{1+w}{1+\\overline{w}}\\right), \\quad \\varepsilon _{\\ell }^{(1)}(w, \\overline{w})=\\overline{ \\varepsilon _{\\ell }^{(1)}(w, \\overline{w}) },\\\\\\varepsilon _{\\ell }^{(2)}(w, \\overline{w})&= \\frac{a}{4\\pi {\\sqrt{-1}}}\\log \\left(\\frac{1+{\\sqrt{-1}}w}{1+\\overline{{\\sqrt{-1}}w}}\\right), \\quad \\varepsilon _{\\ell }^{(2)}(w, \\overline{w})=\\overline{ \\varepsilon _{\\ell }^{(2)}(w, \\overline{w}) },\\\\\\varepsilon _{\\ell }^{(\\pm )}(w, \\overline{w}) &= \\frac{a}{4\\pi {\\sqrt{-1}}}\\log \\left(\\frac{1+(1\\pm {\\sqrt{-1}}) w}{1+\\overline{(1\\pm {\\sqrt{-1}})w}}\\right), \\quad \\varepsilon _{\\ell }^{(\\pm )}(w, \\overline{w})=\\overline{\\varepsilon _{\\ell }^{(\\pm )}(w, \\overline{w})}.\\end{split}$ These expressions look simple; the origin of this simplicity is the description based on the elementary number theory which we employ.", "Since the square root function $\\sqrt{1+x}$ is also a real analytic function of $x$ at $x=0$ , these properties are inherited by the $\\Delta $ 's.", "Lemma 3.4 For $\\ell a \\in {\\mathcal {Z}}_{\\mathrm {SC}}={\\mathbb {Z}}[{\\sqrt{-1}}]a$ satisfying $\\displaystyle {\\frac{a}{\\sqrt{|\\ell a-z_0|^2}}} \\ll 1$ , $\\varepsilon ^{(1)}_{\\ell }$ , $\\varepsilon ^{(2)}_{\\ell }$ and $\\varepsilon ^{(\\pm )}_{\\ell }$ are approximated by $\\varepsilon _{\\ell }^{(1)} & = & - \\frac{a}{2\\pi }\\frac{a(\\ell _2 a-y_0)}{|\\ell a-z_0|^2} + o\\left(\\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right), \\nonumber \\\\\\varepsilon _{\\ell }^{(2)} & = & -\\frac{a}{2\\pi }\\frac{a(\\ell _1 a -x_0)}{|\\ell a -z_0|^2}+ o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right), \\\\\\varepsilon _{\\ell }^{(\\pm )} & = & -\\frac{a}{2\\pi }\\frac{(\\pm a( \\ell _1a-x_0) +a(\\ell _2a -y_0))}{|\\ell a -z_0|^2}+ o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right), \\nonumber $ respectively, whereas 
$\\Delta _{\\ell }^{(i)}$ , $\\Delta _{\\ell }^{d(i, \\pm )}$ and $\\Delta _{\\ell }^{d(\\pm )}$ are approximated by $\\begin{array}{l}\\displaystyle {\\Delta _{\\ell }^{(i)} = \\frac{1}{2a}(\\varepsilon _{\\ell }^{(i)})^2+ o\\left( \\left(\\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right)^2 \\right)= o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}} \\right),}\\raisebox {0mm}[7mm][7mm]{} \\\\\\displaystyle {\\Delta _{\\ell }^{d(i, \\pm )}= \\pm \\frac{1}{\\sqrt{2}}\\varepsilon _{\\ell }^{(i)}+ o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}} \\right),}\\raisebox {0mm}[7mm][7mm]{} \\\\\\displaystyle {\\Delta _{\\ell }^{d(\\pm )} = \\frac{1}{2\\sqrt{2}a}(\\varepsilon _{\\ell }^{(\\pm )})^2+ o\\left( \\left(\\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right)^2 \\right)= o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}} \\right),}\\end{array}$ respectively, $i = 1, 2$ .", "Using $\\log (1+z) = z +O(z^2)$ , we have the leading terms in (REF ).", "Remark REF shows the estimation in (REF ).", "However, we can also estimate them directly; for example, $ (\\varepsilon _{\\ell }^{(i)})^2$ is estimated by $\\left|(\\varepsilon _{\\ell }^{(i)})^2\\right|=\\left|\\frac{a^2(\\ell _2 a-y_0)^2}{|\\ell a-z_0|^4}\\right|\\le \\left|\\frac{a^2}{|\\ell a-z_0|^2}\\right|,$ since $|\\ell a-z_0|^2=(\\ell _1 a-x_0)^2+(\\ell _2 a-y_0)^2$ .", "Further the relation $\\displaystyle {\\sqrt{1+z}-1 = \\frac{1}{2}z +O(z^2)}$ shows (REF ).", "Lemma 3.5 We let $\\varepsilon _{\\ell }^{(c)} :=\\varepsilon _{\\ell }^{(2)} +{\\sqrt{-1}}\\varepsilon _{\\ell }^{(1)}$ and we have the following: $(\\varepsilon _{\\ell }^{(1)})^2+(\\varepsilon _{\\ell }^{(2)})^2= \\varepsilon _{\\ell }^{(c)}\\overline{\\varepsilon _{\\ell }^{(c)}}=\\frac{a^2}{4\\pi ^2}\\frac{a^2}{|\\ell a -z_0|^2}+ o\\left( \\frac{a}{\\sqrt{|\\ell a -z_0|^2}}\\right).$ Remark 3.6 As the term in Lemma REF is the leading term in the stress energy in Theorem REF , it represents the symmetry of the dislocation in the SC lattice as follows.", "Since the lattice points are 
expressed in terms of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]$ , the edges in the graph $G^{\\mathrm {SC}}_{z_0, \\delta }$ are described by $(\\ell , \\ell + d)$ for $d \\in \\lbrace \\pm 1, \\pm {\\sqrt{-1}}\\rbrace $ , and a real-valued analytic function $f(w, \\overline{w})$ in the complex plane, $w \\in {\\mathbb {C}}$ , has the property $f(w, \\overline{w}) = \\overline{f(w, \\overline{w})}$ .", "Our expression in terms of ${\\mathbb {Z}}[{\\sqrt{-1}}]$ shows the property manifestly.", "Further, in the computations, we use the relation $1+{\\sqrt{-1}}^2=0$ , which comes from the cyclotomic symmetry mentioned in Remark REF .", "Following [16], let us introduce the subsets of ${\\mathbb {Z}}[{\\sqrt{-1}}]$ , $A_{\\rho , N}^{{\\sqrt{-1}}} := \\left\\lbrace \\ell \\in {\\mathbb {Z}}[{\\sqrt{-1}}]\\,\\Bigr |\\,\\rho a < |\\ell a -z_0|< N a \\right\\rbrace \\subset {\\mathbb {Z}}[{\\sqrt{-1}}]$ for $N > \\rho $ , which is bounded and is a finite set, and the core region $C^{\\sqrt{-1}}_\\rho $ , $C^{\\sqrt{-1}}_\\rho := \\left\\lbrace \\ell \\in {\\mathbb {Z}}[{\\sqrt{-1}}]\\,\\Bigr |\\,|\\ell a -z_0|\\le \\rho a \\right\\rbrace \\subset {\\mathbb {Z}}[{\\sqrt{-1}}].$ Let $A_{\\rho }^{{\\sqrt{-1}}} :=\\displaystyle {\\lim _{N\\rightarrow \\infty }A_{\\rho , N}^{{\\sqrt{-1}}}}$ .", "Let us evaluate the stress energy, i.e., the elastic energy caused by the screw dislocation at the mesoscopic scale.", "Since the screw dislocation is invariant under the translation from $\\ell _3$ to $\\ell _3 +1$ , we compute the energy density per unit length in the $(0,0,1)$ -direction using Remark REF , and again simply call it the stress energy of the dislocation.", "Let $k_p$ and $k_d$ be the spring constants of the horizontal springs and the diagonal springs respectively.", "Then, the stress energy of the dislocation in the annulus region $A_{\\rho ,N}^{\\sqrt{-1}}$ is given by $E^{\\mathrm {SC}}_{\\rho ,N}(z_0) := \\sum _{\\ell \\in A_{\\rho ,N}^{\\sqrt{-1}}}{\\mathcal 
{E}}^{\\mathrm {SC}}_{\\ell },$ where ${\\mathcal {E}}^{\\mathrm {SC}}_{\\ell }$ is the energy density defined by ${\\mathcal {E}}^{\\mathrm {SC}}_{\\ell }& := &\\frac{1}{2}k_p\\biggl (\\left(\\Delta _{\\ell }^{(1)}\\right)^2+\\left(\\Delta _{\\ell }^{(2)}\\right)^2 \\biggr )+ \\frac{1}{2}k_d\\biggl (\\left(\\Delta _{\\ell }^{d(1, +)}\\right)^2+\\left(\\Delta _{\\ell }^{d(2, +)}\\right)^2+\\left(\\Delta _{\\ell }^{d(1, -)}\\right)^2 \\nonumber \\\\& & \\qquad \\qquad \\qquad +\\left(\\Delta _{\\ell }^{d(2, -)}\\right)^2+\\left(\\Delta _{\\ell }^{d(+)}\\right)^2+\\left(\\Delta _{\\ell }^{d(-)}\\right)^2\\biggr ).\\raisebox {0mm}[4mm][4mm]{}.\\nonumber $ We recall Proposition 9 in [16] in terms of our convention, which are also directly obtained via Lemmas REF and REF , and Remarks REF and REF : Proposition 3.7 $(1)$ For $\\ell \\in A_\\rho ^{\\sqrt{-1}}$ , the energy density ${\\mathcal {E}}^{\\mathrm {SC}}_{\\ell }$ is expressed by a real analytic function ${\\mathcal {E}}^{\\mathrm {SC}}(w, \\overline{w})$ of $w$ and $\\bar{w} \\in {\\mathbb {C}}$ with $|w| < 1/\\sqrt{2}$ in such a way that ${\\mathcal {E}}^{\\mathrm {SC}}_{\\ell } = {\\mathcal {E}}^{\\mathrm {SC}}\\left(\\frac{a}{\\ell a - z_0},\\frac{a}{\\overline{\\ell a - z_0}}\\right).$ $(2)$ For the power series expansion ${\\mathcal {E}}^{\\mathrm {SC}}(w,\\overline{w}) = \\sum _{s=0}^\\infty {\\mathcal {E}}_{\\mathrm {SC}}^{(s)}(w, \\overline{w}), \\quad {\\mathcal {E}}_{\\mathrm {SC}}^{(s)}(w, \\overline{w}) := \\sum _{i+j=s, i,j\\ge 0} C_{i, j}w^i \\overline{w}^j,$ with $C_{i, j} \\in {\\mathbb {C}}$ , the following holds (a) ${\\mathcal {E}}_{\\mathrm {SC}}^{(0)}(w, \\overline{w})={\\mathcal {E}}_{\\mathrm {SC}}^{(1)}(w, \\overline{w})=0$ , (b) the leading term is given by ${\\mathcal {E}}_{\\mathrm {SC}}^{(2)}(w, \\overline{w}) =\\frac{a^2}{8\\pi ^2} k_dw\\overline{w},\\qquad {\\mathcal {E}}_{\\mathrm {SC}}^{(2)}\\left(\\frac{a}{\\ell a - z_0},\\frac{a}{\\overline{\\ell a - 
z_0}}\\right)=\\frac{1}{8\\pi ^2} k_d\\left[\\frac{a^4}{|\\ell a - z_0|^2 }\\right],$ (c) $C_{i, j}=\\overline{C_{j, i}}$ , and (d) for every $s \\ge 2$ , there is a constant $M_s > 0$ such that $|{\\mathcal {E}}_{\\mathrm {SC}}^{(s)}(w, \\overline{w})| \\le M_s |w|^s.$ As the summation in (REF ) is finite, we have $E^{\\mathrm {SC}}_{\\rho ,N}(z_0) =\\sum _{s=2}^\\infty \\sum _{\\ell \\in A_{\\rho ,N}^{\\sqrt{-1}}}{\\mathcal {E}}_{\\mathrm {SC}}^{(s)}\\left(\\frac{a}{\\ell a - z_0},\\frac{a}{\\overline{\\ell a - z_0}}\\right).$ As mentioned in Lemmas REF , following [16], the “principal part” of the stress energy of the screw dislocation in the SC lattice is given by the following theorem: Theorem 3.8 The principal part of the stress energy $E^{\\mathrm {SC}}_{\\rho ,N}(z_0)$ , defined by $E^{{\\mathrm {SC}}(\\mathrm {p})}_{\\rho ,N}(z_0) :=\\sum _{\\ell \\in A_{\\rho ,N}^{\\sqrt{-1}}}{\\mathcal {E}}_{\\mathrm {SC}}^{(2)}\\left(\\frac{a}{\\ell a - z_0},\\frac{a}{\\overline{\\ell a - z_0}}\\right) = \\frac{1}{8\\pi ^2} k_d\\sum _{\\ell \\in A_{\\rho ,N}^{\\sqrt{-1}}}\\left[\\frac{a^4}{|\\ell a -z_0|^2}\\right]$ is given by the truncated Epstein-Hurwitz zeta function (see Appendix), $E^{{\\mathrm {SC}}(\\mathrm {p})}_{\\rho ,N}(z_0)= \\frac{1}{8\\pi ^2}k_d a^2\\zeta _{A_{\\rho , N}^{\\sqrt{-1}}}^{{\\sqrt{-1}}}(2, -z_0/a).$ As mentioned in Remark REF , it is noted that we obtain (REF ) and this theorem due to the cyclotomic symmetry (REF ).", "By Proposition REF (2) (d), we can estimate each of the other terms appearing in the power series expansion (REF ) by the truncated Epstein-Hurwitz zeta function as follows.", "Proposition 3.9 For each $s \\ge 3$ , there exists a positive constant $M_s^{\\prime }$ such that $\\sum _{\\ell \\in A_{\\rho ,N}^{\\sqrt{-1}}}{\\mathcal {E}}_{\\mathrm {SC}}^{(s)}\\left(\\frac{a}{\\ell a - z_0}, \\frac{a}{\\overline{\\ell a - z_0}}\\right) \\le M_s^{\\prime }\\zeta _{A_{\\rho , N}^{\\sqrt{-1}}}^{{\\sqrt{-1}}}(s, -z_0/a).$" ], [ "Screw Dislocation in 
BCC Lattice and its energy", "In this section, we consider the screw dislocation in the BCC lattice.", "The studies of the screw dislocations in the BCC crystal lattices have a long history, e.g., [27], [37], and still attract attention; some of the studies are based on first-principles approaches, e.g., [10], [15], [21], others are via continuum approaches, e.g., [31], [23], [25], [29], and geometrical approaches [34], [6], [7], [18].", "In this paper, however, we concentrate on the number-theoretic approach based on the previous report [16].", "We summarize the algebraic descriptions of the BCC lattice and its screw dislocation in [16].", "In this paper, we employ a novel description of the screw dislocation in terms of elementary number theory.", "We show that the screw dislocation of the BCC lattice is expressed well in terms of the Eisenstein integers.", "Using this description, we compute its stress or the stress energy as in the case of the SC lattice." ], [ "Preliminary: Eisenstein Integers", "We show the basic properties of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]={\\mathbb {Z}}[\\omega _3]$ [41] (see Appendix).", "For the primitive sixth root of unity, $\\omega _6$ , we have the following relations: Lemma 4.1 $1+\\omega _6^2+\\omega _6^4=0, \\quad -\\omega _6=\\omega _6^4, \\quad \\overline{\\omega _6}=\\omega _6^5.$ We introduce $\\nu _i$ and $\\mu _i$ by $\\nu _i := \\frac{1}{3}(\\omega _6^i + \\omega _6^{i+1}), \\quad i = 0, 1, 2,\\ldots , 5,\\quad \\mu _0 := 0, \\quad \\mu _1 :=\\nu _0, \\quad \\mu _2:=\\nu _1.$ It is noted that they belong to ${\\displaystyle \\frac{1}{3}}{\\mathbb {Z}}[\\omega _6]:=\\lbrace \\ell _1 + \\ell _2 \\omega _6\\ | \\ 3 \\ell _i \\in {\\mathbb {Z}}\\ (i=1,2)\\rbrace $ and have the properties in the following lemma: Lemma 4.2 ${\\mathbb {Z}}[\\omega _6]= {\\mathbb {Z}}\\oplus {\\mathbb {Z}}\\omega _6={\\mathbb {Z}}[\\omega _3]$ , ${\\mathbb {Z}}[\\omega _6]+\\nu _0\\ni \\nu _2,\\nu _4$ , ${\\mathbb 
{Z}}[\\omega _6]+\\nu _1\\ni \\nu _3, \\nu _5$ , and for $z \\in {\\mathbb {E}}_{\\mathbb {C}}$ , $\\sum _{i=0}^2 \\left(\\frac{\\nu _{2 i}}{z}-\\frac{\\overline{\\nu _{2 i}}}{\\overline{z}}\\right)^2=-2\\frac{1}{|z|^2},$ $\\sum _{i=0}^2 \\left(\\frac{\\nu _{2 i+1}}{z}-\\frac{\\overline{\\nu _{2 i+1}}}{\\overline{z}}\\right)^2=-2\\frac{1}{|z|^2}.$ The relations (1)-(3) are geometrically obvious, but they can also be proved by the cyclotomic properties in Lemma REF .", "In (4), the left hand side is equal to $\\sum _{i=0}^2 \\frac{(\\nu _{2 i} \\overline{z}-\\overline{\\nu _{2 i}} z)^2}{(|z|^2)^2} =\\frac{1}{(|z|^2)^2}\\left(\\overline{z}^2 \\sum _{i=0}^2\\nu _{2 i}^2-2 |z|^2\\sum _{i=0}^2|\\nu _{2 i}|^2 +{z}^2 \\sum _{i=0}^2\\overline{\\nu _{2 i}}^2\\right).$ From Lemma REF , we have $\\sum _{i=0}^2\\nu _{2 i}^2=0, \\quad \\sum _{i=0}^2\\overline{\\nu _{2 i}}^2=0$ and thus the left hand side gives the right hand side.", "Remark 4.3 As mentioned in Remark REF , ${\\mathbb {Z}}[{\\sqrt{-1}}]$ has the cyclic group ${\\mathfrak {C}}_4$ action of order 4 as the cyclotomic symmetry of ${\\mathbb {Z}}[{\\sqrt{-1}}]$ .", "The Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ have the cyclic group ${\\mathfrak {C}}_6$ action of order 6 as the cyclotomic symmetry of ${\\mathbb {Z}}[\\omega _6]$ .", "Since the cyclic group ${\\mathfrak {C}}_3$ of order 3 is a subgroup of ${\\mathfrak {C}}_6$ , there is the ${\\mathfrak {C}}_3$ action on ${\\mathbb {Z}}[\\omega _6]$ .", "Lemma REF (4) is based on this symmetry.", "As the cyclotomic symmetry of ${\\mathbb {Z}}[{\\sqrt{-1}}]$ plays an important role in Lemma REF and Remark REF , and thus in Proposition REF and Theorem REF , the cyclotomic symmetry in ${\\mathbb {Z}}[\\omega _6]$ also plays an important role in the evaluation of the stress energy of the dislocation in the BCC lattice as in Lemma REF , Remark REF , Proposition REF , and Theorem REF ."
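The cyclotomic identities above are easy to confirm numerically. The following short script is an illustrative sketch (not part of the paper's formal argument; the names `w6`, `nu`, and `lhs` are our own) checking Lemma 4.1 and the two sums of Lemma 4.2 (4) in floating-point complex arithmetic:

```python
import cmath

# Primitive sixth root of unity, omega_6 = e^{i*pi/3}.
w6 = cmath.exp(1j * cmath.pi / 3)

# Lemma 4.1: 1 + w6^2 + w6^4 = 0, -w6 = w6^4, conj(w6) = w6^5.
assert abs(1 + w6**2 + w6**4) < 1e-9
assert abs(-w6 - w6**4) < 1e-9
assert abs(w6.conjugate() - w6**5) < 1e-9

# The third-points nu_i = (w6^i + w6^{i+1}) / 3 of Lemma 4.2.
nu = [(w6**i + w6**(i + 1)) / 3 for i in range(6)]

def lhs(z, parity):
    """Sum_{i=0}^{2} (nu_{2i+parity}/z - conj(nu_{2i+parity})/conj(z))^2."""
    return sum((nu[2 * i + parity] / z
                - nu[2 * i + parity].conjugate() / z.conjugate())**2
               for i in range(3))

# Lemma 4.2 (4): both sums equal -2/|z|^2 for any nonzero z,
# because sum nu^2 = sum conj(nu)^2 = 0 and sum |nu|^2 = 1.
for z in (1 + 2j, -0.3 + 0.7j, 5 - 1j):
    for parity in (0, 1):
        assert abs(lhs(z, parity) - (-2 / abs(z)**2)) < 1e-9

print("cyclotomic identities verified")
```

The vanishing of $\sum \nu_{2i}^2$ and $\sum \overline{\nu_{2i}}^2$ is exactly the cancellation that later makes the leading term of the stress energy proportional to $1/|\ell d_1 - z_0|^2$ alone.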
], [ "Algebraic Structure of BCC Lattice", "Though there are several algebraic descriptions of the BCC lattice (see [11], for example), we recall the algebraic description of the BCC lattice in [16].", "We assume that $a_1=(a,0,0)$ , $a_2=(0,a,0)$ , $a_3=(0,0,a)$ in ${\\mathbb {R}}^3$ for a positive real number $a$ as shown in Figure REF .", "The generator $b$ corresponds to the center point of the cube generated by $a_1$ , $a_2$ and $a_3$ .", "The BCC lattice is the lattice in ${\\mathbb {R}}^3$ generated by $a_1$ , $a_2$ , $a_3$ and $b=(a_1+a_2+a_3)/2$ .", "Algebraically, it is described as an additive group (or a ${\\mathbb {Z}}$ -module) by ${\\mathbb {B}}^a := \\langle a_1, a_2, a_3, b\\rangle _{\\mathbb {Z}}/\\langle 2b-a_1-a_2-a_3 \\rangle _{\\mathbb {Z}},$ where $\\langle 2b-a_1-a_2-a_3 \\rangle _{\\mathbb {Z}}$ is the subgroup generated by $2b-a_1-a_2-a_3$ .", "A lattice point in ${\\mathbb {B}}^a$ is given by $\\ell _1 a_1 + \\ell _2 a_2 + \\ell _3 a_3 + \\ell _b b $ for certain $\\ell _i \\in {\\mathbb {Z}}$ ($i=1, 2, 3$ ) and $\\ell _b\\in \\lbrace 0, 1\\rbrace $ .", "Figure: BCC lattice: The unit cell of the BCC lattice is illustrated by $a_1$ , $a_2$ , $a_3$ and $b$ , where $b = (a_1 + a_2 + a_3)/2$ .", "Figure: BCC lattice and its projection along the $(1,1,1)$ -direction: (a) shows the panoramic view of the unit cell of the BCC lattice which contains two triangles whose normal direction is $(1,1,1)$ .", "(b) shows its projection along the $(1,1,1)$ -direction corresponding to (a).", "Figure: BCC lattice: The black, gray and white dots correspond to the three sheets ${\\mathbb {B}}^{(0)}$ , ${\\mathbb {B}}^{(1)}$ and ${\\mathbb {B}}^{(2)}$ , which are associated with ${\\mathcal {Z}_{\\mathrm {BCC}}^{(0)}}$ , ${\\mathcal {Z}_{\\mathrm {BCC}}^{(1)}}$ and ${\\mathcal {Z}_{\\mathrm {BCC}}^{(2)}}$ respectively.", "The lattice ${\\mathbb {B}}^a$ is group-isomorphic to the multiplicative group ${\\mathbb {B}}:= 
\\lbrace \\alpha _1^{\\ell _1}\\alpha _2^{\\ell _2}\\alpha _3^{\\ell _3}\\beta ^{\\ell _4} \\,| \\, \\mbox{abelian}, \\ell _1, \\ell _2, \\ell _3, \\ell _4 \\in {\\mathbb {Z}}, \\,\\beta ^2 \\alpha _1^{-1} \\alpha _2^{-1} \\alpha _3^{-1} =1\\rbrace .$ Let us denote by ${\\mathbb {A}}_4$ the multiplicative free abelian group of rank 4 generated by $\\alpha _1$ , $\\alpha _2$ , $\\alpha _3$ and $\\beta $ , i.e., ${\\mathbb {A}}_4 := \\lbrace \\alpha _1^{\\ell _1}\\alpha _2^{\\ell _2}\\alpha _3^{\\ell _3}\\beta ^{\\ell _4} \\,| \\, \\mbox{abelian}, \\, \\ell _1, \\ell _2, \\ell _3, \\ell _4 \\in {\\mathbb {Z}}\\rbrace .$ Then, ${\\mathbb {B}}$ is also described as the quotient group ${\\mathbb {B}}= {\\mathbb {A}}_4/\\langle \\beta ^2 \\alpha _1^{-1} \\alpha _2^{-1} \\alpha _3^{-1} \\rangle ,$ where $\\langle \\beta ^2 \\alpha _1^{-1} \\alpha _2^{-1} \\alpha _3^{-1} \\rangle $ is the (normal) subgroup generated by $\\beta ^2 \\alpha _1^{-1} \\alpha _2^{-1} \\alpha _3^{-1}$ .", "We shall consider the group ring ${\\mathbb {C}}[{\\mathbb {B}}]$ of ${\\mathbb {B}}$ , ${\\mathcal {R}}_6:={\\mathbb {C}}[{\\mathbb {B}}] = {\\mathbb {C}}[\\alpha _1,\\alpha _2, \\alpha _3,\\alpha _1^{-1},\\alpha _2^{-1}, \\alpha _3^{-1},\\beta , \\beta ^{-1}]/(\\beta ^2 - \\alpha _1 \\alpha _2 \\alpha _3).$" ], [ "Algebraic Structure of BCC Lattice for $(1,1,1)$ -Direction", "It is known that a screw dislocation in the BCC lattice is basically given by the $(1,1,1)$ -direction since the Burgers vector is parallel to the $(1,1,1)$ -direction [32].", "In this subsection, we consider the algebraic structure of the BCC lattice for the $(1,1,1)$ -direction to describe its fibering structure by noting Figure REF (a) and (b).", "Let us consider the subgroup of ${\\mathbb {B}}$ , which corresponds to the translation in the plane perpendicular to the $(1,1,1)$ -direction, ${\\mathbb {B}}_H:=\\lbrace (\\alpha _1\\alpha _3^{-1})^{\\ell _1}(\\alpha _2\\alpha _3^{-1})^{\\ell _2} \\, | \\,\\ell _1, \\ell _2 \\in {\\mathbb 
{Z}}\\rbrace ,$ and ${\\mathbb {C}}[{\\mathbb {B}}_H]$ -modules.", "Lemma 4.4 There are isomorphisms as ${\\mathbb {C}}[{\\mathbb {B}}_H]$ -modules: ${\\mathcal {R}}_6/(\\alpha _1\\alpha _2\\alpha _3-1)& \\cong &{\\mathbb {C}}[{\\mathbb {B}}_H]\\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\alpha _2\\\\& & \\quad \\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\beta \\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\beta \\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\alpha _2\\beta .$ ${\\mathcal {R}}_3:={\\mathcal {R}}_6/(\\beta -1) \\cong {\\mathbb {C}}[{\\mathbb {B}}_H]\\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\oplus {\\mathbb {C}}[{\\mathbb {B}}_H]\\alpha _1\\alpha _2.$ These decompositions mean that the BCC lattice has a fiber structure consisting of three different sheets.", "We should note that ${\\mathcal {R}}_6$ can be regarded as a double covering of ${\\mathcal {R}}_3$ .", "The interval between the sheets is now given by $\\sqrt{3}a/6$ , and let us denote ${\\mathcal {R}}_3$ as a set (the image of the forgetful functor to the category of sets) by ${\\mathbb {B}}^a$ , a subset of the vector space ${\\mathbb {R}}^3$ corresponding to the three sheets: Lemma 4.5 As a set, ${\\mathbb {B}}^a$ is also decomposed as ${\\mathbb {B}}^a = {\\mathbb {B}}^{(0)} \\coprod {\\mathbb {B}}^{(1)} \\coprod {\\mathbb {B}}^{(2)} ,$ where $\\begin{array}{rl}{\\mathbb {B}}^{(0)}&:=\\lbrace \\ell _1(a_1-a_3) +\\ell _2(a_2-a_3) +\\ell _3 b \\,| \\, \\ell _1, \\ell _2, \\ell _3 \\in {\\mathbb {Z}}\\rbrace \\subset {\\mathbb {R}}^3, \\\\{\\mathbb {B}}^{(1)}&:=\\lbrace \\ell _1(a_1-a_3) +\\ell _2(a_2-a_3) + a_1 +\\ell _3 b \\,| \\, \\ell _1, \\ell _2, \\ell _3 \\in {\\mathbb {Z}}\\rbrace \\subset {\\mathbb {R}}^3,\\\\{\\mathbb {B}}^{(2)}&:=\\lbrace \\ell _1(a_1-a_3) +\\ell _2(a_2-a_3)+ a_1+ a_2+\\ell _3 b \\,| \\, \\ell _1, \\ell _2, \\ell _3 \\in {\\mathbb {Z}}\\rbrace \\subset {\\mathbb {R}}^3.\\\\\\end{array}$" ], [ 
"Fiber Structure of BCC Lattice and Eisenstein Integers", "We can regard ${\\mathbb {B}}^{(a)}$ as a trivial covering space in the $\\ell _3$ -direction.", "On the other hand, the additive group of ${\\mathbb {B}}_H$ , ${\\mathbb {B}}_H^a:=\\lbrace \\ell _1(a_1-a_3) +\\ell _2(a_2-a_3) \\,| \\, \\ell _1, \\ell _2\\in {\\mathbb {Z}}\\rbrace \\subset {\\mathbb {R}}^2,$ can be expressed by the Eisenstein integers.", "We define $d_0:=\\frac{\\sqrt{3}}{2}a=|b|,\\quad d_1 := \\sqrt{2} a, \\quad d_2:=\\frac{1}{\\sqrt{3}}d_1=\\frac{\\sqrt{2}}{\\sqrt{3}} a,\\quad d_3:=\\frac{\\sqrt{3}}{6}a=\\frac{d_0}{3},$ and ${\\mathcal {Z}_{\\mathrm {BCC}}^{(a)}} := ({\\mathbb {Z}}[\\omega _6]+\\mu _a)d_1$ $(a=0,1,2)$ using (REF ), i.e., ${\\mathcal {Z}_{\\mathrm {BCC}}^{(0)}} = {\\mathbb {Z}}[\\omega _6]d_1, \\quad {\\mathcal {Z}_{\\mathrm {BCC}}^{(1)}} = {\\mathbb {Z}}[\\omega _6]d_1+\\nu _0d_1, \\quad {\\mathcal {Z}_{\\mathrm {BCC}}^{(2)}} = {\\mathbb {Z}}[\\omega _6]d_1+\\nu _1d_1, \\quad $ which correspond to ${\\mathbb {B}}^{(0)}$ , ${\\mathbb {B}}^{(1)}$ and ${\\mathbb {B}}^{(2)}$ respectively as in Figure REF , i.e., there are natural projections in the $b$ -direction, $\\pi _{{\\mathrm {BCC}}}^{(a)}: {\\mathbb {B}}^{(a)}\\rightarrow {\\mathcal {Z}_{\\mathrm {BCC}}^{(a)}}$ .", "Remark 4.6 The projections $\\pi _{{\\mathrm {BCC}}}^{(a)}$ of these ${\\mathbb {B}}^{(0)}$ , ${\\mathbb {B}}^{(1)}$ and ${\\mathbb {B}}^{(2)}$ are essentially equal to ${\\mathbb {Z}}[\\omega _6]$ up to translation and dilatation $d_1$ .", "Further, $\\mu _c$ and $\\nu _i$ are third-points of the lattice, $\\mu _c, \\nu _i \\in \\frac{1}{3}{\\mathbb {Z}}[\\omega _6]$ .", "They have the algebraic properties of Lemmas REF and REF and Remark REF , whose origin is the ring of integers of the cyclotomic field ${\\mathbb {Q}}[\\omega _6]$ .", "They have been studied in number theory and algebraic geometry [14], and applied to physics [30].", "As shown in the following, the $z_3$ position of each sheet in the screw 
dislocations along the $(1,1,1)$ -direction and the local energy due to the dislocation can be regarded as functions on ${\\mathbb {Z}}[\\omega _6]$ , more precisely on ${\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}} := ({\\mathbb {Z}}[\\omega _6]+\\mu _c)d_1$ $(c=0,1,2)$ .", "Thus Lemmas REF and REF govern the computations of the functions in Lemma REF , make them very simple, and connect them with the Epstein-Hurwitz zeta function as in Theorem REF .", "Though such properties were not mentioned in the previous works including [16], they show the algebraic nature of the BCC lattice and of the screw dislocation in the BCC lattice.", "For a point $\\delta = (\\delta _1, \\delta _2, \\delta _3) \\in {\\mathbb {E}}^3$ , we consider the embedding $\\iota _\\delta :{\\mathbb {B}}^a \\rightarrow {\\mathbb {E}}^3$ and its image $\\iota _\\delta ({\\mathbb {B}}^a)$ .", "Corresponding to $\\iota _\\delta $ , for the point $\\delta _{\\mathbb {C}}= \\delta _1+{\\sqrt{-1}}\\delta _2\\in {\\mathbb {E}}_{\\mathbb {C}}$ , let us also consider the embedding $\\iota _{\\delta _{\\mathbb {C}}}:{\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}} \\rightarrow {\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}}+\\delta _{\\mathbb {C}}\\in {\\mathbb {E}}_{\\mathbb {C}}$ and the bundle maps $\\widehat{\\iota }_{\\delta _{\\mathbb {C}}}$ .", "Further, for $\\gamma _{\\delta }=\\mathrm {e}^{{\\sqrt{-1}}\\delta _3 /d_0}\\in S^1$ and a constant section $\\mathfrak {u}_{\\delta }\\in \\Gamma ({\\mathbb {E}}_{\\mathbb {C}}, S^1_{{\\mathbb {E}}_{\\mathbb {C}}})$ $(\\mathfrak {u}_{\\delta }(z)=\\gamma _{\\delta })$ , we consider $\\mathfrak {u}^{\\mathrm {BCC}}_\\delta \\in \\Gamma ({\\mathcal {Z}_{\\mathrm {BCC}}^{}}, S^1_{{\\mathcal {Z}_{\\mathrm {BCC}}^{}}})$ as $\\mathfrak {u}^{\\mathrm {BCC}}_\\delta :=\\mathfrak {u}_{\\delta } \\circ \\iota _{\\delta _{\\mathbb {C}}},$ where $S^1_{{\\mathcal {Z}_{\\mathrm {BCC}}^{}}}$ is the trivial $S^1$ bundle over ${\\mathcal {Z}_{\\mathrm {BCC}}^{}}$ .", "Proposition 4.7 The 
BCC lattice $\\iota _\\delta ({\\mathbb {B}}^a)$ is expressed by $\\bigcup _{c=0}^2\\widehat{\\iota }_{\\delta _{\\mathbb {C}}} \\left(\\widehat{\\psi }_{d_0}^{-1}\\left(\\omega _3^{-c} \\mathfrak {u}^{\\mathrm {BCC}}_\\delta ({\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}})\\right)\\right)\\subset {\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}}={\\mathbb {E}}^3.$ Here we set $d=d_0=|b|$ of $\\psi _d$ in Section 2." ], [ "Spiral Structure in Graph of BCC Lattice", "In the BCC lattice $\\iota _\\delta ({\\mathbb {B}}^a)$ , let us consider the graph $G^{\\mathrm {BCC}}_\\delta $ whose nodes are given as the lattice points of the BCC lattice and edges are given as the shortest connections of the nodes as shown in Figure REF (a).", "We regard $G^{\\mathrm {BCC}}_\\delta $ as a subset of ${\\mathbb {E}}^3$ .", "The 0-th sheet $\\iota _\\delta ({\\mathbb {B}}^{(0)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(0)}})$ ), whose nodes are denoted by the black dots, is connected with the first sheet $\\iota _\\delta ({\\mathbb {B}}^{(1)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(1)}})$ ), which corresponds to the gray dots, via the long dotted lines in Figure REF (a).", "The short dotted lines connect $\\iota _\\delta ({\\mathbb {B}}^{(2)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(2)}})$ ) and $\\iota _\\delta ({\\mathbb {B}}^{(0)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(0)}})$ ), whereas the black lines connect $\\iota _\\delta ({\\mathbb {B}}^{(1)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(1)}})$ ) and $\\iota _\\delta ({\\mathbb {B}}^{(2)})$ ($\\iota _{\\delta _{\\mathbb {C}}}({\\mathcal {Z}_{\\mathrm {BCC}}^{(2)}})$ ).", "The graph $G^{\\mathrm {BCC}}_\\delta $ has a projection to a plane ${\\mathbb {E}}_{\\mathbb {C}}$ as in Figure REF (b): $\\pi _G: G^{\\mathrm {BCC}}_\\delta \\rightarrow {\\mathbb {E}}_{\\mathbb {C}}$ .", "These edges give the paths 
which connect these covering sheets.", "As in Figure REF (b), let us consider the path whose projection is a cycle in $\\pi _G(G^{\\mathrm {BCC}}_\\delta )$ consisting of three edges, which is called a "spiral path" because the end point $p$ and the start point $q$ exist on different covering sheets but $q \\in \\pi _G^{-1}(\\pi _G(p))$ ; if the start point is $(\\ell _1, \\ell _2, \\ell _3)$ in $\\iota _\\delta ({\\mathbb {B}}^{(a)})$ , the end point is given as $(\\ell _1, \\ell _2, \\ell _3^{\\prime })$ in $\\iota _\\delta ({\\mathbb {B}}^{(a)})$ for $|\\ell _3^{\\prime } - \\ell _3|=d_0=|b|$ .", "Thus the path shows a spiral curve in ${\\mathbb {E}}^3$ .", "The set of the spiral paths is classified into two types.", "We assign an orientation on ${\\mathbb {E}}_{\\mathbb {C}}$ , and the orientation of the arrowed graph $[\\pi _G(G^{\\mathrm {BCC}}_\\delta )]$ is naturally induced from it.", "For an oriented cycle in $\\pi _G(G^{\\mathrm {BCC}}_\\delta )$ , the spiral path is ascendant or descendant with respect to $\\ell _3$ .", "We call these triangle cells ascendant cell and descendant cell respectively.", "They are illustrated in Figure REF (a) and (b) respectively.", "For a center point $z_c$ of an ascendant triangle cell of $\\pi _G(G^{\\mathrm {BCC}}_\\delta )$ , the nodes in $G^{\\mathrm {BCC}}_\\delta $ are given by $\\psi _{d_0}^{-1}\\left(\\gamma _\\delta \\frac{z-z_c}{|z-z_c|}\\right),$ whereas for a center point $z_c$ of a descendant triangle cell of $\\pi _G(G^{\\mathrm {BCC}}_\\delta )$ , the nodes are expressed by $\\psi _{d_0}^{-1}\\left(\\gamma _\\delta \\frac{\\overline{z-z_c}}{|z-z_c|}\\right).$ These pictures are well-described in the works of Ramasubramaniam, Ariza and Ortiz [2] and [6] using homological investigations more precisely.", "Figure: Spiral Paths in BCC lattice: (a) and (c) are ascendant spiral paths and (b) and (d) descendant spiral paths.", "(a) and (b) are normal cases whereas (c) and (d) are the behavior when in the center, 
the screw dislocation exists." ], [ "Algebraic Description of Screw Dislocations in BCC Lattice", "As defined in Subsection REF , for $\\delta \\in {\\mathbb {E}}^3$ and $z_0 \\in {\\mathbb {E}}_{\\mathbb {C}}$ , we use the embedding $\\iota _{\\delta _{\\mathbb {C}}}:{\\mathcal {Z}_{\\mathrm {BCC}}^{(i)}} \\rightarrow {\\mathcal {Z}_{\\mathrm {BCC}}^{(i)}}+\\delta _{\\mathbb {C}}\\in {\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace $ , $\\delta _{\\mathbb {C}}= \\delta _1+{\\sqrt{-1}}\\delta _2\\in {\\mathbb {E}}_{\\mathbb {C}}$ , $\\gamma _{\\delta }=\\mathrm {e}^{{\\sqrt{-1}}\\delta _3 /d_0}\\in S^1$ and the bundle map $\\widehat{\\psi }_{d_0}$ so that the description of the screw dislocation is obtained as follows.", "Let us consider the non-trivial $S^1$ -bundle over ${\\mathcal {Z}_{\\mathrm {BCC}}^{}}$ induced from the embedding $\\iota _{\\delta _{\\mathbb {C}}}$ .", "Using the section $\\mathfrak {u}_{z_0, \\delta } \\in \\Gamma ({\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace , S^1_{{\\mathbb {E}}_{\\mathbb {C}}\\setminus \\lbrace z_0\\rbrace })$ in (REF ), we define the section $\\mathfrak {u}^{\\mathrm {BCC}}_{z_0,\\delta }$ in $\\Gamma ({\\mathcal {Z}_{\\mathrm {BCC}}^{}}, S^1_{{\\mathcal {Z}_{\\mathrm {BCC}}^{}}})$ , $\\mathfrak {u}^{\\mathrm {BCC}}_{z_0,\\delta }=\\iota ^{*}_{\\delta _{\\mathbb {C}}}\\mathfrak {u}_{z_0,\\delta }=\\mathfrak {u}_{z_0,\\delta }\\circ \\iota _{\\delta _{\\mathbb {C}}}.$ It implies that $\\mathfrak {u}^{\\mathrm {BCC}}_{z_0,\\delta }(\\ell d_1)= \\gamma _\\delta \\frac{ d_1 \\ell +\\delta _{\\mathbb {C}}-z_0}{|d_1 \\ell +\\delta _{\\mathbb {C}}- z_0|},\\qquad \\mbox{\\rm for } \\ell d_1 \\in {\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}} \\quad (c=0, 1, 2).$ Proposition 4.8 The 
\\left(\\widehat{\\psi }_{d_0}^{-1}\\left(\\omega _3^{-c} \\mathfrak {u}^{\\mathrm {BCC}}_{z_0,\\delta }({\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}})\\right)\\right)$ is a subset of ${\\mathbb {E}}^3$ .", "${\\mathbb {E}}_{{\\mathbb {E}}\\setminus \\lbrace z_0\\rbrace }$ is obviously a subset of ${\\mathbb {E}}^3={\\mathbb {E}}_{{\\mathbb {E}}_{\\mathbb {C}}}$ .", "Remark 4.9 Though it is obvious that the screw dislocation exists in ${\\mathbb {E}}^3$ in physics, it is not obvious that a geometrical object constructed in algebraic topology is realized in ${\\mathbb {E}}^3$ , e.g., the Klein bottle.", "Proposition REF is crucial in the description of the physical object in terms of algebraic language.", "As in the SC lattice, we also consider the graph $G_{z_0,\\delta }^{\\mathrm {BCC}}$ .", "In the following, we also assume that $\\gamma = 1$ and $\\delta _{\\mathbb {C}}=0$ for simplicity." ], [ "Note on the Core Region of Screw Dislocations in BCC Lattice", "Though the core structure in the screw dislocations in the BCC lattices has been studied well, e.g., in [37], [2], [7] and using the first-principles approach [10], [15], [21], we show the description of the core region in terms of our framework.", "Noting the equations (REF ) and (REF ), we consider the core region of the screw dislocation.", "The core region consists of the cells neighboring $z_0$ .", "If $z_0$ is the center $z_c$ of an ascendant triangle, for a vertex $z$ of the triangle, the set of the fiber direction is $D(z)=\\psi _{d_0}^{-1}\\left(\\gamma _{\\delta }\\frac{(z-z_c)^2}{|z-z_c|^2}\\right)=\\psi _{d_0}^{-1}\\left(\\gamma _{\\delta }\\frac{z-z_c}{\\overline{z-z_c}}\\right).$ On the other hand, if $z_0$ is the center $z_c$ of a descendant triangle, the set of the fiber direction is $D(z)=\\psi _{d_0}^{-1}\\left(\\gamma _\\delta \\right)$ for each vertex $z$ of the triangle.", "They are illustrated in Figure REF (c) and (d) respectively.", "In the former case, there might exist different connections illustrated by 
the dotted lines.", "Thus the screw dislocation in the BCC lattice shows a quite different aspect from the case of the SC lattice.", "The operation in Remark REF can be applied to this system so that we have Figure REF (c) and (d).", "By the operation, the connected spiral paths are deformed into the disjoint paths.", "The disjoint subgraphs characterize the direction of the screw dislocations." ], [ "Energy of Screw Dislocation in BCC Lattice ", "In this section, we estimate the stress energy of the screw dislocation in the BCC lattice in the meso-scopic scale.", "We basically investigate the energy in parallel with the computations in the SC lattice.", "For simplicity of notation, we denote $\\mathfrak {u}^{\\mathrm {BCC}}_{z_0,\\delta }$ etc. simply by $\\mathfrak {u}^{\\mathrm {BCC}}_{z_0}$ etc. by suppressing $\\delta $ .", "For $\\ell \\in {\\mathbb {Z}}[\\omega _6] +\\mu _c$ $(c=0,1,2)$ , we define the relative height differences $\\varepsilon _{\\ell }^{(c,j)}$ $(j=0,1,\\cdots ,5)$ , $\\begin{split}\\varepsilon _{\\ell }^{(c,j)}&= \\frac{d_0}{2\\pi {\\sqrt{-1}}}\\left(\\log (\\mathfrak {u}^{\\mathrm {BCC}}_{z_0}((\\ell +\\nu _{j})d_1))-\\log (\\mathfrak {u}^{\\mathrm {BCC}}_{z_0}(\\ell d_1)) \\right)\\\\&= \\frac{d_0}{4\\pi {\\sqrt{-1}}}\\left(\\log \\left(1 + \\frac{d_1\\nu _{j}}{\\ell d_1-z_0}\\right)-\\log \\left(1 + \\frac{\\overline{d_1\\nu _{j}}}{\\overline{\\ell d_1-z_0}}\\right) \\right).\\end{split}$ Here we require that $-d_3/2 < \\varepsilon _{\\ell }^{(c,j)} < d_3/2$ .", "Let us introduce a parameter $\\varepsilon >0$ and, using it, we define the core region $C^{{\\mathrm {BCC}}(c)}_{\\varepsilon ,\\mathrm {I}}$ of type I, $C^{{\\mathrm {BCC}}(c)}_{\\varepsilon ,\\mathrm {I}}:=\\lbrace \\ell \\in {\\mathbb {Z}}[\\omega _6] +\\mu _c\\ | \\ {}^\\exists {j} = 0,1,\\cdots ,5\\mbox{ such that }|\\varepsilon _{\\ell }^{(c,j)}| > \\varepsilon \\rbrace .$ Assume that $\\varepsilon < d_3/2$ .", "The difference of length in each segment between $\\ell \\in {\\mathcal 
{Z}_{\\mathrm {BCC}}^{(c)}}\\setminus d_1C^{{\\mathrm {BCC}}(c)}_{\\varepsilon ,\\mathrm {I}} $ and its nearest neighbor lattice points is given by $\\Delta _{\\ell }^{(c,j)} =\\sqrt{\\left(d_3+(-1)^j\\varepsilon _{\\ell }^{(c,j)}\\right)^2 +d_2^2}-\\sqrt{d_3^2+d_2^2},$ for $j=0,1,\\cdots ,5$ .", "Here we note that $\\sqrt{d_3^2+d_2^2}=\\sqrt{3}a/2=d_0$ .", "We have the following.", "Lemma 4.10 If $\\displaystyle {\\frac{d_1}{\\sqrt{|\\ell d_1-z_0|^2}}}$ for an $\\ell \\in {\\mathbb {Z}}[\\omega _6] +\\mu _c$ $(c=0,1,2)$ is sufficiently small, $\\varepsilon ^{(c,j)}_{\\ell }$ 's are approximated by $\\varepsilon _{\\ell }^{(c,j)} =\\frac{d_0d_1}{4\\pi {\\sqrt{-1}}}\\left(\\frac{\\nu _{j}}{\\ell d_1-z_0}-\\frac{\\overline{\\nu _{j}}}{\\overline{(\\ell d_1-z_0)}}\\right)+ o\\left(\\frac{d_1}{\\sqrt{|\\ell d_1 -z_0|^2}}\\right),$ respectively, whereas $\\Delta _{\\ell }^{(c,j)}$ are approximated by $\\displaystyle {\\Delta _{\\ell }^{(c,j)}= \\frac{(-1)^jd_3}{d_0}\\varepsilon _{\\ell }^{(c,j)}+ o\\left( \\frac{d_1}{\\sqrt{|\\ell d_1 -z_0|^2}} \\right)}.$ Noting $\\log (1+z) = z +O(z^2)$ , $\\displaystyle {\\sqrt{1+z}-1 = \\frac{1}{2}z +O(z^2)}$ and $\\sqrt{d_3^2+d_2^2}=\\sqrt{3}a/2=d_0$ , the direct computations show them.", "As mentioned in Remark REF , due to the properties of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ in Lemmas REF and REF , we have the simple expression: Lemma 4.11 If $\\displaystyle {\\frac{d_1}{\\sqrt{|\\ell d_1-z_0|^2}}}$ for an $\\ell \\in {\\mathbb {Z}}[\\omega _6] +\\mu _c$ $(c=0,1,2)$ is sufficiently small, $\\frac{1}{2}\\sum _{j=0}^5(\\Delta _{\\ell }^{(c,j)})^2 =\\frac{1}{384\\pi ^2}\\frac{d_1^4}{|\\ell d_1-z_0|^2}+ o\\left( \\frac{d_1}{\\sqrt{|\\ell d_1 -z_0|^2}^3} \\right).$ The left hand side is equal to $-\\frac{1}{2}\\frac{d_0^2d_1^2d_3^2}{16\\pi ^2d_0^2}\\sum _{j=0}^5\\left(\\frac{\\nu _{j}}{\\ell d_1-z_0}-\\frac{\\overline{\\nu _{j}}}{\\overline{\\ell d_1-z_0}}\\right)^2+ o\\left( \\frac{d_1}{\\sqrt{|\\ell d_1 
-z_0|^2}^3} \\right),$ and thus using Lemmas REF and REF , it becomes $=\\frac{d_1^4}{384\\pi ^2}\\frac{1}{|\\ell d_1-z_0|^2}+ o\\left( \\frac{d_1}{\\sqrt{|\\ell d_1 -z_0|^2}^3} \\right).$ Here the extra terms are canceled due to the properties of $\\frac{1}{3}{\\mathbb {Z}}[\\omega _6]$ .", "Remark 4.12 As we show in Remark REF , $\\varepsilon _{\\ell }^{(c,j)}$ is a real analytic function of $\\displaystyle {w_j:= \\frac{d_1\\nu _{j}}{\\ell d_1-z_0}}$ and $\\overline{w}_j$ for $|w_j|\\ll 1$ , i.e., $\\varepsilon _{\\ell }^{(c,j)}=\\varepsilon _{\\ell }(w_j, \\overline{w_j}) =\\frac{d_0}{4\\pi {\\sqrt{-1}}}\\log \\left(\\frac{1+w_j}{\\overline{1+w_j}}\\right),$ and thus $\\Delta _{\\ell }^{(c,j)}$ is also a real analytic function of $w_j$ and $\\overline{w}_j$ .", "As we mentioned in Remark REF , we have the cyclotomic symmetry in ${\\mathbb {Z}}[\\omega _6]$ , or the ${\\mathfrak {C}}_6$ action on ${\\mathbb {Z}}[\\omega _6]$ ; there is the element $g \\in {\\mathfrak {C}}_6$ such that $g \\nu _{j} = \\nu _{j+1}$ , whose index is given modulo 6.", "The action induces the action on the function $f$ over ${\\mathbb {Z}}[\\omega _6]$ , i.e., $g^* f(x) = f(g x)$ .", "By fixing $\\ell d_1 - z_0$ , the action is given as $g^*\\varepsilon _{\\ell }^{(c,j)}=g^*\\varepsilon _{\\ell }(w_j, \\overline{w_j}) =\\varepsilon _{\\ell }^{(c,j+1)},$ and $\\displaystyle {\\sum _{j=0}^5(\\Delta _{\\ell }^{(c,j)})^2}$ in Lemma REF is invariant under the action of $g \\in {\\mathfrak {C}}_6$ .", "For a positive number $\\rho $ , let us define another core region $C^{{\\mathrm {BCC}}(c)}_{\\rho ,\\mathrm {II}}$ of type II, $C^{{\\mathrm {BCC}}(c)}_{\\rho ,\\mathrm {II}}:=\\left\\lbrace \\ell \\in {\\mathbb {Z}}[\\omega _6] +\\mu _c \\ | \\ \\left|\\ell d_1-z_0\\right| < \\rho d_1 \\right\\rbrace .$ In order to avoid double counting, we concentrate on one of the ${\\mathcal {Z}_{\\mathrm {BCC}}^{(c)}}$ 's and choose ${\\mathcal {Z}_{\\mathrm {BCC}}^{(0)}}$ in this paper.", "Let the core region of type 
III and its complement be $C^{{\\mathrm {BCC}}(0)}_{\\varepsilon ,\\rho ,\\mathrm {III}}:=\\lbrace \\ell \\in {\\mathbb {Z}}[\\omega _6] \\ | \\ \\ell \\in C^{{\\mathrm {BCC}}(0)}_{\\varepsilon ,\\mathrm {I}}\\cup C^{{\\mathrm {BCC}}(0)}_{\\rho ,\\mathrm {II}} \\mbox{ or }\\mathrm {Ad}(\\ell d_1) \\subset \\bigcup _{c=1}^2C^{{\\mathrm {BCC}}(c)}_{\\varepsilon ,\\mathrm {I}} \\rbrace $ and $A^{\\omega _6}_{\\varepsilon ,\\rho }:={\\mathbb {Z}}[\\omega _6] \\setminus C^{{\\mathrm {BCC}}(0)}_{\\varepsilon ,\\rho ,\\mathrm {III}},\\qquad A^{\\omega _6}_{\\varepsilon ,\\rho ,N}:=\\lbrace \\ell \\in A^{\\omega _6}_{\\varepsilon ,\\rho } \\ | \\ \\mathrm {dist}_{z_0} (\\mathrm {Ad}(\\ell d_1)) <N d_1\\rbrace ,$ where for a node $v$ in the plane graph $\\pi _G(G_{z_0}^{\\mathrm {BCC}})$ , we denote the set of the adjacent nodes of $v$ by $\\mathrm {Ad}(v)$ and we define $\\mathrm {dist}_{z_0}(\\lbrace v_i\\rbrace ) := \\max _{v\\in \\lbrace v_i\\rbrace } |v -z_0|.$ For the case that $\\rho $ is sufficiently large for given $\\varepsilon \\displaystyle {\\left(<\\frac{d_3}{2}\\right)}$ so that $C^{{\\mathrm {BCC}}(0)}_{\\varepsilon ,\\rho ,\\mathrm {III}}\\subset C^{{\\mathrm {BCC}}(0)}_{\\rho ,\\mathrm {II}}$ , we also define $A^{\\omega _6}_{\\rho }:=A^{\\omega _6}_{\\varepsilon ,\\rho }, \\quad A^{\\omega _6}_{\\rho ,N}:=A^{\\omega _6}_{\\varepsilon ,\\rho ,N}.$ We compute the stress energy caused by the screw dislocation in the BCC lattice as in the SC lattice case.", "We compute the energy density per unit length in the $(1,1,1)$ -direction, and call it simply the stress energy of dislocation again.", "Let $k_d$ be the spring constant of the edges.", "The stress energy of dislocation in the annulus region $A^{\\omega _6}_{\\varepsilon ,\\rho ,N}$ is given by $E^{\\mathrm {BCC}}_{\\varepsilon ,\\rho ,N}(z_0) := \\sum _{\\ell \\in A^{\\omega _6}_{\\varepsilon ,\\rho ,N}}{\\mathcal {E}}^{\\mathrm {BCC}}_{\\ell },$ where ${\\mathcal {E}}^{\\mathrm {BCC}}_{\\ell }$ for 
every $\\ell \\in {\\mathbb {Z}}[\\omega _6]$ is the energy density defined by ${\\mathcal {E}}^{\\mathrm {BCC}}_{\\ell } :=\\frac{1}{2}k_d\\sum _{j=0}^5 \\left(\\Delta _{\\ell }^{(j)}\\right)^2.$ As in Proposition REF in the SC lattice, we summarize the above results for the stress energy of the dislocation in the BCC lattice case: Proposition 4.13 $(1)$ For $\\ell \\in A^{\\omega _6}_{\\varepsilon ,\\rho ,N}$ , the energy density ${\\mathcal {E}}^{\\mathrm {BCC}}_{\\ell }$ is expressed by a real analytic function ${\\mathcal {E}}^{\\mathrm {BCC}}(w, \\overline{w})$ of $w$ and $\\bar{w} \\in {\\mathbb {C}}$ with $|w| < 1/\\sqrt{2}$ in such a way that ${\\mathcal {E}}^{\\mathrm {BCC}}_{\\ell } = {\\mathcal {E}}^{\\mathrm {BCC}}\\left(\\frac{d_1}{\\ell d_1 - z_0},\\frac{d_1}{\\overline{\\ell d_1 - z_0}}\\right).$ $(2)$ Let us consider the power series expansion ${\\mathcal {E}}^{\\mathrm {BCC}}(w,\\overline{w}) = \\sum _{s=0}^\\infty {\\mathcal {E}}_{\\mathrm {BCC}}^{(s)}(w, \\overline{w}), \\quad {\\mathcal {E}}_{\\mathrm {BCC}}^{(s)}(w, \\overline{w}) := \\sum _{i+j=s, i,j\\ge 0} C_{i, j}w^i \\overline{w}^j,$ for some $C_{i, j} \\in {\\mathbb {C}}$ .", "Then, we have the following: (a) ${\\mathcal {E}}_{\\mathrm {BCC}}^{(0)}(w, \\overline{w})={\\mathcal {E}}_{\\mathrm {BCC}}^{(1)}(w, \\overline{w})=0$ , (b) the leading term is given by ${\\mathcal {E}}_{\\mathrm {BCC}}^{(2)}(w, \\overline{w}) =\\frac{d_1^2}{384\\pi ^2} k_dw\\overline{w},\\qquad {\\mathcal {E}}_{\\mathrm {BCC}}^{(2)}\\left(\\frac{d_1}{\\ell d_1 - z_0},\\frac{d_1}{\\overline{\\ell d_1 - z_0}}\\right)=\\frac{1}{384\\pi ^2} k_d\\left[\\frac{d_1^4}{|\\ell d_1 - z_0|^2 }\\right],$ (c) $C_{i, j}=\\overline{C_{j, i}}$ , and (d) for every $s \\ge 2$ , there is a constant $M_s > 0$ such that $|{\\mathcal {E}}_{\\mathrm {BCC}}^{(s)}(w, \\overline{w})| \\le M_s |w|^s.$ By Remark REF , Lemmas REF and REF show (1) and (2) (a), (b).", "Since the energy density is a real number, we obtain the relation in item (c).", "The analyticity in 
item (1) implies (d).", "This completes the proof.", "As the summation in (REF ) is finite, we have $E^{\\mathrm {BCC}}_{\\varepsilon ,\\rho ,N}(z_0) =\\sum _{s=2}^\\infty \\sum _{\\ell \\in A^{\\omega _6}_{\\varepsilon ,\\rho ,N}}{\\mathcal {E}}_{\\mathrm {BCC}}^{(s)}\\left(\\frac{d_1}{\\ell d_1 - z_0},\\frac{d_1}{\\overline{\\ell d_1 - z_0}}\\right).$ In particular, we have the following theorem for the “principal part” of the stress energy.", "Theorem 4.14 Suppose that $\\rho $ is sufficiently large for a given $\\varepsilon \\displaystyle {\\left(<\\frac{d_3}{2}\\right)}$ so that $C^{{\\mathrm {BCC}}(0)}_{\\varepsilon ,\\rho ,\\mathrm {III}} \\subset C^{{\\mathrm {BCC}}(0)}_{\\rho ,\\mathrm {II}}$ .", "Let $A^{\\omega _6}_{\\rho ,N} :=A^{\\omega _6}_{\\varepsilon ,\\rho ,N}$ .", "Then the principal part of the stress energy $E_{\\rho ,N}(z_0)$ , defined by $E^{{\\mathrm {BCC}}(\\mathrm {p})}_{\\rho ,N}(z_0) &:=&\\sum _{\\ell \\in A^{\\omega _6}_{\\rho ,N}}{\\mathcal {E}}_{\\mathrm {BCC}}^{(2)}\\left(\\frac{d_1}{\\ell d_1 - z_0},\\frac{d_1}{\\overline{\\ell d_1 - z_0}}\\right) \\\\&=& \\frac{1}{384\\pi ^2} k_d\\sum _{\\ell \\in A^{\\omega _6}_{\\rho ,N}}\\left[\\frac{d_1^4}{|\\ell d_1 -z_0|^2}\\right],$ is given by the truncated Epstein-Hurwitz zeta function (see Appendix), $E^{{\\mathrm {BCC}}(\\mathrm {p})}_{\\rho ,N}(z_0)= \\frac{1}{384\\pi ^2}k_d d_1^2\\zeta _{\\rho , N}^{\\omega _6}(2, -z_0/d_1).$ It is noted that this theorem is obtained due to the properties of $\\displaystyle {\\frac{1}{3}{\\mathbb {Z}}[\\omega _6]}$ , cf.", "Remark REF .", "By Proposition REF (2) (d), we can estimate each of the other terms appearing in the power series expansion (REF ) by the truncated Epstein-Hurwitz zeta function for Eisenstein integers as follows.", "Proposition 4.15 For each $s \\ge 3$ , there exists a positive constant $M_s^{\\prime }$ such that $\\sum _{\\ell \\in A^{\\omega _6}_{\\rho ,N}}{\\mathcal {E}}_{\\mathrm {BCC}}^{(s)}\\left(\\frac{d_1}{\\ell d_1 - z_0},
\\frac{d_1}{\\overline{\\ell d_1 - z_0}}\\right) \\le M_s^{\\prime } \\zeta _{A_{\\rho , N}^{\\omega _6}}^{\\omega _6}(s, -z_0/d_1).$" ], [ "Discussion", "In this paper, we investigated the screw dislocations of the SC lattice and the BCC lattice using number theoretic descriptions in terms of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]$ and the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ .", "As mentioned in Remark REF , using the properties of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ , e.g., Lemmas REF and REF , we obtain a simple description of the stress energy of the screw dislocation in the finite region outside the core region.", "It reflects the symmetry of the screw dislocations.", "It is quite natural to investigate the symmetry of a mathematical object using algebraic language.", "Without the representation of the dislocation in terms of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ , it would be very difficult to obtain this result because the dislocation in the BCC lattice is very complicated.", "Even for the core region, we can investigate it as in Subsection REF using the properties of ${\\mathbb {Z}}[\\omega _6]$ .", "Our description is natural even when we consider analytic properties such as the energy minimum point of the screw dislocations, because the symmetry is built into the descriptions.", "We could estimate the dislocation at the meso-scopic scale because the stress energy $E_{\\mathrm {total}}$ is given by $E_{\\mathrm {total}}=E_{\\mathrm {core}}+E_{\\mathrm {meso}}.$ The effect in the core region should be investigated by first-principles computations, but the meso-scopic energy cannot be obtained in that way.", "Even though the estimation requires more precise investigation because some parameters remain, we have a formula to evaluate the meso-scopic energy.", "It is noted that the core energy is determined by the local data whereas the meso-scopic energy is determined by the meso-scopic data.", "Figure: The graph
of $\\zeta _{\\rho ,N}^\\tau (2,0)$ vs.", "$\\log N$ for $\\rho =5.1$ .", "The energy of the meso-scale essentially diverges and thus it is important to determine the cut-off parameter $N$ .", "Let $\\zeta _{\\rho , N}^\\tau (s, z_0) :=\\zeta _{A_{\\rho , N}^\\tau }^\\tau (s, z_0) $ .", "As we show the behaviors of the $\\zeta _{\\rho ,N}^\\tau (2,0)$ for $\\tau = {\\sqrt{-1}}$ and $\\omega _6$ in Figure REF , they are approximated well by the logarithmic function.", "This is natural from the continuum theory, in which the dislocation energy $E_{\\mathrm {total}}(R)$ in the inner region $\\lbrace z \\in {\\mathbb {C}}\\ | |z-z_0|<R\\rbrace $ is written as a logarithmic function of the radius from the dislocation line; $E_{\\mathrm {total}}(R) \\propto \\log R$ .", "Further, we show the density of $\\zeta _{\\rho ,N}^\\tau (2,x+y\\tau ^{\\prime })$ as in Figure REF by numerical computations; the region of $z_0/d$ is divided into $20 \\times 20$ blocks.", "The aspects of these regions differ, though the differences are not large due to the divergent properties of the logarithmic function.", "Figure: The graph of $\\zeta _{\\rho ,N}^\\tau (2,x+y\\tau ^{\\prime })$ for $(\\rho ,N)=(7.2,75)$ : (a) $\\zeta _{\\rho ,N}^{\\sqrt{-1}}(2,x+y{\\sqrt{-1}})$ with gray scale: black = 14.664, white = 14.779, and (b) $\\zeta _{\\rho ,N}^{\\omega _6}(2,x+y\\omega _6)$ with gray scale: black = 16.061, white = 16.907.", "As we computed for the double dislocations case in the SC lattice in the previous work [16], they are described well by the Green function in the statistical field theory, like vortices, as in [6], [22].", "In the computations of the Green function, there appears the quadratic form $k \\ell \\in {\\mathbb {C}}$ modulo $2\\pi {\\mathbb {Z}}$ , where $k \\in {\\mathbb {Q}}(\\tau ) \\pi /d$ and $\\ell \\in {\\mathbb {Z}}(\\tau )d$ for $d=a$ or $d=d_1$ .", "These computations are very
crucial in quadratic number theory [41].", "If the distance between the dislocations is large enough, the behavior of the dislocations is determined by the continuum theory.", "Otherwise, the prime numbers in the Gauss integers or the Eisenstein integers (Gauss primes or Eisenstein primes) might have effects on the configurations of the dislocations if the meso-scopic energy plays a crucial role in the total energy." ], [ "Conclusion", "In this paper, we have presented a discrete investigation of the screw dislocations of the SC lattices and the BCC lattices in terms of elementary number theory.", "It is well-known that the two-dimensional lattices in the SC lattice perpendicular to the $(0,0,1)$ -direction, and in the BCC lattices perpendicular to the $(1,1,1)$ -direction, are described in terms of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]$ and the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ respectively.", "Since the Burgers vectors in the screw dislocation are the $(0,0,1)$ and the $(1,1,1)$ directions for the SC and the BCC lattices respectively, we use these facts and show the following: The displacements caused by the screw dislocations are expressed by functions of the Gauss integers ${\\mathbb {Z}}[{\\sqrt{-1}}]$ in the SC lattice case, and of the Eisenstein integers ${\\mathbb {Z}}[\\omega _6]$ in the BCC lattice case respectively, as in Propositions REF and REF .", "As mentioned in Remarks REF and REF , the cyclic groups ${\\mathfrak {C}}_4$ and ${\\mathfrak {C}}_6$ act on the dislocations via ${\\mathbb {Z}}[{\\sqrt{-1}}]$ and ${\\mathbb {Z}}[\\omega _6]$ respectively, and the action plays important roles in the evaluation of the stress energy of the dislocations in these lattices as in Remarks REF and REF .", "Due to the symmetry, we explicitly evaluate the stress energy density of the screw dislocations at the meso-scopic scale in terms of the truncated Epstein-Hurwitz zeta functions as in Propositions REF and REF .", "We remark that
without number theoretic descriptions, it is difficult to obtain these expressions of the energy in terms of the truncated Epstein-Hurwitz zeta functions.", "With knowledge of elementary number theory, we can explicitly evaluate the leading term of the stress energy density at the meso-scopic scale using the truncated Epstein-Hurwitz zeta functions as in Theorems REF and REF , and Figures REF and REF in the Discussion.", "They show the meso-scopic contributions of the stress density in the screw dislocations.", "As mentioned in the Introduction, since crystal lattices retain high symmetries even with dislocations, we should investigate the dislocations by considering these symmetries.", "The number theoretic approach is a practical tool to describe their symmetries, translations and rotations.", "We demonstrated that the number theoretic approach reveals the properties of the dislocations and recovers the stress energy in the continuum picture using the Epstein-Hurwitz zeta functions.", "Further, since the Gauss integers, the Eisenstein integers and the Epstein-Hurwitz zeta functions have interesting properties, we might find more crucial phenomena by combining them with analytic considerations in the future.", "Thus we expect that our method sheds light on novel investigations on dislocations."
], [ "Acknowledgments", "The author thanks all those who participated in the problem session “Mathematical description of disordered structures in crystal” in the Study Group Workshop 2015 held in Kyushu University and in the University of Tokyo during July 29–August 4, 2015, and the participants in the “IMI workshop II: Mathematics of Screw Dislocation”, September 1–2, 2016, the “IMI workshop I: Mathematics in Interface, Dislocation and Structure of Crystals”, August 28–30, 2017, and the “IMI workshop I: Advanced Mathematical Investigation of Screw Dislocation”, September 10–11, 2018, held in the Institute of Mathematics for Industry (IMI), Kyushu University, especially Shun-ichi Amari, Toshikazu Sunada, Tetsuji Tokihiro, Kenji Higashida, Hiroyuki Ochiai, and Kazutoshi Inoue for valuable discussions and comments.", "He is also grateful to the authors of [16], Hiroyasu Hamada, Junichi Nakagawa, Osamu Saeki and Masaaki Uesaka, for helpful discussions and comments.", "The author has been supported by JSPS KAKENHI Grant Number 15K13438 and by Takahashi Industrial and Economic Research Foundation 2018-2019, 08-003-181.", "He also thanks the anonymous referees for critical and helpful comments."
], [ "Two-dimensional lattices and the Epstein-Hurwitz zeta function", "We regard the two-dimensional lattice $L_{(a_1,a_2)}$ as a free ${\\mathbb {Z}}$ -module, $L_{(a_1,a_2)} = {\\mathbb {Z}}a_1 + {\\mathbb {Z}}a_2 (\\subset {\\mathbb {R}}^2),$ for unit vectors $a_1, a_2\\in {\\mathbb {C}}$ , where $a_1$ and $a_2$ are linearly independent.", "It is obvious that the lattice has a unit cell.", "When we consider the classification of $L_{(a_1,a_2)}$ , or its moduli space (its parameter space), it is natural to introduce the normalized lattice $L_{\\tau } = {\\mathbb {Z}}+ {\\mathbb {Z}}\\tau ,$ for $(1, \\tau :=a_2/a_1)$ .", "We assume $\\tau \\in {\\mathbb {H}}:=\\lbrace x+{\\sqrt{-1}}y\\in {\\mathbb {C}}\\ |\\ y > 0\\rbrace $ without loss of generality.", "However, there is an ambiguity as to which vectors are regarded as the unit vectors.", "There is an action of $\\mathrm {SL}(2, {\\mathbb {Z}})$ as an automorphism on $L_{\\tau }\\times L_{\\tau }$ ; for $(\\ell _1, \\ell _2)$ and ${g:=\\begin{pmatrix} a & b\\\\ c& d\\end{pmatrix} \\in \\mathrm {SL}(2, {\\mathbb {Z}})}$ , $\\displaystyle {g (\\ell _1, \\ell _2)= {}^t(g \\ {}^t(\\ell _1, \\ell _2))=(a\\ell _1+b \\ell _2 \\tau , c\\ell _1+d \\ell _2 \\tau )}$ so that the area of the parallelogram generated by $\\ell _1$ and $\\ell _2$ is preserved.", "Here, for the parallelogram generated by $z_1=x_1+y_1{\\sqrt{-1}}$ and $z_2=x_2+y_2{\\sqrt{-1}}$ , its area is equal to $x_1 y_2- x_2 y_1$ .", "Thus we regard every element $g (1,\\tau )$ in $\\mathrm {SL}(2, {\\mathbb {Z}})(1,\\tau )=\\lbrace g (1,\\tau )\\ | \\ g \\in \\mathrm {SL}(2, {\\mathbb {Z}})\\rbrace $ as a unit vector in $L_{\\tau }$ .", "Therefore the Möbius transformation (for $g\\in \\mathrm {SL}(2, {\\mathbb {Z}})$ , $g (z_1:z_2):=(a z_1 + b z_2: c z_1 + d z_2)$ ) is also introduced, which is denoted by $\\mathrm {PSL}(2, {\\mathbb {Z}})$ .", "By regarding $g\\tau := g(1:\\tau )$ , it induces a natural group action of $\\mathrm {PSL}(2, {\\mathbb {Z}})$ on
${\\mathbb {H}}$ .", "The fundamental domain as the moduli of $L_{\\tau }$ turns out to be ${\\mathbb {H}}/\\mathrm {PSL}(2, {\\mathbb {Z}})$ .", "The following are well-known facts, see e.g. [26]: Lemma 1.1 For a point $\\tau \\in {\\mathbb {H}}/\\mathrm {PSL}(2, {\\mathbb {Z}})$ , the stabilizer subgroup $G_\\tau $ of $\\mathrm {SL}(2,{\\mathbb {Z}})$ , $G_\\tau :=\\lbrace g \\in \\mathrm {SL}(2, {\\mathbb {Z}}) | g (1,\\tau ) = (1,\\tau )\\rbrace $ , becomes a cyclic group ${\\mathfrak {C}}_n:=\\lbrace t^\\ell \\ |\\ \\ell = 0, 1, \\ldots , n-1, \\ t^n= t^0\\rbrace $ of order $n$ : for $\\tau ={\\sqrt{-1}}=\\omega _4$ , $G_\\tau = {\\mathfrak {C}}_4$ ; for $\\tau = \\omega _6$ , $G_\\tau = {\\mathfrak {C}}_6$ ; and otherwise, $G_\\tau = {\\mathfrak {C}}_2$ , where $\\omega _p:=\\mathrm {e}^{2\\pi {\\sqrt{-1}}/p}$ .", "In this paper, both $\\omega _4$ and $\\omega _6$ play a crucial role.", "Let ${\\mathbb {Z}}[\\tau ]:=\\lbrace \\ell _1 + \\ell _2 \\tau \\ | \\ \\ell _1, \\ell _2 \\in {\\mathbb {Z}}\\rbrace $ , regarded as a discrete subset of ${\\mathbb {R}}^2$ and ${\\mathbb {C}}$ .", "The set of the Gauss integers is denoted by ${\\mathbb {Z}}[{\\sqrt{-1}}]$ and the set of the Eisenstein integers is denoted by ${\\mathbb {Z}}[\\omega _6]={\\mathbb {Z}}[\\omega _3]$ for $\\omega _3=\\omega _6^2$ , noting $\\omega _3+1=\\omega _6$ .", "The truncated Epstein-Hurwitz zeta function of $\\tau \\in {\\mathbb {H}}$ , $\\zeta _{A}^\\tau (s, z_0)$ , is defined by [40] $\\zeta _{A}^{\\tau }(s, z_0) := \\sum _{\\ell \\in A}\\frac{1}{(|\\ell +z_0|^2)^{s/2}},$ where $z_0:=x_0+y_0 {\\sqrt{-1}}\\in {\\mathbb {C}}$ and $A$ is a subset of ${\\mathbb {Z}}[\\tau ]$ .", "Shigeki Matsutani Graduate School of Natural Science and Technology, Kanazawa University Kakuma Kanazawa, 920-1192, JAPAN [email protected]" ] ]
1906.04332
[ [ "Discovery of ST1 centers in natural diamond" ], [ "Abstract The ST1 center is a point defect in diamond with bright fluorescence and a mechanism for optical spin initialization and readout.", "The center has impressive potential for applications in diamond quantum computing as a quantum bus to a register of nuclear spins.", "This is because it has an exceptionally high readout contrast and, unlike the well-known nitrogen-vacancy center, it does not have a ground state electronic spin that decoheres the nuclear spins.", "However, its chemical structure is unknown and there are large gaps in our understanding of its properties.", "We present the discovery of ST1 centers in natural diamond.", "Our experiments identify interesting power dependence of the center's optical dynamics and reveal new electronic structure.", "We also present a theory of its electron-phonon interactions, which we combine with previous experiments, to shortlist likely candidates for its chemical structure." ], [ "Rate Equation Fitting Procedure", "We investigated the photodynamics of the ST1 center using the second-order photon correlation, $g^{(2)}$ , measured at different excitation powers.", "The recorded coincidence rate $c(t)$ is first normalized according to the formula $C_N(t)= c(t)/(N_1 N_2 w T)$ , where $N_{1,2}$ are the counts on each APD, $w$ is the bin width and $T$ is the total signal accumulation time.", "The $g^{(2)}(t)$ is obtained from the normalized coincidence rate $C_N(t)$ as $g^{(2)}(t) = (C_N(t)-(1-{\\rho }^2))/{\\rho }^2$ , where $\\rho $ is the signal-to-background ratio.", "The experimental results were fitted with the function $\\begin{split}1-\\sum _{i=1}^{4}\\alpha _i\\:e^{-t/\\tau _i}\\end{split}$ where the $\\alpha _i$ and $\\tau _i$ are fit parameters.", "The power dependence of these parameters is shown in fig:decayrates.", "Figure: Dependence of the $\\tau _i$ (a) and $\\alpha _i$ (b) parameters as a function of the excitation power.", "Points are fits to
the measured $g^{(2)}$ functions using eq:g2fitfn." ], [ "Five level system: Theoretical model", "The analytic form of the transient solution of the system described in eq.", "(2) is found using the Laplace transform.", "The autocorrelation function is written as [1] $\\begin{split}\\frac{\\text{S}_{\\text{1}}(\\tau )}{\\text{S}_{\\text{1}}(\\infty )} &=1-\\sum _{i=1}^{4}e^{-\\lambda _i t} \\frac{\\lambda _j\\:\\lambda _k\\:\\lambda _l}{k_0\\:k_-\\:k_+}\\frac{\\left(\\lambda _i + k_0 \\right)\\left(\\lambda _i + k_- \\right)\\left(\\lambda _i + k_+ \\right)}{(\\lambda _i-\\lambda _j) (\\lambda _i-\\lambda _k)(\\lambda _i-\\lambda _l)}\\end{split}$ where $ \\lambda _i $ are the roots of the characteristic equation of the form $A x^4+B x^3+ C x^2+D x +E =0 $ with $\\begin{split}A &=1 \\\\B &= k_{ex}+k_f+ 3 k_{ISC}+k_0+k_-+k_+\\\\C &=(k_{ex}+k_f+ 3 k_{ISC})(k_0+k_-+k_+)\\\\&\\qquad +3 k_{ex} k_{ISC}+ k_0 k_-+k_0 k_++k_- k_+\\\\D &= (k_{ex}+k_{f}+3 k_{ISC})(k_0 k_-+k_0 k_++k_- k_+)\\\\&\\qquad +2 k_{ex} k_{ISC}(k_0+k_-+k_+)+k_0\\:k_-\\:k_+\\\\E &=(k_{ex}+k_{f}+3 k_{ISC})(k_0\\:k_-\\:k_+)\\\\&\\qquad + 2 k_{ex} k_{ISC}(k_0 k_-+k_0 k_++k_- k_+)\\end{split}$ Additionally, we define the rate of detected photons as $\\begin{split}R &= \\frac{k_{f}\\:k_{ex}\\:k_0\\:k_-\\:k_+\\:\\eta }{E}\\end{split}$ where $\\eta $ is the collection efficiency of the optical setup.", "Using Vieta's formula [2], we can relate the coefficients of the polynomial to sums and products of its roots.", "$\\begin{split}-B &=\\lambda _1+\\lambda _2+\\lambda _3+\\lambda _4\\\\C &=(\\lambda _1 \\lambda _2)+(\\lambda _1 \\lambda _3)+(\\lambda _1 \\lambda _4)\\\\&\\qquad +(\\lambda _2 \\lambda _3)+(\\lambda _2 \\lambda _4)+(\\lambda _3 \\lambda _4)\\\\-D &=(\\lambda _1 \\lambda _2 \\lambda _3)+(\\lambda _1 \\lambda _2 \\lambda _4)+(\\lambda _1 \\lambda _3 \\lambda _4)\\\\&\\qquad +(\\lambda _2 \\lambda _3 \\lambda _4)\\\\E &=\\lambda _1\\:\\lambda _2\\:\\lambda _3\\:\\lambda _4\\end{split}$" ], [ "Extraction
of $k_i$ parameters", "Using REF and REF , the rates $ k_{T_+}$ , $k_{T_-} $ and $k_{T_0} $ can be expressed as a combination of the decay rates and their pre-exponential terms.", "The extracted triplet depopulation rates are shown in fig. 4 (d).", "After some substitutions, $k_{ex}$ can be expressed as $\\begin{split}k_{ex}^2&- B_n\\:k_{ex} + C_n + R_n =0\\\\\\text{where,}\\\\B_n &=B - k_0-k_--k_+ \\\\C_n &=C - (k_0 k_-+k_0 k_++k_- k_+)\\\\&\\qquad -B_n(k_0+k_-+k_+) \\\\R_n &=\\frac{R\\:E}{ k_0\\:k_-\\:k_+ \\eta }\\end{split}$ Thus $k_{ex}$ can be written as $\\begin{split}k_{ex}=\\frac{1}{2}\\left(B_n-\\sqrt{{B_n}^2 - 4 (C_n+R_n)}\\right)\\end{split}$ The extracted values of $k_{ex}$ as a function of the measured excitation power are plotted in fig. 4 (e).", "The other rates are expressed similarly, $\\begin{split}k_{f}=\\frac{R_n}{k_{ex}},\\quad k_{ISC}=\\frac{C_n}{k_{ex}}\\end{split}$ The observed intensity dependence of $k_{ISC}$ is phenomenologically included in the rate equation as $\\begin{split}\\dot{\\text{S}}_\\text{0} &= -k_{ex} \\text{S}_\\text{0} + k_f \\text{S}_\\text{1} + k_0 \\text{T}_0 + k_- \\text{T}_- + k_+ \\text{T}_+ \\\\\\dot{\\text{S}}_\\text{1} &= k_{ex} \\text{S}_\\text{0} - k_f \\text{S}_\\text{1} - k_{ISC} \\text{S}_\\text{1} - k_{ex}\\:\\beta \\:\\text{S}_\\text{1} \\\\\\dot{\\text{T}}_+ &= \\frac{k_{ISC}}{3} \\text{S}_\\text{1} - k_+ \\text{T}_+ \\\\\\dot{\\text{T}}_- &= \\frac{k_{ISC}}{3} \\text{S}_\\text{1} - k_- \\text{T}_- \\\\\\dot{\\text{T}}_0 &= \\frac{k_{ISC}}{3} \\text{S}_\\text{1} - k_0 \\text{T}_0 + k_{ex}\\:\\beta \\:\\text{S}_\\text{1}\\end{split}$ For simplicity, we have added the intensity-dependent population transfer pathway only to the long-lived triplet sub-level.", "Using the modified rate equations, we obtain $k_{ISC}=\\frac{C_n}{k_{ex}}+\\beta k_{ex}$ , where $\\beta $ describes the absorption cross section.", "This second absorption cross section is estimated from the slope of $k_{ISC}$ ."
], [ "Optical Dipole and Spin-Spin Tensor Orientations", "Here we consider all possible HOMO and LUMO pairs that can be formed from the vacancy-centered MOs in $C_{2v}$ or $C_{1h}$ symmetry.", "By evaluating the optical dipole moment of the ground to first excited singlet transition and the spin-spin tensor components of the intermediate triplet, we identify those pairs that are consistent with experiment.", "To begin, let the HOMO be denoted as $a$ and the LUMO as $b$ .", "The ground configuration is $a^2$ and the first excited configuration is $ab$ .", "The corresponding orbital states are: $\\phi _g = aa$ and $\\phi _e = ab$ , respectively.", "The term states of these configurations are formed by taking direct products of the orbital state with their associated spin states and then applying a Slater determinant to enforce electron interchange anti-symmetry.", "The resultant electronic states, $\\Phi _{S,m_s;\\Gamma }$ , have well defined total spin $S$ , spin projection $m_s$ , and orbital symmetry.", "Since the defect has low symmetry, like $C_{1h}$ and $C_{2v}$ , the orbital symmetry is given by the simple product of the symmetries of its constituent orbital states.", "These are defined explicitly as $\\Phi _{0,0;g} = \\mathinner {\\left|{a\\bar{a}}\\right)}$ , $\\Phi _{0,0;e} = \\frac{1}{\\sqrt{2}}\\left[\\mathinner {\\left|{a\\bar{b}}\\right)} - \\mathinner {\\left|{\\bar{a}b}\\right)}\\right]$ , and $\\Phi _{1,m_s;e} = \\left\\lbrace \\begin{array}{ll}\\mathinner {\\left|{\\bar{a}\\bar{b}}\\right)} & m_s=-1 \\\\\\frac{1}{\\sqrt{2}}\\left[\\mathinner {\\left|{a\\bar{b}}\\right)} + \\mathinner {\\left|{\\bar{a}b}\\right)}\\right] & m_s=0 \\\\\\mathinner {\\left|{ab}\\right)} & m_s=1\\end{array}\\right.$ , where $\\mathinner {\\left|{\\cdot }\\right)}$ denotes a Slater determinant and a bar denotes spin-down occupation of an orbital.", "In this basis, we can exploit orbital symmetry to simplify evaluation of the transition dipole matrix elements.", "First, the multi-electron electric-dipole matrix elements can be written in terms of the single MOs as follows: $\\mathinner {\\left({\\Phi _{0,0;g}}\\right|}\\hat{d}\\mathinner {\\left|{\\Phi _{0,0;e}}\\right)} = \\frac{1}{\\sqrt{2}}\\mathinner {\\left({a\\bar{a}}\\right|}\\hat{d}\\left[\\mathinner {\\left|{a\\bar{b}}\\right)} - \\mathinner {\\left|{\\bar{a}b}\\right)}\\right] = \\frac{\\sqrt{2}}{2}\\langle a|\\hat{d}|b\\rangle $ , where $\\hat{d} = e(x \\hat{x} + y \\hat{y} + z \\hat{z})$ is the electric dipole operator and we have defined the ST1 coordinate system such that $\\hat{z}||[110]$ and $\\hat{x}$ is in the $z$ -symmetry plane of the defect.", "Now it suffices to estimate this matrix element for all unique HOMO and LUMO pairs.", "In
tab:selection-rules, we apply the symmetry selection rules for the electric dipole operator to determine the non-zero matrix elements for $C_{2v}$ and $C_{1h}$ , respectively.", "Table: The linear and quadratic operator symmetry selection rules for $\\langle A_1|\\hat{O}|\\Gamma \\rangle $ where $\\hat{O}$ is either the electric-dipole operator $\\hat{d}$ or the orbital spin-spin tensor $\\hat{D}_{ij}$ .", "In $C_{1h}$ , we cannot distinguish $x$ and $z$ orientated dipoles directly from symmetry.", "Thus, for some HOMO/LUMO pairs we need to directly estimate the matrix elements.", "We do this by expanding the MOs in terms of their atomic orbitals, neglecting orbital overlap, and applying geometric arguments to simplify the matrix elements.", "For example, consider the HOMO/LUMO pair $(a_1,b_1)$ .", "The dipole matrix element is $\\langle a_1|\\hat{d}|b_1\\rangle = e\\langle c_1 + c_2|x \\hat{x} + z \\hat{z}|c_1 - c_2\\rangle \\approx e\\left[(\\langle x\\rangle _1-\\langle x\\rangle _2)\\hat{x} + (\\langle z\\rangle _1 -\\langle z\\rangle _2)\\hat{z}\\right]$ , neglecting the cross terms $\\langle c_1|x|c_2\\rangle $ and $\\langle c_1|z|c_2\\rangle $ , where $\\langle z\\rangle _1$ is the expectation value of $z$ with respect to the $c_1$ atomic orbital.", "As depicted in fig:geom, the $c_1$ and $c_2$ orbitals are both centered in the $xz$ -plane at the same mean $x$ position and anti-symmetric mean $z$ positions.", "Hence, the orbital integrals are related such that: $\\langle c_1|x|c_1\\rangle =\\langle c_2|x|c_2\\rangle $ and $\\langle c_1|z|c_1\\rangle =-\\langle c_2|z|c_2\\rangle $ .", "This means that the $x$ -component of the integral cancels out, leaving $\\langle a_1|\\hat{d}|b_1\\rangle \\approx 2 e\\langle z\\rangle _1 \\hat{z}$ .", "An analogous argument applied to the other ambiguous HOMO/LUMO pair, $(a_1^{\\prime },b_1)$ , yields $\\langle a_1^{\\prime }|\\hat{d}|b_1\\rangle \\approx 2 e\\langle z\\rangle _1 \\hat{z}$ .", "Figure: A sketch of the nuclear geometry and atomic orbitals of the defect viewed along the $\\hat{y}$ direction.", "The vacancy $V$ and the atomic orbitals $c_1$ and $c_2$ are in the $xz$ -plane.", "The $c_3$ orbital lies in the out-of-plane direction.", "The mean position of the electron density is marked by a black circle and its in-plane positions are labelled by $\\langle x\\rangle $ and $\\langle z\\rangle $ .", "The variance of $c_1$ along the
orientation of the bond is marked by $\\Delta $ .", "Next, to calculate the orientation of the major spin axis of the defect, we need to evaluate the components of the triplet level's spin-spin interaction.", "The zero-field spin Hamiltonian of the triplet is $H = \\mathbf {S}\\cdot \\mathbf {D}\\cdot \\mathbf {S}$ where $\\mathbf {S}=S_x \\hat{x}+ S_y \\hat{y}+S_z \\hat{z}$ are the $S=1$ spin operators and $\\mathbf {D}$ is the spin-spin tensor with components $D_{ij}=\\mathinner {\\left({\\phi _e}\\right|}\\hat{D}_{ij}\\mathinner {\\left|{\\phi _e}\\right)}$ and $\\hat{D}_{ij} = C\\left(\\frac{\\delta _{ij}}{r^3} - \\frac{3 r_i r_j}{r^5}\\right)$ where $C=3\\mu _0g_e^2\\mu _B^2/16\\pi h$ , $\\mu _0$ is the vacuum permeability, $g_e\\approx 2$ is the free electron g-factor and $\\mu _B$ is the Bohr magneton.", "The vector distance between the two electrons is $\\mathbf {r} = \\mathbf {r}_2-\\mathbf {r}_1$ , the magnitude of this vector is defined as $r$ , and $r_i$ are its components in the $x$ , $y$ , or $z$ directions.", "We apply symmetry selection rules to simplify the evaluation of these tensor components in tab:selection-rules.", "In $C_{2v}$ , the only quadratic operators that transform as the trivial representation are $x^2$ , $y^2$ , and $z^2$ .", "These, in addition to $xz$ , form the basis of quadratic operators in $C_{1h}$ that transform as the trivial representation.", "Hence, it suffices to calculate the two electron integrals ${D_{xx}}$ , ${D_{yy}}$ , ${D_{zz}}$ and ${D_{xz}}$ for each HOMO/LUMO pair.", "As we did with the dipole moment, we expand the non-zero terms in the basis of atomic orbitals.", "The resulting expression is still too difficult to integrate directly.", "The dominant contributions to the tensor component can be evaluated by extending the semi-classical approximation introduced in [3].", "The approximation corresponds to the replacement $\\frac{\\delta _{ij}}{r^3} - \\frac{3 r_i r_j}{r^5} \\rightarrow \\frac{\\delta _{ij}}{|\\langle \\mathbf {r}_1\\rangle -\\langle \\mathbf {r}_2\\rangle |^3} - \\frac{3\\left(\\langle r_i\\rangle \\langle r_j\\rangle - \\Delta _{ij}\\right)}{|\\langle \\mathbf {r}_1\\rangle -\\langle \\mathbf {r}_2\\rangle |^5}$ where $\\alpha , \\beta $ are atomic orbitals, such that $\\langle \\mathbf {r}_1\\rangle =\\langle \\alpha |\\mathbf {r}_1|\\alpha \\rangle $ and $\\langle \\mathbf {r}_2\\rangle =\\langle \\beta |\\mathbf {r}_2|\\beta \\rangle $ are the mean positions of the electrons occupying the $\\alpha $ and $\\beta $ sp$^3$ orbitals.", "The first term can be interpreted physically as the interaction between electrons at their mean positions.", "The $\\langle r_i\\rangle \\langle r_j\\rangle $ component of the second term accounts for the mean relative positions of the electrons.", "This second term also includes $\\Delta _{ij}=\\langle r_i r_j\\rangle -\\langle r_i\\rangle \\langle r_j\\rangle $ which is the co-variance in the electron position along the $ij$ axis.", "These approximate terms can be estimated by applying geometric arguments on a case by case basis for each of the HOMO/LUMO pairs.", "We can use the same arguments as the electric dipole case to simplify the mean positions.", "We estimate $\\Delta _{ij}$ by considering the electron distribution of each sp$^3$ bond.", "Each atomic orbital has a major axis along the line between the vacancy and the orbital's center.", "We define two minor axes perpendicular to the major axis, one in-plane and the other out of plane.", "With these definitions, we note that the co-variance in the major axis of each orbital is much larger than its co-variance in the minor axes.", "Applying this to our example, the $(a_1,b_1)$ pair, the variance $\\Delta _{yy}\\approx 0$ and $\\Delta _{xz}\\approx \\Delta _{xx}\\approx \\Delta _{zz}$ .", "Accordingly, the spin-spin tensor takes the form $\\mathbf {D}\\approx C\\begin{pmatrix} A-B & 0 & B \\\\ 0 & A & 0 \\\\ B & 0 & -2A+B\\end{pmatrix}$ where $A=4/\\langle r_z\\rangle ^3$ and $B=12 \\Delta _{xz}/\\langle r_z\\rangle ^5$ .", "The off-diagonal terms imply that the major spin axis is not exactly in the [110] direction.", "Rotating the spin-spin tensor around $\\hat{y}$ gives us the offset of the spin-quantization major axis, i.e.", "finding a rotation such that $M(\\theta )^{-1}\\mathbf {D}M(\\theta )$ is diagonal.", "Solving this equation and Taylor expanding the result, we find that (to first order) $\\theta \\approx B/(2 B-3 A)$ .", "Given that the variance is much smaller than the mean, $B\\ll A$ , we claim that $\\theta \\approx 0$ .", "With this result, we can conclude that the major spin axis and electric dipole of the
$(a_1,b_1)$ pair are co-aligned in the $[110]$ ($\\hat{z}$ ) direction, and the spin minor axis is in the $\\hat{x}$ direction.", "Table: A table of the properties of each HOMO/LUMO pair.", "It includes the symmetry of the excited electronic state, the orientation of the dipole moment and the major spin quantization axis for each pair.", "The orientations of the electric-dipole moment and the spin-quantization axis for each of the HOMO/LUMO pairs are listed in tab:orientations.", "In the defect coordinate system, $\\hat{z}$ and $\\hat{y}$ are the inequivalent $[110]$ directions.", "Hence, the only HOMO/LUMO pairs which are consistent with experiment are $(a_1,b_1)$ and $(a_1^{\\prime },b_2)$ , because their electric-dipole moment and spin-quantization axis are in the $\\hat{z}$ and $\\hat{y}$ directions.", "Further, our estimates predict that in both configurations the electric-dipole moment and spin quantization axis are co-linear to first order." ] ]
1906.04385
[ [ "SymNet: Symmetrical Filters in Convolutional Neural Networks" ], [ "Abstract Symmetry is present in nature and science.", "In image processing, kernels for spatial filtering possess some symmetry (e.g.", "Sobel operators, Gaussian, Laplacian).", "Convolutional layers in artificial feed-forward neural networks have typically considered the kernel weights without any constraint.", "In this paper, we propose to investigate the impact of a symmetry constraint in convolutional layers for image classification tasks, taking our inspiration from the processes involved in the primary visual cortex and common image processing techniques.", "The goal is to assess the extent to which it is possible to enforce symmetrical constraints on the filters throughout the training process of a convolutional neural network (CNN) by modifying the weight update performed during the backpropagation algorithm and to evaluate the change in performance.", "The main hypothesis of this paper is that the symmetrical constraint reduces the number of free parameters in the network, and it is able to achieve near-identical performance to the modern methodology of training.", "In particular, we address the following cases: x/y-axis symmetry, point reflection, and anti-point reflection.", "The performance has been evaluated on four databases of images.", "The results support the conclusion that while random weights offer more freedom to the model, the symmetry constraint provides a similar level of performance while substantially decreasing the number of free parameters in the model.", "Such an approach can be valuable in phase-sensitive applications that require a linear phase property throughout the feature extraction process."
], [ "Introduction", "Convolutional Neural Networks (CNNs) are a class of Artificial Neural Networks (ANNs) commonly used in deep learning [1].", "Their prevalence today in computer vision tasks is unprecedented, and rightfully so, as they have demonstrated extraordinary ability in challenging pattern recognition tasks, most notably for object recognition [2], [3].", "In addition, recent results suggest that it is better to learn everything, with a shift from scale-invariant feature transform (SIFT) [4] based features to CNN based methods for instance retrieval applications.", "The trend is towards end-to-end feature learning and extraction approaches [5].", "While it has been shown that it is better to learn features, there is key evidence that pre-training helps with deep learning [6].", "In addition, the optimal choice of architecture, including the number of layers, feature maps, and convolutional layer settings, remains a challenge given the large number of architectures that have been proposed in recent years.", "Today, researchers from a variety of backgrounds, including neural engineering [7], robotics [8], chemistry [9], and astronomy [10], are employing CNNs to not only advance understanding within their own fields but to produce trained networks that may be viable for commercial use.", "This is evident in society as manufacturers of devices, such as for Internet of Things (IoT) applications, have begun to include highly specialized integrated circuits that are well adapted for CNN execution [11].", "As we move forward with the deployment of CNNs into embedded applications, it is necessary to examine the efficiency of CNNs in terms of the number of free parameters, as this has direct physical implications for embedded systems.", "CNNs are typically known to require a large number of labeled examples to capture the variability that exists across examples, in particular to provide features that are tolerant to different types of transformations, e.g.
"translation and rotation.", "For computer vision tasks, the architectures of CNNs are typically based on the principle of the human primary visual cortex (V1) [12], where the goal is to preprocess the input image to extract edges or to enhance some particular features.", "For instance, before CNNs demonstrated their superiority for feature extraction through transfer learning, it was common to consider Gabor filters to preprocess images by selecting a set of frequencies and orientations [13], [14], [15].", "Such an approach places a significant bias on the features due to the arbitrary choice of the frequencies and orientations.", "This paper investigates parameter reduction in CNNs by attempting to enforce a symmetrical constraint on the learned weights within convolutional layers of a network.", "Symmetry is ever present in many natural and scientific phenomena, including modern physics [16], and from the perspective of information representation, symmetry has the potential to reduce complexity by compacting it into lighter structures.", "Moreover, the rationale of this paper is rooted in the fact that many state-of-the-art spatial filtering techniques, for edge detection (e.g.", "Sobel, Prewitt, Gabor), smoothing (e.g.", "Gaussian filter), and image enhancement (e.g.", "Mexican Hat, Difference of Gaussian, Laplacian of Gaussian), have symmetrical properties that can be taken into account when training convolutional layers.", "For instance, the derivatives at high frequencies are useful for edge detection [17], [18].", "Since edges are known to be meaningful features, it is plausible that a CNN may eventually approximate a symmetrical filter in order to learn to classify the images it receives as input.", "To enforce a symmetrical constraint throughout the learning process, a change to the backpropagation algorithm is required so that it adheres to specific weight update algorithms implemented for several filters, each with a different form of symmetry.", 
"First, the proposed filters are initialized in such a way that the symmetrical properties are set.", "Second, the backpropagated errors are combined to satisfy the constraints of the filter kernels.", "A finite impulse response (FIR) filter is linear-phase if and only if its coefficients are symmetrical around the center coefficient.", "Symmetric filters therefore provide a linear phase, corresponding to the condition where the phase response of the filter is a linear function of frequency.", "With such a filter, all frequency components of the input image are shifted in time by the same constant amount (the group delay).", "There is therefore no phase distortion due to the time delay of frequencies relative to one another [19], [20].", "Such a property can be desirable, as filters without linear phase can introduce artifacts in images.", "The main contributions of this paper are: Three types of symmetrical 2D filters that can constrain the convolutional layers to extract specific types of features.", "These filters are linear phase and correspond to Type I (even-order, symmetric coefficients) and Type III (even-order, antisymmetric coefficients) FIR filters.", "A new way to reduce the number of free parameters in CNNs through sharing weights within a filter, which decreases the computational cost of the forward operation.", "The remainder of the paper is organized as follows.", "First, we begin by discussing relevant works in Section , then give brief descriptions of the forward and backward propagation procedures for CNNs in Section  in order to facilitate a clear foundation for our description of the symmetric filters that follows.", "Descriptions of the databases and the network architectures used for testing are covered in Sections  and  .", "The results are presented in Section  and their impact on CNNs is 
discussed in Section ." ], [ "Related Works", "Today's state-of-the-art convolutional neural networks are heavily influenced by the work of LeCun [21] and the neocognitron [22].", "In 1998, LeCun proposed the LeNet-5 architecture, which was able to successfully classify images from a large dataset of handwritten digits [23].", "This architecture combined convolutional layers, followed by pooling layers, and then terminated the network with fully connected layers.", "Most notably, LeCun introduced the use of the backpropagation algorithm to ConvNets, which allowed the weights of the entire network to be modified based on the error calculated at the output.", "While LeNet used average pooling in its architecture, average pooling was shown not to be as robust as max pooling in the HMAX model [24].", "The argument made was that average pooling would actually obscure features because the responses of simple cells were summed, while the max operation simply returned the strongest response and therefore had the best chance of detecting features.", "Additionally, average pooling was shown not to be scale-invariant, as the response strength after pooling is correlated with object size.", "One influential ConvNet architecture inspired by LeNet was AlexNet, which received critical acclaim for breaking the record at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [25].", "In addition to max pooling layers, AlexNet changed the activation function used by neurons in each layer.", "Previously, it was common to use a saturating non-linearity as the activation function, such as the hyperbolic tangent or sigmoid; however, AlexNet used the Rectified Linear Unit (ReLU) as its activation function, as this function provides a significant performance boost to the training of the networks.", "They claimed a six times speed improvement to reach the same error rate with a network using ReLU compared to one using the 
hyperbolic tangent.", "AlexNet reduced overfitting by utilizing dropout layers, where neurons in a given layer are randomly selected to have their weights zeroed, effectively changing the number of neurons in that layer [26], and by augmenting the data set with translated and reflected images in the training set.", "In the wake of AlexNet came the discovery that building networks with a larger number of layers increased the performance of the network.", "Work on Very Deep Convolutional Networks showed that improvements could be achieved by using small convolutions (3 $\\times $ 3) and increasing the network size to encompass 16-19 layers [27].", "However, increasing the number of layers increases the number of parameters to learn.", "GoogLeNet was an even deeper network utilizing 22 layers [28].", "It includes the inception module, in which multiple convolutions of different filter sizes are employed: 1 $\\times $ 1, 3 $\\times $ 3, and 5 $\\times $ 5 convolutions are computed in parallel.", "Additionally, a pooling operation is conducted within the module, in parallel with the other operations.", "The outputs of all the convolutions are then concatenated together along the depth dimension.", "However, performing these extra convolutions in parallel did not increase the number of parameters to learn, because a 1 $\\times $ 1 convolution is performed on the input to each larger convolution to reduce the dimensionality of the inputs.", "These works stress the importance of the architecture and the parameters related to the convolutional layers." 
], [ "CNN Forward Propagation", "Forward propagation for a convolutional layer in a CNN involves performing either a 2D convolution or a cross-correlation on the image input to the layer.", "Convolution and cross-correlation are similar processes and, from the perspective of the artificial neural network, are indistinguishable.", "Both operations involve taking a small matrix of weights (kernel/filter) of size $N_w \\times N_w$ , overlaying it on a section of an image, and summing the element-wise multiplication of the weights and the image intensities directly under the filter.", "The filter is typically translated across the entire image such that the operation has been executed at nearly every pixel.", "In order for the operation to be considered a convolution, it is required that the filter be rotated by 180° before applying it to the image; otherwise it is a cross-correlation, expressed as follows: $y(i,j) & = & \\sum \\limits _{i_1=-N_{half}}^{N_{half}} \\sum \\limits _{j_1=-N_{half}}^{N_{half}} w_{i_1,j_1} \\cdot x(i+i_1,j+j_1)$ where $N_{half}=\\left\\lfloor N_w/2 \\right\\rfloor $ .", "After a convolution is performed on an image of size $N_{in} \\times N_{in}$ with a filter of size $N_w \\times N_w$ , the resultant image shape can be computed with Eq.", "REF .", "$N_{out} & = & \\frac{N_{in} + 2N_P - N_w}{N_S} + 1$ where $N_P$ indicates how much padding is added to the image before convolution and $N_S$ is the stride taken by the filter [2].", "Convolution and cross-correlation can be executed by forming Toeplitz matrices from the filters and unrolling the image into a column vector.", "The filter is transformed into a Toeplitz matrix by inflating it with zeros until it becomes the same shape as the input image.", "Then this inflated filter is unrolled into a row vector.", "Shifted versions of the filter vector are then copied in as rows of a Toeplitz matrix.", "A matrix product between the Toeplitz representation of the filter 
and the unrolled image yields the result of the cross-correlation operation.", "Using the example image $\\mathbf {I}$ and filter $\\mathbf {W}$ , we can form a Toeplitz matrix from $\\mathbf {W}$ by first expanding it to the shape of $\\mathbf {I}$ and filling with zeros.", "Then, unroll $ \\mathbf {W_{expanded}}$ and copy shifted versions of it into a new matrix, which will be the Toeplitz matrix.", "The shape of the Toeplitz matrix for an image of shape $ N_1 \\times N_2$ and filter of size $N_w \\times N_w$ can be determined by Eq.", "REF $\\left( \\frac{N_1 + 2N_P - N_w}{N_S} + 1 \\right)\\left( \\frac{N_2 + 2N_P - N_w}{N_S} + 1 \\right) \\times \\left(N_1N_2\\right)$ The result $\\mathbf {R}$ of the cross-correlation is computed as follows: $\\mathbf {R} = \\mathbf {W_{toe}} \\mathbf {I}$ where $\\mathbf {I}$ is a column vector of the unrolled image and $\\mathbf {R}$ is a column vector, which needs to be reshaped back into a 2D matrix.", "Finally, the forward propagation equation for a convolutional layer is: $\\mathbf {o}^{l} = f_l \\left( \\mathbf {W_{toe}}^{l} \\mathbf {o}^{l-1} \\right) $ where $\\mathbf {o}^{l}$ is the output for layer $l$ , $f_l(\\cdot )$ is the activation function for layer $l$ , $ \\mathbf {W_{toe}}^{l}$ is the filter Toeplitz matrix (weight matrix) that connects layer $l-1$ to layer $l$ , and $\\mathbf {o}^{l-1}$ is the column vector of the output from layer $l-1$ ." 
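The equivalence between the sliding-window cross-correlation and the Toeplitz matrix product can be checked numerically. The following minimal sketch (function names are illustrative; stride 1 and no padding are assumed) builds both forms for a 5 × 5 input and a 3 × 3 filter:

```python
import numpy as np

def cross_correlate2d(x, w):
    """Direct cross-correlation: slide w over x, sum elementwise products."""
    n_in, n_w = x.shape[0], w.shape[0]
    n_out = n_in - n_w + 1                     # N_out with N_P = 0, N_S = 1
    y = np.zeros((n_out, n_out))
    for i in range(n_out):
        for j in range(n_out):
            y[i, j] = np.sum(w * x[i:i + n_w, j:j + n_w])
    return y

def toeplitz_matrix(w, n_in):
    """Rows are unrolled, shifted copies of the zero-inflated filter."""
    n_w = w.shape[0]
    n_out = n_in - n_w + 1
    rows = []
    for i in range(n_out):
        for j in range(n_out):
            expanded = np.zeros((n_in, n_in))  # inflate the filter with zeros
            expanded[i:i + n_w, j:j + n_w] = w
            rows.append(expanded.ravel())
    return np.array(rows)

x = np.arange(25.0).reshape(5, 5)
w = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])  # Sobel-x kernel
direct = cross_correlate2d(x, w)
via_toeplitz = (toeplitz_matrix(w, 5) @ x.ravel()).reshape(3, 3)
assert np.allclose(direct, via_toeplitz)
```

Both paths produce the same 3 × 3 output, matching the shape formula above.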
], [ "CNN Backpropagation", "Backpropagation for a convolutional layer in a CNN is separated into two steps: the error backpropagation and the weight update.", "In the error backpropagation step, we compute the error at a previous layer using the error at the current layer.", "For the computed error at a layer of the network: $\\mathbf {e}^l = \\begin{bmatrix} e^l_0 & e^l_1 & \\cdots & e^l_m \\end{bmatrix}$ We can backpropagate the error by multiplying $\\mathbf {e}^l$ by the filter Toeplitz matrix $\\mathbf { W_{toe} }^{l}$ that connects layer $l-1$ to layer $l$ and then performing an element-wise product with the derivative of the activated outputs of layer $l-1$ .", "$\\mathbf {e}^{l-1} = \\left( \\mathbf {e}^l \\mathbf {W_{toe}}^{l} \\right) \\cdot f^{\\prime }_{l-1}(\\mathbf {o}^{l-1})$ where $f^{\\prime }_{l-1}(\\cdot )$ is the derivative of the activation function for layer $l-1$ .", "In order to perform the weight update, we take the error for the current layer $\\mathbf {e}^l$ , reshape it into the same shape as the output of layer $l$ , and perform a cross-correlation with the input to the layer (i.e., the output of the previous layer $\\mathbf {o}^{l-1}$ ), which is reshaped into a 2D matrix.", "$\\Delta \\mathbf { W } = \\alpha \\left( \\mathbf {E}^l \\star \\mathbf { O }^{l-1} \\right)$ where $\\alpha $ is the learning rate, $\\mathbf {E}^l$ is the error vector $\\mathbf {e}^l$ reshaped to be the same shape as the output for layer $l$ , and $\\mathbf { O }^{l-1}$ is the output vector of the previous layer $\\mathbf {o}^{l-1}$ reshaped into the same shape as the output of layer $l-1$ .", "It is worth noting that if a stride greater than 1 ($N_S > 1$ ) was used in the forward convolution, then this must be accounted for in Eq.", "REF by appropriately extending $\\mathbf {E}^l$ with $N_S-1$ columns and rows of zeros inserted between each of its columns and rows." 
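The weight-update step, including the zero-insertion for strided convolutions, can be sketched as follows (a minimal illustration with hypothetical names; the toy error and output maps are placeholders, not values from the paper):

```python
import numpy as np

def weight_gradient(err, prev_out, lr, stride=1):
    """Sketch of Delta W = lr * (E^l cross-correlated with O^{l-1}).
    For stride > 1, the error map is first extended with stride-1 rows and
    columns of zeros between its entries, as described in the text."""
    if stride > 1:
        n = (err.shape[0] - 1) * stride + 1
        dilated = np.zeros((n, n))
        dilated[::stride, ::stride] = err
        err = dilated
    n_e, n_o = err.shape[0], prev_out.shape[0]
    n_w = n_o - n_e + 1                        # recovered filter size
    grad = np.zeros((n_w, n_w))
    for i in range(n_w):
        for j in range(n_w):
            grad[i, j] = np.sum(err * prev_out[i:i + n_e, j:j + n_e])
    return lr * grad

prev_out = np.ones((5, 5))   # toy reshaped O^{l-1}
err = np.ones((3, 3))        # toy reshaped E^l for a 3x3 output
dW = weight_gradient(err, prev_out, lr=0.1)   # 3x3 gradient for a 3x3 filter
```

The cross-correlation of a 3 × 3 error map with a 5 × 5 input recovers a gradient of the same shape as the 3 × 3 filter, as required.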
], [ "Linear phase FIR filters", "FIR filters are filters with a finite impulse response duration.", "An $r^{th}$ -order discrete FIR filter lasts $r+1$ time points; the number of taps is the same as the filter length.", "We denote by $N_b$ the size of the filter ($N_b=r+1$ ).", "The discrete convolution is expressed by: $y(n) & = & \\sum \\limits _{i=0}^{r} b_i \\cdot x(n-i)$ where $x$ and $y$ are the input and output signals, respectively; $r$ is the filter order, and $b_i$ represents the weight of the filter at time $i$ , $0 \\le i \\le r$ .", "Linear phase FIR filters are divided into four types: Type I (even-order, symmetric coefficients), Type II (odd-order, symmetric coefficients), Type III (even-order, antisymmetric coefficients), and Type IV (odd-order, antisymmetric coefficients).", "Types III and IV can be used to design differentiators [29], which can be used for edge detection.", "The symmetry of the impulse response is written as: $b_n = b_{N_b-1-n}$ (Type I and II), and the anti-symmetry is written as: $b_n = -b_{N_b-1-n}$ (Type III and IV).", "The parameters of the filter correspond to an even function centered on $(N_b-1)/2$ for Type I and II, and an odd function for Type III and IV.", "In this paper, we will focus on Type I and III as they correspond to kernel sizes that are typically used in the literature for setting the input windows of convolutional layers, e.g.", "3 $\\times $ 3, 5 $\\times $ 5.", "We denote by $A(\\omega )$ and $\\theta (\\omega )$ the amplitude response and the continuous phase function of the filter, respectively.", "A linear phase filter is defined by its frequency response: $H^f(\\omega ) & = & A(\\omega ) \\cdot e^{j\\theta (\\omega )}$ with $\\theta (\\omega )& = & -M\\omega + B$ where $j$ is the imaginary unit.", "FIR filters of Type I and length $N_b$ are defined as follows: $A(\\omega ) & = & b_M + 2\\sum \\limits _{n=0}^{M-1} b_n \\cdot \\cos ((M-n)\\omega ) \\\\\\theta (\\omega ) & = & -M\\omega \\nonumber $ where $M=(N_b-1)/2$ .", "For a filter of length 
5, the filter can be expressed by: $H^{f} & = & b_0 + b_1e^{-j\\omega } + b_2 e^{-2j\\omega } + b_1 e^{-3j\\omega } + b_0 e^{-4j\\omega } \\\\& = & e^{-2j\\omega } (2b_0 \\cdot \\cos (2\\omega )+ 2b_1 \\cdot \\cos (\\omega )+b_2) \\nonumber \\\\& = & A(\\omega )e^{j\\theta (\\omega )} \\nonumber $ with $\\theta (\\omega )=-2\\omega $ and $A(\\omega )=2b_0 \\cdot \\cos (2\\omega )+ 2b_1 \\cdot \\cos (\\omega )+b_2$ .", "FIR filters of Type III and length $N_b$ are defined as follows: $A(\\omega ) & = & 2\\sum \\limits _{n=0}^{M-1} b_n \\cdot \\sin ((M-n)\\omega ) \\\\\\theta (\\omega ) & = & -M\\omega + \\pi /2 \\nonumber $ where $M=(N_b-1)/2$ .", "For a filter of length 5, the filter can be expressed by: $H^{f} & = & b_0 + b_1e^{-j\\omega } - b_1 e^{-3j\\omega } - b_0 e^{-4j\\omega } \\\\& = & e^{-2j\\omega } e^{j\\pi /2} (2b_0 \\cdot \\sin (2\\omega )+2b_1 \\cdot \\sin (\\omega )) \\nonumber \\\\& = & A(\\omega )e^{j\\theta (\\omega )} \\nonumber $ with $\\theta (\\omega )=-2\\omega +\\pi /2$ and $A(\\omega )=2b_0 \\cdot \\sin (2\\omega )+2b_1 \\cdot \\sin (\\omega )$ ." 
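The linear-phase property of these two filter types can be verified numerically. In this sketch (coefficient values chosen arbitrarily for illustration), removing the factor $e^{-2j\omega}$ from a length-5 response leaves a purely real amplitude for Type I and a purely imaginary one for Type III, matching the closed forms above:

```python
import numpy as np

b0, b1, b2 = 0.5, -1.0, 2.0
h1 = np.array([b0, b1, b2, b1, b0])     # Type I: symmetric coefficients
h3 = np.array([b0, b1, 0.0, -b1, -b0])  # Type III: antisymmetric, zero center
om = np.linspace(0.1, 3.0, 30)          # sample frequencies in (0, pi)

def freq_response(h, om):
    """H(w) = sum_n h[n] e^{-j w n}, evaluated at each frequency in om."""
    n = np.arange(len(h))
    return np.array([np.sum(h * np.exp(-1j * w * n)) for w in om])

# Type I: H(w) e^{2jw} is real, i.e. the phase is exactly -2w.
a1 = freq_response(h1, om) * np.exp(2j * om)
assert np.allclose(a1.imag, 0.0, atol=1e-9)
assert np.allclose(a1.real, 2*b0*np.cos(2*om) + 2*b1*np.cos(om) + b2)

# Type III: H(w) e^{2jw} is purely imaginary (constant pi/2 phase offset).
a3 = freq_response(h3, om) * np.exp(2j * om)
assert np.allclose(a3.real, 0.0, atol=1e-9)
assert np.allclose(a3.imag, 2*b0*np.sin(2*om) + 2*b1*np.sin(om))
```

The same check carries over to the 2D symmetric receptive fields of the next section, row by row and column by column.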
], [ "Symmetric Receptive Fields", "The symmetric receptive fields/filters introduced in this section correspond to 2D versions of the Type I and III FIR filters defined in the previous section.", "Symmetric Filter Type I (T1) is reminiscent of a Gaussian/Laplacian kernel commonly used in image processing.", "This filter is symmetrical across multiple axes: its center vertically, its center horizontally, and diagonally at each corner.", "Moreover, filter T1 is capable of teasing out information about point reflection for objects that have central symmetry.", "T1 can learn Gaussian, Laplacian, and Difference of Gaussian types of filters.", "Symmetric Filter Type 2 (T2), which makes it possible to take multiple orientations into account, is split into two different filters.", "We denote by T2A a filter that possesses the property of point reflection.", "We denote by T2B a filter that possesses the property of anti-point reflection, due to the introduction of a negative sign in its second half.", "The Sobel operator in the $x$ and $y$ dimensions can be learned through T2B only.", "Filters T1 and T2A have a linear phase due to their symmetry, while T2B has a phase offset (antisymmetric coefficients).", "The number of parameters in T1 and T2 filters compared to default filters is depicted in Fig.", "REF , illustrating the potential savings in free parameters when there exists a large number of filters.", "In addition to the reduction of the number of parameters to learn, symmetric filters decrease the complexity of the forward operation for estimating the outputs in the CNN: the inputs can be first summed before being multiplied by the weights.", "Figure: Number of parameters as a function of the filter size.", "The weights of the filters are defined as follows: $T1(u,v) & = & T1(\\pm u,\\pm v) = T1(\\pm v,\\pm u) \\\\T2A(u,v) & = & T2A(-u,-v) \\\\T2B(u,v) & = & -T2B(-u,-v) \\\\T2B(0,0) & = & 0$ where $(u,v)=(x-x_c,y-y_c)$ is the offset from the center $(x_c,y_c)$ of the filter, $1 
\\le x,y \\le N_w$ .", "The total number of distinct weights for T1, T2A, and T2B is: $N_{T1} & = & (N_{half}+1)(N_{half}+2)/2 \\\\N_{T2A} & = & N_{half} \\cdot N_w + N_{half} + 1 \\\\N_{T2B} & = & N_{half} \\cdot N_w + N_{half}$ For the forward operation, for each filter, the number of multiplications decreases from $N_w^2$ to $N_{T1}$ , $N_{T2A}$ , or $N_{T2B}$ .", "Figure: The four types of filters: R, T1, T2A, and T2B for a filter of size $5 \\times 5$ .", "All filters are initialized by randomly sampling a normal distribution with zero mean and standard deviation inversely proportional to the square root of the fan-in of a neuron at a given layer [30].", "Filter T1 only requires 6 weights to be generated, in the case of a $ 5 \\times 5 $ filter, which then get copied to their appropriate positions.", "T2A and T2B require 13 and 12 weights, respectively.", "Without the symmetrical constraint, there are 25 free parameters to tune throughout the training process in the case of a single $5 \\times 5$ filter.", "The weights for the default condition (R), T1, T2A, and T2B are presented in Fig.", "REF .", "We denote by $w_{i,j}$ the value of the weight at the position $(i,j)$ in the filter, and by $w_{k}$ the $k^{th}$ distinct weight in the filter.", "For instance, for T1 we have $w_{1}=w_{1,1}=w_{N_w,1}=w_{1,N_w}=w_{N_w,N_w}$ .", "In order to ensure that the filters retain symmetry throughout the training process, it is necessary to modify a portion of the backpropagation algorithm.", "In the same way that weight sharing is achieved through averaging the errors on all the connections that share the same weight, it is necessary to combine the gradients of the elements of the filter that share the same weight.", "Within the weight update procedure, after the gradients for the receptive fields have been computed, they are not directly added to the current weights.", "Instead, the gradients are passed off to the specific weight update procedure for the filter being used 
within a given layer.", "Several update operations were tested to determine which would yield the best results.", "Initially, an averaging operation was executed, combining the gradients of each weight with those of its symmetric counterparts.", "However, this was determined to decrease the gradient too much.", "Instead, the sum of the gradients was used.", "To give a more general description of the update procedures, let $i$ and $j$ index the rows and columns of a 2D filter, and let $\\Delta \\mathbf {W}$ be a matrix of gradients for a 2D filter.", "For a filter of size $N_w \\times N_w$ where $N_w$ is odd, the updates proceed as follows.", "$\\Delta \\mathbf {W} & = &\\begin{bmatrix}\\delta _{1,1} & \\cdots & \\delta _{1,N_w} \\\\\\vdots & \\ddots & \\vdots \\\\\\delta _{N_w,1} & \\cdots & \\delta _{N_w,N_w} \\\\\\end{bmatrix}$ For T1, we define the distance of an element $(i,j)$ from the central element $(N_{half}+1,N_{half}+1)$ of the filter as $d(i,j) = \\sqrt{ \\left( i - \\left( N_{half} + 1 \\right) \\right)^{2} + \\left( j - \\left( N_{half} + 1 \\right) \\right)^{2} }.$ Let $S_k$ be the set of all elements, defined by their coordinates $(i,j)$ in the filter, that are at the same distance from the center element: $\\forall ((i_1,j_1),(i_2,j_2)) \\in S_k^2 & \\rightarrow & d(i_1,j_1)=d(i_2,j_2)$ There are $N_{T1}$ such sets, with the weight $w_k$ associated to each set $S_k$ , $1 \\le k \\le N_{T1}$ .", "For the propagation step, the output of the convolution is defined by: $y_{T1}(i,j) & = & \\sum \\limits _{k=1}^{N_{T1}} \\left( w_{k} \\cdot \\sum _{(i_1,j_1) \\in S_{k}} x(i+i_1,j+j_1) \\right) \\\\y_{T2A}(i,j) & = &\\sum \\limits _{(i_1,j_1) \\in S_{T2A}} w_{i_1,j_1} \\cdot \\nonumber \\\\& & (x(i+i_1,j+j_1)+x(i+i_2,j+j_2)) \\nonumber \\\\y_{T2B}(i,j) & = & \\sum \\limits _{(i_1,j_1) \\in S_{T2B}} w_{i_1,j_1} \\cdot \\nonumber \\\\& & (x(i+i_1,j+j_1)-x(i+i_2,j+j_2)) \\nonumber $ where $i_2=N_w-i_1+1$ and $j_2=N_w-j_1+1$ for $y_{T2A}$ and 
$y_{T2B}$ .", "$S_{T2A}$ and $S_{T2B}$ represent the sets of coordinates containing the distinct weights for T2A and T2B, respectively.", "All the expressions are equivalent to the original convolution operation, but the number of multiplications is reduced due to the shared weights within the filter." ], [ "T1 Generalized Update Procedure", "The gradient for the weight $w_k$ , $1 \\le k \\le N_{T1}$ , within the filter is computed as follows: $\\delta _{\\mathbf {T1}[k]} & = & \\sum _{(i,j) \\in S_k}{\\Delta \\mathbf {W}[i,j]}.$" ], [ "T2A Generalized Update Procedure", "For T2A, the gradient update for the weight $w_{i,j}$ within the filter is computed as follows: $\\delta _{\\mathbf {T2A}[i,j]} & = & \\Delta \\mathbf {W}[i,j] + \\\\& & \\Delta \\mathbf {W}[N_w - (i-1), N_w - (j-1)] \\nonumber $" ], [ "T2B Generalized Update Procedure", "For T2B, the gradient update for a weight at location $(i,j)$ within the filter, as long as $(i,j) < (N_{half} + 1, N_{half} + 1 )$ (positive weights in T2B), is computed as follows: $\\delta _{\\mathbf {T2B}[i,j]} & = & \\Delta \\mathbf {W}[i,j] \\\\\\mathbf {T2B}[ N_w - (i-1), N_w - (j-1)] & = & - \\mathbf {T2B}[i,j] \\nonumber $ The positive weights in filter $\\mathbf {T2B}$ are simply updated with their appropriate gradients in $\\Delta \\mathbf {W}$ .", "Then these new positive weights are negated and copied over into the negative half of $\\mathbf {T2B}$ ." 
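The shared-weight update procedures can be sketched as follows. This is a minimal illustration with hypothetical names; the grouping of positions by sorted absolute offsets from the center is an assumed implementation of the sets $S_k$, and the lexicographic pairing for T2B is an assumed encoding of its "positive half":

```python
import numpy as np

def t1_sets(n_w):
    """The sets S_k for T1: positions sharing a weight under horizontal,
    vertical, and diagonal symmetry, grouped by sorted absolute offsets."""
    c = n_w // 2
    groups = {}
    for i in range(n_w):
        for j in range(n_w):
            key = tuple(sorted((abs(i - c), abs(j - c))))
            groups.setdefault(key, []).append((i, j))
    return list(groups.values())

def t1_update(w, grads, lr):
    """Summed (not averaged) gradient sharing within each set, per the text."""
    out = w.copy()
    for s in t1_sets(w.shape[0]):
        g = sum(grads[i, j] for i, j in s)    # delta_T1[k] = sum over S_k
        for i, j in s:
            out[i, j] += lr * g
    return out

def t2b_update(w, grads, lr):
    """Update the positive half, then negate-and-copy; center stays zero."""
    n_w = w.shape[0]
    out = w.copy()
    for i in range(n_w):
        for j in range(n_w):
            if (i, j) < (n_w - 1 - i, n_w - 1 - j):   # "positive" half
                out[i, j] += lr * grads[i, j]
                out[n_w - 1 - i, n_w - 1 - j] = -out[i, j]
    return out

# A T1-symmetric 5x5 filter stays symmetric after an arbitrary update.
sets5 = t1_sets(5)
assert len(sets5) == 6        # N_T1 = (N_half+1)(N_half+2)/2 = 6 for N_w = 5
w = np.zeros((5, 5))
for v, s in zip(range(1, 7), sets5):
    for i, j in s:
        w[i, j] = float(v)
w2 = t1_update(w, np.random.default_rng(0).normal(size=(5, 5)), lr=0.01)
assert np.allclose(w2, w2[::-1, :]) and np.allclose(w2, w2.T)
```

Because each set receives a single shared gradient, the symmetry present at initialization is preserved after every update.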
], [ "Center Element Update Procedure", "For all filters except T2B, the center element is updated as follows: $\\delta _{\\mathbf {T_{any}}\\left[ N_{half} + 1 , N_{half} + 1 \\right]} = \\Delta \\mathbf {W} \\left[N_{half} + 1 , N_{half} + 1 \\right]$ For T2B, the center element is never updated; it is initialized to zero and remains zero throughout training.", "We consider four datasets corresponding to handwritten numerals of different scripts for which it is possible to consider the same CNN architecture, so that it is possible to focus on the differences related to the type of filters.", "These databases were chosen as they have different numbers of training examples.", "The MNIST database contains digits of the Latin script; it is a benchmark for supervised classifiers [23], [31].", "It includes a training and a test database of 60000 and 10000 images, respectively.", "For the comparison of techniques, two characteristics are typically specified: whether distorted images are added to the database, and the type of normalization of the images.", "The error rate reaches a near-human performance level of 0.23% with a combination of 35 convolutional neural networks, where each network requires almost 14h to be trained [32].", "In addition, we consider three datasets of handwritten Indian numerals: Bangla [33], [34], Devanagari, and Oriya.", "These databases were created at the Indian Statistical Institute, Kolkata, India [35], [36], [37].", "The second database corresponds to Devanagari digits; Devanagari is part of the Brahmic family of scripts of India, Nepal, Tibet, and South-East Asia [38].", "The Oriya script is one of the many descendants of the Brahmi script of ancient India.", "In [39], Bhowmik et al.", "obtain an accuracy of 90.50% by using Hidden Markov Models.", "The images were preprocessed in the same way as the original images in the MNIST database.", "Because some databases have noisy images and/or images in color, images were first binarized 
with the Otsu method at their original size [40], then they were size-normalized to fit in a 20 $\\times $ 20 pixel box while preserving their aspect ratio.", "The resulting images contain 8-bit gray levels due to the bicubic interpolation used for resizing the images.", "Next, all the images were centered in a 28 $\\times $ 28 pixel field by computing the center of mass of the pixels and translating it to the center of the 28 $\\times $ 28 field.", "Finally, an additional 1 pixel border is added to the top and left side of every image to change the size to $29 \\times 29$ to fit into the CNN architecture.", "The total number of images and the number of images per class for each dataset is presented in Table REF .", "Images in all datasets from the training files were split, with 90% in a training set and the rest in a validation set.", "The images were z-score normalized, i.e., by removing the mean and dividing by the standard deviation across all examples in the training dataset.", "Table: Data distribution.", "Figure: Representative handwritten digits for the different databases (from zero to nine).", "Table: Performance for all datasets.", "Test set accuracy and cross entropy, averaged across five different runs, for each of the filter combinations for learned (`L') and fixed (`F') networks.", "The ** represents results with no significant difference from L-R-R." 
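The split and normalization steps above can be sketched as follows (a minimal illustration; the random array stands in for the actual 29 × 29 digit images, and the sequential split is an assumed simplification):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 29, 29))   # placeholder for the 29x29 digit images

# 90% / 10% split of the training files into training and validation sets
n_train = int(0.9 * len(images))
train, val = images[:n_train], images[n_train:]

# z-score normalization: statistics computed over the training examples only,
# then applied unchanged to the validation images
mu, sigma = train.mean(), train.std()
train_n = (train - mu) / sigma
val_n = (val - mu) / sigma
```

Computing the mean and standard deviation on the training set alone avoids leaking validation statistics into the normalization.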
], [ "Architecture", "The CNN architecture chosen for testing consisted of 4 layers, where the first two layers are convolutions and the last two layers are fully connected.", "The architecture is based on a state-of-the-art architecture that does not require a large number of layers [31].", "This architecture was chosen as it is possible to decompose the network into two clear stages: feature extraction through 2 convolutional layers [41], and classification with 1 fully connected hidden layer.", "Fig.", "depicts the network along with several other parameters that were kept constant, such as the activations used and layer sizes.", "Figure: The network architecture, with its layer sizes and activation functions.", "Additionally, the network learning rate was set to 0.001 with no optimizers in use and no scheduled learning rate decreases.", "However, the network does scale the learning rate for each layer depending on the fan-in for a particular neuron of that layer [30].", "The weights are randomly sampled from a Gaussian distribution with mean $\\mu = 0$ and standard deviation $\\sigma = \\frac{1}{\\sqrt{m}}$ , where $m$ is the fan-in for a given layer [30].", "The loss function used was cross entropy (CE).", "For performance evaluation, we compare 14 conditions decomposed into two main types: `L' (learned) and `F' (fixed).", "In the learned conditions, all the convolutional kernels are learned, while in the fixed conditions, we consider fixed kernels that are initialized randomly, following the principle of the random projections used in the extreme learning machine (ELM) paradigm [42], [43].", "For each type, we estimate the performance of different types of convolutional layers (R for the default case, T1, T2A, and T2B) in the first two convolutional layers.", "For instance, the condition L-T1-R corresponds to learned convolutional layers, with T1 for the first convolutional layer and R for the second convolutional layer.", "To assess the differences across conditions, we consider a Wilcoxon rank sum test.", "If there is a 
failure to reject the null hypothesis at the 5% significance level, we consider that two conditions are equivalent.", "In particular, we focus the comparisons on L-R-R versus the other `L' conditions." ], [ "Results", "This section summarizes the results of running tests with several different combinations of the various filter types (i.e., R, T1, T2A, and T2B) across the convolutional layers of the network described in Section .", "Each network with a unique filter combination was trained five times for each dataset listed in Section .", "All data presented in this section was produced by averaging the test dataset results across the five separately trained networks for a given dataset and filter combination.", "The baseline, labeled L-R-R in Table REF , had all filters in both of its convolutional layers set randomly and updated every parameter in each filter.", "The mean and standard deviation of the accuracy (Acc), in %, and the cross-entropy loss value are presented in Table REF for all the different conditions and datasets.", "Graphs of the test performance for each network on every iteration are shown in Fig.", "REF .", "For MNIST, L-R-R slightly edges out the others as the best performing; this network achieved a mean accuracy of $98.23 \\pm 0.09\\%$ , followed by the networks L-T2A-R, L-T2B-R, and L-T2A-T2A with accuracies of $98.19\\pm 0.05$ , $98.09\\pm 0.10$ , and $98.00\\pm 0.11$ , respectively.", "It is worth mentioning that it is better to keep the second convolutional layer at the default condition, as we observe a drop of performance from L-T1-R to L-T1-T1, from L-T2A-R to L-T2A-T2A, and from L-T2B-R to L-T2B-T2B.", "Without learning the convolutional kernels, the best performance is obtained with F-T1-T1, with an accuracy of $96.35\\pm 0.27$ .", "This network contains the smallest number of free parameters.", "The statistical test reveals no difference between L-R-R and L-T2A-R, and between L-R-R and L-T2B-R. 
For Bangla, L-R-R performs the best with $96.64\\pm 0.15$ %, and the networks L-T2A-R, L-T2B-R, and L-T2B-T2B achieve accuracies fairly close to it, i.e.", "above 96%.", "Network L-T1-T1 does the worst, with over a $2\\%$ difference from the top performing network, whereas all others have less than a $1\\%$ difference.", "The statistical analysis indicates no difference between L-R-R and L-T2B-R.", "The best accuracy for the Devanagari dataset was attained by L-R-R with $97.00\\pm 0.10$ , but L-T2A-R achieved a relatively similar performance with $96.98\\pm 0.18$ .", "Nearly all networks for this dataset come in with under a $0.5\\%$ difference in accuracy, except for L-T1-T1 and L-T1-R. Interestingly, the condition F-R-R provides the best accuracy for the fixed conditions.", "The statistical tests show no difference between L-R-R and L-T2A-T2A, and between L-R-R and L-R-T2A.", "The maximum accuracy on the Oriya dataset was obtained by network L-R-R with $95.52\\pm 0.26$ , followed by L-T2A-R with $95.50\\pm 0.06$ , which has a lower standard deviation.", "L-T2B-T2B performed well in this case.", "Note that all combinations achieve greater than $95\\%$ except for L-T1-T1 and L-T2A-T2A.", "Interestingly, the fixed weight networks seem to do relatively well when paired with a symmetric filter in the first layer and a random filter for the second layer on this dataset, as the best performance is achieved with F-T1-R.", "It is highly likely that this is because the first layer is actually able to extract some important features that help to better distinguish between images of varying classes.", "The statistical tests indicate no difference between L-R-R and L-T2B-T2B, L-T1-R, L-T2A-R, and L-T2B-R, showing key evidence of the advantages of the proposed filters, as the number of parameters for feature extraction is reduced from 1375 in the default case to 660.", "Figure: Accuracy (in %) in relation to the number of parameters in convolutional layers.", "Figure: Test set accuracy, 
averaged across five runs for each filter combination.", "Top row: learned (`L') conditions, bottom row: fixed (`F')." ], [ "Discussion", "Three types of filters for convolutional neural networks have been proposed.", "These filters offer three key advantages: first, they reduce the total number of parameters to learn in the network; second, they reduce the complexity of the forward operation; and third, they provide linear phase filters, which can have desirable properties for preserving the waveshape of the input signal.", "To validate the proposed approach, these filters have been tested on different databases of handwritten digits representing different types of challenges in relation to the number of examples for training, and the variability across examples for the chosen scripts.", "With the considered architecture, the results were aligned with the current state of the art, suggesting potential improvements with other more complex architectures.", "Taking the results for testing accuracy for each of the four datasets and plotting them against the number of free parameters in each network, we can observe that the number of parameters does in general seem to be correlated with better accuracy.", "However, it is worth noting that for every dataset, a near quadrupling of the number of parameters results in only an approximately 1-2% increase in accuracy.", "The figures that follow were developed using only the data from networks that actually learned.", "Looking at Fig.", "REF , for the MNIST dataset the biggest increase in accuracy occurs when moving from the T1-T1 type filter to a T2A-T2A filter, and increases in parameters after that point bring only tiny gains.", "In fact, this seems to be true for all of the datasets in Fig.", "REF .", "While the proposed approaches do not lead directly to the best accuracy, they provide key insights into the type of functions that need to be applied to images to extract robust descriptors.", "Typical image processing spatial filtering
kernels embed filters with linear phase.", "In neural networks, the non-linearity is typically achieved through the activation function.", "We have shown that keeping a linear phase in the extracted filter slightly degrades the results while substantially reducing the number of weights in the network.", "Such an effect may be counterbalanced by deep architectures.", "Furthermore, the difference in the number of examples per class from MNIST to the Oriya dataset, together with the pattern of performance across conditions, suggests that the proposed filters perform similarly to the default condition when the number of examples is low.", "A linear phase filter preserves the waveshape of the signal, or of a component of the input signal.", "The proposed symmetrical filters can have implications for multiple applications that exploit transfer learning and in which it is necessary to provide linear phase filters.", "The waveshape is a relevant feature whenever a thresholding decision must be made on it in order to classify the signal.", "In such settings, preserving or recovering the originally transmitted waveshape is of critical importance; otherwise wrong thresholding decisions will be applied, which would represent a bit error in a communications system.", "For instance, a CNN with linear phase convolutional layers can be used in phase-sensitive applications such as audio processing, radar signal processing, or seismology [44], where the waveshape of a returned radar signal may embed information about the target's properties.", "The filters that have been proposed can be employed in existing architectures to provide linear phase properties.", "The current study analyzed the three types of filters separately for feature extraction applied to the classification of handwritten digits.", "However, it is unknown how combinations of such filters would perform and what the relationship is between the type of filter and its place in the hierarchical architecture.", "Finally, this
type of filter may only be used when there is a relationship between different input features, such as the one expressed through the notion of a local neighborhood.", "For applications in which the convolution merges all the inputs from one dimension into multiple feature maps, i.e., when the size of one dimension of the filter is the same as that of one dimension of the input, the proposed approach may not be applicable." ], [ "Conclusion", "Deep learning approaches, and especially convolutional neural networks, have a high impact on society through their use in a large number of pattern recognition tasks.", "Given their high performance, it is necessary to gain some insights into their behavior and the advantages that they provide compared to more traditional approaches rooted in image and signal processing.", "A key challenge is to find the ideal frontier between what has to be learned and what can be determined analytically.", "In this paper, we have proposed a novel category of constraints for training convolutional layers that provide the linear phase property found in typical FIR filters of Type I and III.", "Such an approach provides a substantial decrease in the number of parameters, an increase in speed by reducing the complexity of the forward operation, and relatively equivalent performance compared to the traditional approach.", "Future work will examine the behavior of combinations of such symmetrical filters." ] ]
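The parameter-sharing idea behind these symmetrical filters can be sketched in a few lines. The snippet below is a simplified illustration under our own assumptions, not the paper's exact T1/T2A/T2B parameterizations: a point-symmetric kernel is built by mirroring a reduced vector of free weights, which nearly halves the parameters to learn while keeping the linear phase property of a symmetric FIR filter.

```python
import numpy as np

def symmetric_kernel(free_params, size=5):
    """Build a size x size kernel with point symmetry about its centre
    from a reduced set of free parameters (hypothetical helper, not the
    paper's code).  Mirroring ceil(size^2 / 2) free values yields a
    symmetric, linear-phase filter with roughly half the weights."""
    n = size * size
    half = (n + 1) // 2                # 13 free values for a 5x5 kernel
    assert len(free_params) == half
    flat = np.empty(n)
    flat[:half] = free_params
    flat[half:] = free_params[:n - half][::-1]   # mirror the first half
    return flat.reshape(size, size)

w = symmetric_kernel(np.arange(13.0))
assert np.allclose(w, w[::-1, ::-1])   # point symmetry holds
```

In a learning framework, only `free_params` would be trained and the mirrored kernel rebuilt at each forward pass, which is one way to realize the kind of parameter reduction reported above.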
1906.04252
[ [ "Chocolatine: Outage Detection for Internet Background Radiation" ], [ "Abstract The Internet is a complex ecosystem composed of thousands of Autonomous Systems (ASs) operated by independent organizations, each AS having a very limited view outside its own network.", "These complexities and limitations prevent network operators from finely pinpointing the causes of service degradation or disruption when the problem lies outside of their network.", "In this paper, we present Chocolatine, a solution to detect remote connectivity loss using Internet Background Radiation (IBR) through a simple and efficient method.", "IBR is unidirectional unsolicited Internet traffic, which is easily observed by monitoring unused address space.", "IBR features two remarkable properties: it originates worldwide, across diverse ASs, and it is incessant.", "We show that the number of IP addresses observed from an AS or a geographical area follows a periodic pattern.", "Then, using Seasonal ARIMA to statistically model IBR data, we predict the number of IPs for the next time window.", "Significant deviations from these predictions indicate an outage.", "We evaluated Chocolatine using data from the UCSD Network Telescope, operated by CAIDA, with a set of documented outages.", "Our experiments show that the proposed methodology achieves a good trade-off between true-positive rate (90%) and false-positive rate (2%) and largely outperforms CAIDA's own IBR-based detection method.", "Furthermore, performing a comparison against other methods, i.e., BGP monitoring and active probing, we observe that Chocolatine shares a large common set of outages with them, in addition to detecting many specific outages that would otherwise go undetected."
], [ "Introduction", "Connectivity disruptions caused by physical outages, software bugs, misconfiguration, censorship, or malicious activity occur repeatedly on the Internet [1].", "Monitoring the state of Internet connectivity is useful to raise public awareness of events of intentional disconnection due to censorship [14].", "It further helps operators pinpoint the location of an outage, i.e., the place where there is a loss of connectivity, when it happens outside their reach.", "This speeds up recovery, as the correct network operator team can be contacted directly instead of reaching out to the global network operators community via mailing lists or personal contacts.", "Fast outage detection is also useful to locally switch to backup routes, when available [16].", "A few methods exist to detect connectivity outages.", "Monitoring for withdrawals of BGP prefixes is a commonly used approach, but it can only observe outages that affect the control plane [10], [2].", "Data-plane approaches solve this problem, and can be either based on active measurements—e.g., Trinocular [21] sends pings to 4 M remote /24 address blocks to measure their liveness—or on passive traffic analysis—Disco [27] relies on the long-running TCP connections between RIPE Atlas probes and their controlling infrastructure to identify bursts of disconnections.", "Another data-plane approach for the detection of connectivity outages is based on the analysis of Internet Background Radiation (IBR) [5].", "IBR is unsolicited traffic captured by darknets (also known as network telescopes), which announce unused IP prefixes on BGP, i.e., there are no actual services running in the prefix, nor “eyeballs”.", "IBR is composed of a constantly evolving mix of various phenomena: network scans, the results of malware infections, DoS attacks using spoofed IPs from the range announced by the telescope [4], packets from misconfigured BitTorrent clients (or clients with a polluted DHT), etc.
[31].", "By leveraging the pervasiveness of IBR sources, and the consistent presence of traffic, we can infer a connectivity outage for a given geographic area or Autonomous System (AS) based on a significant reduction of the IBR traffic that originates from them.", "In addition, Dainotti et al.", "[8], [5] demonstrated that IBR can effectively complement both control-plane and active probing data-plane approaches: both in terms of coverage (not all networks respond to pings) and in terms of information that it provides (e.g., confirming outbound connectivity for a remote network even when inbound connectivity is disrupted).", "The IODA system from CAIDA [17] has recently operationalized this method for extracting time series, i.e., “signals”, at different spatial grains (e.g., countries or ASs).", "However, IODA's current automated detection algorithm is simplistic (a threshold based on the moving median of the last 7 days) and unable to account for the IBR's noise and the intensity variability of the signal.", "Indeed, in order to avoid an overwhelming amount of false positives, the threshold is currently set to raise an outage alert when the signal intensity drops under 25% of the median value observed in the last 7 days.", "That is, an outage is detected only when there is a severe connectivity loss, leaving many cases of connectivity loss undetected [18].", "In particular, the test remains the same regardless of the time of day or day of the week, such that a drop occurring in a usually busy period is treated the same as if it were occurring during an inactive one.", "In a word, this naive model is static, and as such challenging to calibrate, as it does not take into account any trends in the traffic.", "In this work, we take these trends into account by applying Seasonal ARIMA (SARIMA) [9], a popular technique that forecasts the behavior of the time series extracted at the UCSD Network Telescope [29].", "More specifically, we analyze the number of
unique source IP addresses from different countries/ASs that try to reach the darknet.", "Chocolatine is sensitive to the seasonality and robust to the noise observed in the data.", "We show that it is able to detect outages with a true positive rate of $90\%$ and a false positive rate of $2\%$ , with a detection delay of only 5 minutes.", "Additionally, the comparison with CAIDA's method showed that Chocolatine can detect a large share of the outages seen by other data sources, as well as additional specific outages.", "Another benefit of Chocolatine is that its algorithm automatically self-tunes on time series exhibiting very different magnitudes and levels of noise (e.g., time series of IBR extracted for ASs and countries of different sizes and with different compositions of IBR-generating sources).", "As a result, Chocolatine is applicable to other seasonal and noisy data sources related to Internet traffic activity.", "The remainder of the paper is structured as follows: background on the main outage detection methods is first provided in Section .", "In Section , we then introduce the dataset we use, and explain why it is suited for outage detection.", "In Section , we describe Chocolatine's high-level design.", "We also illustrate our outage detection process with a case study of the censorship that occurred during the Egyptian revolution (Section ).", "In Section , we evaluate Chocolatine, validating it with ground truth data and also comparing its performance against several current outage detection algorithms.", "Lastly, we address the reproducibility of our experiments in Section ."
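The signal analyzed throughout the paper, the number of unique source IPs per time bin and per origin, can be sketched as follows. The flat tuple layout and field names are our own assumptions for illustration; the actual telescope data arrives as raw packet captures that IODA geolocates and maps to ASs.

```python
from collections import defaultdict

def unique_ip_series(packets, bin_seconds=300):
    """Aggregate darknet packets into the per-bin count of distinct
    source IP addresses, i.e., the time series that Chocolatine models.
    `packets` is an iterable of (unix_ts, src_ip, origin) tuples, where
    `origin` is a country code or AS number (hypothetical layout)."""
    bins = defaultdict(set)                      # (origin, bin) -> {ips}
    for ts, src_ip, origin in packets:
        bins[(origin, int(ts) // bin_seconds)].add(src_ip)
    return {key: len(ips) for key, ips in bins.items()}

pkts = [(0, "192.0.2.1", "EG"), (10, "192.0.2.2", "EG"),
        (20, "192.0.2.1", "EG"),     # same IP again: counted once
        (301, "192.0.2.3", "EG")]    # falls into the next 5-minute bin
assert unique_ip_series(pkts) == {("EG", 0): 2, ("EG", 1): 1}
```

Counting distinct sources rather than bytes or packets abstracts away traffic volume, which is exactly why a drop in this series suggests devices going offline.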
], [ "Background", "Outage detection can be achieved with different measurement techniques and performance indicators.", "A recent survey [1] provides a taxonomy of most existing techniques, including three main monitoring categories: active, passive, and hybrid.", "We reuse this terminology here.", "Active monitoring techniques generate traffic in order to collect information and examine the state of networks.", "Most active monitoring approaches are based on variants of ping and traceroute, and rely on a set of vantage points (i.e., the devices that perform the measurements) that are usually distributed across different networks.", "For example, RIPE Atlas [24] is a popular platform for network measurement that is composed of over 10,000 probes.", "In [12], Fontugne et al.", "detect significant link delay changes and rerouting from RIPE Atlas built-in measurements.", "Dasu [28], on the other hand, is more versatile than RIPE Atlas.", "It has been used for diverse measurements, such as broadband performance measurements, as well as the mapping of the Google CDN infrastructure.", "Thunderping [26] measures the connectivity of residential Internet hosts before, during, and after forecast periods of severe weather.", "Passive monitoring techniques collect existing traffic and infer the state of networks from it.", "Generally speaking, they analyze real-user traffic to be close to the user experience.", "This ensures that the inferred statistics correspond to real traffic, thus granting a view of a network's current state.", "Different datasets have been leveraged for passive analysis, such as CDN traces [23] or darknets [3].", "Outage detection methods also rely on different theoretical modeling techniques to discriminate outages from normal network conditions.", "Trinocular [21] leverages Bayesian inference to estimate the reachability of /24 subnetworks.", "Disco [27] detects surges of Atlas probe disconnections using a burst modeling algorithm.", "Also using Atlas
data, the authors of [12] rely on the central limit theorem to model usual Internet delays and identify network disruptions.", "In this work, we rely on passive measurements collected from CAIDA's network telescope [29] and employ SARIMA models to forecast IBR time series and detect outages." ], [ "Dataset", "The data used for this study is obtained from the UCSD network telescope [29].", "The goal of this section is to provide an overview of the characteristics of this dataset, and to motivate why it is suitable for outage detection.", "The collected data consists exclusively of unsolicited traffic caused by both benign and malicious activities.", "For instance, software and hardware errors, such as bit-flipping or hard-coded IP addresses, result in IBR traffic.", "Network scans and backscatter traffic are another common source of IBR traffic.", "Backscatter traffic is usually the consequence of malicious spoofed traffic sent to a victim, whose replies are returned to unused addresses monitored by the network telescope.", "Consequently, IBR data has been extensively used to study worms [30], virus propagation [15], and Distributed Denial of Service (DDoS) attacks [11].", "CAIDA's IODA [17] aggregates UCSD network telescope data geographically and topologically, respectively using NetAcuity [19] IP geolocation datasets and longest prefix matching against BGP announcements from public BGP data [20].", "Consequently, we obtain IBR streams per country, regional area (e.g., states in the US, provinces in France, etc.", "), and AS.", "IODA also pre-filters the traffic that reaches the telescope, removing large components of potentially spoofed-source-IP traffic (since their presence would significantly alter inference about originating ASs and geographical areas) using a set of heuristics derived semi-manually [7].", "Figure: Illustration of preprocessing and seasonal integration of the training data", "Traffic from these streams can be summarized in different ways, the most
common being the number of bytes, the number of packets, and the number of unique source IP addresses.", "The number of unique source IP addresses [8] is defined as the number of IP addresses originating from the same location that contact the network telescope during a given time interval.", "It is an adequate metric to study Internet outages because it counts the number of devices that send traffic from a geographical or topological location, while abstracting away the need to analyze the traffic itself.", "In the event of an outage, some of these devices get disconnected from the Internet, so we expect to observe drops in the number of unique source IP addresses observed by the network telescope.", "The usage of IBR to detect outages is particularly pertinent since it is pervasive.", "Indeed, the amount of IBR packets that reaches network telescopes is considerable, incessant, and originates from a variety of applications [31].", "In [4], Benson et al.", "performed a spatial analysis and determined that IBR provides an Internet-wide view.", "All countries, except for 3 with a population of less than 4000 inhabitants, and more than half of all ASs are observed in their dataset.", "Note that half of the ASs that do not show up in the dataset are small, as they only advertise a /24 prefix, while 86% of ASs that advertise the equivalent of a /16 or more are visible.", "A fifth of the remaining 14% that do not generate IBR traffic are blocks that belong to the US government.", "The temporal analysis in [4] also shows that most networks frequently generate IBR traffic, in particular when considering coarse-grained aggregations.", "Indeed, the median time between observations is shorter than 1 minute for over 90% of countries, and is shorter than 10 minutes for about 75% of the ASs.", "To summarize, IBR traffic is ubiquitous, and thus can be used to detect and analyze large-scale network events.", "It is continually sent by a variety of sources all around the world, which makes it a
suitable source for opportunistic worldwide Internet measurements, and specifically for efficiently detecting outages." ], [ "Methodology", "In this section, we describe how Chocolatine forecasts the number of unique IP addresses in IBR traffic and detects outages.", "Among the numerous approaches available to forecast time series, Autoregressive Integrated Moving Average (ARIMA) models are a popular choice thanks to their simplicity and efficiency [32].", "For this study we select Seasonal-ARIMA (SARIMA) [9] models in order to deal with the weekly patterns observed in IBR time series.", "We propose an outage detection method composed of four main steps.", "First, we sanitize the training part of the dataset (Section REF ).", "Second, we eliminate non-stationarity by differencing the data with a lag of one week (Section REF ).", "Third, we compare results with multiple sets of parameters to find the best parameters for modeling each time series, and compute the corresponding prediction intervals (Section REF ).", "Finally, we detect and report outages based on the differences between the computed predictions and the actual data (Section REF )."
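These steps can be condensed into a toy forecasting loop. The sketch below is a deliberately simplified stand-in for the full SARIMA machinery (a least-squares AR fit on seasonally differenced data, with no MA term, numpy only); the function name and defaults are our own assumptions.

```python
import numpy as np

def forecast_next_hour(history, season=2016, p=4):
    """Seasonally difference `history` at a one-week lag (2016 bins of
    5 minutes by default), fit an AR(p) model by least squares on the
    differenced series, then undo the differencing to forecast the
    next 12 bins (one hour)."""
    d = history[season:] - history[:-season]       # seasonal differencing
    # design matrix of lagged values: d[t] ~ sum_i phi[i] * d[t-1-i]
    X = np.column_stack([d[p - i - 1:len(d) - i - 1] for i in range(p)])
    phi, *_ = np.linalg.lstsq(X, d[p:], rcond=None)
    window = list(d[-p:])                          # last p differenced values
    preds = []
    for k in range(12):                            # one hour of 5-minute bins
        nxt = float(np.dot(phi, window[::-1]))     # AR step on differences
        window = window[1:] + [nxt]
        # invert the differencing: add back last week's observation
        preds.append(nxt + history[len(history) - season + k])
    return np.array(preds)
```

Chocolatine itself refits and slides this kind of model forward every hour, feeding predicted values back in place of anomalous observations.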
], [ "Data preparation", "In the following, the IBR time series are split into three sets: training, calibration, and test.", "These are used differently for the modeling (Section REF ) and detection phases (Section REF ).", "The training and calibration sets are used for the modeling, i.e., to learn the best set of parameters for the ARMA model.", "These parameters are then used on the test set to detect potential outages.", "The training data is used as the basis of the predictive model, and we need to sanitize it.", "There are three problems that need to be addressed: (i) missing values, which we need to fill to have a working model; (ii) extreme values, which would bias the model by greatly influencing the statistical properties of the time series; and (iii) the presence of an outage inside the training data, which would lead to a model that considers outages as the norm.", "To overcome these problems, we assume that occurrences of missing and extreme values are uncommon, so we can synthesize ten weeks of data into two weeks of sanitized data.", "Our data preparation process is illustrated in Figure REF , with a time series exhibiting the three problems mentioned above.", "We consider five intervals of two weeks (top plot in Figure REF ), and compute the median values across all five intervals to obtain two weeks of clean synthesized data (middle plot in Figure REF ).", "This sanitized time series is then used as the training set for our SARIMA model."
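The median-based synthesis of the training set can be written compactly. This is a sketch under our own assumptions (a numpy array input with NaNs marking missing samples; `interval` defaults to two weeks of 5-minute bins):

```python
import numpy as np

def sanitize_training(raw, interval=2 * 2016):
    """Synthesize two weeks of clean training data from ten weeks of
    raw signal: stack five consecutive two-week intervals and take the
    point-wise median, which suppresses missing values (NaN), extreme
    values, and any single outage present in the raw data."""
    stacked = np.asarray(raw, dtype=float)[:5 * interval].reshape(5, interval)
    return np.nanmedian(stacked, axis=0)   # median ignores NaN samples
```

Because at most a minority of the five stacked intervals is expected to be corrupted at any given position, the per-position median recovers the typical two-week shape.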
], [ "SI: Seasonal Integration", "ARMA models assume that the data is stationary, that is, that the statistical properties of the data (e.g., mean and variance) are constant over time.", "Because of the strong daily and weekly patterns present in IBR data, our time series are non-stationary (e.g., there is less traffic at night and during weekends, because more devices are turned off or disconnected during these periods of time [22]).", "This is the reason why a simple predictive model would not work with IBR time series.", "As a result, we make our time series stationary by filtering these trends out with seasonal differencing.", "In practice, our time series contain both a daily and a weekly trend, which we remove by applying seasonal differencing (the SI part of SARIMA) with a lag of one week (e.g., bottom plot in Figure REF ).", "The computed training data, which is now sanitized and stationary, can then be used in the following step to create a predictive model and to make predictions on the calibration data."
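The SI step amounts to a single subtraction. In the sketch below (assuming 5-minute bins, so one week is 7*24*12 = 2016 samples), a strictly periodic component cancels exactly, and what remains is a series whose mean no longer drifts with the time of day or week:

```python
import numpy as np

def seasonal_difference(x, season=2016):
    """Lag-one-week differencing: y[t] = x[t] - x[t - season]."""
    x = np.asarray(x, dtype=float)
    return x[season:] - x[:-season]

# toy check: a daily sine pattern plus noise, sampled every 5 minutes
rng = np.random.default_rng(0)
day = 100 + 50 * np.sin(np.linspace(0, 2 * np.pi, 288, endpoint=False))
x = np.tile(day, 14) + rng.normal(0, 1, 288 * 14)    # two weeks of data
d = seasonal_difference(x, season=288 * 7)           # one-week lag
# the periodic component is gone: only (differenced) noise remains
assert d.std() < 5 < x.std()
```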
], [ "ARMA: Autoregressive Moving Average", "In this step, we estimate the best parameters for any given time series.", "In practice, Chocolatine will compute a different set of parameters for each analyzed time series, which increases the adaptability of the solution and the quality of the predictions.", "To achieve this goal, we have to precisely estimate the values of the two key parameters of ARMA, that is, the order of the autoregressive model (named $p$ ), and the order of the moving-average model (named $q$ ).", "We use the sanitized training data for that purpose, as ARMA models only work on the condition that the training data is anomaly-free and stationary.", "In order to find the best combination of parameters for any given time series, we make predictions on a second set of data that we refer to as the calibration data.", "In practice, we use the period following the training data for this calibration.", "We consider several predictive models, each with its own set of $(p,q)$ parameters, to evaluate the performance of various distinct predictions.", "We finally compare the accuracy of these predictive models on the calibration data.", "We use the Root Mean Square Error (RMSE) to compute the error between the real time series and the one obtained from the predictive model.", "We chose the RMSE in order to penalize predictive models that make predictions significantly far from the actual data.", "The predictive model (i.e., the set of $(p, q)$ parameters) with the lowest error will thus be used for future predictions.", "Now that we have the best parameters to use within the ARMA model, we also compute a prediction interval.", "It defines the boundaries of our predictions and is used for the outage detection process.", "We compute $99.5$ % prediction intervals using the residual variance.", "We compute the residual variance using the Median Absolute Deviation (MAD), a robust measure of data variability used for
anomaly detection [12] (the RMSE not being suitable in this case).", "This is essential, as the prediction intervals should be robust to false positives while still capturing extreme values introduced by measurement errors and outages.", "The model, and its associated prediction interval, are then used to detect outages, as described in the next section." ], [ "Detection", "The steps described above provide us with stationary data and an optimized predictive model for each time series.", "The next step is to detect outages with the predictive models.", "We define an outage as a point in time where a value of a time series is smaller than the lower bound of the prediction interval.", "The severity of this alarm is determined by computing the following distance: $d = (\hat{X} - X) / (\hat{X} - L),$ where $\hat{X}$ is the predicted value, $X$ is the actual value from the time series, and $L$ is the lower bound of the prediction interval.", "Distances $d>1$ and $d<-1$ mean that the time series is outside of the prediction interval, whereas the time series is within the prediction interval when $-1 \le d \le 1$ .", "The only cases that are reported as outages are cases where $d > 1$ , that is, when the actual values are outside of the prediction interval and smaller than its lower bound, which translates into a significant drop in the number of IPs observed in the time series.", "Cases where $d < -1$ (i.e., points that are greater than the upper bound of the prediction interval) are considered extreme values, but they do not fall into our definition of an outage, and are thus not reported.", "Every hour (i.e., every 12 data points) we make predictions for the next hour and compare the actual data to these predictions as explained above.", "Each time we move forward in the data, ARMA takes the new data points into account for future predictions.", "However, we take particular precautions to maintain the quality of
the predictive model.", "Data identified as part of an outage should not be used for future predictions, which brings us back to the problems discussed in Section REF , where missing values, extreme values, and outages would diminish the quality of the predictive model.", "In this phase, we solve these problems differently, by doing what we refer to as inpainting: if a new sample of data is considered to be an extreme value (i.e., $d < -1$ or $d > 1$ ), we feed the predictive model with the predicted value instead of the real one." ], [ "Case study", "To illustrate the functioning of the proposed method and some of its benefits, this section provides thorough results from a specific case study.", "On January 25th, 2011, the Mubarak regime ordered network operators to shut down Internet connectivity during the Egyptian revolution, in an attempt to silence the opposition.", "The chronology of this event has been described in [8].", "The authors used BGP routing data, ping, traceroute, and IBR data.", "The IBR data was manually analyzed to shed light on the massive packet-filtering mechanisms that were put in place, and to identify denial-of-service attacks related to the political events happening in Egypt during the same period.", "In this section, we present how our solution analyzes the same IBR data but allows us to systematically detect the beginning and the end of the connectivity loss, and to estimate the severity of the outage.", "Figure REF shows the time series of unique source IP addresses from Egypt reaching the UCSD Network Telescope (plotted in blue).", "The disconnections occurred between the 28th of January and the 3rd of February, 2011, as can be seen from the loss of intensity of the signal depicted in the figure.", "Here, we also chose to include in our analysis the values of the time series after the outages, because of an interesting phenomenon that was occurring: the values of the time series are higher than usual during the days that follow the
Egyptian revolution and go back to normal around the 7th of February.", "In [6], the authors revealed that a botnet covertly (and massively) scanned the Internet during those days.", "This time series is analyzed as follows.", "The training set, to the left, is sanitized following the methods discussed in Section REF .", "Multiple sets of ARMA parameters are then used to predict the calibration set.", "The predictions are plotted with a green line.", "The set of parameters that resulted in the lowest error ($p=4, q=1$ in this case) is used for the rest of the analysis.", "The difference between the predicted time series and the original time series allowed us to compute prediction intervals using the MAD.", "These intervals are plotted with gray bars that surround the predictions.", "Then the test set is compared to the ARMA model and the prediction intervals computed in the previous step.", "The sudden drop that occurs when the outage starts puts the time series below the prediction intervals, which means that an outage is reported.", "Visually, this is shown with a red vertical line.", "Additionally, it also means that the inpainting process described in Section REF takes place, which is clear here, since the trend of the predicted time series stays similar to that of the original time series, even while an outage is occurring.", "No alarm is reported during the botnet activity [6] that follows the outage: the original time series values are higher than our prediction intervals, so the data is again inpainted and does not count as an outage."
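The per-bin decision applied in this case study reduces to a few lines. The sketch below is our own helper (assuming a symmetric prediction interval, so that observations above the upper bound correspond to d < -1): it computes the severity distance, flags outages, and returns the value to feed back into the model (the inpainting step).

```python
def detect(predicted, actual, lower, upper):
    """Per-bin decision: severity d = (X_hat - X) / (X_hat - L); an
    outage is reported only when the observation falls below the lower
    prediction bound (d > 1).  Out-of-interval values in either
    direction are replaced by the prediction before being fed back."""
    d = (predicted - actual) / (predicted - lower)
    outage = d > 1                       # below the lower bound
    extreme = actual > upper             # above the upper bound (d < -1
                                         # for a symmetric interval)
    feed_back = predicted if (outage or extreme) else actual
    return d, outage, feed_back

d, outage, fb = detect(100.0, 60.0, lower=80.0, upper=120.0)  # deep drop
assert outage and d == 2.0 and fb == 100.0
d, outage, fb = detect(100.0, 90.0, lower=80.0, upper=120.0)  # normal bin
assert not outage and d == 0.5 and fb == 90.0
```

Run on the Egypt series, the first example corresponds to the red-flagged outage bins, while bins inflated by the botnet scan fall into the `extreme` branch: inpainted, but never reported as outages.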
], [ "Validation, Calibration and Comparison", "We evaluate the limits and performance of Chocolatine through a validation and a comparison.", "We start by considering a set of verified outages from our ground-truth dataset, which we use to assess the accuracy of our outage detector and look for the best threshold, i.e., the one determining the minimal number of IPs required to make accurate predictions.", "We then use a different set of outages in order to compare Chocolatine against CAIDA's outage detection techniques (using BGP dumps, active probing, and the network telescope data)." ], [ "Validation", "In this section, we evaluate the reliability of our technique using a reference dataset gathering 130 time series that contain outages.", "These time series contain three different types of spatial aggregates—ASs, countries, and regions within countries—from various years (2009 to 2018).", "The duration of these outages spans from an hour to a week.", "The comprehensive list of time series that compose this dataset is given in Table REF .", "As an example, the RIPE NCC and Duke University BGP experiment [25] caused several outages in different ASs worldwide by triggering a bug in some Cisco routers.", "Table: Number of time series per IP threshold and per spatial scale", "Figure: ROC curve with a 5-minute time grain and a threshold of 20", "We evaluate Chocolatine by computing the True Positive Rate (TPR) and the False Positive Rate (FPR), and show our calibration results with a ROC curve.", "Our purpose is twofold: we look into the accuracy of our approach, and we search for its best parameters by exploring its calibration spectrum.", "In particular, we determine which confidence level should be used to assess whether an outage is occurring or not.", "Our aim is to find the best trade-off between the TPR and the FPR by considering our collection of documented outages as the ground truth.", "Moreover, to quantify the ability of our method to maximize the TPR while
keeping the FPR low, we need to set two evaluation parameters used in our ROC analysis.", "On the one hand, we need to find out the minimal intensity required in the time series for our method to operate reliably, and on the other hand, the smallest time granularity at which we can accurately detect outages.", "The intensity of a time series is measured as the median number of observed IPs in a week.", "Trying multiple thresholds showed us that Chocolatine yielded better results with a threshold of 20 IPs, and that increasing this number had little effect on the accuracy.", "The results are presented in Figure REF , where three different ROC curves are plotted: the green curve plots the accuracy for all time series; the red curve plots the accuracy for time series with a median of IP addresses in a week that is greater than 20; the blue curve plots the accuracy for time series with a median of IP addresses in a week that is smaller than 20.", "On the one hand, using a small number of IP addresses provides performance only slightly better than using a random model, which is expected, as the central limit theorem does not hold for samples that are too small.", "On the other hand, the higher the number of IPs, the better the performance (the red curve yields much better results than the blue one).", "The accuracy of our method for all time series (the green curve) is not satisfactory because of the influence of the time series contained in the blue curve.", "As a result, we have chosen to limit our analysis to the time series that have a median of more than 20 IPs per week.", "Table REF summarizes the impact of this threshold on the number of remaining time series.", "Setting this threshold to 20 limits the number of time series that we can analyze to 1625, but it significantly increases the accuracy of our detector.", "Here, we make the assumption that network operators will want to have a low FPR, even if it means missing smaller outages.", "We also found that
the size of the time bins we use can be relatively small (around 5 minutes) without impacting the performance much.", "This analysis is not included due to space constraints.", "To conclude this section, we recommend using a threshold of 20 IPs for the time series and 5-minute time windows as in Figure REF .", "These two parameters can of course be tuned according to the data collection's specificity.", "Using such a threshold and time granularity (we can estimate outage durations at a 5-minute granularity), the best confidence level for the prediction intervals is $99.5\\%$ ($3\\sigma $ ).", "With these settings we obtain an acceptable true positive rate of $90\\%$ while keeping the false positive rate under $2\\%$ ." ], [ "Comparison", "In this section, we compare the performance of our detector to three other techniques hosted in IODA: CAIDA's darknet detector (DN), CAIDA's BGP detector (BGP), and a technique based on active probing (AP), Trinocular [21].", "A description of the integration of these 3 detectors in IODA can be found in [18].", "In order to compare the detectors, we use a second ground-truth sample to emphasize the versatility of Chocolatine on different time series.", "Its set of outages is distinct from the previous one, but still decomposed into 5-minute bins (see Table REF ).", "We ran the 4 detectors and enumerated, for each detector, the number of 5-minute time bins where an outage is detected.", "Fig.", "REF  fig:c1 plots the number of outages detected by IODA's components, and Fig.", "REF  fig:c2 plots how Chocolatine compares against the BGP and Active Probing (AP) detectors.", "Note that the numbers of events given below the name of each detector are events detected only with that technique.", "The intersections depict the number of events detected by multiple detectors.", "For example, there are 1680 BGP events (the sum of each intersection combination in the magenta-based set), 985 of which are also detected by the active probing
technique.", "Comparing Chocolatine with IODA's darknet detector, one can observe that Chocolatine detects over an order of magnitude more outages (1193 compared to 71).", "This result highlights the much higher sensitivity of our approach, while CAIDA's darknet detector is extremely conservative by nature.", "By modeling weekly, and a fortiori daily, patterns, our predictions adaptively follow the time series oscillations, while this is not the case in CAIDA's detector, which uses a global threshold approach.", "Another way to evaluate Chocolatine is to cross-reference the set of alarms it is able to detect against the other detectors (and look at all the intersections).", "When there are intersections, the corresponding events are very likely to be actual outages, i.e., they are true positives.", "Fig.", "REF  fig:c2 shows that the outages detected by Chocolatine are likely to intersect the outages of the other sources.", "Indeed, there are only 251 alarms that are specific to Chocolatine.", "The analysis of these alarms shows us that 59% of them occur in a range of 1 hour around alarms detected by other data sources.", "Generally speaking, these results suggest that our tool is complementary to the two others (BGP and AP) and clearly outperforms IODA's current darknet detector.", "Figure: Comparison of the number of 5-minute time bins identified as outage per detector" ], [ "Reproducibility", "All results presented in this paper are easily reproducible by the research community.", "Our source code is made publicly available [13] and it automatically fetches and processes IBR data, which means that the dataset is also available.", "The code is structured in such a way that one simply needs to format their data according to our data format to be able to launch Chocolatine on different data sources."
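The detector comparison above reduces to counting, per 5-minute bin, which detectors raised an alarm (the Venn-diagram cells), and checking whether detector-specific alarms fall within one hour of alarms from another source. A hedged sketch — the data layout, a set of flagged bin indices per detector, is our assumption:

```python
from collections import Counter

def venn_counts(alarms):
    """alarms maps a detector name to the set of time-bin indices it
    flagged.  Returns a Counter keyed by the frozenset of detectors
    that flagged each bin, i.e. the cell sizes of the Venn diagram."""
    cells = Counter()
    for b in set().union(*alarms.values()):
        cells[frozenset(d for d, bins in alarms.items() if b in bins)] += 1
    return cells

def near_other_sources(own_only_bins, other_bins, window_bins=12):
    """Fraction of detector-specific alarms that fall within
    +/- window_bins (12 x 5 min = 1 hour) of another source's alarm."""
    if not own_only_bins:
        return 0.0
    hits = sum(any(abs(b - o) <= window_bins for o in other_bins)
               for b in own_only_bins)
    return hits / len(own_only_bins)
```

With the paper's numbers, `near_other_sources` over Chocolatine's 251 specific alarms would return roughly 0.59.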
], [ "Conclusion", "In this paper we proposed Chocolatine, which detects remote outages using Internet Background Radiation traffic.", "The underlying predictive methodology is based on SARIMA models.", "The method is easy to deploy, and the data is easy to collect in most ISPs.", "We show that our method detects outages as quickly as 5 minutes after their occurrence, with a $90\\%$ true positive rate and a small percentage of false alarms ($2\\%$ ).", "Chocolatine is able to detect outages in time series with as few as 20 IP addresses.", "Moreover, we compare its performance against other passive and active detectors.", "We observe that the shares of common events (the overall and two-by-two intersections) are the most significant, while each technique also seems able to reveal specific events.", "Our method is tailored to seasonal data and is robust to noise.", "It is therefore applicable to many other data sources reflecting Internet activity.", "For example, we plan to experiment with deploying it on access logs of widely popular content, while its operational integration into CAIDA's IODA outage detection system [17] is already in progress.", "Table: Ground truth — validation (Section )", "Table: Ground truth — comparison (Section )" ], [ "Acknowledgments", "The authors thank Brandon Foubert, Julian Del fiore, and Kenjiro Cho for their valuable comments.", "This work has been partially funded by the IIJ-II summer internship program, and has been made possible in part by a grant from the Cisco University Research Program Fund, an advised fund of Silicon Valley Foundation.", "This research is supported by the National Science Foundation grant CNS-1730661, by the U.S.
Department of Homeland Security S&T Directorate via contract number 70RSAT18CB0000015, by the Air Force Research Laboratory under agreement number FA8750-18-2-0049, and by the Open Technology Fund." ] ]
1906.04426
[ [ "Meta-Learning Neural Bloom Filters" ], [ "Abstract There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression.", "In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.", "In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical.", "This motivates the need for few-shot neural data structures.", "In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning.", "We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks." ], [ "Introduction", "One of the simplest questions one can ask of a set of data is whether or not a given query is contained within it.", "Is $q$ , our query, a member of $S$ , our chosen set of observations?", "This set membership query arises across many computing domains, from databases and network routing to firewalls.", "One could query set membership by storing $S$ in its entirety and comparing $q$ against each element.", "However, more space-efficient solutions exist.", "The original and most widely implemented approximate set membership data structure is the Bloom Filter [3].", "It works by storing sparse distributed codes, produced from randomized hash functions, within a binary vector.", "The Bloom Filter trades off space for an allowed false positive rate, which arises due to hash collisions.", "However its error is one-sided; if an element $q$ is contained in $S$ then it will always be recognized.", "It never emits false negatives.", "One can find Bloom Filters embedded within a wide range of production systems; from network security [17], to block malicious IP addresses;
databases, such as Google's Bigtable [8], to avoid unnecessary disk lookups; cryptocurrency [20], to allow clients to filter irrelevant transactions; search, such as Facebook's typeahead search [1], to filter pages which do not contain query prefixes; and program verification [14], to avoid recomputation over previously observed states.", "While the main appeal of Bloom Filters is favourable compression, another important quality is the support for dynamic updates.", "New elements can be inserted in $\\mathcal {O}(1)$ time.", "This is not the case for all approximate set membership data structures.", "For example, perfect hashing saves $\\approx 40\\%$ space over Bloom Filters but requires a pre-processing stage that is polynomial-time in the number of elements to store [13].", "Whilst the static set membership problem is interesting, it limits the applicability of the algorithm.", "For example, in a database application that is serving a high throughput of write operations, it may be intractable to regenerate the full data-structure upon each batch of writes.", "We thus focus on the data stream computation model [28], where input observations are assumed to be ephemeral and can only be inspected a constant number of times — usually once.", "This captures many real-world applications: network traffic analysis, database query serving, and reinforcement learning in complex domains.", "Devising an approximate set membership data structure that is not only more compressive than Bloom Filters, but can be applied to either dynamic or static sets, could have a significant performance impact on modern computing applications.", "In this paper we investigate this problem using memory-augmented neural networks and meta-learning.", "We build upon the recently growing literature on using neural networks to replace algorithms that are configured by heuristics, or do not take advantage of the data distribution.", "For example, Bloom Filters are indifferent to the data 
distribution.", "They have near-optimal space efficiency when data is drawn uniformly from a universe set [6] (the maximal-entropy case) but (as we shall show) are sub-optimal when there is more structure.", "Prior studies on this theme have investigated compiler optimization [11], computation graph placement [24], and data index structures such as b-trees [23].", "In the latter work, [23] explicitly consider the problem of static set membership.", "By training a neural network over a fixed $S$ (in their case, string inputs) along with held-out negative examples, they observe a $36\\%$ space reduction over a conventional Bloom Filter (the space saving increases to $41\\%$ when an additional trick is incorporated: discretizing and re-scaling the classifier outputs and treating the resulting function as a hash function to a bit-map).", "Crucially, this requires iterating over the storage set $S$ a large number of times to embed its salient information into the weights of a neural network classifier.", "For a new $S$ this process would have to be repeated from scratch.", "Instead of learning from scratch, we draw inspiration from the few-shot learning advances obtained by meta-learning memory-augmented neural networks [31], [35].", "In this setup, tasks are sampled from a common distribution and a network learns to specialize to (learn) a given task with few examples.", "This matches well with applications where many Bloom Filters are instantiated over different subsets of a common data distribution.", "For example, a Bigtable database usually contains one Bloom Filter per SSTable file.", "For a large table that contains Petabytes of data, say, there can be over $100,000$ separate instantiated data-structures which share a common row-key format and query distribution.", "Meta-learning allows us to exploit this common redundancy.", "We design a database task with similar redundancy to investigate this exact application in Section REF .", "The main contributions of this paper
are (1) A new memory-augmented neural network architecture, the Neural Bloom Filter, which learns to write to memory using a distributed write scheme, and (2) An empirical evaluation of the Neural Bloom Filter meta-learned on one-shot approximate set membership problems of varying structure.", "We compare with the classical Bloom Filter alongside other memory-augmented neural networks such as the Differentiable Neural Computer [19] and Memory Networks [34].", "We find that when there is no structure differentiating the storage set elements and queries, the Neural Bloom Filter learns a solution similar to a Bloom Filter derivative — a Bloom-g filter [29] — but when there is a lot of structure the solution can be considerably more compressive (e.g.", "$30\\times $ for a database task)." ], [ "Approximate Set Membership", "The problem of exact set membership is to state whether or not a given query $q$ belongs to a set of $n$ distinct observations $S = \\lbrace x_1, \\ldots , x_n \\rbrace $ where $x_i$ are drawn from a universe set $U$ .", "By counting the number of distinct subsets of size $n$ it can be shown that any such exact set membership tester requires at least $\\log _2 \\binom{|U|}{n}$ bits of space.", "To mitigate the space dependency on $|U|$ , which can be prohibitively large, one can relax the constraint on perfect correctness.", "Approximate set membership allows for a false positive rate of at most $\\epsilon $ .", "Specifically we answer $q \\in A(S)$ where $A(S) \\supseteq S$ and $p(q \\in A(S) - S) \\le \\epsilon $ .", "It can be shown (by counting the minimal number of $A(S)$ sets required to cover all $S \\subset U$ ) that the space requirement for approximate set membership of uniformly sampled observations is at least $n\\log _2(\\frac{1}{\\epsilon })$ bits [6], which can be achieved with perfect hashing.", "So for a false positive rate of $1\\%$ , say, this amounts to $6.6$ bits per element.", "In contrast to storing raw or compressed elements this
can be a huge space saving, for example ImageNet images require 108 KB per image on average when compressed with JPEG, an increase of over four orders of magnitude." ], [ "Bloom Filter", "The Bloom Filter [3] is a data structure which solves the dynamic approximate set membership problem with near-optimal space complexity.", "It assumes access to k uniform hash functions $h_i : U \\rightarrow \\lbrace 1, \\ldots , m \\rbrace , \\; i = 1, \\ldots , k$ such that $p(h_i(x) = j) = 1/m$ independent of prior hash values or input $x$ .", "The Bloom Filter's memory $M \\in [0, 1]^m$ is a binary string of length $m$ which is initialized to zero.", "Writes are performed by hashing an input $x$ to $k$ locations in $M$ and setting the corresponding bits to 1, $M[h_i(x)] \\leftarrow 1; \\; i = 1, \\ldots , k$ .", "For a given query $q$ the Bloom Filter returns true if all corresponding hashed locations are set to 1 and returns false otherwise: $Query(M, q):= M[h_1(q)] \\wedge M[h_2(q)] \\wedge \\ldots \\wedge M[h_k(q)]$ .", "This incurs zero false negatives, as any previously observed input must have enabled the corresponding bits in $M$ , however there can be false positives due to hash collisions.", "To achieve a false positive rate of $\\epsilon $ with minimal space one can set $k = \\log _2{(1 / \\epsilon )}$ and $m = n \\log _2{(1 / \\epsilon )} \\log _2{e}$ , where $e$ is Euler's number.", "The resulting space is a factor of $\\log _2{e} \\approx 1.44$ from the optimal static lower bound given by [6]." 
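A toy implementation of the operations just described, using the parameter choices k = log2(1/eps) hash functions and m = n*log2(1/eps)*log2(e) bits from the text. The salted SHA-256 indexing is our stand-in for the ideal uniform hash family, not a production choice:

```python
import hashlib
import math

class BloomFilter:
    """Textbook Bloom Filter: k salted hashes into an m-bit array."""

    def __init__(self, n, eps):
        # k = log2(1/eps),  m = n * log2(1/eps) * log2(e)
        self.k = max(1, round(math.log2(1 / eps)))
        self.m = max(1, math.ceil(n * math.log2(1 / eps) * math.log2(math.e)))
        self.bits = bytearray((self.m + 7) // 8)

    def _locations(self, item):
        # Derive k pseudo-uniform indices by salting a cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        # Logical OR write: set the k addressed bits to 1.
        for j in self._locations(item):
            self.bits[j // 8] |= 1 << (j % 8)

    def __contains__(self, item):
        # True for every stored item (no false negatives); non-members
        # may collide on all k bits with probability ~eps.
        return all((self.bits[j // 8] >> (j % 8)) & 1
                   for j in self._locations(item))
```

For n = 1000 and eps = 1% this yields k = 7 and m ≈ 9585 bits, i.e. roughly the 9.6 bits per element quoted later in the text.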
], [ "Memory-Augmented Neural Networks", "Recurrent neural networks such as LSTMs retain a small amount of memory via the recurrent state.", "However this is usually tied to the number of trainable parameters in the model.", "There has been recent interest in augmenting neural networks with a larger external memory.", "The method for doing so, via a differentiable write and read interface, was first popularized by the Neural Turing Machine (NTM) [18] and its successor the Differentiable Neural Computer (DNC) [19] in the context of learning algorithms, and by Memory Networks [34] in the context of question answering.", "Memory Networks store embeddings of the input in separate rows of a memory matrix $M$ .", "Reads are performed via a differentiable content-based addressing operation.", "Given a query embedding $q$ we take some similarity measure $D$ (e.g.", "cosine similarity, or negative euclidean distance) against each row in memory and apply a softmax to obtain a soft address vector $a \\propto e^{D(q, M)}$ .", "A read is then a weighted sum over memory $r \\leftarrow a^TM$ .", "The NTM and DNC use the same content-based read mechanism, but also learn to write.", "These models can arbitrate whether to write to slots in memory with similar content (content-based writes), temporally ordered locations, or unused memory.", "When it comes to capacity, there has been consideration of scaling both the DNC and Memory Networks to very large sizes using sparse read and write operations [30], [7].", "However another way to increase the capacity is to increase the amount of compression which occurs in memory.", "Memory Nets can create compressive representations of each input, but cannot compress jointly over multiple inputs because they are hard-wired to write one slot per timestep.", "The NTM and DNC can compress over multiple slots in memory because they can arbitrate writes across multiple locations, but in practice seem to choose very sharp read and write addresses.",
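The content-based read just described — a softmax over similarities, then a weighted sum over memory rows — is a few lines of NumPy. A generic sketch; cosine similarity is one of the similarity measures the text mentions, and the epsilon guard is our addition for numerical safety:

```python
import numpy as np

def content_read(q, M):
    """Soft content-based addressing a ~ exp(D(q, M)) with cosine
    similarity D, followed by the weighted-sum read r = a^T M."""
    sims = (M @ q) / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-8)
    a = np.exp(sims - sims.max())   # numerically stable softmax
    a /= a.sum()
    r = a @ M                       # read word: weighted sum of rows
    return a, r
```

Rows most similar to the query receive the largest address weights, so the read word is dominated by matching memory contents.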
"The Kanerva Machine [37], [38] tackles memory-wide compression using a distributed write scheme to jointly compose and compress its memory contents.", "The model uses content-based addressing over a separate learnable addressing matrix $A$ , instead of the memory $M$ , and thus learns where to write.", "We take inspiration from this scheme." ], [ "Model", "Figure: Overview of the Neural Bloom Filter architecture.", "One approach to learning set membership in one-shot would be to use a recurrent neural network, such as an LSTM or DNC.", "Here, the model sequentially ingests the $N$ elements to store, answers a set of queries using the final state, and is trained by BPTT.", "Whilst this is a general training approach, and the model may learn a compressive solution, it does not scale well to larger numbers of elements.", "Even when $N = 1000$ , backpropagating over a sequence of this length induces computational and optimization challenges.", "For larger values this quickly becomes intractable.", "Alternatively one could store an embedding of each element $x_i \\in S$ in a slot-based Memory Network.", "This is more scalable as it avoids BPTT, because the gradients of each input can be calculated in parallel.", "However Memory Networks are not a space-efficient solution (as shown in Section ) because there is no joint compression of inputs.", "Neural Bloom Filter def controller(x): $\\quad z \\leftarrow f_{enc}(x)$ Input embedding $\\quad q \\leftarrow f_q(z)$ Query word $\\quad a \\leftarrow \\sigma (q^T A)$ Memory address $\\quad w \\leftarrow f_{w}(z) $ Write word def write(x): $\\quad a, w \\leftarrow \\hbox{controller}(x)$ $\\quad M_{t + 1} \\leftarrow M_t + w a^T$ Additive write def read(x): $\\quad a, w, z \\leftarrow \\hbox{controller}(x)$ $\\quad r \\leftarrow \\hbox{flatten}(M \\odot a)$ Read words $\\quad o \\leftarrow f_{out}([r, w, z])$ Output logit This motivates the proposed memory model, the Neural Bloom Filter.", "Briefly, the network is
augmented with a real-valued memory matrix.", "The network addresses memory by classifying which memory slots to read or write to via a softmax, conditioned on the input.", "We can think of this as a continuous analogue to the Bloom Filter's hash function; because it is learned, the network can co-locate or separate inputs to improve performance.", "The network updates memory with a simple additive write operation — i.e.", "no multiplicative gating or squashing — to the addressed locations.", "An additive write operation can be seen as a continuous analogue to the Bloom Filter's logical OR write operation.", "Crucially, the additive write scheme allows us to train the model without BPTT — this is because gradients with respect to the write words $\\partial L / \\partial w = (\\partial L / \\partial M)^T a$ can be computed in parallel.", "Reads involve a component-wise multiplication of address and memory (analogous to the selection of locations in the Bloom Filter via hashing), but instead of projecting this down to a scalar with a fixed function, we pass this through an MLP to obtain a scalar familiarity logit.", "The network is fully differentiable, allows for memories to be stored in a distributed fashion across slots, and is quite simple e.g.", "in comparison to DNCs.", "The full architecture depicted in Figure REF consists of a controller network which encodes the input to an embedding $z \\leftarrow f_{enc}(x)$ and transforms this to a write word $w \\leftarrow f_w(z)$ and a query $q \\leftarrow f_q(z)$ .", "The address over memory is computed via a softmax $a \\leftarrow \\sigma (q^T A)$ over the content-based attention between $q$ and a learnable address matrix $A$ .", "Here, $\\sigma $ denotes a softmax.", "The network thus learns where to place elements or overlap elements based on their content; we can think of this as a soft and differentiable relaxation of the uniform hashing families incorporated by the Bloom Filter (see Appendix REF for further
discussion).", "A write is performed by running the controller to obtain a write word $w$ and address $a$ , and then additively writing $w$ to $M$ , weighted by the address $a$ , $M_{t + 1} \\leftarrow M_t + w a^T$ .", "The simple additive write ensures that the resulting memory is invariant to input ordering (as addition is commutative) and we do not have to backpropagate-through-time (BPTT) over sequential writes — gradients can be computed in parallel.", "A read is performed by also running the controller network to obtain $z, w,$ and $a$ and component-wise multiplying the address $a$ with $M$ , $r \\leftarrow M \\odot a$ .", "The read words $r$ are fed through an MLP along with the residual inputs $w$ and $z$ and are projected to a single scalar logit, indicating the familiarity signal.", "We found this to be more powerful than the conventional read operation $r \\leftarrow a^T M$ used by the DNC and Memory Networks, as it allows for non-linear interactions between rows in memory at the time of read.", "See Algorithm for an overview of the operations.", "To give an example network configuration, we chose $f_{enc}$ to be a 3-layer CNN in the case of image inputs, and a 128-hidden-unit LSTM in the case of text inputs.", "We chose $f_w$ and $f_q$ to be an MLP with a single hidden layer of size 128, followed by layer normalization, and $f_{out}$ to be a 3-layer MLP with residual connections.", "We used a leaky ReLU as the non-linearity.", "Although the described model uses dense operations that scale linearly with the memory size $m$ , we discuss how the model could be implemented for $\\mathcal {O}(\\log m)$ time reads and writes using sparse attention and read/write operations, in Appendix REF .", "Furthermore the model's relation to uniform hashing is discussed in Appendix REF .", "Space Complexity In this section we discuss space lower bounds for the approximate set membership problem when there is some structure to the storage or query set.", "This can help us 
formalise why and where neural networks may be able to beat classical lower bounds to this problem.", "The $n\\log _2{(1 / \\epsilon )}$ lower bound from [6] assumes that all subsets $S \\subset U$ of size $n$ , and all queries $q \\in U$ , have equal probability.", "Whilst it is instructive to bound this maximum-entropy scenario, which we can think of as the `worst case', most applications of approximate set membership, e.g.", "web cache sharing, querying databases, or spell-checking, involve sets and queries that are not sampled uniformly.", "For example, the elements within a given set may be highly dependent, there may be a power-law distribution over queries, or the queries and sets themselves may not be sampled independently.", "A more general space lower bound can be defined by an information-theoretic argument from communication complexity [39].", "Namely, approximate set membership can be framed as a two-party communication problem between Alice, who observes the set $S$ , and Bob, who observes a query $q$ .", "They can agree on a shared policy $\\Pi $ in which to communicate.", "For given inputs $S, q$ they can produce a transcript $A_{S, q} = \\Pi (S, q) \\in \\mathcal {Z}$ which can be processed by a function $g : \\mathcal {Z} \\rightarrow \\lbrace 0, 1\\rbrace $ such that $\\mathbb {P}\\left(g(A_{S, q}) = 1 | q \\notin S \\right) \\le \\epsilon $ .", "[2] shows that the maximum transcript size is at least the mutual information between the inputs and transcript: $\\max _{S, q} |A_{S, q}| \\ge I\\!\\left(S, q ; A_{S, q}\\right) = H(S, q) - H(S, q | A_{S, q})$ .", "Thus we note that problems where we may be able to use less space than the classical lower bound are cases where the entropy $H(S, q)$ is small, e.g.", "our sets are highly non-uniform, or cases where $H(S, q | A_{S, q})$ is large, which signifies that many query and set pairs can be solved with the same transcript.", "Experiments Figure: Sampling strategies on MNIST.", "Space consumption at 1% FPR.", "Our experiments explore scenarios
where set membership can be learned in one-shot with improved compression over the classical Bloom Filter.", "We consider tasks with varying levels of structure in the storage sets $S$ and queries $q$ .", "We compare the Neural Bloom Filter with three memory-augmented neural networks, the LSTM, DNC, and Memory Network, that are all able to write storage sets in one-shot.", "Meta-Learning Training Let $S^{train}$ denote the distribution over sets to store.", "Let $Q^{train}$ denote the distribution over queries.", "$i = 1$ to max train steps Sample task: $\\quad $ Sample set to store: $S \\sim \\mathcal {S}^{train}$ $\\quad $ Sample $t$ queries: $x_1, \\ldots , x_t \\sim Q^{train}$ $\\quad $ Targets: $y_j = 1 \\hbox{ if } x_j \\in S \\hbox{ else } 0; \\; j = 1, \\ldots , t$ Write entries to memory: $M \\leftarrow f_{\\theta }^{write}(S)$ Calculate logits: $o_j = f_{\\theta }^{read}(M, x_j); \\; j = 1, \\ldots , t$ XE loss: $L = -\\sum _{j = 1}^t \\left[ y_j \\log {o_j} + (1 - y_j)\\log (1 - o_j)\\right]$ Backprop through queries and writes: $dL / d\\theta $ Update parameters: $\\theta _{i + 1} \\leftarrow \\hbox{Optimizer}(\\theta _i, dL / d\\theta )$ The training setup follows the memory-augmented meta-learning training scheme of [35], only here the task is familiarity classification versus image classification.", "The network samples tasks which involve classifying familiarity for a given storage set.", "Meta-learning occurs as a two-speed process, where the model quickly learns to recognize a given storage set $S$ within a training episode via writing to a memory or state, and the model slowly learns to improve this fast-learning process by optimizing the model parameters $\\theta $ over multiple tasks.", "We detail the training routine in Algorithm .", "For the RNN baselines (LSTM and DNC) the write operation corresponds to unrolling the network over the inputs and outputting the final state.", "For these models, the query network is simply an MLP classifier which
receives the concatenated final state and query, and outputs a scalar logit.", "For the Memory Network, inputs are stored in individual slots and the familiarity signal is computed from the maximum content-based attention value.", "The Neural Bloom Filter read and write operations are defined in Algorithm .", "Space Comparison We compared the space (in bits) of the model's memory (or state) to a Bloom Filter at a given false positive rate and $0\\%$ false negative rate.", "The false positive rate is measured empirically over a sample of $50,000$ queries for the learned models; for the Bloom Filter we employ the analytical false positive rate.", "Beating a Bloom Filter's space usage with the analytical false positive rate implies better performance for any given Bloom Filter library version (as actual Bloom Filter hash functions are not uniform), thus the comparison is reasonable.", "For each model we sweep over hyper-parameters relating to model size to obtain their smallest operating size at the desired false positive rate (for the full set, see Appendix ).", "Because the neural models can emit false negatives, we store these in an (ideally small) backup Bloom Filter, as proposed by [23], [26].", "We account for the space of this backup Bloom Filter, and add it to the space usage of the model's memory for parity (see Appendix for further discussion).", "The neural network must learn to output a small state in one-shot that can serve set membership queries at a given false positive rate, and emit a small enough number of false negatives such that the backup filter is also small, and the total size is considerably less than a Bloom Filter.", "Sampling Strategies on MNIST Figure: Memory access analysis.", "Three different learned solutions to class-based familiarity.", "We train three Neural Bloom Filter variants, with a succession of simplified read and write mechanisms.", "Each model contains 10 memory slots and the memory addressing weights $a$ and contents
$\\bar{M}$ are visualised, broken down by class.", "Solutions share broad correspondence to known algorithms: (a) Bloom-g filters, (b) Bloom Filters, (c) Perfect hashing.", "To understand in what kinds of scenarios neural networks may be more (or less) compressive than classical Bloom Filters, we consider three simple set membership tasks that have a graded level of structure to the storage sets and queries.", "Concretely, they differ in the sampling distribution of storage sets $\\mathcal {S}^{train}$ and queries $\\mathcal {Q}^{train}$ .", "However all problems are approximate set membership tasks that can be solved by a Bloom Filter.", "The tasks are (1) Class-based familiarity, a highly structured task where each set of images is sampled with the constraint that they arise from the same randomly-selected class.", "(2) Non-uniform instance-based familiarity, a moderately structured task where the images are sampled without replacement from an exponential distribution.", "(3) Uniform instance-based familiarity, a completely unstructured task where each subset contains images sampled uniformly without replacement.", "For each task we varied the size of the sample set to store, and calculated the space (in bits) of each model's state at a fixed false positive rate of $1\\%$ and a false negative rate of $0\\%$ .", "We used relatively small storage set sizes (e.g.", "$100 - 1000$ ) to start with, as this highlights that some RNN-based approaches struggle to train over larger set sizes, before progressing to larger sets in subsequent sections.", "See Appendix for further details on the task setup.", "In the class-based sampling task we see in Figure REF a that the DNC, LSTM and Neural Bloom Filter are able to significantly outperform the classical Bloom Filter when images are sampled by class.", "The Memory Network is able to solve the task with a word size of only 2, however this corresponds to a far greater number of bits per element, 64 versus the Bloom Filter's $9.8$ (to
a total size of $4.8$ kb), and so the overall size was prohibitive.", "The DNC, LSTM, and Neural Bloom Filter are able to solve the task with a storage set size of 500 at $1.1$ kb, 217b, and 382b; a $4.3\\times $ , $22\\times $ , and $12\\times $ saving respectively.", "For the non-uniform sampling task in Figure REF b we see the Bloom Filter is preferable for fewer than 500 stored elements, but is overtaken thereafter.", "At 1000 elements the DNC, LSTM, and Neural Bloom Filter consume $7.9$ kb, $7.7$ kb, and $6.8$ kb respectively, which corresponds to a $17.6\\%$ , $19.7\\%$ , and $28.6\\%$ reduction over the $9.6$ kb Bloom Filter.", "In the uniform sampling task shown in Figure REF c, there is no structure to the sampling of $S$ .", "The two architectures which rely on BPTT essentially fail to solve the task at some threshold of storage size.", "The Neural Bloom Filter solves it with $6.8$ kb (using a memory size of 50 and word size of 2).", "The overall conclusion from these sets of experiments is that the classical Bloom Filter works best when there is no structure to the data; however when there is (e.g.", "skewed data, or highly dependent sets that share common attributes) we do see significant space savings.", "Memory Access Analysis We wanted to understand how the Neural Bloom Filter uses its memory, and in particular how its learned solutions may correspond to classical algorithms.", "We inspected the memory contents (what was stored to memory) and addressing weights (where it was stored) for a small model of 10 memory slots and a word size of 2, trained on the MNIST class-based familiarity task.", "We plot this for each class label, and compare the pattern of memory usage to two other models that use increasingly simpler read and write operations: (1) an ablated model with constant write words $w \\leftarrow \\mathbf {1}$ , and (2) an ablated model with $w \\leftarrow \\mathbf {1}$ and a linear read operator $r \\leftarrow a^T M$ .", "The full model, shown 
in Figure REF a learns to place some classes in particular slots, e.g.", "class $1 \\rightarrow $ slot 5, however most are distributed.", "Inspecting the memory contents, it is clear the write word encodes a unique 2D token for each class.", "This solution bears resemblance to Bloom-g Filters [29] where elements are spread across a smaller memory with the same hashing scheme as Bloom Filters, but a unique token is stored in each slot instead of a constant 1-bit value.", "With the model ablated to store only 1s in Figure REF b we see it uses semantic addressing codes for some classes (e.g.", "0 and 1) and distributed addresses for other classes.", "E.g.", "for class 3 the model prefers to uniformly spread its writes across memory slots 1, 4, and 8.", "The model solution is similar to that of Bloom Filters, with distributed addressing codes as a solution — but no information in the written words themselves.", "When we force the read operation to be linear in Figure REF c, the network maps each input class to a unique slot in memory.", "This solution has a correspondence with perfect hashing.", "In conclusion, with small changes to the read/write operations we see the Neural Bloom Filter learn different algorithmic solutions.", "Database Queries Table: Database task.", "Storing 5000 row-key strings for a target false positive rate.", "We look at a task inspired by database interactions.", "NoSQL databases, such as Bigtable and Cassandra, use a single string-valued row-key, which is used to index the data.", "The database is composed of a union of files (e.g.", "SSTables) storing contiguous row-key chunks.", "Bloom Filters are used to determine whether a given query $q$ lies within the stored set.", "We emulate this setup by constructing a universe of strings, that is alphabetically ordered, and by sampling contiguous ranges (to represent a given SSTable).", "Queries are sampled uniformly from the universe set of strings.", "We choose the $2.5M$ unique tokens in the 
GigaWord v5 news corpus to be our universe as this consists of structured natural data and some noisy or irregular strings.", "We consider the task of storing sorted string sets of size 5000.", "We train the Neural Bloom Filter to several desired false positive rates ($5\\%, 1\\%, 0.1\\%$ ) and use a backup Bloom Filter to guarantee a $0\\%$ false negative rate.", "We also trained LSTMs and DNCs for comparison, but they failed to learn a solution to the task after several days of training; optimizing insertions via BPTT over a sequence of length 5000 did not result in a remotely usable solution.", "The Neural Bloom Filter avoids BPTT via its simple additive write scheme, and so it learned to solve the task quite naturally.", "As such, we compare the Neural Bloom Filter solely to classical data structures: Bloom Filters and Cuckoo Filters.", "In Table REF we see a significant space reduction of $3-40\\times $ , where the margin grows with increasing permitted false positive rates.", "Since memory is an expensive component within production databases (in contrast to disk, say), this memory space saving could translate to a non-trivial cost reduction.", "We note that a storage size of 5000 may appear small, but is relevant to the NoSQL database scenario where disk files (e.g.", "SSTables) are typically sharded to be several megabytes in size, to avoid issues with compaction.", "E.g.", "if the stored values were of size 10kB per row, we would expect 5000 unique keys or fewer in an average Bigtable SSTable.", "One further consideration for production deployment is the ability to extrapolate to larger storage set sizes during evaluation.", "We investigate this for the Neural Bloom Filter on the same database task, and compare it to an LSTM.", "To ensure both models train, we set the maximum training storage set size to 200 and evaluate up to sizes 250, a modest $25\\%$ size increase.", "We find that the Neural Bloom Filter uses up to $3\\times $ less space than the LSTM 
and the neural models are able to extrapolate to larger set sizes than those observed during training (see Appendix Figure REF ).", "Whilst the performance eventually degrades when the training limit size is exceeded, it is not catastrophic for either the LSTM or Neural Bloom Filter.", "Timing benchmark Table: Latency for a single query, and throughput for a batch of 10,000 queries.", "*Query-efficient Bloom Filter from .", "We have principally focused on space comparisons in this paper; we now consider speed for the database task described in the prior section.", "We measure latency as the wall-clock time to complete a single insertion or query of a row-key string of length 64.", "We also measure throughput as the reciprocal wall-clock time of inserting or querying $10,000$ strings.", "We use a common encoder architecture for the neural models, a 128-hidden-unit character LSTM.", "We benchmark the models on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) and on the GPU (NVIDIA Quadro P6000) with models implemented in TensorFlow without any model-specific optimizations.", "We compare to empirical timing results published in a query-optimized Bloom Filter variant [10].", "We include the Learned Index from [23] to contrast timings with a model that is not one-shot.", "The architecture is simply the LSTM character encoder; inserts are performed via gradient descent.", "The number of gradient-descent steps to obtain convergence is domain-dependent; we chose 50 steps in our timing benchmarks.", "The Learned Index queries are obtained by running the character LSTM over the input and classifying familiarity — and thus query metrics are identical to the LSTM baseline.", "We see in Table REF that the combined query and insert latency of the Neural Bloom Filter and LSTM sits at 5ms on the CPU, around $400\\times $ slower than the classical Bloom Filter.", "The Learned Index contains a much larger latency of 780ms due to the sequential application of gradients.", "For 
all neural models, latency is not improved when operations are run on the GPU.", "However when multiple queries are received, the throughput of GPU-based neural models surpasses the classical Bloom Filter due to efficient concurrency of the dense linear algebra operations.", "This leads to the conclusion that a Neural Bloom Filter could be deployed in scenarios with high query load without a catastrophic decrease in throughput, if GPU devices are available.", "For insertions we see a bigger separation between the one-shot models: the LSTM and Neural Bloom Filter.", "Whilst all neural models are uncompetitive on the CPU, the Neural Bloom Filter surpasses the Bloom Filter's insertion throughput when placed on the GPU, with $101K$ insertions per second (IPS).", "The LSTM runs at $4.6K$ IPS, one order of magnitude slower, because writes are serial, and the Learned Index structure is two orders of magnitude slower at 816 IPS due to sequential gradient computations.", "The benefits of the Neural Bloom Filter's simple write scheme are apparent here.", "Related Work There have been a large number of Bloom Filter variants published: Counting Bloom Filters which support deletions [16], Bloomier Filters which store functions rather than sets [9], Compressed Bloom Filters which use arithmetic encoding to compress the storage set [25], and Cuckoo Filters which use cuckoo hashing to reduce redundancy within the storage vector [15].", "Although some of these variants focus on better compression, they do not achieve this by specializing to the data distribution.", "Among the few works which address data-dependence are Weighted Bloom Filters [5], [36].", "They work by modulating the number of hash functions used to store or query each input, dependent on its storage and query frequency.", "This requires estimating a large number of separate storage and query frequencies.", "This approach can be useful for imbalanced data distributions, such as the non-uniform instance-based MNIST 
familiarity task.", "However it cannot take advantage of dependent sets, such as the class-based MNIST familiarity task, or the database query task.", "We see the Neural Bloom Filter is more compressive in all settings.", "[33] proposes a neurally-inspired set membership data-structure that works by replacing the randomized hash functions with a randomly-wired computation graph of OR and AND gates.", "The false positive rate is controlled analytically by modulating the number of gates and the overall memory size.", "However there is no learning or specialization to the data with this setup.", "[4] investigates a learnable neural familiarity module, which serves as a biologically plausible model of familiarity mechanisms in the brain, namely within the perirhinal cortex.", "However this has not been shown to be empirically effective at exact matching.", "[23] consider the use of a neural network to classify the membership of queries to a fixed set $S$ .", "Here the network itself is more akin to a perfect hashing setup where multiple epochs are required to find a succinct holistic representation of the set, which is embedded into the weights of the network.", "In their case this search is performed by gradient-based optimization.", "We emulate their experimental comparison approach but instead propose a memory architecture that represents the set as activations in memory, versus weights in a network.", "[26] discusses the benefits and drawbacks of a learned Bloom Filter, distinguishing the empirical false positive rate over the distribution of sets $S$ versus the conditional false positive rate of the model given a particular set $S$ .", "In this paper we focus on the empirical false positive rate because we wish to exploit redundancy in the data and query distribution.", "[27] also considers an alternate way to combine classical and learned Bloom Filters by `sandwiching' the learned model with pre-filter and post-filter classical Bloom Filters to further reduce 
space.", "Conclusion In many situations neural networks are not a suitable replacement for Bloom Filters and their variants.", "The Bloom Filter is robust to changes in data distribution because it delivers a bounded false positive rate for any sampled subset.", "However in this paper we consider the question, “When might a single-shot neural network provide better compression than a Bloom Filter?”", "We see that a model which uses an external memory with an adaptable capacity, avoids BPTT with a feed-forward write scheme, and learns to address its memory, is the most promising option in contrast to popular memory models such as DNCs and LSTMs.", "We term this model the Neural Bloom Filter due to the analogous incorporation of a hashing scheme, commutative write scheme, and multiplicative read mechanism.", "The Neural Bloom Filter relies on settings where we have an off-line dataset (both of stored elements and queries) that we can meta-learn over.", "In the case of a large database we think this is warranted: a database with 100K separate set membership data structures will benefit from a single (or periodic) meta-learning training routine that can run on a single machine and sample from the currently stored data, generating a large number of efficient data-structures.", "We envisage the space cost of the network to be amortized by sharing it across many neural Bloom Filters, and the time-cost of executing the network to be offset by the continuous acceleration of dense linear algebra on modern hardware, and the ability to batch writes and queries efficiently.", "A promising future direction would be to investigate the feasibility of this approach in a production system.", "Acknowledgments We thank Peter Dayan, Yori Zwols, Yan Wu, Joel Leibo, Greg Wayne, Andras Gyorgy, Charles Blundell, Daan Wierstra, Pushmeet Kohli, and Tor Lattimore for their insights during this project.", "Further Model Details Efficient addressing We discuss some implementation tricks that 
could be employed for a production system.", "Firstly the original model description defines the addressing matrix $A$ to be trainable.", "This ties the number of parameters in the network to the memory size.", "It may be preferable to train the model at a given memory size and evaluate for larger memory sizes.", "One way to achieve this is by allowing the addressing matrix $A$ to be non-trainable.", "We experiment with this, allowing $A \\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ to be a fixed sample of Gaussian random variables.", "We can think of these as points on a sphere in high-dimensional space; the controller network must learn to organize inputs into separate buckets across the surface of the sphere.", "To make the addressing more efficient for larger memory sizes, we experiment with sparsification of the addressing softmax by preserving only the top $k$ components.", "We denote this sparse softmax $\\sigma _k(\\cdot )$ .", "When using a sparse address, we find the network can fixate on a subset of rows.", "This observation is common to prior sparse addressing work [32].", "We find sphering the query vector, often dubbed whitening, remedies this (see Appendix for an ablation).", "The modified sparse architecture variant is illustrated in Algorithm REF .", "[] Sparse Neural Bloom Filter [1] def sparse_controller(x): $\\quad z \\leftarrow f_{enc}(x)$ $\\quad s \\leftarrow f_q(z)$ Raw query word $\\quad q \\leftarrow moving\\_zca(s)$ Spherical query $\\quad a \\leftarrow \\sigma _k(q^T A)$ Sparse address $\\quad w \\leftarrow f_{w}(z) $ def sparse_write(x): $\\quad a, w \\leftarrow \\hbox{sparse\\_controller}(x)$ $\\quad M_{t + 1}[a_{idx}] \\leftarrow M_t[a_{idx}] + w a_{val}^T$ def sparse_read(x): $\\quad a, w, z \\leftarrow \\hbox{sparse\\_controller}(x)$ $\\quad r \\leftarrow M[a_{idx}] \\odot a_{val}$ $\\quad o \\leftarrow f_{out}([r, w, z])$ One can avoid the linear-time distance computation $q^TA$ in the addressing operation $\\sigma _k(q^TA)$ by 
using an approximate k-nearest neighbour index, such as locality-sensitive hashing [12], to extract the nearest neighbours from $A$ in $\\mathcal {O}(\\log m)$ time.", "The use of an approximate nearest neighbour index has been empirically considered for scaling memory-augmented neural networks [30], [22]; however this was used for attention on $M$ directly.", "As $M$ is dynamic the knn requires frequent re-building as memories are stored or modified.", "This architecture is simpler — $A$ is fixed and so the approximate knn can be built once.", "To ensure the serialized size of the network (which can be shared across many memory instantiations) is independent of the number of slots in memory $m$ we can avoid storing $A$ .", "In the instance that it is not trainable, and is simply a fixed sample of random variables that are generated from a deterministic random number generator — we can instead store a set of integer seeds that can be used to re-generate the rows of $A$ .", "We can let the $i$ -th seed $c_i$ , say represented as a 16-bit integer, correspond to the set of 16 rows with indices $16 i, 16i + 1, \\ldots , 16i + 15$ .", "If these rows need to be accessed, they can be regenerated on-the-fly by $c_i$ .", "The total memory cost of $A$ is thus $m$ bits, where $m$ is the number of memory slots (one can replace 16 with 32 if there are more than one million slots).", "Putting these two together it is possible to query and write to a Neural Bloom Filter with $m$ memory slots in $\\mathcal {O}(\\log m)$ time, where the network consumes $\\mathcal {O}(1)$ space.", "It is worth noting, however, the Neural Bloom Filter's memory is often much smaller than the corresponding classical Bloom Filter's memory, and in many of our experiments is even smaller than the number of unique elements to store.", "Thus dense matrix multiplication can still be preferable, especially due to its acceleration on GPUs and TPUs [21], and a dense representation of $A$ is not inhibitory.", "As 
model optimization can become application-specific, we do not focus on these implementation details and use the model in its simplest setting with dense matrix operations.", "Moving ZCA The moving ZCA was computed by taking moving averages of the first and second moment, calculating the ZCA matrix and updating a moving average projection matrix $\\theta _{zca}$ .", "This is only done during training; at evaluation time $\\theta _{zca}$ is fixed.", "We describe the update below for completeness.", "$& \\hbox{Input: } s \\leftarrow f_{q}(z) \\\\& \\mu _{t+1} \\leftarrow \\gamma \\mu _t + (1 - \\gamma ) \\bar{s} & \\hbox{1st moment EMA} \\\\& \\Sigma _{t + 1} \\leftarrow \\gamma \\Sigma _{t} + (1 - \\gamma ) \\; s^T s & \\hbox{2nd moment EMA} \\\\& U, s, \\_ \\leftarrow \\mathtt {svd}(\\Sigma - \\mu ^2) & \\hbox{Singular values} \\\\& W \\leftarrow U \\, \\hbox{diag}(1 / \\sqrt{s}) \\, U^T & \\hbox{ZCA matrix}\\\\& \\theta _{zca} \\leftarrow \\eta \\theta _{zca} + (1 - \\eta ) W & \\hbox{ZCA EMA} \\\\& q \\leftarrow s \\; \\theta _{zca} & \\hbox{Projected query}$ In practice we do not compute the singular value decomposition at each time step to save computational resources, but instead calculate it and update $\\theta $ every $T$ steps.", "We scale the discount in this case $\\eta ^{\\prime } = \\eta / T$ .", "Relation to uniform hashing We can think of the decorrelation of $s$ , along with the sparse content-based attention with $A$ , as a hash function that maps $s$ to several indices in $M$ .", "For moderate dimension sizes of $s$ (256, say) we note that the Gaussian samples in $A$ lie close to the surface of a sphere, uniformly scattered across it.", "If $q$ , the decorrelated query, were to be Gaussian then the marginal distribution of nearest-neighbour rows in $A$ will be uniform.", "If we choose the number of nearest neighbours $k = 1$ then this implies the slots in $M$ are selected independently with uniform probability.", "This is the exact hash function specification that Bloom 
Filters assume.", "Instead we use a continuous (as we choose $k > 1$ ) approximation (as we decorrelate $s \\rightarrow q$ vs Gaussianize) to this uniform hashing scheme, so it is differentiable and the network can learn to shape query representations.", "Figure: Database extrapolation task.", "Models are trained up to sets of size 200 (dashed line).", "We see extrapolation to larger set sizes on test set, but performance degrades.", "Neural architectures perform best for larger allowed false positive rates.", "Space Comparison For each task we compare the model's memory size, in bits, at a given false positive rate — usually chosen to be $1\\%$ .", "For our neural networks which output a probability $p = f(x)$ one could select an operating point $\\tau _{\\epsilon }$ such that the false positive rate is $\\epsilon $ .", "In all of our experiments the neural network outputs a memory (state) $s$ which characterizes the storage set.", "Let us say SPACE(f, $\\epsilon $ ) is the minimum size of $s$ , in bits, for the network to achieve an average false positive rate of $\\epsilon $ .", "We could compare SPACE(f,$\\epsilon $ ) with SPACE(Bloom Filter,$\\epsilon $ ) directly, but this would not be a fair comparison as our network $f$ can emit false negatives.", "To remedy this, we employ the same scheme as [23] where we use a `backup' Bloom Filter with false positive rate $\\delta $ to store all false negatives.", "When $f(x) < \\tau _{\\epsilon }$ we query the backup Bloom Filter.", "Because the overall false positive rate is $\\epsilon + (1 - \\epsilon ) \\delta $ , to achieve a false positive rate of at most $\\alpha $ (say $1\\%$ ) we can set $\\epsilon = \\delta = \\alpha / 2$ .", "The number of elements stored in the backup bloom filter is equal to the number of false negatives, denoted $n_{fn}$ .", "Thus the total space can be calculated, TOTAL_SPACE(f,$\\alpha $ ) = SPACE(f,$\\frac{\\alpha }{2}$ ) + $n_{fn}$ * SPACE(Bloom Filter,$\\frac{\\alpha }{2}$ ).", "We 
compare this quantity for different storage set sizes.", "Model Size For the MNIST experiments we used a 3-layer convolutional neural network with 64 filters followed by a two-layer feed-forward network with 64 and 128 hidden units respectively.", "The number of trainable parameters in the Neural Bloom Filter (including the encoder) is $243,437$ , which amounts to $7.8$ Mb at 32-bit precision.", "We did not optimize the encoder architecture to be lean, as we consider it part of the library in a sense.", "For example, we do not count the size of the hashing library that an implemented Bloom Filter relies on, which may have a chain of dependencies, or the package size of TensorFlow used for our experiments.", "Nevertheless we can reason that when the Neural Bloom Filter is 4kb smaller than the classical filter, such as for the non-uniform instance-based familiarity in Figure REF b, we would expect to see a net gain if we have a collection of at least $1,950$ data-structures.", "We imagine this could be optimized quite significantly, by using 16-bit precision and perhaps using more convolution layers or smaller feed-forward linear operations.", "For the database experiments we used an LSTM character encoder with 256 hidden units followed by a 256-unit feed-forward layer.", "The number of trainable parameters in the Neural Bloom Filter is $419,339$ , which amounts to 13Mb.", "One could imagine optimizing this by switching to a GRU or investigating temporal convolutions as encoders.", "Hyper-Parameters We swept over the following hyper-parameters, over the range of memory sizes displayed for each task.", "We computed the best model parameters by selecting those which resulted in a model consuming the least space as defined in Appendix .", "This depends on model performance as well as state size.", "The Memory Network's memory size was fixed to equal the input size (as the model does not arbitrate what inputs to avoid writing).", "Table: Hyper-parameters considered Experiment 
Details For the class-based familiarity task, and uniform sampling task, the model was trained on the training set and evaluated on the test set.", "For the class-based task sampling, a class is sampled at random and $S$ is formed from a random subset of images from that class.", "The queries $q$ are chosen uniformly from either $S$ or from images of a different class.", "For the non-uniform instance-based familiarity task we sampled images from an exponential distribution.", "Specifically we used a fixed permutation of the training images, and from that ordering chose $p(i^{th} \\hbox{ image}) \\propto 0.999^i$ for the images to store.", "The query images were selected uniformly.", "We used a fixed permutation (or shuffle) of the images to ensure most probability mass was not placed on images of a certain class.", "I.e.", "by the natural ordering of the dataset we would have otherwise almost always sampled images of the 0 class.", "This would have confounded task non-uniformity with other latent structure in the sets.", "Because the network needed to relate the image to its frequency of occurrence for this task, the models were evaluated on the training set.", "This is reasonable as we are not wishing for the model to visually generalize to unseen elements in the setting of this exact-familiarity task.", "We specifically want the network weights to compress a map of image to probability of storage.", "For the database task a universe of $2.5M$ unique tokens was extracted from GigaWord v5.", "We shuffled the tokens and placed $2.3$ M in a training set and 250K in a test set.", "These sets were then sorted alphabetically.", "A random subset, representing an SSTable, was sampled by choosing a random start index and selecting the next $n$ elements, which form our set $S$ .", "Queries are sampled uniformly at random from the universe set.", "Models are trained on the training set and evaluated on the test set.", "Database Extrapolation Task We investigate whether neural models are able to 
extrapolate to larger test sizes.", "Using the database task setup, where each set contains a contiguous set of sorted strings, we train both the Neural Bloom Filter and LSTM on sets of sizes 2 - 200.", "We then evaluate on sets up to 250, i.e.", "a 25% increase over what is observed during training.", "This is to emulate the scenario that we train on a selection of database tablets, but during evaluation we may observe some tablets that are slightly larger than those in the training set.", "Both the LSTM and Neural Bloom Filter are able to solve the task, with the Neural Bloom Filter using significantly less space for the larger allowed false positive rates of 5% and 1%.", "We do see the models' error increase as they surpass the maximum training set size, however it is not catastrophic.", "Another interesting trend is noticeable: the neural models have higher utility for larger allowed false positive rates.", "This may be because of the difficulty in training the models to an extremely low error rate.", "Effect of Sphering We see the benefit of sphering in Figure REF where the converged validation performance settles at a higher level.", "Investigating the proportion of memory filled after all elements have been written in Figure REF , we see the model uses quite a small proportion of its memory slots.", "This is likely due to the network fixating on rows it has accessed with sparse addressing, and ignoring rows it has otherwise never touched — a phenomenon noted in [32].", "The model finds a local minimum in continually storing and accessing the same rows in memory.", "The effect of sphering is that the query now appears to be Gaussian (up to the first two moments) and so the nearest neighbour in the address matrix A (which is initialized to Gaussian random variables) will be close to uniform.", "This results in a more uniform memory access (as seen in Figure REF ) which significantly aids performance (as seen in Figure REF ).", "Figure: For sparse addresses, sphering 
enables the model to learn the task of set membership to high accuracy.", "Figure: For sparse addresses, sphering the query vector leads to fewer collisions across memory slots and thus a higher utilization of memory.", "Timing Benchmark We use the Neural Bloom Filter network architecture for the large database task (Table REF ).", "The network uses an encoder LSTM with 256 hidden units over the characters, and feeds this through a 256 fully connected layer to encode the input.", "A two-layer 256-hidden-unit MLP is used as the query architecture.", "The memory and word sizes are 8 and 4 respectively, and so the majority of the compute is spent in the encoder and query network.", "We compare this with an LSTM containing 32 hidden units.", "We benchmark the single-query latency of the network alongside the throughput of a batch of queries, and a batch of inserts.", "The Neural Bloom Filter and LSTM are implemented in TensorFlow without any custom kernels or specialized code.", "We benchmark them on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) and a GPU (NVIDIA Quadro P6000).", "We compare to empirical timing results published in a query-optimized Bloom Filter variant [10].", "It is worth noting, in several Bloom Filter applications, the actual query latency is not in the critical path of computation.", "For example, for a distributed database, the network latency and disk access latency for one tablet can be orders of magnitude greater than the in-memory latency of a Bloom Filter query.", "For this reason, we have not made run-time a point of focus in this study, and it is implicitly assumed that the neural network is trading off greater latency for less space.", "However it is worth checking whether run-time could be prohibitive." 
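The additive, commutative write and multiplicative read that the model relies on can be sketched in a few lines of NumPy. This toy omits the learned encoder and query networks, and its fixed Gaussian addressing matrix, dense softmax address, and all shapes are illustrative assumptions rather than the trained architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class NeuralBloomFilterSketch:
    """Toy write/read primitives; the learned networks f_enc, f_q, f_w are omitted."""

    def __init__(self, memory_slots=10, word_size=2, query_size=4, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed Gaussian addressing matrix A, one column per memory slot.
        self.A = rng.standard_normal((query_size, memory_slots))
        self.M = np.zeros((memory_slots, word_size))  # memory state

    def address(self, q):
        return softmax(q @ self.A)  # content-based address over slots

    def write(self, q, w):
        # Additive and commutative: insertion order does not matter,
        # so a batch of writes needs no backprop-through-time.
        self.M += np.outer(self.address(q), w)

    def read(self, q):
        return self.M * self.address(q)[:, None]  # multiplicative read
```

Because each write is a sum of outer products, inserting elements in any order yields the same memory, which is what allows insertions to be batched efficiently on a GPU.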
], [ "Space Complexity", "In this section we discuss space lower bounds for the approximate set membership problem when there is some structure to the storage or query set.", "This can help us formalize why and where neural networks may be able to beat classical lower bounds for this problem.", "The $n\\log _2{(1 / \\epsilon )}$ lower bound from [6] assumes that all subsets $S \\subset U$ of size $n$ , and all queries $q \\in U$ , have equal probability.", "Whilst it is instructive to bound this maximum-entropy scenario, which we can think of as `worst case', most applications of approximate set membership, e.g.", "web cache sharing, querying databases, or spell-checking, involve sets and queries that are not sampled uniformly.", "For example, the elements within a given set may be highly dependent, there may be a power-law distribution over queries, or the queries and sets themselves may not be sampled independently.", "A more general space lower bound can be defined by an information theoretic argument from communication complexity [39].", "Namely, approximate set membership can be framed as a two-party communication problem between Alice, who observes the set $S$ , and Bob, who observes a query $q$ .", "They can agree on a shared policy $\\Pi $ in which to communicate.", "For given inputs $S, q$ they can produce a transcript $A_{S, q} = \\Pi (S, q) \\in \\mathcal {Z}$ which can be processed by $g : \\mathcal {Z} \\rightarrow \\lbrace 0, 1\\rbrace $ such that $\\mathbb {P}\\left(g(A_{S, q}) = 1 | q \\notin S \\right) \\le \\epsilon $ .", "[2] shows that the maximum transcript size is greater than the mutual information between the inputs and transcript: $\\max _{S, q} |A_{S, q}| \\ge I\\!\\left(S, q ; A_{S, q}\\right) = H(S, q) - H(S, q | A_{S, q})$ .", "Thus we note problems where we may be able to use less space than the classical lower bound are cases where the entropy $H(S, q)$ is small, e.g.", "our sets are highly non-uniform, or cases where $H(S, q | A_{S, q})$ is large, which 
signifies that many query and set pairs can be solved with the same transcript." ], [ "Experiments", "Our experiments explore scenarios where set membership can be learned in one-shot with improved compression over the classical Bloom Filter.", "We consider tasks with varying levels of structure in the storage sets $S$ and queries $q$ .", "We compare the Neural Bloom Filter with three memory-augmented neural networks, the LSTM, DNC, and Memory Network, that are all able to write storage sets in one-shot.", "[tb] Meta-Learning Training [1] Let $S^{train}$ denote the distribution over sets to store.", "Let $Q^{train}$ denote the distribution over queries.", "$i = 1$ to max train steps Sample task: $\\quad $ Sample set to store: $S \\sim \\mathcal {S}^{train}$ $\\quad $ Sample $t$ queries: $x_1, \\ldots , x_t \\sim Q^{train}$ $\\quad $ Targets: $y_j = 1 \\hbox{ if } x_j \\in S \\hbox{ else } 0; \\; j = 1, \\ldots , t$ Write entries to memory: $M \\leftarrow f_{\\theta }^{write}(S)$ Calculate logits: $o_j = f_{\\theta }^{read}(M, x_j); \\; j = 1, \\ldots , t$ XE loss: $L = -\\sum _{j = 1}^t \\left[ y_j \\log {o_j} + (1 - y_j) \\log {(1 - o_j)} \\right]$ Backprop through queries and writes: $dL / d\\theta $ Update parameters: $\\theta _{i + 1} \\leftarrow \\hbox{Optimizer}(\\theta _i, dL / d\\theta )$ The training setup follows the memory-augmented meta-learning training scheme of [35], only here the task is familiarity classification versus image classification.", "The network samples tasks which involve classifying familiarity for a given storage set.", "Meta-learning occurs as a two-speed process, where the model quickly learns to recognize a given storage set $S$ within a training episode via writing to a memory or state, and the model slowly learns to improve this fast-learning process by optimizing the model parameters $\\theta $ over multiple tasks.", "We detail the training routine in Algorithm .", "For the RNN baselines (LSTM and DNC) the write operation corresponds to unrolling 
the network over the inputs and outputting the final state.", "For these models, the query network is simply an MLP classifier which receives the concatenated final state and query, and outputs a scalar logit.", "For the Memory Network, inputs are stored in individual slots and the familiarity signal is computed from the maximum content-based attention value.", "The Neural Bloom Filter read and write operations are defined in Algorithm ." ], [ "Space Comparison", "We compared the space (in bits) of the model's memory (or state) to a Bloom Filter at a given false positive rate and $0\%$ false negative rate.", "The false positive rate is measured empirically over a sample of $50,000$ queries for the learned models; for the Bloom Filter we employ the analytical false positive rate.", "Beating a Bloom Filter's space usage with the analytical false positive rate implies better performance for any given Bloom Filter library version (as actual Bloom Filter hash functions are not uniform); thus the comparison is reasonable.", "For each model we sweep over hyper-parameters relating to model size to obtain their smallest operating size at the desired false positive rate (for the full set, see Appendix ).", "Because the neural models can emit false negatives, we store these in an (ideally small) backup Bloom Filter, as proposed by [23], [26].", "We account for the space of this backup Bloom Filter, and add it to the space usage of the model's memory for parity (see Appendix for further discussion).", "The neural network must learn to output a small state in one-shot that can serve set membership queries at a given false positive rate, and emit a small enough number of false negatives such that the backup filter is also small, and the total size is considerably less than a Bloom Filter."
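The accounting above can be sketched numerically as follows. This is our own illustrative Python (the helper names are ours), using the standard analytical size of an optimal Bloom Filter, $n \log_2(1/\epsilon) \log_2 e$ bits, and splitting the allowed false positive rate evenly between the model and the backup filter:

```python
import math

def bloom_filter_bits(n_items, fp_rate):
    """Analytical size of an optimal Bloom Filter: n * log2(1/eps) * log2(e) bits."""
    if n_items == 0:
        return 0.0
    return n_items * math.log2(1.0 / fp_rate) * math.log2(math.e)

def total_space_bits(model_state_bits, n_false_negatives, target_fp_rate):
    """Model state plus a backup Bloom Filter holding the model's false negatives.

    The allowed false positive rate is split evenly between the model and the
    backup filter, so the combined rate stays at most target_fp_rate.
    """
    eps = target_fp_rate / 2.0
    return model_state_bits + bloom_filter_bits(n_false_negatives, eps)
```

At $\epsilon = 1\%$ this analytical formula gives roughly $9.6$ bits per stored element for the classical filter; practical implementations round the number of hash functions to an integer and so sit slightly higher.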
], [ "Sampling Strategies on MNIST", "To understand what kinds of scenarios neural networks may be more (or less) compressive than classical Bloom Filters, we consider three simple set membership tasks that have a graded level of structure to the storage sets and queries.", "Concretely, they differ in the sampling distribution of storage sets $\\mathcal {S}^{train}$ and queries $\\mathcal {Q}^{train}$ .", "However all problems are approximate set membership tasks that can be solved by a Bloom Filter.", "The tasks are (1) Class-based familiarity, a highly structured task where each set of images is sampled with the constraint that they arise from the same randomly-selected class.", "(2) Non-uniform instance-based familiarity, a moderately structured task where the images are sampled without replacement from an exponential distribution.", "(3) Uniform instance-based familiarity, a completely unstructured task where each subset contains images sampled uniformly without replacement.", "For each task we varied the size of the sample set to store, and calculated the space (in bits) of each model's state at a fixed false positive rate of $1\\%$ and a false negative rate of $0\\%$ .", "We used relatively small storage set sizes (e.g.", "$100 - 1000$ ) to start with, as this highlights that some RNN-based approaches struggle to train over larger set sizes, before progressing to larger sets in subsequent sections.", "See Appendix for further details on the task setup.", "In the class-based sampling task we see in Figure REF a that the DNC, LSTM and Neural Bloom Filter are able to significantly outperform the classical Bloom Filter when images are sampled by class.", "The Memory Network is able to solve the task with a word size of only 2, however this corresponds to a far greater number of bits per element, 64 versus the Bloom Filter's $9.8$ (to a total size of $4.8$ kb), and so the overall size was prohibitive.", "The DNC, LSTM, and Neural Bloom Filter are able to solve the 
task with a storage set size of 500 at $1.1$ kb, 217b, and 382b; a $4.3\times $ , $22\times $ , and $12\times $ saving respectively.", "For the non-uniform sampling task in Figure REF b we see the Bloom Filter is preferable for fewer than 500 stored elements, but is overtaken thereafter.", "At 1000 elements the DNC, LSTM, and Neural Bloom Filter consume $7.9$ kb, $7.7$ kb, and $6.8$ kb respectively, which corresponds to a $17.6\%$ , $19.7\%$ , and $28.6\%$ reduction over the $9.6$ kb Bloom Filter.", "In the uniform sampling task shown in Figure REF c, there is no structure to the sampling of $S$ .", "The two architectures which rely on BPTT essentially fail to solve the task at some threshold of storage size.", "The Neural Bloom Filter solves it with $6.8$ kb (using a memory size of 50 and word size of 2).", "The overall conclusion from these experiments is that the classical Bloom Filter works best when there is no structure to the data; however, when there is structure (e.g. skewed data, or highly dependent sets that share common attributes) we do see significant space savings."
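The three sampling distributions above can be sketched in code. This is a hypothetical helper of ours (the function and argument names are not from the paper); the exponential weighting $p(i\hbox{-th image}) \propto 0.999^i$ over a fixed shuffle is the one described in the appendix:

```python
import random

def sample_storage_set(labels, set_size, mode, rng):
    """Return indices forming one storage set S under the three task distributions."""
    n = len(labels)
    if mode == "class":
        # Highly structured: every element comes from one randomly chosen class.
        cls = rng.choice(sorted(set(labels)))
        pool = [i for i in range(n) if labels[i] == cls]
        return rng.sample(pool, set_size)
    if mode == "non_uniform":
        # Moderately structured: exponential weights over a fixed ordering,
        # sampled without replacement.
        weights = [0.999 ** i for i in range(n)]
        chosen = set()
        while len(chosen) < set_size:
            chosen.add(rng.choices(range(n), weights=weights, k=1)[0])
        return sorted(chosen)
    # Unstructured: uniform without replacement.
    return rng.sample(range(n), set_size)
```

Queries would then be drawn half from $S$ and half from its complement (restricted to the same class in the class-based task), so the classifier sees balanced positives and negatives.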
], [ "Memory Access Analysis", "We wanted to understand how the Neural Bloom Filter uses its memory, and in particular how its learned solutions may correspond to classical algorithms.", "We inspected the memory contents (what was stored to memory) and addressing weights (where it was stored) for a small model of 10 memory slots and a word size of 2, trained on the MNIST class-based familiarity task.", "We plot this for each class label, and compare the pattern of memory usage to two other models that use increasingly simpler read and write operations: (1) an ablated model with constant write words $w \\leftarrow \\mathbf {1}$ , and (2) an ablated model with $w \\leftarrow \\mathbf {1}$ and a linear read operator $r \\leftarrow a^T M$ .", "The full model, shown in Figure REF a learns to place some classes in particular slots, e.g.", "class $1 \\rightarrow $ slot 5, however most are distributed.", "Inspecting the memory contents, it is clear the write word encodes a unique 2D token for each class.", "This solution bears resemblance with Bloom-g Filters [29] where elements are spread across a smaller memory with the same hashing scheme as Bloom Filters, but a unique token is stored in each slot instead of a constant 1-bit value.", "With the model ablated to store only 1s in Figure REF b we see it uses semantic addressing codes for some classes (e.g.", "0 and 1) and distributed addresses for other classes.", "E.g.", "for class 3 the model prefers to uniformly spread its writes across memory slot 1, 4, and 8.", "The model solution is similar to that of Bloom Filters, with distributed addressing codes as a solution — but no information in the written words themselves.", "When we force the read operation to be linear in Figure REF c, the network maps each input class to a unique slot in memory.", "This solution has a correspondence with perfect hashing.", "In conclusion, with small changes to the read/write operations we see the Neural Bloom Filter learn different 
algorithmic solutions." ], [ "Database Queries", "We look at a task inspired by database interactions.", "NoSQL databases, such as Bigtable and Cassandra, use a single string-valued row-key, which is used to index the data.", "The database is comprised of a union of files (e.g.", "SSTables) storing contiguous row-key chunks.", "Bloom Filters are used to determine whether a given query $q$ lies within the stored set.", "We emulate this setup by constructing a universe of strings, that is alphabetically ordered, and by sampling contiguous ranges (to represent a given SSTable).", "Queries are sampled uniformly from the universe set of strings.", "We choose the $2.5M$ unique tokens in the GigaWord v5 news corpus to be our universe as this consists of structured natural data and some noisy or irregular strings.", "We consider the task of storing sorted string sets of size 5000.", "We train the Neural Bloom Filter to several desired false positive rates ($5\\%, 1\\%, 0.1\\%$ ) and used a backup Bloom Filter to guarantee $0\\%$ false negative rate.", "We also trained LSTMs and DNCs for comparison, but they failed to learn a solution to the task after several days of training; optimizing insertions via BPTT over a sequence of length 5000 did not result in a remotely usable solution.", "The Neural Bloom Filter avoids BPTT via its simple additive write scheme, and so it learned to solve the task quite naturally.", "As such, we compare the Neural Bloom Filter solely to classical data structures: Bloom Filters and Cuckoo Filters.", "In Table REF we see a significant space reduction of $3-40\\times $ , where the margin grows with increasing permitted false positive rates.", "Since memory is an expensive component within production databases (in contrast to disk, say), this memory space saving could translate to a non-trivial cost reduction.", "We note that a storage size of 5000 may appear small, but is relevant to the NOSQL database scenario where disk files (e.g.", "SSTables) 
are typically sharded to be several megabytes in size, to avoid issues with compaction.", "E.g. if the stored values were of size 10kB per row, we would expect 5000 unique keys or fewer in an average Bigtable SSTable.", "One further consideration for production deployment is the ability to extrapolate to larger storage set sizes during evaluation.", "We investigate this for the Neural Bloom Filter on the same database task, and compare it to an LSTM.", "To ensure both models train, we set the maximum training storage set size to 200 and evaluate up to size 250, a modest $25\%$ size increase.", "We find that the Neural Bloom Filter uses up to $3\times $ less space than the LSTM, and the neural models are able to extrapolate to larger set sizes than those observed during training (see Appendix Figure REF ).", "Whilst the performance eventually degrades when the training limit size is exceeded, it is not catastrophic for either the LSTM or Neural Bloom Filter." ], [ "Timing benchmark", "We have principally focused on space comparisons in this paper; we now consider speed for the database task described in the prior section.", "We measure latency as the wall-clock time to complete a single insertion or query of a row-key string of length 64.", "We also measure throughput as the reciprocal of the average wall-clock time per item when inserting or querying $10,000$ strings.", "We use a common encoder architecture for the neural models, a 128-hidden-unit character LSTM.", "We benchmark the models on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) and on the GPU (NVIDIA Quadro P6000) with models implemented in TensorFlow without any model-specific optimizations.", "We compare to empirical timing results published in a query-optimized Bloom Filter variant [10].", "We include the Learned Index from [23] to contrast timings with a model that is not one-shot.", "The architecture is simply the LSTM character encoder; inserts are performed via gradient descent.", "The number of gradient-descent 
steps to obtain convergence is domain-dependent; we chose 50 steps in our timing benchmarks.", "The Learned Index queries are obtained by running the character LSTM over the input and classifying familiarity — and thus query metrics are identical to the LSTM baseline.", "We see in Table REF that the combined query and insert latency of the Neural Bloom Filter and LSTM sits at 5ms on the CPU, around $400\times $ slower than the classical Bloom Filter.", "The Learned Index incurs a much larger latency of 780ms due to the sequential application of gradients.", "For all neural models, latency is not improved when operations are run on the GPU.", "However when multiple queries are received, the throughput of GPU-based neural models surpasses that of the classical Bloom Filter due to efficient concurrency of the dense linear algebra operations.", "This leads to the conclusion that a Neural Bloom Filter could be deployed in scenarios with high query load without a catastrophic decrease in throughput, if GPU devices are available.", "For insertions we see a bigger separation between the one-shot models: the LSTM and Neural Bloom Filter.", "Whilst all neural models are uncompetitive on the CPU, the Neural Bloom Filter surpasses the Bloom Filter's insertion throughput when placed on the GPU, with $101K$ insertions per second (IPS).", "The LSTM runs at $4.6K$ IPS, one order of magnitude slower, because writes are serial, and the Learned Index structure is two orders of magnitude slower at 816 IPS due to sequential gradient computations.", "The benefits of the Neural Bloom Filter's simple write scheme are apparent here."
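The latency-versus-throughput distinction used above can be made concrete with a small wall-clock harness. This is an illustrative sketch of ours, not the benchmark code used in the paper:

```python
import time

def single_query_latency(op, item, warmup=3, trials=50):
    """Median wall-clock seconds for one operation (e.g. one insert or query)."""
    for _ in range(warmup):
        op(item)
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        op(item)
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def batched_throughput(batch_op, batch, trials=5):
    """Items processed per second when operations are issued as one batch.

    Batching is where GPU-backed neural models recover their advantage:
    per-query latency stays high, but concurrent dense linear algebra
    lets many queries complete per unit time.
    """
    start = time.perf_counter()
    for _ in range(trials):
        batch_op(batch)
    elapsed = time.perf_counter() - start
    return trials * len(batch) / elapsed
```

Measuring both quantities separately is what reveals the asymmetry reported above: a model can be two orders of magnitude worse on latency while still beating the classical structure on throughput.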
], [ "Related Work", "There have been a large number of Bloom Filter variants published; from Counting Bloom Filters which support deletions [16], Bloomier Filters which store functions vs sets [9], Compressed Bloom Filters which use arithmetic encoding to compress the storage set [25], and Cuckoo Filters which use cuckoo hashing to reduce redundancy within the storage vector [15].", "Although some of these variants focus on better compression, they do not achieve this by specializing to the data distribution.", "One of the few works which address data-dependence are Weighted Bloom Filters [5], [36].", "They work by modulating the number of hash functions used to store or query each input, dependent on its storage and query frequency.", "This requires estimating a large number of separate storage and query frequencies.", "This approach can be useful for imbalanced data distributions, such as the non-uniform instance-based MNIST familiarity task.", "However it cannot take advantage of dependent sets, such as the class-based MNIST familiarity task, or the database query task.", "We see the Neural Bloom Filter is more compressive in all settings.", "[33] proposes a neurally-inspired set membership data-structure that works by replacing the randomized hash functions with a randomly-wired computation graph of OR and AND gates.", "The false positive rate is controlled analytically by modulating the number of gates and the overall memory size.", "However there is no learning or specialization to the data with this setup.", "[4] investigates a learnable neural familiarity module, which serves as a biologically plausible model of familiarity mechanisms in the brain, namely within the perirhinal cortex.", "However this has not shown to be empirically effective at exact matching.", "[23] consider the use of a neural network to classify the membership of queries to a fixed set $S$ .", "Here the network itself is more akin to a perfect hashing setup where multiple epochs are 
required to find a succinct holistic representation of the set, which is embedded into the weights of the network.", "In their case this search is performed by gradient-based optimization.", "We emulate their experimental comparison approach but instead propose a memory architecture that represents the set as activations in memory, versus weights in a network.", "[26] discusses the benefits and drawbacks of a learned Bloom Filter; distinguishing the empirical false positive rate over the distribution of sets $S$ from the conditional false positive rate of the model given a particular set $S$ .", "In this paper we focus on the empirical false positive rate because we wish to exploit redundancy in the data and query distribution.", "[27] also considers an alternate way to combine classical and learned Bloom Filters by `sandwiching' the learned model with pre-filter and post-filter classical Bloom Filters to further reduce space." ], [ "Conclusion", "In many situations neural networks are not a suitable replacement for Bloom Filters and their variants.", "The Bloom Filter is robust to changes in data distribution because it delivers a bounded false positive rate for any sampled subset.", "However in this paper we consider the question, “When might a single-shot neural network provide better compression than a Bloom Filter?”.", "We see that a model which uses an external memory with an adaptable capacity, avoids BPTT with a feed-forward write scheme, and learns to address its memory, is the most promising option in contrast to popular memory models such as DNCs and LSTMs.", "We term this model the Neural Bloom Filter due to the analogous incorporation of a hashing scheme, commutative write scheme, and multiplicative read mechanism.", "The Neural Bloom Filter relies on settings where we have an off-line dataset (both of stored elements and queries) that we can meta-learn over.", "In the case of a large database we think this is warranted: a database with 100K 
separate set membership data structures will benefit from a single (or periodic) meta-learning training routine that can run on a single machine and sample from the currently stored data, generating a large number of efficient data-structures.", "We envisage the space cost of the network to be amortized by sharing it across many neural Bloom Filters, and the time-cost of executing the network to be offset by the continuous acceleration of dense linear algebra on modern hardware, and the ability to batch writes and queries efficiently.", "A promising future direction would be to investigate the feasibility of this approach in a production system." ], [ "Acknowledgments", "We thank Peter Dayan, Yori Zwols, Yan Wu, Joel Leibo, Greg Wayne, Andras Gyorgy, Charles Blundell, Daan Wierstra, Pushmeet Kohli, and Tor Lattimore for their insights during this project." ], [ "Efficient addressing", "We discuss some implementation tricks that could be employed for a production system.", "Firstly, the original model description defines the addressing matrix $A$ to be trainable.", "This ties the number of parameters in the network to the memory size.", "It may be preferable to train the model at a given memory size and evaluate for larger memory sizes.", "One way to achieve this is by allowing the addressing matrix $A$ to be non-trainable.", "We experiment with this, allowing $A \sim \mathcal {N}(\mathbf {0}, \mathbf {I})$ to be a fixed sample of Gaussian random variables.", "We can think of these as points on a sphere in high-dimensional space; the controller network must learn to organize inputs into separate buckets across the surface of the sphere.", "To make the addressing more efficient for larger memory sizes, we experiment with sparsification of the addressing softmax by preserving only the top-$k$ components.", "We denote this sparse softmax $\sigma _k(\cdot )$ .", "When using a sparse address, we find the network can fixate on a subset of rows.", "This observation is 
common to prior sparse addressing work [32].", "We find sphering the query vector, often dubbed whitening, remedies this (see Appendix for an ablation).", "The modified sparse architecture variant is illustrated in Algorithm REF .", "[] Sparse Neural Bloom Filter [1] def sparse_controller(x): $\quad z \leftarrow f_{enc}(x)$ $\quad s \leftarrow f_q(z)$ Raw query word $\quad q \leftarrow moving\_zca(s)$ Spherical query $\quad a \leftarrow \sigma _k(q^T A)$ Sparse address $\quad w \leftarrow f_{w}(z) $ def sparse_write(x): $\quad a, w, \_ \leftarrow \hbox{sparse\_controller}(x)$ $\quad M_{t + 1}[a_{idx}] \leftarrow M_t[a_{idx}] + w a_{val}^T$ def sparse_read(x): $\quad a, w, z \leftarrow \hbox{sparse\_controller}(x)$ $\quad r \leftarrow M[a_{idx}] \odot a_{val}$ $\quad o \leftarrow f_{out}([r, w, z])$ One can avoid the linear-time distance computation $q^TA$ in the addressing operation $\sigma _k(q^TA)$ by using an approximate k-nearest neighbour index, such as locality-sensitive hashing [12], to extract the nearest neighbours from $A$ in $\mathcal {O}(\log m)$ time.", "The use of an approximate nearest neighbour index has been empirically considered for scaling memory-augmented neural networks [30], [22]; however, this was used for attention on $M$ directly.", "As $M$ is dynamic the knn requires frequent re-building as memories are stored or modified.", "This architecture is simpler — $A$ is fixed and so the approximate knn can be built once.", "To ensure the serialized size of the network (which can be shared across many memory instantiations) is independent of the number of slots in memory $m$ , we can avoid storing $A$ .", "In the instance that it is not trainable, and is simply a fixed sample of random variables that are generated from a deterministic random number generator, we can instead store a set of integer seeds that can be used to re-generate the rows of $A$ .", "We can let the $i$ -th seed $c_i$ , say represented as a 16-bit 
integer, correspond to the set of 16 rows with indices $16 i, 16i + 1, \ldots , 16i + 15$ .", "If these rows need to be accessed, they can be regenerated on-the-fly by $c_i$ .", "The total memory cost of $A$ is thus $m$ bits, where $m$ is the number of memory slots (one can replace 16 with 32 if there are more than one million slots).", "Putting these two together, it is possible to query and write to a Neural Bloom Filter with $m$ memory slots in $\mathcal {O}(\log m)$ time, where the network consumes $\mathcal {O}(1)$ space.", "It is worth noting, however, the Neural Bloom Filter's memory is often much smaller than the corresponding classical Bloom Filter's memory, and in many of our experiments is even smaller than the number of unique elements to store.", "Thus dense matrix multiplication can still be preferable (especially due to its acceleration on GPUs and TPUs [21]) and a dense representation of $A$ is not inhibitory.", "As model optimization can become application-specific, we do not focus on these implementation details and use the model in its simplest setting with dense matrix operations." ], [ "Moving ZCA", "The moving ZCA was computed by taking moving averages of the first and second moment, calculating the ZCA matrix and updating a moving average projection matrix $\theta _{zca}$ .", "This is only done during training; at evaluation time $\theta _{zca}$ is fixed.", "We describe the update below for completeness.", "$& \hbox{Input: } s \leftarrow f_{q}(z) \\& \mu _{t+1} \leftarrow \gamma \mu _t + (1 - \gamma ) \bar{s} & \hbox{1st moment EMA} \\& \Sigma _{t + 1} \leftarrow \gamma \Sigma _{t} + (1 - \gamma ) \; s^T s & \hbox{2nd moment EMA} \\& U, s, \_ \leftarrow \mathtt {svd}(\Sigma - \mu ^T \mu ) & \hbox{Singular values} \\& W \leftarrow U \, \mathrm {diag}(s)^{-1/2} \, U^T & \hbox{ZCA matrix}\\& \theta _{zca} \leftarrow \eta \theta _{zca} + (1 - \eta ) W & \hbox{ZCA EMA} \\& q \leftarrow s \; \theta _{zca} & \hbox{Projected 
query}$ In practice we do not compute the singular value decomposition at each time step to save computational resources, but instead calculate it and update $\theta $ every $T$ steps.", "We scale the discount in this case $\eta ^{\prime } = \eta / T$ ." ], [ "Relation to uniform hashing", "We can think of the decorrelation of $s$ , along with the sparse content-based attention with $A$ , as a hash function that maps $s$ to several indices in $M$ .", "For moderate dimension sizes of $s$ (256, say) we note that the Gaussian samples in $A$ lie close to the surface of a sphere, uniformly scattered across it.", "If $q$ , the decorrelated query, were to be Gaussian then the marginal distribution of nearest-neighbour rows in $A$ will be uniform.", "If we choose the number of nearest neighbours $k = 1$ then this implies the slots in $M$ are selected independently with uniform probability.", "This is the exact hash function specification that Bloom Filters assume.", "Instead we use a continuous (as we choose $k > 1$ ) approximation (as we decorrelate $s \rightarrow q$ vs Gaussianize) to this uniform hashing scheme, so it is differentiable and the network can learn to shape query representations.", "Figure: Database extrapolation task.", "Models are trained up to sets of size 200 (dashed line).", "We see extrapolation to larger set sizes on the test set, but performance degrades.", "Neural architectures perform best for larger allowed false positive rates." ], [ "Space Comparison", "For each task we compare the model's memory size, in bits, at a given false positive rate — usually chosen to be $1\%$ .", "For our neural networks which output a probability $p = f(x)$ one could select an operating point $\tau _{\epsilon }$ such that the false positive rate is $\epsilon $ .", "In all of our experiments the neural network outputs a memory (state) $s$ which characterizes the storage set.", "Let us say SPACE(f, $\epsilon $ ) is the minimum size of $s$ , in bits, for the network to achieve 
an average false positive rate of $\epsilon $ .", "We could compare SPACE(f,$\epsilon $ ) with SPACE(Bloom Filter,$\epsilon $ ) directly, but this would not be a fair comparison as our network $f$ can emit false negatives.", "To remedy this, we employ the same scheme as [23] where we use a `backup' Bloom Filter with false positive rate $\delta $ to store all false negatives.", "When $f(x) < \tau _{\epsilon }$ we query the backup Bloom Filter.", "Because the overall false positive rate is $\epsilon + (1 - \epsilon ) \delta $ , to achieve a false positive rate of at most $\alpha $ (say $1\%$ ) we can set $\epsilon = \delta = \alpha / 2$ .", "The number of elements stored in the backup Bloom Filter is equal to the number of false negatives, denoted $n_{fn}$ .", "Thus the total space can be calculated as TOTAL_SPACE(f,$\alpha $ ) = SPACE(f,$\frac{\alpha }{2}$ ) + $n_{fn}$ * SPACE(Bloom Filter,$\frac{\alpha }{2}$ ).", "We compare this quantity for different storage set sizes." ], [ "Model Size", "For the MNIST experiments we used a 3-layer convolutional neural network with 64 filters followed by a two-layer feed-forward network with 64 and 128 hidden units respectively.", "The number of trainable parameters in the Neural Bloom Filter (including the encoder) is $243,437$ which amounts to $7.8$ Mb at 32-bit precision.", "We did not optimize the encoder architecture to be lean, as we consider it part of the library in a sense.", "For example, we do not count the size of the hashing library that an implemented Bloom Filter relies on, which may have a chain of dependencies, or the package size of TensorFlow used for our experiments.", "Nevertheless we can reason that when the Neural Bloom Filter is 4kb smaller than the classical Bloom Filter, such as for the non-uniform instance-based familiarity task in Figure REF b, we would expect to see a net gain if we have a collection of at least $1,950$ data-structures.", "We imagine this could be optimized quite significantly, by using 
16-bit precision and perhaps using more convolution layers or smaller feed-forward linear operations.", "For the database experiments we used an LSTM character encoder with 256 hidden units followed by a 256-unit feed-forward layer.", "The number of trainable parameters in the Neural Bloom Filter is $419,339$ , which amounts to 13Mb.", "One could imagine optimizing this by switching to a GRU or investigating temporal convolutions as encoders." ], [ "Hyper-Parameters", "We swept over the following hyper-parameters, over the range of memory sizes displayed for each task.", "We computed the best model parameters by selecting those which resulted in a model consuming the least space as defined in Appendix .", "This depends on model performance as well as state size.", "The Memory Network's memory size was fixed to equal the input size (as the model does not arbitrate what inputs to avoid writing).", "Table: Hyper-parameters considered." ], [ "Experiment Details", "For the class-based familiarity task and the uniform sampling task, the model was trained on the training set and evaluated on the test set.", "For the class-based task sampling, a class is sampled at random and $S$ is formed from a random subset of images from that class.", "The queries $q$ are chosen uniformly from either $S$ or from images of a different class.", "For the non-uniform instance-based familiarity task we sampled images from an exponential distribution.", "Specifically we used a fixed permutation of the training images, and from that ordering chose $p(i_{th} \hbox{ image}) \propto 0.999^i$ for the images to store.", "The query images were selected uniformly.", "We used a fixed permutation (or shuffle) of the images to ensure most probability mass was not placed on images of a certain class.", "I.e. by the natural ordering of the dataset we would otherwise have almost always sampled images of the digit 0.", "This would have confounded task non-uniformity with other latent structure in the sets.", "Because the network needed to 
relate the image to its frequency of occurrence for this task, the models were evaluated on the training set.", "This is reasonable as we are not wishing for the model to visually generalize to unseen elements in the setting of this exact-familiarity task.", "We specifically want the network weights to compress a map of image to probability of storage.", "For the database task a universe of $2.5M$ unique tokens was extracted from GigaWord v5.", "We shuffled the tokens and placed $2.3$ M in a training set and 250K in a test set.", "These sets were then sorted alphabetically.", "A random subset, representing an SSTable, was sampled by choosing a random start index and selecting the next $n$ elements, which form our set $S$ .", "Queries are sampled uniformly at random from the universe set.", "Models are trained on the training set and evaluated on the test set." ], [ "Database Extrapolation Task", "We investigate whether neural models are able to extrapolate to larger test sizes.", "Using the database task setup, where each set contains a contiguous set of sorted strings, we train both the Neural Bloom Filter and LSTM on sets of sizes 2 to 200.", "We then evaluate on sets up to 250, i.e. a 25% increase over what is observed during training.", "This is to emulate the scenario that we train on a selection of database tablets, but during evaluation we may observe some tablets that are slightly larger than those in the training set.", "Both the LSTM and Neural Bloom Filter are able to solve the task, with the Neural Bloom Filter using significantly less space for the larger allowed false positive rates of 5% and 1%.", "We do see the models' error increase as the set size surpasses the maximum training set size; however, it is not catastrophic.", "Another interesting trend is noticeable: the neural models have higher utility for larger allowed false positive rates.", "This may be because of the difficulty in training the models to an extremely low error rate." ], [ "Effect of Sphering", "We see the 
benefit of sphering in Figure REF where the converged validation performance is significantly higher.", "Investigating the proportion of memory filled after all elements have been written in Figure REF , we see the model uses quite a small proportion of its memory slots.", "This is likely due to the network fixating on rows it has accessed with sparse addressing, and ignoring rows it has otherwise never touched — a phenomenon noted in [32].", "The model finds a local minimum in continually storing and accessing the same rows in memory.", "The effect of sphering is that the query now appears to be Gaussian (up to the first two moments) and so the distribution of nearest neighbours in the address matrix $A$ (which is initialized to Gaussian random variables) will be close to uniform.", "This results in a more uniform memory access (as seen in Figure REF ) which significantly aids performance (as seen in Figure REF ).", "Figure: For sparse addresses, sphering enables the model to learn the task of set membership to high accuracy.", "Figure: For sparse addresses, sphering the query vector leads to fewer collisions across memory slots and thus a higher utilization of memory." ], [ "Timing Benchmark", "We use the Neural Bloom Filter network architecture for the large database task (Table REF ).", "The network uses an encoder LSTM with 256 hidden units over the characters, and feeds this through a 256-unit fully connected layer to encode the input.", "A two-layer 256-hidden-unit MLP is used as the query architecture.", "The memory and word sizes are 8 and 4 respectively, and so the majority of the compute is spent in the encoder and query network.", "We compare this with an LSTM containing 32 hidden units.", "We benchmark the single-query latency of the network alongside the throughput of a batch of queries, and a batch of inserts.", "The Neural Bloom Filter and LSTM are implemented in TensorFlow without any custom kernels or specialized code.", "We benchmark it on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 
3.50GHz) and a GPU (NVIDIA Quadro P6000).", "We compare to empirical timing results published in a query-optimized Bloom Filter variant [10].", "It is worth noting, in several Bloom Filter applications, the actual query latency is not in the critical path of computation.", "For example, for a distributed database, the network latency and disk access latency for one tablet can be orders of magnitude greater than the in-memory latency of a Bloom Filter query.", "For this reason, we have not made run-time a point of focus in this study, and it is implicitly assumed that the neural network is trading off greater latency for less space.", "However it is worth checking whether run-time could be prohibitive." ], [ "Space Comparison", "For each task we compare the model's memory size, in bits, at a given false positive rate — usually chosen to be $1\\%$ .", "For our neural networks which output a probability $p = f(x)$ one could select an operating point $\\tau _{\\epsilon }$ such that the false positive rate is $\\epsilon $ .", "In all of our experiments the neural network outputs a memory (state) $s$ which characterizes the storage set.", "Let us say SPACE(f, $\\epsilon $ ) is the minimum size of $s$ , in bits, for the network to achieve an average false positive rate of $\\epsilon $ .", "We could compare SPACE(f,$\\epsilon $ ) with SPACE(Bloom Filter,$\\epsilon $ ) directly, but this would not be a fair comparison as our network $f$ can emit false negatives.", "To remedy this, we employ the same scheme as [23] where we use a `backup' Bloom Filter with false positive rate $\\delta $ to store all false negatives.", "When $f(x) < \\tau _{\\epsilon }$ we query the backup Bloom Filter.", "Because the overall false positive rate is $\\epsilon + (1 - \\epsilon ) \\delta $ , to achieve a false positive rate of at most $\\alpha $ (say $1\\%$ ) we can set $\\epsilon = \\delta = \\alpha / 2$ .", "The number of elements stored in the backup bloom filter is equal to the number of 
false negatives, denoted $n_{fn}$ .", "Thus the total space can be calculated as TOTAL_SPACE(f,$\\alpha $ ) = SPACE(f,$\\frac{\\alpha }{2}$ ) + $n_{fn}$ * SPACE(Bloom Filter,$\\frac{\\alpha }{2}$ ).", "We compare this quantity for different storage set sizes." ], [ "Model Size", "For the MNIST experiments we used a 3-layer convolutional neural network with 64 filters followed by a two-layer feed-forward network with 64 and 128 hidden units respectively.", "The number of trainable parameters in the Neural Bloom Filter (including the encoder) is $243,437$ , which amounts to $7.8$ Mb at 32-bit precision.", "We did not optimize the encoder architecture to be lean, as we consider it part of the library in a sense.", "For example, we do not count the size of the hashing library that an implemented Bloom Filter relies on, which may have a chain of dependencies, or the package size of TensorFlow used for our experiments.", "Nevertheless we can reason that when the Neural Bloom Filter is 4 kb smaller than the classical Bloom Filter, such as for the non-uniform instance-based familiarity in Figure REF b, we would expect to see a net gain if we have a collection of at least $1,950$ data-structures.", "We imagine this could be optimized quite significantly by using 16-bit precision and perhaps using more convolution layers or smaller feed-forward linear operations.", "For the database experiments we used an LSTM character encoder with 256 hidden units followed by another 256-unit feed-forward layer.", "The number of trainable parameters in the Neural Bloom Filter is $419,339$ , which amounts to 13 Mb.", "One could imagine optimizing this by switching to a GRU or investigating temporal convolutions as encoders."
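The backup-filter accounting above is easy to make concrete. A minimal Python sketch (ours, not from the paper), assuming the optimal classical Bloom Filter size of n * log2(1/eps) / ln(2) bits for n elements at false positive rate eps; `space_f` stands for the measured SPACE(f, alpha/2) of the neural model:

```python
import math

def bloom_filter_bits(n, eps):
    # Optimal classical Bloom Filter size: n * log2(1/eps) / ln(2) bits.
    return n * math.log2(1.0 / eps) / math.log(2.0)

def total_space(space_f, n_fn, alpha):
    # TOTAL_SPACE(f, alpha) = SPACE(f, alpha/2) plus a backup Bloom
    # Filter holding the n_fn false negatives at rate alpha/2, so the
    # combined false positive rate eps + (1 - eps) * delta stays <= alpha.
    return space_f + bloom_filter_bits(n_fn, alpha / 2.0)
```

With eps = delta = alpha/2, the overall false positive rate eps + (1 - eps) * delta is at most alpha, matching the scheme described above.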
], [ "Hyper-Parameters", "We swept over the following hyper-parameters over the range of memory sizes displayed for each task.", "We computed the best model parameters by selecting those which resulted in a model consuming the least space as defined in Appendix .", "This depends on model performance as well as state size.", "The Memory Network's memory size was fixed to equal the input size (as the model does not arbitrate what inputs to avoid writing).", "Table: Hyper-parameters considered" ], [ "Experiment Details", "For the class-based familiarity task and the uniform sampling task, the model was trained on the training set and evaluated on the test set.", "For the class-based task sampling, a class is sampled at random and $S$ is formed from a random subset of images from that class.", "The queries $q$ are chosen uniformly from either $S$ or from images of a different class.", "For the non-uniform instance-based familiarity task we sampled images from an exponential distribution.", "Specifically, we used a fixed permutation of the training images, and from that ordering chose $p(i^{th} \\hbox{ image}) \\propto 0.999^i$ for the images to store.", "The query images were selected uniformly.", "We used a fixed permutation (or shuffle) of the images to ensure most probability mass was not placed on images of a certain class.", "That is, by the natural ordering of the dataset we would otherwise have almost always sampled images of the digit 0.", "This would have confounded task non-uniformity with other latent structure in the sets.", "Because the network needed to relate the image to its frequency of occurrence for the task, the models were evaluated on the training set.", "This is reasonable, as we do not wish the model to visually generalize to unseen elements in the setting of this exact-familiarity task.", "We specifically want the network weights to compress a map from image to probability of storage.", "For the database task a universe of $2.5M$ unique tokens was extracted from
GigaWord v5.", "We shuffled the tokens and placed $2.3$ M in a training set and 250K in a test set.", "These sets were then sorted alphabetically.", "A random subset, representing an SSTable, was sampled by choosing a random start index and selecting the next $n$ elements, which form our set $S$ .", "Queries are sampled uniformly at random from the universe set.", "Models are trained on the training set and evaluated on the test set." ], [ "Database Extrapolation Task", "We investigate whether neural models are able to extrapolate to larger test sizes.", "Using the database task setup, where each set contains a contiguous run of sorted strings, we train both the Neural Bloom Filter and LSTM on sets of sizes 2-200.", "We then evaluate on sets of up to 250 elements, i.e., a 25% increase over what is observed during training.", "This is to emulate the scenario where we train on a selection of database tablets, but during evaluation we may observe some tablets that are slightly larger than those in the training set.", "Both the LSTM and Neural Bloom Filter are able to solve the task, with the Neural Bloom Filter using significantly less space for the larger allowed false positive rates of 5% and 1%.", "We do see the models' error increase as the set size surpasses the maximum training set size, but the degradation is not catastrophic.", "Another interesting trend is noticeable: the neural models have higher utility for larger allowed false positive rates.", "This may be because of the difficulty of training the models to an extremely low error rate."
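The SSTable-style sampling described above (a random start index into the sorted universe, then the next n elements) is straightforward to reproduce. A minimal sketch; the function names are ours:

```python
import random

def sample_sstable(sorted_universe, n, rng=random):
    # Emulate an SSTable: pick a random start index into the
    # alphabetically sorted universe and take the next n tokens,
    # giving a contiguous run of sorted strings as the set S.
    start = rng.randrange(len(sorted_universe) - n + 1)
    return sorted_universe[start:start + n]

def sample_query(universe, rng=random):
    # Queries are drawn uniformly at random from the whole universe.
    return rng.choice(universe)
```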
], [ "Effect of Sphering", "We see the benefit of sphering in Figure REF , where the converged validation performance ends up at a higher level.", "Investigating the proportion of memory filled after all elements have been written in Figure REF , we see the model uses quite a small proportion of its memory slots.", "This is likely due to the network fixating on rows it has accessed with sparse addressing, and ignoring rows it has otherwise never touched — a phenomenon noted in [32].", "The model finds a local minimum in continually storing and accessing the same rows in memory.", "The effect of sphering is that the query now appears to be Gaussian (up to the first two moments), and so the distribution of nearest neighbours in the address matrix A (which is initialized with Gaussian random variables) will be close to uniform.", "This results in more uniform memory access (as seen in Figure REF ), which significantly aids performance (as seen in Figure REF ).", "Figure: For sparse addresses, sphering enables the model to learn the task of set membership to high accuracy.", "Figure: For sparse addresses, sphering the query vector leads to fewer collisions across memory slots and thus a higher utilization of memory."
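As a rough illustration of what sphering does to the query distribution, here is a diagonal (per-dimension) standardization sketch; full sphering would additionally decorrelate dimensions via the inverse square root of the covariance, and in practice the moments would be tracked as running averages rather than recomputed per batch:

```python
import math

def sphere(queries):
    # Standardize each dimension to zero mean and unit variance so the
    # batch of query vectors matches a Gaussian up to its first two
    # moments, encouraging more uniform nearest-neighbour addressing.
    dims = list(zip(*queries))
    means = [sum(d) / len(d) for d in dims]
    stds = [math.sqrt(sum((x - m) ** 2 for x in d) / len(d)) or 1.0
            for d, m in zip(dims, means)]
    return [[(x - m) / s for x, m, s in zip(q, means, stds)]
            for q in queries]
```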
], [ "Timing Benchmark", "We use the Neural Bloom Filter network architecture for the large database task (Table REF ).", "The network uses an encoder LSTM with 256 hidden units over the characters, and feeds this through a 256-unit fully connected layer to encode the input.", "A two-layer 256-hidden-unit MLP is used as the query architecture.", "The memory and word sizes are 8 and 4 respectively, and so the majority of the compute is spent in the encoder and query network.", "We compare this with an LSTM containing 32 hidden units.", "We benchmark the single-query latency of the network alongside the throughput of a batch of queries, and a batch of inserts.", "The Neural Bloom Filter and LSTM are implemented in TensorFlow without any custom kernels or specialized code.", "We benchmark them on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) and a GPU (NVIDIA Quadro P6000).", "We compare to empirical timing results published for a query-optimized Bloom Filter variant [10].", "It is worth noting that, in several Bloom Filter applications, the actual query latency is not in the critical path of computation.", "For example, for a distributed database, the network latency and disk access latency for one tablet can be orders of magnitude greater than the in-memory latency of a Bloom Filter query.", "For this reason, we have not made run-time a point of focus in this study, and it is implicitly assumed that the neural network is trading off greater latency for less space.", "However, it is worth checking whether run-time could be prohibitive."
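Single-query latency and batched throughput of the kind reported above can be probed with a few lines of Python; this is a generic wall-clock sketch (ours), not the benchmark harness used in the study:

```python
import time

def median_latency(fn, arg, repeats=100):
    # Median wall-clock time of one call; robust to scheduling noise.
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(arg)
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]

def throughput(fn, batch, repeats=10):
    # Items processed per second when queries are handled as a batch.
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(batch)
    elapsed = time.perf_counter() - t0
    return repeats * len(batch) / elapsed
```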
1906.04304
[ [ "Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems" ], [ "Abstract We study learning of a matching model for response selection in retrieval-based dialogue systems.", "The problem is as important as designing the architecture of a model, but is less explored in the existing literature.", "To learn a robust matching model from noisy training data, we propose a general co-teaching framework with three specific teaching strategies that cover both teaching with loss functions and teaching with data curriculum.", "Under the framework, we simultaneously learn two matching models with independent training sets.", "In each iteration, one model transfers the knowledge learned from its training set to the other model, and at the same time receives guidance from the other model on how to overcome noise in training.", "By being both a teacher and a student, the two models learn from each other and improve together.", "Evaluation results on two public data sets indicate that the proposed learning approach can generally and significantly improve the performance of existing matching models."
], [ "Introduction", "Human-machine conversation is a long-standing goal of artificial intelligence.", "Recently, building a dialogue system for open-domain human-machine conversation has attracted more and more attention due to both the availability of large-scale human conversation data and powerful models learned with neural networks.", "Existing methods are either retrieval-based or generation-based.", "Retrieval-based methods reply to a human input by selecting a proper response from a pre-built index [7], [40], [35], while generation-based methods synthesize a response with a natural language model [19], [18].", "In this work, we study the problem of response selection for retrieval-based dialogue systems, since retrieval-based systems are often superior to their generation-based counterparts in response fluency and diversity, are easy to evaluate, and have powered some real products such as the social bot XiaoIce from Microsoft [20] and the E-commerce assistant AliMe Assist from Alibaba Group [12].", "A key problem in response selection is how to measure the matching degree between a conversation context (a message with several turns of conversation history) and a response candidate.", "Existing studies have devoted tremendous effort to building matching models with neural architectures [14], [39], [32], [40], and advanced models such as the deep attention matching network (DAM) [40] have achieved impressive performance on benchmarks.", "In contrast to the progress on model architectures, there is little exploration of learning approaches for the models.", "On the one hand, neural matching models are becoming more and more complicated; on the other hand, all models are simply learned by distinguishing human responses from some automatically constructed negative response candidates (e.g., by random sampling).", "Although this heuristic approach can avoid expensive and exhausting human labeling, it suffers from noise in training data, as many negative examples are
actually false negatives (responses sampled from other contexts may also be proper candidates for a given context).", "As a result, when evaluating a well-trained model using human judgment, one can often observe a significant gap between training and test, as will be seen in our experiments.", "In this paper, instead of configuring new architectures, we investigate how to effectively learn existing matching models from noisy training data, given that human labeling is infeasible in practice.", "We propose learning a matching model under a general co-teaching framework.", "The framework maintains two peer models on two i.i.d. training sets, and lets the two models teach each other during learning.", "One model transfers knowledge learned from its training set to its peer model to help it combat noise in training, and at the same time gets updated under the guidance of its peer model.", "By playing the roles of both teacher and student, the two peer models evolve together.", "Under the framework, we consider three teaching strategies including teaching with dynamic margins, teaching with dynamic instance weighting, and teaching with dynamic data curriculum.", "The first two strategies let the two peer models mutually “label” their training examples, and transfer the soft labels from one model to the other through loss functions, while in the last strategy, the two peer models directly select training examples for each other.", "To examine if the proposed learning approach can generally bridge the gap between training and test, we select the sequential matching network (SMN) [32] and DAM as representative matching models, and conduct experiments on two public data sets with human-judged test examples.", "The first data set is the Douban Conversation benchmark published in wu2017sequential, and the second one is the E-commerce Dialogue Corpus published in coling2018dua, where we recruit human annotators to judge the appropriateness of response candidates
with regard to their contexts on the entire test set (we have released the labeled test data of the E-commerce Dialogue Corpus at https://drive.google.com/open?id=1HMDHRU8kbbWTsPVr6lKU_-Z2Jt-n-dys).", "Evaluation results indicate that co-teaching with the three strategies can consistently improve the performance of both matching models over all metrics on both data sets with significant margins.", "On the Douban data, the most effective strategy is teaching with dynamic margins, which brings $2.8$ % absolute improvement to SMN and $2.5$ % absolute improvement to DAM on P@1, while on the E-commerce data, the best strategy is teaching with dynamic data curriculum, which brings $2.4$ % absolute improvement to SMN and $3.2$ % absolute improvement to DAM on P@1.", "Through further analysis, we also unveil how the peer models evolve together in learning and how the choice of peer models affects the performance of learning.", "Our contributions in the paper are three-fold: (1) proposal of learning matching models for response selection with a general co-teaching framework; (2) proposal of two new teaching strategies as special cases of the framework; and (3) empirical verification of the effectiveness of the proposed learning approach on two public data sets."
], [ "Problem Formalization", "Given a data set $\\mathcal {D} = \\lbrace (y_i,c_i,r_i)\\rbrace _{i=1}^N$ where $c_i$ represents a conversation context, $r_i$ is a response candidate, and $y_i\\in \\lbrace 0,1\\rbrace $ denotes a label with $y_i=1$ indicating $r_i$ a proper response for $c_i$ and otherwise $y_i=0$ , the goal of the task of response selection is to learn a matching model $s(\\cdot ,\\cdot )$ from $\\mathcal {D}$ .", "For any context-response pair $(c,r)$ , $s(c,r)$ gives a score that reflects the matching degree between $c$ and $r$ , and thus allows one to rank a set of response candidates according to the scores for response selection.", "To obtain a matching model $s(\\cdot ,\\cdot )$ , one needs to deal with two problems: (1) how to define $s(\\cdot ,\\cdot )$ ; and (2) how to learn $s(\\cdot ,\\cdot )$ .", "Existing studies concentrate on Problem (1) by defining $s(\\cdot ,\\cdot )$ with sophisticated neural architectures [32], [40], and leave Problem (2) in a simple default setting where $s(\\cdot ,\\cdot )$ is optimized with $\\mathcal {D}$ using a loss function $L$ usually defined by cross entropy.", "Ideally, when $\\mathcal {D}$ is large enough and has good enough quality, a carefully designed $s(\\cdot ,\\cdot )$ learned using the existing paradigm should be able to well capture the semantics in dialogues.", "The fact is that since large-scale human labeling is infeasible, $\\mathcal {D}$ is established under simple heuristics where negative response candidates are automatically constructed (e.g., by random sampling) with a lot of noise.", "As a result, advanced matching models only have sub-optimal performance in practice.", "The gap between ideal and reality motivates us to pursue a better learning approach, as will be presented in the next section." 
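Once a matching model s(c, r) is learned, response selection reduces to ranking the candidates by score. A minimal sketch with a toy scorer; `toy_score` is a placeholder of ours, and any real matching model would take its place:

```python
def rank_candidates(s, context, candidates):
    # Sort response candidates by matching score s(c, r), best first.
    return sorted(candidates, key=lambda r: s(context, r), reverse=True)

def toy_score(context, response):
    # Placeholder matching model: word-overlap ratio between the last
    # context utterance and the response (a stand-in for SMN/DAM).
    c_words = set(context[-1].split())
    r_words = set(response.split())
    return len(c_words & r_words) / max(len(r_words), 1)
```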
], [ "Learning a Matching Model through Co-teaching", "In this section, we present co-teaching, a new framework for learning a matching model.", "We first give a general description of the framework, and then elaborate three teaching strategies as special cases of the framework.", "Figure: Co-teaching framework." ], [ "Co-teaching Framework", "The idea of co-teaching is to maintain two peer models and let them learn from each other by simultaneously acting as a teacher and a student.", "Figure REF gives an overview of the co-teaching framework.", "The learning program starts from two pre-trained peer models A and B.", "In each iteration, a batch of training data is equally divided into two sub-batches without overlap as $\\bar{\\mathcal {D}}_A$ and $\\bar{\\mathcal {D}}_B$ for B and A respectively.", "A and B then examine their sub-batches and output learning protocols $(\\tilde{\\mathcal {D}}_B, \\mathcal {J}_B)$ and $(\\tilde{\\mathcal {D}}_A, \\mathcal {J}_A)$ for their peers, where $\\tilde{\\mathcal {D}}_B$ and $\\tilde{\\mathcal {D}}_A$ are training data and $\\mathcal {J}_B$ and $\\mathcal {J}_A$ are loss functions.", "After that, A and B get updated according to $(\\tilde{\\mathcal {D}}_A, \\mathcal {J}_A)$ and $(\\tilde{\\mathcal {D}}_B, \\mathcal {J}_B)$ respectively, and the learning program moves to the next iteration.", "Algorithm 1 describes the pseudo code of co-teaching.", "The rationale behind the co-teaching framework is that the peer models can gradually obtain different abilities from the different training data as the learning process goes on, even when the two models share the same architecture and the same initial configuration, and thus, they can acquire different knowledge from their training data and transfer the knowledge to their peers to make them robust over the noise in the data.", "This resembles two peer students who learn from different but related materials.", "Through knowledge exchange, one can inspire the other to get new 
insights from his or her material, and thus the two students improve together.", "Advantages of the framework reside in various aspects: first, the peer models have their own “judgment” about the quality of the same training example.", "Thus, one model may guide the other on how to pick high-quality training examples and circumvent noise; second, since the peer models are optimized with different training sub-batches, knowledge from one sub-batch could be supplementary to the other through exchange of learning protocols; third, the two peer models may have different decision boundaries, and thus are good at recognizing different patterns in data.", "This may allow one model to help the other rectify errors in learning.", "To instantiate the co-teaching framework, one needs to specify initialization of the peer models and teaching strategies that can form the learning protocols.", "In this work, to simplify the learning program of co-teaching, we assume that model A and model B are initialized by the same matching model pre-trained with the entire training data.", "We focus on the design of teaching strategies, as will be elaborated in the next section.", "Algorithm 1: the proposed co-teaching framework.", "Input: model parameters $\\theta _A$ , $\\theta _B$ , learning rate $\\eta $ , number of epochs $n_T$ , number of iterations $n_K$ .", "For $T=1,2,...,n_T$ : shuffle the training set $\\mathcal {D}$ .", "For $K=1,2,...,n_K$ : fetch a batch of training data $\\bar{\\mathcal {D}}$ ; distribute $\\bar{\\mathcal {D}}$ equally to two sub-batches of training data $\\bar{\\mathcal {D}}_A, \\bar{\\mathcal {D}}_B \\subset \\bar{\\mathcal {D}} $ ; obtain learning protocol $(\\tilde{\\mathcal {D}}_B, \\mathcal {J}_B)$ from model A and $\\bar{\\mathcal {D}}_B$ ; obtain learning protocol $(\\tilde{\\mathcal {D}}_A, \\mathcal {J}_A)$ from model B and $\\bar{\\mathcal {D}}_A$ ; update $\\theta _A = \\theta _A - \\eta \\nabla \\mathcal {J}_A(\\tilde{\\mathcal 
{D}}_A)$ (i.e., update model A by $(\\tilde{\\mathcal {D}}_A, \\mathcal {J}_A)$ ) and $\\theta _B = \\theta _B - \\eta \\nabla \\mathcal {J}_B(\\tilde{\\mathcal {D}}_B)$ (i.e., update model B by $(\\tilde{\\mathcal {D}}_B, \\mathcal {J}_B)$ ).", "Output: $\\theta _A$ , $\\theta _B$ ." ], [ "Teaching Strategies", "We consider the following three strategies that cover teaching with dynamic loss functions and teaching with data curriculum." ], [ "Teaching with Dynamic Margins:", "The strategy fixes $\\bar{\\mathcal {D}}_A$ and $\\bar{\\mathcal {D}}_B$ as $\\tilde{\\mathcal {D}}_A$ and $\\tilde{\\mathcal {D}}_B$ respectively, and dynamically creates loss functions as the learning protocols.", "Without loss of generality, the training data $\\mathcal {D}$ can be re-organized in the form of $\\lbrace ( c_i, r_i^+, r_i^-) \\rbrace _{i=1}^{N^{\\prime }}$ , where $r_i^+$ and $r_i^-$ refer to a positive response candidate and a negative response candidate with respect to $c_i$ respectively.", "Suppose that $\\bar{\\mathcal {D}}_A=\\lbrace (c_{A,i}, r_{A,i}^+, r_{A,i}^-)\\rbrace _{i=1}^{N_A}$ and $\\bar{\\mathcal {D}}_B=\\lbrace (c_{B,i}, r_{B,i}^+, r_{B,i}^-)\\rbrace _{i=1}^{N_B}$ , then model A evaluates each $(c_{B,i}, r_{B,i}^+, r_{B,i}^-) \\in \\bar{\\mathcal {D}}_B$ with matching scores $s_A(c_{B,i}, r_{B,i}^+)$ and $s_A(c_{B,i}, r_{B,i}^-)$ , and forms a margin for model B as $ \\begin{aligned}\\Delta _{B,i}=\\max \\Big (0, \\lambda \\big (s_A(c_{B,i}, r_{B,i}^+) - s_A(c_{B,i}, r_{B,i}^-)\\big )\\Big ),\\end{aligned}$ where $\\lambda $ is a hyper-parameter.", "Similarly, $\\forall (c_{A,i}, r_{A,i}^+, r_{A,i}^-) \\in \\bar{\\mathcal {D}}_A$ , the margin provided by model B for model A can be formulated as $ \\begin{aligned}\\Delta _{A,i}=\\max \\Big (0, \\lambda \\big (s_B(c_{A,i}, r_{A,i}^+) - s_B(c_{A,i}, r_{A,i}^-)\\big )\\Big ),\\end{aligned}$ where $s_B(c_{A,i}, r_{A,i}^+)$ and $s_B(c_{A,i}, r_{A,i}^-)$ are matching scores calculated with model B.", "Loss functions $\\mathcal {J}_A$ 
and $\\mathcal {J}_B$ are then defined as $\\begin{aligned}\\mathcal {J}_A = \\sum _{i=1}^{N_A} \\max \\lbrace 0, \\Delta _{A,i} & -s_A(c_{A,i}, r_{A,i}^+) \\\\& + s_A(c_{A,i}, r_{A,i}^-)\\rbrace ,\\end{aligned}$ $\\begin{aligned}\\mathcal {J}_B = \\sum _{i=1}^{N_B} \\max \\lbrace 0, \\Delta _{B,i} & -s_B(c_{B,i}, r_{B,i}^+) \\\\& +s_B(c_{B,i}, r_{B,i}^-)\\rbrace .\\end{aligned}$ Intuitively, one model may assign a small margin to a negative example if it identifies the example as a false negative.", "Then, its peer model will pay less attention to such an example in its optimization.", "This is how the two peer models help each other combat noise under the strategy of teaching with dynamic margins." ], [ "Teaching with Dynamic Instance Weighting:", "Similar to the first strategy, this strategy also defines the learning protocols with dynamic loss functions.", "The difference is that this strategy penalizes low-quality negative training examples with weights.", "Formally, let us represent $\\bar{\\mathcal {D}}_B$ as $\\lbrace (y_{B,i}, c_{B,i}, r_{B,i})\\rbrace _{i=1}^{N^{\\prime }_B}$ , then $\\forall (y_{B,i}, c_{B,i}, r_{B,i}) \\in \\bar{\\mathcal {D}}_B$ , its weight from model A is defined as $w_{B,i}=\\left\\lbrace \\begin{array}{lr}1 & y_{B,i}=1 \\\\1 - s_A(c_{B, i}, r_{B, i}) & y_{B, i} = 0\\end{array}\\right.$ Similarly, $\\forall (y_{A,i}, c_{A,i}, r_{A,i}) \\in \\bar{\\mathcal {D}}_A$ , model B assigns a weight as $w_{A, i}=\\left\\lbrace \\begin{array}{lr}1 & y_{A,i}=1 \\\\1 - s_B(c_{A,i}, r_{A,i}) & y_{A,i} = 0\\end{array}\\right.$ Then, loss functions $\\mathcal {J}_A$ and $\\mathcal {J}_B$ can be formulated as $\\mathcal {J}_A=\\sum _{i=1}^{N^{\\prime }_A} w_{A,i} L(y_{A,i}, s_A(c_{A,i}, r_{A,i})), \\\\\\mathcal {J}_B=\\sum _{i=1}^{N^{\\prime }_B} w_{B,i} L(y_{B,i}, s_B(c_{B,i}, r_{B,i})),$ where $L(\\cdot ,\\cdot )$ is defined by cross entropy: $- y \\log (s(c, r)) -(1-y) \\log (1-s(c, r)).$ In this strategy, negative examples that are
identified as false negatives by one model will obtain small weights from the model, and thus be less important than other examples in the learning process of the other model." ], [ "Teaching with Dynamic Data Curriculum:", "In the first two strategies, knowledge is transferred mutually through “soft labels” defined by the peer matching models.", "In this strategy, we directly transfer data to each model.", "During learning, $\\mathcal {J}_A$ and $\\mathcal {J}_B$ are fixed as cross entropy, and the learning protocols vary by $\\tilde{\\mathcal {D}}_A$ and $\\tilde{\\mathcal {D}}_B$ .", "Inspired by BoHanNIPS2018, we construct $\\tilde{\\mathcal {D}}_A$ and $\\tilde{\\mathcal {D}}_B$ with small-loss instances.", "These instances are far from decision boundaries of the two models, and thus are more likely to be true positives and true negatives.", "Formally, $\\tilde{\\mathcal {D}}_A$ and $\\tilde{\\mathcal {D}}_B$ are defined as $ \\normalsize \\begin{aligned}\\tilde{\\mathcal {D}}_B = argmin_{\\left|\\tilde{\\mathcal {D}}_B\\right|=\\delta \\left|\\bar{\\mathcal {D}}_B\\right|, \\tilde{\\mathcal {D}}_B \\subset \\bar{\\mathcal {D}}_B} \\mathcal {J}_A (\\tilde{\\mathcal {D}}_B), \\\\\\tilde{\\mathcal {D}}_A = argmin_{\\left|\\tilde{\\mathcal {D}}_A\\right|=\\delta \\left|\\bar{\\mathcal {D}}_A\\right|, \\tilde{\\mathcal {D}}_A \\subset \\bar{\\mathcal {D}}_A} \\mathcal {J}_B(\\tilde{\\mathcal {D}}_A),\\end{aligned}$ where $|\\cdot |$ measures the size of a set, $\\mathcal {J}_A (\\tilde{\\mathcal {D}}_B)$ and $\\mathcal {J}_B(\\tilde{\\mathcal {D}}_A)$ stand for accumulation of loss on the corresponding data sets, and $\\delta $ is a hyper-parameter.", "Note that we do not shrink $\\delta $ as in BoHanNIPS2018, since fixing $\\delta $ as a constant yields a simple yet effective learning program, as will be seen in our experiments." ], [ "Experiments", "We test our learning schemes on two public data sets with human annotated test examples." 
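The three teaching strategies can be summarized in a few lines of Python. A hedged sketch (function names are ours), where `s` and `peer_s` stand for the learner's and the peer's matching functions and the values are computed per sub-batch:

```python
import math

def margin_loss(s, peer_s, triples, lam=0.5):
    # Teaching with dynamic margins: the peer sets a margin per triple;
    # suspected false negatives get a small margin and little influence.
    loss = 0.0
    for c, pos, neg in triples:
        delta = max(0.0, lam * (peer_s(c, pos) - peer_s(c, neg)))
        loss += max(0.0, delta - s(c, pos) + s(c, neg))
    return loss

def weighted_loss(s, peer_s, examples, eps=1e-7):
    # Teaching with dynamic instance weighting: negatives are weighted
    # by 1 - peer_s(c, r), down-weighting suspected false negatives.
    loss = 0.0
    for y, c, r in examples:
        w = 1.0 if y == 1 else 1.0 - peer_s(c, r)
        p = min(max(s(c, r), eps), 1.0 - eps)
        loss -= w * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return loss

def curriculum(peer_s, examples, delta=0.9, eps=1e-7):
    # Teaching with dynamic data curriculum: keep the delta fraction of
    # the sub-batch with the smallest cross entropy under the peer.
    def peer_ce(y, c, r):
        p = min(max(peer_s(c, r), eps), 1.0 - eps)
        return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    ranked = sorted(examples, key=lambda e: peer_ce(*e))
    return ranked[: max(1, int(delta * len(examples)))]
```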
], [ "Experimental Setup", "The first data set we use is the Douban Conversation Corpus (Douban) [32], which is a multi-turn Chinese conversation data set crawled from the Douban group (https://www.douban.com/group).", "The data set consists of 1 million context-response pairs for training, 50 thousand pairs for validation, and $6,670$ pairs for test.", "In the training set and the validation set, the last turn of each conversation is regarded as a positive response and negative responses are randomly sampled.", "The ratio of positive to negative examples is 1:1 in training and validation.", "In the test set, each context has 10 response candidates retrieved from an index, and the appropriateness of each candidate with regard to the context is judged by human annotators.", "The average number of positive responses per context is $1.18$ .", "Following wu2017sequential, we employ R$_{10}$ @1, R$_{10}$ @2, R$_{10}$ @5, mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1) as evaluation metrics.", "In addition to the Douban data, we also choose the E-commerce Dialogue Corpus (ECD) [37] as an experimental data set.", "The data consists of real-world conversations between customers and customer service staff on Taobao (https://www.taobao.com), which is the largest e-commerce platform in China.", "There are 1 million context-response pairs in the training set, and 10 thousand pairs in both the validation set and the test set.", "Each context in the training set and the validation set corresponds to one positive response candidate and one negative response candidate, while in the test set, the number of response candidates per context is 10 with only one of them positive.", "In the released data, human responses are treated as positive responses, and negative ones are automatically collected by ranking the response corpus based on conversation-history-augmented messages using Apache Lucene (http://lucene.apache.org/).", "Thus, we recruit 3 active users of Taobao as human
annotators, and ask them to judge each context-response pair in the test data (i.e., in total 10 thousand pairs are judged).", "If a response can naturally reply to a message given the conversation history before it, then the context-response pair is labeled as 1; otherwise, it is labeled as 0.", "Each pair receives three labels and the majority is taken as the final decision.", "On average, each context has $2.5$ response candidates labeled as positive.", "There are only 33 contexts whose responses are all labeled as positive or all as negative, and we remove them from the test set.", "Fleiss' kappa [4] of the labeling is $0.64$ , indicating substantial agreement among the annotators.", "We employ the same metrics as in Douban for evaluation.", "Note that we do not choose the Ubuntu Dialogue Corpus [14] for experiments, because (1) the test set of the Ubuntu data is constructed by random sampling; and (2) conversations in the Ubuntu data are in a casual style and too technical, and thus it is very difficult for us to find qualified human annotators to label the data.", "Table: Evaluation results on the two data sets.", "Numbers marked with ** mean that the improvement is statistically significant compared with the best baseline (t-test with $p$ -value $<0.05$ ).", "Numbers in bold indicate the best strategies for the corresponding models on specific metrics."
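The ranking metrics used above can be computed per context from the candidate labels sorted by descending model score. A minimal sketch of ours; MAP is the mean over contexts of the average precision (AP) returned here, and MRR the mean of the reciprocal rank (RR):

```python
def rank_metrics(labels):
    # labels: 0/1 relevance of the candidates for one context, already
    # sorted by descending matching score (e.g. 10 candidates).
    n_pos = sum(labels)
    recall_at = lambda k: sum(labels[:k]) / n_pos
    # Reciprocal rank of the first positive candidate.
    rr = next(1.0 / (i + 1) for i, y in enumerate(labels) if y)
    # Average precision: mean of precision at each positive's rank.
    hits, ap = 0, 0.0
    for i, y in enumerate(labels):
        if y:
            hits += 1
            ap += hits / (i + 1)
    ap /= n_pos
    return {"P@1": float(labels[0]), "R@1": recall_at(1),
            "R@2": recall_at(2), "R@5": recall_at(5),
            "RR": rr, "AP": ap}
```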
], [ "Matching Models", "To test our learning approach, we select the following two models, which achieve superior performance on benchmarks.", "SMN: [32] first lets each utterance in a context interact with a response, and forms a matching vector for the pair through CNNs.", "Matching vectors of all the pairs are then aggregated with an RNN into a matching score.", "DAM: [40] performs matching under a representation-matching-aggregation framework, and represents a context and a response with stacked self-attention and cross-attention.", "Both models are implemented with TensorFlow according to the details in wu2017sequential and zhou2018multi.", "To implement co-teaching, we pre-train the two models using the training sets of Douban and ECD, and tune the models with the validation sets of the two data sets.", "Each pre-trained model is used to initialize both model A and model B.", "After co-teaching, the one of A and B that performs better on the validation sets is picked for comparison.", "We denote models learned with the teaching strategies in Section REF as Model-Margin, Model-Weighting, and Model-Curriculum respectively, where “Model” refers to either SMN or DAM.", "These models are compared with the pre-trained model, denoted as Model-Pre-training, and those reported in wu2017sequential,zhou2018multi,coling2018dua."
], [ "Implementation Details", "We limit the maximum number of utterances in each context to 10 and the maximum number of words in each utterance and response to 50 for computational efficiency.", "Truncation or zero-padding is applied when necessary.", "Word embeddings are pre-trained with Word2Vec [16] on the training sets of Douban and ECD, and the dimension of word vectors is 200.", "The co-teaching framework is implemented with TensorFlow.", "In co-teaching, learning rates (i.e., $\eta $ in Algorithm 1) in dynamic margins, dynamic instance weighting, and dynamic data curriculum are set to $0.001$ , $0.0001$ , and $0.0001$ respectively.", "We choose a mini-batch size of 200 in co-teaching with SMN and 50 in co-teaching with DAM.", "Optimization is conducted using stochastic gradient descent with the Adam algorithm [11].", "In teaching with dynamic margins, we vary $\lambda $ in $\lbrace 1, \frac{1}{2}, \frac{1}{3}, \frac{1}{5}, \frac{1}{10}, \frac{1}{15}, \frac{1}{20}\rbrace $ , and choose $\frac{1}{10}$ for SMN on Douban, $\frac{1}{2}$ for SMN on ECD, $\frac{1}{3}$ for DAM on Douban, and $\frac{1}{2}$ for DAM on ECD.", "In teaching with dynamic data curriculum, we select $\delta $ in $\lbrace 0.1, 0.2, ..., 0.9, 1.0\rbrace $ , and find that $0.9$ is the best choice for both models on both data sets." 
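For concreteness, one step of co-teaching with the dynamic data curriculum (each peer hands the $\delta $ fraction of lowest-loss examples to the other) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the loss and update callables are hypothetical placeholders.

```python
import numpy as np

def coteach_step(loss_a, loss_b, update_a, update_b, batch, delta=0.9):
    """One co-teaching step with a dynamic data curriculum.

    Each peer ranks the mini-batch by its own loss and hands the delta
    fraction with the smallest losses to the other model, on the
    assumption that low-loss examples are more likely to be clean.
    `loss_*` and `update_*` are hypothetical callables.
    """
    losses_a = np.array([loss_a(x) for x in batch])
    losses_b = np.array([loss_b(x) for x in batch])
    keep = max(1, int(delta * len(batch)))
    for_b = [batch[i] for i in np.argsort(losses_a)[:keep]]  # A selects for B
    for_a = [batch[i] for i in np.argsort(losses_b)[:keep]]  # B selects for A
    update_a(for_a)
    update_b(for_b)
    return for_a, for_b
```

With $\delta = 0.9$ (the value found best in the experiments), each peer discards its own 10% highest-loss examples before teaching the other.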
], [ "Evaluation Results", "Table REF reports evaluation results of co-teaching with the three teaching strategies on the two data sets.", "We can see that all teaching strategies can improve the original models on both data sets, and improvement from the best strategy is statistically significant (t-test with $p$ -value $<0.05$ ) on most metrics.", "On Douban, the best strategy for SMN is teaching with dynamic margins, and it is comparable with teaching with dynamic instance weighting for DAM, while on ECD, for both SMN and DAM, the best strategy is teaching with dynamic data curriculum.", "The difference may stem from the nature of the training sets of the two data sets.", "The training set of Douban is built from random sampling, while the training set of ECD is constructed through response retrieval that may contain more false negatives.", "Thus, in training, Douban could be cleaner than ECD, making “hard data filtering” more effective than “soft labeling” on ECD.", "It is worth noting that on ECD, there are significant gaps between the results of SMN (pre-trained) reported in Table REF and those reported in coling2018dua, since SMN in this paper is evaluated on the human-judged test set while SMN in coling2018dua is evaluated on the automatically constructed test set that is homogeneous with the training set.", "This, to some extent, reflects the gap between training and test in real applications for the existing research on response selection, and thus demonstrates the merits of this work." 
], [ "Discussions", "In addition to the efficacy of co-teaching as a learning approach, we are also curious about Q1: if model A and model B can “co-evolve” when they are initialized with one network; Q2: if co-teaching is still effective when model A and model B are initialized with different networks; and Q3: if the teaching strategies are sensitive to the hyper-parameters (i.e., $\lambda $ in Equations (REF )-(REF ) and $\delta $ in Equation (REF )).", "Figure: Test P@1 of DAM with the three teaching strategies on ECD.", "All curves are smoothed by exponential moving average for readability.", "Table: Evaluation results of co-teaching initialized with different networks." ], [ "Answer to Q1:", "Figure REF shows P@1 of DAM vs. number of iterations on the test set of ECD under the three teaching strategies.", "Co-teaching with any of the three strategies can improve both the performance of model A and the performance of model B after pre-training, and the peer models move at almost the same pace.", "The results verify our claim that “by learning from each other, the peer models can get improved together”.", "Curves of dynamic margins oscillate more fiercely than the others, indicating that optimization with dynamic margins is more difficult than optimization with the other two strategies." 
], [ "Answer to Q2:", "as a case study of co-teaching with two networks of different capabilities, we initialize model A and model B with DAM and SMN respectively, and select teaching with dynamic margins for Douban and teaching with dynamic data curriculum for ECD (i.e., the best strategies for the two data sets when co-teaching is initialized with one network).", "Table REF shows a comparison between models before/after co-teaching.", "We find that co-teaching is still effective when starting from two networks, as both SMN and DAM get improved on the two data sets.", "Despite the improvement, it is still better to learn the two networks one by one, as co-teaching with two networks cannot bring more improvement than co-teaching with one network, and the performance of the stronger one between the two networks could also drop (e.g., DAM on Douban).", "We guess this is because the stronger model cannot be well taught by the weaker model, especially in teaching via “soft labels”, and as a result, it is not able to transfer more knowledge to the weaker one as well.", "https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average" ], [ "Answer to Q3:", "finally, we check the effect of hyper-parameters on co-teaching.", "Figure REF illustrates how the performance of DAM varies under different $\lambda $ s in teaching with dynamic margins on Douban.", "We can see that both small $\lambda $ s and large $\lambda $ s will cause a performance drop.", "This is because small $\lambda $ s will reduce the effect of margins, making clean examples and noisy examples indistinguishable in learning, while with large $\lambda $ s, some errors from the “soft labels” might be magnified, and thus hurt the performance of the learning approach.", "Figure REF shows the performance of DAM under different $\delta $ s in teaching with dynamic data curriculum on ECD.", "Similarly, DAM gets worse when $\delta $ becomes small or large, since a smaller $\delta $ means fewer data will be 
involved in training, while a larger $\delta $ brings more risk of introducing noise into training.", "Thus, we conclude that the teaching strategies are sensitive to the choice of hyper-parameters.", "Figure: Effects of $\lambda $ and $\delta $ on co-teaching.", "Experiments are conducted with DAM on the two data sets." ], [ "Related Work", "So far, methods used to build an open domain dialogue system can be divided into two categories.", "The first category utilizes an encoder-decoder framework to learn response generation models.", "Since the basic sequence-to-sequence models [25], [19], [22] tend to generate generic responses, extensions have been made to incorporate external knowledge into generation [17], [33], and to generate responses with specific personas or emotions [13], [36], [38].", "The second category designs a discriminative model to measure the matching degree between a human input and a response candidate for response selection.", "At the beginning, research along this line assumes that the human input is a single message [15], [26], [6], [27].", "Recently, researchers have begun to make use of conversation history in matching.", "Representative methods include the dual LSTM model [14], the deep learning to respond architecture [34], the multi-view matching model [39], the sequential matching network [32], [31], the deep attention matching network [40], and the multi-representation fusion network [23].", "Our work belongs to the second group.", "Rather than crafting a new model, we are interested in how to learn the existing models with a better approach.", "Probably the most related work is the weakly supervised learning approach proposed in wu2018learning.", "However, there is a stark difference between our approach and the weak supervision approach: (1) weak supervision employs a static generative model to teach a discriminative model, while co-teaching dynamically lets two discriminative models teach each other and evolve together; (2) weak 
supervision needs pre-training a generative model with extra resources and pre-building an index for training data construction, while co-teaching has no such requirement; and (3) in terms of multi-turn response selection, weak supervision is only tested on the Douban data with SMN and the multi-view matching model, while co-teaching is proven effective on both the Douban data and the E-commerce data with SMN and DAM, which achieve state-of-the-art performance on benchmarks.", "Moreover, the improvement to SMN on the Douban data from co-teaching is bigger than that from weak supervision when the ratio of the positive and the negative is 1:1 in training (our results are $0.559$ (MAP), $0.601$ (MRR), and $0.424$ (P@1), while the results reported in [30] are $0.542$ (MAP), $0.588$ (MRR), and $0.408$ (P@1)).", "Our work, in a broad sense, belongs to the effort on learning with noisy data.", "Previous studies including curriculum learning (CL) [1] and self-paced learning (SPL) [8], [9] tackle the problem with heuristics, such as ordering data from easy instances to hard ones [21], [24] and retaining training instances whose losses are smaller than a threshold [9].", "Recently, fan2018learning propose a deep reinforcement learning framework in which a simple deep neural network is used to adaptively select and filter important data instances from the training data.", "jiang2017mentornet propose a MentorNet which learns a data-driven curriculum with a Student-Net to mitigate overfitting on corrupted labels.", "In parallel to curriculum learning, several studies explore sample weighting schemes where training samples are re-weighted according to their label-quality [28], [2], [30].", "Instead of considering data quality, wu2018NIPSL2T-DLF employ a parametric model to dynamically create appropriate loss functions.", "The learning approach in this work is mainly inspired by the work of BoHanNIPS2018 for handling extremely noisy labels.", "However, with substantial extensions, our 
work goes well beyond that work.", "First, we generalize the concept of “co-teaching” to a framework, and the method in BoHanNIPS2018 becomes a special case of this framework.", "Second, BoHanNIPS2018 only exploits a data curriculum, while in addition to the data curriculum, we also propose two new strategies for teaching with dynamic loss functions as special cases of the framework.", "Third, unlike BoHanNIPS2018, who only use one network to initialize the peer models in co-teaching, we study co-teaching with both one network and two different networks.", "Finally, BoHanNIPS2018 verified that their co-teaching method is effective in some computer vision tasks, while we demonstrate that the co-teaching framework is generally useful for building retrieval-based dialogue systems." ], [ "Conclusions", "We propose learning a matching model for response selection under a general co-teaching framework with three specific teaching strategies.", "The learning approach lets two matching models teach each other and evolve together.", "Empirical studies on two public data sets show that the proposed approach can generally improve the performance of existing matching models." ], [ "Acknowledgement", "We would like to thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Key Research and Development Program of China (No.", "2017YFC0804001), the National Science Foundation of China (NSFC Nos.", "61672058 and 61876196)." ] ]
1906.04413
[ [ "BasisConv: A method for compressed representation and learning in CNNs" ], [ "Abstract It is well known that Convolutional Neural Networks (CNNs) have significant redundancy in their filter weights.", "Various methods have been proposed in the literature to compress trained CNNs.", "These include techniques like pruning weights, filter quantization and representing filters in terms of basis functions.", "Our approach falls in this latter class of strategies, but is distinct in that we show both compressed learning and representation can be achieved without significant modifications of popular CNN architectures.", "Specifically, any convolution layer of the CNN is easily replaced by two successive convolution layers: the first is a set of fixed filters (that represent the knowledge space of the entire layer and do not change), which is followed by a layer of one-dimensional filters (that represent the learned knowledge in this space).", "For pre-trained networks, the fixed layer is just the truncated eigen-decomposition of the original filters.", "The 1D filters are initialized as the weights of linear combination, but are fine-tuned to recover any performance loss due to the truncation.", "For training networks from scratch, we use a set of random orthogonal fixed filters (that never change), and learn the 1D weight vectors directly from the labeled data.", "Our method substantially reduces i) the number of learnable parameters during training, and ii) the number of multiplication operations and filter storage requirements during implementation.", "It does so without requiring any special operators in the convolution layer, and extends to all known popular CNN architectures.", "We apply our method to four well known network architectures trained with three different data sets.", "Results show a consistent reduction in i) the number of operations by up to a factor of 5, and ii) the number of learnable parameters by up to a factor of 18, with less than 3% 
drop in performance on the CIFAR100 dataset." ], [ "Introduction", "While there has been a tremendous surge in convolutional neural networks and their applications in computer vision, relatively little is understood about how information is learned and stored in the network.", "This is evidenced by the fact that researchers have successfully proposed different approaches for compressing a network after it has been trained [1], including techniques like pruning weights [2], [3], [4], [5], [6], [7], assuming row-column separability [8], applying low rank approximations for computational gains [9], and using basis representation [8], [10].", "It is clear that CNNs do not need to explicitly learn a large number of coefficients in the manner in which they are currently trained.", "Based on this observation, we take a different view of the key component in CNNs - the filtering operation - and propose a fundamentally different approach that combines a \"fixed\" convolution operator (that is never trained or learned) with a learnable one-dimensional kernel.", "This is motivated by a salient observation that the filters are points in a hyper-dimensional space that is learned via the training process.", "We claim that the filters themselves are not important in the end; rather, it is the representation of the space itself that is the key.", "For networks that have already been trained, the underlying knowledge space of a layer can be easily represented as a truncated eigen decomposition of the filters.", "We can then efficiently fine-tune the coefficients of linear combination to find new points in this lower dimensional space which recover any loss in performance, and discard the original filters.", "As we will show, this approach dramatically reduces the number of filtering operations and filter storage requirements, without notable drop in performance.", "The same construct can also be used to train a network from scratch without having to explicitly learn the filter kernels 
across the network.", "For this scenario, we show that random basis functions can be used as fixed convolution kernels (which never require training), with one dimensional weight vectors that learn the relevant information.", "We refer to this ability to learn in a compressed format as \"compressed learning\", where instead of learning the 3D filter parameters, we only need to learn relatively fewer parameters that describe where these filters reside in the hyperdimensional information space in a given layer of the CNN.", "Thus, this paper unifies the goals of compressing previously trained networks, and training networks in a compressed format when learning new information from scratch.", "Figure: A side by side comparison of a conventional Convolutional layer (left) and a Basis Layer (right)." ], [ "Compressed Representation and Learning ", "Consider the fundamental convolution operation in any given layer of a convolutional neural network depicted on the left in Figure 1.", "Assume that an input block of data $x(m,n,l)$ (such as the activations or output of the previous layer) is convolved with a set of 3D filters $ h_k (m,n,l), \ k=1…P$ .", "The output $y_k (m,n)$ can be expressed as $y_k(m, n) = x(m, n, l) \ * \ h_k(m, n, l), \qquad 1 \le k \le P$ where $*$ represents the convolution operation.", "The right side of Figure 1 shows how the same output can be obtained using two successive convolution stages.", "Here, we assume that the filters can be expressed as a linear combination of Q basis functions $f_i (m,n,l), \quad i=1…Q$ , such that $h_k (m, n, l) = \sum _{i=1}^{Q} w_{ik} \cdot f_i(m, n, l)$ where $w_{ik}$ are the weights of linear combination.", "Using this representation, the output can be expressed as $y_k(m, n) = \sum _{i=1}^{Q} w_{ik} \cdot [x(m, n, l) * f_i(m, n, l)], \quad 1 \le k \le P$ .", "The key observation is that the Q convolution terms $z_i (m,n)=x(m,n,l)*f_i (m,n,l)$ need to be computed only once, and they are common to all $P$ 
outputs $y_k (m,n)$ .", "These can be stacked together to form the 3D intermediate result $z(m,n,q)$ , while the weights $w_{ik}$ can be treated as a $1 \times 1\times Q$ filter $w_k (q)$ .", "Therefore, the outputs $y_k (m,n)$ are simply the convolution of the two, i.e., $y_k (m,n)=w_k (q)*z(m,n,q)$ .", "We refer to this construct using two successive convolutions as BasisConv." ], [ "Compression of Pretrained Networks", "It is well known that eigen decomposition results in a compact basis that minimizes the reconstruction error achieved by a linear combination of basis functions.", "We therefore choose $f_i (m,n,l)$ as the eigen filters that represent the sub-space in which the original filters $h_k (m,n,l)$ lie.", "To obtain the eigen filters, we define the $LD^2 \times 1$ dimensional vector $\mathbf {h_k}$ as a vectorized representation of $h_k (m,n,l)$ , and construct the matrix $\mathbf {A}= [\mathbf {h_1} \ \mathbf {h_2} \ ... \ \mathbf {h_P}]$ with $\mathbf {h_k}$ as its columns.", "The eigenvectors of $\mathbf {A A^T}$ represent the sub-space of the filters, and satisfy the relation $\mathbf {A A^T f_i}=\lambda _i \mathbf {f_i}$ , where $\mathbf {f_i}$ are the eigenvectors, and $\lambda _i$ are the corresponding eigenvalues.", "The eigen filter $f_i (m,n,l)$ is readily obtained by re-ordering the elements of the eigenvector $\mathbf {f_i}$ into a $D \times D \times L$ array.", "Although the number of possible eigenvectors is equal to the dimensionality of the space, we select a small subset of eigenvectors which correspond to the largest Q eigen-values and thus best represent the dominant coordinates of the filters’ subspace.", "Since the eigen-values represent the information present in each eigen-vector, in practice we will use the metric $t = \frac{ \sum _{i=1}^{Q} \lambda _i}{ \sum _{i=1}^{LD^2} \lambda _i}$ to choose Q such that most of the relevant information is retained in the selected eigen-vectors.", "The decomposition of the filters $h_k$ 
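The equivalence of the two constructions can be checked numerically. The following is an illustrative sketch (array sizes and the naive `conv_valid` helper are our own, not from the paper), verifying that the two-stage BasisConv output matches direct convolution with the recombined filters:

```python
import numpy as np

def conv_valid(x, h):
    """'Valid' correlation of input x (M x N x L) with a 3D filter h (D x D x L)."""
    M, N, L = x.shape
    D = h.shape[0]
    out = np.zeros((M - D + 1, N - D + 1))
    for m in range(M - D + 1):
        for n in range(N - D + 1):
            out[m, n] = np.sum(x[m:m + D, n:n + D, :] * h)
    return out

rng = np.random.default_rng(0)
M, N, L, D, P, Q = 8, 8, 3, 3, 6, 4
x = rng.standard_normal((M, N, L))

# Q basis filters and P weight vectors define the P "virtual" filters h_k.
f = rng.standard_normal((Q, D, D, L))
w = rng.standard_normal((Q, P))
h = np.einsum('iq,imnl->qmnl', w, f)     # h_k = sum_i w_ik * f_i

# Stage 1: convolve once with each basis filter; stage 2: 1x1xQ recombination.
z = np.stack([conv_valid(x, f[i]) for i in range(Q)], axis=-1)
y_basis = np.einsum('mni,ik->mnk', z, w)

# Direct convolution with the P recombined filters gives the same result.
y_direct = np.stack([conv_valid(x, h[k]) for k in range(P)], axis=-1)
assert np.allclose(y_basis, y_direct)
```

Because the basis responses $z_i$ are shared by all $P$ outputs, only Q expensive 3D convolutions are performed regardless of P.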
of any given layer of the network can be succinctly expressed in matrix vector notation by defining $\mathbf {F}= [\mathbf {f_1} \ \mathbf {f_2} \ ... \ \mathbf {f_Q}]$ (i.e.", "the matrix of eigenvectors of the filters for that layer) so that $\mathbf {h_k} = \mathbf {Fw_k}$ and $\mathbf {w_k} = [w_{1k}\ w_{2k}\ ... \ w_{Qk}]$ is a $Q \times 1$ vector of weights.", "Since $\mathbf {F^T F=I}$ (i.e.", "the identity matrix), the weights are easily obtained by computing $\mathbf {w_k} = \mathbf {F^T h_k}$ .", "Depending on the choices of $P$ and $Q$ , this can lead to substantial reduction in the number of multiplication operations.", "Specifically, let $O$ represent the number of multiplications for one convolution operation (between $x(m,n,l)$ and either $h_k (m,n,l)$ or $f_i (m,n,l)$ ).", "If the size of the filters is $D \times D \times L$ , and the size of the input data is $M \times N \times L$ , it is easy to show that $O=LD^2 (M-D+1)(N-D+1)$ .", "Therefore, the number of multiplications required in Eq.", "(REF ) is $A = PO = PLD^2 (M-D+1)(N-D+1)$ while the number of multiplications required in Eq. (3) is $\begin{split}B & = QLD^2 (M-D+1)(N-D+1)+PQ(M-D+1)(N-D+1) \\& = Q[LD^2+P](M-D+1)(N-D+1)\end{split}$ .", "We see that the ratio of the two is $\frac{A}{B}= \frac{PLD^2 (M-D+1)(N-D+1)}{Q[LD^2+P](M-D+1)(N-D+1)} = \frac{PLD^2}{Q[LD^2+P]}$ .", "Thus, as long as $LD^2 >> P$ , the number of multiplications will be reduced by a factor close to $P/Q$ (i.e.", "the ratio of the original number of filters and the number of basis filters used)." 
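The eigenfilter construction and the multiplication count can be sketched as follows; the sizes are arbitrary (our choice, not the paper's), and numpy's SVD is used as an equivalent route to the eigenvectors of $\mathbf {A A^T}$ :

```python
import numpy as np

rng = np.random.default_rng(1)
D, L, P, Q = 3, 16, 64, 12

# P vectorized filters as the columns of A (LD^2 x P).
A = rng.standard_normal((L * D * D, P))

# The eigenvectors of A A^T with the Q largest eigenvalues are the
# left singular vectors of A, which is cheaper to compute when LD^2 >> P.
U, s, _ = np.linalg.svd(A, full_matrices=False)
F = U[:, :Q]                  # LD^2 x Q eigenfilter matrix, F^T F = I
W = F.T @ A                   # weights w_k = F^T h_k, one column per filter
A_hat = F @ W                 # rank-Q reconstruction of the filters

rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)

# Multiplication ratio A/B = P L D^2 / (Q (L D^2 + P)) from the text.
ratio = (P * L * D * D) / (Q * (L * D * D + P))
```

For these sizes the ratio is about 3.7, i.e. close to $P/Q$ when $LD^2 \gg P$; for Gaussian random "filters" (which have no low-rank structure) the rank-Q reconstruction error stays large, whereas trained filters are far more compressible.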
], [ "Compressed Learning", "The architecture shown in Figure REF is not only amenable to reducing the filter storage requirements and multiplications required for each convolution layer, but also to learning in the compressed space, where the number of learnable parameters is substantially reduced.", "Recall that the number of learnable parameters in each original filter is $LD^2$ .", "Since there are $P$ such filters, the total number of original learnable parameters is $PLD^2$ .", "However, the total number of \"learnable\" parameters for BasisConv is $PQ$ (depicted in Figure REF as $P$ one-dimensional filters of length $Q$ ).", "Therefore, the reduction in the number of learnable parameters is $LD^2/Q$ .", "If $LD^2>>Q$ , it is clear that the number of scalar weights that need to be refined is substantially less than the original number of learnable parameters.", "For pretrained networks, fine tuning is achieved by retraining $\mathbf {w_k}$ while freezing the eigenfilters in each basis convolution layer.", "The reason is that the weight vectors $\mathbf {w_k}$ are the lower-dimensional embeddings of the original filters in the sub-space represented by the informative eigenvectors.", "Thus, while the eigenvectors represent the “knowledge space” captured in a given layer of the network, the weights represent the specific points within this space where each filter resides.", "This observation allows us to fine-tune the weights directly to mitigate the approximation errors at each layer (without having to fine-tune the filters explicitly).", "The more interesting scenario arises for training a network from scratch in compressed format.", "Of course, if the final values of $\mathbf {h_k}$ are not known, then it is not possible to use the eigen space representation.", "Therefore, for compressed learning from scratch, we propose to initialize the columns of $\mathbf {F}$ with random vectors that will remain fixed, and only allow the coefficients $\mathbf {w_k}$ to update 
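For concreteness, the $LD^2/Q$ parameter-count reduction works out as follows for one plausible layer size (illustrative numbers, not taken from the paper):

```python
# Learnable parameters per layer for illustrative sizes: P filters of
# size D x D x L versus P weight vectors of length Q.
D, L, P, Q = 3, 64, 128, 24
conventional = P * L * D * D          # 3D filter coefficients
basisconv = P * Q                     # 1x1xQ recombination weights
reduction = conventional / basisconv  # equals L * D^2 / Q
assert reduction == (L * D * D) / Q
```

Here the conventional layer has 73,728 learnable filter coefficients against 3,072 for BasisConv, a 24x reduction, independent of P.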
during training.", "In other words, we never need to train 3D filter coefficients, but just the weights of linear combination.", "Here, we assume $\mathbf {F}$ represents a random matrix whose columns $\mathbf {f_i},1\le i \le Q$ , are orthonormal random vectors of dimension $LD^2 \times 1$ so that $\mathbf {F^TF} = \mathbf {I}$ is the identity matrix.", "The question is how such random vectors should be chosen to represent the underlying knowledge space of the filters.", "Of course, the ideal (but unknown) filter $\mathbf {h_k}$ (which is of size $LD^2 \times 1$ ) can be exactly represented as a linear combination of $LD^2$ such orthogonal random vectors.", "However, since $\mathbf {F}$ only has $Q$ columns, the error between the ideal filter and its linear approximation $\mathbf {\hat{h}_k} = \mathbf {Fb_k}$ is $\mathbf {e} = \mathbf {Fb_k-h_k}$ .", "The minimum squared error solution is $\mathbf {b_k}=\mathbf {F^T h_k}$ , which yields $||\mathbf {e}||^2 = \mathbf {h^T_k}[\mathbf {I-FF^T}]\mathbf {h_k}$ .", "Therefore, the relative squared error is bounded by $(1 - \lambda _{max}) \le \frac{||\mathbf {e}||_2^2}{||\mathbf {h_k}||_2^2} \le (1 - \lambda _{min})$ where $\lambda _{min}$ and $\lambda _{max}$ are the minimum and maximum eigenvalues of $\mathbf {FF^T}$ .", "In other words, an upper bound on the relative approximation error can be minimized by making $\lambda _{min}$ as large as possible, while the lower bound can be reduced by ensuring that $\lambda _{max}$ is as large as possible.", "The sample realizations of the random vectors used for actual experiments can be judiciously chosen to achieve these objectives to ensure that they serve as a reasonable choice for basis filters." ], [ "Background and Related Work", "Filter pruning is probably the earliest explored research direction for compression and efficient implementation of CNNs.", "Le 
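One simple way to realize such a fixed random basis (an assumption on our part; the paper does not specify the construction) is to orthonormalize Gaussian random vectors with a QR decomposition, which also lets us check the least-squares error expression and its eigenvalue bounds numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, Q = 72, 10          # dim = L*D^2 for, e.g., D = 3, L = 8

# Orthonormalize Q Gaussian random vectors via QR to get the fixed basis F.
G = rng.standard_normal((dim, Q))
F, _ = np.linalg.qr(G)
assert np.allclose(F.T @ F, np.eye(Q))

# Least-squares approximation of a random "ideal" filter h in span(F).
h = rng.standard_normal(dim)
b = F.T @ h              # minimum squared error coefficients b_k = F^T h_k
e = F @ b - h

# ||e||^2 = h^T (I - F F^T) h, and the relative squared error lies
# between (1 - lambda_max) and (1 - lambda_min) of F F^T.
assert np.isclose(e @ e, h @ (np.eye(dim) - F @ F.T) @ h)
lam = np.linalg.eigvalsh(F @ F.T)
rel2 = (e @ e) / (h @ h)
assert (1 - lam.max()) - 1e-9 <= rel2 <= (1 - lam.min()) + 1e-9
```

Note that for orthonormal F with Q < LD^2 the eigenvalues of $\mathbf {FF^T}$ are exactly Q ones and the rest zeros, so the bounds become 0 and 1; the actual error then depends on how much of each filter's energy falls inside the random subspace.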
Cun et al. [2] and Hassibi et al. [3] showed that the second derivative of the loss can be used to reduce the number of connections in a network.", "This strategy not only yields an efficient network but also improves generalization.", "However, these methods are only applicable when training the network from scratch.", "More recently, however, there has been growing interest in pruning redundancies from a pre-trained network.", "Han et al. [5] proposed a compression method which aims to learn not only the weights but also the connections between neurons from training data.", "Srinivas et al. [6] proposed a data-free method to prune neurons instead of whole filters.", "Chen et al. [7] proposed a hash based parameter sharing strategy which in turn reduces storage requirements.", "Filter quantization has also been used for network compression.", "These methods aim to reduce the number of bits required to represent the filters, which can in turn lead to efficient CNN implementation.", "Quantization using k-means clustering has been explored by Gong et al. [11] and Wu et al. [12].", "Similarly, Vanhoucke et al. [13] showed that 8-bit quantization of the parameters can result in significant speed-up with minimal loss of accuracy.", "In contrast, [4] combined quantization with pruning.", "A special case of quantized networks are binary networks, which use only one bit to represent the filter values.", "Some of the works which explore this direction are BinaryConnect [14], BinaryNet [15] and XNORNetworks [16].", "Their main idea is to directly learn binary weights or activations during model training.", "Knowledge Distillation methods train a smaller network to mimic the output(s) of a larger pre-trained network.", "[17] is one of the earliest works exploring this idea.", "They trained a smaller model from a complex ensemble of classifiers without significant loss in accuracy.", "More recently, [18] further developed this method and proposed 
a knowledge distillation framework, which eased the training of networks.", "Another adaptation of [17] is [19], which aims to compress deep and wide networks into shallower ones.", "[20] also used this idea to transfer knowledge from larger networks to much shallower ones using an adversarial loss.", "Our work relates to a class of techniques that rely on basis functions to represent the convolution filters, but differs in several key respects.", "For instance, in [8], Jaderberg et al. have proposed a similar two stage decomposition in terms of basis functions followed by 1D convolutions for recombining the outputs of the basis filters.", "However, to achieve processing speed, their focus is on approximating full rank filter banks using rank-1 basis filters, which were optimized to reconstruct the original filters and the response of the CNN to the training data.", "It was shown that this method leads to significant speed up of a four stage CNN for character recognition.", "However, the authors do not address the problem of learning in compressed format, nor how this method might impact the performance of other well known CNN architectures on standard data sets.", "Qiu et al. [10] have also observed that a conventional convolution can be represented as two successive convolutions involving a basis set and projection coefficients, but their construct differs from the one proposed in Figure 1.", "Their focus is on 2D Fourier Bessel functions as a basis set for reducing the number of operations required within a given 3D filter kernel, while noting that random basis functions also tend to perform well.", "Although this method learns with fewer parameters than conventional CNNs, our approach exploits the redundancy in the full 3D structure of the convolution layer (across all channels and filters) and therefore necessitates even fewer learnable parameters." 
], [ "Experiments", "As described in section 3, we can use BasisConv to represent all pre-trained convolution layers in a traditional ConvNet in compressed form.", "Additionally, we can train such networks (referred to as BasisNet) from scratch in this format.", "We now describe our experiments in detail." ], [ "Datasets and Models", "We performed our experiments on three publicly available image classification datasets.", "These are CIFAR10, CIFAR100 and SVHN.", "All three datasets contain 32 $\times $ 32 pixel RGB images.", "CIFAR10 and SVHN contain 10 object classes while CIFAR100 has 100 classes.", "We tested four different CNN architectures with BasisConv.", "These are Alexnet [21], VGG16 [22], Resnet110 [23] and Densenet190 [24].", "We used the pytorch implementation and pre-trained weights of these networks provided by [25], since this implementation is suitable for 32 $\times $ 32 input images, unlike the originally proposed networks which are designed for 224 $\times $ 224 input size.", "Figure: A comparison of compressibility for 4 network architectures, pre-trained on the CIFAR100 dataset.", "(a) shows the number of retained basis filters as a percentage as we reduce t from 1.0 to 0.7, while (b) plots the test accuracy against the percentage of retained filters.", "Red dots indicate the initial operating points for the compressed networks, which are later fine-tuned to improve accuracy and obtain the results shown in Table REF .", "Table: Comparison of number of learnable parameters (in Millions) between ConvNet and BasisNet." 
], [ "Network Compression", "To compress a pretrained network, we replace each convolution layer in the network with a BasisConv layer.", "The resulting network is referred to as BasisNet.", "A BasisConv layer implements two operations: i) convolution with the basis filters, and ii) linear combination of the outputs using the projection coefficients, which is implemented as convolution with 1x1 filters.", "Parameters for the BasisConv layer are computed from the weights of the original convolution layer using eigen decomposition, as explained in section REF .", "Compression emerges from the fact that only a small number (Q) of basis filters are needed to reconstruct the output of a convolution layer, and the rest can be safely discarded.", "To determine Q, we first sort the eigenvectors such that their corresponding eigenvalues are in descending order.", "The first Q eigenvectors are then selected such that the ratio of the sum of their eigenvalues and the sum of all eigenvalues exceeds a threshold t. 
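The selection of Q from the eigenvalue spectrum can be sketched with a small helper; `choose_q` and the example spectrum below are our own illustration, not the authors' code.

```python
import numpy as np

def choose_q(eigvals, t=0.95):
    """Smallest Q whose top-Q eigenvalues capture at least a fraction t
    of the total eigenvalue mass. `eigvals` need not be pre-sorted."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
    frac = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(frac, t) + 1)

# A fast-decaying spectrum: a few eigenvalues hold most of the energy.
lam = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.25])
assert choose_q(lam, t=0.9) == 4   # top 4 of 7 capture 93.75% of the mass
```

Lowering t discards more basis filters (smaller Q, more compression) at the cost of reconstruction fidelity, which matches the accuracy-versus-retained-filters trade-off reported below.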
Naturally, the maximum value of t is 1.0, at which point each BasisConv layer retains all basis filters and hence all of the information contained in the original convolution layer.", "As we reduce t, we are able to discard more filters corresponding to the smaller eigenvalues, with some drop in test accuracy.", "Figure REF compares the compression potential of all four networks, pre-trained on CIFAR100.", "Figure REF shows the percentage of retained filters in the compressed network as we reduce t from 1.0 to 0.7, while Figure REF plots the accuracy against the percentage of retained filters.", "In this figure we see that all four networks can discard 20% of their filters with little change in test accuracy, with Densenet190 being the most compressible: it can discard up to 60% of its filters with only a 3% drop in accuracy.", "It should be noted that these plots show the performance of the networks prior to fine tuning of the learnable parameters (also referred to as $\textbf {w}_k$ in Figure 1), and the red dots indicate the compression points selected for performance optimization by subsequent fine tuning.", "Table: Comparison of the maximum compression achieved for each network architecture pre-trained on the CIFAR100 dataset.", "As we can see, Densenet190 is the most compressible, with a reduction in the number of filters and multiplications by a factor of more than 5 while keeping the accuracy within 3% of the original network.", "GFlop refers to the number of multiplications in Billions, counting only the multiplications in convolutional layers." 
], [ "Fine tuning of learnable parameters", "As we further reduce t, we obtain more compression, but test accuracy also drops significantly.", "To mitigate this, we train each network in two steps for a total of 25 epochs.", "In step one we train the projection coefficients (i.e.", "the 1D filters $\\textbf {w}_k$ ) only for 15 epochs with SGD.", "Since our network has significantly fewer learnable parameters (see Table REF ), 15 epochs are enough to re-train these coefficients.", "We used a step learning-rate schedule, starting at 0.1 and dividing by 10 every 5 epochs.", "In step two we update all non-convolutional parameters (including the fully connected layers) in the network (but hold the basis filters constant) for another 10 epochs with a learning rate of 5e-4.", "This process enables us to recover test accuracy even when large numbers of basis filters are discarded.", "Table REF compares the maximum compression we were able to achieve for all four networks, pre-trained on CIFAR100, while keeping the accuracy within 3% of the original network.", "We can see here that Densenet190 is the most compressible, with a reduction in the number of filters by a factor of 5.7 and in the number of multiplications by a factor of more than 5.3.", "Figure: Test accuracy vs compression, for the VGG16 trained on three datasets" ], [ "Network Compression vs Dataset", "Intuitively, it is clear that the complexity of the information learned by a network during training must depend on the complexity of the dataset it was trained on.", "This means that the same network architecture trained on different datasets will have different compressibility.", "To verify this, we trained VGG16 on the three image classification datasets mentioned in section REF .", "Figure REF shows the graph of test accuracy for these datasets plotted against the percentage of filters retained in the compressed network.", "These trends are consistent with our intuition that the SVHN dataset is the simplest and the resulting trained 
network is highly compressible.", "On the other hand, CIFAR100 is the most complex of the three datasets, which is reflected in the faster drop in performance with increasing compression.", "Not surprisingly, as the complexity of the problem increases, more knowledge is stored in each convolution layer, and a larger number of eigenfilters is required to capture the most relevant information." ], [ "Training from scratch", "To illustrate the process for learning with random basis sets, we describe an example provided in Matlab 2018b that trains a simple CNN for image classification on the CIFAR10 dataset.", "The original network configuration (shown in Table REF on left) has three convolution layers (two of size 5x5x32 and one of size 5x5x64).", "This network is trained for 40 epochs, and achieves a classification test accuracy of 74%.", "The number of learnable parameters in the three convolution layers is 79,328.", "On the right, each convolution layer is replaced by the BasisConv structure, which reduces the number of learnable parameters to 6400 (i.e.", "a reduction by a factor of 12).", "In this configuration, the 3D filters in the layers marked “conv_1”, “conv_3”, and “conv_5” are initialized as orthonormal random functions but then held frozen during the learning process, while the 1D filters marked “conv_2”, “conv_4”, and “conv_6” are allowed to update.", "After 40 epochs, the configuration on the right achieves a test accuracy of 71%.", "Setting aside the fully connected layers (which are common to both configurations), this experiment illustrates how BasisConv reduces the number of learnable parameters by an order of magnitude, without significant loss in performance.", "Additionally, we trained Alexnet and VGG16 from scratch with random bases in PyTorch.", "In these experiments we normalized the intermediate tensor with BatchNormalization before convolving with the 1D filters, $\\mathbf {w_k}$ .", "Recall that the conventional versions of these networks 
achieve 43.9% and 68.7% accuracy on the CIFAR100 dataset, respectively.", "We were able to reach 42.5% and 66.3% using BasisConv for Alexnet and VGG16, respectively, which is within 2% of the accuracy of the original networks, while reducing the number of learnable parameters by factors of 7.2 and 7.9, respectively.", "Table: A comparison of a conventional convolutional network (left) with the corresponding basis convolution network (right).", "In BasisConv, the 3D convolution kernels are never trained (indicated by *), which reduces the number of learnable parameters." ], [ "Conclusion", "In summary, we have presented a general method for network compression and efficient implementation, which can be easily incorporated into existing CNN architectures.", "For a pre-trained network, each convolution layer is replaced by two successive convolutions: first with eigen basis filters (that capture the underlying knowledge space of the layer), followed by 1D kernels (that can be finetuned) to generate the activations.", "We used four network architectures and three datasets to show that our method consistently reduces i) the number of learnable parameters by an order of magnitude, and ii) multiplications and filter storage by as much as a factor of 5, with less than 3% degradation in performance.", "Finally, using random basis functions and significantly fewer learnable parameters, BasisNet achieves comparable performance to conventional CNNs when learning from scratch." ] ]
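The two-stage factorization summarized in the conclusion is easy to see in an im2col view of convolution. The sketch below is an editorial illustration under that assumption, not the authors' implementation: convolving with the full filter bank equals convolving with the basis filters and then combining the responses with 1x1 coefficients, exactly so when all basis directions are kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer with 32 filters of shape 8x3x3, viewed through im2col:
# each column of X is one flattened input patch.
W = rng.standard_normal((32, 8 * 3 * 3))   # flattened 3-D filters
X = rng.standard_normal((8 * 3 * 3, 100))  # 100 input patches

# Eigen decomposition of the filter Gram matrix gives the basis filters.
eigvals, eigvecs = np.linalg.eigh(W.T @ W)
basis = eigvecs[:, np.argsort(eigvals)[::-1]]  # columns sorted by eigenvalue
coeffs = W @ basis                             # projection coefficients (1x1 filters)

Q = 32  # W has rank 32, so keeping 32 basis directions reproduces the layer exactly
y_direct = W @ X                                 # ordinary convolution
y_basis = coeffs[:, :Q] @ (basis[:, :Q].T @ X)   # BasisConv: basis conv, then combine
```

Dropping columns beyond Q trades a small reconstruction error for fewer multiplications, since the basis convolution is shared and only the cheap 1x1 combination depends on the output filters.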
1906.04509
[ [ "Closed-form expressions for Farhi's constant and related integrals and\n its generalization" ], [ "Abstract In a recent work, Farhi developed a Fourier series expansion for the function $\\,\\ln{\\Gamma(x)}\\,$ on the interval $(0,1)$, which allowed him to derive a nice formula for the constant $\\,\\eta := 2 \\int_0^1{\\ln{\\Gamma(x)} \\, \\sin{(2 \\pi x)} \\, dx}$.", "At the end of that paper, he asks whether $\\eta$ could be written in terms of other known mathematical constants.", "Here in this work, after deriving a simple closed-form expression for $\\eta$, I show how it can be used for evaluating other related integrals, as well as certain logarithmic series, which allows for a generalization in the form of a continuous function $\\eta(x)$, $x \\in [0,1]$.", "Finally, from the Fourier series expansion of $\\,\\ln{\\Gamma(x)}$, $x \\in (0,1)$, I make use of Parseval's theorem to derive a closed-form expression for $\\,\\int_0^1{\\ln^2{\\Gamma(x)}~dx}$." ], [ "Introduction", "The Fourier series expansion of the real function $\\,\\ln {\\Gamma (x)}\\,$ over the open interval $(0,1)$ , where $\\,\\Gamma (x) := \\int _0^\\infty {t^{\\,x -1}\\,e^{-t} \\: d t}\\,$ is the classical Gamma function, was developed by Farhi in a recent note [8].", "There, he shows that $\\ln {\\Gamma (x)} = \\frac{1}{2} \\ln {\\pi } + \\pi \\, \\eta \\left(\\frac{1}{2} -x \\right) -\\frac{1}{2} \\ln {\\sin {\\!", "(\\pi x)}} +\\frac{1}{\\pi } \\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n} \\, \\sin {\\!", "(2 \\pi n x)}}$ holds for all $\\,x \\in (0, 1)$ , where $\\eta := 2 \\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {\\!", "(2 \\pi x)} \\: dx}$ is the Farhi constant.", "Numerical integration yields $\\,\\eta = 0.76874789\\ldots $ At the end of Ref.", "[8], Farhi asks if $\\,\\eta \\,$ could be written in terms of other known constants.", "Here in this paper, we make use of Farhi's formula itself to derive a closed-form expression for $\\eta $ in terms of $\\pi $ , $\\gamma $ , and $\\ln 
{(2 \\pi )}$ only, where $\\,\\gamma := \\lim _{\\,n \\rightarrow \\infty }{(H_n -\\ln {n})}\\,$ is the Euler–Mascheroni constant, $\\,H_n := \\sum _{k=1}^n{\\frac{1}{k}}\\,$ being the $n$ -th harmonic number.", "We also show how to generalize Eq.", "(REF ) and how Eq.", "(REF ) can be taken into account for evaluating some logarithmic series and other related integrals, e.g.", "$\\,\\int _0^{1/2}{\\ln {\\Gamma (x)} \\, \\sin {(2 \\pi x)} \\, d x}$ , $\\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {(\\pi x)} \\, d x}$ , $\\int _0^1{\\psi (x) \\, \\sin ^2{\\!", "(\\pi x)} \\, dx}$ , $\\int _0^1{\\ln ^2{\\Gamma (x)} \\, d x}$ , and $\\,\\int _0^{1/2}{\\psi (x) \\, \\sin ^2{\\!", "(\\pi x)} \\, d x}$ , where $\\,\\psi (x) := \\frac{d }{dx} \\ln {\\Gamma (x)}\\,$ is the digamma function." ], [ "Farhi's constant $\\eta $ and some related integrals and series", "[Farhi's constant] The exact closed-form $\\eta = \\frac{\\:\\gamma +\\ln {(2 \\pi )}}{\\pi }$ holds, where $\\,\\eta \\,$ is the definite integral in Eq.", "(REF ).", "On taking $\\,x=\\frac{1}{4}\\,$ in Farhi's formula, our Eq.", "(REF ), one finds $\\ln {\\Gamma \\!\\left(\\frac{1}{4} \\right)} = \\frac{1}{2} \\ln {\\pi } + \\frac{1}{4} \\pi \\, \\eta +\\frac{1}{4} \\ln {2} +\\frac{1}{\\pi } \\,S \\, ,$ where $S := \\sum _{n=1}^\\infty {\\frac{\\,\\ln {n}}{n} \\, \\sin {\\!\\left(\\frac{\\pi }{2} \\, n \\right)}} = \\sum _{m=1}^\\infty {(-1)^{m+1} \\: \\frac{\\,\\ln {(2m -1)}}{2m -1}} \\, ,$ which agrees with Corollary 3 of Ref. [8].", "Now, we take into account a functional relation established by Coffey in Eq.", "(3.13b) of Ref.", "[4]; we correct here a mistake in the Stieltjes constants on the left-hand side of the corresponding equation as it appears in Ref. 
[4].", "$\\gamma _1\\!\\left( \\frac{a +1}{2} \\right) -\\gamma _1\\!\\left( \\frac{a}{2} \\right) = \\ln {2} \\, \\left[ \\psi \\!\\left( \\frac{a+1}{2} \\right) -\\psi \\!\\left( \\frac{a}{2} \\right) \\right] +2\\,\\sum _{k=0}^\\infty {(-1)^{k+1}\\:\\frac{\\ln {(k+a)}}{k+a}} \\, ,$ where $\\,\\gamma _1(a)\\,$ is the first generalized Stieltjes constant, a coefficient in the Laurent series expansion of the Hurwitz zeta function $\\,\\zeta {(s,a)}\\,$ about its simple pole at $\\,s=1$ (here, $\\zeta {(s,a)} := \\sum _{n=0}^\\infty {1/(n +a)^s}$ , $a \\ne 0, -1, -2, \\ldots $ , a series that converges for all complex $s$ with $\\,\\Re {(s)} > 1$ ),", "i.e.", "$\\frac{1}{s-1} +\\sum _{n=0}^\\infty {\\frac{(-1)^n}{n!}", "\\, \\gamma _n(a)\\,(s-1)^n}$ .", "For $\\,a=\\frac{1}{2}$ , one has $S = \\frac{1}{4} \\left[ \\gamma _1\\!\\left( \\frac{1}{4} \\right) -\\gamma _1\\!\\left( \\frac{3}{4} \\right) +\\pi \\,\\ln {4} \\right] .$ On the other hand, in Eq.", "(3.21) of Ref.", "[4] Coffey showed that $\\gamma _1\\!\\left( \\frac{1}{4} \\right) -\\gamma _1\\!\\left( \\frac{3}{4} \\right) = -\\,\\pi \\left[ \\ln {(8 \\pi )} +\\gamma -2\\,\\ln {\\!\\left( \\frac{\\Gamma (1/4)}{\\Gamma (3/4)} \\right)}\\right] \\nonumber \\\\= \\pi \\left[4 \\,\\ln {\\Gamma \\!\\left( \\frac{1}{4} \\right)} -2\\,\\ln {4} -3\\,\\ln {\\pi } -\\gamma \\right] ,$ where the last step required the use of the reflection formula for $\\Gamma (x)$ , namely $\\Gamma (1-x) \\; \\Gamma (x) = \\frac{\\pi }{\\,\\sin {(\\pi x)}} \\, .$ On substituting the result of Eq.", "(REF ) in Eq.", "(REF ), one finds $S = \\pi \\, \\ln {\\Gamma \\!\\left( \\frac{1}{4} \\right)} -\\frac{\\pi }{2} \\ln {2} -\\frac{3}{4} \\,\\pi \\ln {\\pi } -\\frac{\\pi }{4} \\, \\gamma \\, .$ The proof is completed by substituting this expression into Eq.", "(REF ).", "$\\Box $ The closed-form expression for $\\,\\eta \\,$ established above could also have been deduced from Eq.", "(6.443.1) of Ref.", "[10] (the case $\\,n=1$ ), or it could 
have been determined by comparing Eq.", "(REF ) with the Kummer's Fourier series for $\\,\\ln {\\Gamma (x)}\\,$ mentioned by Connon in Eq.", "(2.9) of Ref.", "[6] (see also Eq.", "(5.8) of Ref.", "[7]), namely $\\ln {\\Gamma (x)} = \\frac{1}{2} \\ln {\\!\\left(\\frac{\\pi }{\\,\\sin {(\\pi x)}}\\right)} +\\left(\\frac{1}{2} - x \\right) [\\gamma +\\ln {(2 \\pi )}] +\\frac{1}{\\pi } \\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n} \\, \\sin {(2 \\pi n x)}} \\, .$ Note that in Corollary 9.6.53 of Ref.", "[5] Cohen uses Abel summation to derive a formula similar to that by Connon which holds for all real $\\,x \\notin \\mathbb {Z}$ .", "Another generalization of Farhi's formula was proposed by Blagouchine in Ex.", "20(c) of Ref.", "[2], namely $\\sum _{n=1}^\\infty {\\frac{\\,\\ln {(b\\,n)}}{n} \\, \\sin {(n\\,\\phi )}} = \\pi \\, \\ln {\\Gamma \\!\\left(\\frac{\\phi }{2 \\pi } \\right)} +\\frac{\\pi }{2} \\, \\ln {\\sin {\\!\\left(\\frac{\\,\\phi }{2}\\right)}} -\\frac{\\pi }{2} \\, \\ln {\\pi } \\nonumber \\\\+ \\, \\frac{\\:\\phi -\\pi }{2} \\left[ \\gamma +\\ln {\\!\\left(\\!\\frac{\\,2 \\pi }{b}\\right)} \\right] ,$ which is valid for all $\\,b>0\\,$ and $\\,\\phi \\in (0, 2 \\pi )$ .", "On substituting $\\,\\phi = 2 \\pi x$ , one shows that Connon's formula is the particular case $\\,b=1\\,$ of Eq.", "(REF ).", "Since the Farhi constant $\\eta $ is defined as an integral, other similar integrals can be explored, as, for instance, [An integral involving $\\,\\psi (x)\\,$ ] The following exact closed-form result holds: $\\int _0^1{\\psi (x) \\: \\sin ^2{\\!", "(\\pi x)} \\: dx} = -\\,\\frac{\\:\\gamma +\\ln {(2 \\pi )}\\,}{2} \\, .$ From Lemma , one has $\\pi \\, \\eta = 2 \\pi \\int _0^1{\\ln {\\Gamma (x)} \\left[ 2 \\sin {(\\pi x)} \\, \\cos {(\\pi x)} \\right] \\, d x} = \\gamma +\\ln {(2 \\pi )} \\, ,$ which implies that $I := \\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {(\\pi x)} \\, \\cos {(\\pi x)} \\, d x} = \\frac{\\gamma +\\ln {(2 \\pi )}}{4 \\pi } \\, .$ 
Integration by parts then yields $I = \\left[ \\ln {\\Gamma (x)} \\, \\frac{\\,\\sin ^2{(\\pi x)}}{\\pi } \\right]_{0^{+}}^1 - \\int _0^1{\\frac{\\,\\sin {(\\pi x)}}{\\pi } \\, \\left[ \\psi (x) \\, \\sin {(\\pi x)} +\\pi \\,\\cos {(\\pi x)}\\,\\ln {\\Gamma (x)} \\right] d x} \\nonumber \\\\= \\, 0 -\\frac{1}{\\pi } \\int _0^1{\\psi (x) \\, \\sin ^2{(\\pi x)} \\, d x} -\\int _0^1{\\sin {(\\pi x)} \\, \\cos {(\\pi x)} \\, \\ln {\\Gamma (x)} \\: d x} \\nonumber \\\\= - \\frac{1}{\\pi } \\int _0^1{\\psi (x) \\, \\sin ^2{(\\pi x)} \\, d x} -I \\, .", "\\qquad $ Therefore $2 \\, I = -\\frac{1}{\\,\\pi } \\int _0^1{\\psi (x) \\, \\sin ^2{(\\pi x)} \\: d x} \\, .$ The final result follows by substituting this integral in Eq.", "(REF ).", "$\\Box $ A simple closed-form expression can also be determined for a similar integral obtained by halving the argument of the sine in Eq.", "(REF ), i.e.", "$\\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {\\!", "(\\pi x)} \\: dx} = 0.46205312 \\ldots $ [An integral involving $\\,\\ln {\\Gamma {(x)}}\\,$ ] The exact closed-form result $\\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {\\!", "(\\pi x)} \\: dx} = \\frac{\\:\\ln {\\pi } -\\ln {2} +1}{\\pi }$ holds.", "On taking into account the logarithmic form of Eq.", "(REF ), namely $\\ln {\\Gamma (1-x)} +\\ln {\\Gamma (x)} = \\ln {\\pi } -\\ln {\\sin {(\\pi x)}} \\, ,$ one finds $\\int _0^1{\\left[ \\, \\ln {\\Gamma (1-x)} +\\ln {\\Gamma (x)} \\right] \\sin {(\\pi x)} \\: d x} \\nonumber \\\\= \\int _0^1{\\left[ \\, \\ln {\\pi } -\\ln {\\sin {(\\pi x)}}\\right] \\sin {(\\pi x)} \\: d x} \\nonumber \\\\= \\ln {\\pi } \\int _0^1{\\sin {(\\pi x)} \\: d x} -\\int _0^1{\\ln {\\sin {(\\pi x)}} \\, \\sin {(\\pi x)} \\: d x} \\nonumber \\\\= \\ln {\\pi }~\\frac{\\,2}{\\,\\pi } -\\frac{1}{\\pi } \\int _0^\\pi {\\sin {\\tilde{\\theta }} \\; \\ln {\\sin {\\tilde{\\theta }}} \\: d \\tilde{\\theta }} \\, .$ The symmetries of the first integrand with respect to $\\,x = \\frac{1}{2}\\,$ and of $\\,\\sin 
{\\tilde{\\theta }}\\,$ with respect to $\\,\\tilde{\\theta } = \\pi /2\\,$ lead to $2 \\int _0^1{\\ln {\\Gamma (x)} \\, \\sin {\\!", "(\\pi x)} \\: dx} = \\frac{\\,2}{\\,\\pi } \\, \\ln {\\pi } -\\frac{2}{\\pi } \\int _0^{\\,\\pi /2}{\\!\\sin {\\tilde{\\theta }} \\: \\ln {\\sin {\\tilde{\\theta }}} \\; d \\tilde{\\theta }} \\nonumber \\\\= 2 \\, \\frac{\\,\\ln {\\pi }}{\\pi } -2\\,\\frac{\\:\\ln {2} -1}{\\pi } \\, .$ $\\Box $ On searching for other closed-form results, I have realized that the integration of both sides of Farhi's formula, our Eq.", "(REF ), could lead to an interesting result.", "As usual, $\\,\\mathrm {Cl}_2(\\theta ) := \\Im {\\left[ \\mathrm {Li}_2\\left(e^{\\,i \\, \\theta }\\right) \\right]} = \\sum _{n=1}^\\infty {\\sin {(n \\, \\theta )}/n^2}\\,$ is the Clausen function and $\\,\\zeta ^{\\prime }(s,a)\\,$ denotes a partial derivative with respect to $s$ .", "[Integration of Farhi's formula] For all $\\,x \\in (0 ,1)$ ; in the interval $(0,1)$ , this logarithmic series corresponds to $\\, - \\, \\frac{1}{2} \\: \\frac{\\partial }{\\partial s} \\left[ \\mathrm {Li}_s\\!\\left(e^{2 \\pi x \\,i} \\right) +\\mathrm {Li}_s\\!\\left(e^{-2 \\pi x \\,i} \\right) \\right]_{s=2}$ , a result similar to that in Eq.", "(1.18) of Ref. 
.", "$\\sum _{n=1}^\\infty {\\frac{\\,\\ln {n}}{n^2} \\, \\cos {(2 \\pi n \\,x)}} = \\pi ^2 \\left( x\\,(1 -x) - \\frac{1}{6} \\right) \\left[\\,\\gamma +\\ln {(2 \\pi )} -1 \\right] +\\frac{\\pi }{2}\\,\\mathrm {Cl}_2(2 \\pi x) \\nonumber \\\\-\\,2 \\pi ^2\\,\\zeta ^{\\prime }(-1, x) \\, .$ The integration of both sides of Eq.", "(REF ) from 0 to $x$ , for any $\\,x \\in (0,1)$ , yields $\\int _0^x{\\ln {\\Gamma (t)} \\, d t} = \\frac{\\ln {\\pi }}{2} \\, x +\\frac{\\pi \\, \\eta }{2}\\,x -\\frac{\\pi \\,\\eta }{2} \\,x^2 -\\frac{1}{2} \\int _0^x{\\ln {\\sin {(\\pi t)}} \\, d t} \\nonumber \\\\+\\,\\frac{1}{\\pi } \\int _0^x{ \\left[\\,\\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n} \\sin {(2 \\pi n \\,t)} } \\right] d t} \\nonumber \\\\= \\left[\\frac{\\ln {\\pi }}{2} +\\frac{\\pi \\eta }{2} \\right] x -\\frac{\\pi \\eta }{2} \\,x^2 -\\frac{1}{2} \\int _0^x{\\ln {\\sin {(\\pi t)}} \\, d t} \\nonumber \\\\+\\,\\frac{1}{\\pi } \\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n} \\int _0^x{\\sin {(2 \\pi n \\,t)} \\, d t}} \\, ,$ where $\\,\\pi \\,\\eta = \\gamma +\\ln {(2 \\pi )}$ , as proved in Lemma .", "The absolute convergence of the last series above, for all $\\,x \\in (0,1)$ , is sufficient for the application of Fubini's theorem — see, e.g., the corollary of Theorem 7.16 in Ref.", "—, which validates the interchange of the integral and the series in the last step.", "Now, let us solve each integral above separately.", "Firstly, by definition, $\\int _0^x{\\ln {\\Gamma (t)} \\, d t} = \\psi ^{(-2)}(x)$ , known as the negapolygamma function, for which Adamchik showed in Ref.", "[1], as a particular case of his Eq.", "(14), that $\\psi ^{(-2)}(x) = \\frac{x \\, (1 -x)}{2} +\\frac{x}{2}\\,\\ln {(2 \\pi )} -\\zeta ^{\\prime }(-1) +\\zeta ^{\\prime }(-1, x) \\, ,$ where $\\,\\zeta ^{\\prime }(-1) = 1/12 -\\ln {A}$ , $\\,A\\,$ being the Glaisher–Kinkelin constant. By virtue of Eq.", "(5.2) of Ref.", "[6], namely $\\,\\zeta ^{\\prime }(-1) -\\zeta ^{\\prime }(-1,t) = \\ln {G(1+t)} 
-t\\,\\ln {\\Gamma (t)}$ , it is possible to write Eq.", "(REF ) in terms of the Barnes $G$ -function, but we shall not explore this form here.", "Secondly, $\\int _0^{\\,x}{\\ln {\\sin {(\\pi t)}} \\, d t} = -\\,\\frac{\\,\\mathrm {Cl}_2(2 \\pi x)\\,}{2 \\pi } - x \\, \\ln {2} \\, .$ Finally, on substituting $\\,\\int _0^x{\\sin {(2 \\pi n \\,t)} \\, d t} = -\\,[\\cos {(2 \\pi n \\,x)} -1]/(2 \\pi n)\\,$ in the last series in Eq.", "(REF ), one finds $\\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n} \\int _0^x{\\sin {(2 \\pi n \\,t)} \\, d t}} = -\\frac{1}{2 \\pi } \\, \\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n^2} \\, [\\cos {(2 \\pi n \\,x)}-1] } \\nonumber \\\\= -\\frac{1}{\\,2 \\pi } \\left[ \\, \\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n^2} \\, \\cos {(2 \\pi n \\,x)}} -\\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n^2}} \\right] ,$ which can be further simplified by noting that $\\sum _{n=1}^\\infty {\\frac{\\ln {n}}{n^2}} = -\\,\\zeta ^{\\prime }(2) = \\frac{\\,\\pi ^2}{6} \\left[ 12 \\, \\ln {A} -\\gamma -\\ln {(2 \\pi )} \\right] ,$ as shown by Glaisher in 1894 [9].", "The proof is completed by substituting the four closed-form expressions above into Eq.", "(REF ).", "$\\Box $ On putting $\\,x = \\frac{1}{2}\\,$ (or $\\,x = \\frac{1}{4}$ ) in Theorem , one finds $\\sum _{n=1}^\\infty {(-1)^n \\, \\frac{\\,\\ln {n}}{n^2}} = \\frac{\\,\\pi ^2}{12} \\, \\left[\\,\\gamma +\\ln {(2 \\pi )} -1 \\right] -2 \\pi ^2 \\, \\zeta ^{\\prime }\\!\\left(-1, \\frac{1}{2} \\right) \\nonumber \\\\= \\frac{\\:\\pi ^2}{12} \\left[\\,\\gamma +\\ln {(4 \\pi )} -12\\,\\ln A \\,\\right] .$ Since the function $\\,\\cos {(2 \\pi n\\,x)}$ , $x \\in (0,1)$ , remains the same when we exchange $\\,x\\,$ with $\\,1-x$ , both the first term on the right-hand side and the series in Theorem  do as well, whereas $\\,\\mathrm {Cl}_2(2 \\pi x)\\,$ changes sign.", "This simple observation leads to [Reflection formula for $\\,\\zeta ^{\\prime }{(-1,x)}\\,$ ] For all $\\,x \\in [0,1]$ , $\\zeta ^{\\prime 
}{(-1, x)} -\\zeta ^{\\prime }{(-1, 1-x)} = \\frac{\\,\\mathrm {Cl}_2(2 \\pi x)}{2 \\pi } \\, .$ This formula corresponds to the case $\\,n=1\\,$ of Eq.", "(21) in Ref. .", "Note that it remains valid at both endpoints $\\,x=0\\,$ and $\\,x=1\\,$ because $\\,\\zeta ^{\\prime }(-1,0) = \\zeta ^{\\prime }(-1,1) = \\zeta ^{\\prime }(-1)\\,$ and $\\,\\mathrm {Cl}_2(0) = \\mathrm {Cl}_2(2 \\pi ) = 0$ .", "For instance, for $\\,x=\\frac{1}{4}$ the reflection formula yields $\\zeta ^{\\prime }{\\left(-1, \\frac{1}{4} \\right)} -\\zeta ^{\\prime }{\\left(-1, \\frac{3}{4} \\right)} = \\frac{\\mathrm {Cl}_2(\\pi /2)}{2 \\pi } = \\frac{G}{2 \\pi } \\, ,$ where $\\,G := \\sum _{n=0}^\\infty {(-1)^n/(2n +1)^2}\\,$ is Catalan's constant.", "The presence of the factor $\\,\\sin {(2 \\pi x)}\\,$ accompanying the log-Gamma function in Eq.", "(REF ) suggests that further results can be found by multiplying both sides of Farhi's formula by that factor before integration.", "Then, let us define the real function $\\eta (x) := 2 \\int _0^{\\,x}{\\ln {\\Gamma (t)}\\,\\sin {(2 \\pi t)} \\: d t} \\, , \\quad x \\in (0,1) \\, ,$ as a generalization of Farhi's constant $\\eta $ .", "[Another integral of Farhi's formula] For all $\\,x \\in (0 ,1)$ , one has $\\eta (x) = \\frac{\\,\\gamma +\\ln {(2 \\pi )}}{\\pi } \\left[ \\sin ^2{\\!\\left(\\frac{\\theta }{2}\\right)} +\\frac{\\theta \\,\\cos {\\theta } -\\sin {\\theta }}{2 \\pi } \\right] +\\frac{\\,\\sin ^2{\\!", "(\\theta /2)}}{2 \\pi } \\left[1 +2 \\, \\ln {\\!\\left(\\frac{\\pi }{\\sin {(\\theta /2)}}\\right)} \\right] \\nonumber \\\\+\\,\\frac{\\,\\cos {\\theta }}{\\pi ^2} \\, \\sum _{n=2}^\\infty {\\frac{\\ln {n}}{\\,n \\, (n^2-1)} \\, \\sin {(n \\theta )}} \\,-\\frac{\\,\\sin {\\theta }}{\\pi ^2} \\, \\sum _{n=2}^\\infty {\\frac{\\ln {n}}{\\: n^2 -1} \\, \\cos {(n \\theta )}} \\, ,$ where $\\,\\theta = 2 \\pi x$ .", "The multiplication of both sides of Eq.", "(REF ) by $\\,\\sin {(2 \\pi t)}$ , followed by integration from 0 to $x$ , 
for any $\\,x \\in (0,1)$ , yields $\\int _0^{\\,x}{\\ln {\\Gamma (t)} \\, \\sin {(2 \\pi t)}\\, d t} \\nonumber \\\\= \\frac{\\,\\ln {\\pi }}{4 \\pi } \\int _0^{\\,\\theta }{\\sin {\\tilde{\\theta }} \\, d \\tilde{\\theta }} +\\frac{\\gamma +\\ln {(2 \\pi )}}{4 \\pi } \\int _0^{\\,\\theta }{\\!\\left(1 -\\frac{\\tilde{\\theta }}{\\pi } \\right) \\sin {\\tilde{\\theta }} \\, d \\tilde{\\theta }} \\nonumber \\\\-\\,\\frac{1}{\\,2 \\pi } \\int _0^{\\,\\theta }{\\ln {\\!\\left[\\sin {\\!\\left(\\frac{\\tilde{\\theta }}{2}\\right)}\\right]} \\sin {\\!\\left( \\frac{\\tilde{\\theta }}{2} \\right)} \\cos {\\!\\left( \\frac{\\tilde{\\theta }}{2} \\right)} \\, d \\tilde{\\theta }} \\nonumber \\\\+ \\, \\frac{1}{\\,2 \\pi ^2} \\sum _{n=2}^\\infty {\\,\\frac{\\,\\ln {n}}{n} \\, \\int _0^\\theta {\\sin {\\!\\left(n \\tilde{\\theta }\\right)} \\, \\sin {\\tilde{\\theta }} \\: d \\tilde{\\theta }}} \\, ,$ where $\\,\\theta = 2 \\pi x\\,$ and $\\tilde{\\theta }$ is just a dummy variable.", "On applying the trigonometric identity $\\,\\sin {\\alpha }\\,\\sin {\\beta } = \\frac{1}{2} \\, [\\cos {(\\alpha -\\beta )} -\\cos (\\alpha +\\beta )]\\,$ to the last integral above, one finds $2 \\int _0^{\\,x}{\\ln {\\Gamma (t)} \\, \\sin {(2 \\pi t)}\\, d t} = \\frac{\\ln {\\pi }}{2 \\pi } \\, (1 -\\cos {\\theta }) \\nonumber \\\\+\\frac{\\gamma +\\ln {(2 \\pi )}}{2 \\pi } \\left(\\!1 -\\cos {\\theta } -\\frac{1}{\\pi } \\int _0^{\\,\\theta }{\\!\\tilde{\\theta }\\,\\sin {\\tilde{\\theta }} \\, d \\tilde{\\theta }} \\right) -\\,\\frac{2}{\\pi } \\int _0^{\\,b}{u \\, \\ln {u} \\: d u} \\nonumber \\\\+\\,\\frac{1}{\\,2 \\pi ^2} \\sum _{n=2}^\\infty {\\frac{\\,\\ln {n}}{n} \\int _0^{\\,\\theta }{\\!\\!\\left\\lbrace \\cos {\\!\\left[(n-1) \\,\\tilde{\\theta }\\right]} -\\cos {\\!\\left[(n+1) \\,\\tilde{\\theta }\\right]} \\right\\rbrace d \\tilde{\\theta }}} \\, ,$ where $\\,b = \\sin {(\\theta /2)}$ .", "The remaining integrals can be solved in terms of elementary functions: $\\int 
_0^\\theta {\\tilde{\\theta } \\, \\sin {\\tilde{\\theta }} \\, d \\tilde{\\theta }} = \\sin {\\theta } -\\theta \\, \\cos {\\theta } \\, , \\\\\\int _0^b{u \\, \\ln {u} \\, d u} = \\frac{\\,b^2}{4} \\: (2 \\ln {b} -1) \\, ,$ and $\\int _0^{\\,\\theta }{\\!\\!\\left\\lbrace \\,\\cos {\\left[(n-1) \\,\\tilde{\\theta }\\right]} -\\cos {\\left[(n+1) \\,\\tilde{\\theta }\\right]} \\right\\rbrace d \\tilde{\\theta }} = \\frac{\\,\\sin {[(n-1)\\,\\theta ]}}{n-1} - \\frac{\\,\\sin {[(n+1)\\,\\theta ]}}{n+1} \\nonumber \\\\= \\frac{\\,(n+1)\\,[\\sin {(n \\theta )}\\,\\cos {\\theta } -\\sin {\\theta }\\,\\cos {(n \\theta )}] -(n-1)\\,[\\sin {(n \\theta )}\\,\\cos {\\theta } +\\sin {\\theta }\\,\\cos {(n \\theta )}]}{n^2-1} \\nonumber \\\\= 2 \\, \\frac{\\,\\sin {(n \\theta )}\\,\\cos {\\theta } -n\\,\\sin {\\theta }\\,\\cos {(n \\theta )}}{n^2-1} \\, . \\qquad $ The substitution of the last three expressions above into Eq.", "(REF ) completes the proof.", "$\\Box $ In particular, for $\\,x = 1/2\\,$ our Theorem  promptly yields $\\eta \\!\\left( \\frac{1}{2} \\right) = 2 \\int _0^{\\,1/2}{\\ln {\\Gamma (t)} \\, \\sin {(2 \\pi t)} \\: dt} = \\frac{\\,\\gamma +\\ln {(2 \\pi )} +2 \\ln {\\pi } +1}{2 \\pi } \\, .$ The analytic expression of $\\,\\eta (x)\\,$ established in Theorem  defines a continuous real function for all $\\, x \\in (0,1)$ .", "However, it involves a logarithmic term which is undefined at the endpoints.", "This problem can be fixed by defining $\\,\\eta (0) = \\lim _{x \\rightarrow 0^{+}}{\\eta (x)}\\,$ and $\\,\\eta (1) = \\lim _{x \\rightarrow 1^{-}}{\\eta (x)}$ , so as to make $\\,\\eta (x)\\,$ continuous for all $\\,x \\in [0, 1]$ .", "[Extending the domain of $\\eta (x)\\,$ ] The domain over which the real function $\\,\\eta (x) = 2 \\, \\int _0^x{\\ln {\\Gamma {(t)}} \\, \\sin {(2 \\pi t)}~dt}\\,$ is continuous can be extended to the closed interval $[0,1]$ by defining $\\,\\eta (0) := 0\\,$ and $\\,\\eta (1) := \\eta = [\\gamma +\\ln {(2 
\\pi )}]/\\pi $ .", "In the analytic expression established for $\\,\\eta (x)\\,$ in Theorem , all terms promptly nullify at $\\,x=0$ , except the one involving $\\,\\ln {[\\pi /\\sin {(\\theta /2)}]}$ , where $\\,\\theta = 2 \\pi x$ .", "The choice $\\,\\eta (0) = 0\\,$ then comes from the limit $\\lim _{\\theta \\rightarrow 0^{+}}{\\sin ^2{\\!\\left( \\frac{\\theta }{2} \\right)} \\, \\ln {\\!\\left[\\frac{\\pi }{\\sin {(\\theta /2)}}\\right]}} = \\ln {\\pi } \\lim _{\\theta \\rightarrow 0^{+}}{\\sin ^2{\\!\\left( \\frac{\\theta }{2} \\right)}} - \\lim _{\\theta \\rightarrow 0^{+}}{\\sin ^2{\\!\\left( \\frac{\\theta }{2} \\right)} \\ln {\\!\\left[\\sin {\\!\\left( \\frac{\\theta }{2} \\right)}\\right]}} \\nonumber \\\\= - \\lim _{\\alpha \\rightarrow 0^{+}}{\\sin ^2{\\!\\alpha } \\; \\ln {\\left(\\sin {\\alpha }\\right)}} = \\lim _{\\alpha \\rightarrow 0^{+}}{\\!\\frac{\\,\\ln {( \\csc {\\alpha } )}}{\\csc ^2{\\alpha } \\, }} \\nonumber \\\\= \\lim _{y \\rightarrow +\\infty }{\\!\\frac{\\,\\ln {y}\\,}{y^2}} = \\lim _{y \\rightarrow \\infty }{\\!\\frac{1}{\\:2\\,y^2}} = 0 \\, , \\qquad $ where the L'Hôpital rule was applied in the last step.", "The appropriate choice for $\\,\\eta (1)\\,$ is found by taking $\\,x \\rightarrow 1^{-}$ (or, equivalently, $\\theta \\rightarrow {2 \\pi }^{-}$ ) in the expression of $\\,\\eta (x)\\,$ in Theorem , which yields $\\lim _{\\; x \\, \\rightarrow 1^{-}}{\\eta (x)} = \\frac{\\,\\gamma +\\ln {(2 \\pi )}}{\\pi } +\\frac{1}{\\,2 \\pi } \\!", "\\lim _{\\;\\, \\theta \\, \\rightarrow {\\,2 \\pi }^{-}}{\\!\\sin ^2{\\!\\left(\\frac{\\theta }{2} \\right)} \\left[1 +2 \\, \\ln {\\!\\left(\\frac{\\pi }{\\sin {(\\theta /2)}}\\right)} \\right]} \\nonumber \\\\= \\eta +\\frac{1}{\\,2 \\pi } \\!", "\\lim _{\\;\\, \\theta /2 \\, \\rightarrow {\\,\\pi }^{-}}{\\sin ^2{\\!\\left(\\frac{\\theta }{2} \\right)}} +\\frac{1}{\\pi } \\!", "\\lim _{\\;\\, \\theta /2 \\, \\rightarrow {\\,\\pi }^{-}}{\\sin ^2{\\!\\left(\\frac{\\theta }{2} \\right)} \\, 
\\ln {\\!\\left(\\frac{\\pi }{\\sin {(\\theta /2)}}\\right)} } \\nonumber \\\\= \\eta +\\frac{\\,\\ln {\\pi }}{\\pi } \\!", "\\lim _{\\;\\, \\theta /2 \\, \\rightarrow {\\,\\pi }^{-}}{\\sin ^2{\\!\\left(\\frac{\\theta }{2} \\right)}} -\\frac{1}{\\pi } \\!", "\\lim _{\\;\\, \\theta /2 \\, \\rightarrow {\\,\\pi }^{-}}{\\sin ^2{\\!\\left(\\frac{\\theta }{2} \\right)} \\, \\ln {\\!\\left[\\sin {\\left(\\frac{\\theta }{2} \\right)} \\right]} } \\nonumber \\\\= \\eta -\\frac{1}{\\pi } \\!", "\\lim _{\\; \\alpha \\, \\rightarrow {\\,\\pi }^{-}}{\\sin ^2{\\!\\alpha } \\; \\ln {\\left(\\sin {\\alpha }\\right)}} \\, .", "\\qquad $ As shown in Eq.", "(REF ), the last limit above is null.", "$\\Box $ Note that our choices for $\\,\\eta (0)\\,$ and $\\,\\eta (1)\\,$ agree with the values obtained directly from the integral definition of $\\,\\eta (x)\\,$ in Eq.", "(REF ), namely $\\,\\eta (0) = 2 \\int _0^0{\\ln {\\Gamma (t)} \\: \\sin {(2 \\pi t)} \\: dt} = 0\\,$ and $\\,\\eta (1) = 2 \\int _0^1{\\ln {\\Gamma (t)} \\: \\sin {(2 \\pi t)} \\: dt} = \\eta $ , as defined in Eq.", "(REF ).", "Interestingly, we can use the function $\\,\\eta (x)$ , as defined in Eq.", "(REF ), to generalize Theorem .", "[Generalization of the digamma integral] For all $\\,x \\in (0,1]$ , the closed-form result $\\int _0^{\\,x}{\\psi (t) \\: \\sin ^2{\\!", "(\\pi t)} \\: dt} = \\ln {\\Gamma (x)} \\: \\sin ^2{\\!", "(\\pi x)} -\\frac{\\pi }{2} \\: \\eta (x)$ holds, where $\\,\\eta (x)\\,$ is the function defined in Eq.", "(REF ) for $\\,x \\in (0,1)$ , a domain extended to $\\,[0,1]\\,$ in Theorem .", "From the integral definition of $\\,\\eta (x)\\,$ in Eq.", "(REF ), one has $\\pi \\, \\eta (x) = 4 \\pi \\int _0^x{\\ln {\\Gamma (t)} \\: \\sin {(\\pi t)} \\, \\cos {(\\pi t)} \\: d t} = 4 \\int _0^{\\,\\pi x}{\\ln {\\Gamma \\!\\left( \\frac{y}{\\pi } \\right)} \\sin {y} \\, \\cos {y} \\: d y} ,$ for all $\\,x \\in (0,1)$ .", "Now, define the last integral, above, as $I$ and integrate it by parts, 
following the same steps as those in the proof of Theorem .", "After some algebra, one finds $2\\,I = \\frac{\\pi }{2} \\: \\eta (x) = \\ln {\\Gamma (x)} \\, \\sin ^2(\\pi x) - \\frac{1}{\\pi } \\int _0^{\\,\\pi x}{\\psi \\!\\left(\\frac{y}{\\pi }\\right) \\, \\sin ^2{y} \\: d y} \\, ,$ from which the closed-form expression for $\\,\\int _0^x{\\psi (t) \\: \\sin ^2{(\\pi t)} \\, dt}\\,$ promptly follows.", "Finally, for $\\,x=1\\,$ this closed-form expression reduces to $\\int _0^1{\\psi (t) \\: \\sin ^2{\\!", "(\\pi t)} \\: dt} = 0 -\\frac{\\pi }{2} ~ \\eta (1) = -\\frac{\\,\\pi \\, \\eta \\,}{2} \\, ,$ which agrees with Theorem .", "$\\Box $ For $\\,x=\\frac{1}{2}$ , on taking into account the special value we have found in Eq.", "(REF ), one finds $\\int _0^{\\,1/2}{\\psi (t) \\, \\sin ^2{\\!", "(\\pi t)} \\: d t} = - \\frac{\\:\\gamma +\\ln {(2 \\pi )} +1}{4} \\, .$ Another useful generalization, from the point of view of Fourier series expansions, is obtained by inserting a positive integer parameter $\\,k\\,$ in the argument of the sine function in Eq.", "(REF ), as follows: $\\eta _k := 2 \\int _0^1{\\ln {\\Gamma (t)} \\: \\sin {(2 \\pi k \\, t)} \\: dt} \\, .$ The problem of finding closed-form expressions for the Fourier coefficients of $\\,\\ln {\\Gamma (x)}\\,$ on the interval $\\,(0,1)\\,$ was solved in detail by Farhi in Ref.", "[8], the result being $a_0 = \\int _0^1{\\ln {\\Gamma (t)} \\: d t} = \\frac{1}{2}\\,\\ln {(2 \\pi )} \\, , \\\\a_k = 2 \\int _0^1{\\ln {\\Gamma (t)} \\, \\cos {(2 \\pi k \\, t)} \\: d t} = \\frac{1}{\\,2 k} \\: , \\quad k \\ge 1 \\, , \\\\b_k = 2 \\int _0^1{\\ln {\\Gamma (t)} \\, \\sin {(2 \\pi k \\, t)} \\: d t} = \\eta _k = \\frac{\\ln {k}}{\\,\\pi k} +\\frac{\\eta }{k} \\: , \\quad k \\ge 1 \\, .", "$ Here, we are considering the Fourier series in the form $a_0 +\\sum _{k=1}^\\infty {\\,a_k\\,\\cos {(2 \\pi k x)}} +\\sum _{k=1}^\\infty {\\,b_k\\,\\sin {(2 \\pi k x)}} \\: ,$ as adopted in Ref. 
[8]. There, it is shown that this series, with the coefficients given in Eq. (), converges to $\ln\Gamma(x)$ for all $x \in (0,1)$. For $x=\frac{1}{2}$ one readily finds $\ln\Gamma\!\left(\frac{1}{2}\right) = \frac{1}{2}\,\ln\pi = \ln\sqrt{\pi}$, a well-known result. Other, less obvious results arise by considering distinct rational values of $x$. For instance, for $x=\frac{1}{3}$ one finds
$\ln\Gamma\!\left(\frac{1}{3}\right) = \frac{\sqrt{3}}{6\pi}\left[\gamma_1\!\left(\frac{1}{3}\right) - \gamma_1\!\left(\frac{2}{3}\right)\right] + \frac{\gamma}{6} + \frac{2}{3}\,\ln(2\pi) - \frac{\ln 3}{12}\,,$
which agrees with the closed-form expressions for $\gamma_1\!\left(\frac{1}{3}\right)$ and $\gamma_1\!\left(\frac{2}{3}\right)$ found by Blagouchine in Eq. (61) of Ref. [2]. For $x=\frac{1}{4}$, one finds
$\ln\Gamma\!\left(\frac{1}{4}\right) = \frac{1}{2}\sum_{k=1}^{\infty}\frac{\cos(k\pi/2)}{k} + \frac{\ln(2\pi)}{2} + \frac{1}{\pi}\sum_{k=1}^{\infty}\frac{\sin(k\pi/2)}{k}\,(\pi\,\eta + \ln k) = \frac{\ln 2}{4} + \frac{\ln(2\pi)}{2} + \frac{1}{4}\left[\pi\,\eta + \frac{\gamma_1(1/4) - \gamma_1(3/4)}{\pi}\right],$
which simplifies just to our Eq. (REF ), so this evaluation can be viewed as an alternative derivation of Coffey's formula [4].

With the Fourier series expansion of $\ln\Gamma(x)$, $x \in (0,1)$, in hand, we can use Parseval's theorem to derive an additional closed-form result.

[Parseval's theorem for $\ln\Gamma(x)$]
$\int_0^1 \ln^2\Gamma(t)\,dt = 2\ln A\,[\gamma + \ln(2\pi)] - \frac{\gamma^2}{12} + \frac{\pi^2}{48} + \frac{\ln(2\pi)}{6}\,[\ln(2\pi) - \gamma] + \frac{\zeta''(2)}{2\pi^2}\,.$

On applying Parseval's theorem to the Fourier series in Eq. (REF ), one finds
$\int_0^1 \ln^2\Gamma(t)\,dt = a_0^2 + \frac{1}{2}\sum_{k=1}^\infty a_k^{\,2} + \frac{1}{2}\sum_{k=1}^\infty b_k^{\,2}\,,$
where the Fourier coefficients are those given in Eq. (). This expands to
$\int_0^1 \ln^2\Gamma(t)\,dt = \frac{\ln^2(2\pi)}{4} + \frac{1}{8}\,\zeta(2) + \frac{1}{2}\sum_{k=1}^\infty \left(\frac{\ln^2 k}{\pi^2 k^2} + \frac{\eta^2}{k^2} + \frac{2\,\eta}{\pi}\,\frac{\ln k}{k^2}\right) = \frac{\ln^2(2\pi)}{4} + \frac{\pi^2}{48} + \frac{1}{2\pi^2}\sum_{k=2}^\infty \frac{\ln^2 k}{k^2} + \frac{\eta^2}{2}\,\frac{\pi^2}{6} + \frac{\eta}{\pi}\sum_{k=2}^\infty \frac{\ln k}{k^2}\,.$
The first series, above, reduces to $\zeta''(2)$. According to our Eq. (REF ), the last series simplifies to $(\pi^2/6)\left[12\,\ln A - \gamma - \ln(2\pi)\right]$. The substitution of these two summations in Eq. (REF ) completes the proof. $\Box$

The result of the above integral evaluation agrees with Eq. (31) of Ref. [3], as well as Eq. (6.441.6) of Ref. [10].

An interesting feature of the Fourier coefficients stated in Eq. () arises from the analysis of their asymptotic behavior for large values of $k$. For $k=1$, it is clear from Eq. () that $\eta_1 = \eta$. From that equation it also follows that
$\eta_{\,2k} = \frac{\eta_k}{2} + \frac{\ln 2}{2\pi k}\,, \quad k \ge 1\,,$
and
$\eta_{\,2^k} = \frac{(\ln 2/\pi)\,k + \eta}{2^k}\,, \quad k \ge 0\,.$
These two formulae were derived by Shamov using the Legendre duplication formula for $\Gamma(x)$, in his answer to a question by Silagadze in Ref.
[11]. He indeed takes Eq. (REF ) into account for establishing the asymptotic behavior of $\eta_k$ for $k \rightarrow \infty$, and uses it to develop a heuristic proof of our Lemma . His reasoning is so aesthetic that it deserves to be mentioned here. He argues that the integrand in the definition of $\eta_k$ has only one singularity, at $t=0$, where $\ln\Gamma(t) = -\ln t - \gamma\,t + \ldots$, so the asymptotic behavior of the Fourier coefficient $\eta_k$ should be the same as that of $\ln\!\left(t^{-1}\right)$, i.e.
$2\int_0^1 \sin(2\pi k t)\,\ln\!\left(t^{-1}\right)dt = \frac{\ln k}{\pi k} + \frac{\eta}{k} + \ldots$
He then notes that this result can be written as
$-\,\frac{1}{\pi k}\int_0^{\,2\pi k} \frac{\cos z - 1}{z}\;dz = \frac{\gamma + \ln(2\pi k) - \mathrm{Ci}(2\pi k)}{\pi k}\,,$
where $\mathrm{Ci}(x)$ is the cosine integral, which behaves as $(\sin x)/x$ for $x \rightarrow \infty$. A comparison of the terms of order $1/k$ in these expansions promptly yields $\eta = [\gamma + \ln(2\pi)]/\pi$, which agrees with our Lemma .

The author thanks M. R. Javier for checking all closed-form expressions proposed in this work with mathematical software to a high numerical precision.

Conflict of interest

The author declares that he has no conflict of interest.
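The Fourier-coefficient identities above lend themselves to a quick numerical check with nothing beyond a standard programming library. The sketch below (an illustration only, not the software mentioned in the acknowledgment; the names `simpson` and `loggamma_coef` are ours) uses Python's `math.lgamma` and a composite Simpson rule, with the substitution $t = u^2$ to tame the $-\ln t$ singularity of $\ln\Gamma$ at $t = 0$. It verifies the Lemma's value $\eta = [\gamma + \ln(2\pi)]/\pi$, Raabe's value for $a_0$, Farhi's formulas for $a_k$ and $b_k = \eta_k$, and the duplication relation $\eta_{2k} = \eta_k/2 + \ln 2/(2\pi k)$:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def simpson(f, a, b, n=8000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def loggamma_coef(trig, k):
    """Compute 2 * int_0^1 ln Gamma(t) * trig(2 pi k t) dt.

    The substitution t = u^2 turns the -ln(t) singularity of ln Gamma(t)
    at t = 0 into mild u*ln(u)-type behavior, so Simpson's rule converges.
    """
    def g(u):
        if u == 0.0:
            return 0.0  # the transformed integrand vanishes as u -> 0
        t = u * u
        return 2.0 * u * math.lgamma(t) * trig(2.0 * math.pi * k * t)
    return 2.0 * simpson(g, 0.0, 1.0)

# Lemma: eta = [gamma + ln(2 pi)]/pi, and eta_1 = eta
eta = (GAMMA + math.log(2.0 * math.pi)) / math.pi
assert abs(loggamma_coef(math.sin, 1) - eta) < 1e-6

# Raabe: a_0 = int_0^1 ln Gamma(t) dt = ln(2 pi)/2 (same substitution, no factor 2)
a0 = simpson(lambda u: 0.0 if u == 0.0 else 2.0 * u * math.lgamma(u * u), 0.0, 1.0)
assert abs(a0 - 0.5 * math.log(2.0 * math.pi)) < 1e-6

# Farhi: a_k = 1/(2k) and b_k = eta_k = ln(k)/(pi k) + eta/k
for k in (1, 2, 3):
    assert abs(loggamma_coef(math.cos, k) - 1.0 / (2.0 * k)) < 1e-6
    assert abs(loggamma_coef(math.sin, k) - (math.log(k) / (math.pi * k) + eta / k)) < 1e-6

# Duplication relation: eta_{2k} = eta_k/2 + ln(2)/(2 pi k), checked for k = 1
e1, e2 = loggamma_coef(math.sin, 1), loggamma_coef(math.sin, 2)
assert abs(e2 - (e1 / 2.0 + math.log(2.0) / (2.0 * math.pi))) < 1e-6

print("all identities verified numerically")
```

All assertions compare a numerically integrated coefficient against the corresponding closed form; agreement to roughly six decimal places is what the simple quadrature above can deliver, consistent with the higher-precision checks acknowledged in the text.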